
The US’s Action Plan to “Prevent Woke AI”

By Giacomo Figà-Talamanca
8 Aug 2025

For a few years now, “digital” or “technological” sovereignty has been a prominent topic within AI Ethics and regulatory policies. The challenge being: how can government actors properly rule in the interest of their citizens, while governments (and citizens) must rely on technologies developed by a handful of companies they do not have clear control over? Many efforts to address this challenge consisted either in regulations, such as the EU’s AI Act, or various forms of agreement between (supra)national actors and tech companies.

Unfortunately, the White House’s “America’s AI Action Plan” and the three Executive Orders published the same day ignore this thorny issue entirely. Instead, these policy proposals aim to deregulate AI development by American tech companies “to achieve global dominance in artificial intelligence.” The general thrust is clear: deregulate AI development, promote its deployment across society, and export widely so as to strengthen the U.S.’s global standing.

In advancing these interests, one keyword sticks out like a sore thumb: “Woke AI.” As a millennial, it feels surreal to see a term I have primarily encountered as Internet lingo make its way into a Presidential Executive Order. While this is far from the first time the president has used “woke” as a pejorative for the values of his opposition, it remains unclear what precise danger such language is meant to evoke. What kind of threat does “Woke AI” represent?

The July 23rd Executive Order “Preventing Woke AI in the Federal Government” does not attempt to define the term. Instead, it states that AI systems should provide reliable outputs, free from ideological biases or social agendas that might undermine their reliability. In particular, the Order identifies “diversity, equity, and inclusion” (DEI) as a “destructive ideology” that manipulates information regarding race or sex, and incorporates “concepts like critical race theory, transgenderism, unconscious bias, and systemic racism.” The Order then identifies “Unbiased AI Principles” that will guide development going forward. Chief among these is the command that AI must be truth-seeking and ideologically neutral – “not manipulat[ing] responses in favor of ideological dogmas such as DEI” – to ensure that AI systems are trustworthy.

To many AI ethicists (including myself), the Order reads like a series of non-sequiturs. It demands that tech companies reject any notion related to DEI in their AI development guidelines, yet it is quite unspecific regarding what such rejection would entail in practice. Let us set aside the countless examples of AI systems being unlawfully biased on the basis of race, gender, economic status, and disability in a variety of domains. Let us also set aside the practical impossibility for AI systems to be “unbiased” given that they are technologies literally designed to identify potentially meaningful patterns and sort accordingly. And, finally, let us set aside the irony of the clear ideological grounds motivating the Order’s intention to generate non-partisan results. What little remains when all these difficulties have been accounted for doesn’t amount to much. And it’s worth asking why the focus on “anti-woke AI” represents such a large part of the White House’s overall AI strategy.

The answer to that question becomes much clearer when looking at how – and where – “woke AI” crops up. From the beginning, responsible AI policy is described as integral to the goal of protecting free speech and American values. Ultimately, AI outputs must “objectively reflect truth rather than social engineering agendas.” For that reason, references to “misinformation” regarding things like DEI and climate change must be removed. But this kind of censorship seems odd given the stated desire to promote freedom of speech, especially because the Plan explicitly dictates what not to talk about – censoring tech companies from mentioning those topics as relevant concerns.

Ultimately, it often feels like the concern over “Woke AI” is merely a pretense for removing safeguards in order to accelerate AI development. This intent is made explicit at several points in the Plan. In its very introduction (and in reference to the Vice President’s remarks at the AI Action Summit last February), the Plan declares that any “onerous” regulation of AI development would paralyze this technology’s potential – the reason the current administration rescinded Biden’s “dangerous” Executive Order on AI. (Interestingly enough, many saw that regulation as quite lenient, all things considered, especially compared to the EU’s AI Act.) Any regulation mentioned in the Plan that does not originate from the current White House is dismissed as “onerous,” “burdensome,” or otherwise an unreasonable drag on AI development.

Even more pointedly, the Plan is quite clear in its intention to counter Chinese influence: it refers to the governance frameworks proposed by international organizations such as the UN, the G7, and the G20 as “vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.” Safeguards meant to protect individual rights and privacy are written off as the calculated design of the U.S.’s largest geopolitical competitor.

But the Plan is not simply a rhetorical tool for signaling dominance within U.S. political discourse. Rather, it is a means of vilifying any obstacle to the “move fast and break things” approach as “woke.” This language is meant not only to clearly separate the current White House’s position from that of its predecessor, but to pave the way for deregulation. The fear is that this attitudinal shift cedes far too much power to unaccountable tech companies. Without stronger guardrails in place, we may all get run over.

Giacomo Figà-Talamanca graduated with his Master's in Philosophy of Mind at Radboud University, and is currently working as a series editor for the American Philosophical Association. His interests include the ethics of vulnerability, digital and AI ethics, as well as philosophy of popular culture.