On July 23, 2025, President Donald Trump unveiled an executive order aimed at preventing what he labels "woke" bias in artificial intelligence (AI) systems, asserting that these technologies must reflect his administration's view of truth. The directive is part of a broader Trump administration action plan that seeks to assert American AI dominance in the global arena, particularly against China.
The executive order specifies that AI systems used by the US government should uphold freedom of speech and factual accuracy, departing from what it terms "social engineering agendas." However, the plan raises significant concerns about whose definition of truth is being prioritized. The administration has directed the Department of Commerce to eliminate references to topics such as misinformation, diversity, equity, inclusion, and even climate change from existing policies. This suggests a selective approach to what constitutes truth and could shape the development and deployment of AI models in a manner reminiscent of authoritarian regimes.
Trump's rhetoric at the announcement included denouncing "woke Marxist lunacy" in AI development, emphasizing a need for models that align with a conservative worldview. The executive order cloaks potential ideological bias in the language of neutrality and truth, raising questions about free speech and expression as the government appears to steer AI applications toward a specific political agenda.
Responses from the tech community, however, have been muted. Major companies like OpenAI, Anthropic, and Google have refrained from vocally opposing the executive order, possibly because of the promise of favorable conditions for AI development under the Trump administration. Despite the call for neutrality, Trump's approach could carry significant implications for how AI systems are programmed and what types of outputs are deemed acceptable, especially outputs that challenge the administration's narratives.
Experts, along with political figures such as Senator Edward Markey, have voiced alarm over the ramifications of the order. They warn that it may create financial incentives for developers to conform AI outputs to the government's preferences. Instead of fostering an unbiased environment for information dissemination, the move could align AI systems with particular political ideals, threatening the independence of media and information in a landscape where AI is increasingly integral to news and data consumption.
As discussions continue, the divergence between the administration's stated goal of free expression and the order's implications for AI developers presents a complex legal and ethical landscape. Significant apprehension remains that, without resistance from the tech industry, AI's integrity as an unbiased information medium could be compromised, enabling a manipulation of truth that undermines foundational democratic values.