Consultancy Circle

Artificial Intelligence, Investing, Commerce and the Future of Work

DeSantis Takes a Stand: AI Skepticism on the Rise

Florida Governor Ron DeSantis has stepped into the arena of national AI policy—not as a proponent of rapid innovation, but as one of its most vociferous skeptics. In politically charged remarks made in late December 2025, DeSantis drew a hard line against what he sees as a growing threat: the unchecked rise of artificial intelligence, especially AI systems aligned with progressive ideological leanings. His stance places him at the forefront of a widening skepticism that spans political ideology, economic security, and citizen concerns about surveillance and job displacement. As AI’s societal imprint deepens, DeSantis’ message is both resonant and polarizing, prompting critical questions about whether skepticism will become a dominant regulatory theme heading into the 2026 election cycle.

DeSantis’ Anti-AI Messaging: Ideological Stakes or Strategic Positioning?

DeSantis’ rhetoric, delivered at a campaign event in Iowa and covered prominently by Politico (2025), did more than critique technology. It cast AI as an existential cultural force capable of reshaping democratic governance. “We have to reject that with every fiber of our being,” he declared, referring to perceived ideological bias within major AI platforms, specifically systems developed by OpenAI, Anthropic, and other U.S.-based leaders.

This rhetoric positions DeSantis as the populist counterweight to prevailing AI optimism that permeates Silicon Valley and segments of the federal government. Notably, he warned against “delegating human judgments to machines” that might be programmed with values at odds with traditional or conservative principles. While reminiscent of past tech-skeptic conservative statements, DeSantis’ commentary directly links AI policy to democratic control and cultural sovereignty, a message that seems strategically shaped for the 2026 political landscape.

The Landscape of AI Regulation: Emerging Fault Lines

While the Biden administration has advanced an executive-level strategy for AI safety—most recently the Office of Management and Budget’s updated draft AI policy guidance from January 2025 (White House OMB, 2025)—DeSantis’ opposition is not merely bureaucratic. His skepticism is rooted in a broader mistrust of centralization and elite-led oversight boards such as the National AI Advisory Committee or NIST’s AI Risk Management Framework. DeSantis argues that these institutions presume legitimacy without democratic vetting.

The tensions reflect a division in how to implement guardrails in AI development. On one side, Washington and developers like OpenAI push for national alignment on AI safety, inspired by fears of runaway systems, model collapse, and algorithmic disasters. On the other, DeSantis and a slowly growing cadre of local and state leaders aim for decentralized control, strict boundaries on AI’s governmental use, and potentially legislative “firewalls” to delineate which AI systems may or may not operate within Florida’s borders.

Tech Community Responds: Industry Balancing Act

DeSantis’ comments did not go unanswered. Senior figures at OpenAI, including interim policy chief Anna Makanju, have reaffirmed their commitment to value-pluralistic development. According to a recent OpenAI blog post (January 2025) introducing GPT-5’s System Card demo, the company is building “configurable AI outputs” that allow user-side customization of political tone and sociocultural contexts. However, critics—including DeSantis—argue that baseline model biases persist regardless of optional settings.

As the dialogue intensifies, technology firms are attempting to hedge against this politicization. NVIDIA, for instance, emphasized in its recent January 2025 earnings call that its value in the AI supply chain is “infrastructure-neutral,” stressing that GPU capability serves all ideological or industry applications (Fortune, 2025). Likewise, Anthropic has doubled down on constitutional AI frameworks and added a transparency API for researchers to audit model behavior in real-time (Anthropic Blog, January 2025).

Still, the neutrality of these stances is under mounting scrutiny as AI systems increasingly shape access to public discourse, education, and automated decision-making in judicial and welfare contexts. As AI increasingly filters how people perceive reality, neutrality itself is becoming an object of political contention.

Public Sentiment: Skepticism Rising across the Spectrum

DeSantis’ messaging may be more aligned with public sentiment than initially assumed. A January 2025 Pew Research survey found that 59% of Americans now express “moderate to high concern” about the use of AI in government decision-making—a 14-point increase since October 2024. Critically, concern cuts across partisan lines: 62% of Republicans and 57% of Democrats reported unease with how AI could interpret or implement policy decisions.

This unease echoes in younger demographics too. Gallup reported in February 2025 that while 71% of Gen Z respondents use generative AI weekly, only 38% trust it to make unbiased recommendations in legal or hiring contexts (Gallup, 2025). The divergence between user uptake and trust exposes an emerging schism where AI utility is high, but perceived legitimacy remains fragile.

Demographic          | Weekly Gen AI Usage | Trust in AI for Public Policy
Gen Z (18–27)        | 71%                 | 38%
Millennials (28–43)  | 61%                 | 42%
Gen X + Boomers      | 40%                 | 35%

This data underscores the challenge technocratic developers face: delivering transformational tools while maintaining democratic legitimacy and user trust. It also amplifies DeSantis’ critique, even if his underlying motivations are as political as they are philosophical.

Policy Implications: What AI Governance Could Look Like in a DeSantis Framework

If DeSantis were to shape federal AI policy or influence a coalition of state-level restrictions, the regulatory frame around AI could become far more fragmented. Several key implications arise from his approach:

  • Decentralization of AI oversight: States may assert stronger control over how and where AI systems are deployed. Florida’s legislature is reportedly considering bills requiring all AI vendors to disclose source code or classifier logs before operating within public schools or agencies.
  • Firewall policies: Governments could introduce content-neutrality audits or even prohibit the use of models that perform “sociocultural inference.”
  • Liability escalation: AI outputs tied to political bias, discriminatory impacts, or election interference could open vendors to broader class-action exposure.

These moves could deter smaller AI players from public-sector deployments entirely. Already, legal analysts at DLA Piper (2025) note that startup AI vendors are pausing pilot programs in Florida and Texas as uncertainty over disclosure laws grows.

Looking Ahead: 2026 and the Entangled Future of AI and Partisan Identity

What was once a technical domain governed by global standards and academic consensus is now entering the slipstream of ideological bifurcation. 2026 will likely see a proliferation of AI-related legislative proposals at the state level—many echoing DeSantis’ framing of AI as a “values battlefield.” Meanwhile, national agencies grapple with setting equilibrium policies that allow for innovation without sparking public distrust.

The contrasting visions—one emphasizing engineering-centric safety and scalability, the other privileging constitutional limitations and moral oversight—are not easily reconciled. Political momentum could tilt toward caution if additional high-profile AI failures or misuses emerge. The facial-recognition false-arrest lawsuit filed in San Jose in late January 2025 (Courthouse News Service, 2025) may embolden legislators to endorse stricter scrutiny across party lines.

In that context, DeSantis’ warnings serve both as a policy prescription and a metaphor for a deeper contest over who—and what—gets to shape public conscience in the digital age. Whether voters embrace his vision or reject it will shape not just the future of AI regulation, but the cultural architecture of America’s next information paradigm.

by Alphonse G

This article is based on and inspired by Politico

References (APA Style):

Anthropic. (2025, January). Transparency API announcement. Retrieved from https://www.anthropic.com/news/transparency-pilot-api-2025

Courthouse News Service. (2025, January 29). Facial recognition AI leads to false arrest civil suit. Retrieved from https://www.courthousenews.com/facial-recognition-ai-leads-to-false-arrest-civil-suit/

DLA Piper. (2025, February). State approaches to AI regulation. Retrieved from https://www.dlapiper.com/en-us/insights/publications/2025/02/state-approaches-to-ai-regulation/

Fortune. (2025, January 24). NVIDIA hedges against AI political backlash. Retrieved from https://fortune.com/2025/01/24/nvidia-hedges-against-ai-political-backlash/

Gallup. (2025, February). Gen Z reports growing AI distrust. Retrieved from https://news.gallup.com/poll/2025/gen-z-ai-fears-growing.aspx

OpenAI. (2025, January). System card: GPT-5. Retrieved from https://openai.com/blog/system-card-gpt-5

Pew Research Center. (2025, January 15). Americans growing wary of AI in government. Retrieved from https://www.pewresearch.org/short-reads/2025/01/15/americans-growing-wary-of-ai-in-government/

Politico. (2025, December 27). ‘We have to reject that with every fiber of our being’: DeSantis emerges as a chief AI skeptic. Retrieved from https://www.politico.com/news/2025/12/27/we-have-to-reject-that-with-every-fiber-of-our-being-desantis-emerges-as-a-chief-ai-skeptic-00704333

White House Office of Management and Budget. (2025, January 22). Draft AI policy rules released. Retrieved from https://www.whitehouse.gov/omb/briefing-room/2025/01/22/draft-ai-rules-released/

Note that some references may no longer be available at the time of your reading due to page moves or expirations of source articles.