Consultancy Circle

Artificial Intelligence, Investing, Commerce and the Future of Work

The Mysterious Vanishing of an Anti-AI Advocate

On November 1, 2025, a prominent artificial intelligence critic vanished under circumstances that have sparked widespread debate, confusion, and palpable anxiety among those concerned with the societal trajectory of AI. Sam Kirchner, a 38-year-old computer science dropout turned independent researcher and critic of machine learning ethics, disappeared from his Berkeley apartment, leaving behind a darkened laptop, a half-packed suitcase, and a digital trail of warnings about a world increasingly overwhelmed by algorithmic control. His disappearance, now formally under FBI investigation, marks the intersection of powerful ideological currents in the global AI debate — and has led some to question whether dissent against highly integrated AI power structures is still possible, let alone safe.

The Missing Man Who Provoked an AI Empire

Unlike most technologists advocating for ethical constraints within artificial intelligence, Sam Kirchner never worked for a tech giant. Rather, his influence emerged from outside the industry’s fortified towers. Kirchner rose in prominence in late 2024 through a Substack called Stop AGI, which drew over 700,000 subscribers in less than eight months. His essays — often acerbic, wide-ranging, and infused with citations from legal history, computational theory, and political critique — centered on three core ideas: first, that advanced AI development is being monopolized by a narrow band of actors; second, that these technologies are displacing democratic decision-making; and third, that surveillance and prediction engines are being normalized under the euphemism of “optimization.”

His writings reached a crescendo in September 2025, around the time of OpenAI’s DevDay where GPT-5 Turbo was unveiled with native APIs enabling enterprise-wide adoption for autonomous workflows (OpenAI, 2025). Kirchner argued the platform’s semi-autonomous agent clusters were bordering on the threshold of “unaccountable machine labor allocation” — a speculative yet increasingly relevant concern. Reviewing leaked technical documentation from open-source collaborators, he accused multiple companies, including OpenAI and Anthropic, of modeling layers that could not be fully audited for property-based safety or latent behavior chaining.

The Growing Risks of Anti-AI Advocacy

Kirchner’s disappearance must be contextualized within the larger backlash faced by other AI critics. His case is not isolated. According to a 2025 report by the Electronic Frontier Foundation, more than 17 researchers and whistleblowers expressing skepticism of frontier AI labs experienced doxxing campaigns, targeted cyber intrusions, or reputational sabotage over the last year (EFF, 2025).

Moreover, deepfake proliferation has exacerbated the risks. In October 2025, two AI ethicists from France were impersonated in synthetic video clips appearing to praise AI military applications — only to discover their likenesses had been manipulated using stolen training data from conference video archives. Regulation has clearly lagged: the Federal Trade Commission’s last ruling on AI-generated impersonations dates back to Q3 2024, and new enforcement guidance on generative misuse has yet to materialize, according to a November 2025 update from the FTC (FTC, 2025).

Legal ambiguity has created a gray area where bad-faith actors can suppress public dissent with stochastic harassment — momentary intersections of platform design, algorithmic exposure, and mass scale amplification.

The Convergence of Tech, Power, and Silence

Many fear that Kirchner ran afoul of more than generic online hostility. Sources close to the missing author, including a researcher from the AI Accountability Lab who requested anonymity, claim Kirchner had recently obtained evidence of protocol-level integrations between enterprise AI suites and federal security agencies through third-party contractors — particularly in predictive analytics systems for policing and labor automation. This follows recent investigative reporting showing that Palantir’s Gotham system has been enhanced via LLM-augmented decision modules for Department of Homeland Security deployment (MIT Technology Review, 2025).

If true, Kirchner’s findings might have threatened both private and public stakeholders. While no hard proof links federal agencies to the disappearance, observers point out that in August 2025, he posted a short encrypted memo suggesting he would “soon publish something with severe implications for AI contractors inside U.S. infrastructure.” No such publication emerged. Instead, his post was deleted a week later, followed by a dramatic drop in Substack updates.

The Economics of Suppressing AI Criticism

At stake is not merely individual safety, but the structure of incentive and control in the AI economy. As of Q4 2025, frontier AI companies hold combined valuations exceeding $4.8 trillion, with Microsoft, Alphabet, and NVIDIA converging strategically across data center architecture, silicon design, and model customization layers (Investopedia, 2025). Criticism that could erode consumer trust or regulator engagement is not merely inconvenient — it threatens capital flow and geopolitical alignment. Here’s a current financial snapshot:

Company AI Revenue Growth (YoY) Q3 2025 AI Spend
Microsoft +38% $16.2B
Alphabet +42% $14.5B
NVIDIA +53% $11.9B

These figures underscore a reality where regulatory hesitancy is reinforced by economic entrenchment. As McKinsey’s October 2025 report emphasizes, the consolidation of AI value chains increases resistance to openness, even among firms practicing so-called “responsible AI” (McKinsey MGI, 2025).

Inside the Silent Web of Tech Activism

Despite these headwinds, digital resistance continues to ferment. Kirchner’s vanishing has galvanized a new wave of decentralized networks pushing for algorithmic transparency, data sovereignty, and counter-modeling tools. One emerging initiative, Codename Alecto, operates across Mastodon and Matrix chat environments, offering open-source audits of AI models deployed in public infrastructure. While nascent, it already has over 50 volunteer researchers and aims to archive all procedural documentation of AI impact assessments submitted under the EU’s Digital Services Act of 2024 (DSA data is the most recent publicly accessible as of January 2025).

Meanwhile, the AI Now Institute has called for an international whistleblower immunity agreement, similar to protections offered to counterintelligence sources. A November 2025 statement requested UN-level engagement, noting the sharpening asymmetry between commercial secrecy and public knowledge in AI influence systems (AI Now, 2025).

Regulatory Response: Too Slow, Too Late?

Kirchner’s case has begun to reframe the urgency of policy reform. In the U.S. Senate, Senators Wyden and Booker introduced a bill in November 2025 mandating a public registry of high-impact AI deployments across healthcare, justice, and defense. The bill — formally titled the Algorithmic Accountability & Integrity Act of 2025 — includes whistleblower protections for AI developers reporting undocumented behaviors or unauthorized external integrations (Congress.gov, 2025).

Meanwhile, the European Commission moved ahead on an early implementation pilot of the EU AI Act. The focus includes conditional bans on AI models used for affect recognition in public spaces — a domain touched upon indirectly by Kirchner’s earlier writing. However, critics argue institutional capture remains a backdoor concern, as lobbying disclosures from Q3 2025 show record spending by Big Tech on EU AI guidance frameworks (Politico Europe, 2025).

What Comes Next? The Stakes for 2026

Looking forward to 2026 and beyond, the key question is not merely whether dissenting figures like Sam Kirchner are at personal risk, but whether civil society has sufficient leverage to slow, shape, or supervise AI deployment in critical domains. Venture capital flows into “human-in-the-loop” systems show a mild resurgence — an attempt to re-centering human judgment in AI oversight. But even this trend may be shallow. As a December 2025 survey by Deloitte shows, 74% of enterprise adopters are prioritizing “complete AI autonomy” within three years (Deloitte Insights, 2025).

In that context, voices like Kirchner — radical, unbranded, uncompromised — may represent the final resonance of critique before systems of such complexity become naturalized. Whether his absence is voluntary, coerced, or something in between, its symbolism stands. In an algorithmically enhanced society, the evaporation of dissent may not require conspiracy; it may simply require indifference, fragmentation, and quant-driven speed.

by Alphonse G

This article is based on and inspired by The Atlantic’s 2025 investigation into Sam Kirchner’s disappearance

References (APA Style):

  • Electronic Frontier Foundation. (2025, October). Cyber repression of frontier AI critics. https://www.eff.org/deeplinks/2025/10/frontier-whistleblowers-cyber-repression
  • Federal Trade Commission. (2025, November). FTC seeks comments on generative AI guardrails. https://www.ftc.gov/news-events/news/press-releases/2025/11/ftc-seeks-comments-generative-ai-guardrails
  • Investopedia. (2025, October). Top AI-driven tech stocks Q3 2025. https://www.investopedia.com/top-tech-stocks-ai-q4-2025-8382198
  • McKinsey Global Institute. (2025, October). Responsible AI: Towards a regulatory framework. https://www.mckinsey.com/mgi/reports/2025-responsible-ai
  • MIT Technology Review. (2025, October). Palantir adds LLM modules to national security platforms. https://www.technologyreview.com/2025/10/28/palantir-llm-gotham-national-security/
  • OpenAI. (2025, November). DevDay 2025: GPT-5 Turbo and Custom Agents. https://openai.com/blog/devday-2025-summary
  • Politico Europe. (2025, October). Lobbying intensifies on eve of AI Act implementation. https://www.politico.eu/article/eu-ai-lobby-spending-2025/
  • Deloitte Insights. (2025, December). Horizon AI: Enterprise priorities for 2026. https://www2.deloitte.com/us/en/insights/2025-ai-trends-horizon.html
  • Congress.gov. (2025, November). Algorithmic Accountability & Integrity Act of 2025. https://www.congress.gov/bill/118th-congress/senate-bill/3012/text
  • AI Now Institute. (2025, November). Call for international AI whistleblower protections. https://ainowinstitute.org/blog/2025-nov-emergency-ai-access

Note that some references may no longer be available at the time of your reading due to page moves or expirations of source articles.