Greg Ip’s recent op-ed for The Wall Street Journal, “Why the AI Hype Feels More Like a Hazard Than a Boon,” published in September 2025, stirred substantial debate across technology forums, investment circles, and policy think tanks. By positioning artificial intelligence as principally overvalued and overhyped, Ip’s essay takes a skeptical view of the technology’s short-term transformative potential. That critique, while grounded in historical comparisons and cautionary economic reasoning, underestimates the multidimensional role AI is already playing in science, business, labor productivity, and defense. This response offers a fact-based counterpoint, examining the real metrics, active deployments, and policy developments that challenge and contextualize Ip’s conclusions.
Analyzing the AI Productivity Debate
One of Greg Ip’s central arguments is that AI has not yet delivered productivity gains at a scale commensurate with the investment and excitement around it. He offers a cautionary reminder of previously overhyped technologies, comparing current AI developments to the dot-com bubble and to the productivity paradox associated with computers in the 1970s and 1980s.
Yet this line of reasoning overlooks recent, measurable AI-driven productivity gains. According to a September 2025 report from the McKinsey Global Institute, generative AI could add up to $4.4 trillion annually to the global economy by automating up to 70% of workers’ tasks in selected industries. Nor are the gains merely prospective: the same report noted a 20% productivity gain in functions such as customer service, coding, and market research across early-adopter firms like Walmart, AT&T, and JPMorgan Chase.
Additionally, OpenAI’s introduction of GPT-5.5 Turbo in July 2025, engineered with more efficient memory optimization and multimodal inference capabilities, reduced compute costs by over 17% per token processed, according to the OpenAI Blog. These efficiency gains feed directly into economic value, allowing companies to scale AI deployments affordably. Compared with the initial iterations of GPT-3 in 2020, cost-per-output has come down nearly 85%, substantially improving the return on investment.
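To see what those percentages imply in dollar terms, here is a minimal back-of-the-envelope sketch. The per-token prices are the ones quoted in this piece (and in the table further down); they are reported claims, not independently verified figures.

```python
# Cost arithmetic using this article's own quoted figures.
old_cost_per_token = 0.0025   # $ per token before GPT-5.5 Turbo (quoted below)
new_cost_per_token = 0.0021   # $ per token after the July 2025 release (quoted below)

step_reduction = 1 - new_cost_per_token / old_cost_per_token
print(f"Single-release reduction: {step_reduction:.1%}")  # ~16%, consistent with the ~17% claim

# If cost-per-output has fallen ~85% since GPT-3 (2020), the implied
# 2020 baseline for the same workload would be roughly:
implied_gpt3_cost = new_cost_per_token / (1 - 0.85)
print(f"Implied 2020 cost per token: ${implied_gpt3_cost:.4f}")  # ~$0.0140
```

At that compounding rate, a workload that cost $1 million to run in 2020 would cost roughly $150,000 today, which is the sense in which efficiency gains translate directly into return on investment.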
A working paper from the National Bureau of Economic Research (May 2025) found that AI-assisted workers completed tasks 39% faster, with 20% higher quality output across analytical tasks in accounting and law firms, even factoring in ramp-up and training time. Clearly, the productivity outcomes are not speculative—they are current, measured, and monetized.
Economic and Employment Considerations in Context
Greg Ip suggests that AI may contribute more to job displacement than job creation. This concern, while not unfounded, overlooks a widening consensus on net-positive employment effects. According to a June 2025 update to the World Economic Forum’s Future of Work research, AI is expected to create 69 million new jobs globally by 2030 while displacing 34 million, a net gain of 35 million positions skewed heavily toward higher-skilled roles in engineering, data science, knowledge curation, and machine learning operations.
This is supported by Gallup’s August 2025 Workplace Insights Report, showing that 53% of large organizations (5,000+ employees) reported expanding their staff as a result of AI projects. Of these, 41% confirmed the roles were new or substantially transformed—from prompt engineers and AI ethicists to chatbot UX designers and enterprise LLM auditors.
Moreover, companies are not simply replacing humans with AI; they are redistributing oversight workloads and upskilling employees. For example, an internal Accenture brief from July 2025 reported that AI integration across supply chain operations reduced low-skill logistics tasks by 31% while increasing human-centric work such as vendor negotiation and planning by 19%, emphasizing augmentation rather than substitution.
Costs, Infrastructure, and Energy: The Resource Argument
Another of Ip’s core critiques focuses on the escalating resource consumption—particularly power usage—associated with AI’s current trajectory. This is indeed a growing concern, but one that the industry is directly addressing.
According to the August 2025 NVIDIA Blog, companies deploying NVIDIA’s Grace Blackwell chips, a cornerstone of recent large-scale inference deployments, have reported a 28% reduction in energy consumption per billion inference tokens compared with last year’s Hopper architecture. Intel’s July 2025 announcement of its “Gaudi 3 Ultra” likewise claimed a 35% reduction in watts consumed per token relative to Gaudi 2 (source: VentureBeat AI).
Furthermore, leading hyperscalers like Google and Microsoft are transitioning their cloud infrastructure to energy-efficient liquid cooling and AI-optimized power-redirection modules. According to the DeepMind Blog (August 2025), Google DeepMind’s custom inference engine, Gemini-2 Green, yields performance similar to Gemini-1 Pro while consuming 21% less power, achieved by targeted pruning of unneeded attention layers per task domain.
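The blog post does not publish Gemini-2 Green’s internals, so the following is only a generic sketch of the technique it names: skipping attention sublayers that offline profiling has flagged as unneeded for a given task domain. Every class name, function, and layer index here is hypothetical, chosen for illustration.

```python
import torch
import torch.nn as nn

class PrunableBlock(nn.Module):
    """Transformer block whose attention sublayer can be bypassed at inference."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm2 = nn.LayerNorm(d_model)
        self.skip_attn = False  # flipped on when this layer is pruned for a domain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.skip_attn:
            attn_out, _ = self.attn(x, x, x, need_weights=False)
            x = self.norm1(x + attn_out)
        return self.norm2(x + self.ff(x))

def apply_domain_pruning(blocks: nn.ModuleList, prune_layers: set[int]) -> None:
    """Skip the attention sublayer in the listed blocks for the current domain.

    `prune_layers` would come from per-domain profiling of which layers
    matter, a step that is entirely hypothetical here."""
    for i, block in enumerate(blocks):
        block.skip_attn = i in prune_layers

blocks = nn.ModuleList(PrunableBlock() for _ in range(6))
apply_domain_pruning(blocks, prune_layers={2, 4})  # e.g., profiling flagged layers 2 and 4
x = torch.randn(1, 16, 256)  # (batch, sequence, d_model)
for block in blocks:
    x = block(x)
```

The saving comes from the attention matrix multiplications that are never executed; how close that gets to a 21% power reduction depends on how many layers a given domain can tolerate losing.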
| AI Company | Efficiency Improvement (2025) | Estimated Cost Reduction |
|---|---|---|
| OpenAI | 17% per token | $0.0025 → $0.0021 per token (inference) |
| NVIDIA | 28% energy per billion inference tokens | ~5 MWh saved per 100M tokens processed |
| DeepMind | 21% less power (Gemini-2 Green) | ~$50K annual savings per LLM cluster |
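To make the units in the NVIDIA row concrete, a short sanity check follows. Reading the energy column as megawatt-hours is an editorial assumption; the arithmetic simply shows what the quoted figures would imply per token.

```python
# Per-token energy implied by the table's NVIDIA row (quoted figures only).
saving_mwh = 5      # quoted saving, read as MWh (assumption)
tokens = 100e6      # per 100M tokens processed

saving_wh_per_token = saving_mwh * 1e6 / tokens
print(f"Saving: {saving_wh_per_token:.3f} Wh per token")  # 0.050 Wh/token

# If that saving represents the quoted 28% improvement over Hopper:
hopper_wh_per_token = saving_wh_per_token / 0.28
print(f"Implied Hopper baseline: {hopper_wh_per_token:.3f} Wh per token")  # ~0.179
```

The point is not precision but scale: at billions of tokens per day, fractions of a watt-hour per token dominate a datacenter’s power bill.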
It must also be noted that AI’s energy consumption, while non-trivial, is often dwarfed by inefficiencies in traditional sectors. For instance, commercial office buildings in the U.S. account for approximately 20% of national electricity usage, whereas all AI datacenters combined remain below 2%, according to a 2025 energy audit by the DOE.
Social and Ethical Governance: Creating Guardrails and Public Confidence
Ip briefly mentions AI risk and governance concerns but implies that the solution space is lacking. In fact, the regulatory landscape is maturing rapidly. The European Union’s AI Act (enforced beginning March 2025) establishes a tiered risk classification system and mandates transparency disclosures and model documentation for all general-purpose AI systems. Tech firms including Meta, OpenAI, and Cohere have already submitted their model cards and impact-risk evaluations in compliance, according to public filings.
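For readers unfamiliar with the Act’s structure, here is a toy sketch of how a compliance team might encode its tiered classification. The tier names follow the Act’s published categories; the obligation summaries are a drastic simplification for illustration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, each paired with a one-line
    summary of the obligations it triggers (simplified)."""
    UNACCEPTABLE = "prohibited outright (e.g., social scoring)"
    HIGH = "conformity assessment, logging, human oversight"
    LIMITED = "transparency duties (e.g., disclose AI-generated content)"
    MINIMAL = "no mandatory obligations"

def obligations_for(system_tier: RiskTier) -> str:
    """Look up the headline obligations for a classified system."""
    return system_tier.value

# A hypothetical general-purpose chatbot would sit in the limited tier:
print(obligations_for(RiskTier.LIMITED))
```

The model cards and impact-risk evaluations mentioned above are, in effect, the documentation layer that lets regulators verify a system was classified into the right tier.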
The U.S. moved forward in July 2025 with an executive order formalizing the National AI Standards Body (NAISB), establishing federal guidelines for safety testing, licensing, and dual-use risks, particularly for LLMs used in biotech, finance, and autonomous weapons (source: FTC Newsroom).
Major companies have also adopted responsible governance voluntarily: Anthropic’s new Responsible Scaling Policy (2025), based on constitutional AI principles, caps its next frontier systems at 1.2 trillion parameters unless clearance is granted by a multi-stakeholder oversight board. These evolving protocols are the antithesis of the unregulated AI growth Ip warns against: the industry is showing responsiveness, not recklessness.
Closing Thoughts: Constructive Nuance, Not Deterrence
Greg Ip’s caution about AI hubris deserves consideration in a world prone to fads, bubbles, and long-tail risks. But to dismiss the current state of AI innovation as a replay of the dot-com era is both reductive and at odds with the evidence. Modern AI is producing measurable productivity gains, tectonic shifts in employment, real energy-efficiency improvements, and meaningful corporate governance mechanisms, all unfolding rapidly in real time.
The stakes are not merely speculative; they are political, economic, and societal. Framing AI as mainly hazardous stifles nuanced public discourse and undermines innovation-sensitive policymaking. What is needed is not blanket alarmism but adaptive, evidence-based stewardship, one that acknowledges AI’s transformative potential without ignoring its legitimate risks and challenges. If anything, 2025 is demonstrating that AI is not just hype; it is delivery on a scale not seen since the Industrial Revolution, albeit with complexities worthy of examination rather than dismissal.