Consultancy Circle

Artificial Intelligence, Investing, Commerce and the Future of Work

Open Standards: The Key to AI Innovation and Sustainability

As artificial intelligence (AI) continues to reshape industries, economies, and societies, the call for open standards has never been more urgent. The push towards standardization in AI goes beyond interoperability — it’s a fundamental requirement for innovation, sustainability, and ethical development. The explosion of proprietary models, fragmented tools, and closed ecosystems threatens to make AI progress inefficient, expensive, and monopolized. Without open standards, developers, regulators, and end-users risk losing collective control over one of the most transformative technologies of our time.

The Innovation Paradox in AI Development

AI innovation is currently trapped in what has been called the “innovation paradox”: the rapid evolution of capabilities is, ironically, stifled by the inability to share, compare, or replicate these innovations. As Ben Dickson highlights in his VentureBeat article, the growing complexity of models, datasets, and accelerators creates an ecosystem in which even foundational collaboration becomes impractical. When each firm builds its own model training pipelines, hardware compatibility layers, and datasets, the cost of adopting improvements, or even of understanding a model’s behavior, skyrockets.

This fragmented development environment introduces a misalignment between innovation and scalability. A new algorithm or technique may exist, but replicating or integrating it requires rebuilding large parts of the stack, a highly wasteful process. Without shared standards, the industry risks duplicating effort across similar projects, inflating costs and delaying progress.

Tech entities like OpenAI and DeepMind are powerful innovation centers, but even they rely on broader community feedback and collaboration. For AI to truly benefit society, every player — from small startups to public institutions — must have access to the tools and frameworks needed to contribute. This can only occur through standardized APIs, model benchmarks, data annotation protocols, and interface layers.
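To make this concrete, here is a hypothetical sketch in Python of what a minimal standardized text-generation interface could look like. The `TextModel` contract and its method names are invented for illustration and do not correspond to any published specification.

```python
# Hypothetical sketch only: these names come from no published spec.
from typing import Protocol


class TextModel(Protocol):
    """Minimal contract a conforming model provider would implement."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        """Return a text completion for the given prompt."""
        ...


def summarize(model: TextModel, document: str) -> str:
    # Application code depends only on the contract, not on any vendor SDK.
    return model.generate(f"Summarize:\n{document}", max_tokens=128)
```

Because the application code depends only on the contract, a team could swap one conforming provider for another without rewriting its integration layer.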

Economic Implications of Proprietary AI Silos

AI development is an expensive endeavor. According to the McKinsey Global Institute, state-of-the-art large language models (LLMs) can cost tens of millions of dollars in training compute alone. A study in the Stanford HAI AI Index 2023 reports that GPT-4 likely cost over $100 million to develop once GPU consumption and RLHF (Reinforcement Learning from Human Feedback) costs are factored in. With only a few players able to absorb such expenses, market concentration is worsening, placing the power of AI in the hands of a few technology conglomerates.

By contrast, open standards reduce redundancy and distribute infrastructural resources more effectively. Take Kubernetes and TensorFlow, both open-source projects that became de facto standards and fostered entire ecosystems of applications, startups, and research initiatives. Similar frameworks in AI inference, model deployment, and dataset handling could democratize the field. Moreover, the cost of complying with emerging AI legislation, such as the EU’s AI Act or California’s AI transparency laws, could be dramatically reduced if industry-wide standards were in place.

The financial argument isn’t just theoretical. NVIDIA reports that data center demand for AI chips like the H100 continues to set record highs, with revenue of over $14 billion in Q3 2023 alone, as cited in its quarterly earnings blog. Yet many of these chips remain underutilized due to idiosyncratic model requirements and a lack of flexibility in deploying cross-architecture AI applications. Developers are forced to fine-tune models for each hardware target, increasing both cost and complexity. Standardization would enable model portability across hardware environments and allow compute resources to be used more flexibly, as the sketch below illustrates.
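As a rough illustration of the portability argument, the following sketch loads a model in the ONNX interchange format with ONNX Runtime and chooses hardware backends by listing execution providers in preference order. It assumes a file named `model.onnx` exists and that its first input takes a batch of token IDs; both are placeholders.

```python
# Sketch: running one ONNX model on different hardware backends.
# Assumes "model.onnx" exists and its first input is a batch of
# token IDs; both are placeholders for illustration.
import numpy as np
import onnxruntime as ort

# Providers are tried in order; drop any that are not installed
# on the machine (e.g., keep only the CPU provider on a laptop).
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
batch = np.zeros((1, 128), dtype=np.int64)  # placeholder token IDs
outputs = session.run(None, {input_name: batch})
```

The same exported artifact serves GPU and CPU deployments; no per-device retraining or fine-tuning step is required.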

Environmental and Sustainability Considerations

Training large-scale AI models consumes immense energy. According to a recent MIT Technology Review article, training GPT-3 consumed approximately 1,287 megawatt-hours of electricity, roughly the annual electricity use of more than a hundred U.S. homes. Furthermore, inference, the process of using a model to serve real-time outputs, can consume even more energy than training in aggregate, because it runs continuously at scale.

Open standards pave the way for more efficient carbon-aware scheduling, model quantization, and multi-backend inferencing, all of which reduce the environmental burden. The Green Software Foundation advocates standardized techniques for measuring and reporting energy consumption of software processes — a practice that, if extended to AI, could incentivize greener model deployment strategies.
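As one concrete instance of these techniques, the sketch below applies PyTorch’s post-training dynamic quantization to a toy network. The architecture is a stand-in; the same call applies to larger models whose Linear layers dominate inference cost.

```python
# Sketch: post-training dynamic quantization with PyTorch.
# Linear-layer weights are stored as int8, shrinking the model and
# typically reducing CPU inference energy; the network is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    logits = quantized(torch.randn(1, 512))  # runs with int8 weights
```

If the energy and carbon cost of such a step were measured and reported in a standardized way, deployments could be compared and optimized across vendors rather than one stack at a time.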

Incorporating such standards into foundation models (like Anthropic’s Claude and Meta’s LLaMA series) or even edge-deployed AI systems (e.g., on mobile via Google’s TensorFlow Lite) could minimize waste. Sustainability is no longer just a feature; in high-volume use cases — such as Google Search, which uses LLMs for query refinement — lower power consumption translates directly into cost savings and improved user experience.

The Role of Regulations and Global Compliance

Governments around the world are racing to regulate AI, and open standards present a clear route to compliance. In the U.S., the Federal Trade Commission (FTC) has launched investigations into companies suspected of deceptive or unfair surveillance practices with AI applications (FTC press releases). Similarly, the Global Partnership on AI (GPAI) pushes for a collaborative governance model, where accountability and auditability are built into AI systems.

However, regulators face a steep challenge without standardized implementations of critical features like explainability, fairness assessments, and privacy guarantees. Institutions such as the World Economic Forum emphasize that a lack of software interpretability impedes not only transparency but also equitable deployment. AI should not be governed by proprietary semantics or black-box logic that only its creators understand.

By aligning development with the ISO/IEC and NIST standards for AI that are currently under development, companies gain a blueprint for legally and ethically sound systems. The open-standards movement is being bolstered by nonprofits and consortia such as the Linux Foundation’s LF AI & Data project and the Open Neural Network Exchange (ONNX), which promotes a standardized format for sharing models between PyTorch, TensorFlow, and other frameworks.
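In practice, ONNX-based sharing can be as simple as exporting a trained model to the standardized format so that other frameworks and runtimes can consume it. Here is a minimal sketch with an illustrative toy model; the file and tensor names are placeholders.

```python
# Sketch: exporting a toy PyTorch model to the ONNX format so that
# other frameworks and runtimes can load the same artifact.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

dummy_input = torch.randn(1, 784)  # example input pins the graph shapes
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",      # placeholder file name
    input_names=["features"],
    output_names=["logits"],
)
```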

Industry Collaboration and Competing Models

The rise of competing AI models has introduced both innovation and fragmentation. In 2024, transformative models include OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude 3, Mistral’s Mixtral, Meta’s LLaMA 3, and Cohere’s Command-R. A critical challenge has emerged: there is no consistent method to benchmark, compare, or integrate these models under a unified metric system.

Currently, evaluations such as MMLU (Massive Multitask Language Understanding) or BIG-bench are run and reported inconsistently from one model provider to another. Companies like Hugging Face attempt to create common ground (e.g., the Open LLM Leaderboard), but the ecosystem remains fragmented. Independent benchmarks, as The Gradient emphasizes, are often outdated by the time they are published, further underscoring the need for standardization of scores, dataset splits, and reporting practices.
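One way to picture such standardization is a shared, machine-readable evaluation record that pins down everything a reported score depends on. The schema below is purely hypothetical rather than an existing leaderboard format, but it shows the kinds of fields a common reporting standard would need.

```python
# Hypothetical schema for a standardized evaluation record; the field
# names are illustrative, not an existing leaderboard format.
from dataclasses import asdict, dataclass
import json


@dataclass(frozen=True)
class EvalRecord:
    model: str               # exact model checkpoint name
    benchmark: str           # e.g. "MMLU"
    benchmark_version: str   # exact revision of the test set
    split: str               # which split was scored
    metric: str              # e.g. "accuracy"
    score: float
    shots: int               # few-shot configuration used


record = EvalRecord(
    model="example-model-v1",
    benchmark="MMLU",
    benchmark_version="2024.1",
    split="test",
    metric="accuracy",
    score=0.712,
    shots=5,
)
print(json.dumps(asdict(record), indent=2))
```

With every score tied to a pinned benchmark version and split, two providers’ numbers could be compared directly instead of debated.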

Below is a table that outlines some key differences and overlaps between the major 2024 AI models:

Model        Provider    License Type              Uses Open Standards
GPT-4        OpenAI      Proprietary               Partial
LLaMA 3      Meta        Open (with restrictions)  Yes
Command-R+   Cohere      Open                      Yes
Claude 3     Anthropic   Proprietary               Limited

By adopting shared standards at the training, evaluation, ethical, and deployment levels, these models can co-exist in a richer ecosystem that promotes both competition and collaboration. The broader industry — including NVIDIA, AWS, Hugging Face, and even non-tech sectors using AI — must coalesce around shared interfaces. More than performance metrics, it’s about sustainable progress.

Conclusion: A Call to Action for AI’s Future

Open standards are not just a technical convenience; they are the linchpin for AI’s inclusive growth, safety, and long-term sustainability. From lowering costs and increasing energy efficiency to enabling fair regulation and global collaboration, standards are the building blocks of a resilient AI future. As we stand at the tipping point of AI’s societal integration, investing in open interface layers, common evaluation metrics, and shared datasets is not optional — it is essential.

Industry, academia, and regulators must work hand in hand to build infrastructure and governance frameworks that mirror the Internet’s own success story. Much as TCP/IP enabled the explosion of the web, standardized AI architecture could unlock a new renaissance in responsible intelligence. The choice is stark: siloed, extractive AI driven by market domination, or an interoperable, ethical AI ecosystem steering global innovation forward.

by Calix M

This article is inspired by this original VentureBeat post.

APA Citations

  • Dickson, B. (2024). MCP and the innovation paradox: Why open standards will save AI from itself. VentureBeat. https://venturebeat.com/ai/mcp-and-the-innovation-paradox-why-open-standards-will-save-ai-from-itself/
  • MIT Technology Review. (2023). New energy estimates for AI model training. https://www.technologyreview.com/
  • NVIDIA. (2023). Q3 FY24 earnings highlights. https://blogs.nvidia.com/blog/2023/11/21/q3-fy24-earnings/
  • OpenAI. (2023). GPT-4 technical report. https://openai.com/blog/gpt-4
  • McKinsey Global Institute. (2023). GenAI economics and productivity potential. https://www.mckinsey.com/mgi
  • The Gradient. (2024). Evaluating language models fairly. https://thegradient.pub
  • Federal Trade Commission. (2024). AI fair use updates. https://www.ftc.gov/news-events/news/press-releases
  • World Economic Forum. (2024). Future of work reports. https://www.weforum.org/focus/future-of-work
  • DeepMind. (2024). On scientific benchmarks and reproducibility. https://www.deepmind.com/blog
  • Hugging Face. (2024). Open LLM Leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

Note that some references may no longer be available at the time of your reading due to page moves or expirations of source articles.