Nov 2025

Decoding Responsible AI: Public sector resilience in the midst of an AI bubble

This article was originally published in Decoding, our monthly briefing on the latest trends in government technology. Sign up here to receive future editions directly in your inbox.

In March, our first AI-focused edition of Decoding examined what governments need to deploy AI responsibly: strong governance, clear risk frameworks and robust digital infrastructure. Eight months later, the environment has shifted again. The AI market has accelerated faster than many expected; the EU’s AI policy is being reopened through the Digital Omnibus; and Member States are pushing for clearer safeguards to match the rapid pace of adoption.

Responsible AI is no longer only about ethics or governance frameworks. It is also about resilience: the ability of public institutions to make evidence-based decisions in a market shaped by uncertainty, concentration, and strong inevitability narratives that push adoption faster than evidence supports. 

This edition explores what that means for procurement, resilience and public trust at a time when governments are being urged to deploy AI tools at scale. We also highlight developments across Europe, along with national insights from Denmark.

In this edition, you’ll read about:

  • How interdependent AI markets create risks for procurement and resilience
  • The EU’s Digital Omnibus and its implications for AI regulation
  • Global developments in AI
  • CAISA’s insights on Artificial General Intelligence 

When market hype meets public procurement

The market context: signs of an AI bubble

Over the past year, AI has driven record stock-market gains. Analysts, research institutes and industry leaders describe today’s conditions as strikingly similar to those of previous tech bubbles. A growing share of investment is concentrated in a small number of companies whose valuations depend heavily on continued AI breakthroughs. Examples include Nvidia, AI companies such as OpenAI and Perplexity, and infrastructure providers such as CoreWeave – companies interconnected through chip supply, cloud infrastructure and investment partnerships.

These interdependencies create systemic risk. If valuations fall or corporate strategies shift, downstream effects could be rapid. The expansion of AI exposure into public markets also increases systemic vulnerability for pension funds and long-term institutional investors.

Why this matters for the public sector

Public-sector organisations procure and deploy AI systems from the same market that investors now worry is overheating. But public institutions have different incentives and lower risk tolerance. As AI adoption accelerates, governments face pressures that include:

  • Vendor instability: Startups may pivot or collapse; large companies may discontinue products.
  • Lock-in risk: Proprietary systems make it difficult to switch providers if market conditions shift.
  • Pricing volatility: Compute-based pricing models remain unpredictable.
  • Innovation pressure: Political and organisational incentives encourage rapid adoption without adequate evaluation.
  • Limited internal capacity: Many authorities lack the technical expertise to assess model capabilities and limitations.

Responsible AI therefore becomes a form of organisational resilience. Fairness, contestability, and transparency are not only ethical principles; they are also risk-management tools that protect governments in volatile markets.

A resilience-focused procurement and governance approach can help public institutions withstand hype cycles and market corrections. 

Protective measures that governments can take:

  • Use structured risk assessments for AI procurement
  • Prioritise modular, exportable solutions that reduce lock-in
  • Require vendors to demonstrate model lifecycle stability and financial resilience
  • Strengthen cross-agency procurement competencies and shared evaluation resources
  • Build strategic partnerships with research institutions and digital authorities 

Across Europe, these questions become even more pressing as legislative frameworks evolve.

EU’s AI Digital Omnibus

On 19 November, the European Commission presented its Digital Omnibus package, proposing significant amendments to the AI Act. The Commission frames the package as an effort to promote European leadership while safeguarding fundamental rights.

The proposal has sparked political debate. Civil society groups warn that streamlining could weaken protections, particularly under the GDPR, by allowing companies to decide when data counts as ‘personal’ and by expanding the use of personal information for AI development without meaningful user consent. Industry voices welcome the package, but some argue that it does little to simplify EU rules or provide a competitive edge, leaving much of the regulatory patchwork intact while granting non-EU companies similar data access benefits. Political groups in the European Parliament have opposed parts of the proposal.

For national and local authorities, including procurement bodies, the package introduces uncertainty. Procurement planning depends on regulatory clarity. Whether the amended AI Act ultimately strengthens oversight or accelerates adoption will depend on the final political compromise and the timeline for Parliament negotiations.

→ Read the full proposals here and here.

Global spotlights

🇪🇺 EU investigates Amazon and Microsoft cloud under DMA: The European Commission has launched three market investigations under the Digital Markets Act (DMA), focusing on cloud computing. Two will assess whether Amazon Web Services and Microsoft Azure act as “gatekeepers” despite not meeting DMA size thresholds. A third will examine whether the DMA effectively addresses anti-competitive practices, such as interoperability barriers or data restrictions. The aim is to ensure fair, open, and competitive cloud services, vital for AI development and Europe’s digital sovereignty.

🇺🇸 US launches ‘Genesis Mission’ to put AI at the heart of science: The Trump administration has unveiled the Genesis Mission, aiming to turn government databases and supercomputers into a massive AI research platform. The initiative seeks to accelerate discoveries in energy, health, security and more, while maintaining US leadership in AI. The project will involve national labs, universities and major tech companies, but details on funding remain vague. 

🇬🇧 Call for UK AI engineering strategy: A report from Autodesk and the Association for Consultancy and Engineering urges the UK government to create a National AI in Engineering Strategy to keep the UK competitive. According to the report, AI has boosted engineering productivity by up to 40% and reduced project overruns by 25%, but adoption remains uneven and is not yet supported by policy. The report recommends ethical AI governance, controlled testing, and investment in skills and training to empower engineers and sustain infrastructure innovation.

🇸🇪 Swedish government proposes real-time AI facial recognition for police: The government plans to allow police to use AI for real-time facial recognition to tackle serious crime, including human trafficking, murder, and drug and weapon offences. The Discrimination Ombudsman warns of potential systemic bias and notes that the memorandum does not include any analysis of the risks of discrimination, e.g., related to people's ethnic background. The law is expected to take effect on 1 May 2026.

🇩🇪 Germany’s AI infrastructure boom strains the grid: Germany’s rapid expansion of data centres is exposing energy and infrastructure limits, particularly in Frankfurt, where power capacity is already fully allocated for years ahead. As Europe scales “AI factories” and gigafactories, Germany faces rising electricity demand, grid congestion and increased reliance on gas-powered backup generation, raising questions about sustainability, costs and long-term resilience.

🇫🇷 France rallies EU behind sovereign AI push: France is accelerating its campaign for European-built AI, with President Emmanuel Macron urging governments and industry to prioritise homegrown systems over US and Chinese models. Speaking at Paris’s “Adopt AI Summit”, he called for a simplified EU regulatory regime, significant new investment through a Franco-German Important Project of Common European Interest on AI, and the adoption of sovereignty criteria for AI gigafactories and public procurement.

🇮🇳 AI leaders are accelerating investment in India: Global AI firms are expanding in India through new offices, data-centre commitments, and large-scale partnerships. Anthropic, Google, OpenAI and Perplexity all announced major India-focused initiatives in 2025, driven by the country’s vast user base, engineering talent and supportive regulatory environment. India, meanwhile, is positioning itself as a sovereign AI hub, investing in domestic compute capacity and local-language models ahead of hosting the AI Impact Summit in February 2026.

CAISA: Artificial General Intelligence and its societal implications

Denmark’s national centre for AI in society, CAISA, has released a research brief exploring the promises and perils of Artificial General Intelligence (AGI). The report examines the assumptions underlying AGI expectations and identifies lessons for public-sector decision-makers.

The brief challenges the narrative that AGI could soon trigger an “intelligence explosion,” where systems become broadly intelligent and capable of independently pursuing complex goals. CAISA argues that these assumptions are weakly supported by evidence and warns against treating AGI as an imminent certainty. 

Instead, the brief emphasises current, tangible challenges: algorithmic bias, privacy concerns, corporate concentration of power, and risks to democratic oversight. By focusing on these issues, the report urges governments and public institutions to direct attention and resources toward measurable societal impacts rather than speculative future scenarios.

CAISA recommends an evidence-based approach to AGI research and governance. Interdisciplinary studies that combine technical, social, ethical, and political perspectives are critical before treating AGI as a central regulatory or policy priority. Public trust and resilience depend on transparency, accountability, and the ability to evaluate AI systems effectively.

The report highlights several implications:

  • AGI remains speculative: public-sector AI strategies should prioritise known risks, such as fairness, contestability, and transparency.
  • Governance of existing AI systems should be strengthened to safeguard rights and trust.
  • Local data sovereignty and evaluation capacity should be strengthened, including a robust national infrastructure for research and analysis.
  • Public funding should support interdisciplinary research that addresses the social, ethical, and political dimensions of AI alongside technical innovation.
  • Public institutions should design for long-term resilience by avoiding vendor lock-in and hype-driven investments.

CAISA offers a measured perspective at a moment when AI markets and policymaking are heavily influenced by speculation. The brief provides a model for how governments can adopt AI responsibly, ensuring that public value, trust and resilience remain central.

→ Read the full report here.

What we look forward to

📆 CyberConfex 2026: A one-day event focused on public-sector cybersecurity, featuring updates from practitioners, discussions on current approaches, and opportunities to engage with peers and specialists. The theme of 2026 is "Cybersecurity in the Age of AI and Advanced Threats".
5 February 2026, London.
→ Read more and register here.

📆 Techarena 2026: Techarena will host its annual technology and business gathering in Stockholm, featuring discussions across AI, innovation, med-tech, defence-tech, product development, and sustainability. The two-day event includes talks, panel sessions, side events and networking opportunities for companies, researchers, investors and public-sector representatives.
11–12 February 2026, Stockholm.
→ Read more here.

📆 AI Impact Summit 2026: Hosted in New Delhi, the summit focuses on moving global AI governance from high-level principles to practical outcomes. Structured around seven themes – including human capital, inclusion, trusted AI, resilience and democratised resources – the event aims to support more inclusive, sustainable and development-oriented uses of AI. Organisations worldwide are invited to contribute in-person sessions as part of the official programme.
16–20 February 2026, New Delhi.
→ Read more here.

Questions or feedback?

For questions, comments, or suggestions regarding this article, please get in touch with Emilia.

Stay updated

Enjoyed this edition of Decoding? Subscribe here to receive future insights on digital public services directly in your inbox.