What happened
A Politico report published on May 3 argues that the European Union is in danger of burning through roughly €20 billion on AI compute ambitions without a realistic plan for turning infrastructure spending into competitive advantage. The criticism is aimed at Europe's push to fund large-scale AI computing capacity while still lagging on the harder parts of adoption: productization, data access, integration, and execution inside actual businesses.
The core tension is familiar. Policymakers want sovereign AI capacity, local clouds, and strategic independence from U.S. hyperscalers. But compute alone does not create useful AI systems. If the surrounding stack is weak (fragmented data, unclear ownership, thin engineering capacity, too few production use cases), big infrastructure bets can end up as political theater instead of industrial leverage.
That does not make the sovereignty goal wrong. It makes the sequencing question urgent. The market is moving from model headlines to operational systems, and Europe now has to decide whether it wants to fund raw horsepower or complete AI delivery chains that businesses can actually use.
Why it matters
This matters because enterprise AI value is not created at the GPU layer alone. The real bottlenecks usually sit one level up: extracting data from documents, connecting models to ERP and CRM systems, enforcing policies, handling human review, and keeping costs predictable in production. A region can invest billions in compute and still fail to create usable business outcomes if those layers stay immature.
It also matters for cost discipline. Many organizations are discovering that frontier-model enthusiasm can hide weak unit economics. If every problem gets framed as a need for more compute, teams miss cheaper wins built on retrieval, smaller models, workflow design, and targeted automation. In that sense, the Politico critique lands at exactly the right time. The question is no longer whether Europe should have AI infrastructure, but what kind of workloads that infrastructure should actually serve.
For European buyers, there is a second implication. Sovereign AI cannot just mean an owned data center. It has to mean controllable architectures, auditable data flows, clear governance, and the option to mix open and closed components without becoming trapped in one vendor stack.
Laava perspective
At Laava, we think this is the useful correction the market needs. Most enterprise AI projects do not fail because there was not enough compute. They fail because the workflow was vague, the data was messy, the integration work was underestimated, or the business case depended on a flashy demo instead of repeatable operations.
That is why we frame AI as business process engineering first, model choice second. If you want sovereign AI in practice, start with a document-heavy workflow, map the decision points, connect the systems of record, and use the smallest model stack that can do the job reliably. Sovereignty is not just where the GPUs sit. It is whether you control the process, the data movement, the fallback logic, and the economics.
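The "smallest model stack that can do the job" idea can be sketched as a routing rule: try the cheap model first, and escalate only when it is unsure. This is a minimal illustrative sketch, not a Laava implementation; the model wrappers, confidence scores, and threshold are all hypothetical stand-ins for whatever your actual stack provides.

```python
# Sketch of cost-aware model routing with explicit fallback logic.
# All names here are hypothetical; real systems would wrap actual
# model endpoints and log every escalation for audit.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelResult:
    answer: str
    confidence: float  # 0.0-1.0, as reported by the model wrapper

def route(doc: str,
          small_model: Callable[[str], ModelResult],
          large_model: Callable[[str], ModelResult],
          threshold: float = 0.8) -> ModelResult:
    """Try the cheap model; fall back to the expensive one if unsure."""
    result = small_model(doc)
    if result.confidence >= threshold:
        return result          # cheap path: predictable unit economics
    return large_model(doc)    # escalation path: logged, auditable

# Stub models standing in for real endpoints.
small = lambda d: ModelResult("invoice", 0.95 if "invoice" in d else 0.3)
large = lambda d: ModelResult("contract", 0.9)

print(route("invoice #123 from ACME", small, large).answer)  # invoice
print(route("unclear scanned page", small, large).answer)    # contract
```

The point of the sketch is that sovereignty and cost control live in this routing layer, not in the GPUs: you decide where data goes, when escalation happens, and what it costs.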
For Dutch and European firms, this creates a more grounded roadmap. Use infrastructure strategy to support production workflows, not the other way around. The companies that win will not be the ones with the biggest cluster press release. They will be the ones that turn PDFs, emails, approvals, and internal knowledge into boringly reliable automated operations.
What you can do
If you are evaluating sovereign AI plans, ask four practical questions. Which workflows are expensive enough to automate now? Where does unstructured data enter the process? Which systems need to be updated automatically? And which parts really require frontier-model compute versus smaller, cheaper components?
Then pilot one use case end to end. Think invoice intake, customer service triage, proposal drafting, or policy lookup with approvals. That will tell you more about your real AI readiness than another abstract compute strategy deck ever will.