The ECB’s Top Banking Regulator Just Said AI Forces a Rethink of Global Financial Infrastructure
On May 9, 2026, José Luis Escrivá — Governor of the Bank of Spain and, by that office, a member of the ECB Governing Council — said at an event in Tarragona that recent advances in artificial intelligence compel central banks and supervisors to reassess the robustness of the financial infrastructure they oversee.
It was a short statement, but the regulatory weight behind it is not. Escrivá speaks as a member of the governing body that sets eurozone monetary policy, and the Banco de España he leads sits inside the Single Supervisory Mechanism, whose direct mandate covers 113 significant eurozone banks. The same week, the IMF published a report warning that AI-powered cyberattacks could trigger systemic shocks to global financial markets. The timing is not a coincidence.
Three risks drive this review: AI concentration among a handful of providers, creating single points of failure across the entire banking sector; model risk from AI systems making correlated bad decisions at machine speed; and cybersecurity exposure amplified by AI-powered attack tools. Each is real, each is growing, and none of them fits neatly into the existing regulatory toolkit.
- 85%+ of large eurozone banks already use AI in some form — ECB Banking Supervision, 2026
- 88% of enterprise LLM market controlled by just three companies — Anthropic, OpenAI, Google — Menlo Ventures, 2025
- 19 critical ICT third-party providers designated under DORA as of 2025 — covering AI, cloud, and payment infrastructure — EU DORA framework, January 2025
- $450B in large-bank commercial and industrial (C&I) lending commitments to AI-adjacent industries as of late 2025 — up from $250B in 2015 — Chicago Fed, 2026
He is not just the Bank of Spain’s governor. He sits within the supervisory system that oversees 113 European banks.
José Luis Escrivá was appointed Governor of the Banco de España by the Spanish government of Prime Minister Pedro Sánchez in September 2024. The appointment carried an automatic seat on the ECB Governing Council — the 26-member body that sets monetary policy for the 20-nation eurozone — and on the ECB General Council. He is not a peripheral voice. His career includes a decade-long stint as Head of the Monetary Policy Division at the ECB itself (1999–2004), and a posting as the BIS Chief Representative for the Americas.
The ECB’s role in banking is worth being precise about. The ECB is not solely the eurozone’s central bank — it is also, since the Single Supervisory Mechanism (SSM) took effect in 2014, the direct prudential supervisor of the 113 most significant eurozone banks. When ECB supervisory officials or Governing Council members speak about financial infrastructure resilience, they are speaking in a regulatory capacity that has actual examination authority over institutions with combined assets exceeding €25 trillion.
“Recent developments in artificial intelligence force us to reassess the robustness of our financial infrastructure and our cybersecurity.”
José Luis Escrivá · Governor, Bank of Spain · ECB Governing Council member · Tarragona, Spain, May 9, 2026
Three companies control 88% of enterprise AI. When every bank runs on the same model, that model becomes a single point of failure.
The most structurally novel risk AI introduces to finance is not the one that gets the most press coverage. It is not a rogue trading algorithm. It is not a chatbot giving bad advice. It is concentration: the fact that the global financial system is increasingly dependent on a small handful of AI providers, and that dependence creates a systemic vulnerability that no individual bank’s risk team can mitigate on its own.
According to Menlo Ventures, three companies — Anthropic, OpenAI, and Google — control roughly 88% of the enterprise large-language-model market. The December 2025 report by the European Systemic Risk Board’s Advisory Scientific Committee identified concentration and entry barriers as one of five features of AI that significantly amplify systemic risks in the financial system. If a major AI provider experiences a critical failure — a security breach, a model collapse, a regulatory shutdown — the correlated exposure across institutions that all rely on that same provider could transmit the shock simultaneously to dozens or hundreds of banks.
AI models don’t just make one bad decision. They make the same bad decision 10,000 times a second.
Traditional model risk — the risk that a quantitative model produces wrong outputs and causes financial loss — has existed since banks started using scoring models for credit in the 1980s. The Bank of England’s Financial Stability in Focus report on AI (April 2025) identified a qualitatively new dimension: AI models introduce model uniformity risk at a scale that earlier quantitative models could not.
When dozens of banks run lending decisions through the same foundation model or the same fine-tuned derivative, a flaw in that model’s training data or its objective function will produce correlated errors across the sector simultaneously. If the model systematically underestimates credit risk in a particular sector during a specific macroeconomic regime — which is precisely when accurate credit assessment matters most — the resulting wave of misallocated loans appears on multiple institutions’ balance sheets at once. The ESRB’s Advisory Scientific Committee flagged exactly this in December 2025, naming model uniformity as one of five AI features that significantly amplify systemic risk, alongside concentration, monitoring challenges, overreliance, and speed.
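The mechanism is easy to see in a toy sketch. A minimal Python example, with entirely hypothetical numbers (the sector, probabilities, and approval threshold are illustrative, not drawn from any cited report): thirty banks score the same borrower through one shared model whose flaw underprices risk in a given sector, and every one of them makes the identical wrong approval.

```python
# Toy illustration of model uniformity risk. All numbers are hypothetical.

def shared_model_pd(borrower):
    """One shared foundation model that systematically underestimates
    default probability (PD) for 'energy'-sector borrowers."""
    if borrower["sector"] == "energy":
        return borrower["true_pd"] * 0.5  # the shared flaw
    return borrower["true_pd"]

APPROVAL_THRESHOLD = 0.10  # approve if modeled PD is below 10%

banks = [f"bank_{i}" for i in range(30)]
borrower = {"sector": "energy", "true_pd": 0.12}  # truly too risky

# Every bank runs the same model, so every bank approves the same bad loan.
decisions = {bank: shared_model_pd(borrower) < APPROVAL_THRESHOLD
             for bank in banks}

assert borrower["true_pd"] > APPROVAL_THRESHOLD   # loan should be declined
assert all(decisions.values())                    # yet all 30 banks approve
```

With thirty independently built models, the errors would be largely uncorrelated and partly cancel out across the sector; with one shared model, the same error lands on every balance sheet at once.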
Attackers use AI to find vulnerabilities faster than defenders can patch them. That asymmetry is the IMF’s core warning.
Escrivá’s reference to cybersecurity was not incidental — it was the most immediately concrete of the three AI risks he cited. On May 7, 2026, two days before his Tarragona remarks, the IMF published “Financial Stability Risks Mount as Artificial Intelligence Fuels Cyberattacks,” a research blog post that the institution described as a formal warning to policymakers.
The IMF’s core argument: advanced AI models dramatically reduce the time and cost needed to identify and exploit vulnerabilities in software. Attackers can now operate at machine speed — discovering and targeting weaknesses in widely used systems faster than patching and remediation cycles can respond. In a financial sector built on shared software, shared cloud infrastructure, and shared payment networks, this speed asymmetry is not a firm-level problem. It is a systemic problem. Extreme cyber-incident losses can trigger funding strains, raise solvency concerns at multiple institutions simultaneously, and disrupt financial intermediation at the macro level.
DORA went live in January 2025. But its AI provisions are still catching up to where the market is.
The ECB’s primary structural response to AI and digital operational risk has been the Digital Operational Resilience Act (DORA), which became applicable across the European Union in January 2025. DORA introduced a mandatory framework for ICT risk management, incident reporting, resilience testing, and — most relevant to Escrivá’s remarks — an oversight framework for critical third-party ICT providers. As of 2025, 19 providers have been formally designated under that framework, covering major cloud and infrastructure vendors.
In the ECB’s supervisory priorities for 2026–2028, published in November 2025, AI features under Supervisory Priority 2 (operational resilience and ICT capabilities). The ECB announced it would continue monitoring AI with a more targeted focus on generative AI applications specifically, widening its existing investigation into banks’ AI use to assess prudential materiality and inherent risks. More than 85% of large banks under European supervision already use AI in some form, and adoption is accelerating with generative and agentic AI.
Separately, the ECB has been conducting direct AI workshops with supervised banks since 2025, asking institutions for more detail on their AI strategies, governance frameworks, and risk management approaches. In February 2026, ECB Banking Supervision published a speech titled “Technology is neutral, governance is not: AI adoption in the banking sector,” which laid out the regulatory philosophy: the ECB is not trying to slow AI adoption, but it is insisting that the governance structures around adoption keep pace with the capabilities being deployed.
Every major central bank is watching the same risks. None of them has a definitive answer yet.
“Review” in regulatory language is not a think-piece. It means data requests, examiner dialogues, and potential rule changes.
When ECB officials call for a “review” of financial infrastructure resilience, the practical pathway typically involves several concrete steps. First, supervisory data requests: the ECB asks significant institutions to disclose their AI vendor relationships, third-party dependencies, and the share of critical processes that are now AI-automated. The ECB has already done exactly this — in February 2026, it was reported to have asked a number of individual eurozone lenders for more detail on their lending to AI-adjacent sectors, including data centers.
Second, stress-testing scenarios: the ECB and its supervisory arm can and do run scenario analyses that ask what happens to a bank’s operational resilience if its primary AI provider is unavailable for 72 hours, or what happens if a model powering automated credit scoring fails silently. These scenarios may eventually become formal supervisory requirements, analogous to existing ICT resilience tests under DORA.
Third, in the medium term, concentration limits: regulatory caps on how much of a bank’s critical operational infrastructure can depend on any single AI provider, analogous to the single-counterparty exposure limits that already govern lending books. The ESRB’s December 2025 report explicitly recommended this class of prudential adjustment.
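For illustration, such a cap could be mechanized along the same lines as existing exposure-limit checks. A minimal Python sketch; the 25% cap, the vendor names, and the process list are all hypothetical, since no AI-specific limit of this kind exists today:

```python
# Hypothetical concentration check for AI-provider dependence.
# The 25% cap is illustrative only; no such regulatory limit exists today.
from collections import Counter

CAP = 0.25  # maximum share of critical processes any one provider may carry

def provider_shares(critical_processes):
    """Map each provider to its share of a bank's critical processes."""
    counts = Counter(p["provider"] for p in critical_processes)
    total = sum(counts.values())
    return {provider: n / total for provider, n in counts.items()}

def cap_breaches(critical_processes, cap=CAP):
    """Return the providers whose dependence share exceeds the cap."""
    return {provider: share
            for provider, share in provider_shares(critical_processes).items()
            if share > cap}

# Example bank: three of four critical processes run on one vendor's model.
processes = [
    {"name": "credit_scoring",  "provider": "vendor_a"},
    {"name": "fraud_detection", "provider": "vendor_a"},
    {"name": "aml_screening",   "provider": "vendor_a"},
    {"name": "chat_support",    "provider": "vendor_b"},
]

assert cap_breaches(processes) == {"vendor_a": 0.75}  # 75% share > 25% cap
```

The analogy to single-counterparty limits is direct: the denominator changes from a capital base to an inventory of critical processes, but the supervisory logic of measuring and capping a single point of dependence is the same.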