AI · Financial Regulation · May 9, 2026
§ AI · ECB Banking Supervision · Systemic Risk

The ECB’s Top Banking Regulator Just Said AI Forces a Rethink of Global Financial Infrastructure.

On May 9, 2026, José Luis Escrivá — Governor of the Bank of Spain and, by that office, a member of the ECB Governing Council — said at an event in Tarragona that recent advances in artificial intelligence compel central banks and supervisors to reassess the robustness of the financial infrastructure they oversee.

It was a short statement, but its regulatory weight is considerable. Escrivá speaks as part of the governing body that sets eurozone monetary policy — and within the institution whose supervisory arm directly oversees 113 significant eurozone banks. The same week, the IMF published a report warning that AI-powered cyberattacks could trigger systemic shocks to global financial markets. The timing is not a coincidence.

Three risks drive this review: AI concentration among a handful of providers creating single points of failure across the entire banking sector; model risk from AI systems making correlated bad decisions at machine speed; and cybersecurity exposure amplified by AI-powered attack tools. Each is real, each is growing, and none of them fit neatly into the existing regulatory toolkit.

§ 01 / Who Is Escrivá — and Why His Words Carry Weight

He is not just the Bank of Spain’s governor. He sits on the body that supervises 113 European banks.

José Luis Escrivá was appointed Governor of the Banco de España by the Spanish government of Prime Minister Pedro Sánchez in September 2024. The appointment carried an automatic seat on the ECB Governing Council — the 26-member body that sets monetary policy for the 20-nation eurozone — and on the ECB General Council. He is not a peripheral voice. His career includes a decade-long stint as Head of the Monetary Policy Division at the ECB itself (1999–2004), and a posting as the BIS Chief Representative for the Americas.

The ECB’s role in banking is worth being precise about. The ECB is not solely the eurozone’s central bank — it is also, since the Single Supervisory Mechanism (SSM) took effect in 2014, the direct prudential supervisor of the 113 most significant eurozone banks. When ECB supervisory officials or Governing Council members speak about financial infrastructure resilience, they are speaking in a regulatory capacity that has actual examination authority over institutions with combined assets exceeding €25 trillion.

Recent developments in artificial intelligence force us to reassess the robustness of our financial infrastructure and our cybersecurity.

José Luis Escrivá · Governor, Bank of Spain · ECB Governing Council member · Tarragona, Spain, May 9, 2026
§ 02 / Concentration Risk — When All the Banks Bet on the Same AI

Three companies control 88% of enterprise AI. Every bank using the same model is a single point of failure.

The most structurally novel risk AI introduces to finance is not the one that gets the most press coverage. It is not a rogue trading algorithm. It is not a chatbot giving bad advice. It is concentration — the fact that the global financial system is increasingly dependent on a small handful of AI providers, and that dependence creates a systemic vulnerability that no individual bank’s risk team can mitigate on its own.

According to Menlo Ventures, three companies — Anthropic, OpenAI, and Google — control roughly 88% of the enterprise large-language-model market. The December 2025 report by the European Systemic Risk Board’s Advisory Scientific Committee identified concentration and entry barriers as one of five features of AI that significantly amplify systemic risks in the financial system. If a major AI provider experiences a critical failure — a security breach, a model collapse, a regulatory shutdown — the correlated exposure across institutions that all rely on that same provider could transmit the shock simultaneously to dozens or hundreds of banks.
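
Concentration of this kind is conventionally measured with the Herfindahl-Hirschman Index (HHI): the sum of squared market shares. A minimal sketch — the split of the roughly 88% aggregate share across the three vendors is assumed for illustration, since the article cites only the combined figure:

```python
def herfindahl(shares):
    """Herfindahl-Hirschman Index on a 0-1 scale: sum of squared market
    shares. Above ~0.25 is conventionally treated as highly concentrated
    (equivalent to 2,500 on the 0-10,000 scale used by competition
    authorities)."""
    return sum(s ** 2 for s in shares)

# Assumed split of the ~88% enterprise-LLM share across three vendors,
# with the remainder treated as one fragmented bucket (illustrative only;
# lumping the 12% tail into one "firm" slightly overstates the index).
shares = [0.40, 0.30, 0.18, 0.12]
print(herfindahl(shares))  # ≈ 0.297 — above the 0.25 "highly concentrated" line
```

Even with generous assumptions about the long tail, any plausible split of an 88% three-firm share lands well inside the highly concentrated range.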

The Concentration Scenario
Imagine three of the largest eurozone banks — each running independent loan-origination, fraud-detection, and market-making operations — all using the same foundation model from the same cloud AI provider. The provider’s model goes down for 48 hours due to a cyberattack or a catastrophic inference failure. All three banks lose the ability to process automated credit decisions at scale simultaneously. Their retail lending pipelines freeze at the same moment. Regulators see identical operational failures appear at the same time across the sector, with no single institution’s failure to blame. That is concentration risk. That is what the existing single-counterparty exposure limits — designed for traditional loan books — were not built to catch. Under DORA’s new critical third-party framework (live January 2025), 19 critical ICT providers are now formally designated and subject to ECB oversight. But the rule only covers designation; it does not cap how many banks can rely on any one of them.
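
The scenario above can be reduced to a toy simulation: assign each bank a primary AI provider in proportion to market share, then measure how much of the sector a single provider outage disables at once. Provider labels and shares here are invented for illustration, not actual market data:

```python
import random

def simulate_outage(n_banks, provider_shares, failed, seed=0):
    """Assign each bank one primary AI provider by market share, then
    return the fraction of banks disrupted when `failed` goes down.
    A toy model: real banks run multiple vendors and fallbacks."""
    rng = random.Random(seed)
    providers = list(provider_shares)
    weights = list(provider_shares.values())
    assignments = rng.choices(providers, weights=weights, k=n_banks)
    return assignments.count(failed) / n_banks

# Illustrative shares loosely echoing the ~88%-across-three-vendors figure.
shares = {"A": 0.40, "B": 0.30, "C": 0.18, "other": 0.12}
print(simulate_outage(100, shares, failed="A"))
```

The point of the sketch is the correlation structure, not the numbers: a 48-hour outage at one provider does not hit a random bank, it hits every bank that drew that provider, simultaneously.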
§ 03 / Model Risk — When the Algorithm Is Wrong, at Scale, at Speed

AI models don’t just make one bad decision. They make the same bad decision 10,000 times a second.

Traditional model risk — the risk that a quantitative model produces wrong outputs and causes financial loss — has existed since banks started using scoring models for credit in the 1980s. The Bank of England’s Financial Stability in Focus report on AI (April 2025) identified a qualitatively new dimension: AI models introduce model uniformity risk at a scale that earlier quantitative models could not reach.

When dozens of banks run lending decisions through the same foundation model or the same fine-tuned derivative, a flaw in that model’s training data or its objective function will produce correlated errors across the sector simultaneously. If the model systematically underestimates credit risk in a particular sector during a specific macroeconomic regime — which is precisely when accurate credit assessment matters most — the resulting wave of misallocated loans appears on multiple institutions’ balance sheets at once. The ESRB’s Advisory Scientific Committee flagged exactly this in December 2025, naming model uniformity as one of five AI features that significantly amplify systemic risk, alongside concentration, monitoring challenges, overreliance, and speed.
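
The difference between idiosyncratic and correlated model error can be made concrete with a small simulation. When every bank draws its error from the same shared model, sector-wide losses swing roughly √n times harder than when each bank errs independently. A sketch with made-up numbers:

```python
import random
import statistics

def sector_loss_stdev(n_banks, n_trials, shared, seed=1):
    """Standard deviation of aggregate sector loss across simulated
    scenarios. `shared=True` models one foundation model whose error
    hits every bank identically; `shared=False` models independent
    per-bank errors. Units are arbitrary."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        if shared:
            err = rng.gauss(0, 1)  # one model, one error, every bank
            totals.append(err * n_banks)
        else:
            totals.append(sum(rng.gauss(0, 1) for _ in range(n_banks)))
    return statistics.pstdev(totals)

shared = sector_loss_stdev(50, 2000, shared=True)
indep = sector_loss_stdev(50, 2000, shared=False)
print(shared / indep)  # ≈ sqrt(50): correlated errors inflate sector swings
```

Independent errors partially cancel across institutions; a shared model’s error does not — which is why uniformity turns a firm-level modeling problem into a sector-wide exposure.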

ESRB Advisory Scientific Committee — five AI features that amplify systemic risk (December 2025)
Concentration
A small number of AI providers creates single points of failure across the financial system. No individual institution can mitigate the systemic dimension of this exposure.
Model uniformity
Shared foundation models mean shared biases and shared failure modes. Correlated errors across institutions create sector-wide exposures that look nothing like idiosyncratic firm risk.
Monitoring challenges
AI systems are harder to interpret, audit, and stress-test than traditional quantitative models. Supervisors cannot easily verify what the model is actually optimizing for.
Overreliance and excessive trust
As AI handles more decisions automatically, human oversight atrophies. Staff lose the expertise to second-guess the model precisely when second-guessing it most matters.
Speed
AI executes decisions at machine speed, meaning a bad decision propagates across a portfolio before any human oversight loop can intervene. Errors that took weeks to accumulate under manual processes accumulate in milliseconds.
Source: ESRB Advisory Scientific Committee Report No. 16, December 2025
§ 04 / Cybersecurity — AI on the Attack Side

Attackers use AI to find vulnerabilities faster than defenders can patch them. That asymmetry is the IMF’s core warning.

Escrivá’s reference to cybersecurity was not incidental — it was the most immediately concrete of the three AI risks he cited. On May 7, 2026, two days before his Tarragona remarks, the IMF published “Financial Stability Risks Mount as Artificial Intelligence Fuels Cyberattacks,” a research blog post that the institution described as a formal warning to policymakers.

The IMF’s core argument: advanced AI models dramatically reduce the time and cost needed to identify and exploit vulnerabilities in software. Attackers can now operate at machine speed — discovering and targeting weaknesses in widely used systems faster than patching and remediation cycles can respond. In a financial sector built on shared software, shared cloud infrastructure, and shared payment networks, this speed asymmetry is not a firm-level problem. It is a systemic problem. Extreme cyber-incident losses can trigger funding strains, raise solvency concerns at multiple institutions simultaneously, and disrupt financial intermediation at the macro level.

Why Finance Is Particularly Exposed
The financial system’s interconnection is its greatest strength and its greatest vulnerability. Payment networks, settlement systems, clearing houses, and correspondent banking relationships are designed to move money frictionlessly between institutions. That frictionlessness means a cyber-event that compromises one node propagates rapidly to every connected node. AI-powered attackers can now scan an entire sector’s shared software stack simultaneously, rather than probing institutions one at a time. The IMF specifically noted that “closed, industry-specific financial software is harder to target than open-source infrastructure — but these buffers are likely to erode quickly as model training expands, capabilities diffuse, and leaks occur.” That is a timeline problem, not a permanent protection.
§ 05 / What the ECB Is Actually Doing — DORA, Supervisory Priorities, AI Workshops

DORA went live in January 2025. But its AI provisions are still catching up to where the market is.

The ECB’s primary structural response to AI and digital operational risk has been the Digital Operational Resilience Act (DORA), which became applicable across the European Union in January 2025. DORA introduced a mandatory framework for ICT risk management, incident reporting, resilience testing, and — most relevant to Escrivá’s remarks — an oversight framework for critical third-party ICT providers. As of 2025, 19 providers have been formally designated under that framework, covering major cloud and infrastructure vendors.

In the ECB’s supervisory priorities for 2026–2028, published November 2025, AI features under Supervisory Priority 2 (operational resilience and ICT capabilities). The ECB announced it would continue monitoring AI with a more targeted focus on generative AI applications specifically — widening its existing investigation into banks’ AI use to assess prudential materiality and inherent risks. More than 85% of large banks under European supervision already use AI in some form, and that share is accelerating with generative and agentic AI.

Separately, the ECB has been conducting direct AI workshops with supervised banks since 2025, asking institutions for more detail on their AI strategies, governance frameworks, and risk management approaches. In February 2026, ECB Banking Supervision published a speech titled “Technology is neutral, governance is not: AI adoption in the banking sector,” which laid out the regulatory philosophy: the ECB is not trying to slow AI adoption, but it is insisting that the governance structures around adoption keep pace with the capabilities being deployed.

§ 06 / Global Central Bank Posture — Fed, BOE, BIS, IMF

Every major central bank is watching the same risks. None of them has a definitive answer yet.

Global regulatory posture on AI and financial stability — May 2026
ECB / SSM
Active supervisory review
DORA operational since Jan 2025. Generative AI added to 2026-28 supervisory priorities. Direct AI workshops with significant institutions. Escrivá publicly calls for infrastructure reassessment (May 2026).
Federal Reserve
Monitoring with scenario flagging
Spring 2026 Financial Stability Report flags AI concerns alongside geopolitical risk as top systemic threats. Chicago Fed research (2026) quantifies tail risk from bank exposure to AI industry borrowers — $450B in C&I commitments to AI-adjacent sectors.
Bank of England
Active scenario analysis
Financial Stability in Focus report (April 2025) dedicated to AI. BoE running scenario analyses on how AI agents might influence market dynamics, particularly herd behavior during stress. FPC flagged AI-asset price correction as a transmission channel risk.
BIS
Research and standard-setting
Published financial stability implications of AI framework; warns about third-party concentration and model risk. Annual Economic Report 2025 covers AI-driven productivity and risk tradeoffs. Coordinates international supervisory approach.
IMF
Active warning (May 2026)
Published formal blog-post warning on May 7, 2026: AI-fueled cyberattacks mount as a financial stability threat. Recommends robust resilience standards, supervision focused on systemic transmission, and public-private threat intelligence sharing.
ESRB
Systemic risk taxonomy published
Advisory Scientific Committee Report No. 16 (December 2025) identifies five AI features that amplify systemic risk. Recommends competition policy, capital/liquidity adjustments, and enhanced supervision as a three-part response.
§ 07 / What “Infrastructure Review” Actually Means in Practice

“Review” in regulatory language is not a think-piece. It means data requests, examiner dialogues, and potential rule changes.

When ECB officials call for a “review” of financial infrastructure resilience, the practical pathway typically involves several concrete steps. First, supervisory data requests: the ECB asks significant institutions to disclose their AI vendor relationships, third-party dependencies, and the share of critical processes that are now AI-automated. The ECB has already done exactly this — in February 2026, it was reported to have asked a number of individual eurozone lenders for more detail on their lending to AI-adjacent sectors, including data centers.

Second, stress-testing scenarios: the ECB and its supervisory arm can and do run scenario analyses that ask: what happens to a bank’s operational resilience if its primary AI provider is unavailable for 72 hours? What if a model powering automated credit scoring fails silently? These scenarios may eventually become formal supervisory requirements — analogous to existing ICT resilience tests under DORA.

Third, in the medium term, it could mean concentration limits — regulatory caps on how much of a bank’s critical operational infrastructure can depend on any single AI provider, analogous to the single-counterparty exposure limits that already govern lending books. The ESRB’s December 2025 report explicitly recommended exactly this class of prudential adjustment.
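
No such cap exists in any current rulebook, but a concentration limit of this kind would be mechanically simple to check. A hypothetical sketch: map each critical process to its AI provider and flag any provider whose share exceeds an assumed cap — the 25% threshold, vendor names, and process list are all invented for illustration:

```python
from collections import Counter

def provider_concentration(process_providers, cap=0.25):
    """Share of a bank's critical processes per AI provider, flagging
    any provider above a hypothetical concentration cap. The cap value
    is illustrative; no such regulatory limit exists today."""
    counts = Counter(process_providers.values())
    total = sum(counts.values())
    return {p: (n / total, n / total > cap) for p, n in counts.items()}

processes = {
    "credit_scoring": "VendorA",
    "fraud_detection": "VendorA",
    "aml_screening": "VendorB",
    "chat_support": "VendorC",
}
print(provider_concentration(processes))
# VendorA holds 50% of critical processes → flagged; VendorB and VendorC pass
```

The hard part is not the arithmetic — it is defining what counts as a “critical process,” and deciding whether the denominator should be process counts, transaction volumes, or exposure values. That definitional work is what a supervisory review would actually produce.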

The Honest Limitation
What Escrivá did not say — and what no regulator has said yet — is what specifically the new rules will look like, what the timeline is, or who leads the international coordination. AI infrastructure is global; financial regulation is largely jurisdictional. A bank can theoretically shift its AI provider to a non-EU vendor and remain outside the ECB’s direct supervisory reach for that contract. The FSB and BIS are the coordinators for international financial regulatory standards, and neither has produced binding AI-specific capital or concentration rules yet. The current state is: every major regulatory body has identified the risks in detail. None has resolved them in enforceable rules.
§ 08 / The Bottom Line
Why This Matters
Escrivá’s statement on May 9, 2026 is a senior ECB Governing Council member on the record saying the financial system’s AI exposure is now material enough to require an active institutional response — not a future policy agenda item, not a research paper. The same week, the IMF published a formal warning using the phrase “systemic shock.” Together, they mark a shift: AI risk in finance is no longer a theoretical concern discussed at academic conferences. It is in the active work plans of the bodies that actually write the rules and conduct the examinations. The question is no longer whether AI poses systemic financial risk. The question is whether the regulatory infrastructure can build concentration limits, model-risk standards, and cybersecurity resilience requirements fast enough to keep pace with the systems already deployed inside the banks they supervise.
§ 09 / Sources
Last updated: May 9, 2026 · 12:00 PM ET