When money thinks for itself
Manmohan Parkash | Saturday, 9 May 2026
The next financial crisis may not begin with a housing bubble or a sovereign default. It may begin with a line of code: executed flawlessly, at scale, and entirely without human hesitation.
Agentic AI, systems that do not merely assist but act, marks a decisive break from the past. For decades, finance has digitised, automated and accelerated. But it has remained, at its core, a human system: analysts interpret, traders decide, risk managers approve. Even the most advanced algorithms have largely operated within human-defined boundaries.
That boundary is dissolving.
Agentic AI can monitor markets, rebalance portfolios, assess creditworthiness, initiate payments and adapt strategies in real time, without waiting for human instruction. It compresses the traditional chain of financial decision-making into a single, continuous loop: observe, decide, execute.
This is a structural shift: from human-centred finance to machine-coordinated finance.
Banks will not disappear in this transition. But they will change in ways that make today's institutions look almost quaint.
Instead of layered departments (compliance, underwriting, operations), financial institutions are becoming networks of interacting AI agents. These systems can orchestrate complex workflows end-to-end, from onboarding a customer to assessing risk to executing transactions.
The gains are undeniable. Costs fall. Speed increases. Errors decline. A single employee may soon supervise systems that perform the work of dozens.
But history offers a cautionary note: efficiency in finance often comes at the expense of resilience. Systems optimised for speed and precision can become brittle when confronted with the unexpected.
Agentic AI magnifies this risk. It does not merely accelerate decisions; it multiplies them, across markets, institutions and borders, all at once.
Financial stability has traditionally been about managing identifiable risks: credit risk, market risk, liquidity risk. Agentic AI introduces something less familiar and far harder to contain: behavioural risk at the system level.
What happens when thousands of autonomous agents, trained on similar data and optimising for similar objectives, react simultaneously to the same signal? What happens when a small modelling error is executed not once, but millions of times in rapid succession?
We have glimpsed this dynamic before in algorithmic trading "flash crashes". But agentic AI extends the logic far beyond trading floors-into lending, payments, insurance and asset management.
The danger is not a single faulty model. It is the emergent behaviour of many models interacting, amplifying each other's decisions in ways no individual institution fully understands.
Modern finance rests on a simple premise: decisions can be traced, explained and, if necessary, challenged. Agentic AI complicates all three.
These systems are adaptive and often opaque. They do not follow static rules; they evolve. When an autonomous agent makes a flawed lending decision, triggers a cascade of trades or misprices risk, responsibility becomes diffuse. Was it the institution that deployed the system, the engineers who designed it, or the model itself?
Without clear answers, accountability, the bedrock of financial trust, begins to erode.
Like previous technological revolutions, agentic AI will not affect all economies equally.
Financial centres with deep data infrastructure and AI capabilities will move first, capturing efficiency gains and competitive advantages. Others may find themselves forced to adopt systems they neither fully control nor fully understand.
For emerging markets, the dilemma is acute: adopt too slowly and risk irrelevance; adopt too quickly and risk instability.
At the same time, autonomous financial agents could bypass traditional institutions altogether, particularly in regions where trust in banks is already fragile. The result may not just be disruption within the system, but a reconfiguration of the system itself.
The temptation for policymakers will be to wait: to study the technology, observe its impact, and regulate once risks become clear. That approach will fail.
By the time systemic risks are visible, they may already be embedded in the architecture of global finance. Instead, governments should act on three priorities.
First, regulate systems, not just institutions. Oversight must extend to networks of autonomous agents, with requirements for auditability, transparency and real-time monitoring. Supervising machine-speed finance with human-speed tools is no longer viable.
Second, impose guardrails on autonomy. Not every financial decision should be delegated. Clear limits are needed on where AI can act independently, alongside mechanisms (digital circuit breakers) that can halt or override systems behaving abnormally.
Third, invest in capability. Regulators and institutions alike must develop expertise in AI risk, governance and system behaviour. Without this, oversight will lag permanently behind innovation.
Beyond these three priorities, coordination will be essential. Financial markets are global; agentic systems will be too. Fragmented regulation risks creating weak links that become points of systemic failure.
Agentic AI will spread quietly, embedded in workflows, scaling through incremental adoption until the system itself begins to change.
At some point, perhaps sooner than expected, the balance will tip. Financial markets will no longer be primarily shaped by human decisions, but by the interactions of autonomous systems.
Finance has always been about managing risk. What is new is that the system itself is becoming an active, adaptive participant in that risk.
When money begins to think for itself, the question is no longer how fast or how efficient the system can become. It is whether we can still understand, and control, the forces we have set in motion.
Manmohan Parkash is a former Senior Advisor in the Office of the President and former Deputy Director General for South Asia at the Asian Development Bank (ADB). Views expressed are personal. manmohanparkash@gmail.com