The New Delhi Pivot
What AI means for development
MDB jobs disappear fast. Are you tracking the latest openings?
Subscribe to the Premium Plan and never miss an opportunity:
Get Staff roles on Mondays
Get Consultant roles on Fridays
Grab a free trial here to see what you’ve been missing (no payment required).
In February 2026, a set of ideas that had lived mostly in policy papers and conference panels hardened into something operational: artificial intelligence as infrastructure. The shift matters because it changes what “development” looks like in practice, what multilateral institutions finance, and what skills the next generation of development professionals must bring to the table.
The hall at Bharat Mandapam was built for spectacle, but the mood that week carried something more demanding than ceremony. Delegations filed in with the familiar choreography of global summits, translators waiting behind glass, staff moving with clipboards and practiced urgency. Outside, New Delhi carried on as it always does: traffic, dust, heat stored in the concrete. Inside, the language of the moment was already set. It was clear this was a summit about who would shape the change, who would pay for it, and who would live with the consequences.
By the time the India AI Impact Summit closed, the headline was diplomatic: the New Delhi Declaration on AI Impact, endorsed by a broad coalition that included major powers, a rare alignment in a fractured geopolitical climate. But the real story sat beneath the signatures. India had reframed the global argument, moving it from a narrow preoccupation with safety and frontier capability towards a more direct question: what does AI do for people who need schools, clinics, jobs, electricity, and credible public services?
The declaration’s organising phrase was Sanskrit, delivered with the confidence of a host setting the terms of debate: “Sarvajan Hitaya, Sarvajan Sukhaya”, welfare for all, happiness for all. It sounded like a moral statement. It was also a strategic one.
Moments of technological change are rarely decided by the technology alone. They are decided by distribution, institutions, and incentives. New Delhi became a hinge in the AI story because it treated AI as public infrastructure, the way earlier generations treated roads, power grids, and ports. The drama comes from the obvious tension: infrastructure concentrates power when it is scarce, and spreads power when it is made accessible.
India’s officials spoke as if they were building a new grammar for global cooperation. The declaration was structured around seven “chakras”, pillars that tried to turn sentiment into an operating plan: human capital, inclusion, trusted systems, resilience and energy efficiency, AI for science, democratised resources, and economic growth linked to social good. Lists like this can read as diplomatic wallpaper. In the room, they landed as a set of instructions for financiers and civil servants.
The summit’s deeper wager was simple to state and difficult to execute: AI could become a global common good, rather than a trophy asset guarded by a few companies and countries. That wager immediately ran into the hard realities of physics and money. AI runs on compute, compute runs on energy, and energy runs on grids that many countries still struggle to make reliable. Whoever controls the stack controls the pace of adoption.
That is why, in the middle of speeches about inclusion, the summit also felt like a marketplace. Investment commitments were announced in the language of scale and inevitability, with figures large enough to signal that the race for infrastructure had already started. India’s IT minister, Ashwini Vaishnaw, described a surge of pledged investment in AI-related infrastructure that approached the high hundreds of billions of dollars. The number functioned as a warning and an invitation. The warning was that AI infrastructure would be built with or without development institutions. The invitation was that multilateral finance could shape where it went, how it was powered, and who benefited.
For multilateral development banks, the implication was uncomfortable. Their comparative advantage has always been the patient work of turning public purpose into financed projects: feasibility, safeguards, procurement, governance, measurement. AI threatens to accelerate everything around them, including the gap between what can be built and what can be governed. The summit’s framing made the challenge explicit: many of the people who might gain most from AI are still excluded from the digital basics. In 2025, the ITU estimated that 2.2 billion people remained offline. And even among those online, access to generative AI has skewed heavily towards richer contexts. A World Bank analysis of generative AI traffic found low-income countries accounted for less than one percent of GenAI traffic.
Those numbers are more than context. They are the plot.
One scene, recounted again and again in side conversations that week, captured the new reality. A senior technology executive, Brad Smith of Microsoft, warned that infrastructure alone would not carry the day. Training and skills would decide whether AI widened inequality or narrowed it. It was a practical point delivered into a room full of strategic rhetoric: data centres do not teach children to read, and large models do not automatically translate into state capacity.
That tension leads directly to a concept that kept surfacing in the summit’s orbit: “small AI”. The term is easy to misunderstand. It does not mean trivial ambition. It means systems designed to work within constraints, to run on modest devices, to function with intermittent connectivity, to fit local languages and workflows, and to deliver services without requiring a frontier-scale model in the cloud. In development terms, it resembles what mobile banking did for finance: it skipped a step in the infrastructure ladder, then rewired behaviour at scale.
The attraction for institutions like the World Bank and ADB is obvious. A bank can fund a national health information system, a digital identity layer, or a fraud detection pipeline. A bank can also fund governance, standards, and interoperability. If AI is becoming a layer inside all those systems, then development finance is being pulled into the AI stack whether it chooses to be or not. The World Development Report 2026, focused on AI as a general-purpose technology, frames the stakes as institutional choices: how countries govern AI, adopt it responsibly, and capture benefits without compounding risk.
This is where the story sharpens, because the consequences are not evenly distributed across careers.
The summit’s conversation about jobs carried a particular edge. In public, speakers returned to familiar assurances: technology changes work, societies adapt, new roles emerge. In private, many participants circled a more immediate problem: early-career rungs are thinning. Entry-level tasks in research, analysis, drafting, and basic data work have become easier to automate, which changes how organisations hire and train. Even the World Bank’s own framing points in the same direction, citing evidence that AI can automate entry-level tasks, with knock-on effects for job postings and wage offers in non-AI roles.
For someone aiming for a career in multilateral development, the implication is specific. The old path often started with aggregation: summarising literature, cleaning datasets, building tables, drafting project notes. AI can do much of that quickly. The value shifts to judgement: verifying claims, handling trade-offs, navigating ethics and safeguards, designing systems that survive contact with reality, and explaining choices to governments and communities.
The summit tried to meet that challenge with new frameworks designed to travel well across borders. The most prominent was “MANAV”, presented by India as a human-centric approach to AI governance: Moral and ethical systems, Accountable governance, National sovereignty, Accessible and inclusive, Valid and legitimate. Acronyms can be gimmicks. This one served a clearer purpose. It offered a checklist for officials and financiers who need to say yes or no to projects. It also made a claim about sovereignty: countries should maintain control over their data and infrastructure, even while using best-in-class tools.
Sovereignty, however, comes with bills. If a country wants meaningful AI capability, it needs compute, data governance, energy, connectivity, and talent. It needs standards that allow systems to talk to each other. It needs procurement that can buy services without handing over the state’s nervous system to the first vendor that shows up with a demo. It needs public trust, especially as deepfakes and synthetic media become cheap to produce and hard to detect.
In that context, one announcement drew particular attention because it blended geopolitics with deployment. The United States unveiled the U.S. Tech Corps, framed as a Peace Corps-style initiative designed to send volunteer technical talent to partner governments to help with last-mile implementation of AI applications for public services. On its face, it sounded like capacity building. In strategic terms, it was also a bid to shape the standards and platforms that become default across developing countries.
This is the kind of detail that makes a summit feel like a turning point rather than a talk shop. When volunteer technologists are embedded in ministries, the argument is no longer theoretical. It becomes operational: what data is collected, where it is stored, how models are audited, who has access, what happens when a system fails.
For development institutions, the scene in New Delhi lands as a mandate to evolve. The declaration’s pillars, the MANAV checklist, and the push towards democratised compute all point towards a world where AI is evaluated like infrastructure: as an input to growth, a determinant of state capacity, and a potential amplifier of inequality. The old project categories remain, but their internal mechanics change. A health project becomes a data governance project. An education project becomes a language and assessment project. A social protection project becomes an identity and fraud project. Each one carries new risks and new opportunities.
In the end, the New Delhi Pivot is less about a single declaration than about a shift in who gets to define the problem. Safety remains important, and the summit did not pretend otherwise. But the centre of gravity moved towards impact, access, and sovereignty, themes that resonate across the Global South because they map onto lived constraints: unreliable power, limited budgets, thin skills pipelines, and institutions asked to do more with less.
A week after the summit, the corridors of the multilateral system still looked the same: mission schedules, procurement deadlines, project documents. Yet something subtle had changed. AI was no longer a specialist topic, delegated to a digital team or a research unit. It had been recast as part of the development stack itself.
There is, however, a lingering truth that gets left behind: when a technology becomes infrastructure, it stops being optional. The only remaining choice is whether institutions shape it early, or spend a decade reacting to systems they did not design.
Make sure you subscribe to MDB Jobs to get the latest vacancies delivered straight to your inbox each Monday, and consultant positions each Friday.