Taking stock: The Impact of the India AI Impact Summit 2026
India’s hosting of the AI Impact Summit 2026 was an ambitious undertaking. With 600,000 attendees and 92 signatories to the New Delhi Declaration, the Summit was a showcase of a Global South country taking a leading role in shaping the AI governance agenda. The Summit’s official framing centered on infrastructure, compute, and equitable access to AI. What emerged across the week, and across FPF’s engagements in New Delhi before and during the Summit, was a global AI governance conversation defined by the tension between ambitious multilateral declarations and the slower, harder work of building the institutions and tools needed to make them real.
Now that the dust has settled, this blog post takes stock of the Summit's impact on the global AI governance conversation, drawing takeaways from FPF's participation in events across the Pre-Summit and the Summit itself. Three threads emerged from our engagements in New Delhi and continue to develop: (1) the growing role of sandboxes as governance infrastructure; (2) whether global AI policy conversations can hold together in the face of geopolitical divergence; and (3) the sharpening focus on children's safety and agentic AI as specific governance challenges that are moving faster than the frameworks designed to address them.
Theme 1: For AI governance to scale, it needs the right testing environments, and sandboxes are emerging as an answer
FPF participated in two events tied to India’s AI Impact Summit 2026, both co-organized with Nasscom. On 20 January 2026, FPF and Nasscom co-hosted a Pre-Summit Event in New Delhi titled “Building Safe Spaces for AI Impact: Regulatory and Private Sandboxes,” bringing together senior government leaders, regulators, global industry representatives, and policy experts. From 16–21 February 2026, Jules Polonetsky, CEO of FPF, Josh Lee Kok Thong, Managing Director for APAC, and Bilal Mohamed, Policy Manager for India, represented FPF at the Summit itself, co-organizing a high-level panel with Nasscom, hosting an FPF Salon Dinner on 17 February, and participating in bilateral engagements throughout the week.

The FPF delegation at the India AI Impact Summit 2026. From L-R: Josh Lee Kok Thong, Managing Director (APAC); Jules Polonetsky, CEO; Bilal Mohamed, Policy Manager for India. Photo credit: Josh Lee
One of the clearest messages from the Pre-Summit Event was that the global AI governance conversation has moved decisively beyond the question of what principles should govern AI toward the more difficult question of how to build the regulatory infrastructure needed to put those principles into practice. Sandboxes, whether in regulatory or private organizational form, are emerging as one possible lever for achieving this.
The Pre-Summit Event’s first panel, moderated by Josh, brought together regulators from India, Singapore, and Brazil alongside industry experts to examine the evolution of regulatory sandboxing. Two key insights emerged:
- First, sandboxes have seen global uptake as a mechanism for translating governance principles into practice. Over 200 regulatory sandboxes are now in operation globally, 70 of which are focused on AI. More importantly, their function is changing. Where early sandboxes primarily granted permission for testing, well-designed sandboxes today generate the real-world evidence regulators need to write better-calibrated rules. Singapore’s Infocomm Media Development Authority (IMDA) has pioneered a phased methodology moving from case studies to guidelines to formal standards, offering a model of prospective enforcement grounded in observed technical reality.
- Second, sandboxes are becoming interoperable by necessity. AI-driven products cut across sectors in ways that engage multiple regulators simultaneously. The Reserve Bank of India’s Interoperable Regulatory Sandbox mechanism, introduced in 2022, was designed to test products that trigger obligations across jurisdictional lines. Similarly, Brazil’s Agência Nacional de Proteção de Dados (ANPD) deliberately involves other regulators, technical experts, and civil society from the outset, recognizing that the questions sandboxes address are rarely confined to a single institution’s mandate.
The second panel examined how organizations are building private sandboxes for AI governance. The discussion, featuring representatives from Coforge, PayPal, Salesforce, Palo Alto Networks, and the European Data Protection Supervisor (EDPS) AI Unit, highlighted two practical insights:
- First, private sandboxes help organizations build trust with both consumers and regulators. Sudheer described Salesforce’s “Customer Zero” approach: before any AI product reaches customers, it is deployed internally across Salesforce’s 80,000-person workforce. The Salesforce philosophy of “build it, use it, fix it, scale it, and then sell it” surfaces real-world failures that laboratory testing might miss, and allows governance guardrails to be refined before external rollout. Sam described how Palo Alto Networks used isolated “dirty lab” environments to subject models to curated malicious prompts, simulating prompt injection, data leakage, and adversarial manipulation, in order to establish a behavioral baseline before deployment. For companies navigating frameworks like India’s Digital Personal Data Protection Act, 2023 (DPDP Act), internal sandboxes serve as a signal of due diligence to regulators, demonstrating structured processes throughout the product lifecycle.
- Second, unlike generative AI systems (whose failure modes are at least probabilistically characterized), agentic systems take autonomous actions, which means sandboxing must simulate intent rather than just behavior. More broadly, governance frameworks must be built to outlast the specific technologies they regulate. As Christian Lau of Dynamo AI described during the first panel, organizations must “separate the governance layer from the tech layer,” building accountability mechanisms that remain intact as models evolve.
Theme 2: Geopolitical divergence is exposing the limits of international AI governance
As the first Global South host of the AI Summits, India played an important bridging role, keeping the focus on how AI can drive economic development across Africa, South America, and Asia. The adoption of the New Delhi Declaration, signed by 92 countries and international organizations – including the US, China, and G7 nations – reflected genuine multilateral ambition, even as its voluntary and non-binding character also revealed the limits of that ambition.
The Summit provided a platform for different philosophies on AI governance and oversight to be articulated, with geopolitics in the backdrop. Michael Kratsios, Director of the White House Office of Science and Technology Policy, argued that AI policy must remain national and local, and that international fora risk creating centralized oversight that could stifle innovation under the guise of safety. Implementing this vision, the US outlined a set of parallel initiatives: an American AI Exports Program, new development finance instruments, a Tech Corps initiative embedding US technical experts with partner governments, and an AI Agent Standards Initiative through the Department of Commerce.
On the other hand, the President of France, Emmanuel Macron, who hosted the previous edition of the AI Summit in Paris, promoted the EU AI Act in his speech as evidence that responsible and competitive AI are not in opposition, and argued for an approach that treats oversight as foundational to AI development rather than an obstacle to it.
India, as host, articulated its own approach. During the fireside chat concluding the Pre-Summit Event, S. Krishnan, Secretary, Ministry of Electronics and Information Technology (MeitY), outlined a philosophy of regulation “only when necessary,” explaining that India’s constitutional framework allows sectoral regulators such as the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI) to oversee AI within their respective domains, rather than relying on a single, prescriptive national law. This middle path relies heavily on the kind of regulatory infrastructure discussed in Theme 1.

FPF’s Managing Director for APAC Josh Lee Kok Thong engaging MeitY Secretary S. Krishnan during the fireside chat at the FPF-Nasscom Pre-Summit Event. Photo credit: Nasscom
FPF’s own Summit panel, titled “From Policy to Practice: Governing AI for Global Impact”, co-organized with Nasscom and moderated by Ashish Aggarwal (Nasscom), brought this tension into sharper relief. The panel featured Carina Prunkl (INRIA), Jules Polonetsky (FPF), Gail Kent (Google), Ivana Bartoletti (Wipro), and Wifredo Fernandez (xAI). Three insights from the discussion stood out.
First, a critical question for the adoption of responsible AI practices is whether emerging baselines are clear and accessible enough to prevent a race to the bottom on safety. As Jules Polonetsky noted, weak or expensive compliance infrastructure creates competitive pressure to cut corners, a particular risk for startups and smaller players.
Second, governance frameworks must be built for specific contexts rather than transplanted from elsewhere. As Gail Kent noted, Indian users rely heavily on voice, video, and image-based inputs rather than text, which fundamentally changes the safety and privacy challenges that need local attention. Third, as Ivana Bartoletti argued, India’s “techno-legal” approach positions it to be an architect of governance solutions rather than a recipient of frameworks designed elsewhere.
These observations point to something important that focusing on divergent regulatory philosophies can obscure. The real risk in global AI governance may lie less in countries choosing different regulatory models, and more in those models proving so ineffective, or so inaccessible to smaller actors, that a shared floor on safety ceases to exist.

A packed house at FPF’s and Nasscom’s official session at the India AI Impact Summit. Photo credit: Josh Lee
Theme 3: There is a cross-border consensus to regulate for children’s safety, but approaches vary
Despite differences in AI regulatory philosophies exposed during the Summit, child safety emerged as a point of cross-border consensus. Prime Minister of India, Narendra Modi, called for AI to be child-safe and family-guided, and for mandatory authenticity labels on AI-generated content. President Macron urged India to join a coalition restricting social media access for children.
Prime Minister Modi’s remarks were also grounded in a domestic regulatory development that had unfolded days before the Summit. On 10 February 2026, MeitY notified the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, introducing India’s first formal framework for synthetically generated content. The amendments require intermediaries to label AI-generated content, block the creation and dissemination of child sexual abuse material and non-consensual intimate imagery, and comply with a three-hour takedown window for prohibited content.
In India, the momentum has not been limited to the federal government. On 6 March 2026, the state government of Karnataka announced in its 2026–27 State Budget a proposed ban on social media use for children under 16, citing concerns over digital addiction, mental health, and declining academic performance. On the same day, the Chief Minister of Andhra Pradesh, Chandrababu Naidu, announced that the state would implement a ban on social media for children under 13 within 90 days. At the federal level, the DPDP Act already requires parental consent for the processing of personal data of children below the age of 18.
India’s actions sit within a broader global trend. In July 2025, the EU adopted guidelines on the protection of minors under the DSA; Australia implemented a social media age ban for under-16s in December 2025; and Singapore’s IMDA introduced age assurance requirements for app stores. In the weeks since the Summit, this regulatory momentum has accelerated. The White House’s National Policy Framework for AI placed children’s safety at the center of its legislative recommendations. Dozens of chatbot safety bills are under consideration in state legislatures across the US and in the US Congress. In the UK, Prime Minister Keir Starmer announced that AI chatbots will be brought under the Online Safety Act. The World Economic Forum’s Global Risks Report 2026 ranked online harms among the top risks of the next decade.
Taken together, this activity signals that child safety in the age of AI has become the rare governance issue that commands cross-jurisdictional political consensus, even as the jurisdictions diverge on almost every other dimension of AI oversight. The harder question is whether frameworks across jurisdictions, which share the same underlying concerns but differ in their approaches to age assurance, parental consent, and platform liability, can converge enough to hold platforms to consistent and effective standards. It is a question that India, with its large minor population and newly enacted synthetic media rules, has a significant stake in helping to answer.
Conclusion
The lively debates at the Summit showed that AI governance approaches will be shaped by the economic, political, and legal contexts in which different nations operate. The real question, as the FPF-Nasscom panel highlighted, is whether enough common ground can be built to prevent a race to the bottom on safety and responsible AI.
India’s hosting of the Summit was an important signal that this work is genuinely global in its participants and ambitions. The governance gaps that came into focus in New Delhi, from agentic AI accountability to the protection of children in AI-mediated spaces, to the question of whether voluntary multilateral declarations can be turned into durable commitments, represent the agenda for the conversations ahead.