The proposal to establish an Indian AI Safety Institute reflects the rising importance of AI governance and safety in global and domestic policy discussions. Recent events such as the Quad Leaders’ Summit, the UN Summit of the Future, and India’s leadership roles in the G20 and the Global Partnership on Artificial Intelligence (GPAI) underscore the timeliness of this initiative.
Strategic Context
- Global Leadership: India should leverage its recent leadership roles at the G20 and the Global Partnership on Artificial Intelligence (GPAI) to position itself as a unifying voice in AI governance.
- Global Digital Compact: The Summit of the Future resulted in the Global Digital Compact, emphasizing multi-stakeholder collaboration, human-centric oversight, and inclusive participation from developing countries as key pillars for AI governance and safety.
- Next Steps: The UN will initiate a Global Dialogue on AI Governance, making this an opportune moment for India to establish an AI Safety Institute that engages with the Bletchley Process on AI Safety.
Global Trends in AI Safety Institutes
- Bletchley Process
- Initiated at the U.K. AI Safety Summit (November 2023) and expanded at the AI Seoul Summit in South Korea (May 2024).
- Aims to establish an international network of AI Safety Institutes to address risks from advanced AI technologies.
- The next summit, planned for France, will continue this collaborative trajectory.
- United States and United Kingdom
- Both countries were early adopters, setting up AI Safety Institutes to manage risks from frontier AI models.
- MoUs between the U.S. and the U.K. commit both institutes to:
- Share knowledge, resources, and expertise.
- Collaborate with AI labs for early access to large foundation models.
- Implement mechanisms to share technical inputs with labs before public rollout.
- Focus on cybersecurity, infrastructure security, biosecurity, and national security threats.
- China: Established an Algorithm Registry, aiming to monitor and regulate algorithms for safety and alignment.
- European Union: Proposed an AI Office under its regulatory framework, combining oversight with compliance requirements.
Role and Functions of Safety Institutes
- Serve as technical government institutions, not regulators.
- Facilitate proactive information sharing and risk assessments.
- Promote external third-party testing and mitigation strategies for AI risks.
- Focus on transforming AI governance into an evidence-based discipline.
Key Objectives for India’s AI Safety Institute
- Operate as a technical research, testing, and standardization agency.
- Be independent of regulatory and enforcement authorities.
- Integrate into the Bletchley network to leverage global expertise and resources.
Key Recommendations for India
- Lessons from Previous Initiatives: Concerns were raised about MeitY’s AI Advisory of March 2024, which required government approval prior to the public rollout of experimental AI systems.
- Critics questioned whether the Indian government had the capability to adequately assess the safety of novel AI deployments.
- The advisory’s blanket treatment of bias and discrimination, and its one-size-fits-all approach, suggested it was not grounded in technical evidence.
- Regulatory Caution: India should avoid prescriptive regulatory controls of the kind proposed in the European Union (EU) and China, which could stifle proactive information sharing between businesses and government.
- While the value of specialized agencies such as China’s Algorithm Registry or the EU’s AI Office is recognized, India should separate institution building from regulation making to maximize effectiveness.
Domestic and Global Focus Areas
- Domestic Priorities: Address risks related to bias, discrimination, gender-based harms, social exclusion, labour markets, data privacy, and surveillance.
- Build institutional capacity for harm identification, risk assessment, and mitigation strategies.
- Global Engagement: Collaborate with international safety institutes and stakeholders.
- Amplify global majority perspectives on human-centric AI safety.
Potential Impact
If successfully implemented, India could emerge as a global leader in forward-thinking AI governance by:
- Championing diverse perspectives on risks associated with AI technologies.
- Deepening global dialogue around harm identification, risk mitigation strategies, red-teaming efforts, and standardization practices.
- Demonstrating a commitment to evidence-based policy solutions that are globally compatible.