Vatican at UN Security Council: AI and International Security
Understanding the Vatican's 2025 statement to the UN Security Council on artificial intelligence and international security. Essential for diplomats, military leaders, policymakers, and anyone concerned with AI in warfare and global stability.
Understanding the Statement
What is the 2025 UN Security Council AI statement?
In 2025, Archbishop Paul Gallagher delivered a statement at a UN Security Council open debate on artificial intelligence and international security. As the Vatican's Secretary for Relations with States (equivalent to foreign minister), Archbishop Gallagher presented the Holy See's position on AI's threats to global peace and security, with particular focus on autonomous weapons systems, AI-enabled surveillance and repression, algorithmic warfare, and the need for international governance frameworks. The statement represents the Vatican's diplomatic engagement at the highest level of international security discussions.
Why did the Vatican address the UN Security Council on AI?
The Vatican addressed the Security Council because AI poses a profound threat to international peace and security—the Council's core mandate. Archbishop Gallagher's statement emphasizes that autonomous weapons, AI-powered surveillance enabling repression, algorithmic manipulation of information, and AI arms races threaten global stability as significantly as nuclear weapons once did. As a permanent observer state at the UN with moral authority transcending national interests, the Holy See is uniquely positioned to advocate for international cooperation on AI governance before catastrophic use occurs. The statement builds on Pope Francis's warnings about AI delivered at the 2024 G7 summit.
What makes this statement significant?
The statement is significant because: (1) it represents the Vatican's highest-level diplomatic engagement on AI security issues; (2) the Security Council rarely addresses technology questions, indicating growing recognition of AI's security implications; (3) the Vatican brings moral authority and long-term perspective to debates often dominated by national security interests; (4) it connects AI governance to existing international law and humanitarian principles; (5) it calls for concrete action—binding treaties, verification mechanisms, and enforcement—not just voluntary principles. The statement positions AI weapons control alongside nuclear arms control as essential to global security.
AI and Autonomous Weapons
What is the Vatican's position on autonomous weapons?
Archbishop Gallagher's statement calls for an absolute ban on lethal autonomous weapons systems (LAWS)—weapons that can select and engage targets without meaningful human control. The Vatican argues these systems are inherently immoral because: (1) they delegate life-and-death decisions to machines incapable of moral judgment; (2) they erode human responsibility and accountability for killing; (3) they lower the threshold for conflict by removing human psychological barriers to violence; (4) they threaten to make war "too easy" by eliminating human risk; (5) they violate human dignity by allowing algorithms to determine who lives or dies. This position aligns with Pope Francis's 2024 World Day of Peace message, "Artificial Intelligence and Peace."
Real-World Challenge: The Autonomous Weapons Arms Race
Problem: Multiple nations are developing autonomous weapons systems. Without international agreement, an arms race seems inevitable, with each nation fearing being left behind.
Vatican Principle: Just as the world eventually banned chemical and biological weapons despite initial resistance, autonomous weapons must be prohibited through a binding international treaty before widespread deployment makes control impossible.
🌐 UN Secretary-General's AI Advisory Body Formation (2024)
In October 2023, UN Secretary-General António Guterres established a high-level AI Advisory Body comprising 39 members from government, the private sector, academia, and civil society to provide recommendations on AI's international governance. The body's 2024 interim report "Governing AI for Humanity" identified critical governance gaps: no binding international agreements on military AI, fragmented national regulations creating regulatory arbitrage, lack of mechanisms for cross-border AI incident response, and absence of verification systems for AI compliance. The body recommended establishing an International AI Governance Organization modeled on the IAEA, creating binding protocols for high-risk AI applications, and developing technical standards for AI transparency and accountability. The recommendations nonetheless faced resistance from major powers unwilling to constrain military AI development. The Advisory Body's experience demonstrates both the urgent need for global AI governance and the political obstacles to achieving it. In doing so, it validates the Vatican's call for moral leadership to overcome narrow national interests in establishing effective AI governance frameworks.
⚖️ OECD AI Principles Global Adoption (2019-2024)
The OECD AI Principles, adopted in May 2019 and subsequently endorsed by over 50 countries including all G20 members, represent the first intergovernmental standard on AI but also reveal the limitations of voluntary frameworks. The principles establish values-based guidance including inclusive growth, sustainable development, human-centered values, fairness, transparency, robustness, and accountability. By 2024, 42 countries had developed national AI strategies referencing the OECD principles, and the OECD AI Policy Observatory tracked over 700 AI policy initiatives globally. Implementation varies dramatically, however: while EU countries translated the principles into binding regulations through the AI Act, other nations treated them as aspirational guidelines. Military AI remains explicitly excluded from the framework, and compliance is voluntary with no enforcement mechanisms. The OECD experience shows that soft law can build consensus and norms. The Vatican's call for binding treaties reflects the recognition that voluntary principles alone cannot govern high-stakes AI applications, particularly in security domains, where competitive pressures overwhelm ethical commitments absent enforceable constraints.
Source: OECD AI Policy Observatory, Principles Implementation Report 2024
What about "meaningful human control" over weapons?
The Vatican statement emphasizes that all weapons systems must maintain "meaningful human control"—not just a human in the loop, but genuine human judgment about targeting and engagement decisions. This means: (1) humans must understand what the system will do and why; (2) humans must have sufficient information and time to make informed decisions; (3) humans must be able to override system decisions in real-time; (4) systems must not operate too fast for human comprehension; (5) accountability must rest with identifiable human decision-makers. AI can assist, but never replace, human moral judgment in life-and-death decisions.
How does AI affect cyber warfare and information operations?
According to Archbishop Gallagher, AI enables cyber warfare and information operations at unprecedented scale: automated hacking and infrastructure attacks, AI-generated disinformation campaigns, deepfakes undermining trust in evidence, algorithmic manipulation of public opinion, and targeting of critical systems. These capabilities threaten not just military targets but civilian infrastructure, democratic processes, and social cohesion. The Vatican calls for extending international humanitarian law and norms of warfare to cyber and information domains, prohibiting AI attacks on civilian systems and democratic processes.
What about AI surveillance and repression?
The statement warns that AI-powered surveillance—facial recognition, behavior prediction, social credit systems, and automated repression—threatens international security by enabling authoritarian control and human rights violations. When states use AI to monitor, control, and repress populations, they create instability, drive migration crises, and undermine the international human rights framework essential to peace. The Vatican calls for international restrictions on the export and use of surveillance technologies, particularly systems designed for mass monitoring and social control, connecting to common good principles.
International AI Governance
What international governance framework does the Vatican propose?
Archbishop Gallagher calls for: (1) Binding international treaty prohibiting autonomous weapons, not just voluntary guidelines; (2) Verification mechanisms allowing inspection and compliance monitoring; (3) Enforcement provisions with consequences for violations; (4) Universal participation including all nations and non-state actors; (5) Regular review process adapting to technological change; (6) International AI safety standards for dual-use technologies; (7) Dispute resolution mechanisms. The model is successful arms control treaties—Chemical Weapons Convention, Biological Weapons Convention—adapted to AI's unique challenges.
Why does the Vatican emphasize binding treaties over voluntary principles?
The Vatican argues that voluntary principles, while useful, are insufficient for security threats. When national security and military advantage are at stake, states won't voluntarily forgo dangerous capabilities unless assured competitors are similarly constrained. Binding treaties create: (1) legal obligations with international enforcement; (2) verification requirements providing transparency; (3) consequences for violations deterring non-compliance; (4) level playing field preventing arms races; (5) legitimacy for international intervention when violations occur. The stakes—autonomous weapons, AI-enabled mass destruction—demand legal frameworks, not just good intentions, as emphasized in the Rome Call.
How should existing international law apply to AI?
According to the Vatican statement, existing international humanitarian law fully applies to AI systems: (1) Distinction principle—systems must distinguish combatants from civilians; (2) Proportionality—attacks must not cause excessive civilian harm relative to military advantage; (3) Precaution—all feasible measures must protect civilians; (4) Accountability—individuals must be responsible for violations; (5) Prohibition of weapons causing unnecessary suffering. Many AI systems cannot reliably comply with these requirements, making their military use illegal. New treaties should clarify application and close loopholes, not create exceptions weakening humanitarian protections.
What role should the UN play in AI governance?
The Vatican envisions a central UN role: (1) Security Council oversight of AI threats to peace and security; (2) General Assembly development of international treaties and norms; (3) specialized agencies (IAEA model) for verification and compliance; (4) Human Rights Council monitoring of AI surveillance and repression; (5) International Court of Justice adjudicating disputes; (6) Secretary-General convening stakeholders—governments, companies, civil society—for governance negotiations. The UN's universal membership and existing structures make it the appropriate forum for international AI governance, though new institutions may be needed for technical oversight.
Call to Action
What specific actions does the Vatican call for?
Archbishop Gallagher calls on:
- The UN Security Council to establish a working group on AI security threats and support treaty negotiations
- Member States to commit to meaningful human control in weapons systems and support a binding autonomous weapons ban
- Military and defense establishments to refuse to develop fully autonomous weapons and to implement ethical AI guidelines
- Technology companies to refuse military contracts for autonomous weapons and support international governance
- Scientists and engineers to pledge not to participate in autonomous weapons development
- Civil society to advocate for strong international agreements
Preventing AI security catastrophe requires coordinated action from all stakeholders now, before systems are widely deployed.
How can nations balance security needs with AI governance?
The Vatican argues that strong AI governance enhances rather than undermines security: (1) preventing destabilizing arms races that make all nations less secure; (2) maintaining human judgment and accountability essential to just warfare; (3) preserving international law and norms protecting civilians; (4) building trust through transparency and verification; (5) channeling AI toward genuine security needs—cybersecurity, disaster response, peacekeeping support—rather than destabilizing weapons. True security comes through international cooperation and adherence to humanitarian principles, not through autonomous weapons arms races that threaten humanity. This aligns with the peace and security framework.
📚 Additional Vatican Resources
Where can I find more Vatican documents on this topic?
For deeper understanding from official Vatican sources, explore these documents:
- Pope Leo XIV at UN Summit (2025) - Expanded UN engagement
- Paris AI Summit (2025) - International cooperation
- Antiqua et Nova (2025) - Peace theology and AI
- Rome Call for AI Ethics (2023) - Ethical framework for peace
These documents provide official Vatican perspectives, historical context, and theological foundations for understanding AI ethics from a Catholic perspective.