Rome Call for AI Ethics (2020)
Understanding the Vatican's foundational Rome Call for AI Ethics signed in 2020. Essential for tech leaders, policymakers, ethicists, and anyone implementing ethical AI frameworks in organizations.
📋 Table of Contents
- Understanding the Rome Call
- The Six Principles
- Implementation & Impact
- Signatories & Commitment
- Additional Vatican Resources
Understanding the Rome Call
What is the Rome Call for AI Ethics?
The Rome Call for AI Ethics is a foundational document signed in February 2020 at the Vatican, establishing six universal principles for ethical artificial intelligence development. The document was signed by representatives from Microsoft, IBM, the United Nations Food and Agriculture Organization (FAO), the Italian government, and the Pontifical Academy for Life. The Rome Call represents an unprecedented collaboration between major technology companies, governments, international organizations, and religious leaders to establish shared ethical principles for AI that transcend cultural and religious boundaries. It holds up human dignity as the foundation of AI development, safeguarded through six principles: transparency, inclusion, accountability, impartiality, reliability, and security and privacy.
Why did the Vatican create the Rome Call?
The Vatican created the Rome Call because AI development was proceeding rapidly without adequate ethical frameworks ensuring technology serves human dignity and the common good. The Pontifical Academy for Life recognized that AI poses unprecedented ethical challenges—autonomous weapons, algorithmic bias, privacy violations, job displacement, manipulation of behavior—requiring moral guidance beyond legal compliance or corporate self-regulation. The Vatican brought unique convening power as a neutral institution with moral authority transcending national interests, able to bring together competitors and adversaries around shared human values. The goal was establishing universal ethical principles before harmful AI practices became entrenched, similar to how international humanitarian law developed for warfare. This connects to broader Vatican teaching on AI and peace.
What makes the Rome Call unique?
The Rome Call is unique because it achieved what seemed impossible: major tech companies, governments, and religious leaders agreeing to shared ethical principles for AI. Previous AI ethics efforts tended to be (1) corporate codes lacking external accountability, (2) government regulations limited to specific jurisdictions, or (3) academic frameworks lacking practical implementation. The Rome Call combined moral authority, corporate commitment, and practical application. It is unique in being explicitly grounded in human dignity as the ultimate value; universal rather than culturally specific; voluntary but publicly committed; focused on principles guiding design decisions, not just avoiding harms; and a vehicle for ongoing dialogue between the tech industry and moral traditions. The document represents recognition that AI ethics requires collaboration across sectors.
The Six Principles
What are the six principles of the Rome Call?
The six principles are: (1) Transparency—AI systems must be explainable and understandable, not black boxes; (2) Inclusion—AI must not create or reinforce discrimination, but serve all people including marginalized groups; (3) Accountability—humans must remain responsible for AI decisions and their consequences; (4) Impartiality—AI must not create or amplify biases against individuals or groups; (5) Reliability—AI systems must work consistently, safely, and as intended; (6) Security and Privacy—AI must protect users' data and dignity. These principles are meant to be universal, applying across cultures, religions, and political systems. They prioritize human dignity and the common good over efficiency or profit.
Real-World Application: Hiring Algorithms
Challenge: AI hiring tools must screen thousands of applicants efficiently while avoiding discrimination.
Rome Call Principles Applied: Transparency requires explaining why candidates are rejected; Inclusion means ensuring the system doesn't discriminate by race, gender, or age; Accountability means humans review AI recommendations; Impartiality requires testing for bias; Reliability means consistent performance; Privacy protects candidate data.
Source: Reuters, "Amazon scraps secret AI recruiting tool that showed bias against women," October 2018
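As an illustration of the bias testing this example calls for, here is a minimal sketch in Python of the "four-fifths rule" heuristic from US employment-discrimination guidance. The data, column names, and threshold are hypothetical placeholders, not part of the Rome Call itself:

```python
# A minimal sketch of a disparate-impact check for a hiring screen,
# using the "four-fifths rule" heuristic from US EEOC guidance.
# Column names ("selected", "gender") are hypothetical placeholders.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate (fraction of applicants advanced) per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def four_fifths_check(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag disparate impact if any group's rate is below 80% of the highest rate."""
    return (rates.min() / rates.max()) >= threshold

df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "selected": [1,    0,   0,   1,   1,   0,   1,   1],
})
rates = selection_rates(df, "gender", "selected")
print(rates)
print("Passes four-fifths rule:", four_fifths_check(rates))
```

A check like this is only a first screen; a failed ratio triggers the human review and redesign the Rome Call's accountability principle requires.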
How does "transparency" work in practice?
Transparency means users affected by AI decisions can understand: (1) that AI is being used, not just human judgment; (2) what data the system considers; (3) how the system makes decisions; (4) why specific outcomes occurred; (5) what recourse exists if decisions are wrong. This doesn't require revealing proprietary algorithms, but does require meaningful explanation. For example, if AI denies someone a loan, they should understand what factors influenced the decision and how to improve their application. Transparency prevents AI from being used to obscure responsibility or shield decisions from scrutiny. It ensures AI serves human dignity by treating people as subjects deserving understanding, not objects to be processed, as emphasized in common good teaching.
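To make the loan example concrete, here is a minimal sketch of per-decision "reason codes" for a simple linear model. The feature names and training data are hypothetical, and attributing the score to individual features this way is a simplification of real explainability methods; it illustrates the idea, not a production adverse-action notice:

```python
# A minimal sketch of per-decision "reason codes" for a linear credit model.
# Feature names and training data are hypothetical; real adverse-action
# notices carry additional legal requirements this does not cover.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments", "credit_age_years"]

# Toy training data (hypothetical applicants; 1 = approved).
X = np.array([[60, 0.2, 0, 10], [30, 0.6, 3, 2], [80, 0.1, 0, 15], [25, 0.7, 5, 1]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

def reason_codes(x: np.ndarray, top_k: int = 2) -> list[str]:
    """Return the features that pushed this applicant's score down the most."""
    contributions = model.coef_[0] * x        # per-feature contribution to the logit
    worst = np.argsort(contributions)[:top_k]  # most negative contributions first
    return [feature_names[i] for i in worst]

applicant = np.array([28, 0.65, 4, 2])
print("Decision:", "approve" if model.predict([applicant])[0] else "deny")
print("Main factors against approval:", reason_codes(applicant))
```

Note that nothing proprietary is disclosed here: the applicant sees which factors hurt them and, by implication, how to improve, which is the kind of meaningful explanation the principle demands.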
What does "inclusion" mean for AI development?
Inclusion requires ensuring AI benefits all people, especially the vulnerable and marginalized, rather than only serving the wealthy and connected. This means: (1) diverse representation in development teams so systems work for everyone; (2) testing AI across different populations, not just privileged groups; (3) ensuring access to AI benefits isn't limited by wealth or geography; (4) designing systems that work for people with disabilities, limited digital literacy, or non-dominant languages; (5) preventing AI from amplifying existing discrimination. Inclusion challenges the assumption that AI should serve only profitable demographics, insisting technology serve the common good including those often ignored by market forces. This connects to protecting vulnerable populations.
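Point (2), testing across populations, is the most directly codifiable of these. A minimal library-free sketch of disaggregated evaluation, with hypothetical data and group labels, might look like this:

```python
# A minimal sketch of disaggregated evaluation: reporting a model's accuracy
# separately for each population group rather than as one overall number.
# The data and group labels here are hypothetical placeholders.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1,   0,   1,   1,   0,   0],
    "pred":  [1,   0,   0,   1,   1,   0],
})

per_group_accuracy = (
    results.assign(correct=results["label"] == results["pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
# A large gap between groups signals the system works well for some
# populations and poorly for others, violating the inclusion principle.
```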
Why is "accountability" emphasized as a principle?
Accountability means humans remain responsible for AI decisions and cannot hide behind "the algorithm decided" when harmful outcomes occur. This requires: (1) identifiable persons responsible for AI systems and their impacts; (2) meaningful human oversight of consequential decisions; (3) mechanisms for redress when AI causes harm; (4) consequences for those who deploy harmful systems; (5) transparency enabling accountability. Without accountability, AI becomes a tool for evading responsibility—blaming the machine rather than accepting human moral agency. The Rome Call insists humans must always answer for technology's impacts, ensuring AI serves rather than replaces human moral judgment. This is especially critical for autonomous weapons and high-stakes decisions.
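A minimal sketch of what points (1), (2), and (5) can look like in code: a decision wrapper that names an accountable owner, logs every decision, and routes low-confidence cases to a human reviewer. The threshold, owner field, and model score below are hypothetical:

```python
# A minimal sketch of an accountability wrapper: every consequential AI
# decision is logged with a named responsible owner, and low-confidence
# decisions are routed to a human reviewer instead of auto-executing.
# The model score, threshold, and owner fields are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

RESPONSIBLE_OWNER = "jane.doe@example.com"  # an identifiable accountable person
REVIEW_THRESHOLD = 0.9                      # below this, a human must decide

def decide(case_id: str, score: float) -> str:
    outcome = "approve" if score >= REVIEW_THRESHOLD else "needs_human_review"
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_score": score,
        "outcome": outcome,
        "responsible_owner": RESPONSIBLE_OWNER,
    }))
    return outcome

print(decide("case-001", 0.97))  # auto-approved, but logged and attributable
print(decide("case-002", 0.55))  # escalated to a human reviewer
```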
Implementation & Impact
How should organizations implement the Rome Call principles?
Organizations can implement Rome Call principles by: (1) Ethics review boards—establishing diverse committees with real authority to approve or reject AI projects; (2) Impact assessments—evaluating each AI system against all six principles before deployment; (3) Transparency mechanisms—providing clear explanations of how AI systems work and make decisions; (4) Bias testing—rigorously testing for discrimination across different populations; (5) Human oversight—ensuring humans review consequential AI decisions; (6) Privacy protections—implementing strong data security and minimizing collection; (7) Accountability structures—establishing clear responsibility for AI impacts. Implementation requires organizational commitment to prioritize ethics alongside efficiency and profit, as discussed in Vatican-tech industry dialogues.
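As one illustration, item (2), the impact assessment, can be encoded so that deployment is blocked until every principle has been assessed and passed. This is a hypothetical sketch, not an official Rome Call instrument; the six principle names come from the document, while the field names and example system are illustrative:

```python
# A minimal sketch of a pre-deployment impact assessment recording a
# pass/fail judgment and evidence against each of the six Rome Call
# principles. Principle names come from the Rome Call; everything else
# (field names, the example system) is a hypothetical illustration.
from dataclasses import dataclass, field

PRINCIPLES = ["transparency", "inclusion", "accountability",
              "impartiality", "reliability", "security_and_privacy"]

@dataclass
class ImpactAssessment:
    system_name: str
    findings: dict[str, tuple[bool, str]] = field(default_factory=dict)

    def record(self, principle: str, passed: bool, evidence: str) -> None:
        assert principle in PRINCIPLES, f"unknown principle: {principle}"
        self.findings[principle] = (passed, evidence)

    def approved(self) -> bool:
        """Deployable only if every principle was assessed and passed."""
        return (set(self.findings) == set(PRINCIPLES)
                and all(ok for ok, _ in self.findings.values()))

ia = ImpactAssessment("resume-screening-v2")
ia.record("transparency", True, "reason codes shown to rejected candidates")
ia.record("impartiality", False, "four-fifths rule failed for age groups")
print("Cleared for deployment:", ia.approved())  # False: gaps and one failure
```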
Real-World Example: Microsoft's Responsible AI Framework
The Situation: As one of the Rome Call's original signatories, Microsoft restructured its entire AI development process around the six principles, creating company-wide ethics review boards and mandatory impact assessments.
The Implementation: Microsoft established the Office of Responsible AI and the Aether Committee to oversee this process, requiring AI projects to pass ethics review before deployment. It also released open-source tools such as InterpretML (model interpretability) and Fairlearn (fairness assessment) that support the transparency and impartiality principles; a brief Fairlearn sketch follows this example.
The Outcome: Microsoft has publicly declined facial recognition contracts worth millions over ethical concerns consistent with Rome Call principles, demonstrating real commitment. Its implementation became a model for other tech companies.
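For readers curious what a tool like Fairlearn does in practice, here is a brief usage sketch of its MetricFrame API with hypothetical data; it reports a metric separately per demographic group, supporting the impartiality and inclusion principles. Consult the Fairlearn documentation for current details:

```python
# A brief sketch of disaggregated evaluation with Microsoft's open-source
# Fairlearn library (mentioned above). Data and group labels are hypothetical.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print("Overall accuracy:", mf.overall)
print("Accuracy by group:\n", mf.by_group)
print("Largest gap between groups:", mf.difference())
```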
Is the Rome Call legally binding?
No, the Rome Call is not legally binding—it's a voluntary ethical commitment. However, its influence comes from: (1) public commitment creating reputational stakes for signatories; (2) moral authority of Vatican and other signatories; (3) framework for future regulation as governments develop AI laws; (4) pressure from civil society and consumers expecting ethical AI; (5) potential legal liability if ignoring principles leads to harm. While not law, the Rome Call shapes what counts as responsible AI development and gives stakeholders language to demand better. Some jurisdictions have incorporated Rome Call principles into emerging AI regulation, making them increasingly legally relevant. The voluntary nature enables broad participation while still creating real accountability.
What has been the Rome Call's impact since 2020?
The Rome Call's impact includes: (1) establishing shared ethical language across tech industry, governments, and civil society; (2) influencing emerging AI regulation—the EU AI Act and other laws reference similar principles; (3) creating a framework for ongoing Vatican engagement with tech leaders through initiatives like the Minerva Dialogues; (4) legitimizing moral critique of AI beyond purely technical or legal concerns; (5) inspiring similar multi-stakeholder ethics initiatives; (6) providing a practical framework organizations use for ethical AI implementation. While AI challenges persist, the Rome Call changed the conversation—making ethical considerations central to AI development discussions rather than afterthoughts. It demonstrated the possibility of collaboration across traditional divides on technology governance.
How does the Rome Call relate to AI regulation?
The Rome Call complements rather than replaces regulation. While regulations provide legal requirements and enforcement, the Rome Call offers: (1) moral framework explaining why these requirements matter; (2) universal principles transcending specific jurisdictions; (3) guidance for situations not yet covered by law; (4) ethical foundation for developing good regulation; (5) mechanism for voluntary action before regulation exists. The relationship is synergistic: Rome Call principles inform what regulation should require, while regulation provides teeth for ethical commitments. Organizations following Rome Call principles are well-positioned for compliance as regulation develops. The document shows how moral leadership can shape emerging technology governance, as seen in Vatican engagement with policymakers.
Signatories & Commitment
Who signed the Rome Call for AI Ethics?
The original February 2020 signatories included: Microsoft (Brad Smith, President), IBM (John Kelly, Executive Vice President), the UN Food and Agriculture Organization (FAO), the Italian Ministry of Innovation, and the Pontifical Academy for Life representing the Vatican. Since 2020, additional signatories have joined including other tech companies, academic institutions, civil society organizations, and government agencies. The multi-stakeholder nature is crucial—tech companies developing AI, governments regulating it, international organizations implementing it, and moral authorities like the Vatican all committed to shared principles. This breadth of support demonstrates that ethical AI isn't just corporate social responsibility but fundamental to legitimate technology development serving humanity.
Why did tech companies agree to sign?
Tech companies signed the Rome Call for several reasons: (1) genuine recognition that AI ethics requires guidance beyond corporate self-interest; (2) reputational benefit from association with the Vatican's moral authority; (3) preemptive action before government regulation imposed stricter requirements; (4) competitive advantage in demonstrating ethical commitment; (5) internal demand from employees wanting ethical guardrails; (6) investor and consumer pressure for responsible AI. The Vatican offered neutral convening power—a way to demonstrate commitment without admitting wrongdoing or accepting regulatory oversight. While cynics question corporate sincerity, even imperfect commitment creates accountability and shapes industry norms. The Rome Call gives stakeholders a framework to hold signatories accountable, as discussed in ongoing Vatican-tech dialogues.
📚 Additional Vatican Resources
Where can I find more Vatican documents on this topic?
For deeper understanding from official Vatican sources, explore these documents:
- Rome Call Meeting 2023 - Latest Rome Call developments
- Rome Call Signatories Meeting - Industry commitments
- Antiqua et Nova (2025) - Theological grounding
- Tech Executives Meeting (2025) - Ongoing industry dialogue
These documents provide official Vatican perspectives, historical context, and theological foundations for understanding AI ethics from a Catholic perspective.