It sounded like the setup for a satire. A handful of the most powerful technology executives on Earth, from companies including Microsoft and IBM, flew to Rome to meet priests and philosophers. The setting was the Vatican's marble-lined Casina Pio IV, a Renaissance villa more accustomed to papal science councils than to corporate summits. The mission was to sign a document about artificial intelligence.

Five years later, in 2025, that day feels less quaint than prophetic. The document signed there became known as the Rome Call for AI Ethics, and the gathering itself as the moment faith met code and moral philosophy walked straight into Silicon Valley’s backyard.

In February 2020, just weeks before the world shut down for the pandemic, something extraordinary took place inside Vatican City. Executives from Microsoft and IBM sat beside cardinals, rabbis, and United Nations officials. There were no venture capital pitches or product demonstrations, only a shared statement about how machines should treat human beings.

For a technology culture accustomed to speed and disruption, Rome offered a very different rhythm: think slowly and build wisely. The Vatican was not offering regulation or funding. What it brought instead was older, rarer, and harder to ignore: two thousand years of moral infrastructure and a global audience that spans every border and faith.

One participant later recalled, “We needed a convening power that wasn’t beholden to any government or corporation. The Vatican was the only place that could put everyone at the same table and make them listen.”

Pope Francis had been talking about AI long before ChatGPT turned it into dinner-table conversation. His concern was not robots taking jobs but algorithms taking judgment, systems deciding who receives a loan, a job, or parole without any understanding of human dignity.

In the months before the summit, he warned that technology should never determine the fate of individuals and societies without moral reflection. But he did not want to lecture. He wanted to convene. His science advisors, including Fr. Paolo Benanti, a Franciscan friar trained in engineering and bioethics, drafted a plan to gather the world’s AI powerbrokers and challenge them to articulate shared principles before the technology became embedded in global infrastructure.

Out of that unlikely collaboration came six simple words: transparency, inclusion, responsibility, impartiality, reliability, and security. Together they formed the Rome Call for AI Ethics, a kind of Ten Commandments for code.

Transparency meant people should know when and how an algorithm makes a decision about them. Inclusion meant AI must serve everyone, not just those who design it or can afford it. Responsibility kept humans accountable, because machines cannot carry moral blame. Impartiality required systems to reduce bias rather than amplify it. Reliability insisted that AI must work safely and predictably. And security reminded everyone that data belongs first to the individual, not the institution.

There were no legal penalties. But the signatories included Brad Smith of Microsoft, John Kelly III of IBM, senior Vatican officials, and representatives of the Italian government, with rabbis and imams among those present, and together they gave the document real weight. When Smith later described the event as “a turning point in how we think about technology’s role in society,” he was not exaggerating. For the first time, the moral language of should entered a conversation long dominated by can.

The timing was uncanny. By early 2020, headlines about biased facial recognition systems, discriminatory hiring algorithms, and extremist-fueling social feeds were everywhere. Governments were beginning to regulate. Consumers were losing trust. The technology industry needed legitimacy. The Vatican needed a way to apply its social teaching to modern life. The Rome Call gave both sides what they wanted: a moral handshake that made ethics sound like innovation rather than obstruction.

Behind the photo opportunities, something deeper stirred. Francis reframed AI not as a technical challenge but as a human rights issue. The question was no longer how to perfect the code, but how to preserve conscience in a data-driven world.

Then the world changed. COVID-19 emptied airports and silenced conferences, but the Rome Call kept echoing. Within a year, the European Union cited several of its principles while drafting the AI Act, especially transparency and human oversight. The Vatican had no seat in Brussels, yet its phrasing appeared in policy memos and academic syllabi.

Technology companies followed. Microsoft’s Responsible AI principles closely mirror the Call’s six commitments. IBM included the document in employee ethics training. Whether that reflected conviction or branding is debatable, but it gave journalists and watchdogs a moral yardstick to measure companies against.

Universities adopted it. Interfaith coalitions expanded it. By 2023, Buddhist, Hindu, and Indigenous leaders had joined follow-up meetings, turning the original trio of faiths into a global dialogue. What began as a photo opportunity became a vocabulary.

Not everyone accepted the Vatican’s miracle narrative. Scholars pointed out that “human dignity” and “the common good,” however noble, are too vague to operationalize in code audits or compliance checklists. Others noted that the Rome Call did not address the real engines of AI harm: data monopolies, economic incentives, and concentrated power. And it had no enforcement mechanism. Companies could sign on Friday and violate the principles by Monday.

Fr. Benanti, who helped draft the document, was candid. “We didn’t solve AI ethics in a day,” he said. “We began a conversation. The question is whether that conversation changes behavior or just provides cover for business as usual.”

Even critics admit that moral frameworks rarely start out perfect. The Universal Declaration of Human Rights was not enforceable either; its influence came from consensus, not coercion.

For the Vatican, the Call was never about compliance but about conscience. By drawing technology leaders into moral language, it forced a culture defined by disruption to pause and ask why. That is Rome’s enduring strength: it can convene where others compete. As Cardinal Peter Turkson put it at the signing, “Tech companies think in quarters. Governments think in election cycles. The Church thinks in centuries.”

The genius of the Rome Call was not theological but temporal. It widened the horizon.

Since 2020, the Vatican’s AI efforts have expanded. The Pontifical Academy for Life now hosts annual Rome Call follow-ups on AI and climate modeling, algorithmic warfare, automation and inequality, generative AI and authorship, and end-of-life care technologies. The guest list now includes artists, ethicists, and activists. The idea is to keep the circle widening until AI ethics becomes a conversation, not a code.

In parallel, Francis has woven the same themes into his encyclicals, urging that technology serve the integral development of the person, not just economic growth. He frames it not as a ban but as a balance: innovation with inclusion, progress with purpose.

Even outside faith circles, the Rome Call offers durable lessons. Ethics needs diversity. Principles must become practice. Time horizons matter. And some questions are not technical: “Should we?” is a moral question, not a computational one.

The Vatican’s AI revolution is not about religion so much as it is about reminding the world that moral imagination still counts as a kind of intelligence.

Five years on, no one pretends the Rome Call fixed AI. Bias still infects datasets. Automation still displaces workers. Surveillance still erodes privacy. But the Call changed the tone. It made moral language acceptable again in a field that once dismissed it as medieval.

When the next generation of policymakers writes AI law and engineers debate algorithmic fairness, they will borrow phrases minted in Rome: transparency, inclusion, responsibility.

Perhaps that is how ethical revolutions begin, not with commandments carved in stone, but with a conversation that refuses to end.

Frequently Asked Questions

Why did tech companies agree to sign the Rome Call?

Tech companies signed in February 2020 for strategic reasons: the Call provided moral credibility at a time when AI ethics was becoming a critical public concern, and the Vatican offered convening power without regulatory authority, making it a uniquely neutral host.

What did tech companies actually commit to?

By signing, companies committed to six principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. These are voluntary commitments without legal enforcement, but they create reputational stakes.

Has the Rome Call actually changed how tech companies build AI?

Evidence is mixed. Some companies established AI ethics boards and increased transparency. However, critics note that voluntary commitments often lack teeth. The real impact may be cultural: the Call legitimized AI ethics as a corporate concern.

Why was the Vatican chosen as convener?

The Vatican offered unique advantages: moral authority built over two millennia, complete independence from governments and corporations, global reach, and historical perspective on technological disruption.

What role did Pope Francis personally play?

Francis was the driving force, warning about AI years before ChatGPT. He provided the moral vision that AI must serve human dignity and lent his personal authority to bring tech leaders to Rome.

Did other religious leaders participate?

Yes. Jewish rabbis and Muslim leaders participated, making it interfaith. This reinforced that the Rome Call addresses universal human values, not just Catholic doctrine.

How is the Rome Call different from government regulations?

Government regulations establish legal requirements and penalties. The Rome Call offers moral principles without enforcement. The two complement each other: moral frameworks inform what regulations should achieve.

Has it influenced AI policy beyond tech companies?

Yes. The Rome Call's principles have been cited in policy discussions at the OECD, the UN, and the EU. Its language about human dignity and accountability has become standard in AI policy debates worldwide.