In 2025, as nations race to regulate artificial intelligence and militaries experiment with drones that can think for themselves, one of the most consistent voices warning about the moral consequences of technological power does not come from Silicon Valley, Washington, or Beijing.
It comes from Rome.
The Vatican's newest declarations on AI and warfare may sound modern, but they rest on a foundation that stretches back more than three decades. It is a moral architecture that began with nuclear disarmament and now confronts the age of algorithms.
"A world without nuclear weapons is possible and necessary," Pope Francis said in Nagasaki in 2019. "But so too is a world where machines never decide who lives and who dies."
From mushroom clouds to machine code, the Church has been preparing for this moment all along.
The story begins not in a laboratory or a war room, but in the closing chapter of the Cold War.
In 1991, two years after the fall of the Berlin Wall, while the world celebrated the supposed end of history, Pope John Paul II struck a discordant note. His World Day of Peace message reminded the world that peace was not self-sustaining. It required vigilance, justice, and moral imagination.
He framed disarmament not as politics but as ethics. "Peace," he wrote, "is not merely the absence of war but a value to be built up patiently."
Over the next decade he returned to that theme repeatedly.
Weapons of mass destruction, he warned, were incompatible with human dignity. True security would never come from stockpiles but from solidarity. Deterrence, the idea that mutual terror keeps peace, was a false and fragile logic.
John Paul II's teaching did not end the arms race, but it reframed the conversation. The problem was not only the weapons. It was the mindset that made them seem necessary. That insight would become crucial once machines began learning to kill on their own.
When Benedict XVI succeeded him, the emphasis shifted from ideals to precision.
In his 2009 and 2013 peace messages, Benedict returned to a principle rooted in centuries of Catholic moral theology: proportionality. Even just wars must follow rules, he reminded the world, rules designed to limit harm, preserve conscience, and keep humanity tethered to morality amid violence.
He laid out three tests. Weapons must distinguish between soldiers and civilians. Force must be proportional to the threat. The cure must never be worse than the disease.
Nuclear weapons failed all three tests. So did cluster bombs and landmines. And although artificial intelligence was not yet part of the Vatican's vocabulary, Benedict’s warnings anticipated it. He understood something technologists were just beginning to grasp. The more sophisticated our tools become, the easier it is to separate capability from conscience.
When Pope Francis inherited that legacy, he brought moral clarity and urgency.
He was not content to discuss principles. He wanted to expose systems. His 2017 peace message pulled no punches. "Why are deadly weapons sold to those who plan to inflict untold suffering?" he asked. "The answer, sadly, is money. Money drenched in blood."
Francis saw the arms trade, and later the data trade, as twin manifestations of the same sickness: the commodification of human life. It was not only about weapons. It was about the economy that sustained them. The same profit-driven logic that fueled Cold War armament now powers AI research, cyberwarfare, and automated defense systems.
In his view, technology divorced from ethics is never neutral. It simply accelerates the worst tendencies of its creators.
The turning point came on November 24, 2019.
Standing at ground zero in Nagasaki, Pope Francis condemned nuclear arms in searing moral language. "The use of atomic energy for purposes of war is, today more than ever, a crime," he said, "not only against human dignity but against any possible future for our common home."
Then, in what Vatican insiders call the pivot, he went off script. He warned that the same logic of annihilation was reemerging, not in bombs, but in algorithms.
Behind closed doors, Francis had been meeting with scientists, theologians, and ethicists to discuss AI and autonomy in warfare. The questions he asked were deceptively simple. What happens when a machine decides who dies? Can an algorithm understand mercy? If an autonomous drone kills civilians, who bears moral responsibility—the coder, the commander, or the code?
The answers were chilling, and they shaped everything the Vatican would say about AI in the decade to come.
When COVID-19 silenced public gatherings, the Vatican turned inward and digital. Francis began issuing messages that quietly linked automation, inequality, and violence.
His 2022 World Day of Peace message warned: "New technologies can contribute to peace, but they can also increase inequality and conflict when controlled by a few or developed without ethical guidelines."
Translation: AI is the new arms race.
Unlike nuclear weapons, autonomous systems do not require uranium or government laboratories. They are software. They scale cheaply. They spread invisibly.
In mid-2022, the Holy See sent a formal statement to the U.N. Disarmament Commission, its first to explicitly connect AI, cyberwarfare, and lethal autonomous weapons. The warning was blunt. The world is sleepwalking into a new kind of arms race, one waged not with missiles but with models.
In 2024, the Vatican dropped any ambiguity.
Francis's World Day of Peace message that year, titled "Artificial Intelligence and Peace," became the Church's most explicit moral statement on technology.
He echoed the same moral triad that John Paul II had articulated three decades earlier. Peace requires justice. Weapons that erase moral boundaries are immoral. Security comes from solidarity, not dominance. Then he applied it directly to the machines now reshaping warfare.
"How can a machine make ethical decisions?" he asked. "How can we ensure that algorithms respect human dignity? And what happens when the capacity to wage war is democratized through code?"
He called for three urgent actions: a global ban on autonomous kill systems without human oversight, international treaties for military AI transparency, and moral responsibility built into every level of AI design.
The statement landed with unusual force. Defense ministers and AI researchers debated it at Davos and in U.N. panels. Even skeptics admitted that moral language had finally caught up to the technology.
Francis then took the argument to its philosophical core.
In his 2024 World Communications Day message, "Artificial Intelligence and the Wisdom of the Heart," he reminded the world that intelligence and wisdom are not the same thing. "True intelligence," he wrote, "requires more than data processing. It requires wisdom, the ability to discern what serves human flourishing and what diminishes it."
An AI system might process data perfectly, but it cannot feel the moral weight of taking a life. It can optimize, but it cannot empathize. It can simulate reason, but it cannot love.
And that, Francis insists, makes all the difference.
He paired the message with a new doctrinal note, Antiqua et Nova ("Ancient and New"), issued in early 2025. It reaffirmed the Vatican's position that AI must serve human dignity, not replace human judgment. It also established a doctrinal baseline: in every domain, including warfare, human conscience must remain sovereign.
Across three popes, one thread runs unbroken. Technology changes, but the moral stakes remain constant. Weapons of mass destruction turned human lives into numbers. Autonomous weapons threaten to do the same, only faster, cheaper, and without remorse.
Deterrence promised safety through fear. Automation promises efficiency through detachment. Both replace moral discernment with mechanical logic.
The Vatican's teaching resists that substitution. It insists that conscience cannot be outsourced, not to generals, not to algorithms, not to any system that confuses calculation for judgment.
The Church's argument is not anti-technology. It is anti-nihilism.
For decades, nations justified nuclear arsenals as deterrence, the idea that mutual destruction ensured peace. Francis and his predecessors called it what it was, a moral contradiction. Now, the same contradiction is reemerging under new branding. Autonomous weapons promise to minimize risk and reduce casualties, but they also lower the threshold for war, detaching human cost from human decision.
The Vatican's alternative logic, the logic of dignity, says peace is achieved not by controlling others through fear or automation but by cultivating justice, solidarity, and empathy. It is not utopian. It is survival.
The Vatican cannot regulate AI or enforce treaties, but it can shape the moral climate in which laws are written and technologies are built. History shows it is not powerless. The same peace messages once dismissed as naïve helped inspire treaties on nuclear testing, landmines, and chemical weapons. They gave moral language to diplomats who lacked it.
Now the Church is doing the same for AI. It is not writing code. It is writing conscience.
"Machines may learn," Francis said recently, "but only humanity can choose."
That choice, to build tools that serve life rather than replace it, will define the next century.
Thirty-five years of papal peace teaching have led here, to a world where the tools of annihilation no longer glow with radiation but hum quietly in data centers. The Vatican's warning is simple but radical. We cannot make peace by automating the conditions for war.
We can build weapons that think faster than us. But if we build them without wisdom, they will one day act without us.
Related Resources from DCF Hungary
Vatican Documents on Peace and Disarmament
- Apostolic Journey to Japan - Nagasaki Address (2019)
- Holy See Statement on Disarmament and Emerging Technologies
- World Day of Peace 2013 - Blessed Are the Peacemakers
Vatican Documents on AI and Technology Ethics
- World Day of Peace 2024 - Artificial Intelligence and Peace
- World Communications Day 2024 - AI and Wisdom of the Heart
- World Day of Peace 2022 - Dialogue, Education, and Work
Historical Peace Teaching
- World Day of Peace 2017 - Nonviolence: A Style of Politics for Peace
- World Day of Peace 2009 - Fighting Poverty to Build Peace
Frequently Asked Questions
How long has the Vatican been working on technology ethics?
The Vatican has been addressing technology ethics systematically since at least 1991, when Pope John Paul II began connecting disarmament ethics to emerging technologies. The 35-year progression from nuclear ethics to AI ethics shows remarkable consistency.
What is the connection between nuclear weapons and AI weapons?
The Vatican views autonomous AI weapons as a new manifestation of the same moral problem: both involve delegating catastrophic decisions to systems that act faster than human moral reasoning. Both require human judgment, conscience, and accountability.
How did Pope John Paul II influence current AI positions?
John Paul II established that weapons of mass destruction are incompatible with human dignity and that true security comes from solidarity, not deterrence. His teaching that moral reasoning must guide technology became the foundation for Francis's AI approach.
When did the Vatican first warn about AI specifically?
Explicit AI warnings emerged in the late 2010s, with Francis addressing AI in 2019. The formal framework intensified with the Rome Call in 2020. However, these built on decades of teaching about autonomous systems.
What does "machines should never decide who lives and who dies" mean?
It means lethal force decisions must always involve meaningful human control—a human who can exercise moral judgment, show mercy, and be held accountable. The moment of deciding to take a life requires human conscience.
Has the Vatican's position on technology changed over 35 years?
Core principles haven't changed, but applications have evolved. The consistent message is that human dignity is non-negotiable and moral responsibility cannot be transferred to machines. What changed is the specific technologies addressed.
Why focus so much on weapons and warfare?
Weapons represent the starkest ethical test case. If it is wrong to delegate life-and-death decisions to algorithms in warfare, then a fortiori AI systems making healthcare or employment decisions require human judgment too.
What practical impact has 35 years of teaching had?
The Vatican has influenced UN, OECD, and EU policy discussions. The Rome Call drew public commitments to ethical AI from major technology companies. More broadly, the teaching legitimized treating AI as a moral issue and shaped how we think about AI's relationship to human dignity.
