As more people turn to artificial intelligence for comfort and decision-making, religion seeks to involve itself in the process
By Dana Humaid Al Marzouqi and Joanna Shields
In a New York Times editorial, Northeastern University psychology professor David DeSteno posed a question: Can religion make artificial intelligence (AI) moral? Religion, he said, draws its transformative power not from doctrine or scripture, but from physical rituals such as fasting, breathwork and communal prayer. Because AI has no body and no capacity for compassion, gratitude or moral struggle, these mechanisms lie beyond its reach. This argument, while compelling, misidentifies the problem.

Of course, we cannot make AI virtuous in the way that a meditating monk is transformed through spiritual practice. The real question is who determines the values embedded in the AI systems that shape how billions of people make decisions, and which moral traditions and assumptions those systems ultimately reflect.

In just a few years, AI has become a fixture of daily life. In an Ipsos survey last year, 53 percent of respondents said AI has already changed their lives, while two-thirds expect it to change society even further in the coming years. AI's growing influence is rooted in the fact that it is trained on the accumulated record of human knowledge, language and behavior. These systems also learn through simulation, developing powerful capabilities that are not fully explainable, even to their creators.

While people use AI to become more productive, they are beginning to rely on it to feel seen and understood. In this sense, generative AI has moved beyond utility into intimacy, entering emotional spaces once reserved for human relationships. Among younger users in particular, chatbots are becoming confidants, offering reassurance and advice. For some, AI is already a preferred source of comfort and counsel.
Heavy reliance on AI companions carries profound risks, as it is associated with loneliness and reduced social engagement. A major part of AI's appeal is that, unlike human relationships, it offers interaction without friction. Over time, the danger is that it might condition people to seek relationships that no human being can sustain.

Something similar is happening in the domain of morality and spirituality, as people increasingly turn to AI systems with questions of meaning, ethics and belief. Many people use AI to summarize sacred texts, interpret religious doctrine and seek moral guidance.

The problem is not merely regulatory or technical. It concerns how people understand themselves, how they relate to one another and how they tell right from wrong. These are not questions that better models or stricter compliance alone can answer; they require judgment, context and moral imagination. In short, they require wisdom.

That is where DeSteno's argument falls short. The value of faith traditions in the age of AI lies not in their ability to make machines spiritual, but in their preservation of centuries of moral reasoning. Across cultures and continents, faith communities continue to play a role in shaping moral understanding. Religious leaders are trusted figures for many, particularly in moments of ethical ambiguity or personal crisis. While AI can simulate empathy, it cannot feel it, and it has little to offer in such moments.

At the same time, AI could play a powerful role in protecting vulnerable people and strengthening communities when used carefully and strategically. Our combined experience in law enforcement, child and community protection, technology, policy and interfaith collaboration — including through the Interfaith Alliance for Safer Communities — has taught us that AI can help identify threat patterns, flag risks and support intervention. But the hard work of understanding why harm occurs cannot be automated.
Rebuilding trust within communities fractured by violence, exploitation or neglect requires human relationships, moral credibility and lived experience.

The Faith-AI Covenant initiative held its inaugural roundtable in New York last month, bringing together representatives from more than 15 religious groups and AI companies, including OpenAI and Anthropic. The initiative draws inspiration from countries that have sought to integrate ethical frameworks into AI development. Chief among them is the United Arab Emirates, home to many cultures and major religions, which Microsoft said had a high rate of workplace AI adoption.

The Covenant's premise is straightforward: Since AI models are already shaping how people think, relate and make decisions, the values embedded within them cannot be determined solely by technical processes. They must be informed by the moral traditions that have long guided human societies. Rather than encoding religious doctrine into governance frameworks, the goal is to ensure that human values and social responsibility remain central to them.

By fostering collaboration between AI developers and faith communities, the Faith-AI Covenant seeks to connect the technical architecture of these systems with the ethical foundations of the societies they increasingly influence. Informed by discussions between AI and faith leaders from around the world, its aim is to advance shared moral frameworks that protect dignity, agency and cognitive liberty, ensuring that those principles shape how AI models are designed and deployed.

To reduce the risk of inaccurate or harmful AI-generated interpretations, the initiative supports the creation of a verified, consensus-based body of religious knowledge across traditions and languages. It encourages developers and faith leaders to design safety mechanisms that can identify and mitigate harms such as exploitation, radicalization and manipulation.
At the heart of the Faith-AI Covenant is the belief that technological progress and the moral wisdom of faith communities are not in conflict with one another. AI is set to transform every aspect of our societies. The question is whether that transformation will be driven solely by technological capability or guided by the values and personal bonds that make us human.

Dana Humaid Al Marzouqi is co-chair of the Faith-AI Covenant Global Initiative and chief executive of the Interfaith Alliance for Safer Communities. Joanna Shields, a former UK minister for Internet safety and security, is co-chair of the Faith-AI Covenant Global Initiative.

Copyright: Project Syndicate