AI and the Problem of Moral Agency: What Would Jesus Automate?


1. Introduction: From Automation to Accountability

In the age of artificial intelligence, tasks once considered deeply human—decision-making, medical diagnosis, teaching, even pastoral care—are now being performed or assisted by machines. But when decisions have moral consequences, who is responsible? As AI gains autonomy in areas of justice, warfare, and social governance, Christians must ask: Can a machine bear moral agency? And if not, what would Jesus automate—and what would He insist remain human?


2. Biblical and Theological Foundations

2.1 Moral Agency and Human Responsibility

Christian ethics begins with the doctrine that humans are created in the imago Dei (Genesis 1:26–27), endowed with conscience, free will, and responsibility. Scripture calls people to:

  • Choose wisely (Deuteronomy 30:19)
  • Act justly (Micah 6:8)
  • Love truthfully and sacrificially (John 13:34)
  • Repent when wrong (Acts 3:19)

Moral agency in Scripture is relational, responsive, and accountable—qualities that machines do not possess.

2.2 Jesus and Moral Discernment

Jesus models moral decision-making that is situational, compassionate, confrontational, and Spirit-led. Whether confronting injustice (Mark 11), forgiving sin (John 8), or responding to need (Luke 7), His actions are shaped not by rules alone, but by love, wisdom, and divine timing.

Automation, by contrast, optimises outcomes—it does not discern motives or reveal grace.


3. Contemporary Applications: AI in Moral Decision-Making

3.1 Autonomous Systems with Moral Impact

AI is already making or influencing decisions in:

  • Criminal justice (sentencing algorithms, predictive policing)
  • Hiring and admissions (automated CV filtering, performance projections)
  • Healthcare (triage prioritisation, surgical robotics)
  • Warfare (drone targeting, threat identification)
  • Content moderation (deciding what is true, harmful, or acceptable online)

These decisions often carry ethical weight—impacting lives, liberty, and dignity.

3.2 Machine Learning Without Moral Understanding

AI systems “learn” patterns from data—but they:

  • Lack moral intent (they do not intend good or evil)
  • Do not experience consequences (no capacity for guilt or growth)
  • Cannot love or hate (emotions are simulated, not experienced)

Therefore, any ethical action by AI is accidental, not volitional.
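The point that a machine "decision" has no moral interior can be made concrete. Below is a minimal, purely illustrative sketch of the arithmetic at the heart of a risk-scoring tool of the kind mentioned above (sentencing and triage algorithms); the features, weights, and threshold are invented for the example. Notice that nothing in the computation represents intent, guilt, or an understanding of the lives affected:

```python
def risk_score(features, weights, bias):
    """The whole 'judgement': a weighted sum of numeric features."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def recommend_detention(features, weights, bias, threshold=0.5):
    # The morally weighty outcome is a side effect of thresholding
    # a learned score, not a volitional choice by the system.
    return risk_score(features, weights, bias) > threshold

# Hypothetical defendant: [prior arrests, employed (0/1), age bracket]
features = [1.0, 0.0, 3.0]
weights  = [0.4, -0.2, 0.05]   # values a training process might produce
print(recommend_detention(features, weights, bias=-0.3))
```

Whatever values the training data yields, the structure is the same: pattern-matching in, number out. The "ethics" of the result lives entirely in the humans who chose the data, the features, and the threshold.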


4. Critical Evaluation: The Limits of Artificial Morality

4.1 The Myth of Autonomous Ethics

AI does not generate ethics—it applies human-coded principles or inferred behaviours. Therefore, the moral agent remains the human: designer, trainer, or user.

To claim "the AI did it" is ethically evasive. As Jesus said, "Out of the abundance of the heart the mouth speaks" (Luke 6:45). What comes out of a machine likewise reflects the hearts of those who designed, trained, and deployed it.

4.2 Automation and the Loss of Moral Growth

Christian ethics involves formation: learning through failure, conviction, repentance, and transformation. Automating moral choices denies humans this developmental process. We risk creating ethical stagnation by outsourcing hard choices.

4.3 Accountability and the Image of God

Only beings who can be judged justly can be held morally responsible. Scripture teaches that each person will “give an account of himself to God” (Romans 14:12). Machines are not accountable, because they are not souls.


5. What Would Jesus Automate? A Theological Discernment Grid

Christians should consider:

  • Automate the procedural: data sorting, translation, administrative support
  • Enhance the discernible: use AI to inform, not replace, human judgement
  • Never automate the relational: reconciliation, confession, ethical confrontation
  • Retain spiritual leadership: pastoral, prophetic, moral guidance must remain human

Jesus might use AI to feed crowds (logistics), but He would not automate foot-washing (John 13) or call sinners to repentance via chatbot. Moral ministry requires the touch of grace.


6. Conclusion: Tools Without Souls, Decisions With Consequences

AI can be an excellent assistant—but never a moral agent. It cannot obey God, feel remorse, or choose righteousness. In Christian ethics, the question is not just what works best, but what forms people into Christlikeness.

In this light, “What would Jesus automate?” becomes a call to spiritual discernment—where the Church must both use technology wisely and resist the urge to surrender moral responsibility to machines.


