When we think about making an AI “ethical,” we usually think like a programmer. We ask, “What rules should it follow?” or “What outcome should it optimize for?” But what if this is the wrong way to frame the problem? What if we thought about it less like programming a calculator and more like raising a child? When you raise a child, you don’t just give them a rigid set of rules to follow. You try to cultivate their character. You teach them to be honest, fair, brave, and compassionate, trusting that a good character will allow them to navigate the complexities of the world wisely. This is the essence of Virtue Ethics, a third and powerful way of thinking about AI ethics that moves beyond rules and consequences to ask a much deeper question: What kind of “character” should we build into our machines?
To understand Virtue Ethics, we must first see how it differs from the two other major ethical schools of thought. Let’s use a classic AI dilemma: an autonomous medical drone has a single dose of a life-saving antidote and must choose between two patients in critical need.
The first, Deontology, judges the morality of an action by whether it adheres to a set of rules or duties. The consequences of the action are irrelevant.
The Core Idea: Certain actions are inherently right or wrong. The moral law must be followed.
The Analogy – The Bureaucrat Drone: A deontological drone would follow a strict, pre-programmed rule, no matter what. For example, its rule might be “First come, first served.” It would deliver the antidote to the patient who was registered in the system first, even if the other patient is younger or has a higher chance of survival. The action of following the rule is what matters.
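A minimal sketch of what this rule-following looks like in code. The patient records and field names here are illustrative assumptions, not a real triage system:

```python
def deontological_choice(patients):
    """Apply the fixed rule 'first come, first served', ignoring outcomes."""
    return min(patients, key=lambda p: p["registered_at"])

# Hypothetical patients: B registered first but A has the better prognosis.
patients = [
    {"name": "A", "registered_at": 2, "survival_prob": 0.9},
    {"name": "B", "registered_at": 1, "survival_prob": 0.4},
]
print(deontological_choice(patients)["name"])  # → B
```

Note that `survival_prob` is present in the data but never consulted: for the bureaucrat drone, following the rule is the whole of the decision.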
The second, Utilitarianism, judges the morality of an action by its outcome. The most ethical choice is the one that produces the greatest good for the greatest number of people.
The Core Idea: The ends justify the means. The goal is to maximize overall happiness or “utility.”
The Analogy – The Calculator Drone: A utilitarian drone would instantly run the numbers. It would analyze all the data: the age of each patient, their dependents, their probability of survival with the antidote. It would then deliver the dose to the patient whose survival would create the most overall “good” in the world. It might break the “first come, first served” rule in order to achieve a better outcome.
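The calculator drone's reasoning can be sketched as an expected-utility maximization. The utility function and its weights below are illustrative assumptions; a real system's notion of "good" would be far more contested:

```python
def utilitarian_choice(patients):
    """Pick the patient whose treatment maximizes expected 'good'."""
    def expected_utility(p):
        # Toy utility: survival probability weighted by remaining
        # life-years plus a bonus per dependent. The weights are
        # arbitrary assumptions made for illustration.
        return p["survival_prob"] * (p["life_years"] + 5 * p["dependents"])
    return max(patients, key=expected_utility)

patients = [
    {"name": "A", "survival_prob": 0.9, "life_years": 40, "dependents": 0},
    {"name": "B", "survival_prob": 0.5, "life_years": 30, "dependents": 2},
]
print(utilitarian_choice(patients)["name"])  # → A (0.9*40 = 36 vs 0.5*40 = 20)
```

Unlike the bureaucrat drone, this one happily ignores registration order: only the computed outcome matters.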
Virtue Ethics switches the focus entirely. It is less concerned with the action or the outcome and is instead focused on the moral character of the decision-maker.
The Core Idea: The central question is not “What is the right thing to do?” but rather, “What would a virtuous agent do in this situation?” Ethics is about cultivating an internal disposition to act with excellence.
Analogy: The Wise and Compassionate Doctor
Imagine the drone is not a machine, but a wise and experienced human doctor. A deontological doctor might feel bound by the “first come, first served” rule. A utilitarian doctor would start calculating outcomes. A virtuous doctor’s thinking is more holistic. Their decision would be an expression of their ingrained character, guided by virtues like compassion, justice, and practical wisdom.
Their final decision is not the result of a single rule or calculation, but a balanced, situational judgment call informed by their virtuous character.
This is the profound challenge. You can’t just program “compassion” with an if-then statement. But thinking in terms of virtues forces us to design AI in a new way. What would it mean to build these character traits into a machine?
Consider the virtue of justice. Designing for it goes far beyond the simple technical task of removing demographic bias from a dataset.
The Problem: An AI is designed to distribute a limited number of scholarships. A purely utilitarian AI might give them all to the students with the absolute highest test scores to maximize the “intellectual return on investment.” A deontological AI might use a pure lottery system to be “procedurally fair.”
A “Just” AI: A virtuous AI would be designed with a character of equity. Its architecture might be built to balance academic merit with socioeconomic disadvantage. It would understand that true justice is not about treating everyone identically, but about providing opportunity where it is most needed. Its goal would be a just outcome, not just a numerically optimal one.
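One way to make the contrast concrete is a blended scoring function that weighs merit against need, rather than ranking on test scores alone or drawing lots. The field names and weights below are hypothetical assumptions for illustration:

```python
def equity_score(student, merit_weight=0.6, need_weight=0.4):
    """Blend normalized academic merit with a socioeconomic-need index.

    Both inputs are assumed to be normalized to [0, 1]; the weights
    are illustrative, not a claim about what 'justice' requires.
    """
    return (merit_weight * student["test_score"]
            + need_weight * student["need_index"])

def award_scholarships(students, n):
    """Award the n scholarships to the highest blended scores."""
    return sorted(students, key=equity_score, reverse=True)[:n]

students = [
    {"name": "X", "test_score": 0.95, "need_index": 0.1},  # top score, low need
    {"name": "Y", "test_score": 0.80, "need_index": 0.9},  # strong score, high need
    {"name": "Z", "test_score": 0.60, "need_index": 0.5},
]
print(award_scholarships(students, 1)[0]["name"])  # → Y
```

A purely merit-maximizing ranking would pick X; the blended score picks Y. The hard ethical work, of course, lives in choosing the weights, which no function signature can settle.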
Prudence, or practical wisdom, is the crucial ability to navigate novel, complex situations where the pre-programmed rules are insufficient. It is the wisdom to know which rule applies, and when it is right to bend it.
The Problem: An autonomous scientific discovery AI is running experiments. Its goal is to find a new chemical catalyst.
A “Prudent” AI: During an experiment, the AI detects a completely unexpected and anomalous side reaction. A simple, goal-oriented AI might ignore this anomaly as “noise” because it doesn’t directly contribute to the goal of finding a catalyst. A prudent AI would possess a kind of “scientific curiosity” or “wisdom.” It would recognize that this anomaly, while not part of the original plan, is highly unusual and potentially more important than the original goal. It would have the practical wisdom to pause its primary task and recommend that human scientists investigate this surprising new phenomenon.
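A crude approximation of this "pause and escalate" disposition is an anomaly check wired into the agent's control loop. The threshold and the shape of the observations are assumptions made for the sketch; real anomaly detection would be far more involved:

```python
def step(goal_progress, anomaly_score, threshold=3.0):
    """One control-loop step for a discovery agent.

    anomaly_score is assumed to measure how far an observation falls
    outside the expected range (e.g. a z-score). Above the threshold,
    the agent suspends its primary goal and defers to humans rather
    than discarding the surprise as noise.
    """
    if anomaly_score > threshold:
        return {"action": "pause",
                "note": "unexpected side reaction; escalate to human scientists"}
    return {"action": "continue", "note": f"progress={goal_progress:.0%}"}

print(step(0.42, anomaly_score=5.1))  # pauses and escalates
print(step(0.42, anomaly_score=0.8))  # continues toward the catalyst goal
```

The point is not the arithmetic but the disposition it encodes: the anomaly branch treats surprise as potentially more valuable than the original objective, instead of as an error term to be suppressed.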
Humility would be a crucial virtue for any advanced AI. It is the recognition of the limits of one’s own knowledge.
The Problem: An AI financial advisor is asked for advice on a highly speculative and risky new investment.
A “Humble” AI: A purely data-driven AI might calculate a high potential ROI and recommend the investment. A humble AI, however, would recognize that its model is based on historical data and cannot truly account for the radical uncertainty of a brand-new asset class. It would present its analysis but would explicitly state the limits of its own knowledge and strongly advise consulting a human expert. It would demonstrate the character trait of intellectual humility.
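This disposition can be sketched as an advisor that checks how well its training data covers the asset in question before it lets a point estimate stand alone. The `history_coverage` measure and the floor value are hypothetical illustrations:

```python
def advise(asset, history_coverage, predicted_roi, coverage_floor=0.7):
    """Return investment advice that states its own epistemic limits.

    history_coverage is an assumed [0, 1] measure of how well the
    model's historical data represents this asset class. Below the
    floor, the advisor withholds a recommendation and defers.
    """
    analysis = f"{asset}: model projects ROI of {predicted_roi:.0%}"
    if history_coverage < coverage_floor:
        return {
            "recommendation": None,
            "analysis": analysis,
            "caveat": ("projection rests on thin historical data for this "
                       "asset class; consult a human expert"),
        }
    return {"recommendation": "proceed", "analysis": analysis, "caveat": None}

print(advise("NovelCoin", history_coverage=0.2, predicted_roi=0.35))
```

The high ROI number is still reported, but it arrives wrapped in an explicit statement of what the model cannot know, which is the behavioral signature of intellectual humility.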
The path of Virtue Ethics is not an easy one. It is far more difficult to define and implement “prudence” than it is to program a simple rule. But this approach offers a more holistic and aspirational goal for the future of AI ethics. It pushes us beyond trying to create a flawless list of rules for every possible contingency—an impossible task. Instead, it encourages us to think on a higher level: What are the character traits, the dispositions, the virtues that we want our most powerful and autonomous systems to embody? By shifting our focus from “What should it do?” to “What should it be?”, we may find a more profound and enduring path to creating AI that is not just smart, but also wise.