Moral calculus in videogames

Some ways it is and might be done

A number of games, especially recently, contain moral calculus systems: game mechanics and associated representations that interject some notion of ethics or morality into gameplay, usually tied to the player's actions. The most common approach is to position a player along a good/evil axis reminiscent of "alignment" in traditional role-playing games. A player has a dynamic alignment, updated towards one end or the other as she performs actions deemed good or evil. This is the approach taken by, for example, Black and White (Lionhead, 2001), Star Wars: Knights of the Old Republic (LucasArts, 2003), and Fable (Lionhead, 2004). A player's goodness or evilness then affects what she can do and how non-player characters react to and interact with her. (For reviews of the morality systems in Black and White and Fable, see here.)

Although they can be seen as just another way of adding both gameplay variation and believable characters to a game, moral calculus systems are also representations of systems of ethics. In particular, they're representations that the player is given to engage with and interrogate. This is an angle that I think has been underexplored in existing games, though some have indeed produced interesting results.

Who keeps the morality scoreboard?

Broadly speaking, there are two places in a game in which this ethical machinery can reside: in the game world itself, or within the game's characters.

When moral calculus is part of the game world itself, it takes on a sort of cosmic aspect as part of the definition of that game world. Much as a game world has certain physics that the engine defines, it may also have certain moral rules that the engine defines. In games with a good/evil axis, this may take the form of some actions in the world being inherently and definitionally bad, so that performing them makes the player more evil, simply as a fact of the world.

On the other hand, when moral-calculus machinery is built into game characters, it forms part of the construction of believable agents. Again, this has analogues to non-moral aspects of games, since having and acting on some set of moral beliefs is just one of the many things we expect of fleshed-out, believable characters. In this sort of implementation, it is the interaction with characters that takes on the moral character, rather than interaction with the world more abstractly.

In a game-world moral calculus, the moral character of actions is direct, unambiguous, and centrally arbitrated, much like other kinds of scores that games keep. A player in Knights of the Old Republic sits at a specific point on the light/dark Jedi axis at any given moment, and this point is instantly updated by the game world when the player takes a relevant action—the player's current position on this axis is even displayed explicitly on a meter. The game world then, from on high, morphs the player's visual appearance and determines which abilities she can use. In this respect the lineage with RPG alignment is fairly clear: A player really does have some specific alignment at any given time, which is simply a cosmically given fact.
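To make this concrete, here's a minimal sketch of such a centrally arbitrated system, loosely in the spirit of Knights of the Old Republic. The action names, score deltas, and ability thresholds are all invented for illustration; they aren't taken from any actual game:

```python
# A sketch of world-level ("cosmic") moral calculus. All action names,
# deltas, and thresholds are invented for illustration.

ACTION_ALIGNMENT_DELTAS = {
    "heal_stranger": +10,   # inherently good, by definition of this world
    "steal": -5,
    "kill_innocent": -25,
}

class Player:
    def __init__(self):
        self.alignment = 0  # -100 (dark) .. +100 (light), centrally kept

    def perform(self, action):
        # The world itself arbitrates: the score updates instantly,
        # whether or not anyone witnessed the act.
        delta = ACTION_ALIGNMENT_DELTAS.get(action, 0)
        self.alignment = max(-100, min(100, self.alignment + delta))

    def can_use(self, ability):
        # Abilities are gated directly by position on the axis.
        if ability == "light_side_power":
            return self.alignment >= 50
        if ability == "dark_side_power":
            return self.alignment <= -50
        return True

player = Player()
player.perform("kill_innocent")
player.perform("steal")
print(player.alignment)                   # -30
print(player.can_use("dark_side_power"))  # False: not yet "dark" enough
```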

Moral calculus in characters, on the other hand, results in distributed scoring of actions, which may be variable, delayed, or avoidable; not really the same as a normal game score at all. A cosmic rule like "killing innocent people is evil" translates into a quite different character-level rule, along the lines of "if a decent sort of person knows you kill innocent people, then they'll think badly of you". This latter rule may vary among characters in the game world (maybe not all are decent folk), can be avoided (don't let them see you), might be delayed (does your reputation precede you or not?), and its effects manifest only in interactions rather than as direct changes to things like player abilities. This allows moral judgments to be inconsistent across the world, with some characters deeming the player good and others deeming her evil.
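A character-level version of the same rule might look something like the following sketch, where each NPC keeps its own opinion and only updates it if the act is witnessed or reaches it as rumor. The gossip model, names, and numbers are, again, purely illustrative:

```python
import random

# A sketch of character-level moral calculus: each NPC holds a private
# opinion, updated only if the act is witnessed or later arrives as rumor.

class NPC:
    def __init__(self, name, location, decent=True):
        self.name = name
        self.location = location
        self.decent = decent    # not all are "decent folk"
        self.opinion = 0        # this character's own judgment of the player

    def judge(self, act):
        if act == "kill_innocent" and self.decent:
            self.opinion -= 25  # "they'll think badly of you"

def perform(act, location, npcs, gossip_chance=0.3):
    """Distributed scoring: variable, avoidable, and possibly delayed."""
    for npc in npcs:
        if npc.location == location:           # avoidable: don't be seen
            npc.judge(act)
        elif random.random() < gossip_chance:  # delayed: reputation spreads
            npc.judge(act)

npcs = [NPC("Ana", "market"), NPC("Bo", "market", decent=False),
        NPC("Cy", "farm")]
perform("kill_innocent", "market", npcs)
for npc in npcs:
    print(npc.name, npc.opinion)  # judgments can disagree across the world
```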

These two approaches aren't mutually exclusive. The central-scorekeeping style usually isn't used only to display a score meter and control player attributes; it typically also affects interaction with characters. We can imagine this as the central scorekeeper sticking a big "good" or "evil" badge on the player, advertising her current moral status, to which other characters in the game react appropriately. Put that way it seems a bit unrealistic, but it does at least allow the other characters to react in different ways.
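A rough sketch of this badge reading, with invented NPC types, reactions, and thresholds, might look like:

```python
# A sketch of central scorekeeping "advertised" to NPCs: one world-level
# score, summarized as a badge, to which each NPC reacts in its own way.

def badge(alignment):
    # The central scorekeeper summarizes the player for everyone at once.
    return "good" if alignment > 25 else "evil" if alignment < -25 else "neutral"

REACTIONS = {
    # NPCs differ in how they respond, but all read the same badge.
    "shopkeeper": {"good": "discount", "neutral": "normal price", "evil": "refuses service"},
    "bandit":     {"good": "attacks",  "neutral": "ignores",      "evil": "offers a job"},
}

alignment = -40
for npc, table in REACTIONS.items():
    print(npc, "->", table[badge(alignment)])
```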

We can sometimes even interpret this central-scorekeeper-and-advertisement mechanism as an approximate representation of what would happen in a particular simplified world with character-level moral calculus. The fact that the scorekeeping is literally centralized doesn't necessarily mean that it produces (or is interpreted as) a representation of a centrally judged moral system. In Black and White, given its small world, simple characters, and fairly unambiguously good and bad actions, the approximation that the player's actions are instantly compiled by a central register that then affixes a badge advertising goodness/evilness is plausibly similar to what we'd get if we simulated each individual villager evaluating the player's actions.

That seems a less likely explanation of why Fable has central moral scorekeeping, instantly known by all NPCs, though. We would have to lean too heavily on "well, news travels fast" to read it as any sort of representation of how ethical judgments work in a society. Here, the moral system really does seem to represent a cosmic notion of good and evil, instantly judged, which is literally what the code is doing. The fact that all NPCs everywhere in the world instantly know the judgment is a bit unrealistic, but from the point of view of representing an ethical system and illustrating its effects, this is actually somewhat interesting. We can think of it as a stylized world presenting a thought experiment: what if there really were absolute good and evil, it were judged instantly, and everyone knew the judgments? (I admit this is probably not the reason Fable did things that way; but maybe they should've run with it.)

These two ways of implementing a moral-calculus system (in-the-world versus in-the-characters) lend themselves naturally to two main ways of viewing such a system representationally. The centralized, morals-in-the-world form has the player interact with a world in which certain principles hold, while the morals-in-the-characters form has her interact with a world whose inhabitants hold certain principles. In other words, the former thrusts the player into an engagement of one sort or another with ethics in a normative sense, whereas the latter thrusts her into an engagement with ethics in a descriptive sense. Thus the game acts as a concrete thought experiment arbitrated by the machine.

If we view games specifically as ways of constructing these thought experiments, how might we set them up?

How can we represent ethics normatively?

A centralized moral calculus asks the player to consider a world in which some concept of ethics "actually" holds. What it means for a normative system to actually hold can vary, however. There are at least three interesting senses, depending on how much the moral judgments affect the rest of the world simulation.

In the most detached sense, only the moral scorekeeper pays attention. The scorekeeper is a flawless judge, evaluating the player's actions according to the system being represented. The world itself is otherwise not influenced by the moral system, although it may be designed to highlight interesting aspects of it. The player can then see what her actions end up doing to the score (which could be more complicated than a good/evil scale, although it need not be), and investigate what actions she'd have to take to get various moral scores. This could help expose nonobvious consequences of the theory: It may be impossible, for example, to do some things the player would like to do while hewing to a particular normative theory; or it may be necessary to do things that seem intuitively immoral (or avoid some that seem moral) in order to get this particular system's scorekeeper to award a high score; or the player may find herself morally judged for actions that she wouldn't have expected to be morally relevant at all. From this perspective, the strangely omniscient character of the moral scoreboard isn't a problem, since it's outside the world, and explicitly intended to give that sort of omniscient, instantaneous, summarized feedback.
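A detached scorekeeper of this sort might be sketched like so, with a pluggable scoring theory. The sample theory here is deliberately crude and invented; the important property is that the world never consults the ledger:

```python
# A sketch of a detached scorekeeper: it observes and judges, but the
# world simulation never reads its ledger. The "theory" is illustrative.

def simple_consequentialist(action, outcome):
    # Judge purely by net effect on others' welfare, ignoring intent.
    return outcome["welfare_change"]

class Scorekeeper:
    def __init__(self, theory):
        self.theory = theory
        self.ledger = []  # could be richer than a single good/evil axis

    def observe(self, action, outcome):
        self.ledger.append((action, self.theory(action, outcome)))

    def score(self):
        return sum(s for _, s in self.ledger)

keeper = Scorekeeper(simple_consequentialist)
# A lie that helps someone scores positively under this theory --
# exactly the kind of nonobvious consequence the player can probe.
keeper.observe("lie", {"welfare_change": +5})
keeper.observe("tell_truth", {"welfare_change": -3})
print(keeper.ledger, keeper.score())
```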

As one step towards integrating the moral calculus more closely with the rest of the game, aspects of a player's moral score could be tied to the gameplay rules, though still without impacting the world simulation itself. For example, immoral actions could function like lives: perform more than a certain number of immoral actions and you lose. Consider the film Liar Liar (1997), whose premise is that the protagonist is literally unable to lie: in gameplay form, the player might lose lives for lying. Alternatively, the moral score could be tied in one way or another to progress through the game, such as requiring a certain score (or a certain score in some sub-component of morality) to undertake certain quests or to beat a level.
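Here's one way such meta-rule hooks might look in code; the limits, action names, and quest gate are all invented for illustration:

```python
# A sketch of tying the moral score to gameplay meta-rules without touching
# the world simulation. All limits, actions, and quest names are invented.

class GameRules:
    MAX_IMMORAL_ACTS = 3        # Liar-Liar style: lie too often and you lose

    def __init__(self):
        self.immoral_acts = 0
        self.honesty_score = 0  # one sub-component of morality

    def record(self, action):
        if action == "lie":
            self.immoral_acts += 1  # immoral acts spent like lives
        else:
            self.honesty_score += 1
        if self.immoral_acts >= self.MAX_IMMORAL_ACTS:
            raise SystemExit("Game over: too many immoral acts")

    def can_start_quest(self, quest):
        # Progress gated on a moral sub-score, not on the simulation.
        if quest == "paragon_trial":
            return self.honesty_score >= 10
        return True

rules = GameRules()
rules.record("tell_truth")
print(rules.can_start_quest("paragon_trial"))  # False: honesty score too low
```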

Finally, the normative principles could directly impact concrete aspects of the world simulation, rather than only the meta-rules of the game built on top of the simulation. Some normative ethical theories make claims about how the world actually operates, and a version of those claims could be literally instantiated. For example, a form of karma (or a theistic analogue, such as "God punishes the wicked") could be implemented by making good or bad things happen to the player depending on her score. To take a fictional example, the Star Wars mythology posits that people's ethical character affects their ability to do concrete things, such as become light or dark Jedi, which Knights of the Old Republic attempts to implement in at least a simplified form.
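A crude karma rule instantiated in the simulation itself might look something like this sketch; the probabilities and events are invented:

```python
import random

# A sketch of a normative claim built into the world simulation: bad
# events become more likely for low-karma players. Numbers are invented.

def world_tick(player_karma):
    # The world itself, not just the meta-rules, responds to moral status:
    # "the wicked are punished" becomes a literal mechanic.
    misfortune_chance = max(0.0, min(0.9, 0.1 - player_karma / 200))
    if random.random() < misfortune_chance:
        return "lightning strikes nearby"
    return "a quiet day"

print(world_tick(player_karma=-80))  # misfortune is now quite likely
```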

How can we represent ethics descriptively?

If the ethical calculus is instead implemented within the game's characters, the game sets up a thought experiment engaging the player with a particular conception of ethics "in the wild". The player is asked to consider what it would be like if people held a particular set of principles and acted on them in particular ways. In this sort of sociological thought experiment, the player is given a world populated by characters who hold various ethical principles and respond based on them. A game could have a world where nobody ever lies, or where some characters are exceedingly selfish, or where people really hate it when you talk loudly, and so on.
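Here's a sketch of such a world, with a couple of invented principles wired directly into characters and no central judge anywhere:

```python
# A sketch of a descriptive setup: characters hold principles and act on
# them; there is no cosmic scorekeeper. Principles/responses are invented.

class Character:
    def __init__(self, name, principles):
        self.name = name
        self.principles = principles  # e.g. {"never_lies", "selfish"}

    def respond(self, request):
        if request == "ask_directions":
            if "selfish" in self.principles:
                return "Pay me first."
            if "never_lies" in self.principles:
                return "Honestly, I don't know."  # truthful even when unhelpful
            return "It's that way... probably."

villagers = [Character("Ana", {"never_lies"}),
             Character("Bo", {"selfish"})]
for v in villagers:
    print(v.name, "->", v.respond("ask_directions"))
```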

A player could then go about evaluating the various principles the characters she finds in the world seem to hold, and what that means. What do characters really think, how do they act, and is it easy to figure out? What do different patterns of interaction result in? Are there unexpected consequences or contradictions? One way of viewing this investigation is that the player is, through interaction, taking abstract principles she might already have some ideas about and unpacking what a world in which people held them might be like. (This might remind one of Nietzsche's suggestion, in the preface of On the Genealogy of Morality, that we should investigate the value that moral values themselves have by considering what effects there would be if people held them.)

* * *

Other approaches entirely are possible, though "moral calculus" might be a less apt name for them: that term fits best for systems where good/evil scores (or something similar) are actually being calculated, whether centrally or in characters' opinions of each other. What would a eudaimonistic view of ethics do in a videogame? Or, what if we made Kant's maxim-universalization actually happen in the game world?
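Just to gesture at how strange the latter could be, here's a purely speculative sketch in which any maxim the player acts on instantly becomes a universal law of the game world:

```python
# A speculative sketch of literalized maxim-universalization: acting on a
# maxim makes every character adopt it too. Entirely invented mechanics.

class World:
    def __init__(self, npc_names):
        self.npcs = list(npc_names)
        self.universal_maxims = set()

    def player_acts(self, maxim):
        # Willing the maxim makes it a universal law of this world.
        self.universal_maxims.add(maxim)
        return [f"{npc} now also does: {maxim}" for npc in self.npcs]

world = World(["Ana", "Bo", "Cy"])
for line in world.player_acts("break promises when convenient"):
    print(line)  # the contradiction becomes concrete: promises lose meaning
```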

Credits: My thinking on this subject has been greatly influenced by discussions with Michael Mateas and Ian Bogost. The idea of focusing on what representational work games do came from Bogost; and the focus on what moral-calculus systems currently do and what else they might do is a perennial interest of Mateas's.