The Categorical Imperator

Notes towards Kantian gaming

In one formulation of his Categorical Imperative, Immanuel Kant says that one should "act only according to that maxim whereby you can at the same time will that it should become a universal law".

One interpretation: consider a proposed course of action, and imagine a world in which everyone acts by the principles it demonstrates. Is such a world conceivable without encountering contradictions? If not, the action is immoral.

Precisely what kinds of contradictions are we looking for? There's been some dispute over that. One interesting kind, weaker than outright logical contradiction, is what Christine Korsgaard calls a "practical contradiction": "the contradiction is that your maxim would be self-defeating if universalized: your action would become ineffectual for the achievement of your purpose if everyone (tried to) use it for that purpose. Since you propose to use that action for that purpose at the same time as you propose to universalize the maxim, you in effect will the thwarting of your own purpose." For example, lying for your benefit creates a practical contradiction: if everyone lied when it benefited them, lying would cease to be effective, since its effectiveness depends on people believing you, and nobody would believe you if lying whenever it benefited you were standard practice. (For more on this, and on other ways to interpret what Kant means by contradiction, see Korsgaard's "Kant's Formula of Universal Law" in Creating the Kingdom of Ends, 1996.)

Since this view of morality is itself defined via a thought experiment ("what would it be like if this maxim were universalized?"), it might be a good starting point for implementing a playable morality thought experiment. One way to do so: whenever the player takes an action, abstract a maxim assumed to lie behind it, and make that maxim part of the game rules, or of the rules followed by the other characters. So if a player lies, from that point onwards lying is something other characters can do, and believing lies is something they no longer do. We don't have to determine whether a contradiction actually occurs, and we don't spit out "moral" or "immoral" judgments about the player's actions. We just implement the universalization; then, if the player takes an action that would result in a practical contradiction—i.e. universalization would thwart the action's purpose—that gets represented in the game world by universalization in fact thwarting the action's purpose. We don't even necessarily need to know what the player's purpose in taking a particular action was.
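To make the mechanic concrete, here is a minimal sketch in Python of what that core loop might look like. Everything in it (the `Maxim`, `Action`, and `World` types, the `abstract_maxim` function) is a hypothetical illustration invented for this post, not code from any existing game or engine:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Maxim:
    """A principle abstracted from a player action, e.g. 'lie'."""
    name: str


@dataclass(frozen=True)
class Action:
    kind: str          # e.g. "lie", "steal"
    target: str = ""   # e.g. "the price of bread"


@dataclass
class World:
    # Maxims currently in force: the other characters may now act by
    # them, and their behavior adjusts to everyone acting by them.
    universal_maxims: set = field(default_factory=set)

    def universalize(self, maxim: Maxim) -> None:
        self.universal_maxims.add(maxim)


def abstract_maxim(action: Action) -> Maxim:
    """Map a concrete action to the maxim assumed to lie behind it.
    (This is the hard part; granularity is discussed below.)"""
    return Maxim(action.kind)  # crude placeholder: maxim = action type


def on_player_action(world: World, action: Action) -> None:
    # ...carry out the action's ordinary in-game effects here...
    world.universalize(abstract_maxim(action))  # then make it a rule


world = World()
on_player_action(world, Action(kind="lie", target="the price of bread"))
assert Maxim("lie") in world.universal_maxims
```

Note that in this sketch `universalize` is the whole moral machinery: no contradiction-checking, no judgment, just the maxim joining the rules everyone plays by.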

Several difficulties remain. The player is not actually giving us maxims, but taking actions, from which we need to abstract the maxims we assume them to be acting by—and those could sit at any level of granularity. If the player lies about the price of bread, does this mean that lies about bread are now universal? That all lies are universal? Or something odd, like: all bread now has the price the player said it did? Obviously some abstractions get closer to the relevant point than others, but it's not clear how to infer them automatically. Some simplified method will be needed, perhaps tailored to the character of the game: a flippant, highly caricatured game could do well with an abstraction method that extracts grossly overbroad maxims from everything the player does, while a more sober game might want a more conservative method. Fortunately, we don't have to solve this for every possible action, since games can also constrain the types of actions available to players.
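As an illustration of the granularity problem, here is a sketch of the candidate maxims a single lie about bread might generate, with a crude tone-based selection rule. The names and the granularity ladder are invented for the example; a real game would presumably author something like this per action type:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LieAboutPrice:
    item: str
    claimed_price: int


def candidate_maxims(action: LieAboutPrice) -> list:
    """Candidate maxims behind a single lie, from narrow to overbroad."""
    return [
        f"lie about the price of {action.item}",        # narrow
        "lie about prices",                             # broader
        "lie whenever it benefits you",                 # grossly overbroad
        f"{action.item} costs {action.claimed_price}",  # the odd reading
    ]


def extract_maxim(action: LieAboutPrice, tone: str = "sober") -> str:
    """A flippant game extracts an overbroad maxim; a sober one, a narrow one."""
    options = candidate_maxims(action)
    return options[2] if tone == "flippant" else options[0]


print(extract_maxim(LieAboutPrice("bread", 50), tone="flippant"))
# -> lie whenever it benefits you
```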

The other particularly difficult problem is how to interpret the universalization itself: what, in fact, would happen if this maxim were universalized? Some parts are fairly easy, such as simply letting the other characters do everything the player seems to have implied was acceptable to do. Others, however, such as inferring that characters should no longer believe lies once lying becomes universal, require quite a bit of common-sense reasoning, which is a nontrivial AI problem. A workable game will therefore need careful design: it must be set in a world abstracted enough that implementing the universalization stays tractable (not AI-complete).

It's probably best (at least for a first such game) to avoid complex inference altogether, and go for a small world with a small number of actions, where the game author can write down all the relationships, implications, and sensible abstractions by hand.
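Here is a sketch of what that hand-authored approach could look like: a small table mapping each maxim to its universalized consequences, with the "nobody believes lies anymore" implication from the previous paragraph written out by hand rather than inferred. The flag names and consequences are hypothetical:

```python
# Maxim -> hand-authored changes to NPC behavior once universalized.
UNIVERSALIZED_EFFECTS = {
    "lie": {
        "npcs_may_lie": True,          # everyone now lies when it pays
        "npcs_believe_speech": False,  # ...so nobody believes anyone:
                                       # the common-sense implication,
                                       # written down by the author
    },
    "steal": {
        "npcs_may_steal": True,
        "npcs_guard_property": True,   # authored consequence: everyone
                                       # hoards, and trade suffers
    },
}


def apply_universalization(npc_behavior: dict, maxim: str) -> dict:
    """Merge one maxim's authored consequences into the NPC behavior flags."""
    return {**npc_behavior, **UNIVERSALIZED_EFFECTS.get(maxim, {})}


behavior = {"npcs_may_lie": False, "npcs_believe_speech": True}
behavior = apply_universalization(behavior, "lie")
# NPCs now lie themselves, and no longer believe what they're told: the
# player's own lies have become ineffectual, which is exactly the
# practical contradiction playing out in the game world.
```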

* * *

The challenge, then, which I might take up myself but which anyone is welcome to beat me to: come up with a game world that illustrates interesting aspects of the maxim-universalization thought experiment, while staying simple enough to actually implement.

Follow-up: I ended up taking this in a somewhat more formalist, game-mechanics-oriented direction, though the original idea still interests me (as the published version briefly elaborates).

Published version: Mark J. Nelson (2012). Prototyping Kant-inspired reflexive game mechanics. In Proceedings of the 2012 FDG Workshop on Research Prototyping in Games.

See also Mirjam Palosaari Eladhari's two-part writeup (one, two) of the workshop at which this was presented, including a summary of my talk, and thoughts from her and others on how this work might be extended towards modeling non-utilitarian ethics in games in a meaningful way.