Reinforcement learning for declarative optimization-based drama management

Mark J. Nelson, David L. Roberts, Charles L. Isbell, Michael Mateas (2006). Reinforcement learning for declarative optimization-based drama management. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 775–782.

Abstract

A long-standing challenge in interactive entertainment is the creation of story-based games with dynamically responsive story-lines. Such games are populated by multiple objects and autonomous characters, and must provide a coherent story experience while giving the player freedom of action. To maintain coherence, the game author must provide for modifying the world in reaction to the player's actions, directing agents to act in particular ways (overriding or modulating their autonomy), or causing inanimate objects to reconfigure themselves “behind the player's back”. Declarative optimization-based drama management is one mechanism for allowing the game author to specify a drama manager (DM) to coordinate these modifications, along with a story the DM should aim for. The premise is that the author can easily describe the salient properties of the story while leaving it to the DM to react to the player and direct agent actions. Although promising, early search-based approaches have been shown to scale poorly. Here, we improve upon the state of the art by using reinforcement learning and a novel training paradigm to build an adaptive DM that manages the tradeoff between exploration and story coherence. We present results on two games and compare our performance with other approaches.
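To make the setup concrete, below is a minimal, hypothetical sketch of the core idea: a DM trained against a simulated player to choose, at each story state, an action that steers play toward stories scoring well under an author-supplied evaluation function. Everything here is an illustrative assumption rather than the paper's actual method: the plot points, the toy evaluation function, the toy player model, and the simple Monte Carlo control update all stand in for the paper's games, evaluation features, and training paradigm.

```python
# Hypothetical sketch: a drama manager learned by episodic RL
# (epsilon-greedy Monte Carlo control) over a toy story world.
# All names and models here are illustrative assumptions.
import random
from collections import defaultdict

PLOT_POINTS = ["find_key", "meet_wizard", "open_door", "slay_dragon"]

def evaluate_story(history):
    """Toy author-supplied evaluation: reward stories that end with
    slay_dragon and that place find_key before open_door."""
    score = 0.0
    if history and history[-1] == "slay_dragon":
        score += 1.0
    if "find_key" in history and "open_door" in history:
        if history.index("find_key") < history.index("open_door"):
            score += 1.0
    return score / 2.0

def simulated_player(history, hinted):
    """Toy player model: picks a remaining plot point, with a bias
    toward the one the DM hinted at."""
    remaining = [p for p in PLOT_POINTS if p not in history]
    if hinted in remaining and random.random() < 0.7:
        return hinted
    return random.choice(remaining)

def train_dm(episodes=20000, alpha=0.1, epsilon=0.1):
    # State: the sequence of plot points seen so far.
    # DM action: which plot point to hint next (None = stay hands-off).
    Q = defaultdict(float)
    actions = PLOT_POINTS + [None]
    for _ in range(episodes):
        history, trajectory = [], []
        while len(history) < len(PLOT_POINTS):
            state = tuple(history)
            if random.random() < epsilon:
                hint = random.choice(actions)
            else:
                hint = max(actions, key=lambda a: Q[(state, a)])
            history.append(simulated_player(history, hint))
            trajectory.append((state, hint))
        # Delayed reward: story quality is only known once the story ends.
        reward = evaluate_story(history)
        for state, hint in trajectory:
            Q[(state, hint)] += alpha * (reward - Q[(state, hint)])
    return Q

if __name__ == "__main__":
    Q = train_dm()
    # Greedy rollout with the learned DM policy.
    history = []
    while len(history) < len(PLOT_POINTS):
        state = tuple(history)
        hint = max(PLOT_POINTS + [None], key=lambda a: Q[(state, a)])
        history.append(simulated_player(history, hint))
    print(history, evaluate_story(history))
```

Even in this toy form, the sketch mirrors the tradeoff the abstract describes: training against a stochastic player model forces the DM to balance intervening for story coherence against leaving the player free to act.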

