Reinforcement learning for declarative optimization-based drama management


“Reinforcement learning for declarative optimization-based drama management” by Mark J. Nelson, David L. Roberts, Charles L. Isbell Jr., and Michael Mateas. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems, 2006, pp. 775–782.

Abstract

A long-standing challenge in interactive entertainment is the creation of story-based games with dynamically responsive story-lines. Such games are populated by multiple objects and autonomous characters, and must provide a coherent story experience while giving the player freedom of action. To maintain coherence, the game author must provide mechanisms for modifying the world in reaction to the player's actions, such as directing agents to act in particular ways (overriding or modulating their autonomy) or causing inanimate objects to reconfigure themselves “behind the player's back”.

Declarative optimization-based drama management is one mechanism for allowing the game author to specify a drama manager (DM) to coordinate these modifications, along with a story the DM should aim for. The premise is that the author can easily describe the salient properties of the story while leaving it to the DM to react to the player and direct agent actions. Although promising, early search-based approaches have been shown to scale poorly. Here, we improve upon the state of the art by using reinforcement learning and a novel training paradigm to build an adaptive DM that manages the tradeoff between exploration and story coherence. We present results on two games and compare our performance with other approaches.

BibTeX entry:

@inproceedings{DODM:AAMAS06,
  author    = {Mark J. Nelson and David L. Roberts and Charles L. {Isbell Jr.} and Michael Mateas},
  title     = {Reinforcement learning for declarative optimization-based drama management},
  booktitle = {Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems},
  pages     = {775--782},
  year      = {2006}
}
