Publication - Transition-Independent Decentralized Markov Decision Processes

Authors: Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; and Goldman, Claudia V.
Title: Transition-Independent Decentralized Markov Decision Processes
Abstract: There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
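The key structural property the abstract names, transition independence, means each agent's local transitions depend only on its own state and action, so the joint transition probability factors into a product, while only the global reward couples the agents. A minimal sketch of that factorization follows; all states, actions, probabilities, and reward values here are invented for illustration and are not taken from the paper.

```python
import itertools

# Toy local transition tables for two agents (illustrative values).
# P_i[(s, a)][s'] = probability that agent i moves from s to s' on action a.
P1 = {(0, 'stay'): {0: 1.0}, (0, 'go'): {0: 0.2, 1: 0.8},
      (1, 'stay'): {1: 1.0}, (1, 'go'): {1: 1.0}}
P2 = {(0, 'stay'): {0: 1.0}, (0, 'go'): {0: 0.5, 1: 0.5},
      (1, 'stay'): {1: 1.0}, (1, 'go'): {1: 1.0}}

def joint_transition(s, a, s_next):
    """Joint transition probability under transition independence:
    the product of the two agents' local transition probabilities."""
    (s1, s2), (a1, a2), (s1n, s2n) = s, a, s_next
    return P1[(s1, a1)].get(s1n, 0.0) * P2[(s2, a2)].get(s2n, 0.0)

def joint_reward(s1, s2):
    """Global reward coupling the otherwise independent agents:
    here, a bonus only when both agents reach state 1 (illustrative)."""
    return 5.0 if (s1, s2) == (1, 1) else 0.0

# Sanity check: the factored joint transition is a proper distribution.
for s in itertools.product([0, 1], repeat=2):
    for a in itertools.product(['stay', 'go'], repeat=2):
        total = sum(joint_transition(s, a, sn)
                    for sn in itertools.product([0, 1], repeat=2))
        assert abs(total - 1.0) < 1e-9
```

Because neither `P1` nor `P2` reads the other agent's state or action, the agents interact only through `joint_reward`, which is what makes this class easier to solve than general decentralized MDPs.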
Keywords: Coordination, Distributed MDP
Publication: University of Massachusetts/Amherst Computer Science Technical Report, Num: 02-50
Date: 2002
Sources: PS: /Documents/Becker_Transition_2002.ps
PDF: /Documents/Becker_Transition_2002.pdf
Reference: Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; and Goldman, Claudia V. Transition-Independent Decentralized Markov Decision Processes. University of Massachusetts/Amherst Computer Science Technical Report, Number 02-50. 2002.
bibtex:
@techreport{Becker-232,
  author      = "Raphen Becker and Shlomo Zilberstein and Victor
                 Lesser and Claudia V. Goldman",
  title       = "{Transition-Independent Decentralized Markov
                 Decision Processes}",
  institution = "University of Massachusetts/Amherst Computer
                 Science",
  type        = "Technical Report",
  number      = "02-50",
  year        = "2002",
  url         = "http://mas.cs.umass.edu/paper/232",
}