Software
Multiagent decision process toolbox
The Multiagent Decision Process (MADP) Toolbox is a free C++
software toolbox for scientific research in
decision-theoretic planning and learning in multiagent
systems (MASs). It is jointly developed by Frans
Oliehoek and me, within the DecPUCS project. We
use the term MADP to refer to a collection of
mathematical models for multiagent planning:
multiagent Markov decision processes (MMDPs),
decentralized MDPs (Dec-MDPs), decentralized partially
observable MDPs (Dec-POMDPs), partially observable
stochastic games (POSGs), etc.
The toolbox is designed to be rather general,
potentially providing support for all these models,
although so far most effort has been put into planning
algorithms for discrete Dec-POMDPs. It provides
classes modeling the basic data types of MADPs (e.g.,
actions, observations) as well as derived types
for planning (observation histories, policies,
etc.). It also provides base classes for planning
algorithms and includes several applications using the
provided functionality, for instance applications
that use JESP or brute-force search to solve .dpomdp
files for a particular planning horizon. In this way,
Dec-POMDPs can be solved directly from the command
line. Furthermore, several utility applications are
provided, for instance one that empirically estimates
a joint policy's control quality by simulation.