MASS - Multi Agent System Simulator
The simulator is also a message router, in that all agent communications will pass through it. This scheme permits explicit control over network and communication delays. In this way, if we want to simulate a very fast communication path, the simulator may immediately re-send a message to its destination; but if we want to simulate a compromised network, the simulator may wait n pulses before sending the message. This method also allows an agent to broadcast a message to all other agents without explicitly knowing the number of agents that will receive the message.
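The routing-with-delay idea above can be sketched as follows. This is a minimal illustration, not the MASS API: the class, method names (`register`, `send`, `broadcast`, `pulse`), and the single `network_delay` parameter are all assumptions made for the example.

```python
import heapq

class Simulator:
    """Sketch: all messages pass through the simulator, which delays
    delivery by a configurable number of pulses. A delay of 0 models a
    very fast path; a large delay models a compromised network."""

    def __init__(self, network_delay=0):
        self.agents = {}        # agent name -> message callback
        self.pending = []       # heap of (delivery_pulse, seq, dest, msg)
        self.now = 0            # current pulse
        self.seq = 0            # tie-breaker for the heap
        self.network_delay = network_delay

    def register(self, name, callback):
        self.agents[name] = callback

    def send(self, dest, msg):
        # Queue the message for delivery `network_delay` pulses from now.
        heapq.heappush(self.pending,
                       (self.now + self.network_delay, self.seq, dest, msg))
        self.seq += 1

    def broadcast(self, msg):
        # The sender need not know how many agents will receive it.
        for dest in self.agents:
            self.send(dest, msg)

    def pulse(self):
        self.now += 1
        # Deliver every message whose delivery pulse has arrived.
        while self.pending and self.pending[0][0] <= self.now:
            _, _, dest, msg = heapq.heappop(self.pending)
            self.agents[dest](msg)

# Usage: with a 2-pulse delay, a broadcast arrives on the second pulse.
sim = Simulator(network_delay=2)
received = []
sim.register("a", lambda m: received.append(("a", m)))
sim.register("b", lambda m: received.append(("b", m)))
sim.broadcast("hello")
sim.pulse()   # pulse 1: message still in transit
sim.pulse()   # pulse 2: delivered
```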
The simulator's behavior is directed by a queue containing a time-ordered list of events. Each message it receives adds events to or removes events from the queue. At each pulse the simulator selects the appropriate events and realizes their effects (for example, a network may slow down). Only after the effect of each event has been completely determined is the timing pulse sent to each agent.

Primitive actions in TÆMS, called methods, are characterized statistically via discrete probability distributions in three dimensions: quality, cost, and duration. Agents reason about these characteristics when deciding which actions to perform and at what time.
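Such a distribution-characterized method might look like the sketch below. The `{value: probability}` representation, the `Method` class, and the `sample` helper are illustrative assumptions, not the actual TÆMS data structures.

```python
import random

class Method:
    """A TAEMS-style primitive action characterized by discrete
    probability distributions over quality, cost, and duration."""
    def __init__(self, name, quality, cost, duration):
        self.name = name
        self.quality = quality      # e.g. {10: 0.8, 0: 0.2}
        self.cost = cost            # e.g. {5: 1.0}
        self.duration = duration    # e.g. {3: 0.5, 6: 0.5}

def sample(dist, rng):
    """Draw one outcome from a discrete distribution {value: prob}."""
    r = rng.random()
    cumulative = 0.0
    for value, prob in dist.items():
        cumulative += prob
        if r < cumulative:
            return value
    return value  # guard against floating-point round-off

# One simulated execution outcome, drawn from each distribution.
rng = random.Random(42)
m = Method("retrieve-data", {10: 0.8, 0: 0.2}, {5: 1.0}, {3: 0.5, 6: 0.5})
q, c, d = sample(m.quality, rng), sample(m.cost, rng), sample(m.duration, rng)

# Two generators with the same seed draw identical sequences.
rng_a, rng_b = random.Random(7), random.Random(7)
draws_a = [sample(m.duration, rng_a) for _ in range(5)]
draws_b = [sample(m.duration, rng_b) for _ in range(5)]
```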
When an agent wants to execute a method, it sends a message to the simulator with the method name. The simulator then retrieves the method from the objective TÆMS database. Agents schedule, plan, and interact using a subjective view of the methods. The subjective view differs from the objective view because agents may have an imperfect model of what will actually happen, performance-wise, when the method is executed. For example, for a method that retrieves data from a remote site via the WWW, the remote site's performance characteristics may have changed since the agent learned them, so the agent's view of that method's execution behavior, namely its duration, is imperfect. In a simulation environment, both the subjective and objective method views are created by the simulator / task generator, and the objective, or true, views of the methods are stored in the simulator's TÆMS database. Thus when an execution message arrives, the simulator must obtain the objective view of the method before any other steps can be taken. The simulator's first step is to calculate the cost, duration, and quality values that will be "produced" by this execution. The duration (in pulse time) is used to create an event that sends the results, in terms of cost and quality, back to the agent. Any event realized before this newly queued event may change the results of the queued event's "execution." For example, a network breakdown event at the front of the queue may increase the duration of all the simulated executions that follow it in the queue by 100%, delaying the corresponding method completion events.
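The execution flow just described can be sketched as an event queue in which a completion event is scheduled `duration` pulses in the future, and an earlier event (here, a network breakdown) rewrites events still pending in the queue. The class, the event layout, and the `apply_network_breakdown` name are assumptions for illustration, not the MASS implementation.

```python
import heapq

class EventQueue:
    """Sketch of the simulator's time-ordered event queue."""

    def __init__(self):
        self.now = 0
        self.heap = []   # entries: [delivery_pulse, seq, kind, payload]
        self.seq = 0

    def schedule(self, delay, kind, payload):
        ev = [self.now + delay, self.seq, kind, payload]
        self.seq += 1
        heapq.heappush(self.heap, ev)
        return ev

    def apply_network_breakdown(self, factor=2):
        # An event realized now can change queued events: here the
        # remaining time of every pending completion is doubled
        # (a 100% duration increase), then the heap order is restored.
        for ev in self.heap:
            if ev[2] == "completion":
                ev[0] = self.now + (ev[0] - self.now) * factor
        heapq.heapify(self.heap)

# A method's sampled duration (4 pulses) schedules its completion event;
# a breakdown realized before that event delays it to pulse 8.
q = EventQueue()
done = q.schedule(delay=4, kind="completion",
                  payload={"quality": 10, "cost": 5})
q.apply_network_breakdown()
```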
A similar interaction effect arises from interactions between the methods themselves. For example, if one method enables another and the first method fails, then the second method may no longer be executed, or executing it will produce no result. If both methods are already "scheduled" in the event queue and the first method fails, then the event associated with the second method's execution must be changed.
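A minimal sketch of that adjustment, assuming a simple list-of-dicts event queue and an `enabled_by` field (both invented for this example): when the enabling method fails, the dependent method's queued completion is rewritten to produce no quality.

```python
def on_method_failed(failed, queue):
    """When `failed` fails, any queued completion of a method it
    enables is changed so that its execution yields no result."""
    for ev in queue:
        method = ev["method"]
        if ev["kind"] == "completion" and failed in method.get("enabled_by", []):
            # The cost is still paid, but no quality is produced.
            ev["result"] = {"quality": 0, "cost": ev["result"]["cost"]}

# Method B's completion is already scheduled when its enabler A fails.
queue = [
    {"kind": "completion",
     "method": {"name": "B", "enabled_by": ["A"]},
     "result": {"quality": 10, "cost": 5}},
]
on_method_failed("A", queue)
```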
The random generator used to calculate the cost, duration, and quality values is seeded either by a fixed parameter or by the current time. Seeding with a fixed parameter yields a deterministic simulation: our random generator produces the same sequence of values whenever the same seed is used. The goal is to compare different agent coordination mechanisms on the same problem using the same simulation. To test a particular coordination mechanism on several problems, we instead use a seeding based on the current time, which guarantees that two simulations do not produce the same solution.
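The two seeding modes can be sketched as below; the `make_rng` helper and its parameters are illustrative assumptions, not the MASS interface.

```python
import random
import time

def make_rng(deterministic=True, seed=1234):
    """Fixed seed -> reproducible runs for comparing coordination
    mechanisms on the same simulation; time-based seed -> each run
    poses a different problem."""
    if deterministic:
        return random.Random(seed)
    return random.Random(time.time_ns())

# Two deterministic generators produce identical draw sequences.
rng_a = make_rng(deterministic=True)
rng_b = make_rng(deterministic=True)
draws_a = [rng_a.random() for _ in range(5)]
draws_b = [rng_b.random() for _ in range(5)]
```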
Publications

Vincent, Regis; Horling, Bryan; and Lesser, Victor. An Agent Infrastructure to Build and Evaluate Multi-Agent Systems: The Java Agent Framework and Multi-Agent System Simulator. Lecture Notes in Artificial Intelligence: Infrastructure for Agents, Multi-Agent Systems, and Scalable Multi-Agent Systems, Volume 1887, Wagner and Rana (eds.), Springer, pp. 102-127. 2001.
Horling, Bryan; Lesser, Victor; and Vincent, Regis. Multi-Agent System Simulation Framework. 16th IMACS World Congress 2000 on Scientific Computation, Applied Mathematics and Simulation. 2000.
Vincent, Regis; Horling, Bryan; and Lesser, Victor. Experiences in Simulating Multi-Agent Systems Using TAEMS. The Fourth International Conference on Multi-Agent Systems (ICMAS 2000), AAAI. 2000.
Lesser, Victor; Atighetchi, Michael; Benyo, Brett; Horling, Bryan; Raja, Anita; Vincent, Regis; Wagner, Thomas; Xuan, Ping; and Zhang, Shelley XQ. The Intelligent Home Testbed. Proceedings of the Autonomy Control Software Workshop (Autonomous Agent Workshop). 1999.
Lesser, Victor; Atighetchi, Michael; Benyo, Brett; Horling, Bryan; Raja, Anita; Vincent, Regis; Wagner, Thomas; Xuan, Ping; and Zhang, Shelly XQ. A Multi-Agent System for Intelligent Environment Control. Computer Science Technical Report 1998-40, University of Massachusetts. 1999.
Horling, Bryan; and Lesser, Victor. A Reusable Component Architecture for Agent Construction. CMPSCI Technical Report 1998-30, University of Massachusetts/Amherst. 1998.
Vincent, Regis; Horling, Bryan; Wagner, Tom; and Lesser, Victor. Survivability Simulator for Multi-Agent Adaptive Coordination. International Conference on Web-Based Modeling and Simulation, Volume 30, Number 1, Fishwick, P., Hill, D. and Smith R. (eds.), The Society for Computer Simulation International, pp. 114-119. 1998.