University of Massachusetts Project Participants
Victor Lesser (Principal Investigator)
Daniel Corkill (Co-Principal Investigator)
Chongjie Zhang (Post-Doctoral Research Associate)
Bruno da Silva (Graduate Research Assistant)
Yoonheui Kim (Graduate Research Assistant)
Hala Mostafa (Graduate Research Assistant)
Huzaifa Zafar (Graduate Research Assistant)
Daniel Garant (Undergraduate Research Assistant)
Kirby Seitz (Undergraduate Research Assistant)
Maryam Esmaeili (Visiting Graduate Student)
Torben Jess (Graduate Student)
University of Michigan Project Participants
Overview of Project
The project focused on developing organizationally adept software agents (OAAs) that can reorient their local activities based on their interpretation of organizational intent, allowing emergent and adaptive organizational behavior within designed organizations. This research addressed the scaling issues involved in constructing multi-agent systems by adding new capabilities to agents so that they can operate effectively in an organizational context, including the ability to modify supplied organization guidelines should those guidelines become ineffective. An OAA is not only aware that it is part of an agent organization and of its role(s) in that organization, but can also assess how well it is fulfilling its organizational responsibilities and proactively adapt its behavior to better meet organizational needs. OAAs evaluate their behavior based not only on their (agent-centric) self-interests, but also on their (organization-centric) responsibilities to one another and their (social-centric) willingness to perform activities requested by other agents. One of the novel ideas that we explored was the use of annotated organization guidelines that provide performance expectations; an OAA can use these expectations to improve its local decision-making and to detect when its organization guidelines are no longer appropriate for the current environment. We have also explored strategies for how OAAs can operate when guidelines are no longer appropriate, which involves adapting both performance expectations and guidelines.
Specific Objectives
At the heart of our OAA architecture is an event-driven, belief-desire-intention (BDI)-like operational decision-making engine that can adjust its decisions when provided with parameterized role-priority assignments specified in organization guidelines. The engine represents current organization guidelines and performance expectations as belief values, and these beliefs can be incrementally learned or explicitly adapted as a result of the agent's experiences.
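As a rough illustration of this idea, the following minimal Python sketch shows how an annotation-supplied expectation might seed a belief value that is then incrementally updated from experience. The class and parameter names are hypothetical and the update rule (an exponential moving average) is only one simple possibility, not the OAA engine's actual representation.

```python
class ExpectationBelief:
    """A belief about one annotated performance expectation (e.g., the expected
    utility rate of a role). Seeded from a guideline annotation and updated
    incrementally from the agent's own experience."""

    def __init__(self, seeded_value, learning_rate=0.1):
        self.value = seeded_value            # initial estimate from the annotation
        self.learning_rate = learning_rate   # how quickly experience overrides the seed

    def update(self, observation):
        # Exponential moving average: recent experience gradually
        # replaces the designer-supplied expectation.
        self.value += self.learning_rate * (observation - self.value)


# Hypothetical usage: a guideline annotated with an expected utility rate of 12.0
# is refined as the agent observes its own performance over several episodes.
belief = ExpectationBelief(seeded_value=12.0)
for observed_utility in [10.5, 9.8, 11.2]:
    belief.update(observed_utility)
print(round(belief.value, 2))
```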
An OAA adjusts its behavior when given annotated organization guidelines. More importantly, it can also determine when the supplied guidelines have become ineffective and proactively adapt its behavior to better achieve organizational objectives. A central OAA tenet is a clear separation between operational decision-making (the detailed moment-to-moment behavior decisions made by an agent) and organizational control (longer-term directives designed using estimates of environment and agent characteristics and expressed to agents as annotated guidelines that bias and inform their operational decision-making). This separation enables the OAA to stop following guidelines when the estimates used in their design are incomplete or incorrect or when the environment changes over time, and to propose and negotiate agreements with other OAAs to replace such guidelines. Our OAA architecture: 1) allows agents to operate reasonably without organization guidelines; 2) uses belief values in operational decision-making that are updated by experience and can be seeded by expectations conveyed in guideline annotations; 3) assesses the appropriateness of guidelines based on deviations from annotated estimates developed during their design; and 4) can negotiate agreements to replace inappropriate guidelines. We have used our agent architecture to implement call-center OAAs that use fire brigades under their control to extinguish fires in RoboCup Rescue scenarios. RoboCup Rescue is a detailed simulator of fires burning in an urban setting.
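To make item 3 concrete, the sketch below shows one simplified way an agent might flag a guideline whose observed performance deviates persistently from its annotated expectation. The statistic and threshold are illustrative assumptions, not the project's actual appropriateness criteria.

```python
from statistics import mean, pstdev

def guideline_seems_inappropriate(expected, observations,
                                  z_threshold=2.0, min_samples=5):
    """Flag a guideline when observed performance deviates persistently from
    the performance expectation carried in its annotation."""
    if len(observations) < min_samples:
        return False                         # too little evidence to judge
    obs_mean = mean(observations)
    obs_sd = pstdev(observations) or 1e-9    # guard against zero spread
    deviation = abs(obs_mean - expected) / obs_sd
    return deviation > z_threshold

# Hypothetical usage: the annotation expected roughly 12 units of utility per
# step, but recent experience is far lower, so the guideline is questioned.
print(guideline_seems_inappropriate(12.0, [6.1, 5.8, 7.0, 6.4, 5.5]))
```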
A major challenge that we faced from the project's inception was the difficulty of using the RoboCup Rescue simulator as an experimental platform in which we could run and analyze a significant number of controlled, repeatable experiments involving large numbers of agents operating in an ongoing firefighting environment. Such experiments are needed to understand the effectiveness of our OAA agents and to test specific hypotheses about the environments in which OAA agents are most effective.
During the final year of the project we worked extensively to address a number of problems and limitations in the use of the simulator that we had identified previously. For example, the city maps distributed with the RoboCup Rescue simulator are not expansive enough to require large numbers of call-center OAAs, and the size and spatial distribution of buildings in these maps are difficult to change. We created a series of tools for generating synthetic maps/scenarios that allow us to control simulator environmental parameters in fine detail and to create sizable maps/scenarios that require larger numbers of call centers and fire brigades. We also adapted the simulator to shift its original emphasis on agents responding to a single significant event toward managing ongoing dynamic environments in which new fires occur at various locations throughout the entire duration of an experimental scenario. With this latter emphasis, OAAs have an ongoing (but potentially changing) firefighting workload in which organization provides advantages over immediate and reactive local decision-making. We also developed automated experimental-support tools, integrated with a structured database, that facilitate running large numbers of experimental RoboCup Rescue settings and performing their statistical analysis. As part of this set of tools, we developed additional tools for automating the generation of organization guidelines and their expectation annotations. All of these tools became very important as we began to explore the space of very different scenarios and different agent capabilities.
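To give a flavor of the kind of environmental control involved, the sketch below shows a hypothetical scenario specification for synthetic map/scenario generation. The parameter names and values are purely illustrative assumptions, not the actual schema of our tools.

```python
from dataclasses import dataclass
import random

@dataclass
class ScenarioSpec:
    """Illustrative parameters for generating a synthetic map/scenario."""
    map_width: int = 4000            # meters
    map_height: int = 4000
    num_buildings: int = 2500
    num_call_centers: int = 8        # one OAA per call center
    brigades_per_center: int = 4
    ignition_rate: float = 0.02      # expected new fires per simulation step
    duration: int = 1000             # simulation steps

def sample_ignition_times(spec: ScenarioSpec, rng: random.Random):
    """Spread new fire ignitions over the whole run, rather than concentrating
    them in a single initial event."""
    expected_fires = int(spec.ignition_rate * spec.duration)
    return sorted(rng.randrange(spec.duration) for _ in range(expected_fires))

# Hypothetical usage: generate the ignition schedule for one scenario.
rng = random.Random(42)
print(sample_ignition_times(ScenarioSpec(), rng)[:5])
```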
As we came to better understand the detailed behavior and performance of our agents, we made significant modifications to some of the algorithms in the OAA architecture that we had developed earlier in the project. These modifications involved generating better annotations (and the expectations contained in them) and new approaches to reasoning about expectations and performance. As part of this effort, we revamped our approach to multi-agent (distributed) opportunity-cost estimation for settings where agents loan their resources to other agents or perform tasks on their behalf. Opportunity cost is one of the key factors that an OAA uses in making decisions. We developed a much more stable and decentralized approach to estimating the local opportunity cost associated with committing resources to goals; it requires significantly less knowledge of the activities of other agents and has the added benefit of reducing communication. The approach is based on each agent keeping a history window of its task outcomes (utility achieved, duration of activities, and resources used) and of successful resource requests from other agents (including the associated prorated utility and the duration of use of the agent's resources by another agent). The OAA then uses this history to estimate the opportunity cost of potentially taking on a specific task by running a number of local "what if" simulations of the effects of doing so given its current tasks and resource availability.
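The following is a highly simplified Python sketch of this history-based idea. The data fields, the window size, and the single-rate "what if" estimate are placeholder assumptions; the OAA's actual reasoning over its current tasks and resource availability is considerably richer.

```python
from collections import deque

class OpportunityCostEstimator:
    """Keeps a sliding history of local task outcomes and resource loans to
    other agents, and uses it to estimate the opportunity cost of committing
    resources to a new candidate task."""

    def __init__(self, window=50):
        # Each entry: (utility achieved, duration, resources used)
        self.history = deque(maxlen=window)

    def record_outcome(self, utility, duration, resources_used):
        self.history.append((utility, duration, resources_used))

    def utility_rate_per_resource(self):
        total_utility = sum(u for u, _, _ in self.history)
        total_resource_time = sum(d * r for _, d, r in self.history) or 1
        return total_utility / total_resource_time

    def opportunity_cost(self, duration, resources_needed):
        # "What if" estimate: the utility these resources would be expected to
        # earn elsewhere over the same duration, based only on the agent's own
        # recent history (no global knowledge of other agents is required).
        return self.utility_rate_per_resource() * duration * resources_needed

# Hypothetical usage: a past local task and a past resource loan inform the
# estimated cost of committing two fire brigades for eight time steps.
est = OpportunityCostEstimator()
est.record_outcome(utility=30, duration=10, resources_used=2)
est.record_outcome(utility=12, duration=5, resources_used=1)
print(round(est.opportunity_cost(duration=8, resources_needed=2), 2))
```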
We also developed new mechanisms by which OAAs adapt their organization guidelines by generating agreements among agents to change responsibilities. Not only can OAAs adapt by agreeing to the open-ended transfer of resources to other agents, but they can now also adjust the class of events to which they will react (for RoboCup Rescue, the regions for which an agent has primary firefighting responsibility). In order to estimate how a candidate agreement will affect the overall performance of the organization, we had to develop techniques that estimate the net utility gain (or loss) of the proposed agreement. This work was done in collaboration with Professor Xiaoqin Zhang of the University of Massachusetts Dartmouth.
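The acceptance decision for a candidate agreement can be thought of along the lines of the sketch below. The function names and the simple gain-minus-loss form are illustrative assumptions only; the estimation techniques developed with Professor Zhang are more involved.

```python
def net_gain_of_agreement(proposer_gain_estimate, responder_loss_estimate,
                          transfer_overhead=0.0):
    """Estimated change in organizational utility if the agreement is adopted:
    what the proposing agent expects to gain, minus what the responding agent
    expects to give up (e.g., its opportunity cost), minus any overhead of
    shifting resources or responsibilities."""
    return proposer_gain_estimate - responder_loss_estimate - transfer_overhead

def accept_agreement(proposer_gain_estimate, responder_loss_estimate,
                     transfer_overhead=0.0, margin=0.0):
    # Adopt the agreement only if the organization as a whole is estimated
    # to come out ahead by at least the required margin.
    return net_gain_of_agreement(proposer_gain_estimate,
                                 responder_loss_estimate,
                                 transfer_overhead) > margin

# Hypothetical usage: transferring a fire brigade is expected to gain 15 units
# for the requester, cost the lender 9 (its opportunity cost), and incur 2 in
# overhead, so the agreement is accepted.
print(accept_agreement(15.0, 9.0, transfer_overhead=2.0))
```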
As part of this research, we recognized and began to identify the characteristics of the environment and of agent reasoning capabilities that are necessary for the OAA approach, and organizations in general, to be effective. We have been exploring the idea of an "organizational sweet spot," where the resources available and the important tasks requiring them are in near balance (where the system is neither severely over-resourced nor overloaded). It is in this sweet spot that organization has the greatest effect, and therefore where the OAA architecture's ability to adapt an ill-suited design is crucial. Organizations typically operate in this sweet spot, as the cost of resources compared with the benefits achievable by the system makes near balance sensible. We have also come to understand that if agents are not sufficiently intelligent, or if the environment and task processing are too chaotic, their ability to benefit from an appropriate organization is reduced (because they make irrational decisions with or without organizational direction). We have not yet developed a formal way of characterizing the relationships among the variance in environmental and task-processing characteristics, the character of agent reasoning, and how organization guidelines affect that reasoning, but that is one of our goals for future research. We are also excited about the potential generalization of our work on agreements to a wider range of organizational adaptations and problem domains.
Publications Supported by this Project
- Shanjun Cheng, Anita Raja, and Victor Lesser (2013). “Multiagent Meta-level Control for Radar Coordination.” Web Intelligence and Agent Systems: An International Journal, IOS Press, Vol. 11(2), pp. 81-105.
- Shanjun Cheng, Anita Raja, and Victor Lesser (2013). “Using Conflict Resolution to Inform Decentralized Learning.” Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Ito, Jonker, Gini, and Shehory (eds.), IFAAMAS, pp. 893-900.
- Daniel Corkill, Chongjie Zhang, Bruno da Silva, Yoonheui Kim, Daniel Garant, Victor Lesser, and Xiaoqin Zhang (2013). “Biasing the Behavior of Organizationally Adept Agents.” (Extended Abstract.) Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Ito, Jonker, Gini, and Shehory (eds.), IFAAMAS, pp. 1309-1310.
- Daniel Corkill, Chongjie Zhang, Bruno da Silva, Yoonheui Kim, Xiaoqin Zhang, and Victor Lesser (2012). “Using Annotated Guidelines to Influence the Behavior of Organizationally Adept Agents.” Proceedings of the 14th International Workshop on Coordination, Organizations, Institutions, and Norms (COIN), held in conjunction with the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-12), Valencia, Spain, pp. 46-60.
- Ling Yu, Chunyan Miao, Zhiqi Shen, and Victor Lesser (2011). “Genetic Algorithm Aided Optimization of Hierarchical Multi-Agent System Organization.” (Extended Abstract/Poster Presentation.) Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Taipei, pp. 1169-1170.
- Ling Yu, Chunyan Miao, Zhiqi Shen, and Victor Lesser (2010). “Genetic Algorithm Aided Optimization of Hierarchical Multi-Agent System Organization.” UMass Amherst Computer Science Technical Report 2011-003. (This is a full version of the AAMAS-11 extended abstract, above.)
- Daniel Corkill, Edmund Durfee, Victor Lesser, Huzaifa Zafar, and Chongjie Zhang (2011). “Organizationally Adept Agents.” Proceedings of the 12th International Workshop on Coordination, Organization, Institutions and Norms in Agent Systems (COIN), held in conjunction with the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Taipei, pp. 15-30.
- James C. Boerkoel Jr. and Edmund H. Durfee (2013). “Distributed Reasoning for Multiagent Temporal Problems.” Journal of Artificial Intelligence Research (JAIR), Vol. 47, pp. 95-156.
- James C. Boerkoel Jr. and Edmund H. Durfee (2013). “Decoupling the Multiagent Disjunctive Temporal Problem.” Proceedings of the 27th Conference on Artificial Intelligence (AAAI), pp. 123-129.
- Jason Sleight and Edmund H. Durfee (2012). “Organizational Design Principles and Techniques for Decision-Theoretic Agents.” Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 463-470.
- James C. Boerkoel Jr. and Edmund H. Durfee (2012). “A Distributed Approach to Summarizing Spaces of Multiagent Schedules.” Proceedings of the 26th Conference on Artificial Intelligence (AAAI), pp. 1742-1748.
- Jason Sleight and Edmund H. Durfee (2012). “A Decision-Theoretic Characterization of Organizational Influences.” Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-12), pp. 323-330.
- Stefan J. Witwicki, Inn-Tung Chen, Edmund H. Durfee, and Satinder Singh (2012). “Planning and Evaluating Multiagent Influences Under Reward Uncertainty.” (Extended Abstract.) Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-12), pp. 1277-1278.
- James C. Boerkoel Jr. and Edmund H. Durfee (2011). “Distributed Algorithms for Solving the Multiagent Temporal Decoupling Problem.” Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-11), Taipei, pp. 141-148.
- Stefan J. Witwicki and Edmund H. Durfee (2011). “Towards a Unifying Characterization for Quantifying Weak Coupling in Dec-POMDPs.” Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-11), Taipei, pp. 29-36. (One of three nominees for the Best Student Paper Award.)
The materials above are based upon work supported by the National Science Foundation under Grant No. 0964590 and/or Grant No. 0964512. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
Other Related Work by the Multi-Agent Systems Lab