Collaborative Research: RI: III: SHF: Small: Multi-Stakeholder Decision Making: Qualitative Preference Languages, Interactive Reasoning, and Explanation


Sponsoring Agency
National Science Foundation


The ability to express and reason about preferences over a set of alternatives is central to rational decision-making in a broad range of applications, such as product design, public policy, health care, information security, and privacy, among others. Because quantitative preferences are unavailable in many practical settings, there is increasing interest in methods for representing and reasoning with qualitative preferences. Recent work has led to a suite of practical tools for reasoning about qualitative preferences that leverage advances in formal methods and model checking. However, practical decision-making scenarios typically involve multiple stakeholders with possibly conflicting preferences. Furthermore, the preferences of some stakeholders may sometimes override those of others, e.g., because of the relative positions of the stakeholders within an organization. Yet most existing methods are limited to the single-stakeholder setting. Against this background, this project brings together a team of researchers with complementary expertise in formal methods, artificial intelligence, and preference reasoning to develop methods and tools for representing and reasoning with multi-stakeholder preferences.

The primary intellectual merit of the proposal centers on substantial advances over the current state of the art in languages, algorithms, and software for multi-stakeholder preference representation and reasoning. The resulting preference reasoners will be able to (a) analyze preferences expressed in GCRIPT, a general language for multi-stakeholder preference representation that subsumes existing preference languages, (b) reason with the preferences of multiple stakeholders, taking into account not only their individual preferences but also hierarchies that give precedence to the preferences of some stakeholders over those of others, and (c) offer easy-to-understand explanations of the preferred choices as well as their impacts on the stakeholders. The project will also enhance the underlying model checking techniques that form the core technology for the preference reasoning framework, e.g., in the areas of incremental model checking, counterexample analysis, and justification. The resulting advances in knowledge representation and formal methods contribute to AI systems that substantially augment and extend human capabilities in multi-stakeholder decision making.

The project offers enhanced opportunities for collaboration for a team of researchers with complementary expertise in artificial intelligence and formal methods at Pennsylvania State University (PSU) and Iowa State University (ISU). The practical open source multi-stakeholder decision support tools resulting from the project will significantly lower the barrier to applying AI and formal methods to multi-stakeholder decision making in a number of domains, including product design (including software design), healthcare, public policy, and organizational decision making. The project enhances research-based training of graduate and undergraduate students, including women and members of other under-represented groups, at ISU and PSU in artificial intelligence, formal methods, and related areas of national importance. The broader impacts of the project are further enhanced by broad dissemination of research results (including publications, open source software, data, tutorials, and course materials); incorporation of research results into undergraduate and graduate curricula in Computer Science, Information Sciences and Technology, Data Sciences, and related disciplines; and outreach to targeted application domains, e.g., health, public policy, security, and privacy, that would benefit from advanced tools for multi-stakeholder decision making.