Responsible Mechanism Design
AAMAS-26 tutorial
Humans have long been, and continue to be, involved in collective decisions, from voting in ancient democracies to navigating rush-hour traffic. The mechanisms used in such decision-making are relatively straightforward: they are usually some variation of majority vote, veto power, delegation, or a power hierarchy. Their major advantage is that they are simple enough for a layperson to understand and that they require little communication between the decision-making agents. At the same time, such mechanisms usually make it impossible to hold the decision-making agents individually accountable for a harmful outcome of a collective decision.
With AI agents starting to play a bigger part in collective decision-making, the requirements for decision-making mechanisms are changing. Artificial agents can follow far more sophisticated decision-making protocols than their human counterparts; they can also communicate much faster and, while doing so, exchange far more information. This creates an opportunity to develop and adopt group decision-making mechanisms that trade simplicity and low communication costs for other important properties, such as the individual accountability of agents for the collective decision.
This tutorial gives an extended introduction to Responsible Mechanism Design (RMD), a new interdisciplinary research area at the intersection of artificial intelligence, game theory, logic, and philosophy. I will review the formal game-theoretic approaches to modelling responsibility that serve as the foundation for the new field, highlight existing results in the area, and discuss open questions for future RMD research.