
Responsible Mechanism Design

IJCAI-26 tutorial

Humans have long been—and continue to be—involved in collective decisions, from voting in ancient democracies to navigating rush-hour traffic. The mechanisms used in such decision-making are relatively straightforward. They are often some variation of majority vote, veto power, decision delegation, or a power hierarchy. The major advantages of such mechanisms are that they are simple enough to be understood by a layperson and that they require only a small amount of communication between the decision-making agents. At the same time, such mechanisms usually make it impossible to hold the decision-making agents individually accountable for a harmful outcome of a collective decision.

With AI agents playing an increasing role in collective decision-making, the requirements for decision-making mechanisms are changing. Artificial agents can follow much more sophisticated decision-making protocols than their human counterparts. They can also communicate at a much higher speed and, while doing so, exchange much larger amounts of information. This creates an opportunity to study group decision-making mechanisms not only from the perspective of efficiency, fairness, or incentives, but also from the perspective of responsibility and accountability.

This tutorial presents Responsible Mechanism Design as a new research area concerned with the formal analysis and design of collective decision-making procedures in which questions of responsibility are central. It introduces game-theoretic approaches to modelling responsibility, explains the phenomena of responsibility gaps and diffusion of responsibility, reviews recent work on how mechanism structure, authority distribution, and information flow affect responsibility, and discusses open questions for future research.
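To give a flavour of the phenomena mentioned above, here is a minimal sketch (not taken from the tutorial itself) of one simple counterfactual, pivotality-based notion of responsibility: an agent is deemed responsible for an outcome if unilaterally changing its own vote would have changed that outcome. The function names and the majority-vote setting are illustrative assumptions. Under this naive criterion, a unanimous harmful vote leaves no agent pivotal, illustrating a responsibility gap arising from diffusion of responsibility.

```python
def majority_outcome(votes):
    """Majority vote: True (the harmful outcome occurs) iff a strict
    majority of agents vote True."""
    return sum(votes) * 2 > len(votes)

def pivotal_agents(votes, outcome_fn=majority_outcome):
    """Return agents whose unilateral vote change would flip the outcome.

    This is one simple counterfactual (pivotality-based) notion of
    responsibility; richer game-theoretic notions exist, but this
    suffices to illustrate diffusion of responsibility.
    """
    actual = outcome_fn(votes)
    pivotal = []
    for i in range(len(votes)):
        flipped = list(votes)
        flipped[i] = not flipped[i]  # counterfactual: agent i votes otherwise
        if outcome_fn(flipped) != actual:
            pivotal.append(i)
    return pivotal

# A 2-of-3 vote for the harmful outcome: each harm-voter is pivotal.
print(pivotal_agents([True, True, False]))  # → [0, 1]

# A unanimous 3-0 vote for the harmful outcome: no single voter is
# pivotal, so this naive criterion holds no one responsible —
# a responsibility gap caused by diffusion of responsibility.
print(pivotal_agents([True, True, True]))   # → []
```

The point of the example is that simple counterfactual criteria can fail exactly when responsibility matters most: the larger and more lopsided the harmful coalition, the less any individual member counts as pivotal.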