
Many societal-scale problems such as climate change, emerging technologies, and international relations suffer from mismatched incentives, communication limitations, and trust issues – challenges that AI may help us to overcome. This topic has garnered increasing interest over the last few years, with notable proposals including computational social choice for fair division, multi-agent reinforcement learning for creating more efficient tax policies, and LLM mediators that help humans find consensus. While some have warned that AI-facilitated cooperation might threaten democracy, others have highlighted how it might eventually help us to avoid civilisational-scale risks and preserve human agency in an increasingly automated world.
At the Cooperative AI Retreat we therefore began by attempting to paint a positive vision of the successful integration of AI technologies into various kinds of democratic processes. We then focused particularly on AI tools for public deliberation, before concluding with a discussion of how to move from small-scale proofs of concept to large-scale societal successes.
What would the world look like if we could effectively solve key cooperation problems in high-stakes domains? During the discussions, four key features of such a society were outlined:
Some features of this vision appear to be simultaneously a requirement for and a product of success, which points to an iterative and incremental approach to AI-facilitated cooperation. Discussants also highlighted features such as a healthy social fabric, a general belief in “shared humanity”, high political engagement, a healthy economy, justified optimism, and an increased belief in the possibility of shared abundance: all of these were thought to reduce the fear and competitive mindsets that make cooperation problems more difficult to solve.
To achieve this vision, discussants pointed to the need for the healthy integration of AI tools into government settings, where they can support well-informed and effective governance. A key requirement would be to establish infrastructure that enables the elicitation of reliable and insightful information about public preferences at scale, and in which good-faith public participation is effectively incentivised. This includes providing attractive opportunities for people who otherwise feel disengaged from formal politics to express their opinions and to see that expression making a difference. At the same time, participants at the retreat highlighted the importance of transparently and justifiably incorporating expert opinion rather than relying solely on that of the wider public. Improved trust in public policy (as described above) may help in this regard.
It is also important that public deliberation is not reduced to static preference elicitation: a hallmark of high-quality deliberation is that opinions and preferences may evolve as participants learn and develop their thinking. AI tools might help to accelerate this process by mediating constructive engagement with those holding different views, or by aggregating preferences in a way that incorporates different sources of uncertainty (a toy sketch of the latter is given below). Three further factors were identified as critical to ensuring the resilience of new AI tools for public deliberation.
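As a purely illustrative sketch of what uncertainty-aware preference aggregation could look like, the Python snippet below computes a confidence-weighted Borda count: each participant submits a ranking of options together with a self-reported confidence, and lower-confidence ballots contribute less to the aggregate ordering. The function name, the Borda scoring rule, and the use of self-reported confidence weights are assumptions chosen for illustration, not methods discussed at the retreat.

```python
from collections import defaultdict

def weighted_borda(rankings, confidences):
    """Aggregate ranked preferences, down-weighting low-confidence ballots.

    rankings    -- list of rankings, each ordered best-first
    confidences -- one weight in [0, 1] per ballot
    Returns the options sorted by confidence-weighted Borda score.
    """
    scores = defaultdict(float)
    for ranking, confidence in zip(rankings, confidences):
        top_points = len(ranking) - 1
        for position, option in enumerate(ranking):
            # Standard Borda points, scaled by the ballot's confidence weight.
            scores[option] += confidence * (top_points - position)
    return sorted(scores, key=scores.get, reverse=True)

# Three participants rank three policy options with varying confidence.
ballots = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"]]
weights = [0.9, 0.5, 0.3]
print(weighted_borda(ballots, weights))  # ['A', 'B', 'C']
```

In practice, confidence might come from calibration questions or from a model of each participant's uncertainty rather than self-reports; the point is simply that aggregation rules can be made sensitive to how sure people are, not just what they prefer.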
There are also clear risks associated with the deployment of AI technologies in deliberation and governance, which warrant careful consideration. First, it is important to ensure that such systems do not themselves influence outputs and decisions in inappropriate ways due to, for example, biases in their training data. Second, risks from malevolent or fanatical actors need to be managed. Third, we must consider how such technologies influence the viability of more authoritarian regimes.
A roadmap towards successful AI-facilitated deliberation and governance should proceed from low-stakes to higher-stakes domains as the solutions mature and gain legitimacy and support. Discussants highlighted the importance of piloting new solutions in subnational governments such as California, and in nations such as Taiwan and Estonia. To increase public awareness and understanding, they also called for more demonstrations of AI tools for bargaining or epistemics that produce direct value for users. A number of specific domains were listed as potentially promising testbeds for new solutions:
By building on early successes in these domains and taking care to avoid the pitfalls identified above, we can move towards AI-facilitated cooperation that delivers truly societal-scale benefits. At the Cooperative AI Foundation, we want to help build the community of individuals and organisations working towards this vision. To stay informed of upcoming events, grant calls, and other opportunities to get involved in the cooperative AI field, subscribe to our mailing list.
October 29, 2025
