AI-Facilitated Human Cooperation: What Would Success Look Like? Insights from the Cooperative AI Retreat

The 2025 Cooperative AI Summer Retreat convened experts from academia, industry and civil society to discuss how new AI tools could help us make progress on human cooperation challenges. This post highlights some of the key insights and ideas from the event.

Many societal-scale problems, in areas such as climate change, emerging technologies, and international relations, suffer from mismatched incentives, communication limitations, and trust issues – challenges that AI may help us to overcome. This topic has garnered increasing interest over the last few years, with notable proposals including computational social choice for fair division, multi-agent reinforcement learning for designing more efficient tax policies, and LLM mediators that help humans find consensus. While some have warned that AI-facilitated cooperation might threaten democracy, others have highlighted how it might eventually help us to avoid civilisational-scale risks and preserve human agency in an increasingly automated world.

At the Cooperative AI Retreat we therefore began by attempting to paint a positive vision of the successful integration of AI technologies into various kinds of democratic processes. We then focused particularly on AI tools for public deliberation, before concluding with a discussion of how to move from small-scale proofs of concept to large-scale societal successes.

Visions of Success for Deliberation, Collective Decision-Making, and Governance

What would the world look like if we could effectively solve key cooperation problems for high-stakes domains? During the discussions, four key features of such a society were outlined:

  1. Accurate Common Knowledge of Major Challenges. Effective deliberation and decision-making are grounded in a shared understanding of facts. There is strong collective resistance to propaganda and to dynamics such as “pluralistic ignorance” (where a majority of people privately reject a belief or practice but assume that most others accept it, leading them to conform publicly).
  2. Radically Responsive Government. A continuous, real-time feedback loop between the public and policymakers ensures that governmental agendas remain aligned with the evolving concerns and aspirations of citizens. The state identifies and can react to pressing public needs in days, not decades.
  3. Breakthrough Solutions to Gridlocked Problems. Previously intractable challenges are effectively addressed by policies that successfully integrate the public’s diverse experiences with expert knowledge, and AI-surfaced consensus removes political gridlock and de-risks decisive action for leaders.
  4. Consistent Public Trust in Policy. Nations’ policies consistently achieve supermajority public approval because their creation transparently incorporates the widely held values of their citizens, leading to high-consensus solutions.

Some features of this vision appear to be simultaneously a requirement for and a product of success, which points to an iterative and incremental approach to AI-facilitated cooperation. Discussants also highlighted aspects such as a healthy social fabric and a general belief in “shared humanity”, high political engagement, a healthy economy, justified optimism, and a greater belief in the possibility of shared abundance; all of these characteristics were thought to reduce the fear and competitive mindsets that make cooperation problems more difficult to solve.

AI Tools for Public Deliberation and Preference Elicitation

To achieve this vision, discussants pointed to the need to work towards the healthy integration of AI tools into government settings, so that they can support well-informed and effective governance. A key requirement would be to establish infrastructure that enables the elicitation of reliable and insightful information about public preferences at scale, and that effectively incentivises good-faith public participation. This includes providing attractive opportunities for people who otherwise feel disengaged from formal politics to express their opinions and to see that expression making a difference. At the same time, participants at the retreat highlighted the importance of transparently and justifiably incorporating expert opinion rather than relying solely on that of the wider public. Improved trust in public policy (as described above) may help in this regard.

It is also important that public deliberation is not reduced to static preference elicitation: a hallmark of high-quality deliberation is that opinions and preferences may evolve as participants learn and develop their thinking. AI tools might help to accelerate this process by mediating constructive engagement between people holding different views, or by aggregating preferences in a way that accounts for different sources of uncertainty (a toy sketch of the latter appears after the list below). Three further factors were identified as critical to ensuring the resilience of new AI tools for public deliberation.

  1. Resilient Collective Epistemics. As noted above, good decision-making relies on accurate knowledge. We must ensure that tools for deliberation are supported by other mechanisms (such as independent journalism, online knowledge bases, and fact-checking services) that strengthen and maintain our collective ability for truth-seeking.
  2. State-of-the-Art Open-Source Tools. The important tools and systems that are used in governance should be both open-source and competitive so that there is little incentive to use proprietary, opaque systems for important decision-making.
  3. Secure and Robust Tools. These tools must also be secure, both from the perspective of protecting sensitive data and in terms of robustness to malicious inputs or Sybil attacks.
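To make the aggregation idea mentioned above a little more concrete, the sketch below shows one simple (and purely illustrative) way to combine elicited preferences while respecting each participant’s self-reported uncertainty: reports are precision-weighted, so confident responses count for more than uncertain ones. This is not a proposal from the retreat, and the policy options, scores, and uncertainty values are all hypothetical.

```python
"""Toy sketch: precision-weighted aggregation of elicited preferences.

Each participant scores a policy option and attaches a self-reported
uncertainty (standard deviation). Reports are combined using inverse-variance
(precision) weighting, so confident responses carry more weight.
All option names and numbers are hypothetical illustrations.
"""

from dataclasses import dataclass


@dataclass
class Report:
    option: str      # policy option being scored (hypothetical name)
    score: float     # support in [-1, 1], e.g. -1 = oppose, 1 = support
    stddev: float    # self-reported uncertainty (> 0)


def aggregate(reports: list[Report]) -> dict[str, float]:
    """Return the precision-weighted mean score for each option."""
    weighted_sum: dict[str, float] = {}
    total_weight: dict[str, float] = {}
    for r in reports:
        w = 1.0 / (r.stddev ** 2)  # precision = inverse variance
        weighted_sum[r.option] = weighted_sum.get(r.option, 0.0) + w * r.score
        total_weight[r.option] = total_weight.get(r.option, 0.0) + w
    return {opt: weighted_sum[opt] / total_weight[opt] for opt in weighted_sum}


if __name__ == "__main__":
    reports = [
        Report("congestion_charge", score=0.8, stddev=0.2),   # confident support
        Report("congestion_charge", score=-0.4, stddev=0.8),  # uncertain opposition
        Report("bus_expansion", score=0.5, stddev=0.5),
    ]
    print(aggregate(reports))  # confident views dominate the aggregate
```

Real deliberation platforms would of course need far richer models of uncertainty (and of how opinions change over the course of deliberation), but even a weighting scheme this simple illustrates how uncertainty can be carried through aggregation rather than discarded.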

There are also clear risks associated with deployment of AI technologies in deliberation and governance, which warrant careful consideration. First, it is important to ensure that such systems do not themselves influence outputs and decisions in inappropriate ways due to, e.g., biases in their training data. Second, risks from malevolent or fanatical actors need to be managed. Third, we must consider how such technologies influence the viability of more authoritarian regimes.

From Proof-of-Concept to Large-Scale Deployment

A roadmap towards successful AI-facilitated deliberation and governance should proceed from low-stakes to higher-stakes domains as the solutions mature and gain legitimacy and support. Discussants highlighted the importance of piloting new solutions in subnational jurisdictions such as California, and in nations such as Taiwan and Estonia. To increase public awareness and understanding, they also called for more demonstrations of AI tools for bargaining or epistemics that produce direct value for users. A number of specific domains were discussed as potentially promising testbeds for new solutions.

By building on earlier successes in these domains and taking care to avoid some of the key pitfalls identified above, we can move towards AI-facilitated cooperation that delivers truly societal-scale benefits. At the Cooperative AI Foundation, we want to help build the community of individuals and organisations working towards this vision. To stay informed of upcoming events, grant calls, and other opportunities to get involved in the cooperative AI field, subscribe to our mailing list.

The author thanks Lewis Hammond, Oly Sourbut, and Joss Oliver for helpful input to this post, and all participants of the Cooperative AI Summer Retreat.

October 29, 2025

Cecilia Elena Tilli
Associate Director (Research & Grants), Cooperative AI Foundation