Cooperative AI Summer School 2025: Highlights and Takeaways

The Cooperative AI Foundation held its annual Cooperative AI Summer School in July. Our Managing Director shares highlights and takeaways from the 2025 edition.

Held on the stunning banks of the River Thames in Marlow (England), the 2025 Cooperative AI Summer School gathered over 65 students and early-career professionals from across the world to explore foundational concepts, cutting-edge research, and career development opportunities in cooperative AI.

Lectures Shed Light on the Multiplicity of Cooperative AI

The Summer School lectures illustrated the academic richness of our field. Each presenter explored a research theme, often pointing towards open questions at the frontier of knowledge that await the attention of talented early-career researchers.

Cooperative AI to Enhance Democratic Deliberation

AI has great potential to help solve human coordination problems, and some of the early examples of this involve strengthening the functioning of democratic institutions.

  • Audrey Tang, Taiwan’s first digital minister and now the country’s cyber ambassador, is a pioneer in reimagining democracy for the digital age. Audrey spoke with Zarinah Agnew from the Collective Intelligence Project about innovations in policymaking and consensus building, including an approach to fake adverts that avoids censorship by slowing down connections to servers hosting the harmful material, an idea developed through representative online deliberation.

  • Ariel Procaccia continued Audrey’s and Zarinah’s democracy theme with a talk on generative social choice, which uses AI to go far beyond the rigid, predefined options typical of voting systems, instead creating new, tailored solutions that better satisfy people’s complex preferences.

Security and Safety Through Cooperative AI

Cooperative AI is about realising the many opportunities for human progress that will come from interacting AI systems; it is also about mitigating the distinct risks that arise in these settings.

  • Christian Schroeder de Witt spoke about multi-agent security: safeguarding systems of multiple interacting AI agents. Distinct threats arise from, or can be amplified through, agent interactions, and Christian’s talk positioned security as a foundation of human cooperation and a precondition for AI safety.

  • Nora Ammann presented recent work on gradual human disempowerment: the risk that people could, over time, be displaced from decision-making at all levels, even without discontinuous AI progress. Nora’s talk centred human agency within our discussions of increasingly autonomous AI systems.

Exploring the Foundations of Cooperation

A thorough grounding in both game theory and empirical aspects of agent learning is essential for success in cooperative AI.

  • Vincent Conitzer introduced some of the foundations of cooperative AI, investigating the differences between cooperation among people and cooperation among AI agents. His game-theoretic analysis showed how the nature of AI systems could enable new bases for AI-AI cooperation that are impossible in the human world.

  • Kate Larson’s talk explored cooperative game theory, looking at groups of self-interested agents working in coalitions. Kate showed how to think about the stability of those coalitions, i.e. whether agents would prefer to break away and form alternative coalitions. Kate also explored fairness: whether agents’ individual payoffs reflect what each brings to the coalition.
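One classic way to formalise the fairness idea above is the Shapley value, which divides a coalition’s payoff according to each agent’s average marginal contribution across all orders in which the coalition could form. Here is a minimal sketch in Python; the three-player game and its payoffs are hypothetical illustrations, not taken from Kate’s talk:

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: totals[p] / len(perms) for p in players}

# Hypothetical characteristic function: any pair earns 60,
# the full coalition earns 120, and singletons earn nothing.
def v(coalition):
    if len(coalition) == 3:
        return 120
    if len(coalition) == 2:
        return 60
    return 0

print(shapley_values(["A", "B", "C"], v))
```

In this symmetric example every player contributes equally, so each receives a third of the grand coalition’s value; with asymmetric contributions, the same computation would reward agents in proportion to what they bring.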

  • Eugene Vinitsky spoke about building AI agents that can learn to collaborate with people using little or no human data, which is important in many applications, including autonomous vehicles that share roads with human drivers. Human data is expensive, scarce, and often fails to represent critical but infrequent real-world interactions.

Evaluating Interactive AI Agents

Cooperative AI recognises that other agents are a significant part of the environment in which AI agents will be deployed. This insight shapes the nature of the evaluations we develop.

  • Michael Wellman discussed the challenge of performing evaluations on advanced interactive AI. Evaluating multi-agent learning systems is difficult because the agents’ environment includes all the other agents, which adapt to each other in complex ways. Michael showed how a meta-game analysis can help us make progress here.

  • Cecilia Tilli’s talk set out a systematic approach to analysing the cooperation-relevant properties of AI agents. Those properties include both capabilities (behaviours or actions an agent can perform) and dispositions (the tendency of agents to express one behaviour over another). These are top research priorities for CAIF.

Opportunities for Impact in Cooperative AI

Choices about real-world impact will be central to any career path in cooperative AI.

  • Lewis Hammond spoke about doing effective research, offering advice on mistakes to avoid and on how to make a difference in the world through research. Lewis also outlined some big-picture considerations around neglected areas, which may present opportunities for early-career researchers choosing their academic path.

Hands-On Learning

While expert talks were at the heart of the summer school programme, there were opportunities for participants to learn in more interactive ways. Each of our speakers held office hours (open, informal conversations in groups) and we also dedicated time for one-on-one meetings.

The poster sessions and lightning talks, where participants got to introduce their own research, proved to be among the liveliest parts of the agenda. Everyone joined a practical project in teams, designing solutions in response to challenges ranging from gradual disempowerment to evaluating the cooperation-relevant properties of AI agents.

To get a sense of participant consensus and disagreement on a variety of topics, we used Pol.is throughout the Summer School. Participants proposed statements about cooperative AI and then voted to show their support or disagreement. This tool helped us identify areas of broad agreement, such as the statement "It's better for AI systems to be cautiously cooperative than overly trusting," as well as points of contention, including "AI systems will need explicit rules about when cooperation is appropriate."

Looking Ahead

The diversity of speakers’ research themes and the variety of backgrounds participants brought to the table illustrated the multidisciplinary nature of cooperative AI. 

The field needs more people with a strong computer science background and foundational knowledge of cooperation rooted in game theory. But we also need researchers from complex systems science, from the social sciences, and from among those already applying theory in real-world settings. We try to select for this kind of mix when reviewing applications for the Summer School each year, and we value the cross-disciplinary learning that results from it.

If you are an ambitious researcher working on any of these themes, one of our future Summer Schools might offer a pathway towards the heart of our field. There are numerous open research questions with implications for the transition to a world with advanced AI agents. We would love you to join us in our mission to make that transition go well.

Make sure to sign up to our mailing list to stay informed about upcoming initiatives, and be the first to gain access to the Summer School lecture recordings once available.

August 21, 2025

David Norman
Managing Director