Cooperative AI at the Athens Roundtable 2025

Photograph of the Seventh Edition of the Athens Roundtable, courtesy of The Future Society.

This month, we were pleased to join the Seventh Edition of the Athens Roundtable. CAIF hosted a reception for the event’s speakers at the House of Lords the previous evening, welcoming leading figures in AI governance, policy, and law. 

The 2025 roundtable theme was "facing the stakes of AI together: from shared concerns to joint action", which relates to cooperative AI in multiple ways. AI is making our world more complex and more interconnected than ever before, and we must manage our competing and overlapping interests as we navigate it together. Two central strands of cooperative AI address this directly. We aim to avoid the systemic risks that can arise from the interactions and collective behaviours of advanced AI agents; we also want to build AI systems and tools that can help people find their way and cooperate in our increasingly complex world.

Several of the speakers in the morning’s public sessions addressed themes closely associated with cooperative AI. One described the central role of agentic AI in the sophisticated cyberattack detected by Anthropic this November, which the company said “has significant implications for cybersecurity in the age of AI agents.” Another spoke about systemic risks to the economy from agent actions, including the potential for automated activities to initiate a bank run.

The afternoon’s private dialogues also offered fertile ground for perspectives from cooperative AI: 

  • The ‘Defining and Governing Unacceptable AI Risks’ dialogue brought together policymakers, diplomats, and technical experts to socialise and refine the emerging concept of global AI red lines: uses and behaviours of AI deemed universally unacceptable due to safety, security, or human rights concerns. We noted that AI agent interactions may contribute in distinct ways to many of the harmful outcomes expressed in “red line” definitions. For example, several AI agents working together in an orchestrated way could produce a harmful combined outcome (such as creating a biosecurity risk) even when each individual agent’s action, evaluated in isolation, appears harmless. Collusion between AI agents to manipulate users, or agent-based cybersecurity attacks, are inherently multi-agent risks. For red lines to be useful, observations of AI system behaviour need to indicate how close we are to a red line threshold. Such monitoring must therefore take place at the multi-agent level as well as in evaluations of individual models.

  • The ‘Serious AI Incident Prevention & Preparedness’ dialogue sought to build a shared understanding of what constitutes “serious AI incidents” and how such events can serve as early warning signals for broader technical and governance failures. We highlighted that in the future, one of the key ways that incidents might escalate is via networks of interconnected agents. This might include, for example, escalatory dynamics (such as those seen in market crashes or military conflicts) or the propagation of harmful attacks and content between agents (such as when computer worms or misinformation spread through artificial or human networks, respectively). To contain such incidents, incident reporting and mitigation methods must be coordinated. As in the case of red lines, monitoring must take place not only at the level of individual agents, but also at the multi-agent level. This is especially true for incidents that are diffuse and subtle, rather than having a single, obvious point of failure.

These are just a few examples arising from many rich conversations throughout the day. One of the great successes of the Athens Roundtable is that it brings together participants with contrasting experience and perspectives, including many from fields outside AI research (such as diplomacy, law, and international relations) that are directly relevant to cooperative AI. We are grateful to The Future Society for the opportunity to contribute to this valuable event.

Opening remarks delivered during the roundtable speaker reception at the House of Lords.

December 9, 2025

David Norman
Managing Director
Lewis Hammond
Research Director