Updates in Cooperative AI

Powerful AI systems are increasingly being deployed with the ability to act autonomously in the world. This is a profound change from most people’s experience of AI so far.

The competitive advantages offered by autonomous, adaptive agents will drive their adoption, and these advanced agents will interact with each other and with people, giving rise to complex new multi-agent systems. AI-AI interactions within multi-agent systems present significant and under-appreciated risks.

To explore these risks in greater detail, the Cooperative AI Foundation invites you to our forthcoming seminar, "Exploring Multi-Agent Risks from Advanced AI". The seminar is based on key findings from our recent report on Multi-Agent Risks from Advanced AI, and features contributions from Lewis Hammond, Gillian Hadfield and Jakob Foerster.

This session will cover:

- How multi-agent risks fit into the broader AI governance and safety landscape

- Key risk factors that can lead to harmful interactions in multi-agent AI settings

- The different mechanisms through which failure modes can arise

- Strategies for mitigating risk, and promising research directions

We’re delighted to be launching our Updates in Cooperative AI Seminar Series. We'll be running these seminars monthly, and you're welcome to subscribe to our Google Calendar to stay up-to-date on all upcoming events. You're also welcome to ask questions or provide input for the seminar here.

Speakers

Gillian K. Hadfield (The Johns Hopkins University)

Michael Dennis (Google DeepMind)

Time

16:00–17:00 UTC, 26 June 2025

Links
This seminar has now finished

Gillian K. Hadfield is a professor of government and policy and a member of the Computer Science Department at the Whiting School of Engineering at Johns Hopkins University. She holds a CIFAR AI Chair at the Vector Institute for Artificial Intelligence and is a Schmidt Sciences AI2050 Senior Fellow. Originally trained as an economist and legal scholar, Hadfield now collaborates with machine learning researchers to build systems that understand and respond to human norms, and works on innovative design of legal and regulatory systems for AI and other complex global technologies.

Michael Dennis is a Research Scientist on Google DeepMind's Open-Endedness team. He was previously a PhD student at the Center for Human-Compatible AI (CHAI), advised by Stuart Russell. Before moving into AI, he conducted research in theoretical computer science and computational geometry.