Updates in Cooperative AI

Theory of mind is the ability to understand other people’s behaviour in terms of mental states such as desires and beliefs. Many have hypothesised that theory of mind is important for explaining the distinct scale, scope, and sophistication of human cooperation. However, it remains an open question how theory of mind actually leads to enhanced cooperation.
In this talk, Max Kleiman-Weiner presents findings from building a computational model of theory of mind and using it to develop an agent that conditionally cooperates with agents it infers are like itself. In evolutionary game-theoretic simulations, this agent leads to the emergence of cooperation in a wider range of games and outcompetes less sophisticated agents that lack a theory of mind.
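As a rough illustration of the kind of dynamic described above (a toy sketch, not the model presented in the talk), the following Python snippet runs discrete-time replicator dynamics on a one-shot prisoner’s dilemma. A hypothetical conditional-cooperator strategy ("CC") cooperates only with partners it infers share its strategy; an assumed fixed inference accuracy ACC stands in for the talk’s Bayesian theory-of-mind inference, while "ALLC" and "ALLD" always cooperate and always defect. All names, payoffs, and parameters here are illustrative assumptions.

import numpy as np

# Prisoner's dilemma payoffs for the row player: T > R > P > S (assumed values).
R, S, T, P = 3.0, 0.0, 5.0, 1.0
ACC = 0.9  # assumed accuracy with which the conditional cooperator
           # infers whether its partner shares its strategy

def coop_prob(me, them):
    """Probability that strategy `me` cooperates when paired with `them`."""
    if me == "ALLC":
        return 1.0
    if me == "ALLD":
        return 0.0
    # "CC": cooperate only with partners inferred to be conditional
    # cooperators; the inference is correct with probability ACC.
    return ACC if them == "CC" else 1.0 - ACC

def pd_payoff(p_me, p_them):
    """Row player's expected payoff given each side's cooperation probability."""
    return (p_me * p_them * R + p_me * (1 - p_them) * S
            + (1 - p_me) * p_them * T + (1 - p_me) * (1 - p_them) * P)

strategies = ["CC", "ALLC", "ALLD"]
payoff = np.array([[pd_payoff(coop_prob(a, b), coop_prob(b, a))
                    for b in strategies] for a in strategies])

# Discrete-time replicator dynamics over population shares.
x = np.array([0.10, 0.45, 0.45])      # initial shares of CC, ALLC, ALLD
for _ in range(200):
    fitness = payoff @ x              # expected payoff of each strategy
    x = x * fitness / (x @ fitness)   # strategies grow in proportion to fitness

for s, share in zip(strategies, x):
    print(f"{s}: {share:.3f}")

Under these assumed payoffs, the conditional cooperator largely avoids exploitation by defectors and comes to dominate the population, which mirrors, in a very simplified form, the qualitative pattern described in the abstract above.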
Speakers

Max Kleiman-Weiner (University of Washington)

Time

16:00–17:00 UTC, 23 October 2025

Links
This seminar has now finished

Max Kleiman-Weiner is an Assistant Professor at the University of Washington and PI of the Computational Minds and Machines lab. His goal is to create computational models that explain how the mind works and draw on insights from how people learn, think, and act to build smarter and more human-like artificial intelligence. He holds a PhD in Computational Cognitive Science from MIT, where he received fellowships from the Hertz Foundation and NSF. He has co-founded two AI companies: Common Sense Machines and Diffeo.

Evolving General Cooperation with a Bayesian Theory of Mind