Updates in Cooperative AI

One plausible extrapolation of current AI trends is that we are headed towards pluralistic AI futures: scenarios where a plethora of AI agents, systems, and services are deployed towards various ends by various actors. How can we influence this process so that it occurs not just safely – avoiding AI-powered conflict and anti-human equilibria – but also in the direction of mutual flourishing?
To address this challenge, Xuan argues that we need to scale cooperative intelligence: advancing the cooperative capacities and dispositions of the agents we create, while designing institutions that promote AI and human cooperation at scale. To develop these capacities, Xuan will introduce recent research on rational cooperative AI – a paradigm for building coherent AI agents that are cooperative-by-design. By performing rational inference over intended goals or constraints, rational cooperative AI avoids the jaggedness and unreliability of current generative AI, while promising substantial efficiency gains in key application areas. As for the design of cooperative institutions, Xuan will discuss ongoing research into automated negotiation and market intermediaries. While these institutional technologies promise radically efficient solutions to social dilemmas, Xuan will also argue that they need to be based on thicker accounts of human values and rationality if they are to preserve much of what matters to us. Xuan will conclude with some suggestions about what these thicker foundations might consist in, and why they might be crucial for replicating human-like cooperative reasoning.
Speakers

Tan Zhi Xuan (National University of Singapore)

Discussants
Time

15:00–16:00 UTC 22 January 2026

Links
This seminar has now finished

Tan Zhi Xuan is an Assistant Professor at the National University of Singapore's Department of Computer Science, with a joint appointment at the A*STAR Institute of High Performance Computing. Xuan's research focuses on scaling cooperative intelligence via rational AI engineering, spanning the areas of AI alignment, probabilistic programming, and computational cognitive science. Together with their research group, the Cooperative Systems & Intelligence (CoSI) lab, Xuan aims to reverse-engineer the computational foundations of human cooperation and normativity, thereby enabling the development of human-level AI cooperators and the design of cooperative infrastructure for an increasingly automated future. Previously, Xuan completed their PhD with the Massachusetts Institute of Technology's Probabilistic Computing Project and Computational Cognitive Science lab. Xuan also serves as a board member of Principles of Intelligence, an AI safety non-profit, and Welfare Matters, an organisation promoting farmed animal welfare in Southeast Asia.

Scaling Rational Cooperative Intelligence for Pluralistic AI Futures