New Directions in Cooperative AI

AI Agents May Cooperate Better if They Don’t Resemble Us

AI systems control an ever-growing part of our world. As a result, they will increasingly interact with each other directly, with little or no potential for human mediation. If each system stubbornly pursues its own objectives, we run the risk of familiar game-theoretic tragedies – along the lines of the Tragedy of the Commons, the Prisoner’s Dilemma, or even the Traveler’s Dilemma – in which the parties reach outcomes that are far worse for every one of them than what they could have achieved cooperatively.
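To make the stakes concrete, here is a minimal Python sketch of the Prisoner’s Dilemma (the payoff numbers are illustrative, not taken from the talk): a brute-force search over action profiles confirms that mutual defection is the only Nash equilibrium, even though mutual cooperation pays both players strictly more.

```python
# Prisoner's Dilemma with illustrative payoffs (row player, column player).
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # row player is exploited
    ("D", "C"): (5, 0),  # row player exploits
    ("D", "D"): (1, 1),  # mutual defection
}
ACTIONS = ("C", "D")

def best_response(player, opponent_action):
    """Action maximizing this player's payoff against a fixed opponent action."""
    def payoff(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# A pure-strategy Nash equilibrium: each action is a best response to the other.
equilibria = [
    (a0, a1)
    for a0 in ACTIONS
    for a1 in ACTIONS
    if a0 == best_response(0, a1) and a1 == best_response(1, a0)
]

print("Nash equilibria:", equilibria)             # [('D', 'D')]
print("Equilibrium payoffs:", PAYOFFS["D", "D"])  # (1, 1)
print("Cooperative payoffs:", PAYOFFS["C", "C"])  # (3, 3): better for both
```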
However, AI agents can be designed in ways that make them fundamentally unlike strategic human agents. This approach is often overlooked, since we usually draw inspiration from our own human condition when designing AI agents. But I will argue that it has the potential to avoid the above tragedies in new ways. The price we pay as researchers is that many of our intuitions about game theory, decision theory, and even belief formation start to fall short.
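One concrete way in which AI agents can be unlike human strategists is that their source code can be inspected by their counterparts. The sketch below illustrates the resulting “program equilibrium” idea from the game theory literature (Tennenholtz, 2004); the agent name and setup here are illustrative, not the speaker’s construction. A program that cooperates exactly when its opponent runs identical code makes mutual cooperation stable in a one-shot Prisoner’s Dilemma:

```python
import inspect

def mirror_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is identical to my own."""
    # (inspect.getsource requires running this from a .py file, not a bare REPL.)
    my_source = inspect.getsource(mirror_bot)
    return "C" if opponent_source == my_source else "D"

# If both players submit mirror_bot and exchange source code, they cooperate:
src = inspect.getsource(mirror_bot)
print(mirror_bot(src), mirror_bot(src))  # C C

# Against any other program, mirror_bot defects, so it cannot be exploited:
print(mirror_bot("def defect_bot(_): return 'D'"))  # D
```

Neither player gains by submitting a different program: the deviation is visible in the code, mirror_bot defects in response, and the deviator ends up with the mutual-defection payoff. This kind of commitment conditioned on another agent’s internals has no direct analogue among human players.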
I will discuss how foundational research from the philosophy and game theory literatures provides a good starting point for pursuing this approach. This talk covers joint work with Caspar Oesterheld, Scott Emmons, Andrew Critch, Stuart Russell, Abram Demski, Yuan Deng, and Catherine Moon.
Speakers

Vincent Conitzer (Duke University, University of Oxford)

Discussants

Edith Elkind (University of Oxford)
Joseph Halpern (Cornell University)

Time

15:00–16:00 UTC, 20 January 2022

Links

This seminar has now finished

Google calendar event

ICS file

Zoom meeting (passcode sent via mailing list)

Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford.

He received his Ph.D. (2006) and M.S. (2003) in Computer Science from Carnegie Mellon University and an A.B. (2001) in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example, designing algorithms for the optimal strategic placement of defensive resources. More recently, he has begun to work on AI and ethics: how should we determine the objectives that AI systems pursue when those objectives have complex effects on various stakeholders?

Conitzer has also recently joined the Cooperative AI Foundation as an advisor and has announced that he will be moving to Carnegie Mellon University to start a new lab, FOCAL (the Foundations of Cooperative AI Lab). The lab’s goal is to create foundations of game theory appropriate for advanced, autonomous AI agents – with a focus on achieving cooperation.
