New Directions in Cooperative AI

Human intelligence employs a set of cognitive abilities and inductive biases that underlie our impressive – for a mammal – ability to cooperate with one another. I propose that we approach our goal of building cooperative AI through the lens of reverse engineering human cooperation.
The reverse engineering approach is common in AI research, especially in work on the “classic” cognitive abilities like perception, attention, and memory. But I think it remains underexplored with regard to the social-cognitive abilities that underlie cooperation. We need to identify the essential capacities, representations, and motivations that underlie human cooperation, and then build them into our AI systems.
In this talk I will describe how to use Melting Pot, an evaluation methodology and suite of test scenarios for multi-agent reinforcement learning, to further this goal of reverse engineering human cooperation in order to build cooperative artificial general intelligence.
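
As a rough illustration of what working with Melting Pot looks like in practice, the sketch below runs a single episode of one substrate with players that act at random. This example is not from the talk: it assumes the open-source meltingpot Python package, and the entry points shown (substrate.get_config and substrate.build) and the per-player list format of observations, actions, and rewards are taken from the package's early releases and may differ in other versions.

```python
# Minimal sketch (assumed API): one episode of a Melting Pot substrate with
# uniformly random players. `substrate.get_config` / `substrate.build` and the
# per-player list format may differ between releases of the package.
import numpy as np
from meltingpot.python import substrate

config = substrate.get_config('clean_up')  # one of the suite's substrates
env = substrate.build(config)              # a multi-player dm_env.Environment

action_specs = env.action_spec()           # one discrete action spec per player
timestep = env.reset()
returns = np.zeros(len(action_specs))

while not timestep.last():
    # Every player samples an action independently and uniformly at random.
    actions = [np.random.randint(spec.num_values) for spec in action_specs]
    timestep = env.step(actions)
    returns += np.asarray(timestep.reward)

print('Per-player returns:', returns)
```

Evaluating a trained population in Melting Pot then amounts to substituting learned policies for the random ones and scoring them in held-out test scenarios that pair them with unfamiliar co-players.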
Speakers

Joel Leibo (DeepMind)

Discussants

Natasha Jaques (Google Brain, UC Berkeley)
Marco Janssen (Arizona State University)

Time

15:00-16:00 UTC, 17 February 2022

Links

This seminar has now finished

Joel Z. Leibo is a research scientist at DeepMind. He obtained his PhD from MIT in 2013, where he worked on the computational neuroscience of face recognition with Tomaso Poggio. His current research is aimed at the following questions:

  • How can we get deep reinforcement learning agents to perform complex cognitive behaviors like cooperating with one another in groups?
  • How should we evaluate the performance of deep reinforcement learning agents?
  • How can we model processes like cumulative culture that gave rise to unique aspects of human intelligence?

How to Measure and Train the Social-Cognitive Capacities, Representations, and Motivations Underlying Cooperation
