Cooperative AI Workshop
NeurIPS 2020

Aims and Focus

Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.


We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.


Such research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. In the context of machine learning, it will be important to develop training environments, tasks, and domains in which cooperative skills are crucial to success, learnable, and non-trivial. Work on the fundamental question of cooperation is by necessity interdisciplinary and will draw on a range of fields, including reinforcement learning (and inverse RL), multi-agent systems, game theory, mechanism design, social choice, language learning, and interpretability. This research may even touch upon fields like trusted hardware design and cryptography to address problems in commitment and communication.


Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Research should also study the potential downsides of cooperative skills—such as exclusion, collusion, and coercion—and how to channel cooperative skills to most improve human welfare. Overall, this research would connect machine learning to the broader scientific enterprise in the natural and social sciences that studies cooperation, and to the broader societal effort to solve coordination problems.


We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.

Call for Papers

We invite high-quality paper submissions on the following topics (broadly construed; this list is not exhaustive):


  • Multi-agent learning

  • Agent cooperation

  • Agent communication

  • Resolving commitment problems

  • Agent societies, organizations and institutions

  • Trust and reputation

  • Theory of mind and peer modelling

  • Markets, mechanism design, and economics-based cooperation

  • Negotiation and bargaining agents

  • Team formation problems


Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing future submission to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format.

The review process will be double-blind to avoid potential conflicts of interest.

Submission Instructions

Please submit your papers through EasyChair via the following link: https://easychair.org/my/conference?conf=coopai2020#

Best Paper Awards and Workshop Registration Grants

More information to follow soon.

Key Dates

  • Submission Deadline: October 02, 2020 (Midnight Pacific Time)

  • Final Decisions: October 30, 2020

  • Workshop: December 12, 2020

Program Details and Workshop Schedule

The workshop will feature invited talks by researchers from diverse disciplines and backgrounds, ranging from AI and machine learning to political science, economics, and law (see below).

We will have a virtual poster session for presenting work submitted to the workshop. In advance of the breakout conversations, the organizers will create topic groups and invite attendees to join them, aiming to stimulate discussion around these topics. Poster sessions will be held virtually by assigning each poster its own video channel and providing participants with a list of links to these channels, so they can join the conversation and chat with the authors. We have also allotted a slot for spotlight talks, which will be mostly dedicated to junior researchers.

We intend to have a panel discussion regarding the main open questions in Cooperative AI, stimulating future research in this space.

We hope that bringing together speakers from diverse fields and views will result in useful discussions and interactions, leading to novel ideas.

Our confirmed invited speakers are:

  • James D. Fearon is the Theodore and Frances Geballe Professor in the School of Humanities and Sciences and Professor of Political Science at Stanford University. He has produced multiple field-changing works on international and domestic cooperation and conflict. A prominent survey of international relations scholars ranked him among the 10 scholars who have had the greatest influence on the field of International Relations in the past 20 years.

  • Gillian Hadfield is the inaugural Schwartz Reisman Chair in Technology and Society, Professor of Law, and Professor of Strategic Management at the University of Toronto. Her research focuses on innovative design for legal and dispute resolution systems in advanced and developing market economies; governance for artificial intelligence (AI); the markets for law, lawyers, and dispute resolution; and contract law and theory.

  • William Isaac is a Research Scientist with DeepMind’s Ethics and Society Team, with a particular interest in ethical cooperation among both humans and agents. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group focusing on bias and fairness in machine learning systems. William’s prior research centering on deployments of machine learning in the US criminal justice system has been featured in publications such as Science, the New York Times, and the Wall Street Journal.

  • Sarit Kraus is a Professor of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems (including people and robots). She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, the ACM Athena Lecturer award, and the EMET Prize, and twice won the IFAAMAS Influential Paper Award. She is a Fellow of AAAI, ECCAI, and ACM, and a recipient of an ERC Advanced Grant.

  • Peter Stone is founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin. He is also Executive Director of Sony AI America and President of the International RoboCup Federation. Prof. Stone is interested in understanding how best to create complete intelligent agents based on adaptation, interaction, and embodiment, through research in machine learning, multiagent systems, and robotics.

Workshop Schedule

Saturday 12 December (Eastern Standard Time)

(Note: all sessions will be pre-recorded and available to view in advance, except for the Q&A, Poster Sessions and Closing Remarks, which will take place live)

8:20am | Welcome

8:30am | Opening Remarks

9:00am | Invited Talk 1

9:30am | Invited Talk 2

10:00am | Invited Talk 3

10:30am | Invited Talk 4

11:00am | Invited Talk 5

11:30am | General Q&A (Live): Open Problems in AI

11:45am | Individual Q&A sessions with Invited Speakers (Live)

1:00pm | Poster Sessions (Live)

1:45pm | Panel Discussion

2:30pm | Spotlight Talk 1

2:45pm | Spotlight Talk 2

3:00pm | Spotlight Talk 3

3:15pm | Closing Remarks (Live)

Organizers and Program Committee

Thore Graepel

DeepMind

Dario Amodei

OpenAI

Yoram Bachrach

DeepMind

Vincent Conitzer

Duke University

Allan Dafoe

University of Oxford

Gillian Hadfield

University of Toronto

Eric Horvitz

Microsoft Research

Sarit Kraus

Bar-Ilan University

Kate Larson

DeepMind

Sponsors

  • DeepMind