6 AI for Human Cooperation

Required Content 3hrs 30mins

Curriculum

Thus far, we have focused on the need for AI systems to handle cooperation problems in order to manage the novel risks that arise from multi-agent interactions involving AI agents. However, the potential gains from AI systems that can solve cooperation problems well are greater than that: many of the greatest challenges humanity faces are themselves cooperation problems, and AI systems with cooperative capabilities could be invaluable in solving them.

Learning Objectives:
Explain how AI can be applied to social planning problems at scale.
Compare different AI-supported approaches to collective deliberation and discuss their respective strengths and limitations.
Discuss how AI systems could enhance democracy and social coordination while considering associated risks and constraints.

AI for Social Planning

The next resource is a well-known paper from this area of work: ‘The AI Economist’, which explores how reinforcement learning can be used to design effective tax policies. If you engaged with the earlier material on opponent shaping and adaptive mechanism design, note the connections as you read. The paper describes its approach as “two-level deep RL”, meaning both the mechanism designer (which sets the taxation policy) and the agents (which participate in the taxed economy) are learning agents.
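To make the two-level idea concrete, here is a toy sketch (not the paper's actual setup, which uses deep RL agents in a rich simulated economy): inner agents repeatedly adapt their labour supply to the current flat tax rate, while an outer planner adjusts that rate to maximise revenue. Because the agents adapt, the planner effectively learns on a Laffer curve, where revenue peaks at an intermediate rate.

```python
import numpy as np

def adapt(labor, tau, skills, lr=0.1, steps=200):
    """Inner level: each agent gradient-ascends its after-tax utility
    u_i = (1 - tau) * w_i * l_i - 0.5 * l_i**2
    (best response: l_i = (1 - tau) * w_i)."""
    for _ in range(steps):
        labor = labor + lr * ((1 - tau) * skills - labor)
    return labor

def revenue(tau, labor, skills):
    # Planner objective: total tax collected on earned income.
    return tau * np.sum(skills * labor)

skills = np.array([1.0, 2.0, 3.0])  # heterogeneous agent skills (assumed values)
labor = np.zeros_like(skills)
tau, eps = 0.1, 1e-3

# Outer level: the planner ascends revenue by finite differences,
# letting agents re-adapt at each probe. Because agents respond to tau,
# revenue follows a Laffer curve R(tau) = tau * (1 - tau) * sum(w_i**2).
for _ in range(200):
    labor = adapt(labor, tau, skills)
    g = (revenue(tau + eps, adapt(labor, tau + eps, skills), skills)
         - revenue(tau - eps, adapt(labor, tau - eps, skills), skills)) / (2 * eps)
    tau = float(np.clip(tau + 0.01 * g, 0.0, 1.0))
# tau converges to the revenue-maximising flat rate of 0.5
```

Note the key structural point this sketch shares with the paper: the planner's gradient is taken *after* agents re-adapt, so the planner optimises against the agents' learned responses rather than against a fixed population.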

The AI Economist: Taxation policy design via two-level deep multiagent reinforcement learning

Introduction

Required • 2400 Words (Technical)
Exercise 6.1

What are some issues you see with using AI, specifically a deep neural network trained through RL, for social planning as in the AI Economist paper? What would you propose to mitigate these issues?

Required

AI Tools for Public Deliberation and Preference Elicitation

While the AI Economist aims to design effective tax policies under the assumption that policy objectives have been agreed upon, one of the central cooperation challenges in human societies is agreeing on such objectives—not just for economic policy but for all kinds of political decision-making. There have been several initiatives to use AI to facilitate collective deliberation and decision-making, and we are going to review one notable example: Pol.is.

We’ll first look at a non-technical article that provides some context for the implementation of digital solutions for improving collective deliberation and decision-making. Before diving into technical details, it is important to recognise that some of the central challenges relate more to the adoption and legitimacy of these solutions than to their technical features and capabilities.

The simple but ingenious system Taiwan uses to crowdsource its laws

All parts

Required • 2500 Words

We will now go into more technical detail on the Pol.is system mentioned in the previous article. While the first version of Pol.is predates large language models, this next resource is a paper on how modern AI systems can be used to improve it. The discussion section also applies quite broadly to the integration of LLMs and machine learning in deliberative systems.
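Under the hood, Pol.is maps opinions by treating participants' agree/disagree votes as a matrix, reducing it with PCA, and clustering the projected participants (k-means in the deployed system) to surface opinion groups. Here is a minimal sketch with a toy vote matrix; the real pipeline adds many refinements (handling sparse votes, choosing the number of groups, surfacing representative comments):

```python
import numpy as np

# Toy participant-by-comment vote matrix: +1 agree, -1 disagree, 0 pass.
# Participants 0-2 and 3-5 vote in two roughly opposed patterns.
votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1,  0,  1],
    [ 1,  0, -1, -1,  1],
    [-1, -1,  1,  1, -1],
    [-1, -1,  1,  1,  0],
    [ 0, -1,  1,  1, -1],
], dtype=float)

# PCA via SVD: project participants onto the top two principal components.
centered = votes - votes.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T

# Plain k-means with k=2, deterministically seeded with two participants.
centers = coords[[0, -1]]
for _ in range(10):
    dists = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([coords[labels == k].mean(axis=0) for k in range(2)])
# labels now assigns participants 0-2 and 3-5 to different opinion groups
```

The resulting 2D map and group assignments are what Pol.is visualises back to participants, which is what makes the elicited opinion landscape legible at scale.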

Opportunities and Risks of LLMs for Scalable Deliberation with Polis

'1 Introduction' and '3 Discussion'

Required • 5600 Words (Technical)
Exercise 6.2

What are the key features of Pol.is, and why are they important? For example, why does it not allow direct replies to comments?

Required

Another notable project in this area is the Habermas Machine, developed by a team at Google DeepMind. It is named after the German philosopher Jürgen Habermas and his theory of communicative action. While Pol.is focuses on the challenge of eliciting and mapping opinions in large and diverse groups, the Habermas Machine aims to help human groups reach consensus by generating new proposals that obtain wide agreement and leave groups less divided. The next optional resource is a talk by Christopher Summerfield presenting the Habermas Machine in more detail.

The Habermas Machine: Using AI to help people find common ground

10:30 to 52:00

Optional • 45 mins (Technical)
Exercise 6.3

Compare and contrast the key features of Pol.is and the Habermas Machine. Come up with settings where you think one would be better suited or more applicable than the other.

Optional

AI Tools for Existential Security

While human cooperation problems such as climate change or political deliberation predate the development of advanced AI systems, AI development also creates new challenges that make the need for effective solutions more urgent. There is a growing overlap between the research communities that work on ‘AI for Democracy’ and those that focus on existential risk. The following resource is published by Forethought, a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems, and highlights coordination-enabling applications of AI as something that could mitigate existential risks.

AI Tools for Existential Security

'Executive summary' and 'Coordination-enabling applications'

Required • 800 Words
Exercise 6.4

‘AI Tools for Existential Security’ warns that “better coordination tools also have the potential to cause harm”. What do you think are some of those harms, and how might we defend against them?

Required • 15mins

If you are interested in exploring further work in this area, it is also worth looking into the work of the Collective Intelligence Project (CIP), an organisation focused on the research and development of collective intelligence capabilities: decision-making technologies, processes, and institutions that expand a group’s capacity to construct and cooperate towards shared goals.

Whitepaper – The Collective Intelligence Project

All parts

Optional • 2500 Words
Exercise 6.5

Revisit ‘The Wisdom and/or Madness of Crowds’ by Nicky Case from section 3 of the course. What are some examples of human systems, as modelled with the networks in Nicky Case’s interactive, where AI agents could facilitate better outcomes, and where might they facilitate worse ones? Use the concepts from the interactive to explain why.

Required
Exercise 6.6

Now that you are six sections into the course, revisit exercise 1.5 from the first section of the curriculum: spend under 30 minutes trying to answer all of the following prompts about the field of cooperative AI.

  • How would you define the field of cooperative AI in your own words?
  • What concepts that you’ve heard about so far confuse you?
  • What problems do you think the cooperative AI field focuses on?
  • What real-world, present-day scenarios would the field of cooperative AI be concerned about or focused on? What future scenarios might the field of cooperative AI be concerned about?
  • Why does the field of cooperative AI matter?

Required