1 What is Cooperative AI?
Required Content 1hr 35mins • All Content 2hrs 35mins
Cooperative AI is an emerging research field focused on improving the cooperative intelligence of advanced AI for the benefit of all. This includes both addressing risks of advanced AI related to multi-agent interactions and realizing the potential of advanced AI to enhance human cooperation. This course is meant to provide an introduction to cooperative AI.
This introductory section is aimed at providing a broad introduction to the field of cooperative AI. The resources present risks of advanced AI related to multi-agent interactions, but also touch on how advanced AI could present new opportunities for enhanced human cooperation – a topic that we will return to later in the course.
By the end of the section, you should be able to:
- Explain why alignment may not be enough to ensure the safety of AI systems in a multi-agent setting
- Explain how classical game-theoretic problems could apply to AI systems
- Give examples of research questions that target multi-agent safety for large language models (LLMs)
We start off with a short video that introduces some basics about the field: the relationship between AI alignment and cooperative AI, the difference between cooperative capabilities and cooperative dispositions, and terms such as collective alignment: