Recent Grants Awarded by the Cooperative AI Foundation

The Cooperative AI Foundation is delighted to have provided a new round of grants to support research on cooperative AI for the benefit of all. Summaries of these grants can be found below, and we'll be announcing new opportunities to apply for grants in the near future.

We aim to provide an up-to-date summary of the Cooperative AI Foundation's (CAIF) grants, though please note that some recently approved grants or recent outputs from projects may be missing. Four more grants have been made in 2025 that are yet to be published here. Grants are listed in chronological order, from earliest to latest. This page was last updated on 22 Jul 2025.

Quick and Safe Adaptation to New Teams

Eugene Vinitsky

USD 150,974

2024-2025

New York University

This project explores how to enhance an AI agent’s ability to learn the norms, conventions, and preferences of other agents in order to rapidly adapt and cooperate more effectively. It proposes creating a population of diverse and capable agent strategies that an agent can learn through a limited amount of interaction (known as a k-shot setting). To encourage rapid adaptation, the learning agent will be constrained to prioritise strategies that are easier to learn and coordinate with, guided by their description length. The approach will be evaluated in the game of Welfare Diplomacy, focusing on the agent's ability to form stable, high-welfare coalitions with unknown partners and its robustness to exploitative strategies.

Other-Regarding Goals

Mia Taylor

GBP 74,344

2025-2026

Center on Long-Term Risk

This project addresses the risk of AI agents acquiring unintended goals, with a focus on other-regarding goals (such as spite) which take into account the preferences of other agents. It aims to investigate whether greater representation of behaviours consistent with a particular goal in the training data makes it more likely that a model acquires that goal during subsequent reinforcement learning. The purpose is to understand how to develop training schemes that select for cooperative dispositions.

AI Coercive Capabilities: Concepts and Measurements

Sophia Hatz

SEK 639,830

2025-2026

Uppsala University

This project addresses the dual-use capabilities underlying coercion in AI systems. Strong coercive capabilities could lead to large-scale societal harms through misuse. Conversely, some of the capabilities enabling coercion are also essential for fostering cooperation, such as increasing the credibility of commitments. With this tension in mind, the project aims to develop practical ways to measure these capabilities and to model the risks associated with different levels of coercive capability.

This is the first early-career track grant awarded by the Cooperative AI Foundation. Sophia Hatz is an Associate Professor at the Department of Peace and Conflict Research (Uppsala University). She leads the Working Group on International AI Governance, within the Alva Myrdal Center for Nuclear Disarmament. 

AI for Humanitarian Crisis Negotiation and Beyond

Finale Doshi-Velez

USD 213,707

2024-2027

Harvard University

This project addresses the challenge of supporting human decision-makers in complex, multi-party negotiations for societal benefit, particularly in humanitarian crises. While these scenarios could be studied using traditional coalition building games (CBGs) focused on optimal coalition structures, this project recognises the limitations of such approaches, especially the lack of focus on iterative formation and the prioritisation of humanitarian goals across multiple negotiation rounds. To address this, the project will build upon a CBG framework, using MARL to develop coalition formation strategies for multiple goals and LLMs to synthesise and extract key information from unstructured negotiation case files. The project will then test this AI-assisted negotiation method both with lay users in synthetic scenarios and with teams of real frontline negotiators.

Governing the Risks That the Interaction Between AI Agents May Present to International Peace and Security

SIPRI

SEK 373,000

2025

The Stockholm International Peace Research Institute (SIPRI) is conducting a scoping study focused on the risks that the interaction between AI agents may present to international peace and security. The aim of the study is to raise awareness of the topic in diplomatic circles dedicated to international security, and to inform the design of a potential follow-up project on how cooperation challenges related to agentic AI ought to be governed at the multilateral level.

July 21, 2025

Cecilia Elena Tilli
Associate Director (Research & Grants), Cooperative AI Foundation
Rebecca Eddington
Grants and Events Officer