A Look At Our Recent Partnerships To Support Early-Career Researchers

The Cooperative AI Foundation is delighted to have partnered with two external research initiatives (the PIBBSS Fellowship and the MATS Program) to support early-career researchers in cooperative AI. Below we discuss what these partnerships entail and share some promising outputs.

PIBBSS Fellowship: Advancing Cooperative AI Through Interdisciplinary Research

Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a research initiative exploring parallels between intelligent behaviour in natural and artificial systems, in order to make progress on important questions in AI risk, governance and safety. The Cooperative AI Foundation supports the PIBBSS Fellowship Cooperative AI Track, which focuses on projects aiming to mitigate multi-agent AI risks and enhance the cooperative intelligence of advanced AI. Find out more about the application process and eligibility for the PIBBSS Fellowship here.


Recent PIBBSS fellows on the Cooperative AI Track

Fellow: Aron Vallinder
Mentor: Edward Hughes

This project studied cultural evolution in large language model (LLM) agents, focusing on the conditions under which stable norms of cooperation and coordination culturally evolve in populations of LLM agents tasked with playing public goods games, coordination games, and other relevant economic games.
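To give a flavour of the kind of setting studied, here is a minimal sketch of a single public goods game round. It is purely illustrative: the payoff structure is the standard one, but the endowment and multiplier values are arbitrary, and this is not taken from the project's own code.

```python
# Illustrative public goods game round (not the project's actual code).
# Each agent contributes part of its endowment to a shared pot; the pot is
# multiplied and split equally, so full contribution maximises group payoff
# while free-riding is individually tempting.

def public_goods_round(contributions, endowment=10.0, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # Each agent's payoff = what it kept back + its equal share of the pot.
    return [endowment - c + share for c in contributions]

if __name__ == "__main__":
    # Three hypothetical LLM agents following different contribution norms.
    print(public_goods_round([10.0, 5.0, 0.0]))  # [8.0, 13.0, 18.0]
```

Studying cultural evolution then amounts to asking whether, over repeated rounds and successive generations of agents, contribution norms like these stabilise at cooperative levels.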

Selected outputs:

MATS Program: Strengthening Cooperative AI Mentorship

The ML Alignment & Theory Scholars (MATS) Program is an independent research and educational seminar program connecting talented scholars with mentors in AI alignment, interpretability, and governance. The Cooperative AI Foundation provides funding for researchers eligible to attend these programs. Find out more about the application process and eligibility for MATS Cohorts here.

Recent MATS scholars supported by the Cooperative AI Foundation

Scholars: Jinyeop Song and Zora Che
Mentor: Max Kleiman-Weiner

Funding: 34,147 USD
Period: 2024–2025


The Cooperative AI Foundation sponsored two scholars, Jinyeop Song and Zora Che, who attended the cohort under the mentorship of Max Kleiman-Weiner.


Jinyeop's research addressed the risk of power-seeking LLM agents by developing the first systematic approach to quantifying power in LLM agents, which he then used to measure LLM empowerment across a range of benchmark environments.
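In the reinforcement learning literature, empowerment is often glossed as an agent's capacity to reach many distinct future states through its actions. The toy sketch below illustrates one simple proxy in that spirit; it is our own illustration under that assumption, not Jinyeop's methodology, and the transition function and horizon are hypothetical.

```python
# Illustrative sketch only: a crude proxy for an agent's "power" or
# empowerment is the diversity of states it can reach within a fixed horizon.
import math

def reachable_states(state, step, actions, horizon):
    """Enumerate distinct states reachable within `horizon` steps.

    `step(state, action)` is an assumed environment transition function.
    """
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in actions}
        seen |= frontier
    return seen

def empowerment_proxy(state, step, actions, horizon=3):
    # Log of the number of reachable states, in bits.
    return math.log2(len(reachable_states(state, step, actions, horizon)))

if __name__ == "__main__":
    # Toy 1-D world: the agent can move left, move right, or stay put.
    step = lambda s, a: s + a
    print(empowerment_proxy(0, step, actions=(-1, 0, 1)))  # ~2.81 bits (7 states)
```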


Zora’s research explored how to improve the reliability of information provided by multiple AI language models. Specifically, she investigated the "metacognitive calibration" of LLMs: how accurately they can assess their own knowledge and confidence, and predict the knowledge of others.
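For intuition, calibration is often measured by comparing a model's stated confidence with its empirical accuracy. The sketch below shows a simple expected-calibration-error-style check; the data and binning choices are hypothetical, and this is not Zora's actual setup.

```python
# Illustrative calibration check (not the project's methodology): compare a
# model's stated confidence with how often its answers are actually correct.

def calibration_gap(confidences, correct, n_bins=5):
    """Mean absolute gap between stated confidence and observed accuracy,
    averaged over equal-width confidence bins (a simple ECE-style measure)."""
    gaps, total = 0.0, len(confidences)
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == 1.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        gaps += (len(idx) / total) * abs(avg_conf - accuracy)
    return gaps

if __name__ == "__main__":
    # Hypothetical answers: stated confidence vs. whether the answer was right.
    conf = [0.9, 0.8, 0.95, 0.6, 0.55, 0.3]
    right = [1, 1, 0, 1, 0, 0]
    print(f"calibration gap: {calibration_gap(conf, right):.2f}")
```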


Selected outputs:


Both scholars are continuing their research in the MATS extension phase, with the aim of publishing their findings.

July 21, 2025

Cecilia Elena Tilli
Associate Director (Research & Grants), Cooperative AI Foundation
Rebecca Eddington
Grants and Events Officer