Announcing the 2026 Cooperative AI PhD Fellows

We're delighted to welcome the 14 exceptional early-career researchers who will join our next Cooperative AI PhD Fellowship cohort.

Our fellows will contribute to advancing cooperative AI while receiving dedicated support and funding through our fellowship programme. This year, we received 240 applications, an increase of 35% compared to our first cohort in 2025. The number of high-quality proposals reflects rapidly growing interest in cooperative AI.

Among this year’s newly selected fellows, one will study the computational circuits that enable agents to coordinate covertly, while another will develop a multi-agent R&D testbed paired with oversight tools, including active honeypot agents designed to probe for collusion patterns. Another fellow will explore opportunities for consensus-building in transformative AI delegation contexts using reflective equilibrium. These are just a few of the exciting research proposals put forward by our new cohort.

Our selection process focused on three criteria: the potential impact of the proposed research, demonstrated academic excellence, and a clear commitment to addressing multi-agent and cooperation challenges in AI systems.

Fellows will receive annual budgets for conferences, compute, wellbeing, and productivity; invitations and travel funding for relevant CAIF events; expanded opportunities to share their research via our website, events, social media, and other channels; and access to collaboration with CAIF researchers, advisors, and partner organisations.

We also congratulate our CAIF Scholars, who ranked among the top 1% of applicants in this highly competitive process. In addition to full fellowship benefits, Scholars will receive top-up funding of up to $40,000 per year for living expenses, for up to three years.

Read more about our cohort’s backgrounds and research focus below.

Alistair Letcher

I'm a PhD student at the University of Oxford, supervised by Jakob Foerster, with a background in pure mathematics. My research revolves around reinforcement learning, world models, multi-agent learning, (reverse) game theory, and collective intelligence. I am currently working on making AI agents more robust and interpretable by extracting their implicit beliefs about the world. My hot take on AI safety is that alignment between AI and humanity is less pressing than alignment between owners of AI and the rest of us.

Personal Website | Google Scholar | X

Michelle Si

I am a first-year computer science PhD student at Harvard advised by Ariel Procaccia and Finale Doshi-Velez. My research spans theoretical computer science, human-computer interaction, and cognitive science. In my work, I explore formalisations of consensus building as well as preference alignment for determining high-stakes decision rules. I am excited to pursue further research in human-AI collaboration in negotiation, healthcare, and legal settings. Previously, I studied math as an A.B. (B.A.) Scholar at Duke and conducted research on data markets, where I had the privilege to be mentored by Jian Pei. I have also spent time at Microsoft Research New England interning on the Economics and Computation team.

Personal Website | Google Scholar | LinkedIn

Prerna Ravi

I am a Computer Science PhD student at MIT advised by Dr. Hal Abelson and Dr. Michiel Bakker. I work at the intersection of Human-Computer Interaction and Artificial Intelligence. My research explores how AI systems can augment group collaboration and deliberation across diverse settings. I design frameworks and interventions that support equitable team participation, foster group trust and social connection, and facilitate consensus-building. My work informs applications in education, creative practice, and collective decision-making. I also develop critical AI literacy programmes that empower stakeholders to engage in responsible and ethical AI practices. Before my PhD, I completed an S.M. (MSc) in Computer Science from MIT and a B.S. in Computer Science from Georgia Institute of Technology. I have previously worked at Microsoft and Google Research in both research and software engineering roles.

Personal Website | LinkedIn | X

Shashank Reddy Chirra

I am a PhD student at the University of Oxford, supervised by Professor Jakob Foerster. My research focuses on multi-agent cooperation at scale, particularly on building agents capable of negotiation, debate, and other forms of coordination under mixed objectives and incentives. I am especially interested in designing open-ended environments that elicit these behaviours, and in the role of pretraining in such environments for building generalist, cooperative agents.

Personal Website | Google Scholar | LinkedIn

Jennifer Za Nzambi

I am a research fellow at MATS, working on multi-agent coordination alongside Samuel Albanie. My research explores how teams of AI agents cooperate on complex tasks and studies their decision-making. Previously, I worked on training language models for forecasting and researched chain-of-thought monitoring vulnerabilities alongside Victoria Krakovna. My background is in computer science and economics.

Matīss Apinis

I am an MSc student in Computer Science at the University of Latvia and an AI research engineer at Tilde, working with large language models. My research focuses on how AIs reason about strategic interactions and whether they cooperate acausally in safe ways that avoid catastrophic conflicts. I have developed evaluations for capabilities and predispositions of LLMs tied to decision theory (Newcomblike problems) and anthropic reasoning (self-locating beliefs). My interest stems from the fact that such foundational cooperative AI properties remain largely unmeasured in current systems, despite high-stakes implications for multi-agent AI safety.

Google Scholar | LinkedIn

Akash Agrawal

I am a research scholar at MATS, working on understanding the compositionality of safety properties in multi-agent and multi-principal settings. In the past, I studied at the University of Oxford, where I worked on reinforcement learning and agent-based modelling, and at the Indian Institute of Technology Delhi, where I worked on problems in computational social choice and graph theory.

Personal Website | Google Scholar | LinkedIn

Kiriaki Fragkia

I am a Computer Science PhD student at Carnegie Mellon University, advised by Maria-Florina Balcan. My research lies broadly at the intersection of machine learning theory and algorithmic game theory. Specifically, I am interested in how the emergent capabilities of modern AI might affect learning and decision-making in multi-agent settings. Through my work, I aim to address challenges that uniquely arise in multi-agent AI systems in a way that facilitates more reliable, safe, and effective deployment of AI in complex, strategic environments. Prior to CMU, I completed my undergraduate studies at the University of California, Berkeley.

Personal Website | Google Scholar | X

Tianyi Qiu

I am Tianyi Alex Qiu, doing empirical and theoretical ML research on ‘what the heck to align AIs to when alignment itself is a self-fulfilling prophecy’. I try to operationalise reflective equilibrium in human-AI interaction as a solution concept that bridges value disagreements, both between human individuals and between past and future selves. I am interested in moral progress in humans and the emergence of normativity in models. I have worked with Anthropic, UC Berkeley CHAI, Oxford HAI Lab, and PKU Alignment Team on related topics, and have received two Best Paper awards for research I led.

Personal Website | Google Scholar | LinkedIn | X

Aashiq Muhamed

I am a PhD student in Machine Learning at Carnegie Mellon University, advised by Mona Diab and Virginia Smith, and a two-time MATS scholar. My research develops mechanistic foundations for understanding and governing multi-agent AI systems, using interpretability to uncover how foundation models coordinate, cooperate, and potentially collude. My fellowship project applies this lens to agentic collusion, building benchmarks that move beyond behavioural evaluation to probe the internal mechanisms driving collusive reasoning. Prior to my PhD, I spent four years as an Applied Scientist building large-scale AI systems at Amazon.

Personal Website | Google Scholar | LinkedIn | X

Annie Ulichney

I am a PhD student in Statistics at the University of California, Berkeley studying the statistical foundations of responsible AI. My research examines how AI systems can support decision-making in settings shaped by uncertainty, incentives, and data limitations. I design systems that are robust to real-world implementation challenges and develop tools to rigorously evaluate system performance under these constraints. Drawing on statistics, machine learning, and economics, I study how to ensure that the resulting algorithms are reliable and promote socially responsible outcomes. My work is supported by the National Science Foundation Graduate Research Fellowship Program (GRFP), and I previously received my Bachelor's degree in Applied Mathematics and Mechanical Engineering from Yale University.

Personal Website | Google Scholar | LinkedIn | X

Dhara Yu

I am a cognitive science PhD student at the University of California, Berkeley, advised by Bill Thompson. I study the computational principles that underlie social cognition in humans and AI systems. I am interested in understanding how intelligent agents solve difficult coordination problems by developing abstractions such as social norms. I hope to apply these insights to build AI systems that can facilitate human cooperation. Before my PhD, I earned a BS in Symbolic Systems and an MS in Computer Science from Stanford University.

Personal Website | Google Scholar

Joseph Bejjani

I am currently completing my A.B. (B.A.) in Computer Science at Harvard University. I do research in Kianté Brantley's group at the Kempner Institute, focusing on technical AI alignment and multi-agent systems. I am interested in understanding how unintended behaviours emerge in AI systems with interaction and scale, and particularly the ways in which AIs (mis)generalise from their training. I've worked on evaluating agent propensities, simulating ecosystems of 60,000+ agents, debugging biases in reward models for RLHF, and applying evolutionary approaches to deep learning. I aim to better understand AI systems and their failure modes in order to develop more reliable methods for aligning them with human intent.

Personal Website | LinkedIn | X | GitHub

Joshua Ashkinaze

I am a PhD candidate at the University of Michigan School of Information, advised by Eric Gilbert and Ceren Budak. My research addresses population-scale human-AI interaction. In one direction, I build new multi-agent systems to simulate perspectives and test whether exposure to these simulations improves decisions and changes attitudes. I am particularly interested in using AI to improve democracy and deliberation. In another direction, I create experimental paradigms to measure the collective and long-run effects of existing AI. My work draws on pluralistic alignment, human-AI interaction, and collective intelligence.

Personal Website | Google Scholar

February 6, 2026

Goda Mockutė
Programme Manager