Cooperative AI Research Grants

The Cooperative AI Foundation is seeking proposals for research projects in Cooperative AI. Anyone is eligible to apply, and we welcome applications from disciplines outside of computer science. The call will remain continuously open throughout 2024, with four deadlines after which applications will be processed: January 14, March 17, July 28, and October 13.

Scope

The Cooperative AI Foundation’s (CAIF’s) mission is to support research that will improve the cooperative intelligence of advanced AI systems for the benefit of all of humanity. Because the field of Cooperative AI is still emerging and we are at an early stage of our grantmaking, our intention with this call is to keep the scope as wide as possible while staying true to CAIF’s mission. We will consider proposals that meet the following conditions:

  • The proposal must be for a research project (as opposed to, for example, an educational or advocacy project);
  • The proposal must be focused on the development of AI to help address multi-agent/cooperation problems;
  • The proposal must be such that the results could be relevant for the most advanced AI systems, including future systems;
  • The proposal should aim to contribute in a major way to societally beneficial AI development.

Further guidance on what we mean by these terms and on some of the research directions that we are currently prioritising can be found in the frequently asked questions and the supporting document further below.

Selection Criteria

We want to fund the projects that we expect to be the most valuable for our mission. The following five criteria will be central in our evaluation of proposals:

  • Impact: If the project is successfully executed, how important would the results be from a societal perspective? Could this be among the most important work in the field of Cooperative AI right now?
  • Tractability/feasibility: Is this project likely to be successfully executed? Is it a tractable problem in general, and does the team behind the application have the ability to carry it out?
  • Neglectedness: If we didn’t fund this project, how likely is it that very similar work would happen soon anyway? Proposals with significant potential commercial value are less likely to score well on this criterion.
  • Risk: Is there a risk that this project might have harmful consequences, such as contributing to the development of more dangerous systems? Are such risks appropriately managed?
  • Cost-effectiveness: Is the level of funding requested appropriate for the scope of work? Have overhead costs been kept to a minimum?

Eligibility

Anyone may apply to this call, and we welcome applications from disciplines outside of computer science.

  • Formal training and degrees (such as a doctoral degree) can strengthen your proposal, but are not required.
  • An affiliation can, in many cases, strengthen your proposal, but is not required. Note that processing of applications from unaffiliated individuals may take longer.
  • You can be located anywhere in the world. For applicants in countries with a low Corruption Perceptions Index score, processing may take longer due to a more extensive due diligence process.
  • The project you propose can be up to two years long and should begin no more than one year after the application deadline.
  • For now, we will not process applications for less than GBP 10,000. This may change in the future.

Our aim is to be able to cover all costs for completing accepted projects. This could include:

  • Personnel costs for research staff;
  • Materials (including software and compute);
  • Travel expenses;
  • Publication expenses.

We allow a maximum of 10% in indirect costs (overhead). We do not cover personnel costs for teaching.

We do not have a fixed upper limit on the size of funding requests we will consider, but cost-effectiveness is important to us, and we do reject proposals where the costs are out of proportion to the expected impact. The grants we have made so far range from GBP 10,000 to GBP 385,000, with a median size close to GBP 150,000.

Application Process

Applications will be processed in quarterly batches, based on the following deadlines: January 14, March 17, July 28, and October 13. Applications are submitted via the SurveyMonkey Apply platform and should consist of a four-page proposal with a separate short project plan and budget. More detailed instructions about what your proposal should contain, as well as templates for the budget and project plan, can be found on the application platform.

We suggest that applicants start preparing their application well before the deadline, so that we can answer any queries in time. After each deadline, we aim to process submitted applications and provide a decision within nine weeks. This decision will be one of the following:

  • Accepted: Your proposal is accepted as submitted, conditional on successful completion of due diligence.
  • Minor Revisions: The grant committee responds positively to your proposal but requires some minor revisions before it can determine whether the proposal should be accepted. You can resubmit your revised proposal before any upcoming application deadline.
  • Rejection: Your proposal is rejected. The decision may include suggestions for (major) revisions, after which your proposal can be resubmitted before any upcoming application deadline.

For accepted proposals, we aim to complete the full process, including due diligence, within twelve weeks of the deadline. However, some proposals may take longer to process, particularly in the first rounds.

FAQs

What do you mean by "AI to help address multi-agent/cooperation problems"?
Typically, multi-agent problems involve multiple artificial agents, or multiple human agents and at least one artificial agent. We believe that the most important cooperation failures occur when agents have different objectives. This means that we are significantly less likely to view work in the fully cooperative setting as scoring highly on impact in terms of CAIF’s mission. We will also generally not consider work on aligning one artificial agent to one principal (a human), even if that problem technically does consist of two agents. Finally, please note that Cooperative AI does not necessarily include work on helping humans cooperate to build AI – in short, our focus is on "AI for cooperation", not "cooperation for AI".
How do you assess if my research will be relevant for the most advanced AI systems, including future systems?
We cannot know for sure which work will be most relevant to AI systems that do not yet exist, but there are some things that make work less likely to be relevant for future systems. For example, research that depends heavily on properties of existing systems that we expect to change in subsequent generations is less likely to remain relevant. Likewise, theoretical analyses that make very restrictive assumptions and are unlikely to generalise are unlikely to tell us much about real-world advanced AI systems.
What do you mean by "societally beneficial"?
Advanced AI is likely to transform society in many ways. We are focused on ensuring that the large-scale consequences for humanity are beneficial. In practice, this means that we are especially excited about work that has a clear path to positive impact at a very large scale (see examples below), rather than narrower applications (such as coordination between autonomous vehicles). For example, this could include work on AI tools for collective decision-making that can demonstrably scale to vast populations, allowing deeper democratic engagement and consensus building, or technical research on how to avoid the most severe kinds of conflict involving AI systems (which are increasingly being used in high-stakes military situations).
How "neglected" does something have to be for you to fund it?
It is important that we use our funding in a cost-effective way to fulfil our mission. Part of this is avoiding funding work that would be likely to happen (soon) regardless of our support. This is often hard to evaluate, but, for example, it is unlikely that we would fund research aimed at producing commercially valuable results or patents, as this type of research can typically attract other funding.
I’m still uncertain whether my proposal fits the scope of the call. Can I get some early feedback?
Yes, we want to help you with this! If you have read the call description and this FAQ and are still uncertain whether you should apply, you are welcome to reach out to us with a short description of your idea. You can email us at grants@cooperativeai.org or use our contact form. We will try to provide feedback to everyone and will also continue to update this FAQ. Note that we will only give feedback on how well your project fits the scope.
How much funding can I apply for?
We have not set any upper bound for how much funding you can apply for. However, the budget should be for a maximum of two years, and you should ensure a prudent use of funds and consider the criteria on cost-effectiveness. To begin with, we will not process applications for less than GBP 10,000. This may change in the future. The grants we have made so far range from GBP 10,000 to GBP 385,000, with a median size close to GBP 150,000.
Can I apply for funding for travel?
You can include travel expenses in your application budget as part of a research project, if the travel is directly related to and important for the project. However, we do not consider applications that are only for travel expenses.
Who will see my application?
Your application will be read by administrative staff and by internal reviewers employed by CAIF. If your application progresses past this stage, it will also be reviewed by external reviewers (typically two people) who are researchers knowledgeable about the field, as well as by the grantmaking committee. CAIF’s trustees may also access applications, if necessary. We do not share the identity of any reviewers with applicants. If you have any specific confidentiality concerns about your application, there is a dedicated field in the application form where you can let us know.
Will funds to universities be processed as a "grant" or a "gift"?
We offer funding for universities in the form of grants (as opposed to unconditional gifts).
Do you offer feedback on rejected applications?
For applications that are rejected in screening, we will have limited ability to give feedback, but we might, for example, indicate if we think the proposal was out of scope for the call. For applications that are rejected after going through our full review process, we will attempt to provide at least a couple of sentences of feedback or a reason for rejection. Our capacity to do this will depend on how many applications we receive.
Can I submit more than one application?
Yes, if you have more than one distinct research proposal that falls under the scope of the grant call, then you are welcome to submit more than one application. Each application will be considered completely independently.
What are the most common reasons that applications are rejected?
The most common reasons that we reject proposals are:

  • Not being sufficiently in scope (e.g. focusing on alignment and not cooperation);
  • Not clearly explaining the important real-world problem/threat model that the proposal helps to address, and the underlying assumptions for it doing so;
  • Representing an incremental advance, or being likely to be achieved soon regardless of CAIF's support for the specific proposal;
  • Combining too many work packages without sufficient technical detail (especially when those work packages are mostly independent);
  • Including an unreasonably high budget or personnel requirements relative to the scope of work.
Is there a limit on indirect costs/overhead?
Yes, we allow a maximum of 10% indirect costs.

References

Agapiou, John P., Alexander Sasha Vezhnevets, Edgar A. Duéñez-Guzmán, Jayd Matyas, Yiran Mao, Peter Sunehag, Raphael Köster, Udari Madhushani, Kavya Kopparapu, Ramona Comanescu, DJ Strouse, Michael B. Johanson, Sukhdeep Singh, Julia Haas, Igor Mordatch, Dean Mobbs, and Joel Z. Leibo (2023). “Melting Pot 2.0”. arXiv:2211.13746.
Bakker, Michiel, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, and Christopher Summerfield (2022). “Fine-tuning Language Models to Find Agreement Among Humans with Diverse Preferences”. In Proceedings of the 36th International Conference on Neural Information Processing Systems.
Bommasani, Rishi, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, and Percy Liang (2023). “Ecosystem Graphs: The Social Footprint of Foundation Models”. arXiv:2303.15772.
Brero, Gianluca, Nicolas Lepore, Eric Mibuari, and David C. Parkes (2022). “Learning to Mitigate AI Collusion on Economic Platforms”. arXiv:2202.07106.
Calvano, Emilio, Giacomo Calzolari, Vincenzo Denicolò, and Sergio Pastorello (2020). “Artificial Intelligence, Algorithmic Pricing, and Collusion”. American Economic Review (110:10), pp. 3267-3297.
Conitzer, Vincent, and Caspar Oesterheld (2023). “Foundations of Cooperative AI”. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, pp. 15359-15367.
Critch, Andrew, and Stuart Russell (2023). “TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI”. arXiv:2306.06924.
Johnson, James (2021). “Inadvertent Escalation in the Age of Intelligence Machines: A New Model for Nuclear Risk in the Digital Age”. European Journal of International Security (7:3), pp. 337-359.
Lu, Chris, Timon Willi, Christian Schroeder de Witt, and Jakob Foerster (2022). “Model-Free Opponent Shaping”. In Proceedings of the 39th International Conference on Machine Learning, pp. 14398-14411.
McKee, Kevin R., Andrea Tacchetti, Michiel A. Bakker, Jan Balaguer, Lucy Campbell-Gillingham, Richard Everett, and Matthew Botvinick (2023). “Scaffolding Cooperation in Human Groups with Deep Reinforcement Learning”. Nature Human Behaviour (7), pp. 1787-1796.
Mukobi, Gabriel, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton (2023). “Welfare Diplomacy: Benchmarking Language Model Cooperation”. arXiv:2310.08901.
Pan, Alexander, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, and Dan Hendrycks (2023). “Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark”. In Proceedings of the 40th International Conference on Machine Learning, pp. 26837-26867.
Pardoe, David, Peter Stone, Maytal Saar-Tsechansky, and Kerem Tomak (2006). “Adaptive Mechanism Design: A Metalearning Approach”. In Proceedings of the 8th International Conference on Electronic Commerce, pp. 92-102.
Roger, Fabien, and Ryan Greenblatt (2023). “Preventing Language Models From Hiding Their Reasoning”. arXiv:2310.18512.
Sourbut, Oliver, Lewis Hammond, and Harriet Wood (2024). “Cooperation and Control in Delegation Games”. arXiv:2402.15821.
Yang, Jiachen, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, and Hongyuan Zha (2020). “Learning to Incentivize Other Learning Agents”. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 15208-15219.

Supporting Documents

Research Priorities

Lewis Hammond

While the scope of this call is intentionally wide, there are some research directions that we would be especially excited to see proposals for. A brief and incomplete list of examples of such directions is provided below, with additional guidance given in the scope of the call and under the frequently asked questions. We expect to update this list over time, including with additional references (which at present are by no means comprehensive), and welcome feedback on how it could be improved.

  • Developing theoretically-grounded evaluations of cooperation-relevant properties of AI systems, especially evaluations of large language models, e.g. Agapiou et al. (2023), Mukobi et al. (2023), and Pan et al. (2023)
  • Study of how the special properties of AI systems, such as transparency or replicability, can affect cooperation, e.g. Conitzer and Oesterheld (2023) and references therein
  • Scalable techniques for using AI to enhance cooperation between humans, e.g. Bakker et al. (2022) and McKee et al. (2023)
  • Conceptual and theoretical work on how to disentangle and define cooperative capabilities and dispositions, e.g. Sourbut et al. (2024)
  • Principled and scalable methods for shaping the training processes and interactions of AI systems to lead to more (or less) cooperative outcomes, e.g. Pardoe et al. (2006), Lu et al. (2022), and Yang et al. (2020)
  • Investigations into possible multi-agent failures in high-stakes scenarios and how they might be prevented, e.g. Critch and Russell (2023) and Johnson (2021)
  • Methods for detecting and preventing collusion between AI systems, including steganographic collusion, e.g. Brero et al. (2022), Calvano et al. (2020), and Roger and Greenblatt (2023)
  • Technical research that can directly support governance efforts towards cooperation and safety in multi-agent systems, e.g. Bommasani et al. (2023)

Please note that we may also fund work that does not fall under, or relate to, these examples. If you have ideas that fit the overall scope and match the selection criteria but differ from what is listed above, please do not be discouraged from applying!