Cooperative AI Research Grants

The Cooperative AI Foundation is seeking proposals for research projects in Cooperative AI. Anyone is eligible to apply, and we welcome applications from disciplines outside of computer science. This call will remain continuously open throughout 2024, and will have four deadlines after which applications will be processed: January 14, March 17, July 30, and October 6.

Scope

The Cooperative AI Foundation's (CAIF’s) mission is to support research that will improve the cooperative intelligence of advanced AI systems for the benefit of all of humanity. As the field of Cooperative AI is emerging and we are in an early stage of our grantmaking, our intention with this call is to keep the scope as wide as possible while staying true to CAIF’s mission. We will consider proposals that meet the following conditions:

  • The proposal must be for a research project (as opposed to, for example, an educational or advocacy project);
  • The proposal must be focused on multi-agent/cooperation problems involving AI systems;
  • The proposal must be such that the results could be relevant for the most advanced AI systems, including future systems;
  • The proposal should aim to contribute in a major way to societally beneficial AI development.

Further guidance on what we mean by these terms, and on some of the research directions that we are currently prioritising, can be found in the frequently asked questions and the supporting document further below. You can also read about projects that we have funded here. The Introduction to Cooperative AI Curriculum is still in development, but can already be accessed if you would like to learn more about the research field of cooperative AI. We welcome any feedback on the curriculum via this form; such feedback is particularly valuable whilst we are still developing the content.

Selection Criteria

We want to fund the projects that we expect to be the most valuable for our mission. The following five criteria will be central in our evaluation of proposals:

  • Impact: If the project is successfully executed, how important would the results be from a societal perspective? Could this be among the most important work in the field of Cooperative AI right now?
  • Tractability/feasibility: Is this project likely to be successfully executed? Is it a tractable problem in general, and does the team behind the application have the ability to carry it out?
  • Neglectedness: If we didn’t fund this project, how likely is it that very similar work would happen soon anyway? Proposals with significant potential commercial value are less likely to score well on this criterion.
  • Risk: Is there a risk that this project might have harmful consequences, such as contributing to development of more dangerous systems? Are such risks appropriately managed?
  • Cost-effectiveness: Is the level of funding requested appropriate for the scope of work? Have overhead costs been kept to a minimum?

Eligibility

Anyone is welcome to apply to this call, and we welcome applications from disciplines outside of computer science.

  • Formal training and degrees (such as a doctoral degree) can strengthen your proposal, but are not required.
  • An affiliation can, in many cases, strengthen your proposal, but is not required. Note that applications from unaffiliated individuals may take longer to process.
  • You can be located anywhere in the world. For countries with a low Corruption Perceptions Index, processing may take longer due to a more extensive due diligence process.
  • The project you propose can be up to two years long, and should begin within one year of the application deadline.
  • For now, we will not process applications for less than GBP 10,000. This may change in the future.

Funding

Our aim is to be able to cover all costs for completing accepted projects. This could include:

  • Personnel costs for research staff;
  • Materials (including software and compute);
  • Travel expenses;
  • Publication expenses.

We allow a maximum of 10% in indirect costs (overhead). We do not cover personnel costs for teaching.

We do not have a fixed upper limit on the size of funding requests we will consider, but cost-effectiveness is important to us, and we do reject proposals where the costs are not in proportion to the expected impact. The grants we have made so far range from GBP 10,000 to GBP 385,000, with a median size close to GBP 150,000.

Application Process

Applications will be processed in batches, based on the following deadlines: January 14, March 17, July 30, and October 6. Submit your application by 23:59 in your local time zone. Applications are processed via the SurveyMonkey Apply platform, and should consist of a proposal of up to five pages, together with a separate short project plan and budget. More detailed instructions about what your proposal should contain, as well as templates for the budget and project plan, can be found on the application platform. Most applicants report spending around 20-30 hours on their application.

We suggest that applicants start preparing their application well before the deadline, so that we can answer any queries in time. After each deadline, we aim to process submitted applications and provide a decision within nine weeks of the deadline. This decision will be one of the following:

  • Accepted: Your proposal is accepted as submitted, conditional on successful completion of due diligence.
  • Minor Revisions: The grant committee responds positively to your proposal but requests some minor revisions before determining whether your proposal should be accepted. You can resubmit your revised proposal before any upcoming application deadline.
  • Rejection: Your proposal is rejected. This may include suggestions for (major) revisions, after which your proposal can be resubmitted before any upcoming application deadline.

For accepted proposals, we aim to complete the full process, including due diligence, within twelve weeks of the deadline. Some proposals may, however, take longer to process, particularly in the first rounds.

FAQs

What are the most common reasons that applications are rejected?

The most common reason for rejecting a proposal is that it is not sufficiently within the scope of the grant call. There are three common types of applications that we receive that fall outside the scope of what we consider for funding:

  1. Projects where the main output is an AI-powered tool designed to solve a specific problem, such as access to healthcare, improvement of medical diagnostic tools and processes, disaster preparedness, or infrastructure/housing development. While we appreciate that these are important societal problems and that developments in cooperative AI may contribute to developments in such areas, these projects are often too superficially linked to the field of cooperative AI to be considered for funding from us.
  2. Projects where the main purpose is improving the user experience or general performance of different types of AI tools, such as tools used for software development, translation and text interpretation, or AI personal assistants. Applicants often frame this as “improving the cooperation between the AI and the user”, but this narrow form of multi-agent cooperation problem is less of a priority from the perspective of CAIF’s mission.
  3. Projects that study how AI is used in society, with only a weak or superficial link to cooperative AI. Such projects are often focused on evaluating the impact of AI development on a specific group of stakeholders, on how users perceive AI-generated content, or on questions of ownership, liability, or copyright. Again, while such topics are important, they (mostly) do not fall under the category of cooperative AI.

Additionally, there are some common reasons why we reject proposals that are within the scope of our call:

  • The proposal is lacking in technical detail, which makes it difficult or impossible to evaluate properly as it is unclear what exactly the applicant proposes to do, or what kind of results they are aiming for;
  • The proposed work would only lead to incremental progress and is unlikely to make a significant difference;
  • The proposed work (or very similar work) seems likely to happen without our support;
  • The proposed work does not take relevant LLM development into consideration;
  • The proposed work is focused on a domain that is very simple or narrow and the results are not likely to generalise or scale well to more complex settings;
  • The budget is unreasonably high relative to the scope of the work.

My application was rejected with some feedback. Should I submit a new version?

As many applicants value feedback on their applications regardless of the outcome, we aim to provide constructive comments on as many applications as our limited resources allow. Receiving feedback on your application should therefore not, by default, be taken to mean that we would fund a new version of your application in which the feedback is addressed. In some cases, however, you may be able to use the feedback to produce a new application that we would fund, so it is hard to give a general answer to this question.

Note the distinction between rejection and minor revisions: if reviewers believe there is a high chance that you will be able to address the feedback in such a way that your application qualifies for funding, you will be asked for a minor revision. A rejection is an indication that the changes to the proposal would have to be significant to pass the bar for funding. If you are planning a resubmission and you are uncertain about how to interpret the feedback you received or your chances of acceptance, we highly encourage you to reach out to us and we will clarify this for your specific case.

What do you mean by "multi-agent/cooperation problems"?

We fund research that is focused on addressing multi-agent/cooperation problems involving AI. Typically, multi-agent problems include multiple artificial agents, or multiple human agents and at least one artificial agent. A few important things to note on this topic are:

  • We believe that the most important cooperation failures occur when agents have different objectives. This means that we are significantly less likely to view work in the fully cooperative setting as scoring highly on impact, in terms of CAIF’s mission;
  • We will generally not consider work on aligning one artificial agent to one principal (a human), even if that problem technically does consist of two agents;
  • Please note that cooperative AI does not necessarily include work on helping humans cooperate to build AI – in short, our focus is on "AI for cooperation", not "cooperation for AI".

What do you mean by "societally beneficial"?

Advanced AI is likely to transform society in many ways. We are focused on ensuring that the large-scale consequences for humanity are beneficial. In practice, this means that we are especially excited about work that has a clear path to positive impact at a very large scale (see the examples below), rather than narrower applications (such as coordination between autonomous vehicles).


For example, this could include work on AI tools for collective decision-making that could demonstrably scale to vast populations, allowing deeper democratic engagement and consensus building, or technical research on how to avoid the most severe kinds of conflict involving AI systems (which are increasingly being used in high-stakes military situations).

How do you assess if my research will be relevant for the most advanced AI systems, including future systems?

We cannot know for sure which work will be most relevant to AI systems that do not yet exist, but there are some things that can make work less likely to be relevant for future systems. One example is research that depends heavily on properties of existing systems that we expect to change in subsequent generations. Another example is theoretical analyses that make very restrictive assumptions and are unlikely to generalise, such that they tell us little about real-world advanced AI systems.

How "neglected" does something have to be for you to fund it?

It is important that we use our funding in a cost-effective way to fulfil our mission. Part of this is avoiding funding work that would be likely to happen (soon) regardless of our support. This is often hard to evaluate, but, for example, it is unlikely that we would fund research aimed at producing results or patents that would be commercially valuable, as this type of research can typically attract other funding.

I’m still uncertain whether my proposal fits the scope of the call. Can I get some early feedback?

Yes, we want to help you with this! If you have gone over the call description and this FAQ and you are still uncertain whether you should apply, you are welcome to reach out to us with a short description of your idea. You can email us at grants@cooperativeai.org or use our contact form. We will try to provide feedback to everyone and also continue to update this FAQ. Note that most of the time we will only be able to give feedback on how well your project fits the scope.

Can I submit more than one application?

Yes, if you have more than one distinct research proposal that falls under the scope of the grant call, then you are welcome to submit more than one application. Each application will be considered completely independently.

Do you offer feedback on rejected applications?

For applications that are rejected in screening we will have limited ability to give feedback, but we might, for example, indicate if we think the proposal was out of scope for the call. For applications that are rejected after going through our full review process we will attempt to provide at least a couple of sentences of feedback or reason for rejection. Our capacity to do this will depend on how many applications we receive.

How much funding can I apply for?

We have not set an upper bound on how much funding you can apply for. However, the budget should cover a maximum of two years, and you should ensure prudent use of funds and consider the cost-effectiveness criterion. For now, we will not process applications for less than GBP 10,000. This may change in the future. The grants we have made so far range from GBP 10,000 to GBP 385,000, with a median size close to GBP 150,000.

Can I apply for funding for travel?

You can include travel expenses in your application budget as part of a research project, if the travel is directly related to and important for the project. However, we do not consider applications that are only for travel expenses.

Is there a limit on indirect costs/overhead?

Yes, we allow a maximum of 10% indirect costs.

Will funds to universities be processed as a "grant" or a "gift"?

We offer funding for universities in the form of grants (as opposed to unconditional gifts).

Who will see my application?

Your application will be read by administrative staff and by internal reviewers employed by CAIF. If your application progresses past this stage, it will also be reviewed by external reviewers (typically two people) who are researchers knowledgeable about the field, as well as by the grantmaking committee. CAIF’s trustees may also access applications, if necessary. We do not share the identity of any reviewers with applicants. If you have specific confidentiality concerns about your application, there is a dedicated field in the application form where you can let us know.

References

Agapiou, John P., Alexander Sasha Vezhnevets, Edgar A. Duéñez-Guzmán, Jayd Matyas, Yiran Mao, Peter Sunehag, Raphael Köster, Udari Madhushani, Kavya Kopparapu, Ramona Comanescu, DJ Strouse, Michael B. Johanson, Sukhdeep Singh, Julia Haas, Igor Mordatch, Dean Mobbs, and Joel Z. Leibo (2023). "Melting Pot 2.0". arXiv:2211.13746.
Bakker, Michiel, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, and Christopher Summerfield (2022). “Fine-tuning Language Models to Find Agreement Among Humans with Diverse Preferences”. In Proceedings of the 36th International Conference on Neural Information Processing Systems.
Bommasani, Rishi, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, and Percy Liang (2023). “Ecosystem Graphs: The Social Footprint of Foundation Models”. arXiv:2303.15772.
Brero, Gianluca, Nicolas Lepore, Eric Mibuari, and David C. Parkes (2022). “Learning to Mitigate AI Collusion on Economic Platforms”. arXiv:2202.07106.
Calvano, Emilio, Giacomo Calzolari, Vincenzo Denicolò, and Sergio Pastorello (2020). "Artificial Intelligence, Algorithmic Pricing, and Collusion." American Economic Review (110:10), pp. 3267-3297.
Conitzer, Vincent, and Caspar Oesterheld (2023). “Foundations of Cooperative AI”. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, pp. 15359-15367.
Critch, Andrew and Stuart Russell (2023). "TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI". arXiv:2306.06924.
Johnson, James (2021). "Inadvertent Escalation in the Age of Intelligence Machines: A New Model for Nuclear Risk in the Digital Age". European Journal of International Security (7:3), pp. 337-359.
Lu, Chris, Timon Willi, Christian Schroeder de Witt, and Jakob Foerster (2022). “Model-Free Opponent Shaping”. In Proceedings of the 39th International Conference on Machine Learning, pp. 14398-14411.
McKee, Kevin R., Andrea Tacchetti, Michiel A. Bakker, Jan Balaguer, Lucy Campbell-Gillingham, Richard Everett, and Matthew Botvinick (2023). “Scaffolding Cooperation in Human Groups with Deep Reinforcement Learning”. Nature Human Behaviour (7), pp. 1787-1796.
Mukobi, Gabriel, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton (2023). “Welfare Diplomacy: Benchmarking Language Model Cooperation”. arXiv:2310.08901.
Pan, Alexander, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, and Dan Hendrycks (2023). “Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark”. In Proceedings of the 40th International Conference on Machine Learning, pp. 26837-26867.
Pardoe, David, Peter Stone, Maytal Saar-Tsechansky, and Kerem Tomak (2006). “Adaptive Mechanism Design: A Metalearning Approach”. In Proceedings of the 8th International Conference on Electronic Commerce, pp. 92-102.
Roger, Fabien and Ryan Greenblatt (2023). "Preventing Language Models From Hiding Their Reasoning". arXiv:2310.18512.
Sourbut, Oliver, Lewis Hammond, and Harriet Wood (2024). "Cooperation and Control in Delegation Games". arXiv:2402.15821.
Yang, Jiachen, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, and Hongyuan Zha (2020). “Learning to Incentivize Other Learning Agents”. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 15208-15219.

Supporting documents

Research Priorities

Lewis Hammond

While the scope of this call is intentionally wide, there are some research directions that we would be especially excited to see proposals for. A brief and incomplete list of examples of such directions is provided below, with additional guidance given in the scope of the call and under the frequently asked questions. We expect to update this list over time, including with additional references (which at present are by no means comprehensive), and welcome feedback on how it could be improved.

  • Developing theoretically-grounded evaluations of cooperation-relevant properties of AI systems, especially evaluations of large language models, e.g. Agapiou et al. (2023), Mukobi et al. (2023), and Pan et al. (2023)
  • Study of how the special properties of AI systems, such as transparency or replicability, can affect cooperation, e.g. Conitzer and Oesterheld (2023) and references therein
  • Scalable techniques for using AI to enhance cooperation between humans, e.g. Bakker et al. (2022) and McKee et al. (2023)
  • Conceptual and theoretical work about how to disentangle and define cooperative capabilities and dispositions, e.g. Sourbut et al. (2024)
  • Principled and scalable methods for shaping the training processes and interactions of AI systems to lead to more (or less) cooperative outcomes, e.g. Pardoe et al. (2006), Lu et al. (2022), and Yang et al. (2020)
  • Investigations into possible multi-agent failures in high-stakes scenarios and how they might be prevented, e.g. Critch and Russell (2023) and Johnson (2021)
  • Methods for detecting and preventing collusion between AI systems, including steganographic collusion, e.g. Brero et al. (2022), Calvano et al. (2020), and Roger and Greenblatt (2023)
  • Technical research that can directly support governance efforts towards cooperation and safety in multi-agent systems, e.g. Bommasani et al. (2023)

Please note that we may also fund work that does not fall under or is not related to these examples. If you have ideas that fit the overall scope and match the selection criteria but that are different from what is listed above, please do not be discouraged from applying!