Postdoctoral Researcher at MIT

This position is based in the Algorithmic Alignment Group in the Computer Science and Artificial Intelligence Laboratory at MIT. Alongside collaborators at DeepMind and the Cooperative AI Foundation, the postdoc will lead the design and implementation of a large-scale Cooperative AI contest to take place next year at a major AI conference. We expect this work to be influential in driving progress in the field.

Deadline: 6 November 2022, 23:59 UTC (extended from 31 July 2022)

The Organisation

The Algorithmic Alignment Group is led by Dylan Hadfield-Menell, an assistant professor on the faculty of Artificial Intelligence and Decision-Making in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. His research focuses on the problem of agent alignment: the challenge of identifying behaviours that are consistent with the goals of another actor or group of actors. The Algorithmic Alignment Group works to identify algorithmic solutions to alignment problems that arise from groups of AI systems, principal-agent pairs (i.e., human-robot teams), and societal oversight of ML systems.

The Role

A core component of human intelligence is the ability to cooperate effectively with a broad range of agents across a variety of one-shot or few-shot interactions in which agents are interdependent. A key open problem in AI is the development of agents or populations that mimic this cognitive ability. Progress here promises to improve design strategies for helpful and prosocial artificial agents, and can shed light on how these abilities develop in humans. We are looking for a postdoc to lead innovative research on these problems, specifically through the lens of multi-agent reinforcement learning. Specific goals of the research include:

  • The design and launch of a contest on multi-agent reinforcement learning approaches to Cooperative AI at a top-tier ML conference, to standardise approaches and infrastructure for cooperation research,
  • Leading novel research on the incentives for cooperative behaviours,
  • Leading a contest summary paper, written with the participants and senior advisors, that distils what was learned about the relative strengths and weaknesses of different approaches to Cooperative AI.

This is a highly technical and visible position, and the postdoc will play an integral role in designing the contest. The postdoc will work with an interdisciplinary team from several institutions, including DeepMind and the Cooperative AI Foundation, as well as with MIT faculty, staff, and students.

The successful candidate will be expected to start in either September 2022 or January 2023. This is a temporary position with funding initially provided for 12 months, with a funding renewal process due to take place 9 months after the start date.

Location

The position is based on-site at MIT in Cambridge, Massachusetts. The role also includes travel expenses for several trips to the UK, providing opportunities for in-person collaboration with researchers at DeepMind and the Cooperative AI Foundation.

Salary

The salary for the role is $75,000. The role also includes all standard employee benefits offered by MIT.

Application Process

If you are interested in this position, please apply via email before the deadline listed above. Applications should include:

  • A CV,
  • A cover letter that explains why you are interested in this topic and that highlights up to three of your most relevant publications, and any other relevant skills or experience,
  • Three reference letters.

Candidates who pass the application review will be invited to interview, after which a final decision will be made.

MIT is an equal opportunity employer and academic institution; please find further information here. Candidates from backgrounds traditionally underrepresented in CS and AI are especially encouraged to apply. If you have specific needs or circumstances that require accommodation, please note this in your application.

Qualifications

Candidates should have technical knowledge of computer science, AI, and machine learning, as well as a strong background or interest in Cooperative AI research and allied fields in the social and natural sciences. The minimum required education and experience are as follows:

  • PhD in computer science or another scientific or engineering field with a focus on computation,
  • Familiarity with open-source philosophy and methodologies,
  • Broad technical knowledge of current AI and ML research and practice,
  • Technical experience with implementation of AI or ML tools/models/artefacts,
  • Excellent software engineering and communication skills in academic research settings.

Preferred education and experience include:

  • Experience with PyTorch or other ML libraries,
  • Demonstrated success at publishing in top-tier AI/ML venues,
  • Technical expertise in large-scale reinforcement learning methods,
  • Technical expertise in multi-agent learning and/or game theory,
  • Interest in doing novel research on cooperation and multi-agent reinforcement learning.