Workshop Papers
Tuesday 14 December - NeurIPS 2021


  • Paper Submission Deadline: 25 September 2021

  • Final Decisions by: 26 October 2021

  • Camera Ready Deadline: 24 November 2021

  • Workshop Poster Deadline: 1 December 2021

  • Workshop: Tuesday 14 December 2021

Accepted Papers

Thank you to everyone who submitted a paper to the workshop. The organisers are pleased to confirm the list of accepted papers:

  1. (Best Paper Award - Spotlight Talk 1) Interactive Inverse Reinforcement Learning for Cooperative Games
    (Thomas Kleine Buening, Anne-Marie George, Christos Dimitrakakis)

  2. (Best Paper Award - Spotlight Talk 2) Learning to solve complex tasks by growing knowledge culturally across generations
    (Michael Henry Tessler, Jason Madeano, Pedro Tsividis, Noah Goodman, Joshua B. Tenenbaum)

  3. (Best Paper Award - Spotlight Talk 3) On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC)
    (Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, Satish Ukkusuri)

  4. (Best Paper Award - Spotlight Talk 4) Public Information Representation for Adversarial Team Games
    (Luca Carminati, Federico Cacciamani, Marco Ciccone, Nicola Gatti)

  5. A Fine-Tuning Approach to Belief State Modeling
    (Samuel Sokota, Hengyuan Hu, David J Wu, Jakob Nicolaus Foerster, Noam Brown)

  6. A taxonomy of strategic human interactions in traffic conflicts
    (Atrisha Sarkar, Kate Larson, Krzysztof Czarnecki)

  7. Ambiguity Can Compensate for Semantic Differences in Human-AI Communication
    (Özgecan Koçak, Sanghyun Park, Phanish Puranam)

  8. Automated Configuration and Usage of Strategy Portfolios for Bargaining
    (Bram M. Renting, Holger Hoos, Catholijn M Jonker)

  9. Bayesian Inference for Human-Robot Coordination in Parallel Play
    (Shray Bansal, Jin Xu, Ayanna Howard, Charles Lee Isbell)

  10. Causal Multi-Agent Reinforcement Learning: Review and Open Problems
    (St John Grimbly, Jonathan Phillip Shock, Arnu Pretorius)

  11. Coalitional Bargaining via Reinforcement Learning: An Application to Collaborative Vehicle Routing
    (Stephen Mak, Liming Xu, Tim Pearce, Michael Ostroumov, Alexandra Brintrup)

  12. Coordinated Reinforcement Learning for Optimizing Mobile Networks
    (Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson)

  13. Disinformation, Stochastic Harm, and Costly Effort: A Principal-Agent Analysis of Regulating Social Media Platforms
    (Shehroze Khan, James R. Wright)

  14. Fool Me Three Times: Human-Robot Trust Repair & Trustworthiness Over Multiple Violations and Repairs
    (Connor Esterwood, Lionel Robert)

  15. Generalized Belief Learning in Multi-Agent Settings
    (Darius Muglich, Luisa M Zintgraf, Christian Schroeder de Witt, Shimon Whiteson, Jakob Nicolaus Foerster)

  16. Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria
    (Kavya Kopparapu, Edgar A. Duéñez-Guzmán, Jayd Matyas, Alexander Sasha Vezhnevets, John P Agapiou, Kevin R. McKee, Richard Everett, Janusz Marecki, Joel Z Leibo, Thore Graepel)

  17. I Will Have Order! Optimizing Orders for Fair Reviewer Assignment
    (Justin Payan, Yair Zick)

  18. Learning Collective Action under Risk Diversity
    (Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani, Francisco C. Santos)

  19. Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning
    (Roy Zohar, Shie Mannor, Guy Tennenholtz)

  20. Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination
    (Rui Zhao, Jinming Song, Hu Haifeng, Yang Gao, Yi Wu, Zhongqian Sun, Yang Wei)

  21. Modular Design Patterns for Hybrid Actors
    (André Meyer-Vitali, Wico Mulder, Maaike de Boer)

  22. Multi-lingual agents through multi-headed neural networks
    (Jonathan David Thomas, Raul Santos-Rodriguez, Robert Piechocki, Mihai Anca)

  23. Normative disagreement as a challenge for Cooperative AI
    (Julian Stastny, Maxime Nicolas Riché, Alexander Lyzhov, Johannes Treutlein, Allan Dafoe, Jesse Clifton)

  24. On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios
    (Francis Rhys Ward)

  25. On the Importance of Environments in Human-Robot Coordination
    (Matthew Christopher Fontaine, Ya-Chuan Hsu, Yulun Zhang, Bryon Tjanaka, Stefanos Nikolaidis)

  26. On-the-fly Strategy Adaptation for ad-hoc Agent Coordination
    (Jaleh Zand, Jack Parker-Holder, Stephen J. Roberts)

  27. PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration
    (Pengyi Li, Hongyao Tang, Tianpei Yang, Xiaotian Hao, Sang Tong, Yan Zheng, Jianye Hao, Matthew E. Taylor, Jinyi Liu)

  28. Preprocessing Reward Functions for Interpretability
    (Erik Jenner, Adam Gleave)

  29. Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication
    (Ofir Abu, Matthias Gerstgrasser, Jeffrey Rosenschein, Sarah Keren)

  30. Reinforcement Learning Under Algorithmic Triage
    (Eleni Straitouri, Adish Singla, Vahid Balazadeh Meresht, Manuel Gomez Rodriguez)

  31. The challenge of redundancy on multi agent value factorisation
    (Siddarth Singh, Benjamin Rosman)

  32. The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent Settings
    (Martino Bernasconi de Luca, Federico Cacciamani, Simone Fioravanti, Nicola Gatti, Francesco Trovò)

  33. The Power of Communication in a Distributed Multi-Agent System
    (Philipp Dominic Siedler)

  34. Towards Incorporating Rich Social Interactions Into MDPs
    (Ravi Tejwani, Yen-Ling Kuo, Tianmin Shu, Bennett Stankovits, Dan Gutfreund, Joshua B. Tenenbaum, Boris Katz, Andrei Barbu)

  35. When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans
    (Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan Losey, Dorsa Sadigh)