Mava allows researchers to experiment with multi-agent reinforcement learning (MARL) at lightning speed. The single-file JAX implementations are built for rapid research iteration - hack, modify, and test new ideas fast. Our state-of-the-art algorithms scale seamlessly across devices. Created for researchers, by The Research Team at InstaDeep.
- Implementations of MARL algorithms: distributed implementations of current state-of-the-art MARL algorithms that make effective use of available accelerators.
- Environment Wrappers: we provide first-class support for several JAX-based MARL environment suites through the use of wrappers; new environments can easily be added by using the existing wrappers as a guide.
- Statistically robust evaluation: Mava natively supports logging to JSON files which adhere to the standard suggested by Gorsane et al. (2022). This enables easy downstream experiment plotting and aggregation using the tools found in the MARL-eval library.
- JAX Distribution Architectures for Reinforcement Learning: Mava supports both Podracer architectures for scaling RL systems. The first of these is Anakin, which can be used when environments are written in JAX. This enables end-to-end JIT compilation of the full MARL training loop for fast experiment run times on hardware accelerators. The second is Sebulba, which can be used when environments are not written in JAX. Sebulba is particularly useful when running RL experiments where a hardware accelerator can interact with many CPU cores at a time.
- Blazingly fast experiments: all of the above allow for very fast experiment runtimes, especially when compared to other non-JAX-based MARL libraries.
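The Anakin architecture is possible because a JAX environment's step function is pure, so the entire rollout composes into a single function that can be JIT-compiled end to end. A dependency-free sketch of that pattern (toy environment and names are illustrative only, not Mava's actual API; real Mava code expresses the loop with `jax.lax.scan` and `jax.jit`):

```python
# A dependency-free sketch of the Anakin pattern: because the environment
# step is a pure function of (state, action), the whole rollout can be
# expressed as a functional loop and, in JAX, JIT-compiled end to end.
# The toy environment below is illustrative, not Mava's API.

def env_step(state, action):
    """Pure transition: returns (next_state, reward)."""
    next_state = state + action
    reward = float(next_state)
    return next_state, reward

def rollout(state, actions):
    """Functional rollout loop (stand-in for jax.lax.scan)."""
    rewards = []
    for action in actions:
        state, reward = env_step(state, action)
        rewards.append(reward)
    return state, rewards

final_state, rewards = rollout(0, [1, 2, 3])
print(final_state, rewards)  # 6 [1.0, 3.0, 6.0]
```

Because no step has hidden side effects, the same structure scales by wrapping it in `jax.vmap` over environments and `jax.pmap` over devices.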
At the moment Mava is not meant to be installed as a library, but rather to be used as a research tool. We recommend cloning the Mava repo and pip installing as follows:
```bash
git clone https://github.com/instadeepai/mava.git
cd mava
pip install -e .
```
We have tested Mava on Python 3.11 and 3.12, but earlier versions may also work. Specifically, we use Python 3.10 for the Quickstart notebook on Google Colab, since Colab uses Python 3.10 by default. Note that because the installation of JAX differs depending on your hardware accelerator, we advise users to explicitly install the correct JAX version (see the official installation guide). For more in-depth installation guides, including Docker builds and virtual environments, please see our detailed installation guide.
To get started with training your first Mava system, simply run one of the system files:
```bash
python mava/systems/ppo/anakin/ff_ippo.py
```
Mava makes use of Hydra for config management. To see our default system configs, please see the mava/configs/ directory. A benefit of Hydra is that configs can either be set in config yaml files or overwritten from the terminal on the fly. For an example of running a system on the Level-based Foraging environment, the above code can simply be adapted as follows:
```bash
python mava/systems/ppo/anakin/ff_ippo.py env=lbf
```
Different scenarios can also be run by making the following config updates from the terminal:
```bash
python mava/systems/ppo/anakin/ff_ippo.py env=rware env/scenario=tiny-4ag
```
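To see what such overrides do, here is a toy illustration of how a `key=value` argument updates a nested config. This is not Hydra's implementation (Hydra additionally handles config groups like `env=lbf`, which swap in whole YAML files), just a sketch of the idea:

```python
# Toy illustration of terminal overrides updating a nested config,
# in the spirit of Hydra. NOT Hydra's actual implementation.

def apply_override(config, override):
    """Apply a single 'dotted.key=value' override in place."""
    key, value = override.split("=", 1)
    parts = key.split(".")
    node = config
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return config

cfg = {"env": {"scenario": "tiny-2ag"}, "system": {"seed": "0"}}
apply_override(cfg, "env.scenario=tiny-4ag")
print(cfg["env"]["scenario"])  # tiny-4ag
```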
Additionally, we also have a Quickstart notebook that can be used to quickly create and train your first multi-agent system.
Mava has implementations of multiple on- and off-policy multi-agent algorithms that follow the independent learners (IL), centralised training with decentralised execution (CTDE) and heterogeneous agent learning paradigms. Aside from MARL learning paradigms, we also include implementations which follow the Anakin and Sebulba architectures to enable scalable training by default. The architecture that is relevant for a given problem depends on whether the environment being used is written in JAX or not. For more information on these paradigms, please see here.
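The core difference between the IL and CTDE paradigms is what the critic gets to see during training. A conceptual contrast (toy functions, not Mava's networks):

```python
# Conceptual contrast between independent learning (IL) and centralised
# training with decentralised execution (CTDE). Toy value functions,
# not Mava's actual networks.

def independent_value(local_obs):
    """IL-style critic: conditioned on one agent's own observation only."""
    return sum(local_obs)

def centralised_value(joint_obs):
    """CTDE-style critic: conditioned on all agents' observations,
    used only during training; execution remains decentralised."""
    return sum(x for obs in joint_obs for x in obs)

obs_per_agent = [[1.0, 2.0], [3.0, 4.0]]
print(independent_value(obs_per_agent[0]))  # 3.0
print(centralised_value(obs_per_agent))     # 10.0
```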
| Algorithm | Variants | Continuous | Discrete | Anakin | Sebulba | Paper | Docs |
|---|---|---|---|---|---|---|---|
| PPO | ff_ippo.py | ✅ | ✅ | ✅ | ✅ | Link | Link |
| | ff_mappo.py | ✅ | ✅ | ✅ | | Link | Link |
| | rec_ippo.py | ✅ | ✅ | ✅ | | Link | Link |
| | rec_mappo.py | ✅ | ✅ | ✅ | | Link | Link |
| Q Learning | rec_iql.py | | ✅ | ✅ | | Link | Link |
| | rec_qmix.py | | ✅ | ✅ | | Link | Link |
| SAC | ff_isac.py | ✅ | | ✅ | | Link | Link |
| | ff_masac.py | ✅ | | ✅ | | Link | |
| | ff_hasac.py | ✅ | | ✅ | | Link | Link |
| MAT | mat.py | ✅ | ✅ | ✅ | | Link | Link |
| Sable | ff_sable.py | ✅ | ✅ | ✅ | | Link | Link |
| | rec_sable.py | ✅ | ✅ | ✅ | | Link | Link |
These are the environments which Mava supports out of the box. To add a new environment, please use the existing wrapper implementations as an example. We also indicate whether the environment is implemented in JAX or not. JAX-based environments can be used with algorithms that follow the Anakin distribution architecture, while non-JAX environments can be used with algorithms following the Sebulba architecture.
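As a rough shape of what such a wrapper looks like, here is a minimal sketch. The method names and the agent-ID idea below are illustrative only; follow the existing wrappers in mava/wrappers/ for Mava's real interface:

```python
# A minimal sketch of an environment wrapper. The ToyEnv and the
# reset/step signatures here are illustrative, NOT Mava's actual
# interface — use the wrappers in mava/wrappers/ as the real guide.

class ToyEnv:
    num_agents = 2

    def reset(self):
        return [[0.0], [0.0]]  # one observation list per agent

    def step(self, actions):
        obs = [[float(a)] for a in actions]
        rewards = [1.0 for _ in actions]
        return obs, rewards

class AgentIDWrapper:
    """Appends a one-hot agent ID to each agent's observation."""

    def __init__(self, env):
        self._env = env
        self.num_agents = env.num_agents

    def _add_ids(self, obs):
        return [
            o + [1.0 if i == j else 0.0 for j in range(self.num_agents)]
            for i, o in enumerate(obs)
        ]

    def reset(self):
        return self._add_ids(self._env.reset())

    def step(self, actions):
        obs, rewards = self._env.step(actions)
        return self._add_ids(obs), rewards

env = AgentIDWrapper(ToyEnv())
print(env.reset())  # [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```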
| Environment | Action space | JAX | Non-JAX | Paper | JAX Source | Non-JAX Source |
|---|---|---|---|---|---|---|
| Multi-Robot Warehouse | Discrete | ✅ | ✅ | Link | Link | Link |
| Level-based Foraging | Discrete | ✅ | ✅ | Link | Link | Link |
| StarCraft Multi-Agent Challenge | Discrete | ✅ | ✅ | Link | Link | Link |
| Multi-Agent Brax | Continuous | ✅ | | Link | Link | |
| Matrax | Discrete | ✅ | | Link | Link | |
| Multi Particle Environments | Discrete/Continuous | ✅ | | Link | Link | |
We have performed a rigorous benchmark across 45 different scenarios and 6 different environment suites to validate the performance of Mava's algorithm implementations. For more detailed results please see our Sable paper and for all hyperparameters, please see the following website.
The original code in Mava was adapted from PureJaxRL, which provides high-quality single-file implementations with research-friendly features. In turn, PureJaxRL is inspired by the code philosophy of CleanRL. In this vein of easy-to-use and understandable RL codebases, Mava is not designed to be a modular library and is not meant to be imported. Our repository focuses on simplicity and clarity in its implementations while utilising the advantages offered by JAX, such as pmap and vmap, making it an excellent resource for researchers and practitioners to build upon. A notable difference between Mava and CleanRL is that Mava creates small utilities for heavily re-used elements, such as networks and logging; we've found that this, in addition to Hydra configs, greatly improves the readability of the algorithms.
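To make the vmap advantage concrete, here is what it buys you, sketched in plain Python: a function written for a single agent is mapped over the agent dimension without rewriting it. The list-comprehension stand-in below only illustrates the semantics; real `jax.vmap` traces the function so the mapped version stays a single compiled operation:

```python
# Plain-Python illustration of vmap semantics: a per-agent function is
# lifted over a leading (agent) axis without being rewritten. jax.vmap
# achieves this by tracing, keeping the result a single compiled op;
# this stand-in only demonstrates the semantics.

def actor_logits(params, obs):
    """Per-agent policy head: weights dotted with one observation."""
    return sum(p * o for p, o in zip(params, obs))

def vmapped(fn):
    """Toy stand-in for jax.vmap over the leading (agent) axis of obs."""
    def batched(params, batched_obs):
        return [fn(params, obs) for obs in batched_obs]
    return batched

params = [0.5, -1.0]
all_obs = [[2.0, 1.0], [4.0, 0.0]]  # one observation per agent
print(vmapped(actor_logits)(params, all_obs))  # [0.0, 2.0]
```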
Please read our contributing docs for details on how to submit pull requests, our Contributor License Agreement and community guidelines.
We plan to iteratively expand Mava in the following increments:
- Support for more environments.
- More robust recurrent systems.
- Support for non-JAX-based environments.
- Add Sebulba versions of more algorithms.
- Support for off-policy algorithms.
- Continuous action space environments and algorithms.
- Allow systems to easily scale across multiple TPUs/GPUs.
Please do follow along as we develop this next phase!
Mava forms part of InstaDeep's MARL ecosystem in JAX. In particular, we suggest users check out the following sister repositories:
- OG-MARL: datasets with baselines for offline MARL in JAX.
- Jumanji: a diverse suite of scalable reinforcement learning environments in JAX.
- Matrax: a collection of matrix games in JAX.
- Flashbax: accelerated replay buffers in JAX.
- MARL-eval: standardised experiment data aggregation and visualisation for MARL.
Related: other libraries related to accelerated MARL in JAX.
- JaxMARL: accelerated MARL environments with baselines in JAX.
- DeepMind Anakin for the Anakin podracer architecture to train RL agents at scale.
- Pgx: JAX implementations of classic board games, such as Chess, Go and Shogi.
- Minimax: JAX implementations of autocurricula baselines for RL.
If you use Mava in your work, please cite the accompanying technical report:
@article{dekock2023mava,
title={Mava: a research library for distributed multi-agent reinforcement learning in JAX},
author={Ruan de Kock and Omayma Mahjoub and Sasha Abramowitz and Wiem Khlifi and Callum Rhys Tilbury
and Claude Formanek and Andries P. Smit and Arnu Pretorius},
year={2023},
journal={arXiv preprint arXiv:2107.01460},
url={https://arxiv.org/pdf/2107.01460.pdf},
}
We would like to thank all the authors who contributed to the previous TF version of Mava: Kale-ab Tessera, St John Grimbly, Kevin Eloff, Siphelele Danisa, Lawrence Francis, Jonathan Shock, Herman Kamper, Willie Brink, Herman Engelbrecht, Alexandre Laterre, Karim Beguir. Their contributions can be found in our TF technical report.
The development of Mava was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).