Gymnasium is an open source Python library for developing and comparing reinforcement learning algorithms. It provides a standard API for communication between learning algorithms and environments, as well as a standard set of environments compliant with that API. This is a fork of OpenAI's Gym library by its maintainers (OpenAI handed over maintenance a few years ago to an outside team), and is where future maintenance will occur going forward.
The documentation website is at gymnasium.farama.org, and we have a public discord server (which we also use to coordinate development work) that you can join here: https://discord.gg/bnJ6kubTg6
Gymnasium includes the following families of environments, along with a wide variety of third-party environments:
- Classic Control - These are classic reinforcement learning environments based on real-world problems and physics.
- Box2D - These environments all involve toy games based around physics control, using Box2D-based physics and pygame-based rendering.
- Toy Text - These environments are designed to be extremely simple, with small discrete state and action spaces, and hence easy to learn. As a result, they are suitable for debugging implementations of reinforcement learning algorithms.
- MuJoCo - Physics-engine-based environments with multi-joint control, which are more complex than the Box2D environments.
- Atari - Emulated Atari 2600 games with a wide range of complexity for agents to learn.
- Third-party - A number of environments have been created that are compatible with the Gymnasium API. Be aware of the version that the software was created for and use the `apply_env_compatibility` in `gymnasium.make` if necessary.
To install the base Gymnasium library, use `pip install gymnasium`.
This does not include dependencies for all families of environments (there's a massive number, and some can be problematic to install on certain systems). You can install the dependencies for one family with `pip install "gymnasium[atari]"`, or use `pip install "gymnasium[all]"` to install all dependencies.
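Because the extras are optional, a script can check at runtime whether a family's dependencies are importable before trying to create its environments. A minimal sketch using only the standard library; the mapping of extras to module names below (e.g. `ale_py` for Atari) is an assumption for illustration, not documented Gymnasium behavior:

```python
import importlib.util

def available_modules(module_names):
    """Return the subset of module_names that can actually be imported."""
    return [name for name in module_names if importlib.util.find_spec(name) is not None]

# Hypothetical mapping of extras to the modules they are assumed to provide.
FAMILY_MODULES = {
    "atari": ["ale_py"],
    "mujoco": ["mujoco"],
    "box2d": ["Box2D"],
}

for family, modules in FAMILY_MODULES.items():
    status = "available" if available_modules(modules) == modules else "missing"
    print(f"{family}: {status}")
```

This lets a script fail early with a clear message instead of a deep import error inside `gymnasium.make`.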
We support and test for Python 3.8, 3.9, 3.10, 3.11 and 3.12 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.
The Gymnasium API models environments as simple Python `env` classes. Creating environment instances and interacting with them is very simple; here's an example using the "CartPole-v1" environment:
```python
import gymnasium as gym

env = gym.make("CartPole-v1")

observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()  # this is where you would insert your policy
    observation, reward, terminated, truncated, info = env.step(action)

    if terminated or truncated:
        observation, info = env.reset()
env.close()
```
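The `reset`/`step` contract in the loop above can be mimicked by any class exposing the same signatures, which is useful for understanding the five-tuple that `step` returns. The toy `CountdownEnv` below is a hypothetical, dependency-free sketch for illustration only, not part of Gymnasium:

```python
class CountdownEnv:
    """Toy environment mimicking the Gymnasium reset/step interface.

    The episode terminates when the counter reaches zero,
    or truncates after max_steps steps.
    """

    def __init__(self, start=5, max_steps=20):
        self.start = start
        self.max_steps = max_steps

    def reset(self, seed=None):
        self.count = self.start
        self.steps = 0
        return self.count, {}  # observation, info

    def step(self, action):
        self.steps += 1
        self.count -= action  # action: 0 (wait) or 1 (decrement)
        terminated = self.count <= 0          # the task itself ended
        truncated = self.steps >= self.max_steps  # an external time limit hit
        reward = 1.0 if terminated else 0.0
        return self.count, reward, terminated, truncated, {}

env = CountdownEnv()
observation, info = env.reset(seed=42)
for _ in range(30):
    action = 1  # a trivial policy: always decrement
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
```

Note the distinction the API draws: `terminated` means the environment itself reached a terminal state, while `truncated` means an external limit (such as a step cap) cut the episode short.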
The following is an incomplete list of related libraries; it includes just the ones that the maintainers most commonly point newcomers to when asked for recommendations.
- CleanRL is a learning library based on the Gymnasium API. It is designed to cater to newcomers to the field and provides very good reference implementations.
- PettingZoo is a multi-agent version of Gymnasium with a number of implemented environments, e.g. multi-agent Atari environments.
- The Farama Foundation also has a collection of many other environments that are maintained by the same team as Gymnasium and use the Gymnasium API.
Gymnasium keeps strict versioning for reproducibility reasons. All environment IDs end in a suffix like "-v0". When changes are made to an environment that might impact learning results, the number is increased by one to prevent potential confusion. This versioning scheme was inherited from Gym.
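Because of this convention, an environment ID can be split into a name and version mechanically. A small illustrative helper using only the standard library; `parse_env_id` is a hypothetical function for this sketch, not part of the Gymnasium API:

```python
import re

def parse_env_id(env_id):
    """Split an environment ID like 'CartPole-v1' into (name, version)."""
    match = re.fullmatch(r"(?P<name>.+)-v(?P<version>\d+)", env_id)
    if match is None:
        raise ValueError(f"not a versioned environment ID: {env_id!r}")
    return match["name"], int(match["version"])
```

For example, `parse_env_id("CartPole-v1")` yields `("CartPole", 1)`, so tooling can group results by environment name while still recording exactly which version produced them.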
We have a roadmap for future development work for Gymnasium available here: Farama-Foundation#12
If you are financially able to do so and would like to support the development of Gymnasium, please join others in the community in donating to us.
You can cite Gymnasium using our related paper (https://arxiv.org/abs/2407.17032) as:
```bibtex
@article{towers2024gymnasium,
  title={Gymnasium: A Standard Interface for Reinforcement Learning Environments},
  author={Towers, Mark and Kwiatkowski, Ariel and Terry, Jordan and Balis, John U and De Cola, Gianluca and Deleu, Tristan and Goul{\~a}o, Manuel and Kallinteris, Andreas and Krimmel, Markus and KG, Arjun and others},
  journal={arXiv preprint arXiv:2407.17032},
  year={2024}
}
```