Offline Model-Based Reinforcement Learning

[Badges: pipeline passing · codecov · linting: pylint · type checks: mypy · imports: isort · code style: black · license: MIT]

This library provides a simple but high-quality baseline for playing around with model-free and model-based reinforcement learning approaches in both online and offline settings. It is mostly tested, type-hinted, and documented. Read the detailed documentation here.
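To make the setting concrete, here is a minimal, self-contained sketch of the offline model-based loop the library is built around. Everything below is illustrative only: the toy data, the linear dynamics model, and all names are placeholders, not this package's actual API.

    # Conceptual sketch only -- none of these names come from this package.
    import numpy as np

    rng = np.random.default_rng(0)

    # Offline setting: learning happens from a fixed dataset of transitions
    # (state, action, next state); no further environment interaction.
    states = rng.normal(size=(1000, 3))
    actions = rng.normal(size=(1000, 1))
    next_states = states + 0.1 * actions  # toy linear dynamics

    # Model-based step: fit a dynamics model to the dataset. A linear
    # least-squares model stands in for the neural-network ensembles a
    # real implementation would use.
    inputs = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(inputs, next_states, rcond=None)

    # A policy can then be trained on imagined rollouts from the model,
    # ideally penalized by the model's uncertainty so that it stays close
    # to the data distribution.
    s = states[:5]
    for _ in range(3):
        a = rng.normal(size=(5, 1))    # placeholder policy
        s = np.hstack([s, a]) @ W      # model-predicted next states

The library replaces each of these placeholders with tested components; see the documentation for the actual interfaces.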

This repository is based on code written for my master's thesis on uncertainty estimation in offline model-based reinforcement learning. Please cite it accordingly (see Citation below).

Contribute

Clone the repo and install the package and all required development dependencies with:

pip install -e .[dev]

After making changes to the code, make sure that static checks and unit tests pass by running tox. Tox only runs the unit tests that are not marked as slow. For faster feedback, run pytest -m fast. If you have a GPU available, please also run the slow tests with pytest -m slow.
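The fast/slow split uses ordinary pytest markers. As a rough illustration (not copied from this repository's test suite), marked tests look like this:

    # Illustrative only -- not taken from this repository's tests.
    import pytest

    @pytest.mark.fast
    def test_cheap_cpu_check():
        # Runs in the default tox environment and with `pytest -m fast`.
        assert 1 + 1 == 2

    @pytest.mark.slow
    def test_expensive_gpu_check():
        # Excluded from the default tox run; execute explicitly with
        # `pytest -m slow` on a machine with a GPU.
        assert True

Custom markers like these are normally registered in pytest's configuration (e.g. pytest.ini or pyproject.toml) so that pytest -m can filter on them without warnings.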

Citation

Feel free to use the code, but please cite it as:

@misc{peter2021ombrl,
    title={Investigating Uncertainty Estimation Methods for Offline Reinforcement Learning},
    author={Felipe Peter and Elie Aljalbout},
    year={2021}
}
