
Dyna Gym

This is a pip package implementing Reinforcement Learning algorithms in non-stationary environments supported by the OpenAI Gym toolkit. It contains both dynamic environments, i.e. environments whose transition and reward functions depend on time, and implementations of several algorithms.

Environments

The implemented environments are the following and can be found at dyna-gym/dyna_gym/envs. For each environment, the id given as argument to the gym.make function is written in bold.

  • RandomNSMDP-v0. A randomly generated non-stationary MDP (NSMDP);

  • NSFrozenLakeEnv-v0. A frozen lake environment where the ice is melting, resulting in time-varying transition probabilities;

NSFrozenLakeEnv-v0 environment. The probability distribution of a next state's transition depends on time.

  • NSCartPole-v0. A cart pole environment with a time-varying direction of the gravitational force;

Cart pole in the NSCartPole-v0 environment. The red bar indicates the direction of the gravitational force.

  • NSCartPole-v1. A cart pole environment with a double objective: to balance the pole and to keep the position of the cart along the x-axis within a time-varying interval;

Cart pole in the NSCartPole-v1 environment. The two red dots correspond to the limiting interval.

  • NSCartPole-v2. A cart pole environment with a time-varying cone into which the pole should balance.

Cart pole in the NSCartPole-v2 environment. The two black lines correspond to the limiting angle interval.
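All of these environments follow the usual Gym interface, with the extra property that the dynamics depend on the current time step. The sketch below illustrates that idea with a hypothetical two-state toy MDP (names and dynamics are invented for illustration; this is not code from the package):

```python
import random

class ToyNSMDP:
    """Hypothetical toy non-stationary MDP: both the transition
    probabilities and the reward drift with the time step, which is
    the defining feature of the environments listed above."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.t = 0
        self.state = 0

    def reset(self):
        self.t = 0
        self.state = 0
        return self.state

    def step(self, action):
        # Transition function depends on time: as t grows, the chance
        # of "slipping" to the unintended state increases.
        slip = min(0.5, 0.05 * self.t)
        if self.rng.random() < slip:
            action = 1 - action
        self.state = action
        # Reward function also depends on time.
        reward = 1.0 if action == (self.t % 2) else 0.0
        self.t += 1
        done = self.t >= 10
        return self.state, reward, done, {}

env = ToyNSMDP()
env.reset()
total, done = 0.0, False
while not done:
    _, reward, done, _ = env.step(0)
    total += reward
```

A stationary MDP would compute `slip` and `reward` without reference to `self.t`; everything else in the loop is the standard Gym step protocol.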

Algorithms

The implemented algorithms are the following and can be found at dyna-gym/dyna_gym/agents.

  • Random action selection;
  • Vanilla MCTS algorithm (random tree policy);
  • UCT algorithm;
  • OLUCT algorithm;
  • Online Asynchronous Dynamic Programming with tree structure.
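The agents are used in a standard agent-environment interaction loop. Below is a hedged sketch of that loop with the simplest agent listed above, random action selection; the class and method names (`RandomAgent`, `act`) are hypothetical stand-ins, not the package's exact API:

```python
import random

class RandomAgent:
    """Random action selection: the simplest baseline. Tree-search
    agents (MCTS, UCT, OLUCT) would plan inside act(), possibly using
    the time step t since the dynamics are time-dependent."""

    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def act(self, state, t):
        # A planning agent would search from (state, t) here; the
        # random baseline ignores both arguments.
        return self.rng.randrange(self.n_actions)

# Interaction against any environment with a Gym-like interface:
agent = RandomAgent(n_actions=2)
actions = [agent.act(state=None, t=t) for t in range(5)]
```

The key difference from the stationary setting is that the agent may condition its decision on the current time step as well as the state.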

Installation

Type the following commands in order to install the package:

cd dyna-gym
pip install -e .

Examples are provided in the example/ directory. You can run them using your installed version of Python.

Dependencies

Edit: June 12, 2018.

The package depends on several common Python modules. An up-to-date list is the following: copy; csv; gym; itertools; logging; math; matplotlib; numpy; random; setuptools; statistics.

Less common libraries are also used by some algorithms: scikit-learn (see website); LWPR (see the git repository for a Python 3 binding).
