Double Experience Replay (DER)

PyTorch implementation of Double Experience Replay (DER).

This method mixes two strategies for sampling the experiences stored in the replay buffer.
You can choose whichever strategies you want; in this paper we use a temporal difference (TD) value-based sampling strategy and a uniform sampling strategy.
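As a rough illustration of the mixing idea, the sketch below draws half of each batch uniformly and half in proportion to stored TD error. The class and method names are hypothetical, not the repository's actual API:

```python
import random

class DoubleReplayBuffer:
    """Sketch of a buffer mixing two sampling strategies (hypothetical API).

    Half of each batch is drawn uniformly; the other half is drawn with
    probability proportional to each transition's absolute TD error.
    """

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.storage = []  # list of (transition, abs_td_error) pairs

    def push(self, transition, td_error):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # drop the oldest transition
        self.storage.append((transition, abs(td_error)))

    def sample(self, batch_size):
        half = batch_size // 2
        # Strategy 1: uniform sampling without replacement.
        uniform = [t for t, _ in random.sample(self.storage, half)]
        # Strategy 2: TD-error-weighted sampling (small epsilon keeps
        # zero-error transitions reachable).
        weights = [e + 1e-6 for _, e in self.storage]
        weighted = random.choices(self.storage, weights=weights,
                                  k=batch_size - half)
        return uniform + [t for t, _ in weighted]
```

The exact mixing ratio and the priority definition are choices the paper leaves open; any pair of strategies could be slotted in.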

Contents

This implementation contains:

Simulation of Urban MObility (SUMO)

  • Lane Change Environment
  • Ring Network Environment

**The YeongDong Bridge environment is not included in this repository.

Method

Experiences are sampled using both a uniform sampling strategy and a TD value-based sampling strategy.
As the training algorithm we use Deep Q-learning (DQN).
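The TD error that drives the value-based strategy is the standard Q-learning residual, delta = r + gamma * max_a' Q(s', a') - Q(s, a). A minimal list-based sketch (the repository itself uses PyTorch tensors; this helper is illustrative only):

```python
def td_error(q_values, next_q_values, action, reward, done, gamma=0.99):
    """One-step Q-learning TD error for a single transition.

    q_values:      Q(s, .) as a list of per-action values
    next_q_values: Q(s', .) for the next state
    done:          True terminates bootstrapping (target = reward)
    """
    bootstrap = gamma * max(next_q_values) * (1.0 - float(done))
    target = reward + bootstrap
    return target - q_values[action]
```

The absolute value of this quantity can then serve as the sampling priority for the TD-based half of the batch.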

Requirements

Usage

To train SUMO with the Ring Network environment:

```
cd ring
python ring.py
```

To train SUMO with the Lane Change environment:

```
cd lanechange
python lane.py
```

Result

  • YeongDong Bridge Agent (LEFT, white car)
  • Lane Change Agent (RIGHT, white car)

  • YeongDong Bridge (LEFT)
  • Ring Network (RIGHT)
