Rolling Horizon Evolution for Generating Personalized/Balanced Racing Games

An AI algorithm implemented in Unity, developed for a computer engineering diploma thesis.


General Information

An innovative algorithm for offering personalized experiences online in a simple racing video game. A stochastic planning algorithm, the Rolling Horizon Evolutionary Algorithm (RHEA), generates parts of the race track based on two factors:

  • Difficulty of the level. 1
  • Player’s in-game performance.

The algorithm attempts to match the player's skill through track measurements. In other words, it adjusts the level of difficulty, up or down, to match player performance through its fitness function (completion time). The degree of difficulty is determined by a function rooted in Flow Channel theory. We conducted a series of experiments in our game environment with agents using human strategies and non-human player behavior.
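As a rough illustration of such a fitness function (our sketch, not the repository's code; the target time would come from the flow channel described below):

```csharp
// Hypothetical sketch of a time-balance fitness: the closer the player's
// completion time is to the target time for the intended difficulty, the
// fitter the candidate track. All names here are illustrative.
public static class TimeBalanceFitness
{
    // targetTime: the completion time predicted for a well-matched track;
    // actualTime: the player's measured completion time.
    public static float Evaluate(float actualTime, float targetTime)
    {
        // Negative absolute error: a perfect match scores 0, while tracks
        // that are too easy or too hard score progressively lower.
        return -System.Math.Abs(actualTime - targetTime);
    }
}
```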

Features

The original implementation used static paths aimed at training the agents. For information on how to run the training model in an Anaconda environment, see doc/Train-Behavior Configuration. The initial body of work comes from a Medium article; for details on the content, please check the article.





We modified the implementation: the goal is no longer to train the agents but to construct content that is matched to the player's performance.

  • Construct dynamic paths at runtime, taking into account the parameters needed for construction (distances, direction, collisions).
  • Adequate spatial layout between all tracks of the RH algorithm.
  • New components for tracking times (checkpoints, finish lines, etc.).
  • Modified brain model for the requirements of our work (see the sketch after this list):
    • The player can use the brain in heuristic mode.
    • Agents for the flow channel area and for evolution use the brain in inference mode.
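Switching between these two modes maps onto ML-Agents' BehaviorParameters component; a minimal sketch of the idea, assuming ML-Agents 1.0's public BehaviorType and Model properties (our illustration, not the repository's code):

```csharp
using Unity.Barracuda;
using Unity.MLAgents.Policies;
using UnityEngine;

// Minimal sketch: toggle a car agent's brain between heuristic control
// (driven by player input) and inference (driven by the trained model).
public class BrainModeSwitcher : MonoBehaviour
{
    public NNModel trainedModel;   // the trained driving brain

    public void UsePlayerHeuristic()
    {
        var bp = GetComponent<BehaviorParameters>();
        bp.BehaviorType = BehaviorType.HeuristicOnly;   // the player drives
    }

    public void UseInference()
    {
        var bp = GetComponent<BehaviorParameters>();
        bp.Model = trainedModel;                        // the agent drives
        bp.BehaviorType = BehaviorType.InferenceOnly;
    }
}
```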

Rolling Horizon

In its baseline form, RHEA utilizes Evolutionary Algorithms (EAs) to evolve an in-game sequence of actions at every game tick using a Forward Model (FM), with restricted computation time per execution.

  • In our implementation, we evolve sequences of race tiles. External driving agents perform the evaluation. A heuristic function h evaluates the game state at the end of the evolution phase in favor of the time-balance problem.

One evolution stage/iteration consists of:

  • Parent chromosomes (the current generation)
  • Offspring (tracks built by the evolution)

Parent chromosomes are divided into two equal sectors (in the genotype this can be represented as a checkpoint in the middle of the track that records the player's time). Offspring do not use checkpoints2.
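A self-contained sketch of one such stage, using the operators listed under Genetic Operators below (rank selection, uniform crossover, N-flip mutation); the names and the tile-ID encoding are our illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of one evolution stage over tile sequences
// (a chromosome is an array of tile IDs). EvaluateWithAgents stands in
// for the external driving agents that rate each candidate track.
public class TrackEvolver
{
    public int EliteCount = 5;                     // parents kept unchanged
    public int TileTypes = 8;                      // assumed tile alphabet size
    public Func<int[], float> EvaluateWithAgents;  // supplied by the game

    public List<int[]> EvolveStage(List<int[]> parents, Random rng)
    {
        // Rank the current generation, best first.
        var ranked = parents.OrderByDescending(t => EvaluateWithAgents(t)).ToList();

        // Elitism: copy the top tracks straight into the next generation.
        var next = ranked.Take(EliteCount).Select(t => (int[])t.Clone()).ToList();

        while (next.Count < parents.Count)
        {
            var child = UniformCrossover(RankSelect(ranked, rng),
                                         RankSelect(ranked, rng), rng);
            NFlipMutate(child, rng, flips: 2);
            next.Add(child);
        }
        return next;
    }

    // Rank selection: pick a track with probability proportional to its rank.
    static int[] RankSelect(List<int[]> ranked, Random rng)
    {
        int n = ranked.Count;
        int pick = rng.Next(n * (n + 1) / 2);      // sum of ranks 1..n
        for (int i = 0; i < n; i++)
        {
            pick -= n - i;                         // the best track has rank n
            if (pick < 0) return ranked[i];
        }
        return ranked[n - 1];
    }

    // Uniform crossover: each gene comes from either parent with equal chance.
    static int[] UniformCrossover(int[] a, int[] b, Random rng)
    {
        var child = new int[a.Length];
        for (int i = 0; i < a.Length; i++)
            child[i] = rng.Next(2) == 0 ? a[i] : b[i];
        return child;
    }

    // N-flip mutation: replace N random genes with random tile IDs.
    void NFlipMutate(int[] genes, Random rng, int flips)
    {
        for (int k = 0; k < flips; k++)
            genes[rng.Next(genes.Length)] = rng.Next(TileTypes);
    }
}
```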


Offspring after the player crosses the checkpoint: [figure phase1]

Offspring after the player crosses the finish line: [figure phase2]

We do not want an additional evolutionary phase in this type of track, so there is no checkpoint.
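The time-tracking components mentioned under Features can be pictured as trigger colliders that stamp the crossing time; a minimal Unity sketch (our illustration, with an assumed "Player" tag):

```csharp
using UnityEngine;

// Minimal sketch of a time-tracking checkpoint or finish line: a trigger
// collider that records when the car first crosses it.
public class TimingCheckpoint : MonoBehaviour
{
    public bool IsFinishLine;
    public float CrossedAt { get; private set; } = -1f;

    void OnTriggerEnter(Collider other)
    {
        if (CrossedAt >= 0f || !other.CompareTag("Player")) return;
        CrossedAt = Time.time;   // split time consumed by the fitness function
        Debug.Log($"{(IsFinishLine ? "Finish" : "Checkpoint")} crossed at {CrossedAt:F2}s");
    }
}
```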


Flow Channel Theory

Flow activities are responsible for maintaining challenge between the edges of boredom and frustration in gaming; beyond those edges, the user becomes either bored or frustrated. Flow can serve as a function of the search space between skills and challenges.

The concept focuses on how the agents behave on the two benchmark tracks and how their runs contribute to forming the channel:

  • Straight-line 3
  • Snake-line 4


tuple1 = (m11, m12)
tuple2 = (m21, m22)

where

  • mi1: the performance record on benchmark track i
  • mi2: the number of curve tiles in the track
    • 0 in the straight track
    • 60 in the snake track
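One plausible use of these tuples (our assumption; the README does not spell out the formula) is to interpolate an expected completion time for a candidate track from its curve-tile count:

```csharp
// Hypothetical sketch: linearly interpolate an expected completion time
// from a track's curve-tile count, using the two benchmark tuples
// (mi1 = recorded time, mi2 = curve tiles: 0 straight, 60 snake).
public static class FlowChannel
{
    public static float ExpectedTime(
        (float time, int curves) straight,   // tuple1 = (m11, m12)
        (float time, int curves) snake,      // tuple2 = (m21, m22)
        int candidateCurves)
    {
        float t = (candidateCurves - straight.curves)
                  / (float)(snake.curves - straight.curves);
        return straight.time + t * (snake.time - straight.time);
    }
}
```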

Genetic Operators


Operator      Evolution     PCG (Level Gen)
Selection     Rank          Tournament
Crossover     Uniform
Mutation      N-flip        Scramble
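The PCG-side operators differ from the evolution-side ones; a brief sketch of tournament selection and scramble mutation as we read them (illustrative, not the repository's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the PCG (level generation) operators from the
// table: tournament selection and scramble mutation over tile arrays.
public static class PcgOperators
{
    // Tournament selection: sample k random candidates, keep the fittest.
    public static int[] TournamentSelect(
        List<int[]> population, Func<int[], float> fitness, Random rng, int k = 3)
    {
        return Enumerable.Range(0, k)
            .Select(_ => population[rng.Next(population.Count)])
            .OrderByDescending(fitness)
            .First();
    }

    // Scramble mutation: shuffle the genes inside a random sub-range,
    // keeping which tiles appear but changing their order.
    public static void ScrambleMutate(int[] genes, Random rng)
    {
        int i = rng.Next(genes.Length);
        int j = rng.Next(genes.Length);
        int lo = Math.Min(i, j), hi = Math.Max(i, j);
        for (int p = hi; p > lo; p--)            // Fisher-Yates on the slice
        {
            int q = rng.Next(lo, p + 1);
            (genes[p], genes[q]) = (genes[q], genes[p]);
        }
    }
}
```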

Experiments

The experimental analysis contains three parts:

  • Display configurations for the flow channel.
  • Show the sequence of generated tracks in response to the player's behavior.
  • Exhibit the profile of the player.

Parameter               Value
Population Size         25
Elitism Factor          20%
Individual Length       20
Mutation Probability    30%
Crossover Probability   30%
Generation Iterations   20
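For reference, these settings map directly onto a configuration object; a hypothetical sketch (the field names are ours, the values are the table's):

```csharp
// The experiment parameters from the table, gathered into a hypothetical
// configuration object for the evolver.
public class EvolutionConfig
{
    public int PopulationSize = 25;
    public float ElitismFactor = 0.20f;        // top 20% survive unchanged
    public int IndividualLength = 20;          // tiles per track chromosome
    public float MutationProbability = 0.30f;
    public float CrossoverProbability = 0.30f;
    public int GenerationIterations = 20;
}
```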

We tested our system with simulations lasting 20 levels, using both agents and human players. The agents were tested under two policies:

  • Human-executed strategies
  • Non-strategic behavior

You can view more details about the experiments in the Experiments section of the pdf.

Technologies

  • Unity Real-Time Development Platform
  • Unity Machine Learning Agents Toolkit (ML-Agents v0.15)
  • Anaconda Navigator
  • Visual Studio Community

Useful material on installing, running, and understanding the ML packages can also be found in the creators' docs.

Prerequisites

Ideally, follow the official installation guide; however, the following steps should suffice:

  1. Get Python 3.6 or above, 64-bit (64-bit seems necessary, at least on Windows)
  2. Get Unity 2020.1+
  3. Download the ML-Agents 1.0.0+ package (see https://github.com/Unity-Technologies/ml-agents/blob/release_12_docs/docs/Installation.md)
  4. Install dependencies: pip3 install "mlagents"
  5. Clone this repository
  6. Open project in Unity

Detailed information about the installation and execution can also be found in doc.

Acknowledgements

  • This project was created for the requirements of my Diploma Work at TUC.
  • You can also see a graphical overview with more details about the work in the video.
  • You can find the parameters used in this work in the appendix section of the pdf.

Footnotes

  1. Additional AI racing agents define the bounds of the channel within which the players are assessed.

  2. We want to evolve the main population, not the offspring. We could make adjustments to support modifications in the evolving stages, but this is not implemented.

  3. Maximum performance (or a low finish time).

  4. Minimum performance (or a high finish time).