Deep Improvisation

An easy-to-use deep LSTM neural network for generating songs that sound like improvisation.

Demo (SoundCloud)


Dependencies

  • Keras
  • TensorFlow
  • Python MIDI

Usage

1. Set up environment (conda recommended)

pip install -r requirements.txt

2. Parse MIDI file to text

python ./src/parse_midi_to_text.py
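
The actual tokenization lives in parse_midi_to_text.py; purely as an illustration of the idea, the sketch below reads a MIDI file with the python-midi dependency and writes the note-on pitches as space-separated text. The output filename and token format are assumptions, not the repository's.

```python
import midi  # the "Python MIDI" dependency

# Read the MIDI file; a Pattern is a list of Tracks, each a list of events.
pattern = midi.read_midifile("./midi/original/original_song.mid")
track = pattern[0]  # track index may need adjusting, see the Note section

# Keep only note-on events with non-zero velocity and record their pitches.
tokens = [str(event.pitch)
          for event in track
          if isinstance(event, midi.NoteOnEvent) and event.velocity > 0]

# Write the pitches as a space-separated text corpus for the LSTM.
with open("song_as_text.txt", "w") as f:
    f.write(" ".join(tokens))
```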

3. Train the model (GPU recommended)

python ./src/training.py
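
training.py defines the real network; the fragment below is only a minimal sketch of the kind of stacked (deep) LSTM next-token model this setup implies in Keras. Layer sizes, sequence length, and the placeholder data are illustrative choices, not values from the repository.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

SEQ_LEN, VOCAB = 50, 128  # illustrative window size and token vocabulary

# Placeholder one-hot data standing in for the text produced in step 2.
X = np.eye(VOCAB)[np.random.randint(VOCAB, size=(1000, SEQ_LEN))]
y = np.eye(VOCAB)[np.random.randint(VOCAB, size=1000)]

model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(SEQ_LEN, VOCAB)))
model.add(LSTM(256))                           # stacking LSTMs makes it "deep"
model.add(Dense(VOCAB, activation="softmax"))  # predict the next token
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=64, epochs=10)
```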

4. Generate music

python ./src/generate_music.py
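
generate_music.py performs the actual generation; as a sketch of the usual sampling loop for this kind of model, the code below repeatedly predicts the next token from a sliding window and samples from the softmax output with a temperature. The checkpoint name model.h5, the random seed window, and the helper sample() are assumptions for illustration.

```python
import numpy as np
from keras.models import load_model

SEQ_LEN, VOCAB = 50, 128               # must match the training setup
model = load_model("model.h5")         # assumed checkpoint name
seed = np.eye(VOCAB)[np.random.randint(VOCAB, size=(1, SEQ_LEN))]  # placeholder seed

def sample(probs, temperature=1.0):
    """Sample a token index from a softmax distribution, with temperature."""
    probs = np.log(np.asarray(probs, dtype="float64") + 1e-8) / temperature
    probs = np.exp(probs) / np.sum(np.exp(probs))
    return int(np.argmax(np.random.multinomial(1, probs)))

generated, window = [], seed
for _ in range(500):                   # number of tokens to generate
    probs = model.predict(window)[0]   # distribution over the next token
    token = sample(probs, temperature=0.8)
    generated.append(token)
    step = np.zeros((1, 1, VOCAB))
    step[0, 0, token] = 1.0
    window = np.concatenate([window[:, 1:, :], step], axis=1)  # slide the window
```

The sampled token indices would then be converted back into MIDI events (for example with python-midi) to produce the generated song.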

Note

  • You can train the model on a different MIDI file to generate new songs: replace the file ./midi/original/original_song.mid.
  • A MIDI file usually consists of multiple tracks, and this repository does not yet detect automatically which track carries the main part of the song, so you may have to choose the track by its index into the pattern in parse_midi_to_text.py (see the sketch after this list for a quick way to inspect the tracks).
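
If you are unsure which index to pass, a quick way to inspect the tracks with the python-midi dependency (a sketch, not part of the repository) is:

```python
import midi

pattern = midi.read_midifile("./midi/original/original_song.mid")
for i, track in enumerate(pattern):
    # Count note-on events per track; the main part usually has the most.
    note_ons = sum(isinstance(e, midi.NoteOnEvent) for e in track)
    print("track %d: %d events, %d note-on" % (i, len(track), note_ons))
```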

License

MIT © Tatsuya Hatanaka
