A Torch implementation of the Neural Turing Machine model described in the paper *Neural Turing Machines* by Alex Graves, Greg Wayne and Ivo Danihelka.
This implementation supports both LSTM and GRU controllers, and is built on kaishengtai's torch-ntm implementation. NTM models with multiple read/write heads are supported, and both CUDA and non-CUDA versions are provided.
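The core mechanism an NTM adds over a plain recurrent controller is differentiable memory addressing. As a rough sketch of the content-based read described in the paper (this is illustrative numpy, not code from this repo, and the function and variable names are made up for the example):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta):
    """Content-based read: weight each memory row by the (sharpened)
    cosine similarity between its contents and the emitted key."""
    # cosine similarity between the key and every memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)   # beta > 1 sharpens the attention
    return w @ memory, w       # read vector and attention weights

# toy memory with three rows; the key is closest to row 0
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
key = np.array([1.0, 0.1])
r, w = content_read(memory, key, beta=5.0)
```

The full model also uses location-based shifts and interpolation gates on top of this, as detailed in the paper.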
This implementation requires Torch7, as well as the following libraries:
All the above dependencies can be installed using luarocks. For example:
$ luarocks install nngraph
The base directory contains the implementation of the NTM with LSTM and GRU controllers. The layers directory contains the implementation of modules used in the NTM.
The task directory contains the data, training and testing code, pre-trained models, and results. There are three tasks, namely Copy, Repeat-Copy and Recall, each in its own directory. Every task directory contains four subdirectories:
- dataset/ (scripts to generate training and test data, plus already generated data files)
- src/ (scripts for training and testing)
- pre-trained-models/ (trained models as pkl files)
- results/ (training and test results)
To generate data, go to dataset/ and run:
$ th <task>_gen_dataset.lua
This generates two datasets, one for training and one for testing.
To train, go to src/ and run:
$ th <task>_cuda.lua
To test, go to src/ and run:
$ th <task>_test.lua
The pre-trained-models/ directory contains pre-trained pkl models which can be used for testing.
This repo implements the following tasks:
- Copy
- Repeat Copy
- Associative Recall
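For orientation, the copy task asks the model to reproduce a sequence of random binary vectors after seeing a delimiter, as described in the NTM paper. A minimal sketch of what one such sample could look like (illustrative numpy; the shapes, names, and layout here are assumptions, not this repo's actual data format):

```python
import numpy as np

def make_copy_sample(seq_len=5, width=8, rng=np.random.default_rng(0)):
    """One copy-task example: random binary vectors, a delimiter
    channel, then blank steps during which the model must output the sequence."""
    seq = rng.integers(0, 2, size=(seq_len, width)).astype(float)
    inp = np.zeros((2 * seq_len + 1, width + 1))
    inp[:seq_len, :width] = seq   # the sequence to remember
    inp[seq_len, width] = 1.0     # delimiter flag on an extra channel
    target = np.zeros((2 * seq_len + 1, width))
    target[seq_len + 1:] = seq    # model must emit the sequence back
    return inp, target

inp, target = make_copy_sample()
```

Repeat-Copy additionally supplies a repeat count, and Associative Recall queries with one item and expects the next item in the stored list.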
To run a particular task, for example Recall:
- Generate dataset
$ cd tasks/Recall
$ th recall_gen_dataset.lua
- Train the model
$ cd ../src
$ th recall_cuda.lua
The program will prompt for the controller architecture: enter 1 for LSTM or 2 for GRU.
- Test the model
$ th recall_test.lua
- kaishengtai/torch-ntm GitHub repository
- *Neural Turing Machines*, Alex Graves, Greg Wayne and Ivo Danihelka