tensorflow (Python 2.7 / 3.5)

This repo contains my TensorFlow code.

MNIST

  • MNIST training - python mnist_train.py
    The required training and test/validation data is downloaded automatically by the code.
  • MNIST test - python mnist_test.py
    Passing the input image as an argument is not yet implemented; for now, change the input image path in the code.
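mnist_train.py downloads and prepares the data itself; as a rough illustration of the kind of preprocessing such a script typically performs before feeding a softmax classifier (this is a minimal NumPy sketch, not the repo's actual code):

```python
import numpy as np

def preprocess(images, labels, num_classes=10):
    """Scale uint8 MNIST pixels to [0, 1] and one-hot encode the labels."""
    x = images.astype(np.float32) / 255.0
    # Indexing the identity matrix by label turns 3 into [0,0,0,1,0,...].
    y = np.eye(num_classes, dtype=np.float32)[labels]
    return x, y

# Tiny fake "images" (2 samples, 2 pixels each) just to show the shapes.
imgs = np.array([[0, 255], [128, 64]], dtype=np.uint8).reshape(2, 2, 1)
lbls = np.array([3, 7])
x, y = preprocess(imgs, lbls)
print(x.max(), y.shape)  # 1.0 (2, 10)
```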

EMOTION

VGG19-STYLE-TRANSFER FOR IMAGE (Python 3.5, Tensorflow LATEST)

python style-transfer-main.py  --style ./imgstyle/StyleImage.jpg --content ./imgcontent/ContentImage.jpg --out ./output/ --epochs 100000 --print-iterations 100 --learning-rate 10
  • VGG19 style transfer can also be run from the vgg19-style-transfer/Style_Transfer.ipynb IPython notebook.
    • Please adjust the values of --epochs and --print-iterations to your requirements.
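At the heart of VGG19 style transfer is the style loss: the Gram matrices of the generated image's feature maps are matched against those of the style image. A minimal NumPy sketch of that computation (illustrative only; the repo's TensorFlow implementation will differ in details such as normalisation):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (h, w, c) feature map: channel-by-channel
    correlations, which capture texture/style independent of layout."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w * c)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

# Identical feature maps give zero style loss.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 4, 3))
print(style_loss(feats, feats))  # 0.0
```

In the full algorithm this loss is summed over several VGG19 layers and combined with a content loss, then minimised with respect to the image pixels over many epochs, which is why --epochs is so large.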

VGG19-STYLE-TRANSFER FOR VIDEO (Python 3.5, Tensorflow LATEST)

* Training

python style-transfer-Noise-train.py --style imgstyle\udnie.jpg --chkpnt chkpnt_udnie_batch_16\ --test-image testImage\Kishwar.jpg --print-iterations 500 --chkpnt-iterations 2000 --epochs 3 --learning-rate 0.001 --batch-size 16 --content D:\train2014\train2014\
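Training iterates over the COCO-style image directory passed via --content in batches of --batch-size. A small sketch of that batching pattern, assuming (as feed-forward style-transfer trainers commonly do) that the incomplete final batch is dropped:

```python
def batches(paths, batch_size):
    """Yield successive full batches of image paths; the remainder that
    does not fill a complete batch is dropped."""
    for i in range(0, len(paths) - batch_size + 1, batch_size):
        yield paths[i:i + batch_size]

files = [f"img{i}.jpg" for i in range(35)]
n_batches = sum(1 for _ in batches(files, 16))
print(n_batches)  # 2 full batches of 16; the last 3 files are dropped
```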

* Testing (Image)

python style-transfer-Noise-test.py --chkpnt chkpnt_udnie_batch_16\ --cam-url 255 --in-path testImage\ContentImage.jpg --out-path output\Output.jpg

* Testing (Video) - Using IP Camera

python style-transfer-Noise-test.py --chkpnt chkpnt_udnie_batch_16\ --cam-url http://192.168.0.3:8080/video/

* Testing (Video) - Using Laptop Camera

python style-transfer-Noise-test.py --chkpnt chkpnt_udnie_batch_16\ --cam-url 0
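The three test invocations above differ only in --cam-url: a device index for a local camera, or a URL for an IP-camera stream. OpenCV's cv2.VideoCapture accepts either an integer index or a stream URL, so a script can dispatch on the argument's form. The helper name below is hypothetical, sketching one plausible way to interpret the flag:

```python
# Hypothetical helper: digit strings become OpenCV device indices
# (0 = built-in laptop camera), anything else is treated as a stream
# URL such as an IP camera's MJPEG endpoint.
def parse_cam_url(cam_url):
    return int(cam_url) if cam_url.isdigit() else cam_url

print(parse_cam_url("0"))                               # 0
print(parse_cam_url("http://192.168.0.3:8080/video/"))  # the URL itself
```

The result would then be passed straight to cv2.VideoCapture(...) to open the frame source.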

TRIGGER WORD LILLY (Python 3.5, Tensorflow LATEST)

  • The code generates the required training data from the given positive, negative, and background samples.
  • A total of 2,000 samples are generated.
  • Model output at 97.91% accuracy: rnn-trigger-word-lilly/keras/raw_data/chime/chime_output.wav
  • Trained model: rnn-trigger-word-lilly/keras/raw_data/chime/chimModel_loss_0_0715.h5 [loss - 0.0715]
  • MODEL
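Synthesising trigger-word training data generally means overlaying positive/negative audio snippets onto background clips and labelling the time steps just after the trigger word ends. A minimal NumPy sketch of those two operations (the sample counts, label width, and function names here are illustrative assumptions, not the repo's actual values):

```python
import numpy as np

def insert_clip(background, clip, start):
    """Overlay an audio clip onto a copy of the background at sample
    offset `start` -- the basic step in synthesising training examples."""
    out = background.copy()
    out[start:start + len(clip)] += clip
    return out

def make_label(n_steps, trigger_end, width=5):
    """Mark a short run of 1s right after the trigger word ends; the
    rest of the sequence stays 0."""
    y = np.zeros(n_steps, dtype=np.int64)
    y[trigger_end:trigger_end + width] = 1
    return y

bg = np.zeros(100)
clip = np.ones(10)
x = insert_clip(bg, clip, 20)   # trigger occupies samples 20..29
y = make_label(100, 30)         # label the steps right after it
print(x[20:30].sum(), y.sum())  # 10.0 5
```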

VOICE CONTROLLED CAR (Python 3.5, Tensorflow LATEST, Keras LATEST)

A demo video can be found on YouTube.

  • Step 1: Generate New data using Generate_Voice_Data.py

    • Before running the file, please change
      WAVE_OUTPUT_FILENAME = "data/right/file-right-t-" --> example for the keyword "RIGHT"
    • Please create the required directories manually, as the code will not create any directories.
  • Step 2: Update config.ini with correct path and keywords.

  • Step 3: Run Voice-Controlled-Car.ipynb to start training.

  • Step 4: Save model after training.

  • Step 5: In Test-Voice-Control-Car-Model.py, change the IP address to the car's correct IP.

Please note that with the current data, the model does not produce good results.
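Step 2 asks for paths and keywords in config.ini. The exact section and key names in this repo are not shown here, so the layout below is a hypothetical example of how such a file can be read with Python's standard configparser:

```python
import configparser

# Hypothetical config.ini layout -- the real keys in this repo may differ.
SAMPLE = """
[paths]
data_dir = data/

[keywords]
words = left,right,forward,stop
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
keywords = [w.strip() for w in cfg["keywords"]["words"].split(",")]
print(cfg["paths"]["data_dir"], keywords)
```

Keeping the keyword list in the config file means Generate_Voice_Data.py and the training notebook can stay in sync without code changes.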

STYLE-TRANSFER FOR MUSIC (Python 3.5, Tensorflow LATEST) - Expected to be finished 30/06/2018

  • NEXT IN QUEUE

STYLE-TRANSFER FOR AUDIO - VOICE CLONING (Python 3.5, Tensorflow LATEST) - Expected to be finished 30/11/2018

  • IN QUEUE