# nn-optimizers-numpy


Simple examples of optimizer implementations in NumPy.

I based my implementations on examples from the Deep Learning Specialization (Coursera) and this tutorial.

A great overview of optimization algorithms can be found here.
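
To give a feel for what these implementations look like, here is a minimal NumPy sketch of the non-adaptive updates (plain SGD and SGD with momentum). The function names and the parameter/gradient dictionaries are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def sgd_update(params, grads, lr=0.1):
    """Plain SGD: step each parameter against its gradient."""
    for key in params:
        params[key] -= lr * grads[key]
    return params

def momentum_update(params, grads, velocity, lr=0.1, beta=0.9):
    """SGD with momentum: keep an exponentially decaying average of gradients."""
    for key in params:
        velocity[key] = beta * velocity[key] + (1 - beta) * grads[key]
        params[key] -= lr * velocity[key]
    return params, velocity

# Illustrative usage with toy parameters and gradients (not from the repo).
params = {"W": np.random.randn(3, 2), "b": np.zeros(2)}
grads = {"W": np.ones((3, 2)), "b": np.ones(2)}
velocity = {k: np.zeros_like(v) for k, v in params.items()}
params, velocity = momentum_update(params, grads, velocity)
```

The `(1 - beta)` factor on the gradient follows the convention used in the Deep Learning Specialization; some formulations drop it and fold it into the learning rate.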

Results:

From this simple example, we can conclude:

  1. Non-adaptive algorithms (SGD, Momentum, NAG) need a high learning rate for this task; with small learning rates, progress is slow.
  2. Adaptive algorithms (Adam, Adagrad, RMSProp) fail (diverge) with high learning rates (see the Adam sketch after this list).
  3. The best results are achieved by Adam, RMSProp and Adagrad, depending on the learning rate.
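
To make the learning-rate sensitivity of the adaptive methods concrete, here is a minimal Adam sketch in NumPy (an illustrative version, not necessarily identical to the code in this repo). Because the step is roughly `lr * m_hat / sqrt(v_hat)`, the per-parameter step size is on the order of `lr` regardless of the gradient's magnitude, so a large `lr` translates directly into large, potentially divergent steps.

```python
import numpy as np

def adam_update(params, grads, m, v, t, lr=0.001,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step (t >= 1): moment estimates plus bias correction."""
    for key in params:
        m[key] = beta1 * m[key] + (1 - beta1) * grads[key]
        v[key] = beta2 * v[key] + (1 - beta2) * grads[key] ** 2
        m_hat = m[key] / (1 - beta1 ** t)   # bias-corrected first moment
        v_hat = v[key] / (1 - beta2 ** t)   # bias-corrected second moment
        # The denominator normalizes the gradient scale, so the effective step
        # is roughly lr per parameter; with a large lr this can diverge.
        params[key] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```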

Result plots: Results1, Results2, Results3, Results4, Results5.
