fast.ai Courses for deep learning beginners (Jupyter Notebook, updated Jun 2, 2017)
Project for Intro to AI @ UW - Madison Fall 2019
Machine learning experiments in Julia
Training a simple AI using the policy-gradient approach to Reinforcement Learning.
Built a 2-layer, feed-forward neural network and trained it using the back-propagation algorithm to solve a multi-class classification problem for recognizing images of handwritten digits.
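The entry above describes a 2-layer feed-forward network trained with back-propagation for multi-class digit classification. A minimal sketch of that setup, using random toy data in place of real digit images (this is an illustration of the technique, not the repository's actual code; all names and hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over rows.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data standing in for digit images: 100 samples, 4 features, 3 classes.
X = rng.normal(size=(100, 4))
y = rng.integers(0, 3, size=100)
Y = np.eye(3)[y]                      # one-hot targets

# Two layers of weights: input -> hidden -> output.
W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3)); b2 = np.zeros(3)
lr = 0.5

for _ in range(200):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = softmax(h @ W2 + b2)          # class probabilities

    # Backward pass: gradients of the cross-entropy loss.
    dz2 = (p - Y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h**2)      # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

loss = -np.log(p[np.arange(len(X)), y]).mean()
```

The backward pass applies the chain rule layer by layer: the softmax/cross-entropy gradient `p - Y` flows back through `W2`, then through the tanh nonlinearity, to reach `W1`.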
Custom Neural Network
Machine Learning Lab JNTUHUCESTH R-21
A basic back propagation neural net written in Processing.
Second assignment of Neural and Evolutionary Computation (NEC) at URV
Neural_Network for handwritten digit detection in Octave (a free MATLAB alternative)
Implemented gradient descent algorithm and its variants from scratch and visualized their results by training models, for comparison and learning purposes
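The gradient-descent-and-variants idea above can be sketched on a simple ill-conditioned quadratic, comparing plain gradient descent with a momentum variant (a sketch for comparison purposes only; the objective, step sizes, and function names are assumptions, not the repository's code):

```python
import numpy as np

# Objective f(x) = 0.5 * x^T A x with gradient A x; condition number 10.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x

def gd(x, lr=0.08, steps=100):
    # Vanilla gradient descent: step against the gradient.
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def momentum(x, lr=0.08, beta=0.9, steps=100):
    # Heavy-ball momentum: accumulate a velocity term.
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

x0 = np.array([5.0, 5.0])
x_gd = gd(x0)
x_mom = momentum(x0)
```

On badly conditioned problems like this one, the momentum term damps the zig-zagging along the steep axis, which is exactly the kind of behavior such visual comparisons are meant to show.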
A fully connected multilayer feed-forward network
CSC3022H: Machine Learning Lab 6: Artificial Neural Networks III
Java port of Tinn - The tiny neural network library
Python projects on various topics
First assignment of Neural and Evolutionary Computation (NEC) at URV
Lab exercise for Neural Networks study. Includes Self-Organizing map, Hopfield network and Back-Propagation network
This repository contains code for deep deducing to solve blank Sudoku puzzles.
Automatic differentiation is a method for evaluating the derivative of a program's numerical output with respect to its input, by propagating derivatives through the program's elementary operations.
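A tiny forward-mode automatic-differentiation sketch using dual numbers illustrates the definition above: each value carries a pair (value, derivative), and the chain and product rules propagate the derivative through ordinary arithmetic (the `Dual` class and function `f` are illustrative assumptions):

```python
class Dual:
    """A number paired with its derivative, for forward-mode AD."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1      # f'(x) = 6x + 2

x = Dual(2.0, 1.0)                    # seed dx/dx = 1
y = f(x)                              # y.val = f(2) = 17, y.dot = f'(2) = 14
```

Because every arithmetic step updates the derivative alongside the value, the result is exact (up to floating point), unlike finite-difference approximations.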