
Recreating PyTorch from scratch, using Numpy. Supports FCN, CNN, RNN layers.

arthurdjn/nets


1. Table of Contents

  • Overview
  • Status
  • Documentation
  • Get Started
  • Notebooks
  • References

2. Overview

2.1. About

NETS is a lightweight Deep Learning Python package, built (almost) entirely on NumPy. This project was first introduced as an assignment at the University of Oslo, which is similar to the second assignment from Stanford University.

However, the project was pushed further to make it object-oriented with an easier API. In addition, the back-propagation and update rules were reworked around a custom autograd system. NETS was highly inspired by the PyTorch and TensorFlow packages.

But why?

The NETS package makes NO CLAIM to rival well-established deep learning packages like PyTorch or TensorFlow. Instead, it was made to understand how these libraries work and handle forward / backward propagation, by building one from scratch. As I worked through this deep dive, I found it interesting to share as much of my work as possible, which I hope will help students and anyone who wants to learn more about this subject.

2.2. Requirements

Everything in NETS is made from scratch, using mainly NumPy. However, some additional packages can offer a better experience if installed (saving checkpoints and models, for example). Note that json and time are part of the Python standard library.

  • numpy
  • json (Optional)
  • time (Optional)
  • pandas (Optional)
  • scipy (Optional)
  • sklearn (Optional)

2.3. Installation

To install this package from PyPI:

$ pip install nets

or from this repository:

$ git clone https://github.com/arthurdjn/nets
$ cd nets
$ pip install .

3. Status

| Feature                      | Components                | Development Status |
|------------------------------|---------------------------|--------------------|
| Autograd System              | Tensor, Parameter         | finished           |
| Optimization                 | SGD, Adam, RMSprop        | finished           |
| Loss                         | MSE, Cross Entropy, BCE   | in progress        |
| Solver                       | Train, Eval, Checkpoints  | finished           |
| Data                         | Dataset, Batch, Iterator  | finished           |
| Dense Neural Network         | Linear, Sequential        | finished           |
| Convolutional Neural Network | Conv2d, MaxPool2d, Dropout| finished           |
| Recurrent Neural Network     | RNN, LSTM, GRU            | in progress        |
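As a reminder of what the optimizers above compute, here is the plain SGD update rule in NumPy. This is a generic sketch of the rule, not NETS's actual implementation:

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """Apply one vanilla SGD update: p <- p - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

w = np.array([1.0, 2.0])
grad_w = np.array([0.5, -0.5])
(w_new,) = sgd_step([w], [grad_w], lr=0.1)
# w_new == [0.95, 2.05]
```

Adam and RMSprop follow the same pattern but additionally keep running statistics of past gradients to adapt the step size per parameter.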

4. Documentation

The documentation and tutorials are in progress and will be released soon. You will find tutorials and applications on how to get started or build a similar package.

5. Get Started

NETS's architecture follows PyTorch's. It provides a basic neural network structure so you can create your own network with NumPy. You will need to wrap your arrays in a Tensor class to keep track of the gradients, just like in PyTorch.


5.1. Computational Graph

NETS uses forward and backward passes for gradient descent optimization (note: other optimizers are now available!).

You can also use the autograd system (recommended). It behaves similarly to PyTorch, except it is entirely written in NumPy.

import nets

t1 = nets.Tensor([1, 2, 3], require_grad=True)
t2 = nets.Tensor([4, 5, 6])

# t3 now requires gradient
t3 = t1 + t2
t3 = t3.sum()
# Compute the gradients for t1
t3.backward()
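For intuition, the core of such an autograd system can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the actual NETS Tensor class:

```python
import numpy as np

class MiniTensor:
    """Minimal tensor that records how to backpropagate through add and sum."""
    def __init__(self, data, require_grad=False):
        self.data = np.asarray(data, dtype=float)
        self.require_grad = require_grad
        self.grad = None
        self._parents = ()  # (parent, local_backward_fn) pairs

    def __add__(self, other):
        out = MiniTensor(self.data + other.data,
                         require_grad=self.require_grad or other.require_grad)
        # Addition passes the upstream gradient through unchanged.
        out._parents = ((self, lambda g: g), (other, lambda g: g))
        return out

    def sum(self):
        out = MiniTensor(self.data.sum(), require_grad=self.require_grad)
        # Sum broadcasts the upstream gradient back to the input's shape.
        out._parents = ((self, lambda g: g * np.ones_like(self.data)),)
        return out

    def backward(self, grad=None):
        grad = np.ones_like(self.data) if grad is None else grad
        if self.require_grad:
            self.grad = grad if self.grad is None else self.grad + grad
        for parent, local in self._parents:
            parent.backward(local(grad))

t1 = MiniTensor([1, 2, 3], require_grad=True)
t2 = MiniTensor([4, 5, 6])
(t1 + t2).sum().backward()
# t1.grad == [1., 1., 1.]
```

Each operation stores its parents together with a function computing the local gradient, so `backward` can walk the computational graph in reverse and accumulate gradients by the chain rule.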

5.2. Building a model

A model is a Module subclass, where biases, weights and parameter transformations are computed. All modules have a forward method that MUST be overwritten. This method computes the forward propagation from an input tensor. If you use the autograd system, no back-propagation code needs to be added. However, if you prefer to compute the gradients manually, you will need to override the backward method.
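To see what a manual backward pass involves, here is a standalone NumPy sketch of the forward and backward computations of a linear layer. This illustrates the math only; it is not NETS's implementation:

```python
import numpy as np

def linear_forward(x, W, b):
    """Forward pass of a linear layer: y = x @ W + b."""
    return x @ W + b

def linear_backward(x, W, grad_out):
    """Backward pass: gradients of a downstream loss w.r.t. x, W and b."""
    grad_x = grad_out @ W.T          # propagate to the previous layer
    grad_W = x.T @ grad_out          # gradient for the weights
    grad_b = grad_out.sum(axis=0)    # gradient for the bias, summed over the batch
    return grad_x, grad_W, grad_b

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 inputs with 3 features
W = rng.normal(size=(3, 2))
b = np.zeros(2)
y = linear_forward(x, W, b)
grad_x, grad_W, grad_b = linear_backward(x, W, np.ones_like(y))
```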

Your model should inherit from the Module class and override the forward method.

import nets
import nets.nn as nn

class Model(nn.Module):
    """
    Create your own model.
    The attributes should be your submodels used during the forward pass.
    You don't necessarily have to assign the activation function as an attribute,
    unless you want to set a manual backward pass.
    """
    def __init__(self, input_dim, hidden_dim, output_dim):
        # Initialization
        super().__init__() # Don't forget to add this line
        self.layer1 = nn.Linear(input_dim, hidden_dim)
        self.layer2 = nn.Linear(hidden_dim, hidden_dim)
        self.layer3 = nn.Linear(hidden_dim, output_dim)

    def forward(self, inputs):
        # Forward pass
        out1 = nets.tanh(self.layer1(inputs))
        out2 = nets.tanh(self.layer2(out1))
        return self.layer3(out2)

model = Model(10, 100, 2)

# Let's check the architecture
model

Out:

Model(
   (layer1): Linear(input_dim=10, output_dim=100, bias=True)
   (layer2): Linear(input_dim=100, output_dim=100, bias=True)
   (layer3): Linear(input_dim=100, output_dim=2, bias=True)
)

Again, this is really similar to what PyTorch offers.
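Under the hood, the forward pass of the model above amounts to the following NumPy computation. This is a sketch with random weights to show the data flow, not the actual NETS internals:

```python
import numpy as np

def mlp_forward(x, params):
    """Two tanh hidden layers followed by a linear output, as in Model.forward."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h2 @ W3 + b3

rng = np.random.default_rng(0)
dims = [(10, 100), (100, 100), (100, 2)]  # same shapes as Model(10, 100, 2)
params = [(rng.normal(size=(i, o)) * 0.1, np.zeros(o)) for i, o in dims]
out = mlp_forward(rng.normal(size=(5, 10)), params)
# out.shape == (5, 2)
```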

6. Notebooks

7. References

Here is a list of tutorials, lectures and assignments that helped in developing NETS: