AryanJ-NYC/CarND-Behavioral-Cloning-P3

A behavioral cloning model that successfully drives a self-driving car simulator around a simulated track.
Behavioral Cloning

The goal of this project is to design, create and train a convolutional neural network so that a self-driving car simulator can successfully drive around a course without human intervention.

I used Keras to write the neural network. Udacity provided video.py and drive.py, which convert images to video and serve the model's steering output to Udacity's self-driving car simulator, respectively.

I describe my work in writing model.py below.

Model Architecture and Training Strategy

My model is a convolutional neural network modeled after NVIDIA's DAVE-2 architecture.

NVIDIA's architecture

My architecture

The cropping layer trims the top and bottom of each image, as those portions (sky and car hood) add no relevant information. The lambda layer normalizes the pixel values. Each convolutional layer uses a ReLU activation. Dropout layers with a dropout rate of 0.3 were used twice, once after the last convolutional layer and once before the output layer, to prevent overfitting.

An Adam optimizer was used, so the learning rate did not need to be tuned manually.
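The architecture described above can be sketched in Keras roughly as follows. This is a minimal sketch, not the exact contents of model.py: the crop margins, filter counts, and dense-layer widths follow NVIDIA's DAVE-2 paper and the 160x320 simulator image size, and are assumptions.

```python
# Sketch of a DAVE-2-style Keras model, assuming 160x320 RGB input.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Cropping2D, Lambda, Conv2D,
                                     Dropout, Flatten, Dense)

def build_model():
    model = Sequential([
        # Crop the sky (top 70 px) and the car hood (bottom 25 px)
        Cropping2D(cropping=((70, 25), (0, 0)), input_shape=(160, 320, 3)),
        # Normalize pixel values to roughly [-0.5, 0.5]
        Lambda(lambda x: x / 255.0 - 0.5),
        Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Dropout(0.3),  # first dropout, after the last convolutional layer
        Flatten(),
        Dense(100, activation='relu'),
        Dense(50, activation='relu'),
        Dense(10, activation='relu'),
        Dropout(0.3),  # second dropout, just before the output layer
        Dense(1),      # single regression output: the steering angle
    ])
    # Adam adapts its own step sizes, so no learning rate is hand-tuned
    model.compile(optimizer='adam', loss='mse')
    return model
```

The single linear output unit makes this a regression network: it predicts a steering angle directly rather than a class label, which is why mean squared error is the natural loss.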

Model Design

I began by creating a convolutional neural network similar to the LeNet model. I found this model appropriate because I had successfully used it in my traffic sign classification project.

I collected data using Udacity's self-driving car simulator. I began with two laps of center lane driving around the track. I split this data into training and validation data, dedicating 20% of the data to a validation set. The model had a low mean squared error on both the training and validation sets.
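The 80/20 split can be done in one line with scikit-learn. In this sketch, `samples` is a placeholder for the rows of the simulator's driving log (image paths plus steering angles); the variable names are illustrative, not taken from model.py.

```python
# Hold out 20% of the recorded samples for validation.
from sklearn.model_selection import train_test_split

# Placeholder standing in for the 28,260 recorded frames
samples = list(range(28260))

train_samples, validation_samples = train_test_split(
    samples, test_size=0.2, random_state=0)
```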

However, I found the self-driving car successfully drove straight (though along the right lane line) but struggled around curves. After reading NVIDIA's End to End Learning for Self-Driving Cars, I gathered more data that represented curves. I also gathered recovery data (data that trained the car to recover when driving along a lane line). Lastly, I changed my model from LeNet to the architecture used in the paper (as a challenge to myself).

With the additional data, my self-driving car made it to the second-to-last curve, but it failed to make such a sharp turn. I gathered additional data focused on this turn.

With this last bit of data, the car is able to drive autonomously around the track without leaving the road.

To capture good driving behavior, I recorded two laps on track one using center lane driving:

Center lane driving

I then recorded the vehicle recovering from the left and right sides of the road back to the center:

Lane Recovery 1 Lane Recovery 2 Lane Recovery 3 Lane Recovery 4 Lane Recovery 5 Lane Recovery 6 Lane Recovery 7

I had 28,260 images in total, dedicating 22,608 to training and 5,652 to validation.

The data set was randomly shuffled prior to data generation. Each batch was also randomly shuffled.
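A batch generator in the spirit of this description could look like the sketch below. It reshuffles the full sample list at the start of every pass and shuffles within each batch as well; the image-loading step is replaced by the sample values themselves so the sketch stays self-contained, and the function name and batch size are assumptions, not model.py's exact code.

```python
# Sketch of a shuffling batch generator for Keras fit_generator-style
# training. sklearn.utils.shuffle re-orders arrays in unison.
import numpy as np
from sklearn.utils import shuffle

def generator(samples, batch_size=32):
    num_samples = len(samples)
    while True:                      # loop forever; the caller decides epochs
        samples = shuffle(samples)   # reshuffle the whole set each pass
        for offset in range(0, num_samples, batch_size):
            batch = samples[offset:offset + batch_size]
            # In the real pipeline, images and steering angles would be
            # loaded from disk here; placeholders keep this runnable.
            X = np.array(batch, dtype=float)
            y = np.zeros(len(batch))
            yield shuffle(X, y)      # shuffle within the batch, too
```

Generating batches lazily like this avoids holding all 28,260 images in memory at once, since only one batch of images needs to be loaded at a time.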

A video of the vehicle's successful lap around the course can be found on YouTube.
