Web app to classify disaster response messages into response categories

as2leung/disaster_alerts_classifier_web_app_project

Disaster Response Pipeline Project

Purpose

This project is a web app that classifies disaster messages into categories. Data from Figure Eight is used to train the underlying model (a Multi-Output Classifier wrapping a Random Forest Classifier). The web app has three underlying components:

  1. ETL pipeline Python script
  2. ML pipeline Python script
  3. Flask Web App with two webpages

NLTK is used to carry out the required text extraction and normalization on the disaster messages. Messages are then converted from raw text to a matrix of TF-IDF features, which are used to predict the 36 disaster message categories. A custom transformer was also created that flags whether the first word of a message is a verb, and this flag is included as an additional feature. A scikit-learn Pipeline and GridSearchCV are used to select the hyperparameters for the final model.
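The feature pipeline described above can be sketched roughly as follows. This is an illustrative sketch, not the repo's exact code: the real project uses NLTK POS tagging to detect a starting verb, while here a small hard-coded verb list stands in for it so the sketch runs without downloading NLTK corpora.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

class StartingVerbExtractor(BaseEstimator, TransformerMixin):
    """Binary feature: does the message start with a verb?

    Stand-in heuristic: checks the first word against a tiny verb list
    instead of NLTK POS tagging (which the real project uses).
    """
    VERBS = {"send", "need", "help", "please", "give", "report"}

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        flags = [1 if msg.split() and msg.split()[0].lower() in self.VERBS else 0
                 for msg in X]
        return np.array(flags).reshape(-1, 1)

# TF-IDF features and the starting-verb flag are combined side by side,
# then fed into a multi-output random forest (36 outputs in the real app).
pipeline = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer()),
        ("starting_verb", StartingVerbExtractor()),
    ])),
    ("clf", MultiOutputClassifier(RandomForestClassifier(n_estimators=10))),
])

# Tiny toy fit to show the shapes involved (real data has 36 categories)
X = ["need water and food", "storm damaged the bridge",
     "send medical supplies", "roads are flooded"]
y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
pipeline.fit(X, y)
preds = pipeline.predict(X)
```

In the real project, GridSearchCV wraps a pipeline like this one to search over hyperparameters such as the number of trees.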

Running just the web app

To run just the web app, first download the pickle file (see Download Model Pickle below), rename it to classifier.pkl, and place it in the models folder.

Next, run the run.py file in the app folder and then go to http://127.0.0.1:3001/ in your web browser.

Running all pipelines and web app

  1. Run the following commands in the project's root directory to set up your database and model.

    • To run the ETL pipeline, which cleans the data and stores it in a database: python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/disaster_alerts.db

    • To run the ML pipeline, which trains the classifier and saves it: python models/train_classifier.py data/disaster_alerts.db models/classifier.pkl

  2. Run the following command in the app folder to start the web app: python run.py

  3. Go to http://127.0.0.1:3001/
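The ETL step above roughly follows the pattern below. This is a simplified sketch, not the repo's process_data.py: the column names and the "related-1;request-0;..." category encoding are assumptions based on the Figure Eight dataset.

```python
import os
import sqlite3
import tempfile

import pandas as pd

def run_etl(messages_csv, categories_csv, db_path):
    """Merge messages with categories, expand category flags, save to SQLite."""
    messages = pd.read_csv(messages_csv)
    categories = pd.read_csv(categories_csv)
    df = messages.merge(categories, on="id")

    # Expand the single "categories" column into one binary column per category
    cats = df["categories"].str.split(";", expand=True)
    cats.columns = [c.split("-")[0] for c in cats.iloc[0]]
    for col in cats:
        cats[col] = cats[col].str[-1].astype(int)
    df = pd.concat([df.drop(columns="categories"), cats], axis=1).drop_duplicates()

    # Save the cleaned data so train_classifier.py can read it back
    with sqlite3.connect(db_path) as conn:
        df.to_sql("messages", conn, index=False, if_exists="replace")
    return df

# Tiny demo with toy CSVs written to a temp dir
tmp = tempfile.mkdtemp()
pd.DataFrame({"id": [1, 2], "message": ["we need water", "bridge is out"]}).to_csv(
    os.path.join(tmp, "messages.csv"), index=False)
pd.DataFrame({"id": [1, 2],
              "categories": ["related-1;request-1", "related-1;request-0"]}).to_csv(
    os.path.join(tmp, "categories.csv"), index=False)
df = run_etl(os.path.join(tmp, "messages.csv"),
             os.path.join(tmp, "categories.csv"),
             os.path.join(tmp, "disaster_alerts.db"))
```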

Web App Output

The following are screenshots of the actual Web App. If the app is run correctly, it should look as follows:

Web App Homepage

Homepage

Web App Classifier

User Input Page

File structure and list of key files

  • app folder

    • template folder
      • master.html # main page of web app - displays three visualizations
      • go.html # classification result page of web app
    • run.py # Flask file that runs app (change host ip address here)
  • data

    • disaster_categories.csv # data to process
    • disaster_messages.csv # data to process
    • process_data.py # ETL pipeline script
    • disaster_alerts.db # database the cleaned data is saved to (created by process_data.py)
  • models

    • train_classifier.py # ML pipeline script
    • classifier.pkl # saved model
  • README.md

Python Libraries

  • nltk
  • sklearn
  • joblib
  • sqlalchemy
  • time
  • numpy
  • re
  • pandas

Download Model Pickle (large file)

Since the model pickle is very large, it has been uploaded elsewhere; you can download it here. Please rename the pickle to classifier.pkl, place it in the models folder, and follow the run steps above. I recommend using classifier_2.pkl.

Pickle File Download
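The save/load round trip behind train_classifier.py and run.py can be sketched with joblib. This uses a stand-in DummyClassifier and a temp path so the sketch is self-contained; the real file is the trained pipeline saved as models/classifier.pkl.

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.dummy import DummyClassifier

# Stand-in model (the real project saves the trained scikit-learn pipeline)
model = DummyClassifier(strategy="most_frequent").fit(np.array([[0], [1]]), [1, 1])

# train_classifier.py step: persist the model with joblib
path = os.path.join(tempfile.mkdtemp(), "classifier_demo.pkl")
joblib.dump(model, path)

# run.py step: load it back before serving predictions
loaded = joblib.load(path)
```

Note that a pickle saved with one scikit-learn version may fail to load under another, which is one reason to verify the download loads before starting the app.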

Note on imbalanced classes

Most of the 36 classification categories suffer from imbalanced classes, where the majority of observations fall into a single class. The problem with imbalanced classes is that the classifier tends to predict the majority class most of the time.

To combat this issue, resampling methods can be used to rebalance the classes in a statistically sound way, or ensemble methods can be used, where the classifier combines multiple models/learners to produce a more robust model. The random forest classifier used in this web app is one such ensemble method.
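The resampling idea can be illustrated with simple random oversampling of the minority class. This is a minimal sketch, not code from this repo; libraries such as imbalanced-learn provide more robust implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0] * 90 + [1] * 10)        # 90/10 class imbalance
X = np.arange(len(y)).reshape(-1, 1)     # stand-in feature matrix

minority = np.where(y == 1)[0]
majority = np.where(y == 0)[0]

# Sample minority-class rows with replacement until both classes are equal
resampled = rng.choice(minority, size=len(majority), replace=True)
idx = np.concatenate([majority, resampled])

X_bal, y_bal = X[idx], y[idx]
```

After resampling, a classifier trained on (X_bal, y_bal) no longer minimizes its error simply by always predicting the majority class.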

Creators & Credits