1. Authenticating using Twitter API tokens
2. Extracting tweet data: tweet ID, creation datetime, and tweet text
3. Saving the extracted data into a CSV file
4. Automating the above process using Prefect, extracting the 100 most recent tweets every 3 days
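The extract-and-save steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: `fetch_recent_tweets` is a placeholder standing in for a real authenticated Twitter API call (e.g. via Tweepy), and the file name `tweets.csv` is an assumption.

```python
import csv
from datetime import datetime, timezone

def fetch_recent_tweets(max_results=100):
    """Placeholder for the real Twitter API call (e.g. Tweepy with API tokens).
    Returns dicts carrying the three fields the pipeline extracts."""
    return [
        {"id": "1", "created_at": datetime(2023, 1, 1, tzinfo=timezone.utc), "text": "hello"},
        {"id": "2", "created_at": datetime(2023, 1, 2, tzinfo=timezone.utc), "text": "world"},
    ]

def save_tweets_to_csv(tweets, path):
    """Step 3: write tweet ID, creation datetime, and text to a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "created_at", "text"])
        writer.writeheader()
        for t in tweets:
            writer.writerow({
                "id": t["id"],
                "created_at": t["created_at"].isoformat(),
                "text": t["text"],
            })

def etl():
    tweets = fetch_recent_tweets(max_results=100)
    save_tweets_to_csv(tweets, "tweets.csv")
    return len(tweets)
```

For step 4, `etl` would be decorated with Prefect's `@flow` and deployed with an interval schedule of 3 days (e.g. `interval=timedelta(days=3)`); the Prefect wiring is omitted here to keep the sketch self-contained.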
1. Dockerize the application
2. ETL operations: data pre-processing
3. Exploratory data analysis
4. Analytics dashboard using D3.js
5. Tweet classification model and dashboard updates
6. Dockerizing the whole application
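Dockerizing the application could start from a sketch like the Dockerfile below. The file names (`requirements.txt`, `etl.py`) and Python version are assumptions about the project layout, not confirmed by the source.

```dockerfile
# Minimal image for the extraction pipeline (assumed layout)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "etl.py"]
```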
1. Authenticating using Twitter API tokens
2. Extracting tweet data: tweet ID, creation datetime, and tweet text
3. Saving the extracted data into a CSV file in an S3 bucket
4. Automating the above process using AWS Lambda and EventBridge
5. Using Cloud9 to create an EC2 instance to install the required Python packages for AWS Lambda
6. Saving the tweets into a DynamoDB table for further analysis
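The Lambda side of steps 3–6 can be sketched as below. The bucket name (`my-tweet-bucket`), table name (`tweets`), and the event-supplied tweet list are placeholders, and the actual API fetch is elided; only the CSV serialization and the boto3 calls for S3 and DynamoDB are illustrated.

```python
import csv
import io

def tweets_to_csv(tweets):
    """Serialize extracted tweets (id, created_at, text) to a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "created_at", "text"])
    writer.writeheader()
    writer.writerows(tweets)
    return buf.getvalue()

def lambda_handler(event, context):
    """Invoked on an EventBridge schedule. The real tweet fetch is elided;
    bucket and table names are placeholders, not the project's actual names."""
    import boto3  # packaged with the function (installed via the Cloud9/EC2 step)

    tweets = event.get("tweets", [])  # stand-in for the Twitter API call
    body = tweets_to_csv(tweets)

    # Step 3: write the CSV into the S3 bucket
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-tweet-bucket", Key="tweets.csv", Body=body.encode("utf-8"))

    # Step 6: store each tweet in a DynamoDB table for further analysis
    table = boto3.resource("dynamodb").Table("tweets")
    with table.batch_writer() as batch:
        for t in tweets:
            batch.put_item(Item=t)

    return {"statusCode": 200, "count": len(tweets)}
```

An EventBridge rule with a schedule expression such as `rate(3 days)` would then target this function to match the 3-day cadence described above.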
1. Sagemaker
2. EC2
3. Cloud9
4. S3
5. Lambda
6. EventBridge
7. DynamoDB