contentGrabber

This repository contains the infrastructure and code to grab links from top webpages and render them as a simple webpage.
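Concretely, grabbing links from Hacker News can be done through its public Firebase API (`https://hacker-news.firebaseio.com/v0/`). The repository's actual fetch code isn't shown in this README, so the snippet below is only a sketch using the Node 12 standard library; `topStories` and `itemUrl` are illustrative names, not the repo's real functions.

```javascript
// Sketch of pulling top stories from the public Hacker News API.
// Uses only the Node 12 standard library (https module), no dependencies.
const BASE = "https://hacker-news.firebaseio.com/v0";

// Build the URL for a single Hacker News item by id.
function itemUrl(id) {
  return `${BASE}/item/${id}.json`;
}

// Fetch the first n top stories as {title, url, ...} objects.
async function topStories(n = 10) {
  const https = require("https");
  // Minimal JSON-over-HTTPS helper.
  const getJson = (url) =>
    new Promise((resolve, reject) => {
      https
        .get(url, (res) => {
          let body = "";
          res.on("data", (chunk) => (body += chunk));
          res.on("end", () => resolve(JSON.parse(body)));
        })
        .on("error", reject);
    });
  const ids = await getJson(`${BASE}/topstories.json`);
  return Promise.all(ids.slice(0, n).map((id) => getJson(itemUrl(id))));
}
```

The Reddit side works similarly through Reddit's authenticated API, which is why application credentials go in `config.js` (see Getting Started below).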

Technologies Used

AWS
Node.js

Purpose

This repository serves as a place to deploy a personally managed webpage that aggregates links from Hacker News and Reddit.
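Once links are collected, rendering them as a static page can be as simple as templating an HTML string before uploading it to the S3 bucket. This is a hypothetical sketch; the repo's real rendering code may differ, and `renderPage` is an illustrative name.

```javascript
// Hypothetical rendering step: turn an array of {title, url} link objects
// into a minimal static HTML page suitable for upload to S3.
function renderPage(links) {
  const items = links
    .map((l) => `<li><a href="${l.url}">${l.title}</a></li>`)
    .join("\n");
  return [
    "<!DOCTYPE html>",
    "<html><body><h1>Quick Links</h1><ul>",
    items,
    "</ul></body></html>",
  ].join("\n");
}

// Produces a complete HTML document as one string.
const html = renderPage([{ title: "Example", url: "https://example.com" }]);
```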

Getting Started

Example result: www.danielwasserlaufquicklinks.com

This guide assumes you have an AWS account; if you do not, please go to aws.amazon.com to create one, then run aws configure so your system can deploy to AWS. It also assumes you have Node 12.x and npm installed. Please visit nodejs.org if you do not.

Also note that running these resources on AWS will likely incur some cost.

  • Run npm i
  • Add your needed credentials to the config.js file in the root directory. Follow the instructions here to get the application credentials: https://hackernoon.com/build-a-serverless-reddit-bot-in-3-steps-with-node-js-and-stdlib-sourcecode-e5296b78fc64
  • If you want to pull from more subreddits, extend the subreddits array with more subs, like 'hacking'
  • Update the Route 53 components in the serverless.yml file, filling in the labels marked FILL
  • Verify that the domain you need for the webpage has been purchased through AWS
    • There may be some manual configuration required when linking the Route 53 domain to S3; feel free to reach out if you have problems during this stage.
  • Run sudo serverless deploy
  • Verify that your site was created in S3 and the Lambdas deployed correctly; Serverless deploys a CloudFormation stack which should manage it all for you.
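The README does not show what `config.js` must contain. Based on the Reddit-bot tutorial linked in the steps above, a plausible shape is sketched below; every key name here is an assumption, so check the code's actual `require('./config')` usage for the real structure.

```javascript
// config.js (root directory) — hypothetical layout; all key names are
// assumptions and should be checked against the code that reads this file.
module.exports = {
  reddit: {
    clientId: "YOUR_REDDIT_APP_ID",
    clientSecret: "YOUR_REDDIT_APP_SECRET",
    username: "YOUR_REDDIT_USERNAME",
    password: "YOUR_REDDIT_PASSWORD",
  },
  // Extend this array to pull links from more subreddits, e.g. "hacking".
  subreddits: ["programming", "javascript"],
};
```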

Needed Improvements

- Unit testing
- Further documentation
- Additional API integrations beyond Hacker News and Reddit
- CSS, if needed
- Multi-platform testing (development has so far only been done on Linux Mint 19)
- Build tagging for link titles
- Keep a history of past links for looking back
