
# 🐿 linkinator

A super simple site crawler and broken link checker.


Behold my latest inator! The linkinator provides an API and CLI for crawling websites and validating links. It's got a ton of sweet features:

- 🔥 Easily perform scans on remote sites or local files
- 🔥 Scan any element that includes links, not just `<a href>`
- 🔥 Supports redirects, absolute links, relative links, all the things
- 🔥 Configure specific regex patterns to skip
- 🔥 Scan markdown files without transpilation

## Installation

```sh
$ npm install linkinator
```

## Command Usage

You can use this as a library, or as a CLI. Let's see the CLI!

```sh
$ linkinator LOCATION [ --arguments ]

  Positional arguments

    LOCATION
      Required. Either the URL or the path on disk to check for broken links.

  Flags

    --concurrency
        The number of connections to make simultaneously. Defaults to 100.

    --config
        Path to the config file to use. Looks for `linkinator.config.json` by default.
    
    --format, -f
        Return the data in CSV or JSON format.
    
    --help
        Show this command.

    --include, -i
        List of urls in regexy form to include.  The opposite of --skip.

    --markdown
        Automatically parse and scan markdown if scanning from a location on disk.

    --recurse, -r
        Recursively follow links on the same root domain.
    
    --server-root
        When scanning a local directory, customize the location on disk
        where the server is started.  Defaults to the path passed in [LOCATION].

    --silent
        Only output broken links.

    --skip, -s
        List of urls in regexy form to not include in the check.

    --timeout
        Request timeout in ms.  Defaults to 0 (no timeout).
```
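
Several of these flags compose. As a sketch (assuming `./docs` is a local folder containing markdown files), you might recursively scan it with lower concurrency and a per-request timeout:

```sh
# Hypothetical invocation: recursively scan local markdown docs,
# limit to 25 concurrent connections, and fail requests after 5 seconds.
$ npx linkinator ./docs --recurse --markdown --concurrency 25 --timeout 5000
```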

## Command Examples

You can run a shallow scan of a website for busted links:

```sh
$ npx linkinator http://jbeckwith.com
```

That was fun. What about local files? The linkinator will stand up a static web server for yinz:

```sh
$ npx linkinator ./docs
```

But that only gets the top level of links. Let's go deeper and do a full recursive scan!

```sh
$ npx linkinator ./docs --recurse
```

Aw, snap. I didn't want it to check those links. Let's skip 'em:

```sh
$ npx linkinator ./docs --skip www.googleapis.com
```

The `--skip` parameter will accept any regex! You can do more complex matching, or even tell it to only scan links with a given domain:

```sh
$ linkinator http://jbeckwith.com --skip '^(?!http://jbeckwith.com)'
```

Maybe you're going to pipe the output to another program. Use the `--format` option to get JSON or CSV!

```sh
$ linkinator ./docs --format CSV
```
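
Since the report is written to stdout, you can also redirect it straight to a file (a sketch; `report.json` is just an example filename):

```sh
$ linkinator ./docs --format JSON > report.json
```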

Let's make sure the README.md in our repo doesn't have any busted links:

```sh
$ linkinator ./README.md
```

## Configuration file

You can pass options directly to the linkinator CLI, or you can define a config file. By default, linkinator will look for a `linkinator.config.json` file in the current working directory.

All options are optional. It should look like this:

```json
{
  "format": "json",
  "recurse": true,
  "silent": true,
  "concurrency": 100,
  "timeout": 0,
  "markdown": true,
  "skip": "www.googleapis.com"
}
```

To load config settings outside the CWD, you can pass the `--config` flag to the linkinator CLI:

```sh
$ linkinator --config /some/path/your-config.json
```

## API Usage

### linkinator.check(options)

Asynchronous method that runs a site-wide scan. Options come in the form of an object that includes:

- `path` (string) - A fully qualified path to the URL to be scanned, or the path to the directory on disk that contains files to be scanned. Required.
- `concurrency` (number) - The number of connections to make simultaneously. Defaults to 100.
- `port` (number) - When the path is provided as a local path on disk, the port on which to start the temporary web server. Defaults to a random high-range port.
- `recurse` (boolean) - By default, all scans are shallow. Only the top level links on the requested page will be scanned. By setting `recurse` to true, the crawler will follow all links on the page, and continue scanning links on the same domain for as long as it can go. Results are cached, so no worries about loops.
- `serverRoot` (string) - When scanning a local directory, customize the location on disk where the server is started. Defaults to the path passed in `path`.
- `timeout` (number) - By default, requests made by linkinator do not time out (or follow the settings of the OS). This option (in milliseconds) will fail requests after the configured amount of time.
- `markdown` (boolean) - Automatically parse and scan markdown if scanning from a location on disk.
- `linksToSkip` (array | function) - An array of regular expression strings that should be skipped, OR an async function that's called for each link with the link URL as its only argument. Return a Promise that resolves to `true` to skip the link or `false` to check it. See the sketch below this list.
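
Since `linksToSkip` can be a function as well as an array, here is a minimal sketch of the function form (the `staging.example.com` rule is just a hypothetical example of links you might want to skip):

```js
const link = require('linkinator');

async function checkWithSkipFunction() {
  const results = await link.check({
    path: 'http://example.com',
    // Hypothetical rule: resolve to true (skip) for any link on a staging host,
    // and false (check it) for everything else.
    linksToSkip: async url => url.includes('staging.example.com'),
  });
  console.log(`Passed: ${results.passed}`);
}
checkWithSkipFunction();
```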

### linkinator.LinkChecker()

Constructor method that can be used to create a new LinkChecker instance. This is particularly useful if you want to receive events as the crawler crawls. Exposes the following events:

- `pagestart` (string) - Provides the url that the crawler has just started to scan.
- `link` (object) - Provides an object with:
  - `url` (string) - The url that was scanned
  - `state` (string) - The result of the scan. Potential values include `BROKEN`, `OK`, or `SKIPPED`.
  - `status` (number) - The HTTP status code of the request.

### Simple example

```js
const link = require('linkinator');

async function simple() {
  const results = await link.check({
    path: 'http://example.com'
  });

  // To see if all the links passed, you can check `passed`
  console.log(`Passed: ${results.passed}`);

  // Show the list of scanned links and their results
  console.log(results);

  // Example output:
  // {
  //   passed: true,
  //   links: [
  //     {
  //       url: 'http://example.com',
  //       status: 200,
  //       state: 'OK'
  //     },
  //     {
  //       url: 'http://www.iana.org/domains/example',
  //       status: 200,
  //       state: 'OK'
  //     }
  //   ]
  // }
}
simple();
```

### Complete example

In most cases you're going to want to respond to events, as running the check command can kinda take a long time.

```js
const link = require('linkinator');

async function complex() {
  // create a new `LinkChecker` that we'll use to run the scan.
  const checker = new link.LinkChecker();

  // Respond to the beginning of a new page being scanned
  checker.on('pagestart', url => {
    console.log(`Scanning ${url}`);
  });

  // After a page is scanned, check out the results!
  checker.on('link', result => {

    // check the specific url that was scanned
    console.log(`  ${result.url}`);

    // How did the scan go?  Potential states are `BROKEN`, `OK`, and `SKIPPED`
    console.log(`  ${result.state}`);

    // What was the status code of the response?
    console.log(`  ${result.status}`);

    // What page linked here?
    console.log(`  ${result.parent}`);
  });

  // Go ahead and start the scan! As events occur, we will see them above.
  const result = await checker.check({
    path: 'http://example.com',
    // port: 8673,
    // recurse: true,
    // linksToSkip: [
    //   'https://jbeckwith.com/some/link',
    //   'http://example.com'
    // ]
  });

  // Check to see if the scan passed!
  console.log(result.passed ? 'PASSED :D' : 'FAILED :(');

  // How many links did we scan?
  console.log(`Scanned total of ${result.links.length} links!`);

  // The final result will contain the list of checked links, and the pass/fail
  const brokenLinks = result.links.filter(x => x.state === 'BROKEN');
  console.log(`Detected ${brokenLinks.length} broken links.`);
}

complex();
```

## License

MIT