Monitored 🕵️‍♀️

A utility for monitoring services

Monitored is a wrapper function that writes success/error logs and StatsD metrics (gauge, increment, timing) after execution. It supports both asynchronous and synchronous functions.


Quick start

Yarn

yarn add monitored

Npm

npm install monitored

Initialize

Call setGlobalInstance at the root of your project.

To wire up the package, pass a MonitorOptions object:

import { setGlobalInstance, Monitor, StatsdPlugin, PrometheusPlugin } from 'monitored';

interface MonitorOptions {
    serviceName: string; // The name of the service you are monitoring (mandatory)
    plugins: MonitoredPlugin[]; // Stats plugins: StatsD and/or Prometheus (mandatory)
    logging?: {
        // Writes success and error logs with the passed-in logger (optional)
        logger: any; // The logger instance to use (mandatory)
        logErrorsAsWarnings?: boolean; // Log errors as warnings (optional)
        disableSuccessLogs?: boolean; // When true, success logs are not written. Defaults to false (optional)
    };
    shouldMonitorExecutionStart?: boolean; // When true, logs the execution start and increments a corresponding metric. Defaults to true (optional)
    mock?: boolean; // Writes metrics to logs instead of sending them to StatsD, for debugging. Defaults to false (optional)
}

setGlobalInstance(
    new Monitor({
        serviceName: 'monitored-example',
        logging: {
            logger: logger,
            logErrorsAsWarnings: false,
            disableSuccessLogs: false,
        },
        plugins: [
            new StatsdPlugin({
                serviceName: 'test',
                apiKey: 'key',
                host: 'host',
                root: 'root',
            }),
            new PrometheusPlugin(),
        ],
        shouldMonitorExecutionStart: true,
    })
);

API

monitored

Wraps a function and, after execution, writes success/error logs and StatsD metrics (gauge, increment, timing).

monitored supports both asynchronous and synchronous functions:

//Async function:
const result = await monitored('functionName', async () => console.log('example'));

//Sync function:
const result = monitored('functionName', () => {
    console.log('example');
});
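
The wrapped function's return value (or resolved value, for async functions) is passed through as the result of monitored, so existing calls can be wrapped in place. A minimal sketch, where fetchUser is a hypothetical async helper and not part of monitored:

// A sketch: `fetchUser` is a hypothetical async helper, not part of monitored
const user = await monitored('fetchUser', async () => fetchUser('user-1'));
// `user` is whatever fetchUser resolved to; duration and success/error are reported by monitored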

You can also pass an options argument to monitored:

type MonitoredOptions = {
    context?: any; // Add more information to the logs
    logResult?: boolean; // Whether to log the function's result on success
    parseResult?: (r: any) => any; // Custom parser for the result (when it is logged)
    level?: 'info' | 'debug'; // Which log level to write (debug is the default)
    logAsError?: boolean; // Write an error log even when the global `logErrorsAsWarnings` is on
    logErrorAsInfo?: boolean; // Write the error as an info log
    shouldMonitorError?: (e: any) => boolean; // Determines whether an error should be monitored and logged. Defaults to true
    shouldMonitorSuccess?: (r: T) => boolean; // Determines whether a successful result should be monitored and logged. Defaults to true
    tags?: Record<string, string>; // Add more information/labels to metrics
};
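
For example, shouldMonitorError lets you skip reporting expected failures. A minimal sketch, where doWork and NotFoundError are hypothetical application code, not part of monitored:

// A sketch: `doWork` and `NotFoundError` are hypothetical application code
const result = await monitored('functionName', async () => doWork(), {
    // Do not report expected "not found" failures as errors
    shouldMonitorError: (e) => !(e instanceof NotFoundError),
});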

You can use context to add more information to the logs, such as a user ID:

const result = monitored(
    'functionName',
    () => {
        console.log('example');
    },
    {context: {id: 'some context'}}
);

You can use tags to add labels to metrics:

const result = monitored(
    'functionName',
    () => {
        console.log('example');
    },
    {tags: {'some-label': 'some-value'}}
);

Also, you can log the function result by setting logResult to true:

const result = monitored(
    'functionName',
    () => {
        console.log('example');
    },
    {context: {id: 'some context'}, logResult: true}
);
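
When the result is large or contains sensitive data, you can combine logResult with parseResult to control what gets logged. A minimal sketch, where the token field is a hypothetical secret in the result:

// A sketch: redact the hypothetical `token` field before the result is logged
const result = monitored(
    'functionName',
    () => ({id: 1, token: 'secret'}),
    {
        logResult: true,
        parseResult: (r) => ({...r, token: '[redacted]'}),
    }
);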

flush

Waits until all pending metrics are sent to the server.
We recommend calling it at the end of a lambda execution to ensure all metrics are flushed.

import { getGlobalInstance } from 'monitored';

const flushTimeout: number = 2000;
await getGlobalInstance().flush(flushTimeout);
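
For example, in an AWS Lambda handler you can flush in a finally block so buffered metrics are sent even if the handler throws. A minimal sketch, where processEvent is hypothetical application code:

// A sketch: `processEvent` is hypothetical application code
export const handler = async (event: unknown) => {
    try {
        return await monitored('handleEvent', async () => processEvent(event));
    } finally {
        // Ensure buffered metrics are sent before the lambda execution ends
        await getGlobalInstance().flush(2000);
    }
};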

Testing

  1. Create a .env file with STATSD_API_KEY and STATSD_HOST values
  2. Run yarn example
  3. Manually verify that the console logs and the metrics in the StatsD server are valid

Contributing

Before creating an issue, please ensure that it hasn't already been reported/suggested, and double-check the documentation. See the Contribution Guidelines if you'd like to submit a PR.

License

Licensed under the MIT License, Copyright © 2020-present Soluto.

Crafted by the Soluto Open Sourcerers🧙