
vmarchaud/openprofiling-node


NOTE: This project is deprecated and the repository has been archived (read-only since Jan 15, 2023); OpenTelemetry is discussing adding support for profiling.

OpenProfiling is a toolkit for collecting profiling data from production workload safely.

The project's goal is to empower developers to understand how their applications behave in production, with minimal performance impact and without vendor lock-in.

The library is in alpha stage and the API is subject to change.

I expect that the library will not match everyone's use case, so if it doesn't cover yours, please open an issue so we can discuss how the toolkit could meet it.

The NodeJS implementation is currently tested against all recent NodeJS LTS releases (10, 12) and the most recent major version (14).

Use cases

An application has a memory leak

The recommended profiler is the Heap Sampling Profiler, which has the lowest performance impact; here are the instructions on how to use it, and a minimal setup sketch is also shown at the end of this section. After getting the exported file, you can go to speedscope to analyse it. If we load an example heap profile and head to the Sandwich panel, we can see a list of functions sorted by how much memory they allocated.

At the left of the table, there are two entries:

  • self memory: how much memory the function itself allocated while running, not counting anything allocated by the functions it called.
  • total memory: the opposite of self; it counts the memory the function allocated plus all the memory allocated by the functions it called.

Note that the top function in the view should not automatically be considered a leak: for example, when you receive an HTTP request, NodeJS allocates some memory for it, but that memory is freed after the request finishes. The view only shows where memory is allocated, not where it leaks.

We highly recommend reading the profiler's documentation to understand all the pros and cons of using it.
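
For reference, here is a minimal sketch of wiring up the heap profiler using the packages shown in the Configure section below; the SIGUSR1 signal and the file exporter are illustrative choices, not requirements.

import { ProfilingAgent } from '@openprofiling/nodejs'
import { FileExporter } from '@openprofiling/exporter-file'
import { InspectorHeapProfiler } from '@openprofiling/inspector-heap-profiler'
import { SignalTrigger } from '@openprofiling/trigger-signal'

const profilingAgent = new ProfilingAgent()
// Collect a heap profile whenever the application receives a SIGUSR1 signal
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR1' }), new InspectorHeapProfiler({}))
// Write the resulting profile to disk (by default in /tmp) so it can be loaded into speedscope
profilingAgent.start({ exporter: new FileExporter() })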

An application is using too much CPU

The recommended profiler is the CPU JS Sampling Profiler, which is made for production profiling (low overhead); check the instructions to get it running, and see the sketch at the end of this section for one way to trigger a collection. After getting the exported file, you can go to speedscope to analyze it. If we load an example CPU profile and head to the Sandwich panel again, we can see a list of functions sorted by how much CPU they used.

As with the heap profiler, there are two concepts for reading the table:

  • self time: the CPU time spent in the function itself, without counting the functions it called.
  • total time: the opposite of self; it represents both the time used by the function and the time used by all functions it called.

You should then look for functions that have a high self time, which means their own code takes a lot of time to execute.

We highly recommend reading the profiler's documentation to understand all the pros and cons of using it.
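
With the signal trigger used in the Configure examples below, a collection is typically kicked off by sending the configured signal to the running process. The sketch below is only illustrative: the target pid argument is an assumption for the demo, and how a collection is stopped depends on the trigger, so check its documentation.

// Sketch: fire the SignalTrigger of an application that registered the CPU profiler for SIGUSR2
// (as in the Configure section). Run as: node send-signal.js <pid of the target app>
const targetPid = Number(process.argv[2])
process.kill(targetPid, 'SIGUSR2') // the trigger then asks the profiler to start collecting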

Installation

Install OpenProfiling for NodeJS with:

yarn add @openprofiling/nodejs

or

npm install @openprofiling/nodejs

Configure

Before running your application with @openprofiling/nodejs, you will need to choose 3 different things:

  • What do you want to profile: a profiler
  • How to start this profiler: a trigger
  • Where to send the profiling data: an exporter

Typescript Example

import { ProfilingAgent } from '@openprofiling/nodejs'
import { FileExporter } from '@openprofiling/exporter-file'
import { InspectorHeapProfiler } from '@openprofiling/inspector-heap-profiler'
import { InspectorCPUProfiler } from '@openprofiling/inspector-cpu-profiler'
import { SignalTrigger } from '@openprofiling/trigger-signal'

const profilingAgent = new ProfilingAgent()
/**
 * Register a profiler for a specific trigger
 * ex: we want to collect a CPU profile when the application receives a SIGUSR2 signal
 */
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR2' }), new InspectorCPUProfiler({}))
/**
 * Start the agent (which will tell the trigger to start listening) and
 * configure where to output the profiling data
 * ex: the file exporter will output on the disk, by default in /tmp
 */
profilingAgent.start({ exporter: new FileExporter() })

JavaScript Example

const { ProfilingAgent } = require('@openprofiling/nodejs')
const { FileExporter } = require('@openprofiling/exporter-file')
const { InspectorCPUProfiler } = require('@openprofiling/inspector-cpu-profiler')
const { SignalTrigger } = require('@openprofiling/trigger-signal')

const profilingAgent = new ProfilingAgent()
/**
 * Register a profiler for a specific trigger
 * ex: we want to collect a CPU profile when the application receives a SIGUSR1 signal
 */
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR1' }), new InspectorCPUProfiler({}))
/**
 * Start the agent (which will tell the trigger to start listening) and
 * configure where to output the profiling data
 * ex: the file exporter will output on the disk, by default in /tmp
 */
profilingAgent.start({ exporter: new FileExporter(), logLevel: 4 })

Triggers

A trigger is simply a way to start collecting data; you can choose between the following:

Profilers

Profilers are the implementations that collect profiling data from different sources. The currently available profilers are:
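
Both profilers used in the examples above come from the inspector-based packages. The sketch below combines them on one agent; it assumes register() can be called once per trigger/profiler pair, and the signal choices are illustrative.

import { ProfilingAgent } from '@openprofiling/nodejs'
import { FileExporter } from '@openprofiling/exporter-file'
import { InspectorHeapProfiler } from '@openprofiling/inspector-heap-profiler'
import { InspectorCPUProfiler } from '@openprofiling/inspector-cpu-profiler'
import { SignalTrigger } from '@openprofiling/trigger-signal'

const agent = new ProfilingAgent()
// Assumption: one register() call per trigger/profiler pair
agent.register(new SignalTrigger({ signal: 'SIGUSR1' }), new InspectorHeapProfiler({}))
agent.register(new SignalTrigger({ signal: 'SIGUSR2' }), new InspectorCPUProfiler({}))
agent.start({ exporter: new FileExporter() })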

Exporters

OpenProfiling aims to be vendor-neutral and can push profiling data to any backend with different exporter implementations. Currently, it supports:

Versioning

This library follows Semantic Versioning.

Note that before the 1.0.0 release, any minor update can have breaking changes.

LICENSE

Apache License 2.0