Cloud-native Enterprise Node.js

Configuration

Recommended editor settings, such as formatting on save using the ESLint plugin for VS Code:

{
  "editor.formatOnSave": true,
  "editor.formatOnPaste": false,
  "editor.defaultFormatter": "dbaeumer.vscode-eslint",
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  },
  "eslint.validate": [
    "javascript",
    "javascriptreact",
    "typescript",
    "typescriptreact",
  ],
  "files.insertFinalNewline": true
}

To support direnv, all you need to do is define an .envrc file that calls dotenv, as described here.
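
For example, a minimal .envrc could look like this (assuming a .env file in the project root):

# .envrc – load variables from .env into the environment
dotenv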

Fastify

The Fastify web framework comes bundled with Pino by default, as described here. A pretty-printed variant can be achieved by piping stdout to pino-pretty:

npm i -g pino-pretty
npm run dev | pino-pretty
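
For reference, a minimal Fastify setup with the built-in Pino logger might look like this (a sketch only; the repository's actual entry point may differ):

import Fastify from "fastify";

// Fastify ships with Pino as its logger; LOG_LEVEL and PORT match
// the environment variables documented below
const app = Fastify({
  logger: { level: process.env.LOG_LEVEL ?? "info" },
});

app.get("/health", async () => ({ status: "ok" }));

app.listen({ port: Number(process.env.PORT ?? 4000) }, (err) => {
  if (err) {
    app.log.error(err);
    process.exit(1);
  }
});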

Jay Wolfe has some nice blog posts about using Fastify the right way.

Testing

To make use of the coverage report, it’s recommended to install the Jest plugin.

# run tests
npm test

# run tests and generate coverage report
npm run test:coverage
open coverage/lcov-report/index.html
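
A minimal Jest test against the health endpoint could look like this (a sketch using Fastify's inject API; the route and file layout are assumptions):

import Fastify from "fastify";

test("GET /health responds with 200", async () => {
  // build a throwaway instance instead of binding to a real port
  const app = Fastify();
  app.get("/health", async () => ({ status: "ok" }));

  const res = await app.inject({ method: "GET", url: "/health" });
  expect(res.statusCode).toBe(200);
});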

Running

# clean compile folder
npm run clean

# compile TypeScript to JavaScript
npm run build

# run application
npm start

# run application in development mode with hot-reloading
npm run dev

# lint sources
npm run lint

# format sources
npm run fmt

Environment variables

Variable    Type    Default
LOG_LEVEL   string  info
PORT        number  4000
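
A typical way to read them with their defaults (a sketch, not necessarily the repository's actual config module):

// config.ts – central place for environment variables and their defaults
export const config = {
  logLevel: process.env.LOG_LEVEL ?? "info",
  port: Number(process.env.PORT ?? 4000),
};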

Scaling

Load Balancing

HAProxy is free and open-source software that provides a high-availability load balancer and reverse proxy for TCP- and HTTP-based applications, spreading requests across multiple servers.

# start 3 server instances (each in its own terminal)
PORT=4001 npm start
PORT=4002 npm start
PORT=4003 npm start
# install and start HAProxy
brew install haproxy
haproxy -f ./haproxy/haproxy.cfg
# open the stats dashboard
open http://localhost:4000/admin/stats

# call the API
curl -i http://localhost:4000

# terminate one of the API servers with `kill <pid>`
# HAProxy detects that the API is down
# re-start the API server and HAProxy will include it into load-balancing again

By default, health checks are performed on Layer 4 (TCP). If haproxy.cfg defines option httpchk GET /health for a backend, the health check moves to Layer 7 (HTTP), as you can see in the stats dashboard (LastChk column).
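
A sketch of such a backend section (the backend and server names are assumptions; ports follow the commands above, and the repository's actual haproxy.cfg may differ):

backend servers
  # switch health checks from Layer 4 (TCP) to Layer 7 (HTTP)
  option httpchk GET /health
  server api1 127.0.0.1:4001 check
  server api2 127.0.0.1:4002 check
  server api3 127.0.0.1:4003 check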

To receive gzip-compressed responses, you need to send the respective Accept-Encoding header. The HAProxy configuration file enables compression for the content types application/json and text/plain.

curl -s http://localhost:4000/ -H "Accept-Encoding: gzip" | gunzip
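
The relevant directives look roughly like this (a sketch consistent with the behavior described above):

# enable gzip compression for selected content types
compression algo gzip
compression type application/json text/plain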

HAProxy can also be configured to close new connections as soon as a maximum number of concurrent connections is reached, avoiding back pressure on the backends.
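
A sketch of how this can look (the frontend name and limit are assumptions, not the repository's actual configuration):

frontend api
  bind *:4000
  maxconn 100   # connections beyond this limit wait in the queue
  # or actively reject the overflow instead of queueing it:
  # tcp-request connection reject if { fe_conn gt 100 }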

Load Testing

To load test a single instance of the application, stop HAProxy and start the instance without piping to pino-pretty.

wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU.

$ brew install wrk
$ wrk -c10 -d60s --latency http://localhost:4001/health
Running 1m test @ http://localhost:4001/health
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   752.68us    0.85ms  33.52ms   93.70%
    Req/Sec     7.73k     0.94k    9.52k    74.42%
  Latency Distribution
     50%  430.00us
     75%    0.93ms
     90%    1.32ms
     99%    3.45ms
  923474 requests in 1.00m, 108.33MB read
Requests/sec:  15388.55
Transfer/sec:      1.81MB
$ haproxy -f ./haproxy/passthrough.cfg
$ wrk -c10 -d60s --latency http://localhost:4000/health
Running 1m test @ http://localhost:4000/health
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.90ms    1.54ms  49.85ms   90.55%
    Req/Sec    10.08k     2.29k   13.40k    66.08%
  Latency Distribution
     50%  347.00us
     75%  689.00us
     90%    2.33ms
     99%    6.89ms
  1203917 requests in 1.00m, 141.22MB read
Requests/sec:  20062.58
Transfer/sec:      2.35MB

Based on these numbers, the application (run locally on my specific machine) reaches a throughput of ~15,000 requests/s and a 99th-percentile (TP99) latency of ~4 ms when testing a single instance. Behind HAProxy with a passthrough configuration, throughput reaches ~20,000 requests/s with a TP99 latency of ~7 ms.

Observability

The Three Pillars of observability are metrics, logs, and traces.

# install the Docker plugin that allows containers to ship their logs to Loki (see the compose sketch below)
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions

# start observability stack
docker compose up -d

# stop observability stack
docker compose down
# stop observability stack and remove volumes
docker compose down -v
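
With the driver installed, a service can ship its logs to Loki via its logging configuration, roughly like this (a sketch; the repository's actual docker-compose.yml may differ):

services:
  api:
    image: node:20-alpine
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"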

In Grafana we can query for logs from the Loki container that contain a traceID using this query:

{container_name="m99coder-loki"} |= "traceID"

Expand any log line of the result and click the “Tempo” link to jump directly from logs to traces.

# install pino-tee
npm i -g pino-tee

# write logs to files while keeping pretty-printed output in the terminal
npm start | pino-tee info ./logs/api.info.log | tee -a ./logs/api.log | pino-pretty

You can see the application logs in Grafana.

Note: So far, the log files are not kept in sync between the host and the container, which means the updates do not show up in Grafana.

Resources