Merge branch 'master' of github.com:pinojs/pino
mcollina committed Sep 19, 2022
2 parents 727244e + 4f9e5a6 commit 2858099
Showing 15 changed files with 142 additions and 112 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -72,10 +72,10 @@ format logs during development:
### Transports & Log Processing

Due to Node's single-threaded event-loop, it's highly recommended that sending,
alert triggering, reformatting, and all forms of log processing
are conducted in a separate process or thread.

In Pino terminology, we call all log processors "transports" and recommend that the
transports be run in a worker thread using our `pino.transport` API.

For more details see our [Transports⇗](docs/transports.md) document.
@@ -92,9 +92,9 @@ See the [Benchmarks](docs/benchmarks.md) document for comparisons.

### Bundling support

Pino supports being bundled using tools like webpack or esbuild.

See the [Bundling](docs/bundling.md) document for more information.

<a name="team"></a>
## The Team
@@ -139,8 +139,8 @@ Pino is an **OPEN Open Source Project**. This means that:
See the [CONTRIBUTING.md](https://github.com/pinojs/pino/blob/master/CONTRIBUTING.md) file for more details.

<a name="acknowledgments"></a>
## Acknowledgments

This project was kindly sponsored by [nearForm](https://nearform.com).

44 changes: 22 additions & 22 deletions docs/api.md
@@ -40,7 +40,7 @@
## `pino([options], [destination]) => logger`

The exported `pino` function takes two optional arguments,
[`options`](#options) and [`destination`](#destination), and
returns a [logger instance](#logger).

<a id=options></a>
@@ -70,7 +70,7 @@ Additional levels can be added to the instance via the `customLevels` option.
Default: `undefined`

Use this option to define additional logging levels.
The keys of the object correspond to the namespace of the log level,
and the values should be the numerical value of the level.

```js
@@ -88,7 +88,7 @@ logger.foo('hi')
Default: `false`

Use this option to only use defined `customLevels` and omit Pino's levels.
Logger's default `level` must be changed to a value in `customLevels` to use `useOnlyCustomLevels`.
Warning: this option may not be supported by downstream transports.

```js
@@ -100,13 +100,13 @@ const logger = pino({
level: 'foo'
})
logger.foo('hi')
logger.info('hello') // Will throw an error saying info is not found in logger object
```
#### `depthLimit` (Number)

Default: `5`

Option to limit stringification at a specific nesting depth when logging circular objects.

#### `edgeLimit` (Number)

@@ -266,11 +266,11 @@ Default: `undefined`
As an array, the `redact` option specifies paths that should
have their values redacted from any log output.

Each path must be a string using a syntax that corresponds to JavaScript dot and bracket notation.
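As an illustrative sketch of how such a path breaks down into individual keys (a hypothetical helper for explanation only, not pino's actual implementation — pino delegates redaction to the `fast-redact` module):

```javascript
// Hypothetical helper: split a redact-style path such as
// 'a.b["c-d"]' into its individual keys. Supports dot segments
// and bracket segments with single, double, or no quotes.
function parsePath (path) {
  const keys = []
  const re = /\[(?:"([^"]*)"|'([^']*)'|([^\]]+))\]|([^.[\]]+)/g
  let m
  while ((m = re.exec(path)) !== null) {
    keys.push(m[1] ?? m[2] ?? m[3] ?? m[4])
  }
  return keys
}

parsePath('headers.cookie') // → ['headers', 'cookie']
parsePath('a.b["c-d"]')     // → ['a', 'b', 'c-d']
```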

If an object is supplied, three options can be specified:
* `paths` (array): Required. An array of paths. See [redaction - Path Syntax ⇗](/docs/redaction.md#paths) for specifics.
* `censor` (String|Function|Undefined): Optional. When supplied as a String the `censor` option will overwrite keys that are to be redacted. When set to `undefined` the key will be removed entirely from the object.
The `censor` option may also be a mapping function. The (synchronous) mapping function has the signature `(value, path) => redactedValue` and is called with the unredacted `value` and `path` to the key being redacted, as an array. For example, given a redaction path of `a.b.c`, the `path` argument would be `['a', 'b', 'c']`. The value returned from the mapping function becomes the applied censor value.
Default: `'[Redacted]'`
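For example, a censor mapping function that masks all but the last four characters might look like this (the masking rule and key names are assumptions for illustration):

```javascript
// Sketch of a censor mapping function with the documented
// (value, path) => redactedValue signature. Values of four
// characters or fewer are returned unmasked.
const censor = (value, path) => {
  const s = String(value)
  const masked = '*'.repeat(Math.max(s.length - 4, 0)) + s.slice(-4)
  // `path` is the key path as an array, e.g. ['card', 'number'];
  // it could be used to vary the rule per key
  return `${masked} (redacted at ${path.join('.')})`
}

censor('4111111111111111', ['card', 'number'])
// → '************1111 (redacted at card.number)'
```

It would then be supplied as `redact: { paths: [...], censor }` in the pino options.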
@@ -356,7 +356,7 @@ const formatters = {

Changes the shape of the log object. This function will be called every time
one of the log methods (such as `.info`) is called. All arguments passed to the
log method, except the message, will be passed to this function. By default, it does
not change the shape of the log object.

```js
@@ -503,7 +503,7 @@ pino({ transport: {}}, '/path/to/somewhere') // THIS WILL NOT WORK, DO NOT DO TH
pino({ transport: {}}, process.stderr) // THIS WILL NOT WORK, DO NOT DO THIS
```
when using the `transport` option. In this case, an `Error` will be thrown.
* See [pino.transport()](#pino-transport)
@@ -513,7 +513,7 @@
Any error thrown inside the callback will be uncaught and should be handled inside the callback.
```js
const parent = require('pino')({ onChild: (instance) => {
// Execute callback code for each newly created child.
}})
// `onChild` will now be executed with the new child.
parent.child(bindings)
@@ -567,7 +567,7 @@ path, e.g. `/tmp/1`.
Default: `false`
Using the global symbol `Symbol.for('pino.metadata')` as a key on the `destination` parameter and
setting the key to `true` indicates that the following properties should be
set on the `destination` object after each log line is written:
* the last logging level as `destination.lastLevel`
@@ -613,7 +613,7 @@
#### `mergingObject` (Object)
An object can optionally be supplied as the first parameter. Each enumerable key and value
of the `mergingObject` is copied into the JSON log line.
```js
logger.info({MIX: {IN: true}})
@@ -658,7 +658,7 @@ the following placeholders:
* `%s` – string placeholder
* `%d` – digit placeholder
* `%O`, `%o`, and `%j` – object placeholder
Values supplied as additional arguments to the logger method will
then be interpolated accordingly.
@@ -776,7 +776,7 @@ Write a `'error'` level log, if the configured `level` allows for it.
Write a `'fatal'` level log, if the configured `level` allows for it.
Since `'fatal'` level messages are intended to be logged just before the process exits, the `fatal`
method will always sync flush the destination.
Therefore it's important not to misuse `fatal` since
it will cause performance overhead if used for any
@@ -832,7 +832,7 @@ Options for child logger. These options will override the parent logger options.
##### `options.level` (String)
The `level` property overrides the log level of the child logger.
By default, the parent log level is inherited.
After the creation of the child logger, it is also accessible using the [`logger.level`](#logger-level) key.
```js
@@ -921,9 +921,9 @@ The core levels and their values are as follows:
The logging level is a *minimum* level based on the associated value of that level.
For instance if `logger.level` is `info` *(30)* then `info` *(30)*, `warn` *(40)*, `error` *(50)*, and `fatal` *(60)* log methods will be enabled but the `trace` *(10)* and `debug` *(20)* methods, being less than 30, will not.
The `silent` logging level is a specialized level that will disable all logging;
the `silent` log method is a noop function.
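A simplified model of this minimum-level rule, using the core level values (illustrative only — the real check is `logger.isLevelEnabled`):

```javascript
// Core pino level values
const levels = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 }

// A level is enabled when its value meets or exceeds the configured minimum;
// 'silent' disables everything
function isEnabled (configured, candidate) {
  if (configured === 'silent') return false
  return levels[candidate] >= levels[configured]
}

isEnabled('info', 'warn')  // → true
isEnabled('info', 'debug') // → false
```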
<a id="islevelenabled"></a>
@@ -994,7 +994,7 @@ $ node -p "require('pino')().levels"
### logger\[Symbol.for('pino.serializers')\]
Returns the serializers as applied to the current logger instance. If a child logger did not
register its own serializer upon instantiation, the serializers of the parent will be returned.
<a id="level-change"></a>
### Event: 'level-change'
@@ -1079,7 +1079,7 @@ A `pino.destination` instance can also be used to reopen closed files
<a id="pino-transport"></a>
### `pino.transport(options) => ThreadStream`
Create a stream that routes logs to a worker thread that
wraps around a [Pino Transport](/docs/transports.md).
```js
@@ -1122,7 +1122,7 @@ const transport = pino.transport({
pino(transport)
```
If `WeakRef`, `WeakMap`, and `FinalizationRegistry` are available in the current runtime (v14.5.0+), then the thread
will be automatically terminated in case the stream or logger goes out of scope.
The `transport()` function adds a listener to `process.on('beforeExit')` and `process.on('exit')` to ensure the worker
is flushed and all data synced before the process exits.
@@ -1242,7 +1242,7 @@
<a id="pino-stdtimefunctions"></a>
### `pino.stdTimeFunctions` (Object)
The [`timestamp`](#opt-timestamp) option can accept a function that determines the
`timestamp` value in a log line.
The `pino.stdTimeFunctions` object provides a very small set of common functions for generating the
@@ -1258,7 +1258,7 @@
<a id="pino-symbols"></a>
### `pino.symbols` (Object)
For integration purposes with ecosystem and third-party libraries, `pino.symbols`
exposes the symbols used to hold non-public state and methods on the logger instance.
Access to the symbols allows logger state to be adjusted, and methods to be overridden or
12 changes: 6 additions & 6 deletions docs/asynchronous.md
@@ -1,6 +1,6 @@
# Asynchronous Logging

Asynchronous logging enables the minimum overhead of Pino.
Asynchronous logging works by buffering log messages and writing them in larger chunks.

```js
@@ -13,16 +13,16 @@ const logger = pino(pino.destination({
```

It's always possible to turn on synchronous logging by passing `sync: true`.
In this mode of operation, log messages are directly written to the
output stream as the messages are generated with a _blocking_ operation.

* See [`pino.destination`](/docs/api.md#pino-destination)
* `pino.destination` is implemented on [`sonic-boom`](https://github.com/mcollina/sonic-boom).

### AWS Lambda

Asynchronous logging is disabled by default on AWS Lambda or any other environment
that modifies `process.stdout`. If forcefully turned on, we recommend calling `dest.flushSync()` at the end
of each function execution to avoid losing data.

## Caveats
@@ -36,5 +36,5 @@ Asynchronous logging has a couple of important caveats:

See also:

* [`pino.destination` API](/docs/api.md#pino-destination)
* [`destination` parameter](/docs/api.md#destination)
20 changes: 10 additions & 10 deletions docs/browser.md
@@ -1,6 +1,6 @@
# Browser API

Pino is compatible with [`browserify`](https://npm.im/browserify) for browser-side usage:

This can be useful with isomorphic/universal JavaScript code.

@@ -101,7 +101,7 @@ pino.info({custom: 'a', another: 'b'})
```

When `serialize` is `true` the standard error serializer is also enabled (see https://github.com/pinojs/pino/blob/master/docs/api.md#stdSerializers).
This is a global serializer, which will apply to any `Error` objects passed to the logger methods.

If `serialize` is an array, the standard error serializer is also automatically enabled. It can
be explicitly disabled by including the string `!stdSerializers.err` in the serialize array, like so:
@@ -141,7 +141,7 @@ message and a `logEvent` object.

The `logEvent` object is a data structure representing a log message: it represents
the arguments passed to a logger statement, the level
at which they were logged, and the hierarchy of child bindings.

The `logEvent` format is structured like so:

@@ -154,25 +154,25 @@ The `logEvent` format is structured like so:
}
```

The `ts` property is a Unix epoch timestamp in milliseconds; the time is taken from the moment the
logger method is called.

The `messages` array contains all arguments passed to the logger method (for instance `logger.info('a', 'b', 'c')`
would result in a `messages` array of `['a', 'b', 'c']`).

The `bindings` array represents each child logger (if any), and the relevant bindings.
For instance, given `logger.child({a: 1}).child({b: 2}).info({c: 3})`, the bindings array
would hold `[{a: 1}, {b: 2}]` and the `messages` array would be `[{c: 3}]`. The `bindings`
are ordered according to their position in the child logger hierarchy, with the lowest index
being the top of the hierarchy.
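On the receiving side, the hierarchy can be recombined into a single object by merging in order, later entries overriding earlier ones (a sketch under the assumption that a flat merged object is the desired result; the `mergeLogEvent` helper is hypothetical):

```javascript
// Merge child-logger bindings (top of hierarchy first) with the
// message objects into one flat log object
function mergeLogEvent (bindings, messages) {
  // only object messages participate in the merge; string messages
  // would typically become the `msg` property instead
  const objects = messages.filter((m) => m !== null && typeof m === 'object')
  return Object.assign({}, ...bindings, ...objects)
}

mergeLogEvent([{ a: 1 }, { b: 2 }], [{ c: 3 }])
// → { a: 1, b: 2, c: 3 }
```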

By default, serializers are not applied to log output in the browser, but they will *always* be
applied to `messages` and `bindings` in the `logEvent` object. This allows us to ensure a consistent
format for all values between server and client.

The `level` holds the label (for instance `info`), and the corresponding numerical value
(for instance `30`). This could be important in cases where client-side level values and
labels differ from server-side.

The point of the `send` function is to remotely record log messages:

@@ -184,7 +184,7 @@ const pino = require('pino')({
send: function (level, logEvent) {
if (level === 'warn') {
// maybe send the logEvent to a separate endpoint
// or maybe analyze the messages further before sending
}
// we could also use the `logEvent.level.value` property to determine
// numerical value
@@ -205,4 +205,4 @@ const pino = require('pino')({browser: {disabled: true}})
```

The `disabled` option will disable logging in browser if set
to `true`. By default, it is set to `false`.
8 changes: 4 additions & 4 deletions docs/bundling.md
@@ -2,17 +2,17 @@

Due to its internal architecture based on Worker Threads, it is not possible to bundle Pino *without* generating additional files.

In particular, a bundler must ensure that the following files are also bundled separately:

* `lib/worker.js` from the `thread-stream` dependency
* `file.js`
* `lib/worker.js`
* `lib/worker-pipeline.js`
* Any transport used by the user (like `pino-pretty`)

Once the files above have been generated, the bundler must also add information about the files above by injecting code that sets `__bundlerPathsOverrides` in the `globalThis` object.

The variable is an object whose keys are identifiers for the files and the values are the paths of the files relative to the currently bundled files.

Example:

@@ -27,7 +27,7 @@ globalThis.__bundlerPathsOverrides = {
};
```

Note that `pino/file`, `pino-worker`, `pino-pipeline-worker`, and `thread-stream-worker` are required identifiers. Other identifiers are possible based on the user configuration.

## Webpack Plugin

8 changes: 4 additions & 4 deletions docs/child-loggers.md
@@ -50,7 +50,7 @@ benchPinoExtremeChildChild*10000: 127.753ms

## Duplicate keys caveat

Naming conflicts can arise between child loggers and
children of child loggers.

This isn't as bad as it sounds, even if the same keys between
@@ -71,10 +71,10 @@ $ cat my-log
{"pid":95469,"hostname":"MacBook-Pro-3.home","level":30,"msg":"howdy","time":1459534114473,"a":"property","a":"prop"}
```

Notice how there are two keys named `a` in the JSON output. The sub-child's properties
appear after the parent child properties.

At some point, the logs will most likely be processed (for instance with a [transport](transports.md)),
and this generally involves parsing. `JSON.parse` will return an object where the conflicting
namespace holds the final value assigned to it:

@@ -92,4 +92,4 @@ in light of an expected log processing approach.

One of Pino's performance tricks is to avoid building objects and stringifying
them, so we're building strings instead. This is why duplicate keys between
parents and children will end up in the log output.
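This last-value-wins behavior of `JSON.parse` is easy to verify directly:

```javascript
// A log line with a duplicated key, as in the example above:
// JSON.parse keeps the final value assigned to the key
const line = '{"msg":"howdy","a":"property","a":"prop"}'

console.log(JSON.parse(line).a) // prop
```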
