feat: Add --profile option to lerna exec and lerna run #2376

Merged
merged 16 commits into from Dec 27, 2019
Changes from 6 commits
24 changes: 24 additions & 0 deletions commands/exec/README.md
@@ -79,3 +79,27 @@ Pass `--no-bail` to disable this behavior, executing in _all_ packages regardles

Disable package name prefixing when output is streaming (`--stream` _or_ `--parallel`).
This option can be useful when piping results to other processes, such as editor plugins.

### `--profile`

Profiles the command executions and produces a performance profile that can be analyzed using the DevTools of a
Chromium-based browser (direct URL: `devtools://devtools/bundled/devtools_app.html`). The profile shows a timeline of
the command executions, each assigned to an open slot. The total number of slots equals the `--concurrency` option,
and the number of open slots at any moment is `--concurrency` minus the number of ongoing operations. The end result
is a visualization of the parallel execution of your commands.

The default location of the performance profiles is at the root of your project.

```sh
$ lerna exec --profile -- <command>
```

> **Note:** Lerna will only profile when topological sorting is enabled (i.e. without `--parallel` and `--no-sort`).
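Each entry in the generated profile follows the Trace Event Format used by DevTools. Below is a minimal sketch of one "complete" (`ph: "X"`) event; the field names match what the profiler writes, while the numeric values are purely illustrative:

```javascript
// One complete-duration event per package execution, per the Trace Event Format.
// The values here are made up; the profiler derives them from process.hrtime().
const event = {
  name: "package-a", // package whose command was profiled
  ph: "X",           // "complete" event: carries both a start time and a duration
  ts: 1200,          // start time in microseconds
  pid: 1,            // single logical process
  tid: 0,            // concurrency slot (0 .. concurrency - 1)
  dur: 3500,         // execution time in microseconds
};

// The profile file is simply a JSON array of such events.
const profile = JSON.stringify([event]);
```

Loading this file in the DevTools Performance panel renders one lane per `tid`, which is what makes the concurrency slots visible.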

### `--profile-location <location>`

You can provide a custom location for the performance profiles. The location is relative to the root of your project.

```sh
$ lerna exec --profile --profile-location=logs/profile/ -- <command>
```
10 changes: 10 additions & 0 deletions commands/exec/command.js
@@ -57,6 +57,16 @@ exports.builder = yargs => {
hidden: true,
type: "boolean",
},
profile: {
group: "Command Options:",
describe: "Profile command executions and output performance profile to default location.",
type: "boolean",
},
"profile-location": {
group: "Command Options:",
describe: "Output performance profile to custom location (relative to the project root).",
type: "string",
},
});

return filterable(yargs);
4 changes: 4 additions & 0 deletions commands/exec/index.js
@@ -127,6 +127,10 @@ class ExecCommand extends Command {
return runTopologically(this.filteredPackages, runner, {
concurrency: this.concurrency,
rejectCycles: this.options.rejectCycles,
profile: this.options.profile,
profileLocation: this.options.profileLocation,
rootPath: this.project.rootPath,
log: this.logger,
});
}

24 changes: 24 additions & 0 deletions commands/run/README.md
@@ -82,3 +82,27 @@ Pass `--no-bail` to disable this behavior, running the script in _all_ packages

Disable package name prefixing when output is streaming (`--stream` _or_ `--parallel`).
This option can be useful when piping results to other processes, such as editor plugins.

### `--profile`

Profiles the script executions and produces a performance profile that can be analyzed using the DevTools of a
Chromium-based browser (direct URL: `devtools://devtools/bundled/devtools_app.html`). The profile shows a timeline of
the script executions, each assigned to an open slot. The total number of slots equals the `--concurrency` option,
and the number of open slots at any moment is `--concurrency` minus the number of ongoing operations. The end result
is a visualization of the parallel execution of your scripts.

The default location of the performance profiles is at the root of your project.

```sh
$ lerna run build --profile
```

> **Note:** Lerna will only profile when topological sorting is enabled (i.e. without `--parallel` and `--no-sort`).

### `--profile-location <location>`

You can provide a custom location for the performance profiles. The location is relative to the root of your project.

```sh
$ lerna run build --profile --profile-location=logs/profile/
```
10 changes: 10 additions & 0 deletions commands/run/command.js
@@ -59,6 +59,16 @@ exports.builder = yargs => {
hidden: true,
type: "boolean",
},
profile: {
group: "Command Options:",
describe: "Profile script executions and output performance profile to default location.",
type: "boolean",
},
"profile-location": {
group: "Command Options:",
describe: "Output performance profile to custom location (relative to the project root).",
type: "string",
},
});

return filterable(yargs);
4 changes: 4 additions & 0 deletions commands/run/index.js
@@ -135,6 +135,10 @@ class RunCommand extends Command {
return runTopologically(this.packagesWithScript, runner, {
concurrency: this.concurrency,
rejectCycles: this.options.rejectCycles,
profile: this.options.profile,
profileLocation: this.options.profileLocation,
rootPath: this.project.rootPath,
log: this.logger,
});
}

39 changes: 29 additions & 10 deletions package-lock.json


6 changes: 5 additions & 1 deletion utils/run-topologically/package.json
@@ -13,6 +13,7 @@
"url": "https://github.com/evocateur"
},
"files": [
"profiler.js",
"run-topologically.js"
],
"main": "run-topologically.js",
@@ -33,6 +34,9 @@
"dependencies": {
"@lerna/query-graph": "file:../query-graph",
"figgy-pudding": "^3.5.1",
"p-queue": "^4.0.0"
"fs-extra": "^8.1.0",
"npmlog": "^4.1.2",
"p-queue": "^6.2.1",
"upath": "^1.2.0"
}
}
86 changes: 86 additions & 0 deletions utils/run-topologically/profiler.js
@@ -0,0 +1,86 @@
"use strict";

const fs = require("fs-extra");
const upath = require("upath");

// Convert a process.hrtime() tuple of [seconds, nanoseconds] to microseconds.
const hrtimeToMicroseconds = hrtime => {
return (hrtime[0] * 1e9 + hrtime[1]) / 1000;
};

const range = len => {
return Array(len)
.fill()
.map((_, idx) => idx);
};

const generateOutputname = () => {
const now = new Date(); // 2011-10-05T14:48:00.000Z
const datetime = now.toISOString().split(".")[0]; // 2011-10-05T14:48:00
const datetimeNormalized = datetime.replace(/-|:/g, ""); // 20111005T144800
return `Lerna-Profile-${datetimeNormalized}.json`;
};

class Profiler {
constructor({ concurrency, log, profile, profileLocation, rootPath }) {
this.events = [];
this.profile = profile;
this.log = log;
this.profileLocation = profileLocation;
this.rootPath = rootPath;
this.threads = range(concurrency);
}

run(fn, name) {
if (!this.profile) {
return fn();
}

let startTime;
let threadId;

return Promise.resolve()
.then(() => {
startTime = process.hrtime();
threadId = this.threads.shift();
})
.then(() => fn())
.then(value => {
const duration = process.hrtime(startTime);

// Trace Event Format documentation:
// https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview
const event = {
name,
ph: "X",
ts: hrtimeToMicroseconds(startTime),
pid: 1,
tid: threadId,
dur: hrtimeToMicroseconds(duration),
};

this.events.push(event);

this.threads.unshift(threadId);
this.threads.sort();

return value;
});
}

output() {
if (!this.profile) {
return;
}

const outputFolder = this.profileLocation
? upath.join(this.rootPath, this.profileLocation)
: this.rootPath;
const outputPath = upath.join(outputFolder, generateOutputname());

fs.outputJsonSync(outputPath, this.events);

this.log.info("profiler", `Performance profile saved to ${outputPath}`);
}
}

module.exports = Profiler;
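The thread-slot bookkeeping in `run()` can be sketched in isolation: a task takes the lowest free slot with `shift()` before it starts, then returns it with `unshift()` plus `sort()` when it finishes, so the lowest-numbered free slot is always reused first. A minimal standalone sketch (the slot count of 3 stands in for `--concurrency`):

```javascript
// Same helper as in profiler.js: [0, 1, ..., len - 1].
const range = len =>
  Array(len)
    .fill()
    .map((_, idx) => idx);

const threads = range(3); // three slots, as with --concurrency=3

const taken = threads.shift(); // task starts: claim the lowest free slot
// ...the task would run here with tid = taken...
threads.unshift(taken); // task ends: return the slot to the pool
threads.sort(); // keep slots ordered so slot 0 is reused next
```

Reusing low slot numbers keeps the DevTools timeline compact: lanes only appear for slots that are actually contended.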
24 changes: 20 additions & 4 deletions utils/run-topologically/run-topologically.js
@@ -1,19 +1,28 @@
"use strict";

const PQueue = require("p-queue");
const { default: PQueue } = require("p-queue");
const npmlog = require("npmlog");
const figgyPudding = require("figgy-pudding");
const QueryGraph = require("@lerna/query-graph");
const Profiler = require("./profiler");

module.exports = runTopologically;

const TopologicalConfig = figgyPudding({
log: { default: npmlog },
// p-queue options
concurrency: {},
// query-graph options
"graph-type": {},
graphType: "graph-type",
"reject-cycles": {},
rejectCycles: "reject-cycles",
// profile options
profile: { default: false },
"profile-location": {},
profileLocation: "profile-location",
"root-path": {},
rootPath: "root-path",
});

/**
@@ -27,8 +36,11 @@ const TopologicalConfig({
* @returns {Promise<Array<*>>} when all executions complete
*/
function runTopologically(packages, runner, opts) {
const { concurrency, graphType, rejectCycles } = TopologicalConfig(opts);
const { concurrency, graphType, log, profile, profileLocation, rejectCycles, rootPath } = TopologicalConfig(
opts
);

const profiler = new Profiler({ concurrency, log, profile, profileLocation, rootPath });
const queue = new PQueue({ concurrency });
const graph = new QueryGraph(packages, { graphType, rejectCycles });

@@ -41,7 +53,8 @@ function runTopologically(packages, runner, opts) {

queue
.add(() =>
runner(pkg)
profiler
.run(() => runner(pkg), name)
.then(value => returnValues.push(value))
.then(() => graph.markAsDone(pkg))
.then(() => queueNextAvailablePackages())
@@ -51,6 +64,9 @@

queueNextAvailablePackages();

return queue.onIdle().then(() => resolve(returnValues));
return queue
.onIdle()
.then(() => profiler.output())
.then(() => resolve(returnValues));
});
}
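The scheduling loop above (run every package whose dependencies are done, mark it done, then queue any newly unblocked packages) can be sketched synchronously. The toy graph and package names below are hypothetical, and the real implementation runs ready packages concurrently through p-queue and `@lerna/query-graph` rather than this sequential loop:

```javascript
// Minimal synchronous sketch of topological scheduling.
// Each package maps to the list of packages it depends on.
const graph = {
  core: [], // no dependencies
  utils: ["core"],
  cli: ["core", "utils"],
};

const done = new Set();
const order = [];

function runTopologicallySketch(runner) {
  const pending = new Set(Object.keys(graph));
  while (pending.size > 0) {
    // A package is ready once all of its dependencies have finished.
    const ready = [...pending].filter(name => graph[name].every(dep => done.has(dep)));
    for (const name of ready) {
      runner(name);
      done.add(name);
      pending.delete(name); // finishing may unblock packages on the next pass
    }
  }
}

runTopologicallySketch(name => order.push(name));
```

Note the sketch assumes an acyclic graph; in the real code, cycles are detected by `QueryGraph` and rejected or warned about depending on `--reject-cycles`.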