This repository has been archived by the owner on Jul 3, 2019. It is now read-only.

Start benchmark suite #25

Open
zkat opened this issue Feb 14, 2017 · 1 comment


zkat commented Feb 14, 2017

pacote is built for performance. Performance is meaningless without benchmarks and profiling. So. We need benchmarks.

There should be benchmarks for each of the supported types (note: only registry ones are needed for 0.2.0), with small, medium, and large packages (including some variation in number of files vs. size of individual files), all of them covering both manifest-fetching and extraction.
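The matrix above can be generated mechanically. This is a minimal sketch; the size labels, operation names, and `benchmarkMatrix` helper are illustrative placeholders, not pacote API names.

```javascript
// Sketch: enumerate the benchmark matrix described above.
// Labels are placeholders for illustration, not pacote identifiers.
const sizes = ['small', 'medium', 'large']
const operations = ['manifest', 'extract']

function benchmarkMatrix () {
  const cases = []
  for (const size of sizes) {
    for (const op of operations) {
      // Each case would eventually map to a concrete fixture package.
      cases.push({ size, op, name: `${op}:${size}` })
    }
  }
  return cases
}
```

Crossing the two axes this way keeps the suite exhaustive: every size gets both a manifest-fetch and an extraction benchmark.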

We should make sure all the benchmark runs also hit the following cases, for each of the groups described above:

  • no shrinkwrap, tarball extract required
  • no shrinkwrap, but with pkg._hasShrinkwrap === false (so no extract)
  • has shrinkwrap, with alternative fetch (so, an endpoint, git-host, etc.)
  • has shrinkwrap, tarball extract required
  • cached data, no memoization (lib/cache exports a _clearMemoized() fn for this purpose)
  • memoized manifest data (tarballs are not memoized)
  • cached data for package needing shrinkwrap fetch
  • memoized data for package needing shrinkwrap fetch
  • stale cache data (so, 304s)
  • concurrency of 50-100 for all of the above, to check for contention and starvation (this is usually what the CLI will set its concurrency to).
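For the concurrency case, a minimal sketch of a concurrency-limited runner, assuming each benchmarked task is any Promise-returning function (e.g. a wrapped pacote.manifest() call); the `runWithConcurrency` helper is hypothetical, not part of pacote.

```javascript
// Run `tasks` (array of () => Promise) with at most `limit` in flight,
// e.g. limit = 50 to mirror the CLI-like concurrency mentioned above.
function runWithConcurrency (tasks, limit) {
  return new Promise((resolve, reject) => {
    const results = []
    let next = 0    // index of the next task to launch
    let done = 0    // count of settled tasks
    let active = 0  // tasks currently in flight
    function launch () {
      while (active < limit && next < tasks.length) {
        const i = next++
        active++
        tasks[i]().then(res => {
          results[i] = res
          active--
          done++
          if (done === tasks.length) resolve(results)
          else launch()
        }, reject)
      }
    }
    if (tasks.length === 0) resolve(results)
    else launch()
  })
}
```

Driving the suite through a runner like this should surface contention and starvation that a purely serial benchmark would hide.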

https://npm.im/benchmark does support async stuff and seems like a reasonable base to build this suite upon.

Marking this as a starter issue because, while it's likely to take some time to write, you need relatively little context to write some baseline benchmarks for the above cases. The actual calls are literally all variations of pacote.manifest() and pacote.extract(): that's the level these benchmarks should run at, rather than any internals. At least for now.

I would also say that automatically comparing benchmark results across different versions is just a stretch goal, because the most important bit is being able to run these benchmarks at all.

@zkat zkat added this to the 0.2.0 milestone Feb 14, 2017
@zkat zkat changed the title Benchmarks Start benchmark suite Feb 14, 2017

zkat commented Mar 5, 2017

Cacache has one! Can probably yank it out into an external library to make reporting nicer.

zkat added a commit that referenced this issue Mar 10, 2017
@zkat zkat modified the milestones: 1.0.0, 1.0.1 Mar 11, 2017