
Fairer comparison of Babel and Buble #58

Open

aleclarson opened this issue Aug 29, 2018 · 3 comments

@aleclarson

The current Babel benchmark should include Babylon parsing just like the Buble benchmark includes Acorn parsing. Thoughts?
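For illustration only (a hedged sketch, not the benchmark's actual code; `source` stands for an assumed input payload): the asymmetry looks roughly like this with the babel-core 6, babylon, and buble APIs.

    const babel = require('babel-core');
    const babylon = require('babylon');
    const buble = require('buble');

    // `source` is the ES2015 input under test (assumed).

    // Bublé parses internally (with its bundled Acorn), so parsing
    // happens inside the measured call:
    buble.transform(source, { transforms: { modules: false } });

    // Babel can instead be driven from an AST parsed ahead of time,
    // which leaves Babylon parsing outside the measured region:
    const ast = babylon.parse(source, { sourceType: 'module' });
    babel.transformFromAst(ast, source, { presets: ['es2015'] });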

PS: This benchmark is a great comparison of existing JavaScript parsers, and you can even verify the results in your browser. Perhaps this repository should take a similar approach?

cc: @hzoo @Rich-Harris

@mathiasbynens (Member)

To be absolutely clear: the goal of the Web Tooling Benchmark is not to compare the performance of these tools against each other, but to measure a JS engine's performance when running an example payload through each library. It is in no way meant as a comparison of Babel vs. Bublé!

Still, we could consider a PR that changes https://github.com/v8/web-tooling-benchmark/blob/master/src/acorn-benchmark.js to move the parsing out of the measured time region. We have a separate acorn benchmark after all. It's important that the measurements are of the same order as the other tests, though.
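A minimal sketch of that split (hypothetical; the repo's actual harness differs, and `payload` and `RUNS` are assumed names). It is shown with Babel's API, since babel-core exposes transformFromAst; Bublé's public transform always parses internally, so that test would need a different approach:

    const babylon = require('babylon');
    const babel = require('babel-core');

    // Setup: parse once, outside the timed region.
    const ast = babylon.parse(payload, { sourceType: 'module' });

    // Measured region: only the transform itself is timed.
    const start = Date.now();
    for (let i = 0; i < RUNS; i++) {
      babel.transformFromAst(ast, payload, { presets: ['es2015'] });
    }
    const seconds = (Date.now() - start) / 1000;
    console.log(`${(RUNS / seconds).toFixed(2)} runs/s`);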

@aleclarson (Author)

> Still, we could consider a PR that changes https://github.com/v8/web-tooling-benchmark/blob/master/src/acorn-benchmark.js

Did you mean to link to the Buble benchmark?

> It's important that the measurements are of the same order as the other tests, though.

Not sure what you mean. Can you give an example?

@mathiasbynens (Member)

@aleclarson Right, I did indeed post the wrong link. Sorry for the confusion!

By "the measurements should roughly be of the same order" I mean the following. Here's the example from the README:

$ node dist/cli.js
Running Web Tooling Benchmark v0.5.1…
-------------------------------------
         acorn:  6.94 runs/s
         babel:  7.45 runs/s
  babel-minify:  6.66 runs/s
       babylon:  6.26 runs/s
         buble:  4.07 runs/s
          chai: 14.33 runs/s
  coffeescript:  5.95 runs/s
        espree:  2.09 runs/s
       esprima:  4.13 runs/s
        jshint:  8.84 runs/s
         lebab:  7.07 runs/s
       postcss:  5.35 runs/s
       prepack:  5.58 runs/s
      prettier:  6.19 runs/s
    source-map:  7.63 runs/s
    typescript:  8.59 runs/s
     uglify-es: 13.69 runs/s
     uglify-js:  4.59 runs/s
-------------------------------------
Geometric mean:  6.38 runs/s

We want each test to land at roughly the same number of runs/s. For example, it would not be acceptable for a single test to take 100x as long, or to run 100x as fast, as the mean.
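To make the effect concrete (a hypothetical illustration, not data from the benchmark): the overall score is a geometric mean, so a single far-out test drags it sharply.

    // Geometric mean of a list of runs/s scores.
    const geoMean = xs =>
      Math.exp(xs.reduce((sum, x) => sum + Math.log(x), 0) / xs.length);

    console.log(geoMean([6, 7, 8, 6, 7]).toFixed(2));    // "6.76"
    console.log(geoMean([6, 7, 8, 6, 0.07]).toFixed(2)); // "2.69" — one 100x-slower test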
