
Babel: thinking about how to make the benchmark more representative #27

Open · 1 of 3 tasks

hzoo opened this issue Nov 16, 2017 · 1 comment

hzoo commented Nov 16, 2017

Ref #24 (comment)

These could all similarly apply to babylon itself.
FYI: we need the bundled/concatenated, uncompiled version.

The current benchmark:

This benchmark runs the Babel transformation logic using the es2015 preset on a 196KiB ES2015 module containing the untranspiled Vue bundle. Note that this explicitly excludes the Babylon parser and only measures the throughput of the actual transformations. The parser is tested separately by the babylon benchmark below.

name: "vue.runtime.esm-nobuble-2.4.4.js",

Right now it only tests an ES2015 module (albeit a 194kb one 👍), but that may not be representative of what future code will look like, so we should think about possible changes to this benchmark:

  • Since the yearly presets like preset-es2015 are deprecated, we should run with @babel/preset-env now (see the first sketch after this list). The current options are:
    options: { presets: ["es2015"], sourceType: "module" }
    • I guess we might want to think about different targets, but that just runs less of Babel, so I'm not sure how useful that is for a benchmark. (targets: default/IE, current Node, current Chrome, etc.)
  • Similarly, we could test an ES3/ES5 file as a good baseline for the performance of traversing the whole program. (The shortcut Babel could take is to print the file out unchanged if it finds nothing to transform, kind of like how engines cheat, but we won't do that.)
    • I just realized we could simply run Babel on the output of the original benchmark, since that will be ES5 anyway (see the second sketch after this list)?
  • The payload should exercise other kinds of code that people are writing/using with Babel, such as:
    • ES2017+ and Stage X proposals (we could use Babel itself for this if we bundled all of it untranspiled, but there are probably other projects we could use)
    • JSX/Flow/TypeScript
  • There are other scenarios, like compiling a minified source, but people shouldn't be doing that?
    • Babel operates per file, so realistically it compiles a lot of smaller files.
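
As a sketch of what the updated options could look like with @babel/preset-env (this assumes the Babel 7 @babel/core API, and the targets shown are purely illustrative, not a decided configuration):

```js
const fs = require("fs");
const babel = require("@babel/core");

const code = fs.readFileSync("vue.runtime.esm-nobuble-2.4.4.js", "utf8");

// Hypothetical replacement for { presets: ["es2015"] }; the ie target
// is a placeholder for whatever targets we settle on.
babel.transformSync(code, {
  presets: [["@babel/preset-env", { targets: { ie: 11 } }]],
  sourceType: "module",
});
```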
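
And a sketch of the ES5 baseline idea, feeding Babel's own output back through the compiler (again assuming the Babel 7 API):

```js
const fs = require("fs");
const babel = require("@babel/core");

const source = fs.readFileSync("vue.runtime.esm-nobuble-2.4.4.js", "utf8");

// First pass: compile the ES2015 module down to ES5.
const es5 = babel.transformSync(source, {
  presets: ["@babel/preset-env"],
  sourceType: "module",
}).code;

// Second pass: re-compile the ES5 output. The traversal still walks the
// whole program but finds (almost) nothing to change, which makes it a
// baseline for whole-program throughput.
babel.transformSync(es5, {
  presets: ["@babel/preset-env"],
  sourceType: "script",
});
```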
hzoo changed the title from "Babel: suggestions for benchmark payloads" to "Babel: thinking about how to make the benchmark more representative" on Nov 16, 2017

bmeurer commented Nov 17, 2017

Those are great suggestions, Henry, thanks a lot! The current version is mostly a one-shot prototype. Ideally the benchmark payloads would be created and driven by experts like you, who know what a representative workload for Babel looks like.

mathiasbynens added a commit that referenced this issue Dec 11, 2017
The yearly presets like preset-es2015 are deprecated, and the best
practice is to use @babel/preset-env nowadays.

Ref. #27.
Closes #30.