Speed up ESLint #16962
Comments
I would like to invite the author of the article to participate in the discussion. Ping @marvinhagemeister.
Thanks for the ping. Most of the improvements relating to
Here's what I think the task list is, so we can just check these off as we go:
The last recommendation, to drop selector syntax, would be a significant breaking change and not one we could introduce any time soon. I think a better approach is to investigate creating a tool that can take a rule that uses query strings and regenerate it so that it doesn't use query strings, or maybe something simpler: a tool that you can pass a bunch of query strings to, and it will generate a rule scaffold for you. In any event, I think creating a tool to generate more performant JS code, instead of having selectors in the final JS, would be a much more palatable choice.
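To illustrate the scaffolding idea, here is a minimal sketch of a generator that takes plain node-type selectors and emits the skeleton of a rule using direct visitor methods instead of selector strings. The function name and output shape are hypothetical, not part of any real ESLint tooling, and it only handles bare node types (selectors with combinators or attributes would need real codegen):

```javascript
// Hypothetical sketch: turn plain node-type selectors into a rule
// scaffold that uses visitor methods directly, with no esquery strings.
function generateRuleScaffold(selectors) {
  const visitors = selectors
    .map((s) => `    ${s}(node) {\n      // TODO: port the logic that matched "${s}"\n    },`)
    .join("\n");

  return [
    "module.exports = {",
    "  create(context) {",
    "    return {",
    visitors,
    "    };",
    "  },",
    "};",
  ].join("\n");
}
```

A rule author could run this once over their selector list and then fill in the generated visitor bodies by hand.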
I took a stab at the fast path for query selectors and didn't see any significant performance improvement, either in our standard perf test or just running ESLint on our own codebase.
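For readers following along, a minimal sketch of what such a fast path could look like, assuming a matcher factory sits between rule listeners and esquery. The names here are illustrative, not ESLint's actual internal API:

```javascript
// Hypothetical sketch: bail out to a direct type comparison for bare
// "NodeType" selectors instead of invoking the full esquery matcher.
const SIMPLE_NODE_TYPE = /^[A-Za-z]+$/u;

function createSelectorMatcher(rawSelector, esqueryMatches) {
  if (SIMPLE_NODE_TYPE.test(rawSelector)) {
    // Fast path: a bare node type such as "Identifier" reduces to a
    // single string comparison against node.type.
    return (node) => node.type === rawSelector;
  }

  // Slow path: defer to esquery for anything with combinators,
  // attributes, pseudo-classes, etc.
  return (node) => esqueryMatches(node, rawSelector);
}
```

Since most rules already use bare node types as listener keys, the fast path fires often, which is also why the measurable gain can be small: the string comparison it replaces was never the expensive part.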
These are some great performance improvements:
I went ahead and reimplemented
That part of the blog post mentions JSDoc rules that ESLint uses when linting its own codebase (
Thanks @mdjermanovic, this method shows a clear performance improvement for
I'm not familiar with the internals of ESLint, but I'm wondering: can we execute different rules in parallel on different cores or in separate processes? If this is feasible, it seems like easy, low-hanging fruit for speeding up ESLint.
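One complication with parallelizing rules: ESLint merges all rule listeners into a single AST traversal per file, so splitting rules across cores would mean repeating that traversal in every worker. The more common unit of parallelism is the file, e.g. partitioning the file list across a pool of worker threads or child processes. A minimal sketch of the partitioning step, with a hypothetical function name and a simple round-robin strategy (real implementations would likely balance by file size):

```javascript
// Hypothetical sketch: split a lint workload across N workers by
// assigning files round-robin. Each chunk would then be handed to a
// worker_threads Worker or child process that runs ESLint on it.
function partitionFiles(files, workerCount) {
  const chunks = Array.from({ length: workerCount }, () => []);
  files.forEach((file, i) => {
    chunks[i % workerCount].push(file);
  });
  return chunks;
}
```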
Closing this issue as the planned work has been completed.
This is an issue to define tasks to improve ESLint performance per the recommendations from the "Speeding up the JavaScript ecosystem - eslint" blog post:
https://marvinh.dev/blog/speeding-up-javascript-ecosystem-part-3/
I was able to extract five recommendations from the blog post that relate to eslint core or eslint dependencies. Please add more if I missed something.
- [ ] Token store's `utils.search` should use a binary search algorithm. We could implement our own, or find a library. In fact, this used to be a binary search before Chore: Remove lodash #14287. The performance impact of switching to `Array#findIndex` was discussed in Chore: Remove lodash #14287 (comment), but at the time performance tests did not show significant differences. Regardless, I think we should reintroduce binary search here.
- [ ] Refactor the code to avoid calling the mentioned `utils.search` / instantiating `BackwardTokenCommentCursor` millions of times. This suggestion requires further analysis. I'm not sure the premise that we can avoid this because "we should know exactly where we are" applies here, because `BackwardTokenCommentCursor` is used by methods that take an arbitrary node/token, such as `SourceCode#getTokensBefore`.
- [x] Several points on improving `esquery` performance. This has already been implemented in Optimize hot code paths estools/esquery#134. Our Single File and Multi Files performance tests show ~8% overall performance improvement. 🚀
- [ ] Fast path for simple selectors ("Bailing out early" section in the blog post). The suggestion is to handle the simplest selectors in the form of `"NodeType"` manually, without using esquery. We could definitely give this a try.
- [ ] "Rethinking selectors" section in the blog post. I'm not sure what the recommendation is here. Is it to drop declarative selectors in favor of JS functions that would be provided by rules?
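For reference, a minimal sketch of the binary search the first item proposes, assuming the token list is sorted by start offset. The function and parameter names are illustrative, not ESLint's actual internals:

```javascript
// Hypothetical sketch: binary search over tokens sorted by range start,
// replacing a linear Array#findIndex scan. Returns the index of the
// first token whose start offset is >= location (tokens.length if none).
function search(tokens, location, getStart = (token) => token.range[0]) {
  let low = 0;
  let high = tokens.length;

  while (low < high) {
    const mid = (low + high) >>> 1; // integer midpoint, no overflow
    if (getStart(tokens[mid]) < location) {
      low = mid + 1;
    } else {
      high = mid;
    }
  }
  return low;
}
```

This turns each lookup from O(n) into O(log n), which matters when cursor methods perform it millions of times on large files.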