New: Add suggestions API #12384
Conversation
I think we need to add some tests to verify that suggestions will not be included in formatter output.
Just to throw in an early wrinkle: should we consider supporting something similar to messageId strings for suggestion descriptions, in order to allow for i18n/l10n of the suggestion descriptions?
I probably should have thought of this in the RFC stage, sorry.
@platinumazure Oh, good point. Yes, I think we should.
Just a heads up for anyone who is interested in this work. I am currently traveling internationally, and it looks like I won't have time to work on this while I'm gone. If anyone is motivated to move this forward, go for it. Otherwise, I will try to get back to it when I'm available again in about 3 weeks.
data: { character },
suggest: [
    {
        messageId: "removeEscape",
I've implemented suggestions support for messageIds. Do we want to do it this way, or should it be a different key and different format in the meta info? For example, I could easily change this to be descriptionId and put the associated descriptions in meta.suggestionDescriptions or something, instead of overloading meta.messages and messageIds. Thoughts?
Great question!
I don't have a problem with reusing meta.messages itself, but I would be curious to know if other team members think it could be confusing.
I see meta.messages as a general-purpose bucket for i18n, so I think it should be fine to just reuse it.
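To make the discussion concrete, here is a minimal sketch of what reusing meta.messages for suggestion descriptions could look like in a rule. The identifiers and the reporting logic are illustrative only, not the final API or the actual no-useless-escape implementation:

```javascript
// Sketch: a rule whose problem message and suggestion description both
// live in meta.messages, each referenced by a messageId, so both can be
// translated through the same i18n mechanism.
const rule = {
    meta: {
        type: "suggestion",
        messages: {
            unnecessaryEscape: "Unnecessary escape character: \\{{character}}.",
            removeEscape: "Remove the `\\`. This maintains the current functionality."
        }
    },
    create(context) {
        return {
            Literal(node) {
                // Detection logic elided; the shape of the report call is
                // the point: each suggest entry carries a messageId that
                // resolves against the same meta.messages table.
                context.report({
                    node,
                    messageId: "unnecessaryEscape",
                    data: { character: ";" },
                    suggest: [{
                        messageId: "removeEscape",
                        fix: fixer => fixer.removeRange([node.range[0] + 1, node.range[0] + 2])
                    }]
                });
            }
        };
    }
};

module.exports = rule;
```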
type: "Literal",
suggestions: [{
    messageId: "removeEscape",
    output: "var foo = /;/;"
I pushed up this work-in-progress to get some feedback on the approach here. One option for these tests is to do something like what I have in the above test, where we assert against the suggestion fix data, e.g.:
fix: { range: [11, 12], text: "" }
One problem with this is that it is hard to look at and verify that it will work properly.
An alternative approach that I was exploring was to use something like this output key, where we define the result of applying the suggestion to the original code. This seems a lot easier to write good tests with, but it doesn't actually exist on the suggestion data, so it could potentially be confusing. Thoughts?
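Side by side, the two assertion styles under discussion could look like this (the test-case shapes are illustrative of the trade-off, not a finalized RuleTester schema):

```javascript
// Fix-based form: asserts on the suggestion's internal fix object, which
// exposes raw source offsets the test author has to compute by hand.
const fixBasedCase = {
    code: "var foo = /\\;/;",
    errors: [{
        suggestions: [{
            messageId: "removeEscape",
            fix: { range: [11, 12], text: "" } // offsets into the source text
        }]
    }]
};

// Output-based form: states the resulting source text directly, so the
// test is readable without knowing the fix object's structure.
const outputBasedCase = {
    code: "var foo = /\\;/;",
    errors: [{
        suggestions: [{
            messageId: "removeEscape",
            output: "var foo = /;/;" // code after applying the suggestion
        }]
    }]
};
```

Applying the fix-based case's range/text splice to the code yields exactly the output-based case's output, which is why the two forms are interchangeable in principle but very different to read.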
Hmm, that's a good question. I do think the output approach is easier to deal with, but we don't have any way of getting that data (unless we run suggestions through the auto-fix code inside the tester)... So I think we'll have to stick with the ranges, at least for now. We can change it later if we find a better approach.
I'd really like to make sure the user experience is good here; otherwise, I'm worried nobody would use the feature. So I'm in favor of getting the output to work (and having RuleTester run the code through the autofixer as needed). It seems like this shouldn't cause regressions, because it would only happen when someone adds a suggestion assertion object to the test case.
Let me know if I'm missing something.
Alright, I've added support for both options. See the Testing Suggestions section in docs/developer-guide/nodejs-api.md for docs.

I'm running SourceCodeFixer.applyFixes when a suggestion output is specified, which helped me catch a bug in my original implementation of suggestions for the no-useless-escape rule that I definitely would have missed without this.
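As a rough sketch of what computing that output amounts to (SourceCodeFixer.applyFixes also handles multiple fixes, ordering, and BOM concerns; this shows only the single-fix text splice):

```javascript
// Apply a single suggestion fix object to the original source text.
// fix.range is a [start, end] pair of character offsets; the text between
// them is replaced by fix.text.
function applySuggestion(code, fix) {
    const [start, end] = fix.range;
    return code.slice(0, start) + fix.text + code.slice(end);
}

// e.g. removing the useless escape in `var foo = /\;/;`:
const result = applySuggestion("var foo = /\\;/;", { range: [11, 12], text: "" });
// result === "var foo = /;/;"
```

This is why an output assertion can be verified mechanically in RuleTester: the tester applies the suggestion's fix to the test case's code and compares the spliced result against the declared output.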
Just getting around to working on this again. I pushed up some work-in-progress commits to get some feedback on the approach. @platinumazure, in response to your comment about "supporting something similar to messageId strings" […]
Sorry for the delay, I've been very busy and haven't had a lot of time for open source.

I'm kind of concerned about the tests. Why wouldn't we want to use text-based output and just run the autofixer against the suggestion's fix object? It would be much easier for users to test, and it would be consistent with how autofix is tested outside of suggestions. If the implementation has switched recently to use a text-based approach, that's great (and please see my comment about not having time for open source 😅). Otherwise, I guess I would like to understand what benefit we get from requiring users to know the structure of the fix object (and effectively making it a public API) to write tests for suggestions.

EDIT: I see that this implementation supports text output verification. Hurray! I see that it also supports checking the fix object itself. As I stated above, I'm mildly opposed to this because it effectively adds the fix object into the public API, and I don't think it really adds any value for users when most users should be used to output-based testing anyway.

I appreciate your hard work on this and your willingness to respond to feedback, and I am grateful for your patience as I haven't been as available as I would like.
type: "Literal",
suggestions: [{
    messageId: "removeEscape",
    output: "var foo = /#/;"
I'm kind of concerned about the tests. Why wouldn't we want to use text-based output and just run the autofixer against the suggestion's fix object? It would be much easier for users to test and it would be consistent with how autofix is tested outside of suggestions.
If the implementation has switched recently to use a text-based approach, that's great (and please see my comment about not having time for open source 😅). Otherwise, I guess I would like to understand what benefit we get from requiring users to know the structure of the fix object (and effectively making it a public API) to write tests for suggestions.
@platinumazure, I have set up support for testing the output of applying the suggestion, and that is what I'm doing in these tests (as seen above). As noted here, this PR also currently supports testing the shape of the suggestion fix object. If you think we should only provide the output-based testing option, and not the ability for users to test the fix object, I'm fine with removing that option.
Thanks for commenting even though you have limited time. Let me know if there is anything else I can try to clarify.
Oh, I just saw your edited comment above. That seems reasonable. I will go ahead and remove the option to test the shape of the fix object.
Thanks so much! (I wasn't sure whether to edit or leave a second comment.)
Hoping to have time to do a full review this weekend.
Alright, I've removed support for testing the suggestion fix object.

Good idea to avoid features that we are not sure about. It is much easier to add them later than to add them now and then try to remove them after folks have started using them.
From my perspective, this PR is in good shape. I would love for some other members of @eslint/eslint-team to review it as well, but I think it's good enough. If no new reviews come in within a few days, I will go ahead and merge it. @wdoug, thank you for the awesome job introducing this feature!
There was a lot to this, but it looks good to me - thanks @wdoug!
LGTM. Excited to see how our integrations utilize this new feature. Thank you for implementing this!
}
},
{
    messageId: "escapeBackslash",
One thought going forward - do we want to create a convention that indicates which messages are for suggestions?
Thanks for helping me work through this, @ilyavolodin and @platinumazure, and thanks y'all for the detailed RFC and docs that helped guide me. One other question: is there an expectation of when this might end up going out in a release?
This should get in today's release!
What is the purpose of this pull request? (put an "X" next to item)
[ ] Documentation update
[ ] Bug fix (template)
[ ] New rule (template)
[ ] Changes an existing rule (template)
[ ] Add autofixing to a rule
[ ] Add a CLI option
[X] Add something to the core
[ ] Other, please explain:
What changes did you make? (Give an overview)

Updated the no-useless-escape rule to include the suggestions described here.

Is there anything you'd like reviewers to focus on?

As a new contributor implementing a core addition, everything should probably be scrutinized closely. In particular, I would like some advice on tests, including additional tests that should be written.