feat: support for testing invalid rule schemas and runtime exceptions #103

Merged
merged 9 commits into from Mar 20, 2023

Conversation

@bmish commented Jan 5, 2023

Summary

Enable rule authors to write unit tests for invalid rule schemas and runtime exceptions in rules.
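For illustration, here is a rough sketch of what such a test case might look like, assuming the `fatal` category and the `error` object shape (`name`/`message`) discussed later in this thread. The rule name, schema, and exact message are placeholders, and the final API may differ:

    const { RuleTester } = require("eslint");
    const rule = require("../rules/my-rule"); // hypothetical rule whose schema forbids unknown options

    const ruleTester = new RuleTester();

    ruleTester.run("my-rule", rule, {
        valid: [],
        invalid: [],
        // New category proposed by this RFC: test cases that are expected to
        // fail with an exception rather than produce lint messages.
        fatal: [
            {
                // `code` can be omitted when only the options are being validated.
                options: [{ nonExistentOption: true }],
                error: {
                    name: "SchemaValidationError",
                    message: 'Value {"nonExistentOption":true} should NOT have additional properties.'
                }
            }
        ]
    });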

Related Issues

@ljharb left a comment
yes please! it'd also be great to have test cases that fail to parse, intentionally

@mdjermanovic added the Initial Commenting label and removed the triage label on Jan 5, 2023
@jfmengels commented Jan 5, 2023

I like this 👍

Some prior art here: in elm-review, rules can't crash (there is no such thing as a runtime exception in Elm). The tool doesn't validate a schema for you either (the type checker does most of that work). But it can happen that some values pass the type checker yet are not valid inputs. For instance, a rule might take a string as an argument but be given an empty string, which would be an invalid input.

This is how the rule would report that there is a configuration error: by returning a `Rule.configurationError` instead of a normal rule:

rule : String -> Rule
rule functionName =
    if not (isValidFunctionName functionName) then
        Rule.configurationError "RuleName"
            { message = "`" ++ functionName ++ "` is not a valid function name"
            , details =
                [ "I was expecting the function name to be a valid Elm function name."
                , "When that is not the case, I am not able to function as expected."
                , "<more explanations on the constraints for instance>"
                ]
            }

    else
        Rule.newModuleRuleSchema "RuleName" ()
            -- ...the configuration of the rule, with the different visitors we want to have
            |> Rule.fromModuleRuleSchema

And we can write some tests for this as well:

someTest : Test
someTest =
    -- using a regular Elm test syntax
    test "should report a configuration error when function name is empty" <|
        \() ->
            rule ""
                |> Review.Test.expectConfigurationError
                    { message = "Configuration argument should not be empty"
                    , details =
                        [ "I was expecting the function name to be a valid Elm function name."
                        , "When that is not the case, I am not able to function as expected."
                        , "<more explanations on the constraints for instance>"
                        ]
                    }
whereas a regular error test looks more like this:
someTest : Test
someTest =
    test "should report an error when..." <|
        \() ->
            """module ModuleA exposing (a)
a = 1"""
                |> Review.Test.run rule
                |> Review.Test.expectErrors
                      [ Review.Test.error
                          { message = "Some message"
                          , details = [ "Some details" ]
                          , under = "a = 1"
                          }
                      ]

Error messages

The testing framework forces you to provide the expected error message (for configuration errors, but for all the other errors as well). Not requiring it would create the possibility that the test fails for a different or unexpected reason. So I would recommend requiring the expected error message (and having the test failure output make it easy to copy/paste).

I don't know how I feel about a still helpful but less precise "CUSTOM_RULE_ERROR". If the error message will be shown to the user and is controllable by the rule author (you could catch the error and re-throw it with a nicer error message), then I would really recommend that authors have to provide it. Being able to see it in the test source code means you can see the error that will be presented to the user, and figure out how it could be better presented or made more understandable.

For SCHEMA_VALIDATION_ERROR, I imagine the rule author doesn't control the error message, but requiring the error message instead of using this generic id might catch some problems where you are getting a different or unexpected validation error, for instance if the schema changes and you forget to update one of the tests.

I imagine you'll find me a bit pedantic on this but it has worked really well for me and elm-review.

Separate category

In elm-review, we don't (can't) provide source code for this kind of test API, which fits with your suggestion 👍

I concur with @mdjermanovic that it makes more sense to have a separate category. In a way, elm-review also forces these configuration-error tests to be separate, and that has worked well for me.

Also, to be clear, if the rule has a configuration error, then the testing framework tells you as much, forcing you to either fix the problem or assert that it reports a configuration error like I've shown, and that's the only way.

I hope this gives some helpful tips. Anyway, I like the proposal, I think it goes in the right direction 👍

@nzakas left a comment

I like this idea -- it's a use case we haven't really thought of before but I can see the value in having a way to test if a rule is throwing an error that breaks out of the normal linting flow.
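As a concrete (hypothetical) illustration of that use case, here is a contrived rule whose visitor throws at runtime; a `fatal`-style test case could then assert on the thrown error's name and message. How exactly Linter surfaces such an error is an implementation detail not specified here:

    // Contrived example rule: throws a runtime exception for a particular input.
    module.exports = {
        meta: {
            type: "problem",
            schema: []
        },
        create(context) {
            return {
                Identifier(node) {
                    if (node.name === "boom") {
                        // Simulates an unexpected runtime failure inside the rule.
                        throw new TypeError("unexpected identifier state");
                    }
                }
            };
        }
    };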

designs/2023-test-rule-errors/README.md
@nzakas left a comment

I'm happy with the general direction of this, and I like fatal as a separate category of test.

@bmish commented Jan 25, 2023

I've simplified the "implementation" section of the RFC based on my learnings from creating a working implementation of the proposed feature in this draft PR:


Differences between `fatal` test cases and `invalid` test cases:

- `code` is not required (the code may be irrelevant when testing options)
Member
`code` is indeed irrelevant when testing whether the options will fail schema validation, as these validations are run before using Linter.

However, when testing anything else, if code is not provided (i.e., it's undefined), Linter will throw:

TypeError: Cannot read properties of undefined (reading 'text')
      at Linter._verifyWithoutProcessors (lib\linter\linter.js:1307:37)
      at Linter.verify (lib\linter\linter.js:1496:57)
      at runRuleForItem (lib\rule-tester\rule-tester.js:748:35)
      at testFatalTemplate (lib\rule-tester\rule-tester.js:1116:28)
      at Context.<anonymous> (lib\rule-tester\rule-tester.js:1175:29)
      at processImmediate (node:internal/timers:466:21)

Perhaps the code property should be required when error.name in the test case is not set to "SchemaValidationError"?

Author

In my draft PR (https://github.com/eslint/eslint/pull/16823/files#diff-1d5e36429b1a08e3baa8311c3c702f2edf5e3c5e542771dcd6a6bef4c43727f5R1173), I solved this issue by using a placeholder (Test Case #x) when there's no name nor code.

fatal.name || fatal.code || `(Test Case #${index + 1})`

I prefer this over requiring the user to come up with a name anytime they want to omit code, which feels like it would be a burden.

Is that okay?

Member

This solves the test case name (the first argument of `it`). I was thinking about the code we are passing to linter.verify.

Member

I think code can be an empty string.

Member

I think code can be an empty string.

If code property isn't specified, we'll pass an empty string to linter.verify? Sounds good to me.

Member

Yup, exactly.
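A minimal sketch of the agreed-upon behavior inside the rule tester (the variable names `item`, `config`, and `filename` here are assumed, not taken from the actual implementation):

    // Default `code` to an empty string so that schema-only test cases
    // don't need to provide source text before calling Linter#verify.
    const code = typeof item.code === "string" ? item.code : "";
    const messages = linter.verify(code, config, filename);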

Member

I think this is the last open question we have for this RFC.

@bmish what do you think about this? If you agree with passing an empty string to linter.verify() when the code property is not provided in the test case, can you update the RFC document with this detail?

Author

Will address remaining issues in the next week.

Author

I added a note about using an empty string for code.

Comment on lines 201 to 203
- A user testing that their rule schema catches invalid options may end up including the message from JSON Schema in their test case (e.g. `Value "bar" should be boolean.`). This could make it more difficult for ESLint to upgrade its version of ajv / JSON Schema in the future, as an upgraded version could tweak messages and necessitate updating any test cases that include the message. This is a relatively minor concern, as it's unlikely that messages will change often, and it's easy to update test cases if they do. Two factors that could mitigate this:
- If [snapshot testing](https://github.com/eslint/eslint/issues/14936) is implemented in the future, it would become possible to fix test cases with a single command.
- We can encourage the use of `error.name` property (discussed in [Detailed Design](#detailed-design)) for the majority of cases that don't care about including the exact exception text.
Member

I think we should document that these error messages are not guaranteed to be stable and can change even in non-major versions of ESLint, and suggest using error: { name: "SchemaValidationError" }.
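For instance (a sketch assuming the test-case shape discussed in this RFC; the rule and option names are placeholders), asserting only the error name keeps the test valid even if the underlying validation message wording changes:

    const { RuleTester } = require("eslint");
    const rule = require("../rules/my-rule"); // hypothetical rule with a boolean `checkFoo` option

    new RuleTester().run("my-rule", rule, {
        valid: [],
        invalid: [],
        fatal: [
            {
                options: [{ checkFoo: "bar" }],
                // Asserting only the name avoids coupling the test to the exact
                // message wording produced by schema validation (ajv).
                error: { name: "SchemaValidationError" }
            }
        ]
    });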

@bmish commented Feb 14, 2023

I'm happy to document this. Note that the error message changing should be rare, only if:

  • The user changes an error message in their own rule
  • An upgraded version of ajv changes the validation text

Member

  • The user changes an error message in their own rule

I was thinking of adding this note only for schema validation errors, as those are the messages we produce.

  • An upgraded version of ajv changes the validation text

That should be really rare, but we still can't guarantee it won't happen.

Member

Seems we agree on this. Can you add a note to the RFC document so that we don't forget this? Perhaps in the Documentation section.

Author

I added a note about documenting this in the "Drawbacks" section where I describe this issue.

designs/2023-test-rule-errors/README.md
"options": [{ "checkFoo": true, "nonExistentOption": true }],
"error": {
"message": "Value {\"checkFoo\":true,\"nonExistentOption\":true} should NOT have additional properties.",
"name": "SchemaValidationError"
@bmish commented Feb 14, 2023

@mdjermanovic since you commented recently, do you think SchemaValidationError is the ideal name here? It's a bit lengthy, but it describes well that the rule was provided options that do not conform to the rule's schema. Alternatively, we could use SchemaError or OptionsError. ValidationError would be ambiguous regarding what it's referring to. My goal is that we choose a good name that will make sense even if we add additional types of testable errors in the future.

Member

SchemaValidationError sounds perfect to me.

@nzakas commented Mar 8, 2023

@bmish I think we are close on this. Can you update with the last batch of suggestions?

@nzakas left a comment

LGTM. Given the age and type of updates applied, moving to Final Commenting pending @mdjermanovic's final approval.

@nzakas added the Final Commenting label and removed the Initial Commenting label on Mar 15, 2023
@mdjermanovic left a comment

LGTM, thanks!

@nzakas merged commit a6a1942 into eslint:main on Mar 20, 2023
2 checks passed
bmish added a commit to bmish/eslint-rfcs that referenced this pull request May 21, 2023
* main:
  chore: Add Mastodon status post to workflow (eslint#110)
  feat: ESLint Language Plugins (eslint#99)
  feat: support for testing invalid rule schemas and runtime exceptions (eslint#103)
  feat!: check for parsing errors in suggestion fixes (eslint#101)