
Start warning when using undesirable chains #996

Closed
keithamus opened this issue Jun 22, 2017 · 5 comments

@keithamus (Member)

Throughout our docs we have commentary on best practices for assertions. We also often come across edge cases where the user has combined redundant keywords with certain values (see #994 (comment) as an example).

In these cases we should look to log warnings to the user - they're not strict failures, but they are code smells or potential sources of future regressions. We should ensure that our users have the best quality codebase, and have the power to do that through introspecting their function calls. In those instances, it might be worth logging a descriptive message about what they're doing - and why we consider it not ideal.

This is here just as a discussion point to see how viable this is and what kind of implementation we'd want, if it is viable.

@meeber (Contributor)

meeber commented Jun 22, 2017

Interesting thought. I think it'd be important that such a feature not have any false positives though, which may be challenging, especially since some assertions change the object but do not clear the flags. For example, expect(target).deep.include("mystring") doesn't make sense, but maybe expect(target).deep.<some-assertion>.that.includes("mystring") does make sense. In both cases though, the includes assertion sees the deep flag as being set.

@keithamus (Member, Author)

Yes, I think it'd be useful to expose the full chain to assertions in some way - so that they can introspect the whole command and not just which flags have been set. We could use this to drive a mechanism that - combined with argument types - generates a set of warnings for nonsensical chains.

@ScottFreeCode

> In these cases we should look to log warnings to the user - they're not strict failures, but they are code smells or potentials for future regressions. We should ensure that our users have the best quality codebase, and have the power to do that through introspecting their function calls. In those instances, it might be worth console.logging a descriptive message about what they're doing - and why we consider it to be not ideal.

I'd love to see a general system for warning about dangerous (even if not strictly disallowed/erroneous) assertions -- not just language chains, but also things like "asserting not with a have.property value comparison -- should it pass if the property is there with another value?" and other things that the documentation notes as potentially confusing. So 👍 to the general idea here!

That being said, for Mocha, console.log will dump these in the middle of the reporter output with no (sane) possibility to redirect and separate them, as the reporters print to standard output as well. console.error is marginally better (as users who are aware of the need can redirect standard error).

However, if I want to "fix" these smells (and why else would I need to know about them?), having an option to raise them to the level of true test failures would be the best way to get them visible and/or enforceable. The fact that they're only probably dangerous rather than strictly erroneous doesn't change that.

On the other hand, having another option to control where they're output could be infinitely useful as well -- if I want to put them in a log file instead of standard output/error while letting other code still use standard error, for instance, or if I want to apply filters to the warnings I want to ignore, e.g.: "I don't plan on updating Ancient Test Suite X unless I have to, so ignore ones from that folder or file glob," or, "I think these particular sentences read correctly and want to whitelist all uses of them while still alerting about newly introduced warnings." (On the other hand, being able to apply such filters and use throw-as-errors mode advocated above would presumably require the filtering functionality to be separate from the control of logging output functionality...)

That's my two cents' worth on the idea -- it's definitely worth pursuing, but (personally anyway) I think it's also worth coming up with something more robust and useful than just writing to one of the standard output streams (even if a bit more to implement -- I guess I'd have to be more familiar with the Chai codebase to tell how much more). 😸

@keithamus
Copy link
Member Author

Yes I think exposing a pointer or config option for how these are logged or raised could be useful. Either:

```js
chai.config.warnOnBadAssertion = true    // uses console.error or some mechanism
chai.config.warnOnBadAssertion = false   // switches off entirely
chai.config.warnOnBadAssertion = 'error' // throws errors for these
```

or something like:

```js
chai.config.warnOnBadAssertion = console.log  // uses console.log
chai.config.warnOnBadAssertion = console.warn // uses console.warn
chai.config.warnOnBadAssertion = (message) => { throw new AssertionError(message) } // custom function
chai.config.warnOnBadAssertion = someMochaFriendlyReporter
```

@keithamus (Member, Author)

This is now on our roadmap: https://github.com/chaijs/chai/projects/2
