expose warnings= to pytester assert_outcomes() #8952
Conversation
closing this for now, want to discuss the best ways to add more usability for asserting warnings in pytester; it may not be via `assert_outcomes()`.
testing/test_pytester.py (Outdated)

            pass
        """
    )
    result = pytester.runpytest(p, "--strict")
Any preferred way to generate the warnings here? Using a deprecated feature will create additional work later for someone when it is turned into an error, I guess; I'd like to minimise that if the test can avoid it.
Thanks @symonk!

Besides my comment, please make sure to add an `8952.feature.rst` changelog entry. 👍
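(For reference, a hedged sketch of what that towncrier entry might contain; the exact wording is an assumption, not the entry that actually landed:)

changelog/8952.feature.rst:

    :meth:`RunResult.assert_outcomes() <_pytest.pytester.RunResult.assert_outcomes>` now accepts a ``warnings`` argument to assert the number of warnings shown in the run summary.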
testing/test_pytester.py (Outdated)

            pass
        """
    )
    result = pytester.runpytest(p, "--strict")
Hmm, let's improve this by making the test itself generate the warning:

    import warnings

    def test_with_warning():
        warnings.warn(UserWarning("some custom warning"))

Let's not rely on `--strict` being deprecated, because then this test will break once we remove `--strict` and we'll need to adapt it anyway.
thanks @nicoddemus, see my other comment; you read my mind :)
Ahh we posted the comments at the same time, it seems. 😁
When I use your approach with pytester, the test fails without a warning (failed=1 only). Is there a way to get the outcome from running a test without going through pytester.runpytest()?

edit: `-W ::UserWarning` can sort it, I think.
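(A minimal sketch of that idea, assuming the same inner test file as in the diff above and the filter spelling `default::UserWarning`; the exact filter string is an assumption, not taken from the thread:)

    # Pass a warning filter on the pytest command line so the warning is
    # reported in the summary rather than escalated to an error.
    result = pytester.runpytest(p, "-W", "default::UserWarning")
    result.assert_outcomes(passed=1, warnings=1)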
Ahh right, what happens is that we turn all warnings into errors when running pytest's own test suite.

You can override that by applying `@pytest.mark.filterwarnings("default")` to `test_pytester_assert_outcomes_warnings`:
    @pytest.mark.filterwarnings("default")
    def test_pytester_assert_outcomes_warnings(pytester: Pytester) -> None:
        p = pytester.makepyfile(
            """
            import warnings

            def test_with_warning():
                warnings.warn(UserWarning("some warning"))
            """
        )
        result = pytester.runpytest(p)
        result.assert_outcomes(passed=1, warnings=1)
every day is a school day, thanks
Cool, thanks!
Feel free to squash/merge yourself once everything passes. 👍
Evening all, while investigating some other areas I noticed pytester cannot actually assert outcomes on `warnings=`, so I've decided to bolt that on. I'm not sure if there is a good reason why this wasn't already a feature; maybe because of shadowing the builtin `warnings` module with its naming, or because it might create a bit of unpredictability in our tests? Might have some test fallout, will have a look as soon as I can.

edit: Actually there might be a very good use-case why this is NOT the default..

edit2: Reopening as it might be an OK feature?

closes #8953
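(For context, a hedged sketch of how the warning count can already be approximated in pytester without the new `warnings=` argument, by matching the terminal summary line; the glob pattern is an assumption about pytest's summary output:)

    # Match the "1 warning" part of the terminal summary instead of
    # counting outcomes directly.
    result = pytester.runpytest(p)
    result.stdout.fnmatch_lines(["*1 warning*"])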