
Fix handling of unexpected success results #190

Merged
merged 1 commit into master from use-testtools_wasSuccessful on Aug 7, 2018

Conversation

mtreinish (Owner)

This commit addresses a bug when unexpected success results are emitted
by a test suite. stestr was relying on the wasSuccessful() method of the
StreamResult class from testtools, which doesn't properly account for
unexpected success. So when stestr was run on a test suite it wouldn't
properly handle tests with an unexpected success result and would thus
treat the run as successful. This addresses the issue by adjusting our
testtools API usage to call the wasSuccessful() method from the
TestResult class instead, which has the correct behavior.

An issue has been opened in testtools (testing-cabal/testtools#273)
regarding the accounting error with StreamResult.wasSuccessful();
if/when a fix is introduced into a released testtools, we can investigate
switching these calls back to use that function. But for the time being
this mitigates the issue.

Fixes #189
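
For context, a minimal sketch (not part of this PR) of the behavior being fixed, assuming the testtools StreamSummary API with its errors/failures/unexpectedSuccesses attributes and the subunit status string 'uxsuccess':

```python
# Illustrative only: feed a single "uxsuccess" status (a test marked as
# an expected failure that actually passed) into a testtools
# StreamSummary and check how the summary reports it.
import testtools

summary = testtools.StreamSummary()
summary.startTestRun()
summary.status(test_id='example.TestCase.test_xfail_that_passes',
               test_status='uxsuccess')
summary.stopTestRun()

# The unexpected success is recorded on the summary object, but
# StreamSummary.wasSuccessful() still reports the run as successful,
# which is why stestr treated such runs as passing.
print(summary.unexpectedSuccesses)
print(summary.wasSuccessful())
```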

@coveralls commented Aug 3, 2018


Coverage increased (+0.06%) to 65.369% when pulling a96c158 on use-testtools_wasSuccessful into 2b1b62a on master.

@mtreinish (Owner, Author)

Ugh, python2 is enforcing the object's type for the self argument, while python3 will accept a different type being passed in. We'll have to figure out a different way of doing this, I guess; maybe just build a helper that does manually what testtools.TestResult.wasSuccessful() is doing.
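
To illustrate the python2 behavior mentioned above, a generic sketch (not stestr code):

```python
# Sketch of the difference: unbound methods in Python 2 type-check
# their first argument, plain functions in Python 3 do not.
class Base(object):
    def was_ok(self):
        return True

class Other(object):
    pass

# Python 2: Base.was_ok is an unbound method and raises TypeError when
# called with an Other instance as "self".
# Python 3: Base.was_ok is just a function, so any object with the
# attributes the method touches is accepted.
print(Base.was_ok(Other()))
```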

@mtreinish (Owner, Author)

Ok, I just replaced the usage of TestResult.wasSuccessful() with a custom helper function that does the same thing. It's only a one-line function, but I think it's cleaner than manually checking the counts in all the places we call wasSuccessful().
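
Roughly, a helper like the one described would look as follows (a sketch; the helper's actual name and home module in stestr may differ):

```python
# One-line helper mirroring what testtools.TestResult.wasSuccessful()
# does, but usable with the StreamSummary object built from the
# subunit stream: unexpected successes also count against the run.
def wasSuccessful(summary):
    return not (summary.errors or summary.failures
                or summary.unexpectedSuccesses)
```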

@masayukig (Collaborator)

Why not change this too?

failed = not summary.wasSuccessful()
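
With a helper along the lines sketched above, that call site would change to something like this (illustrative; the results module name is an assumption):

```python
# Before: StreamSummary.wasSuccessful() ignores unexpected successes.
failed = not summary.wasSuccessful()
# After: route through the helper so unexpected successes fail the run.
failed = not results.wasSuccessful(summary)
```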

@mtreinish (Owner, Author)

Heh, because I missed it. I'll push up a new revision in a minute to correct the oversight.

This commit addresses a bug when unexpected success results are emitted
by a test suite. stestr was relying on the wasSuccessful() method of the
StreamResult class from testtools, which doesn't properly account for
unexpected success. So when stestr was run on a test suite it wouldn't
properly handle tests with an unexpected success result and would thus
treat the run as successful. This addresses the issue by adjusting our
testtools API usage to manually query the results object for
uxsuccess results instead of relying on the existing wasSuccessful()
method from the StreamResult class. This is basically the same as the
wasSuccessful() method from the testtools.TestResult class, which exhibits
the correct behavior, but isn't something we're using.

An issue has been opened in testtools (testing-cabal/testtools#273)
regarding the accounting error with StreamResult.wasSuccessful();
if/when a fix is introduced into a released testtools, we can investigate
switching these calls back to use that method and deleting our local
helper function. But for the time being this mitigates the issue.

Fixes #189
@mtreinish (Owner, Author)

@masayukig Ok, the latest revision now has the last command updated too.

@masayukig (Collaborator) left a comment

Thanks for the update :) LGTM.
I think it would be good to add unit tests for last.py and failing.py, but that's a bit of a different topic and we can push a follow-up patch later.

@masayukig masayukig merged commit 58963c9 into master Aug 7, 2018
@masayukig masayukig deleted the use-testtools_wasSuccessful branch August 7, 2018 03:25
jogo added a commit to jogo/stestr that referenced this pull request Sep 16, 2022
Previously `stestr run` and subunit_trace used slightly different
output and validation logic. `run` wouldn't catch the case where a test
exited without a status, while code using `last` and `history` would,
since they used subunit_trace.

Refactor the subunit_trace output and validation logic and use it for
`load`/`run`.

One side effect of this, which required a change in tests, is that the
current logic will return an error if the only test that is run is an
expected failure (related to mtreinish#190).

Converge on the subunit_trace formatted output; in addition to the
summary, also show output when something goes wrong.

The primary motivation for this change is to better detect when a
subprocess is killed and not all tests are run as expected (segfault,
OOM etc.).
Successfully merging this pull request may close these issues.

Unexpected Successes do not result in overall failure