Fix handling of unexpected success results #190
Conversation
Ugh, Python 2 is enforcing the object's type for the self argument, while Python 3 will take a different type we pass in. We'll have to figure out a different method of doing this, I guess; maybe just building a helper to do manually what testtools.TestResult.wasSuccessful() is doing.
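To illustrate the Python 2 vs. Python 3 difference the comment refers to: in Python 2, an unbound method accessed on a class enforces that its first argument is an instance of that class, so you can't borrow a method from one class and call it on an unrelated object. Python 3 removed unbound methods, so `Base.name` below is a plain function and accepts any object with the right attributes. The class names here are purely illustrative, not stestr code.

```python
class Base:
    def name(self):
        # Only relies on duck typing: any object with a `label` works.
        return self.label


class Unrelated:
    # Not a subclass of Base.
    label = "duck"


# Under Python 3 this succeeds because Base.name is just a function.
# Under Python 2 the same call would raise:
#   TypeError: unbound method name() must be called with Base instance
#   as first argument
print(Base.name(Unrelated()))  # duck
```

This is why reusing `testtools.TestResult.wasSuccessful` by passing in a different result object worked on Python 3 but not Python 2, motivating the standalone helper.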
Force-pushed ec18f05 to eb97f84
Ok, I just replaced the usage of TestResult.wasSuccessful() with a custom helper function that does the same thing. It's only a one-line function, but I think it's cleaner than manually checking the counts in all the places we call wasSuccessful().
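A minimal sketch of the kind of one-line helper described above. The attribute names (`errors`, `failures`, `unxsuccess`) are assumptions modeled on what a testtools summary object tracks; the actual stestr helper may differ. `FakeSummary` is a stand-in for demonstration only.

```python
def wasSuccessful(summary):
    """Return True only if no failures, errors, or unexpected
    successes were recorded on the summary object."""
    return not (summary.errors or summary.failures or summary.unxsuccess)


class FakeSummary:
    """Tiny stand-in for a testtools-style results summary (illustrative)."""

    def __init__(self, errors=(), failures=(), unxsuccess=()):
        self.errors = list(errors)
        self.failures = list(failures)
        self.unxsuccess = list(unxsuccess)


print(wasSuccessful(FakeSummary()))                   # True
print(wasSuccessful(FakeSummary(unxsuccess=["t1"])))  # False
```

The point of the helper is that, unlike `StreamResult.wasSuccessful()`, it treats an unexpected success as a failing condition.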
Why not change this too? (stestr/stestr/commands/last.py, Line 157 in eb97f84)
Heh, because I missed it. I'll push up a new revision in a minute to correct the oversight.
Force-pushed eb97f84 to cbca1b9
This commit addresses a bug triggered when a test suite produces unexpected success results. stestr was relying on the wasSuccessful() method of the StreamResult class from testtools, which doesn't properly account for unexpected success. So when stestr was run on such a test suite, it wouldn't properly handle tests with an unexpected success result and would treat the run as successful.

This addresses the issue by adjusting our testtools API usage to manually query the results object for unexpected success results instead of relying on the existing wasSuccessful() method from the StreamResult class. This is basically the same as the wasSuccessful() method from the testtools.TestResult class, which exhibits the correct behavior but isn't something we're using.

An issue has been opened in testtools, testing-cabal/testtools#273, regarding the accounting error in StreamResult.wasSuccessful(). If/when a fix is introduced into a released testtools, we can investigate switching these calls back to that method and deleting our local helper function. But for the time being this mitigates the issue.

Fixes #189
Force-pushed cbca1b9 to a96c158
@masayukig Ok, the latest revision now has the last command updated too.
Thanks for the update :) LGTM.
I think it would be good to add unit tests for last.py and failing.py, but that's a bit of a different topic and we can push a follow-up patch later.
Previously, `stestr run` and subunit_trace used slightly different output and validation logic: run wouldn't catch the case where a test exited without a status, while code using `last` and `history` would, since they used subunit_trace.

Refactor the subunit_trace output and validation logic and use it for `load`/`run`. One side effect of this, which required a change in tests, is that the current logic will return an error if the only test run is an expected failure (related to mtreinish#190).

Converge on the subunit_trace formatted output; in addition to the summary, also show when something goes wrong. The primary motivation for this change is to better detect when a subprocess is killed and not all tests are run as expected (segfault, OOM, etc.).
This commit addresses a bug triggered when a test suite produces
unexpected success results. stestr was relying on the wasSuccessful()
method of the StreamResult class from testtools, which doesn't properly
account for unexpected success. So when stestr was run on such a test
suite, it wouldn't properly handle tests with an unexpected success
result and would treat the run as successful. This addresses the issue
by adjusting our testtools API usage to call the wasSuccessful() method
from the TestResult class instead, which has the correct behavior.

An issue has been opened in testtools, testing-cabal/testtools#273,
regarding the accounting error in StreamResult.wasSuccessful(). If/when
a fix is introduced into a released testtools, we can investigate
switching these calls back to that function. But for the time being
this mitigates the issue.

Fixes #189