[Bug?] Reruns of Scenario outlines lead to potentially confusing scenario names #1448
Comments
@timhunt, just to be clear: the fact that it is random doesn't relate to the problem you are describing, right? Also, you don't have to have a browser (or even the dummy HTML file) for this to happen, correct? (I'm just trying to exclude the unrelated points.)

I tried what you described, and the output in the console looked fine, but I noticed the problem you mentioned in the JUnit output. From what I understand, an unambiguous solution would be to generate a scenario name based on the example values in all cases. 🤔
Right, random is not essential. It was just the only thing I could think of to get minimal steps to reproduce.

I think "An unambiguous solution would be to generate a scenario name based on the example values in all cases." is what I would expect. In my brain at least, "Unreliable test #2" is "Unreliable test" run with the second set of Example data, whether or not the first set of data was executed as part of this run.

For me, it is a fairly fundamental principle of automated testing that each testcase should work the same, irrespective of any other tests that may or may not be being run. (And, certainly, each testcase does work the same here; naming is a relatively minor matter, but I still feel it should follow the same principle. That is just my opinion, though.)

Anyway, thank you very much for thinking about this.
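To illustrate the suggestion, here is a minimal sketch (not Behat's actual implementation; the function name and formatting are hypothetical) of deriving a scenario name from the example row's values rather than from a run-order index, so the name stays stable across reruns:

```python
def scenario_name(outline_title: str, example: dict) -> str:
    """Build a scenario name from the outline title and the example
    row's values, instead of an index based on run order.
    Hypothetical sketch; Behat's real naming logic differs."""
    values = ", ".join(f"{key}={value}" for key, value in example.items())
    return f"{outline_title} ({values})"

# With index-based naming, a rerun that executes only the second
# example would report it as "#1". Value-based naming is unambiguous:
print(scenario_name("Unreliable test", {"number": "2"}))
# → Unreliable test (number=2)
```

Whether the full run or only a `--rerun` of the failed example is executed, the reported name would be the same under this scheme.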
To explain what is going on, I am going to use a silly example, just to keep it short. Suppose random.html is a pointless page which displays 1 or 2 at random, and we test it like this.
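The original feature file is not shown here, but a minimal sketch of what such a test might look like (step wordings and file layout are illustrative, not the reporter's actual code) is:

```gherkin
Feature: Random page
  Scenario Outline: Unreliable test
    Given I am on "random.html"
    Then I should see "<number>" in the page

    Examples:
      | number |
      | 1      |
      | 2      |
```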
Suppose that we run this, and get the results:

Unreliable test #1 -> PASS
Unreliable test #2 -> FAIL

Suppose we are using the --rerun option to the test-runner. When we do the re-run, it correctly only runs the second example. However, in the result, it names it #1. That is:

Unreliable test #1 -> PASS (this did correctly check for and find 2 in the page).

This makes it difficult to link the result of the re-run to the original failure, which is leading to occasional false positives from our CI system, which is a minor annoyance.
I am not sure that this is actually a bug, but I thought I would raise it.

I don't know much about Behat internals. Issue #154 might be related? If this is not considered to be a Behat problem, we can probably change our CI scripts which handle the test results to deal with our issue. Thank you for considering this.