[tests] Make the 'test_html_copy_source' case agnostic of test evaluation order. #12118
base: master
Conversation
(This is ready for review: I intend to open one pull request per independence fixup. My guess is that a small number of the tests will be extremely difficult to resolve, compared to this one.)
Thank you for this one, but in the future we'll just use Technically speaking, the minimal isolation guarantees that you are in different folders, since it's based on 1) the test module / class and 2) the sphinx marker's arguments. As such, I'd prefer waiting for part 3 to be merged before we start fixing the current test suite.
Fair points. When I run the test suite locally with That makes me wonder whether we should try fixing the current test suite first, enable random runtime, and then add parallelism. I don't want to block, but I'd also prefer to navigate towards simplicity and robustness before adding (admittedly perhaps necessary) complexity.
Ok, I can give you the tests that are known to fail and the reasons why:
Could you review these ideas for fixing each category:
...and then enable a test suite with If I'm creating unnecessary work, then let's not do this. But it feels achievable to me (apart from the
Ah, I should add:
Not always, because there are some helper functions that filter warning messages based on the testroot name... which is not used if you use a custom srcdir (so you either need a custom srcdir with the testroot name inside, or to fix the filter function...)
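To illustrate the coupling, here is a hypothetical sketch of such a filter (the name `filter_warnings` and the `test-<testroot>` path convention are assumptions for illustration, not Sphinx's actual helper):

```python
def filter_warnings(warnings_text, testroot):
    """Keep only warning lines whose path mentions the testroot's directory.

    Hypothetical sketch: if a test switches to a custom srcdir whose name does
    not contain ``test-<testroot>``, every warning is silently filtered away,
    so assertions on the remaining warnings no longer test what they claim to.
    """
    needle = f'test-{testroot}'
    return [line for line in warnings_text.splitlines() if needle in line]
```

With a filter like this, a custom srcdir must embed the testroot name (e.g. `test-basic-copy-source`) or the filter itself has to learn about the new directory.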
That's easy to patch: just use a fixture that unloads modules at the end of each test (make `rollback_sysmodules` an autouse fixture; we can improve that one later, leave it to me).
That's the main issue: determining the "others" is a challenge. Even with random-order you can have no failures, but random-order + xdist might uncover some. It's really hard to know now what the bad tests are... Maybe we should review the whole test suite and extract the bad tests we can think of?
Really? Ok! Thank you.
My suggestion would be: make this week a week of
Is there a script that can spot all errors? Like, running many instances of the test suite would likely be very long and would monopolize the CI/CD resources, so...
After a test begins failing due to ordering concerns, there is a way to backtrack and get a reasonable degree of confidence about where the interacting tests might be. For example: for multiple
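The backtracking idea can be sketched as a bisection over the ordered list of tests that ran before the failing one. Here `fails(prefix)` is a placeholder for "run this prefix plus the target test and report whether the target fails"; in practice it would shell out to pytest with an explicit test list. The sketch assumes a single interacting test, so that the failure is monotone in the prefix length:

```python
def find_culprit(predecessors, fails):
    """Binary-search an ordered predecessor list for the earliest test whose
    presence makes the target test fail.

    ``fails(prefix)`` must return True iff running ``prefix`` before the
    target reproduces the failure. Returns None when no prefix reproduces it.
    """
    lo, hi = 0, len(predecessors)
    # Invariant: predecessors[:lo] does not reproduce the failure;
    # predecessors[:hi] is the smallest prefix still suspected of doing so.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(predecessors[:mid]):
            hi = mid
        else:
            lo = mid
    return predecessors[hi - 1] if fails(predecessors[:hi]) else None
```

Each probe costs one pytest run, so isolating one interaction among N predecessors takes about log2(N) runs instead of N.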
(whether that exists as a reusable
The closest thing I've found so far is https://github.com/pytest-dev/pytest-random-order (less well known than ). It can shuffle tests at the class/module/package/global level. However, it doesn't do one thing that would be really helpful for us: systematically narrowing down the problem dependencies between tests.

If all tests succeed globally in order, then if a single test (let's say the
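For reference, a sketch of how pytest-random-order is typically invoked; the flag names follow that plugin's documentation, so treat them as an assumption about the installed version:

```shell
# Shuffle only within each module, keeping inter-module order fixed:
pytest --random-order --random-order-bucket=module

# Reproduce a previously failing shuffle by reusing its reported seed:
pytest --random-order --random-order-seed=123456

# Combine with pytest-xdist, since the two plugins are compatible:
pytest -n auto --random-order
```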
I'm using the random-order because it's compatible with xdist. |
Feature or Bugfix
Purpose
Detail
Use a unique `srcdir` during the `test_html_copy_source` test case so that it can succeed even if other test cases that use the same `testroot` modify their source files before it runs.
Relates