[MNT] remove coverage reporting and pytest-cov from PR CI
#6363
base: main
Conversation
@fkiraly I want to be cautious with this. Can we test these:
My caution is mainly because it is highly counterintuitive to me that coverage would affect timing by this much. It is more than 3-4x in your screenshots, and had that been the general effect of pytest-cov, users would have detected it long ago. It is very popular and standard, so I really wonder whether we are missing something else (though I don't have any alternative ideas yet).
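For intuition on why the slowdown is plausible: coverage measurement hooks into CPython's line-tracing machinery, which fires a callback on every executed line. A minimal sketch of that mechanism below, using a pure-Python tracer via `sys.settrace` (coverage.py ships a faster C tracer, so this overstates the real overhead, but the multiplier effect is the same in kind):

```python
import sys
import time

def busy_loop(n=200_000):
    """Pure-Python work whose runtime is dominated by line execution."""
    total = 0
    for i in range(n):
        total += i
    return total

def line_tracer(frame, event, arg):
    # Minimal trace function: invoked for every line event, does nothing.
    return line_tracer

# Time the loop without tracing ...
t0 = time.perf_counter()
busy_loop()
untraced = time.perf_counter() - t0

# ... and with a line tracer installed, as coverage measurement does.
sys.settrace(line_tracer)
t0 = time.perf_counter()
busy_loop()
sys.settrace(None)
traced = time.perf_counter() - t0

print(f"slowdown: {traced / untraced:.1f}x")
```

The point is only that per-line callbacks scale with the number of lines executed, so test suites that execute many small Python statements (as estimator test suites do) pay the most.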
According to the profiler, these parts do indeed create the overhead. How would you turn these off separately? Can you help? Would it be removing the
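For context on turning coverage off without dropping the dependency: pytest-cov is typically enabled via `--cov` flags, either on the CI command line or in `addopts` in the pytest config, and it ships a documented `--no-cov` switch to disable collection for a single run. A hypothetical sketch (the flag values here are illustrative, not sktime's actual config):

```ini
; setup.cfg, hypothetical fragment -- coverage wired in via addopts
[tool:pytest]
addopts = --cov sktime --cov-report=xml

; to disable collection for one run without editing config:
;   pytest --no-cov
; to remove it permanently, drop the --cov* flags from addopts
; and the pytest-cov entry from the CI dependencies
```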
This PR also removes the badge from the README, because it is misleading anyway, with or without this PR. We should find a way to display genuine coverage in the README; I would consider that a separate issue (namely, #5090), which would then include adding the correct coverage display to the README.
So am I. Or perhaps cause and effect are hard to detect in general?
Yes, that only. Let's see what happens.
I think if the README shows 0% or similar, it may give potential users/contributors the negative impression that this framework is untested (e.g., I know I would feel the same about a new tool).
ok - I've added it back in the
Only 5 jobs got triggered, not a single testing job! How did it run everything earlier?
I see - I think I understand where the difference comes from. Previously,
https://github.com/sktime/sktime/actions/runs/8940323391 I triggered a manual test all workflow on this branch for debugging. |
Thanks. Something is taking hours again; how do we find out which estimator it gets stuck on?
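One way to make a hang visible in CI logs (a sketch, assuming pytest; the `timeout` key requires the optional pytest-timeout plugin):

```ini
# pytest.ini, hypothetical fragment
[pytest]
# with -v, each test's node id is written to the log before its outcome,
# so the last id shown is typically the stuck test; --durations lists
# the slowest tests of a completed run
addopts = -v --durations=20
# with pytest-timeout installed, fail and report a test after 10 minutes
# instead of letting it hang the whole job
timeout = 600
```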
I am not aware of a better solution than going into verbose mode. By the way, have you seen the failures? It seems every single "other" run failed with this:
Thanks for pointing this out - this is a bug with a test I added to make sure we test the
The bug surfaces only in the
I checked other jobs, and so far no timeout failure. Only one module job failed, and it's for forecasting:

`FAILED sktime/forecasting/model_evaluation/tests/test_evaluate.py::test_evaluate_common_configs[backend8-scoring1-refit-1-10-fh5-ExpandingWindowSplitter] - OverflowError: Python int too large to convert to C long`

Ref. https://github.com/sktime/sktime/actions/runs/8940323391/job/24558260007#step:3:6594

Any idea if it's sporadic? We'll probably know from the random seed diagnostic. (FYI @benHeid)
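For anyone unfamiliar with this error class: Python ints are arbitrary precision, but C-backed code (e.g. NumPy's legacy `RandomState`) stores values in fixed-width C integers, so a seed that is fine in Python can overflow on the way into C. A minimal pure-Python sketch of the boundary, assuming a 64-bit C long (it is 32-bit on Windows, which would make the limit much lower):

```python
BITS = 64  # assumed width of a C long on the failing runner

max_c_long = 2**(BITS - 1) - 1  # largest value a signed C long can hold

# This value still fits into 8 signed bytes:
max_c_long.to_bytes(8, "little", signed=True)

# One past the limit raises OverflowError -- the same error class as in
# the CI failure above.
try:
    (max_c_long + 1).to_bytes(8, "little", signed=True)
    overflowed = False
except OverflowError:
    overflowed = True

print(overflowed)  # → True
```

If a randomly generated seed can exceed this range, the failure would indeed be sporadic, appearing only when the draw happens to be large.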
This is definitely a new one - have not seen this before. However, there have been failures in
Probably not, as that one does not add random seeds except in |
This PR removes generation of coverage reports, and installation and use of `pytest-cov`, from standard CI. It also removes the (unreliable) coverage badge from the README.

Reasons: