[Bug]: Generating a coverage report with --runInBand, collectCoverageFrom, and a transformer can mask a failing exit code #13233
Comments
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
I believe this is still an issue -- I can try to dig into the code base to evaluate what's causing the problem.
Same problem here when using --runInBand. Does someone have a workaround? I can't run in parallel on our server :(
Same problem here. This is still a bug.
I'm also hitting this now. (I know "+1 isn't helpful", but stalebot is watching the clock! 😉)
Same issue (hi stalebot 🙄)
It has been a few months of reproduction across a bunch of folks and the issue is still marked as needing triage -- does anybody know if there is a mechanism to escalate?
Having looked at no code, here's my hopefully-not-red-herring pet theory in lieu of triage. I noticed similar behavior with a certain setting. I believe the intention (I feel like this was implied in the docs somewhere) is for there to be the main "thread" plus one or more workers. I assume the trouble comes in when the main "thread" is the only one: it thinks it's a worker, and ends for whatever reason with no one left to do the clean-up, reporting, whatever. If my imagination is in the ballpark, a solution might be to enforce the correct minimum worker count (guessing 2), and/or to ensure that when the main "thread" is working as a solitary worker, it's resilient enough to pick back up and do main-thread stuff whether its worker work was successful or not (think try/catch/finally sort of thing).
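The try/catch/finally idea in this comment can be sketched as a hypothetical pattern. None of these names come from Jest's source; the sketch only shows the shape of the proposed fix: whatever happens while the main process is acting as the sole worker, reporting still runs and a failure still surfaces in the exit code.

```javascript
// Hypothetical illustration (not Jest internals): runTests is the main process
// doing solitary "worker" work; writeCoverageReport is the main-thread clean-up
// that the bug report says gets skipped. The finally block guarantees reporting
// runs, and a broken report can never downgrade a failure to exit code 0.
function runInBandAndReport(runTests, writeCoverageReport) {
  let exitCode = 0;
  try {
    const result = runTests(); // "worker" phase
    if (!result.success) exitCode = 1;
  } catch (err) {
    exitCode = 1; // a crash during the run must surface as a failing exit code
  } finally {
    try {
      writeCoverageReport(); // main-thread phase runs no matter what
    } catch (reportErr) {
      exitCode = exitCode || 1; // a failed report must not mask a failure
    }
  }
  return exitCode;
}
```

In this shape, the silent-success bug described in the issue (tests crash, report errors, process exits 0) is impossible by construction.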
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
As far as I know, this is still an issue.
Ran into this as well.
Ran into this as well.
I have the same issue. Hi stalebot! 👋
Reproducing this error is easy.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
I suppose this warrants another bump.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
It's unclear to me if the maintainers of this project end up seeing issues like this, but in case they do I suppose I'll still bump.
I can confirm that the bug is still there with the latest versions of jest and ts-jest as of May 2023. Also, I found that I have a test case covering this issue: https://github.com/handy-common-utils/dev-dependencies/blob/08fc16a45db1e22882f084f14c3be4acaca1e956/jest/test/fs-utils.spec.ts#LL51C7-L51C67 The test case would fail in GitHub Actions if I remove the setting. I suspect that GitHub gives the actions 2 virtual CPU cores, and that triggers the problem if you don't tell Jest to use 2 workers.
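The workaround this comment hints at (telling Jest to use 2 workers) can be sketched in config form. This is an assumption about the linked repo's setup, not a copy of it; `maxWorkers` is Jest's option for pinning the worker count.

```javascript
// jest.config.js (illustrative sketch, not the linked repo's exact config).
// Pinning the worker count to 2 keeps the main process from acting as the
// sole in-band "worker", which commenters suspect is the path that masks
// the failing exit code during coverage reporting.
module.exports = {
  maxWorkers: 2,
  collectCoverage: true,
  collectCoverageFrom: ['src/**/*.ts'], // hypothetical pattern
};
```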
In my case it fails silently, as described, if there is at least one uncovered file which has compile errors. I noticed this while having an incomplete but unused file (a draft) locally. The workaround is obviously to make sure there are no compile errors, but an error message would be helpful, of course :)
Same problem here. Setting
Have the same issue, any updates on this?
Any updates?
I don't believe anybody in the jest team has seen this issue, unfortunately. If anybody knows anybody that can help escalate, it would probably be useful!
Is the I am asking this because
Does not reproduce with So this is either a setup issue or a problem in a transformer you are using. Simply report the issue in their repos. If someone is able to reproduce the problem using
Thank you for taking a look at this! I'll see if I can create a reproduction case with
I was not able to reproduce this with
I have the same problem!!!
@FranciscoLagorio would you be up for also commenting at the
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 30 days.
This issue was closed because it has been stalled for 30 days with no activity. Please open a new issue if the issue is still relevant, linking to this one. |
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Version
29.0.2
Steps to reproduce
1. Create a new project that uses a jest transformer (e.g. `ts-jest`).
2. Specify a `collectCoverageFrom` pattern in `jest.config.js` in a way that will invoke the transformer when collecting coverage.
3. Create a `foo.test.js` file that will fail to run (e.g. by including a syntax error). Note: this does NOT need to pass through the transformer.
4. Create a file that matches the `collectCoverageFrom` pattern and will pass through the jest transformer (e.g. `bar.ts`), and write something that will cause an error, e.g. a syntax error, in that file.
5. Run `npx jest --runInBand --coverage`.
6. Run `echo $?` to see the exit code, which will be `0`.
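The steps above can be sketched as a minimal project. The file contents are illustrative assumptions, not the reporter's exact files; the transform pattern follows ts-jest's documented setup.

```javascript
// jest.config.js (sketch): route .ts files through a transformer (ts-jest here)
// and point collectCoverageFrom at files that must pass through it.
module.exports = {
  transform: { '^.+\\.tsx?$': 'ts-jest' },
  collectCoverageFrom: ['**/*.ts'],
};

// foo.test.js: a test file that fails to even run (plain JS, never transformed):
//   describe('broken', () => {   // <- unclosed block: syntax error

// bar.ts: matches collectCoverageFrom, goes through ts-jest, and is itself broken:
//   export const x: number = ;   // <- syntax error
```

Then run `npx jest --runInBand --coverage` followed by `echo $?`; per the report, this prints `0` even though the run failed.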
Expected behavior
I expect the coverage report to be generated and the process to exit with code `1`.
Actual behavior
The tests fail (due to failure to run), but the coverage report generation silently errors and the exit code for the entire process is, incorrectly, `0`. (This, for instance, means that CI marks the tests as succeeding.)
Additional context
Running `npx jest --coverage` without `--runInBand` does not cause this bug; the coverage output renders as expected.
Running `npx jest --runInBand --coverage` does not render the coverage output, and the process then exits with code `0`.
Some other interesting "alternative outcomes":
- If `collectCoverageFrom` is not specified, then coverage is generated as expected and the process returns correct exit codes.
- In another configuration, the exit code is `1` regardless of test outcomes.
Environment