
Differentiate coverage report scope between watch and run mode #2628

Closed
xsjcTony opened this issue Jan 10, 2023 · 4 comments · Fixed by #2665
Labels
feat: coverage Issues and PRs related to the coverage feature

Comments

@xsjcTony
Contributor

xsjcTony commented Jan 10, 2023

Clear and concise description of the problem

Based on issue #2316 and the fix in #2385, I cannot find a way to achieve the following:

  1. In run mode, generate a report for all files listed in the include option, even those that have no test file (requires all: true)
  2. In watch mode, ONLY generate a report for the files that have changed, so I can see how my changes improve the coverage (after the first run; a full coverage report for the first run is expected)

But currently, with either istanbul or c8 (I've tried both), I cannot achieve this.

If I set all: true, I get the full coverage in run mode, but in watch mode I get the full report as well, no matter what the value of the cleanOnRerun option is.
If I set all: false, then in watch mode I can see only the files that are covered, but I CANNOT get all files in run mode.

Is there any way I can get a different coverage scope between those two modes?
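For context, the coverage setup I'm describing looks roughly like this (a minimal sketch; the include pattern and paths are just examples, not my actual config):

```typescript
// vitest.config.ts — minimal sketch; the include pattern is illustrative
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'c8',       // or 'istanbul' — I've tried both
      all: true,            // full report in run mode, but also in watch mode
      include: ['src/**'],
      cleanOnRerun: true,   // has no effect on the "all" behaviour in watch mode
    },
  },
})
```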

Suggested solution

As suggested in the comment in #2385 (comment), create another option for that (for watch mode only; run mode would keep referring to the value of the all option).
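For example, such an option might look like this (entirely hypothetical: allInWatchMode is not an existing Vitest option, just an illustration of the idea, and would fail type-checking today):

```typescript
// vitest.config.ts — hypothetical sketch: "allInWatchMode" is NOT a real
// Vitest option, only an illustration of the suggestion above.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      all: true,             // run mode: report all files from "include"
      allInWatchMode: false, // hypothetical: watch re-runs report only changed files
    },
  },
})
```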

Additional context

And also I want to double-check: is the new feature in #2385 only for istanbul? I personally prefer using c8 (but like I said, I've tried both and neither of them works).


@sheremet-va sheremet-va added bug feat: coverage Issues and PRs related to the coverage feature labels Jan 10, 2023
@AriPerkkio
Member

AriPerkkio commented Jan 11, 2023

This sounds like a valid use case and should be supported without requiring users to modify the configuration file between vitest run and vitest watch runs. However, I would rather avoid adding new configuration flags that are identical to existing ones but only take effect in watch mode. Maybe this could be solved by changing how coverage.all behaves in watch mode.

So when coverage.all is enabled, the user should

  • see all files, including the uncovered ones, in the coverage report when running vitest run
  • see only the changed files in coverage report when re-running tests in watch mode, vitest watch

I think the user should see all files, including uncovered ones, in watch mode when

  • watch mode is started and all tests are run for the first time
  • the user presses a key to re-run all files

Currently, when coverage.all is enabled and the user re-runs only the changed tests, all files are shown. I don't think this should ever happen.

Internally, Vitest already keeps track of whether tests are running for the first time, so we could quite easily flip the all flag when needed.
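Roughly, the decision could look like this (an illustrative sketch, not the actual Vitest internals; the function and field names are hypothetical):

```typescript
// Illustrative sketch only — names are hypothetical, not Vitest internals.
interface CoverageRunContext {
  allOption: boolean          // the user's coverage.all setting
  watchMode: boolean          // vitest watch vs. vitest run
  firstRun: boolean           // initial run after watch mode starts
  rerunAllTriggered: boolean  // user pressed the key to re-run all tests
}

// Decide whether uncovered files should be included in this report.
function shouldIncludeUncoveredFiles(ctx: CoverageRunContext): boolean {
  if (!ctx.allOption) return false
  if (!ctx.watchMode) return true  // run mode: honour coverage.all as-is
  // watch mode: only show all files on the first run or an explicit re-run-all
  return ctx.firstRun || ctx.rerunAllTriggered
}
```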

@xsjcTony do the use case descriptions above sound good?

And also I want to double-check, is that new feature in #2385 only for istanbul?

That change only modified the default value of cleanOnRerun, which is used in watch mode. It affects both the c8 and istanbul providers. The PR title was a bit misleading there, but the change was intentional.

@xsjcTony
Contributor Author

@AriPerkkio Yes, that describes exactly what I want. Thanks so much for making it clearer 🎉!

@xsjcTony
Contributor Author

xsjcTony commented Jan 16, 2023

@AriPerkkio Thanks so much for the PR, it saves my life. I actually have another simple question, but it's not really important.

Generally, which of c8 and Istanbul is better in terms of performance (report generation speed)?

@AriPerkkio
Member

Generally, which of c8 and Istanbul is better in terms of performance (report generation speed)?

It depends. There can be cases where c8 is faster, but I can imagine cases where istanbul could be faster. They work so differently that it's hard to say which one should be considered faster. It's not just about the coverage collection speed, but also the post-processing of the results.

For example, V8 coverage will report coverage for every function that ran, including all your node_modules and internal NodeJS APIs. The coverage/tmp directory can grow into hundreds of MBs of JSON files. c8 then goes through these and picks out the ones you've told it to include in the coverage. With istanbul you can explicitly instruct it to instrument only specific files, but since this is a code transformation step, it can get slow in certain cases.
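The post-processing step can be illustrated like this (a toy sketch, not c8's actual implementation; the entry shape and filtering rules are simplified):

```typescript
// Toy sketch of the post-processing step: V8 dumps coverage for everything
// that ran, and a c8-style tool keeps only the files the user asked for.
interface V8CoverageEntry {
  url: string // script URL as recorded by V8, e.g. "file:///proj/src/app.ts"
}

// Keep only entries under the given source path, dropping node_modules
// and Node.js internals (node: URLs). Real tools use configurable globs.
function filterCoverage(entries: V8CoverageEntry[], sourcePath: string): V8CoverageEntry[] {
  return entries.filter((entry) => {
    if (entry.url.startsWith('node:')) return false         // Node internals
    if (entry.url.includes('/node_modules/')) return false  // dependencies
    return entry.url.includes(sourcePath)
  })
}
```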

Personally, I don't consider speed a factor when choosing between the two. The istanbul provider is much more precise, since its instrumentation is AST based. c8 doesn't do any AST analysis and cannot differentiate code comments from code that actually runs. There's a good summary of the differences here: jestjs/jest#11188.

Try both and pick your favourite.

@github-actions github-actions bot locked and limited conversation to collaborators Jun 7, 2023