Stress runs #6419
Comments
We should be able to specify parallelism as well.
Related problem and solution: #6194 (comment)
I would love this also!
Oh man, how have I not seen this! I could totally build this. We already have systems like this at Spotify for Jest and other test runners.
Go for it @palmerj3! 😃
This still a thing? I was looking for ways to do this with the CLI and found this issue 🤣
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.
This issue was closed because it has been stalled for 7 days with no activity. Please open a new issue if the issue is still relevant, linking to this one.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
🚀 Feature Proposal
To help ensure test quality in a repository, this feature provides a way of stress running a test.
Stress running means running the same test multiple times in parallel and reporting every result separately, so that the results can be analyzed later. A successful test is all green, a flaky test is a mix of passes and failures, and a broken test fails in every run.
It is important to return results separately, as people might want to compute the flakiness percentage afterwards and decide based on it (life's not black or white! 😉).
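As a sketch of that classification, here is one way the separate results could be turned into a status plus a flakiness percentage. This is illustrative only: the function name and the input shape (an array of per-run pass/fail booleans) are assumptions, not part of any Jest API.

```javascript
// Classify a stress run from its individual outcomes.
// `results` is an array of booleans, one per run (true = passed).
function classifyStressRuns(results) {
  const total = results.length;
  const passes = results.filter(Boolean).length;
  if (passes === total) return { status: 'successful', flakiness: 0 };
  if (passes === 0) return { status: 'broken', flakiness: 100 };
  // Flakiness: percentage of runs that failed despite the test sometimes passing.
  return { status: 'flaky', flakiness: ((total - passes) / total) * 100 };
}

console.log(classifyStressRuns([true, false, true, false])); // → { status: 'flaky', flakiness: 50 }
```

A consumer could then apply its own threshold, e.g. only block a release when flakiness exceeds some percentage, rather than treating any single failure as fatal.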
Motivation
Ensuring that a test passes reliably before releasing it to the world gives a higher signal when running tests, and less annoyance for other users who might run the test later.
Example
jest --stressRuns=24
to perform 24 runs of the same test in parallel.

Pitch
Being able to run the same test at the same time requires deep integration with the runner and the reporter. Currently you can approximate this by (ab)using the multi-project runner (MPR) and repeating the same configuration over and over, but that's just a hack 😄.