[core] Report performance test results on each PR #3551
Conversation
Force-pushed from 14660c5 to d8f0e6e (compare)
This pull request has conflicts; please resolve them before we can evaluate the pull request.
@m4theushw Done, added to CircleCI. The comment should come from https://github.com/mui-bot going forward. I have followed https://danger.systems/guides/getting_started.html
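Following the Danger getting-started guide linked above, the dangerfile would build a markdown string and post it as the PR comment. This is only a sketch: the `BenchmarkResult` shape and function names are assumptions for illustration, not the PR's actual code.

```typescript
// Hypothetical shape of one benchmark's aggregated result.
interface BenchmarkResult {
  name: string;
  meanMs: number;
  stdDevMs: number;
}

// Builds the markdown table body for the bot comment.
function formatPerformanceComment(results: BenchmarkResult[]): string {
  const header = 'These are the results for the performance tests:\n\n';
  const rows = results
    .map((r) => `| ${r.name} | ${r.meanMs.toFixed(1)} ms | ±${r.stdDevMs.toFixed(1)} ms |`)
    .join('\n');
  return `${header}| Test | Mean | Std. dev. |\n| --- | --- | --- |\n${rows}`;
}

// In the real dangerfile, this string would be passed to Danger's markdown():
//   markdown(formatPerformanceComment(results));
```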
These are the results for the performance tests:
@m4theushw could we add the standard deviation or the variance, to get a sense of how accurate the values are?
@flaviendelangle I added the standard deviation. As you can see, the values vary substantially, probably because the integration between Playwright and the Chrome DevTools protocol is slow. This means we won't be able to tell whether a PR has a small impact on performance. However, it's still useful for situations where a change causes a drastic performance hit that a unit test, using a small dataset, wouldn't catch via a timeout. In 154cc7e I intentionally caused a performance degradation (ignore the failing unit tests). The average time to select all rows is 60x higher, which is a clear signal that something is wrong. I'll revert 154cc7e once you react to this comment.
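The mean and standard deviation discussed here can be computed from the raw timing samples in a few lines. A minimal sketch, with hypothetical function names (the PR's actual helpers may differ); the sample (n − 1) denominator is used so a handful of noisy Playwright runs still gives a reasonable spread estimate:

```typescript
// Arithmetic mean of the timing samples (in ms).
function mean(samples: number[]): number {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

// Sample standard deviation (n - 1 denominator) of the timing samples.
function stdDev(samples: number[]): number {
  const m = mean(samples);
  const variance =
    samples.reduce((sum, x) => sum + (x - m) ** 2, 0) / (samples.length - 1);
  return Math.sqrt(variance);
}
```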
Seems perfect 👍 |
This reverts commit 154cc7e.
This PR removes the performance tests that were running as unit tests. Instead, they now run in a separate environment, with the React production build and closer to how a user interacts with the grid. The results will be reported as a comment on each PR, which means that, initially, a PR won't fail if it introduces a regression.
The comparison between these results and a "gold master" (assumed to be the last released version) will be done after the merge, once we have a performance snapshot for a release.
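The post-merge comparison described above could look like the sketch below: flag any test whose mean duration grew beyond some ratio of the gold-master snapshot. The snapshot shape and the 2x threshold are illustrative assumptions, not values taken from this PR.

```typescript
// Hypothetical snapshot format: test name -> mean duration in ms.
interface Snapshot {
  [testName: string]: number;
}

// Returns the names of tests whose current mean exceeds the gold-master
// mean by more than `maxRatio` (assumed threshold; 2x here for illustration).
function findRegressions(
  goldMaster: Snapshot,
  current: Snapshot,
  maxRatio = 2,
): string[] {
  return Object.keys(goldMaster).filter(
    (name) => name in current && current[name] / goldMaster[name] > maxRatio,
  );
}
```

A CI job could run this after each release snapshot is published and fail (or alert) when the returned list is non-empty.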
As inspiration: https://github.com/saucelabs/performance-CI-demo/blob/main/README.md