
feat: added benchmarks for CESQL #1050

Merged
merged 2 commits into cloudevents:main on May 14, 2024

Conversation

@Cali0707 (Contributor) commented May 2, 2024

Fixes #927

This PR adds benchmarks that run on every TCK test case we have, as that seemed like the easiest way to get a good set of test cases. For each scenario, it benchmarks:

  1. The parsing
  2. The evaluation

Since all of these cases are already tested for correctness, the benchmarks do not check whether the results are correct.

Signed-off-by: Calum Murray <cmurray@redhat.com>
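
For readers who, like the reviewer below, haven't used Go's benchmark tooling: here is a minimal sketch of the pattern the description outlines, assuming the sdk-go sql/v2 `parser.Parse` and `Expression.Evaluate` APIs. The `tckCase` struct and the loading of scenarios are hypothetical placeholders, not the PR's actual code.

```go
package sql_test

import (
	"testing"

	cesqlparser "github.com/cloudevents/sdk-go/sql/v2/parser"
	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// tckCase is a hypothetical stand-in for a single TCK scenario; the real
// benchmarks load the scenarios from the TCK files instead.
type tckCase struct {
	name       string
	expression string
	event      cloudevents.Event
}

func BenchmarkCESQL(b *testing.B) {
	cases := []tckCase{ /* one entry per TCK scenario */ }
	for _, tc := range cases {
		// 1. Benchmark the parsing.
		b.Run("parse/"+tc.name, func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				_, _ = cesqlparser.Parse(tc.expression)
			}
		})
		// 2. Benchmark the evaluation, parsing once outside the timed loop.
		b.Run("evaluate/"+tc.name, func(b *testing.B) {
			expr, err := cesqlparser.Parse(tc.expression)
			if err != nil {
				b.Fatal(err)
			}
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				_, _ = expr.Evaluate(tc.event)
			}
		})
	}
}
```

Running `go test -bench=.` on such a file reports per-case timings (ns/op), which is what the discussion below is about.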
@Cali0707 Cali0707 requested a review from a team as a code owner May 2, 2024 18:40
@Cali0707 (Contributor, Author) commented May 2, 2024

cc @duglin @pierDipi

@duglin (Contributor) commented May 3, 2024

LGTM, but I've never really used the Go benchmark tooling before... so a potentially silly question: I assume this will print stats about the test cases (e.g. timing), but is there a mechanism in all of this that compares the values with the state of the code before whatever PR is being tested? How will people know if a PR impacts performance dramatically without the historical info?

@Cali0707 (Contributor, Author) commented May 3, 2024

@duglin we tried to think this through in Knative and didn't come to a great conclusion. The main challenge with historical info is that it is highly dependent on the machine the code runs on. We found that the GitHub Actions runners were too noisy for good benchmarking.

So our solution was to run the benchmarks once on the main branch and once on whatever branch had the changes (whenever a change might impact performance), and then use some tooling to compare the two results. If you can think of a better workflow, I would be very interested in trying it!
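
The comment doesn't name the comparison tooling; benchstat (golang.org/x/perf/cmd/benchstat) is the standard tool for this two-run workflow, so a sketch assuming it:

```sh
# On main: run each benchmark several times so benchstat has samples to work with.
go test -run='^$' -bench=. -count=10 ./sql/... > old.txt

# On the feature branch (branch name is illustrative): repeat with the same flags.
git checkout my-feature-branch
go test -run='^$' -bench=. -count=10 ./sql/... > new.txt

# Compare the two results (go install golang.org/x/perf/cmd/benchstat@latest).
benchstat old.txt new.txt
```

The `-run='^$'` flag skips the regular unit tests so only benchmarks execute.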

@duglin (Contributor) commented May 3, 2024

Nope - that seems reasonable. Would be really cool if the PR testing infra could do that on each PR and flag it when things look bad. But that's just a dream :-)

@pierDipi (Member) left a comment


LGTM

@pierDipi (Member) commented May 9, 2024

cc @cloudevents/sdk-go-maintainers

@Cali0707 (Contributor, Author) commented
cc @embano1

@duglin (Contributor) commented May 14, 2024

rebase needed

@duglin duglin enabled auto-merge May 14, 2024 22:53
@duglin duglin disabled auto-merge May 14, 2024 22:53
@duglin duglin merged commit b3a8729 into cloudevents:main May 14, 2024
9 checks passed