
Add @stedolan markbench benchmark for prefetching #457

Merged
merged 1 commit into ocaml-bench:main on Jun 8, 2023

Conversation

@fabbing (Contributor) commented on Jun 5, 2023

This PR, a joint effort with @MisterDA, adds a slightly modified version of @stedolan's markbench micro-benchmark.
This micro-benchmark was first used in ocaml/ocaml#10195 and then in ocaml/ocaml#11827 to validate the prefetching speedup while the GC is tracing blocks.

It could be useful as a sort of regression test running in Sandmark.

It would be preferable to use the seconds/GC time calculated by the benchmark itself, which is reported on stdout, since that avoids accounting for setup time.
How could this be achieved in Sandmark?
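To make the measurement concrete, here is a minimal OCaml sketch of the general shape of such a benchmark. It is an illustrative assumption, not the actual markbench code: the tree type, sizes, and output format are invented for the example. It builds a large pointer-rich heap, forces a number of major collections (during which the GC traces every live block), and prints the seconds/GC figure on stdout so that setup time is excluded:

```ocaml
(* Illustrative sketch only; not the actual markbench code. *)
(* Build a large, pointer-rich structure so the major GC has
   many blocks to trace on every cycle. *)
type tree = Leaf | Node of tree * tree

let rec build depth =
  if depth = 0 then Leaf
  else Node (build (depth - 1), build (depth - 1))

let () =
  let gcs = 10 in
  (* Keep the root live so every full_major must trace the whole graph. *)
  let root = Sys.opaque_identity (build 20) in
  let t0 = Sys.time () in
  for _ = 1 to gcs do
    Gc.full_major ()
  done;
  let t1 = Sys.time () in
  ignore (Sys.opaque_identity root);
  (* Report tracing time per GC on stdout, excluding setup time. *)
  Printf.printf "%.6f seconds/GC\n" ((t1 -. t0) /. float_of_int gcs)
```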

@punchagan (Contributor) commented

Thanks for the contribution, @fabbing and @MisterDA. As discussed in person, the benchmarks are currently run using different wrappers, like orun, perfstat or pausetimes.

But the use case of being able to measure specific parts of a program as part of a benchmark makes sense to me. We do have such benchmarks running on other repositories via current-bench, which allows repositories to define their own custom benchmarks. I wonder whether that might be a better place for this benchmark, or whether we should add support for this kind of measurement to Sandmark's micro-benchmarks.

@shakthimaan or @kayceesrk might have thoughts on this.

@kayceesrk (Contributor) commented

> But the use case of being able to measure specific parts of a program as part of a benchmark makes sense to me.

Sandmark is not built for measuring and reporting on specific parts of the program. All the reported metrics are for the entire program. Breaking this invariant complicates how we measure, report and analyse the benchmarks in Sandmark. I am not keen on breaking this.

current-bench is probably the better place. The other option is to make the prefetching-specific bits run much longer, so that the prefetching effects dominate the program behaviour measured as a whole. This may be as simple as running the core parts of the algorithm repeatedly so that prefetching effects are magnified.

@fabbing (Contributor, Author) commented on Jun 7, 2023

> current-bench is probably the better place. The other option is to make the prefetching-specific bits run much longer, so that the prefetching effects dominate the program behaviour measured as a whole. This may be as simple as running the core parts of the algorithm repeatedly so that prefetching effects are magnified.

The core part is in fact already run several times, and the repetition count can easily be tweaked via the launch parameter.
The GC/tracing time of the benchmark largely dominates the setup time, so the benchmark can be kept as it is in Sandmark.
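As a rough illustration of what such a launch parameter could look like (the actual markbench option name and parsing may differ), the repetition count can simply be read from the command line:

```ocaml
(* Hypothetical sketch: scale how many times the core marking loop runs.
   The real markbench parameter may be named and parsed differently. *)
let gcs =
  if Array.length Sys.argv > 1 then int_of_string Sys.argv.(1)
  else 10
```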

@kayceesrk (Contributor) commented

Sounds great.

Co-authored-by: "Antonin Décimo <antonin@tarides.com>"
@punchagan merged commit 772341d into ocaml-bench:main on Jun 8, 2023. 6 checks passed.
@punchagan (Contributor) commented

Thanks for the contribution, @fabbing and @MisterDA! I've merged it with a minor change: the tag was changed from micro_bench to macro_bench to make sure that the benchmark runs in our nightly runs.
