
Analyze and select optimizations to port from C++ port of Halo2 by kroma-network/tachyon #293

Open
CPerezz opened this issue Mar 3, 2024 · 3 comments
Labels
enhancement New feature or request question Further information is requested

Comments

@CPerezz
Member

CPerezz commented Mar 3, 2024

I recently saw a message in the Halo2 Ecosystem Discord server where @dongchangYoo and @ashjeong shared a HackMD document link with a list of optimizations found while porting the Halo2 codebase to C++.
The optimizations are claimed to speed up the implementation by around 10%, so we need to make sure we don't spend too much time on this: a 10% gain is not a priority considering other low-hanging fruit we are already aware of.

This is the link: https://github.com/kroma-network/tachyon/blob/main/tachyon/zk/plonk/halo2/optimizations.md

Note: the list now lives at https://github.com/kroma-network/tachyon/blob/main/tachyon/zk/plonk/halo2/README.md

cc: @ed255 @han0110 @kilic @duguorong009 @davidnevadoc

@CPerezz CPerezz added the question Further information is requested label Mar 3, 2024
@ed255 ed255 added the enhancement New feature or request label Mar 4, 2024
@duguorong009

I've checked the optimizations and here is my opinion on them.

Overall, I believe one or two of the optimizations could be ported with a bit of work.
The others need some prerequisites.

1. Lazy IFFT

  • Lazy IFFT for Permutation Argument
  • Lazy IFFT for Lookup Argument
    I am confident these optimizations would bring a real improvement.
    Overall, adapting them needs refactoring, as mentioned in Split create_proof inner logic in focused functions/stages #247.
    In short, it needs some design updates and fairly complex work.
    I think we can go for this after finishing all of the post-split work (a rough sketch of the lazy-evaluation idea follows this list).
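As a rough illustration of the lazy-IFFT idea, and nothing more than that: the polynomial stays in evaluation form after commitment, and the IFFT is only run the first time the coefficient form is actually needed. The type and function names below are placeholders, not taken from halo2 or tachyon.

```rust
/// A minimal sketch of the lazy-IFFT idea, assuming a placeholder field
/// type `F` and an externally supplied `ifft` closure.
enum LazyPoly<F> {
    /// Still in evaluation (Lagrange) form; the IFFT has not been run yet.
    Evals(Vec<F>),
    /// Coefficient form, produced on demand.
    Coeffs(Vec<F>),
}

impl<F> LazyPoly<F> {
    /// Return the coefficient form, running the IFFT only on first access
    /// and caching the result for later calls.
    fn coeffs(&mut self, ifft: impl Fn(&[F]) -> Vec<F>) -> &[F] {
        if let LazyPoly::Evals(evals) = self {
            let coeffs = ifft(evals);
            *self = LazyPoly::Coeffs(coeffs);
        }
        match self {
            LazyPoly::Coeffs(c) => c,
            LazyPoly::Evals(_) => unreachable!(),
        }
    }
}
```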

2. Batch Normalize

  • Batch Normalize for Permuted Polys

  • Batch Normalize for Grand Product Polys
    I think we can apply these optimizations pretty easily.
    However, they need an implementation of batch_inverse and an optimized batch_normalize.

  • Batch Inverse in Batch Normalize
    Here, the Tachyon team says they optimized batch_normalize along with implementing batch_inverse.
    We should check and introduce their optimizations.
    One caveat is that the optimized batch_normalize belongs in an elliptic-curve repository (in our case, the halo2curves repo).
    So I believe we first need to introduce batch_normalize into the halo2curves repo (a sketch of the underlying batch-inversion trick follows this list).
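For reference, here is a minimal sketch of the batch inversion (Montgomery's trick) that sits at the heart of such a batch_normalize: one field inversion plus O(n) multiplications inverts all n values at once. The `one` and `invert` parameters are stand-ins, since the exact API halo2curves would expose is still to be decided.

```rust
/// A minimal sketch of Montgomery batch inversion (the core of a
/// batch_normalize for projective points). `one` and `invert` are stand-ins
/// for the field API; all inputs are assumed non-zero.
fn batch_inverse<F: Copy + std::ops::Mul<Output = F>>(
    values: &[F],
    one: F,
    invert: impl Fn(F) -> F,
) -> Vec<F> {
    // Forward pass: prefix[i] = values[0] * values[1] * ... * values[i].
    let mut prefix = Vec::with_capacity(values.len());
    let mut acc = one;
    for &v in values {
        acc = acc * v;
        prefix.push(acc);
    }
    // A single inversion of the full product.
    let mut inv_acc = invert(acc);
    // Backward pass: peel off one factor at a time to recover each inverse.
    let mut out = vec![one; values.len()];
    for i in (0..values.len()).rev() {
        out[i] = if i == 0 { inv_acc } else { inv_acc * prefix[i - 1] };
        inv_acc = inv_acc * values[i];
    }
    out
}
```

A batch_normalize would then use these batched inverses of the Z (or ZZ/ZZZ) coordinates to scale each point back to affine form using only multiplications.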

3. Optimizing parallelization code
I believe adapting this improvement needs a bit of work.
It is a refactoring of the evaluate_h process, but it needs some engineering of the parallelization code (a rough sketch of the chunked pattern is below).
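As a rough sketch of what that chunked pattern looks like, using rayon directly (halo2 wraps rayon behind its multicore module); the `evaluate_row` closure is just a placeholder for the real gate-evaluation logic:

```rust
use rayon::prelude::*;

/// A rough sketch of chunk-based parallel accumulation, the pattern used in
/// evaluate_h; `F` and `evaluate_row` stand in for the real field type and
/// gate-evaluation logic.
fn accumulate_rows<F>(values: &mut [F], evaluate_row: impl Fn(usize) -> F + Send + Sync)
where
    F: Send + std::ops::AddAssign,
{
    let chunk_size = (values.len() / rayon::current_num_threads()).max(1);
    values
        .par_chunks_mut(chunk_size)
        .enumerate()
        .for_each(|(chunk_idx, chunk)| {
            let start = chunk_idx * chunk_size;
            for (i, value) in chunk.iter_mut().enumerate() {
                // Each thread owns its chunk, so no locking is needed.
                *value += evaluate_row(start + i);
            }
        });
}
```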

4. Optimizing MSM
Regarding this optimization, I believe there are some ongoing discussions and PRs in both the halo2 and halo2curves repos.
For example, there are the following issues & PRs.
privacy-scaling-explorations/halo2curves#29
privacy-scaling-explorations/halo2curves#130
#187

One note is that @mratsim mentioned the wNAF method.
#187 (comment)
(I didn't check the details of the wNAF method.
Are these perhaps different methods that just have similar names?)
I think this topic should be discussed among those who are more familiar with it; a small recoding sketch is included below for reference.
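For anyone who, like me, hasn't looked at it yet: width-w NAF is a signed-digit recoding of the scalar such that at most one out of every w consecutive digits is non-zero, and since negation of points is cheap, only the odd positive multiples need to be precomputed. A toy sketch on a u64 scalar (real MSM code would recode full-width field scalars):

```rust
/// A toy sketch of width-w NAF recoding on a u64 scalar. Digits are odd and
/// lie in (-2^(w-1), 2^(w-1)); at most one of any w consecutive digits is
/// non-zero. The scalar k equals sum(digits[i] * 2^i).
fn wnaf(k: u64, w: u32) -> Vec<i64> {
    assert!((2..=16).contains(&w));
    let window = 1i128 << w;
    let mut k = k as i128;
    let mut digits = Vec::new();
    while k != 0 {
        let digit = if k & 1 == 1 {
            // Signed residue of k modulo 2^w.
            let mut d = k & (window - 1);
            if d >= window / 2 {
                d -= window;
            }
            // Subtracting d clears the low w bits, so the next w-1 digits are 0.
            k -= d;
            d as i64
        } else {
            0
        };
        digits.push(digit);
        k >>= 1;
    }
    digits
}
```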

5. Saving heap allocation
I believe adapting this improvement needs a bit of work.
It is a refactoring of how the variables are managed, and it needs some expertise in the protocol (see the buffer-reuse sketch below).
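As a trivial illustration of the buffer-reuse idea (the names are made up, not taken from halo2 or tachyon):

```rust
/// A minimal illustration of saving heap allocations: allocate one scratch
/// buffer up front and overwrite it each round, instead of building a fresh
/// Vec inside the hot loop.
fn run_rounds<F: Default + Clone>(
    num_rounds: usize,
    domain_size: usize,
    mut use_round: impl FnMut(&mut [F]),
) {
    // One allocation, reused across every round.
    let mut scratch: Vec<F> = vec![F::default(); domain_size];
    for _round in 0..num_rounds {
        // Overwrite in place; `let scratch = vec![...]` here would allocate
        // and free `domain_size` elements on every iteration.
        scratch.iter_mut().for_each(|slot| *slot = F::default());
        use_round(&mut scratch);
    }
}
```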

6. Benchmarks for each circuit
This is not an optimization per se, but it would be good if we had similar benchmarks.

@guorong009

Updated report about optimizing MSM

  1. Optimizing MSM
    In halo2curves, CycloneMSM is used to optimize the MSM.
    MSM optimisations: CycloneMSM halo2curves#130
    According to the PR, there is a 30–40% gain when using CycloneMSM.
    MSM optimisations: CycloneMSM halo2curves#130 (comment)
    Also, the following links show that the wNAF method seems inappropriate when k is large (e.g. k > 15):
    Towards state-of-the-art multi-scalar-muls #187 (comment)
    Towards state-of-the-art multi-scalar-muls #187 (comment)

One thing to discuss is the PointXYZZ type.
They say there is an improvement when using the PointXYZZ type as the default for bucket operations.
kroma-network/tachyon#36 (comment)
I believe it is worth introducing the PointXYZZ type to the halo2curves repo (a rough sketch of the representation is below).
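For reference, a rough sketch of what the XYZZ representation looks like, with mixed addition following the madd-2008-s formulas from the Explicit-Formulas Database. `F` is a placeholder field type, and the identity/doubling special cases are omitted, so this is not a drop-in halo2curves implementation.

```rust
/// XYZZ ("PointXYZZ") representation: a point is (X, Y, ZZ, ZZZ) with affine
/// coordinates x = X/ZZ, y = Y/ZZZ and the invariant ZZ^3 = ZZZ^2.
#[derive(Clone, Copy)]
struct PointXYZZ<F> { x: F, y: F, zz: F, zzz: F }

#[derive(Clone, Copy)]
struct Affine<F> { x: F, y: F }

impl<F> PointXYZZ<F>
where
    F: Copy
        + std::ops::Add<Output = F>
        + std::ops::Sub<Output = F>
        + std::ops::Mul<Output = F>,
{
    /// Mixed addition self + other (other in affine form), following
    /// madd-2008-s; assumes the points are distinct, non-identity, and not
    /// negatives of each other.
    fn add_mixed(&self, other: &Affine<F>) -> PointXYZZ<F> {
        let u2 = other.x * self.zz;
        let s2 = other.y * self.zzz;
        let p = u2 - self.x;
        let r = s2 - self.y;
        let pp = p * p;
        let ppp = p * pp;
        let q = self.x * pp;
        let x3 = r * r - ppp - (q + q);
        let y3 = r * (q - x3) - self.y * ppp;
        PointXYZZ {
            x: x3,
            y: y3,
            zz: self.zz * pp,
            zzz: self.zzz * ppp,
        }
    }
}
```

The appeal for bucket accumulation is that this mixed addition needs no inversions and, by my count of the formulas above, costs about 8 field multiplications plus 2 squarings per addition.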

cc @ed255 @davidnevadoc @kilic

@adria0
Member

adria0 commented May 21, 2024

BTW, there's a very nice video on MSM optimization tricks:

https://www.youtube.com/watch?v=KAWlySN7Hm8
