Request: Add support for non-buffered hash functions that take array inputs #71

TheFrozenFire opened this issue Oct 31, 2022 · 3 comments

@TheFrozenFire

Some hash functions use different constants depending on the number of rounds (input elements). The current behaviour when hashing pairs of nodes is to concatenate the elements before hashing them. This makes sense for hashing algorithms like SHA-2, where the hasher implementation takes care of decomposing the input into fixed-length blocks, applying padding, and so on.
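
For reference, a minimal sketch of that concatenate-then-hash pattern (leftChild and rightChild are hypothetical sibling-node Buffers, not the library's actual internals):

// Concatenate the two children, then hash the combined bytes.
// SHA-256 handles block decomposition and padding internally.
const crypto = require('crypto')
const sha256 = (data) => crypto.createHash('sha256').update(data).digest()

const leftChild = sha256(Buffer.from('a'))   // hypothetical sibling nodes
const rightChild = sha256(Buffer.from('b'))
const parent = sha256(Buffer.concat([leftChild, rightChild]))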

For other hashing algorithms like Poseidon, which is used extensively in zero-knowledge circuits, the behaviour when hashing multiple elements differs according to the number of elements supplied. As such, concatenating the elements before supplying them to the hasher makes those algorithms infeasible to use.

Put more precisely, elliptic curve compression functions accept "field elements", not strings of binary data, and combining two or more field elements is not done by concatenating their binary representations.
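
To make that concrete, a hedged illustration with circomlibjs (the input values are arbitrary; the point is that Poseidon takes an array of field elements rather than a byte string):

const { buildPoseidon } = require('circomlibjs')

const main = async () => {
  const poseidon = await buildPoseidon()
  // An arity-2 Poseidon call over two field elements. The permutation's
  // constants depend on how many elements are passed, so there is no
  // equivalent call over the concatenated byte representations.
  const hash = poseidon([1, 2])
  console.log(poseidon.F.toString(hash))
}

main()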

A backwards-compatible solution for this would be to allow the "concatenator function" to be specified as an option, with the default being Buffer.concat. However, it seems like the use of Buffers as inputs to hash functions is a bit baked in, and will need some careful untangling.

@miguelmota
Member

miguelmota commented Nov 17, 2022

Hey @TheFrozenFire, you brought up great points, and the library has been updated to support an optional concatenator function. The hash function just needs to be wrapped to big-numberify its inputs and bufferify its outputs, since the library currently only deals with Buffers. These conversions are of course inefficient, but it should at least work and can be optimized in the near future.

Here's an example:

const { MerkleTree } = require('merkletreejs')
const { buildPoseidon } = require('circomlibjs')

const main = async () => {
  const poseidon = await buildPoseidon()

  // Wrap Poseidon to big-numberify the inputs and bufferify the output,
  // since the library deals in Buffers internally.
  const poseidonHash = (inputs) => {
    const hash = poseidon(inputs.map(MerkleTree.bigNumberify))
    const bn = MerkleTree.bigNumberify(poseidon.F.toString(hash))
    return MerkleTree.bufferify(bn)
  }

  console.log(poseidonHash([1, 2]).toString('hex')) // '115cc0f5e7d690413df64c6b9662e9cf2a3617f2743245519e19607a4417189a'

  const leaves = [1, 2, 3, 4].map(x => poseidonHash([x]))
  const tree = new MerkleTree(leaves, poseidonHash, {
    // Don't Buffer.concat the sibling hashes; return the node list as-is,
    // i.e. [hashA, hashB], so Poseidon receives two field elements.
    concatenator: (hashes) => hashes
  })

  console.log(tree.getHexRoot()) // '0xd24e045226875e22b37ce607ea2af7a9fbb137ee128caa0ce3663615350245'
}

main()

Let me know if this doesn't make sense and/or if you have further suggestions!

@TheFrozenFire
Author

Awesome. That's looking very good. I'll test this out in a likely scenario (e.g. with Semaphore), and will report back on any issues.

From a quick skim of the code, I do see one thing that may be an issue: in combineHashes, you do hashes = hashes.filter(x => x.byteLength > 0).

I cannot find a consistent description of the byteLength property of Buffer objects, and it seems to always be 0 in every isolated test I've run.

Additionally, it would be pretty normal for elements of a hash to be 0. Removing empty elements from the input would be unexpected behaviour.
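
A quick hedged illustration of the concern, using plain Node.js semantics rather than the library's actual code path:

// Any element without a byteLength property (e.g. a BigInt node value)
// fails the filter, since undefined > 0 is false:
console.log([1n, 2n].filter(x => x.byteLength > 0)) // => []

// And an empty Buffer, which could legitimately represent a zero
// element, is silently dropped as well:
console.log(Buffer.alloc(0).byteLength) // => 0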

@miguelmota
Member

@TheFrozenFire ah yea, you're right, that shouldn't be there. The redundant combineHashes method has now been removed, so it just calls this.concatenator(...) when passing nodes to the hash function.
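
For anyone following along, a hedged sketch of the simplified shape (hashPair, hashFn, and concatenator are illustrative names, not the library's actual internals):

// Nodes go straight through the concatenator into the hash function,
// with no intermediate filtering step.
const hashPair = (left, right, hashFn, concatenator) =>
  hashFn(concatenator([left, right]))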
