
Add an optimization for trivial batch sizes. #151

Open · wants to merge 1 commit into main
Conversation

nmathewson

[Hello, fine Dalek maintainers! This is my first open-source Rust patch.]

When told to validate zero signatures, there's no need to go through
any pre-computation to be sure that there are no invalid signatures
in the empty set.

When told to validate one signature, a single signature verification
call is faster than validating a single signature via the
batch-verification mechanism.

On my not-very-reliable desktop, the performance difference is ~99%
when there are no signatures, and ~25% when there is one signature.

Why would a programmer use batch verification in this case?
Sometimes it makes sense to write code that handles an unknown
number of signatures at once: for example, when validating a
download that contains multiple objects, between zero and many of
which are ed25519-signed objects. Instead of the programmer having
to pick which API to use, IMO it's better just to make the "batch"
API fast in all cases.

This commit adds cases for 0 and 1-signature batches to the
benchmark. It also adds a case for a 2-signature batch, to verify
that batch verification is faster for all n>1.
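To make the shape of the fast path concrete, here is a minimal sketch written as a free-standing wrapper. The wrapper name is hypothetical; in the actual commit the early returns live inside verify_batch itself (see the reviewed hunk below):

use ed25519_dalek::{verify_batch, PublicKey, Signature, SignatureError, Verifier};

/// Hypothetical wrapper illustrating the trivial-batch fast path.
pub fn verify_batch_with_fast_path(
    messages: &[&[u8]],
    signatures: &[Signature],
    public_keys: &[PublicKey],
) -> Result<(), SignatureError> {
    // (A full implementation would first check that all three slices
    // have equal length.)
    if signatures.is_empty() {
        // An empty batch trivially contains no invalid signatures.
        return Ok(());
    }
    if signatures.len() == 1 {
        // A lone signature is cheaper to check with ordinary single verification.
        return public_keys[0].verify(messages[0], &signatures[0]);
    }
    // For n > 1, defer to the existing batch machinery (requires the "batch" feature).
    verify_batch(messages, signatures, public_keys)
}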

isislovecruft self-requested a review · October 27, 2020 22:24
isislovecruft (Member) left a comment:

Hi @nmathewson! Congratulations on your first Rust patch!

I'm not sure how I feel about mixing the verification equations. Perhaps @valerini, @huitseeker, or @kchaliki have opinions? (Context: we did a bunch of work together documenting all the different types of ed25519 malleabilities.)

return Ok(());
} else if signatures.len() == 1 {
use crate::ed25519::signature::Verifier;
return public_keys[0].verify(messages[0], &signatures[0]);
isislovecruft (Member) commented on the hunk above:

Hmm. Thinking out loud here. I'm not entirely sure I'm okay with mixing the verification equations because of the subtle differences between the two (but anyone should feel free to try to convince me). See, for example, my and @valerini's comments on this issue. I suppose that, in general, single verification is safer than the linearised batching formula, but I still worry that mixing the two behaviours will result in difficult-to-debug verification errors, like the following situation:

  1. I call verify_batch() on bad signature (s1,R1), which fails.
  2. I call verify_batch() on bad signature (s1,R1) and crafted signature (s2,R2) which probabilistically cancels out the linearisation factors and succeeds.

However, I suppose the above behaviour is better than the current behaviour where calling verify_batch() on (s2,R2) alone would probabilistically succeed.

Okay, I think I've convinced myself that this behaviour is safer. My only remaining concern is the debugging confusions it might cause with a signature not verifying if passed into this function by itself versus in a batch with N>1.
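For readers unfamiliar with the "linearisation factors" mentioned above: batch verification does not check each signature's equation separately; it draws random scalars z_i and checks a single combined (cofactored) equation, roughly of the form

8\Big(-\big(\textstyle\sum_i z_i s_i\big)B + \sum_i z_i R_i + \sum_i (z_i h_i) A_i\Big) = 0, \qquad h_i = H(R_i \,\|\, A_i \,\|\, m_i),

so individual signatures are only ever checked in aggregate, weighted by the randomly drawn z_i.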

kchaliki replied:

Hi @isislovecruft - I am afraid you have the wrong person, probably looking for @kchalkias instead

huitseeker (Contributor) left a comment:


Table 3 in the paper details the issue at hand:
[screenshot: Table 3 from the paper]

Dalek has a cofactorless single verification with a cofactored batch verification (column "[2]+[3]"), so we can create what you'd call "false negatives" from the PoV of batch verification as a heuristic: a set S of signatures that will (probabilistically) pass batch verification but fail (iterated) single verification.

The present PR eliminates this possibility when N = |S| = 1 by making sure only single verification applies, but does not change the case |S| > 1, as you mentioned.

I think the way to fix the issue is to:

  1. add a cofactored single signature verification,
  2. make sure the batch verification is only available when using this cofactored verification by uniting them both under a "cofactored" feature flag (which would replace the current "batch" feature)
  3. then implement the spirit of this PR: reduce the batch verification call when N = 1 to the (now cofactored) single verification.

I'm happy to help with 1. & 2.
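For concreteness, a minimal sketch of what step 1 could look like, assuming the caller already has the decompressed public-key point A, the signature's commitment point R and scalar s, and the challenge scalar k = H(R || A || M) (the function name and signature here are hypothetical):

use curve25519_dalek::edwards::EdwardsPoint;
use curve25519_dalek::scalar::Scalar;
use curve25519_dalek::traits::IsIdentity;

/// Cofactored verification: accept iff 8([s]B - [k]A - R) is the identity,
/// instead of the cofactorless check [s]B - [k]A == R.
fn verify_cofactored(a: &EdwardsPoint, r: &EdwardsPoint, s: &Scalar, k: &Scalar) -> bool {
    // [s]B - [k]A, computed as (-k)·A + s·B against the basepoint B.
    let minus_k = -k;
    let sb_minus_ka = EdwardsPoint::vartime_double_scalar_mul_basepoint(&minus_k, a, s);
    // Multiply by the cofactor 8 before the identity check.
    (sb_minus_ka - r).mul_by_cofactor().is_identity()
}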


burdges commented Oct 29, 2020

Actually, features cannot be used here @huitseeker, because they are additive: any crate using "standard" cofactorless would silently be switched to cofactored whenever a crate enabling cofactored gets included.

You could use two different Signature types for cofactored and cofactorless, like I suggested in #117 (comment), or perhaps even some runtime flag:

pub struct Signature { bytes: [u8; SIGNATURE_LENGTH], cofactored: bool }


huitseeker commented Oct 29, 2020

@burdges I see your point.

I note, however, that we could make compilation blow up with something like

#[cfg(all(feature = "cofactored", feature = "cofactorless"))] // with cofactorless default
compile_error!(
    "only one signature paradigm should be used!"
);

But a more interesting alternative: since cofactored verification will always accept cofactorless-correct inputs, and the discrepancy in behavior can only be observed on signatures produced outside of any (draft-)standard (or this library), we could consider feature unification silently moving cofactorless users onto cofactored behavior to be a "feature" 😈

To make things compatible with unification, we'd hence define the cofactorless verification function (still the default) under

#[cfg(all(not(feature = "cofactored"), feature = "cofactorless"))] 

and the corresponding cofactored verification under

#[cfg(feature = "cofactored")] 

The real risk occurs across libraries / implementations, where you could maliciously craft signatures so that one implementation disagrees with another, not if all your code is "downgraded" from cofactorless unbeknownst to you. You would be "downgraded", after all, to the recommended standard behavior.

Because the "cofactored" feature would be non-default, the feature approach gives us a meaningful way to ensure, upon cofactored opt-in, that all the components of a code base are turned to cofactored behavior. At the admitted cost of possibly introducing a discrepancy with another implementation (e.g. ring), but:

  • that's why you have the opt-in,
  • I am tempted to submit a PR for cofactored verification to BoringSSL (and all the others) and replicate the feature trick in ring.


burdges commented Oct 29, 2020

You could just name the cofactored type Signature and the cofactorless one SignatureLegacy, or whatever; then you encourage cofactored, but nobody who reads the docs carefully gets surprised. We'd never produce SignatureLegacy by signing, of course, only by deserialization.
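A minimal sketch of that shape (only the two type names come from the suggestion above; everything else is illustrative):

pub const SIGNATURE_LENGTH: usize = 64;

/// Verified with the cofactored equation; this is what signing produces.
pub struct Signature(pub [u8; SIGNATURE_LENGTH]);

/// Verified with the legacy cofactorless equation; never produced by signing,
/// only obtained by explicitly deserializing existing signature bytes.
pub struct SignatureLegacy(pub [u8; SIGNATURE_LENGTH]);

impl SignatureLegacy {
    pub fn from_bytes(bytes: [u8; SIGNATURE_LENGTH]) -> Self {
        SignatureLegacy(bytes)
    }
}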


huitseeker commented Oct 30, 2020

I see.
That works for the purpose of bringing the ecosystem to cofactored verification, but:

  • it puts the burden on dalek maintainers to keep an explicit cofactorless signature around forever, as soon as dependencies start using it,
  • I'm not sure anybody "reads docs carefully", so I'm worried that it is not opt-in. Most people would be silently "switched" to cofactored given their current code uses Signature. As a consequence, we get the "reverse" dependency problem: those inattentive Dalek users would be performing a cofactored verification while the rest of the ecosystem is still, largely, on cofactorless.


burdges commented Oct 30, 2020

I suspect features cause more harm than good here, but you could have a feature that controls a type alias.
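Something like this, for instance (all names hypothetical):

// Two concrete signature flavours, one per verification equation.
pub struct CofactoredSignature(pub [u8; 64]);
pub struct CofactorlessSignature(pub [u8; 64]);

// A feature-controlled alias, so downstream code keeps writing `Signature`
// while the opt-in feature decides which verification equation it gets.
#[cfg(feature = "cofactored")]
pub type Signature = CofactoredSignature;
#[cfg(not(feature = "cofactored"))]
pub type Signature = CofactorlessSignature;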
