multi-party-bls

Rust implementation of {t,n}-threshold BLS over the BLS12-381 elliptic curve. Currently, two protocols are implemented:

  • Aggregated BLS. Based on the MSP protocol (BDN18, section 3.1).
  • Threshold BLS assuming a dishonest majority. Based on Threshold GLOW signatures (GLOW20, version 20200806:135847).

Threshold BLS performance

We deployed 3 parties on dedicated AWS t3.medium instances and measured the keygen and signing running times (t=1, n=3). Here are the results:

  • Keygen
    • Mean: 158.4ms
    • Std: 18.4ms
  • Signing
    • Mean: 45.5ms
    • Std: 21.2ms

How to use it

To execute any protocol (keygen/signing) in a tokio async environment, you need to define the message delivery logic and construct a stream of incoming messages and a sink for outgoing messages. Then you can execute the protocol using AsyncProtocol (see below).

Message delivery must meet the following security assumptions:

  • Any P2P message must be encrypted so that no one except the recipient can read it
  • Broadcast messages must be signed so that no one can forge the message sender
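
The sketch below is not part of this crate; it only illustrates where these two requirements apply, based on the receiver field of round_based's Msg (Some(i) marks a P2P message addressed to party i, None marks a broadcast). encrypt_for_recipient and sign_as_sender are hypothetical placeholders for whatever your transport layer provides.

use round_based::Msg;

// Hypothetical routing of an outgoing message before it is put on the wire
fn secure_outgoing<B>(msg: &Msg<B>) {
    match msg.receiver {
        // P2P message: encrypt the body so that only the recipient can read it
        Some(recipient) => {
            // encrypt_for_recipient(recipient, &msg.body);
            let _ = recipient;
        }
        // Broadcast message: sign it so that other parties can authenticate msg.sender
        None => {
            // sign_as_sender(msg.sender, &msg.body);
        }
    }
}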

Keygen

use futures::{Sink, Stream, stream::FusedStream};
use round_based::{Msg, AsyncProtocol};
use bls::threshold_bls::state_machine::keygen::{Keygen, ProtocolMessage};

// RecvErr / SendErr are whatever error types your message-delivery layer defines
async fn connect() -> Result<(
   // Party's unique index in range [1;parties_count]
   u16,
   // Incoming messages
   impl Stream<Item=Result<Msg<ProtocolMessage>, RecvErr>> + FusedStream + Unpin,
   // Outgoing messages
   impl Sink<Msg<ProtocolMessage>, Error=SendErr> + Unpin,
)> {
   // ...
}

let (i, incoming, outgoing) = connect().await?;
// n - number of parties involved in keygen, t - threshold value, i - party's index
let keygen = Keygen::new(i, t, n)?;
let local_key = AsyncProtocol::new(keygen, incoming, outgoing)
    .run().await?;
println!("Public key: {:?}", local_key.public_key());

See our demo for a more concrete example: it has a join method that sets up message delivery (similar to the connect function above), and we call it in both keygen and signing.
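
Since keygen and signing typically run as separate processes (as in the demo), the key share has to be persisted in between. Below is a minimal sketch, assuming the keygen output implements serde's Serialize/Deserialize (the demo stores key shares in files); the path and JSON format are purely illustrative.

use tokio::fs;

// After keygen: store the local key share (it is secret material - protect the file)
fs::write("target/keys/key1", serde_json::to_vec(&local_key)?).await?;

// Before signing: load it back (a type annotation with the keygen output type may be needed)
let local_key = serde_json::from_slice(&fs::read("target/keys/key1").await?)?;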

Sign

use futures::{Sink, Stream, stream::FusedStream};
use round_based::{Msg, AsyncProtocol};
use bls::threshold_bls::state_machine::sign::{Sign, ProtocolMessage};

async fn connect() -> Result<(
    // Party's unique index in range [1;parties_count]
    u16,
    // Incoming messages
    impl Stream<Item=Result<Msg<ProtocolMessage>, RecvErr>> + FusedStream + Unpin,
    // Outgoing messages
    impl Sink<Msg<ProtocolMessage>, Error=SendErr> + Unpin,
)> {
    // ...
}

let (i, incoming, outgoing) = connect().await?;
// message - bytes to sign, n - number of parties involved in signing,
// local_key - local secret key obtained by this party at keygen
let signing = Sign::new(message, i, n, local_key)?;
let (_, sig) = AsyncProtocol::new(signing, incoming, outgoing)
    .run().await?;
println!("Signature: {:?}", sig);

Demo

Using the demo CLI app, you can generate a key in a distributed way and sign data.

  1. (Optional) Set an environment variable to see log messages:

    export RUST_LOG=demo=trace
  2. Start the mediator server:

    cargo run --example cli -- mediator-server run

    The mediator server allows parties to communicate with each other. By default, it listens on 127.0.0.1:8333

  3. Run distributed keygen by launching N parties:

    cargo run --example cli -- keygen -t 1 -n 3 --output target/keys/key1
    cargo run --example cli -- keygen -t 1 -n 3 --output target/keys/key2
    cargo run --example cli -- keygen -t 1 -n 3 --output target/keys/key3

    This will generate a key shared among 3 parties with threshold t=1. Every party connects to the mediator server and uses it to send and receive protocol messages to/from the other parties.

    Every party will output the resulting public key, e.g.:

    Public key: 951f5b5bc45af71346f4a7aee6b50670c07522175f7ebd671740075e4247b45f5f03206ae8274d77337eae797e0f69490cca3ee5da31eb5f8746dd942034550dff5c4695ee7160f32bfa8424d40e3690bdd7cf4d58e9ab5d03d00d50fc837278
    

    The parties' private local key shares will be stored in the target/keys folder

  4. Let's sign some data using 2 parties:

    cargo run --example cli -- sign -n 2 --key target/keys/key1 --digits some-data
    cargo run --example cli -- sign -n 2 --key target/keys/key2 --digits some-data

    Every party will output the same signature, e.g.:

    Signature: acbac87f8168d866df8d1f605cf8d688c64ae491e6d6cbc60db4fc0952dc097452f252cb2f746a948bac0e2311e6c14e
    
  5. Then let's check that the signature is indeed valid. You can use the command:

    cargo run --example cli -- verify --digits DATA --signature SIG --public-key PK

    E.g.:

    cargo run --example cli -- verify --digits some-data \
      --signature acbac87f8168d866df8d1f605cf8d688c64ae491e6d6cbc60db4fc0952dc097452f252cb2f746a948bac0e2311e6c14e \
      --public-key 951f5b5bc45af71346f4a7aee6b50670c07522175f7ebd671740075e4247b45f5f03206ae8274d77337eae797e0f69490cca3ee5da31eb5f8746dd942034550dff5c4695ee7160f32bfa8424d40e3690bdd7cf4d58e9ab5d03d00d50fc837278

    Output:

    Signature is valid
    

Note that if you need to run several protocols (keygen/sign) concurrently, you must provide a unique identifier to each group of parties by specifying the --room-id flag. To learn more, see cargo run --example cli -- keygen --help
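
For example, if two independent groups run keygen against the same mediator server, every party in the first group could pass --room-id keygen-a while every party in the second group passes --room-id keygen-b (the room names are illustrative; check the --help output above for the exact flag syntax):

    cargo run --example cli -- keygen -t 1 -n 3 --room-id keygen-a --output target/keys/a1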

Development

Detecting performance regression

We use statistics-driven benchmarks backed by criterion to detect regressions. Please follow these instructions to see how your changes affect performance:

  1. Check out the commit before your changes (don't forget to commit all your changes first)
  2. Run benchmarks:
    cargo bench --bench criterion --features dev
    It will take a few minutes. After that, you will find an HTML report at ./target/criterion/report/index.html containing the benchmark results along with rendered charts.
  3. Check out the commit with your changes again
  4. Run benchmarks again:
    cargo bench --bench criterion --features dev
    Criterion will report any regression it finds right in the console output. The HTML report (./target/criterion/report/index.html) will be updated and will describe the performance differences more precisely.
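
Alternatively, criterion's standard baseline flags (a criterion CLI feature, not specific to this repo) let you name the "before" run and compare against it explicitly instead of relying on the implicit previous run:

    # on the commit before your changes
    cargo bench --bench criterion --features dev -- --save-baseline before
    # on the commit with your changes
    cargo bench --bench criterion --features dev -- --baseline before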

Note that the benchmark results do not reflect the real-world performance of the multi-party computation, since everything is computed sequentially rather than in parallel. We do not allocate a separate thread for every party, as that would make it harder to reason about performance differences.

Warning

Do not use this code in production before consulting with us. Feel free to reach out or join the ZenGo X Telegram.
