  AIP: 0025
  Title: Providing PoW-like guarantees on (D)PoS networks
  Authors: Moazzam Abdullah Khan, Asif Mehmood
  Status: Draft
  Type: Standards Track
  Discussion: https://github.com/ArkEcosystem/AIPs/issues/39
  Created: 2018-12-08
  Last Update: 2019-01-10

Abstract

Traditional Proof of Stake (PoS) and Delegated Proof of Stake (DPoS) approaches achieve probabilistic Byzantine Fault Tolerant consensus without creating the race condition that Proof of Work (PoW) systems create. As a result these approaches waste less energy and are therefore able to provide utility at a lower cost. However, one drawback of conventional (D)PoS approaches is that they cannot guarantee data immutability, owing to predictable hash sequences, nor can they offer easy re-sync guarantees by accepting only the longest chain as valid. This proposal aims to address these concerns by introducing an approach that determines the producer of the next block while keeping that producer hidden until the block is actually announced. This enables strong guarantees that the data is immutable and that the next block's forger isn't known to the network beforehand, thereby eliminating a DDoS attack vector.

Motivation

As cryptocurrencies see wider use, it is desirable that the infrastructure and algorithms required to deploy them become more cost efficient, so that barriers to entry for aspiring developers and dapp creators are reduced. This desire has given rise to the PoS and DPoS paradigms, which achieve the same level of consensus as PoW systems at several orders of magnitude lower cost. However, these (D)PoS systems don't provide a hard guarantee that data committed to the blockchain cannot be changed. As a result, (D)PoS systems cannot take advantage of the longest-chain-valid principle that creates consensus in PoW systems, because doing so would make them vulnerable to deep reorg attacks. This kind of attack is undetectable in current schemes and is fairly cheap to carry out. Because of this there is no easy way to trust data provided by other nodes, and local backups of the blockchain have to be relied upon to guarantee that the data in older blocks has not been changed. These are critical issues that need to be addressed, and one approach to solving them is discussed below. This proposal describes the approach in the context of Ark, but it can be applied to any Proof of Stake system.

Rationale

The main emphasis of this approach is to create a verifiable random function (VRF) that is used to generate random numbers, which allow us to select the next block producer. This is achieved by making each delegate submit a pledge containing the hash of a secret random number. If the random number generated by the VRF collides with the pledged secret submitted by a delegate, then that particular delegate is allowed to create the next block. While creating the new block, the delegate adds the secret random number as a claim corresponding to the colliding pledge, as proof that allows the other delegates to validate its right to produce the block.
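
As an illustration, a minimal sketch of this commit-and-reveal step is given below. The helper names (createPledge, verifyClaim) and the use of Node's crypto module are assumptions made for the example and are not part of the Ark v2 protocol.

import { createHash, randomBytes } from "crypto";

const sha256 = (s: string): string =>
	createHash("sha256").update(s).digest("hex");

// Commit phase: the delegate publishes only the hash of (secret, salt).
function createPledge() {
	const secret = BigInt("0x" + randomBytes(32).toString("hex")); // large secret number
	const salt = Math.floor(Date.now() / 1000);                    // unix time epoch used as salt
	const publicRand = sha256(secret.toString() + salt.toString());
	return { publicRand, secret, salt };
}

// Reveal phase: any delegate can check that the revealed (secret, salt)
// re-hashes to the value pledged earlier.
function verifyClaim(pledgedHash: string, secret: bigint, salt: number): boolean {
	return sha256(secret.toString() + salt.toString()) === pledgedHash;
}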

One issue that must be pointed out when working with VRF implementations is the last actor problem: the latest block producer can keep producing and discarding blocks until they find one that also confirms them as the next block producer. To circumvent this issue altogether, we mandate that the VRF is created from blocks at least N deep below the current height H of the blockchain, where N is the total number of forging delegates.

  1. Bootstrapping process

    • Randomized round robin for two rounds using the current approach implemented by the Ark v2 protocol
    • During these rounds, with each block, the block producer commits the sha256 hash of a string containing a large secret number and the time epoch to the blockchain via a special pledge transaction (discussed below)
  2. Creating the VRF

    • The hashes pledged by the delegates in the blocks from depth N to 2*N below the current block height H are collected and concatenated into a string. This string is then hashed using sha256 to produce the new random number. With this approach all N delegates get an equal chance to contribute to the VRF's next output, and there is no way for a minority of colluding parties to hijack the random number generator.
  3. Selecting the Next Block Producer

    • The newly generated random number is reduced modulo a number C smaller than N
    • Each delegate reduces their secret number modulo C as well
    • If the two resulting numbers match for a delegate, that delegate has the right to forge the next block
    • Other nodes will only accept a block whose claim corresponds to the earliest unused pledge and will reject a claim based on a pledge committed in a later block
    • The block produced by the chosen delegate will contain the claim, which reveals the secret random number and the salt used before hashing it. The salt in this case is the unix time epoch and it must be close to the transaction creation time for the claim to be considered valid (a sketch of steps 2 and 3 follows this list)
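
The sketch below illustrates steps 2 and 3. How the pledged hashes are fetched from blocks at depth N to 2*N is left out, and the function names and use of Node's crypto module are illustrative assumptions.

import { createHash } from "crypto";

const sha256hex = (s: string): string =>
	createHash("sha256").update(s).digest("hex");

// Step 2: concatenate the pledged hashes taken from blocks at depth N to 2*N
// below the current height H and hash them to obtain the VRF output.
function vrfOutput(pledgedHashes: string[]): bigint {
	return BigInt("0x" + sha256hex(pledgedHashes.join("")));
}

// Step 3: a delegate may forge the next block if its secret and the VRF
// output leave the same remainder modulo C.
function mayForge(vrf: bigint, mySecret: bigint, C: bigint): boolean {
	return vrf % C === mySecret % C;
}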

Using this approach it is possible that multiple delegates have submitted numbers that produce the same remainder when reduced modulo C, or that no delegate has submitted a number that corresponds to the current VRF output. These cases must be handled explicitly in the protocol.

  1. If more than one delegate has submitted pledges that result in the same remainder, then the unused pledge that was submitted in the earliest block is used. Delegates can estimate a risk factor for producing a block that will be orphaned based on the depth of the pledge being used as the claim. Using this estimate a delegate can decide to publish a block immediately, or wait roughly 2 seconds to check whether someone else has already published a block and, if not, publish its own.

  2. If no block is heard after a reasonable amount of time (say ~4 seconds), it can be assumed that no one has submitted a pledge corresponding to the current VRF output. To make sure the chain doesn't stall, the sha256 of the VRF output is calculated and used as the collision target for the next block producer. This process is repeated until a block is produced, after which the normal mechanism for VRF calculation resumes. A block will only be accepted by peers within the corresponding time window (~4 seconds as described above); once the window has expired it will not be accepted. The span of each window must be defined with the number of delegates in mind and should be roughly longer than the time it takes for 51% of the nodes in the network to receive a new block (a sketch of this fallback follows below).
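
A sketch of the stall-avoidance rule in point 2 is given below: if no block arrives within the window, every node re-hashes the collision target and tries again. The ~4 second window and the helper names are assumptions made for the example.

import { createHash } from "crypto";

const sha256hex = (s: string): string =>
	createHash("sha256").update(s).digest("hex");

const WINDOW_MS = 4000; // ~4 seconds, as suggested above

// Collision target for fallback round k: k = 0 is the original VRF output,
// k = 1 its sha256, k = 2 the sha256 of that, and so on.
function collisionTarget(vrf: bigint, round: number): bigint {
	let target = vrf;
	for (let i = 0; i < round; i++) {
		target = BigInt("0x" + sha256hex(target.toString(16)));
	}
	return target;
}

// Peers only accept a block produced for round k while the k-th window is open.
function currentRound(vrfBlockTimeMs: number, nowMs: number): number {
	return Math.floor((nowMs - vrfBlockTimeMs) / WINDOW_MS);
}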

An additional constraint is that each delegate can only have up to 3 pledges active at a time. If more than 3 are submitted, only the latest 3 are considered valid.
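
A minimal sketch of this limit, assuming pledges are ordered by the height at which they were committed:

interface Pledge {
	publicRand: string;
	height: number; // height of the block the pledge was committed in
}

// Only the latest 3 pledges of a delegate are considered valid.
function activePledges(pledges: Pledge[]): Pledge[] {
	return [...pledges].sort((a, b) => a.height - b.height).slice(-3);
}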

Specifications

The pseudocode for a forging node is presented below, with the forge() function as the entry point:

inputs:
N = number of delegates (or minimum number of PoS miners);
C = integer less than N;
D = dictionary storing each pledged hash and its secret locally;
unclaimed_block_hash = hash the block would have if the claim transaction were ignored

algorithm:
// Commit phase: hash a fresh secret together with the current unix time epoch (the salt).
function create_pledge()
{
	rand = random.next()
	salt = unix_time_epoch
	public_rand = sha256(string(rand) + string(salt))
	return public_rand, rand, salt
}

function forge()
{
	// Always keep an active pledge committed to the chain.
	if no_active_pledge
	{
		public_rand, rand, salt = create_pledge()
		D.add_item(public_rand, {rand, salt})
		publish_pledge_tx(public_rand)
	}

	if H > 2*N
	{
		// VRF output: hash of the pledges committed between depth N and 2*N below H, reduced modulo C.
		j = sha256( concatenate_hashes_from_height(H-N, H-2*N) ) % C

		// Check each of our own pledges that is old enough to be eligible (committed below height H-N).
		for each pledge_i in D below height H-N
		{
			if D.at(pledge_i).rand % C == j
			{
				// Our secret collides with the VRF output: replace the spent pledge,
				// reveal the secret via a claim transaction and forge the next block.
				D.remove(pledge_i)
				public_rand, rand, salt = create_pledge()
				D.add_item(public_rand, {rand, salt})
				publish_pledge_tx(public_rand)
				publish_claim_tx(pledge_i, unclaimed_block_hash)
				create_next_block()
				break
			}
		}
	}
	else
	{
		// Bootstrapping phase: fall back to the current Ark v2 round robin.
		follow_old_algorithm()
	}
}

The pledge and claim transactions are a new transaction type with a dynamic fee, defined to avoid a nothing-at-stake condition when bidding on competing forks of the same chain. If a delegate produces a block that is uncled, all relays extract the pledge and claim transactions specified within it and add them to the mempool to be forged by the next block producer. The transaction fee should be selected based on a tradeoff between uncle rate and block time delay. If too many uncle blocks are being produced, the fee should be increased to penalize the delegates producing uncled blocks and change their risk/reward compromise. However, if the fee is too high, delegates might be reluctant to produce blocks even when they have a valid claim, simply because they fear that someone else might have an earlier claim, and thereby delay the production of the next block unnecessarily.

Pledge transaction example

type data
0xfe (public_rand)

Claim transaction example

type data
0xff (rand):(salt):(unclaimed_block_hash)

unclaimed_block_hash is what the hash of the block would have been if the claim transaction weren't included in the block data (excluding the signature). This is meant to discourage double forging, i.e. producing competing blocks from the same claim, which would lead to a fork.
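
One possible way to compute unclaimed_block_hash is sketched below: serialize the block with the claim transaction and the signature removed, then hash the result. The block structure and serialization shown here are assumptions, not the Ark v2 block format.

import { createHash } from "crypto";

interface Tx { type: number; data: string; }
interface Block { header: object; transactions: Tx[]; signature?: string; }

const CLAIM_TYPE = 0xff;

function unclaimedBlockHash(block: Block): string {
	// Drop the claim transaction and the block signature before hashing.
	const stripped = {
		header: block.header,
		transactions: block.transactions.filter(tx => tx.type !== CLAIM_TYPE),
	};
	return createHash("sha256").update(JSON.stringify(stripped)).digest("hex");
}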

Finally, a delegate can assess the risk of having its block rejected due to an earlier claim by looking at the revealed claims, calculating the collision probability using the birthday problem formula, and estimating its profitability using the Kelly criterion.
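
One way this estimate could look is sketched below. Treating the other outstanding pledges as uniform residues modulo C gives a birthday-problem style collision probability, and the Kelly criterion then suggests how much risk is worth taking; the exact formulas are left to the delegate and are not fixed by this proposal.

// Probability that at least one of the k other outstanding pledges shares the
// delegate's residue modulo C, assuming uniformly distributed secrets.
function collisionProbability(otherPledges: number, C: number): number {
	return 1 - Math.pow((C - 1) / C, otherPledges);
}

// Kelly fraction f* = p - q/b, with win probability p, q = 1 - p and b the
// ratio of the block reward to the cost of an orphaned block.
function kellyFraction(winProbability: number, rewardToLoss: number): number {
	return winProbability - (1 - winProbability) / rewardToLoss;
}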

In this algorithm the delegates can tweak the fees for the two new transaction types, as well as the variable C, to get the best balance between network performance and uncle rate. To enable this, we propose that each block also specify a suggested value for C, and that the median of the values from the last N blocks be used to calculate the next block.
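
As a sketch of this adjustment, assuming each block carries a suggested C value, the value used for the next block could be derived as follows:

// Median of the C values suggested in the last N blocks.
function nextC(suggestedValues: number[]): number {
	const sorted = [...suggestedValues].sort((a, b) => a - b);
	const mid = Math.floor(sorted.length / 2);
	return sorted.length % 2 === 1
		? sorted[mid]
		: Math.floor((sorted[mid - 1] + sorted[mid]) / 2);
}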