The Crosslink Construction

We are now ready to give a description of a variant of Snap‑and‑Chat that takes into account the issues described in Notes on Snap‑and‑Chat, and that implements bounded dynamic availability. I call this the “Crosslink” construction. I will try to make the description relatively self-contained, but knowledge of the Snap‑and‑Chat construction from [NTT2020] (arXiv version) is assumed.

Conventions

“⋆” is a metavariable for the name of a protocol. We also use it as a wildcard in protocol names of a particular type, for example “bc⋆” for the name of some best‑chain protocol.

Protocols are referred to as Π⋆ for a name “⋆”. Where it is useful to avoid ambiguity, when referring to a concept defined by Π⋆ we prefix it with “⋆‑”.

We do not take synchrony or partial synchrony as an implicit assumption of the communication model; that is, unless otherwise specified, messages between protocol participants can be arbitrarily delayed or dropped. A given message is received at most once, and messages are nonmalleably authenticated as originating from a given sender whenever needed by the applicable protocol. Particular subprotocols may require a stronger model.

Background

For an overview of communication models used to analyse distributed protocols, see this blog post by Ittai Abraham.

Discussion of incorrect applications of the GST formalization of partial synchrony to continuously operating protocols. (You’re doing it wrong!)

The original context for the definition of the partially synchronous model in [DLS1988] was for “one‑shot” Byzantine Agreement — called “the consensus problem” in that paper. The following argument is used to justify assuming that all messages from the Global Stabilization Time onward are delivered within the upper time bound Δ:

Therefore, we impose an additional constraint: For each execution there is a global stabilization time (GST), unknown to the processors, such that the message system respects the upper bound Δ from time GST onward.

This constraint might at first seem too strong: In realistic situations, the upper bound cannot reasonably be expected to hold forever after GST, but perhaps only for a limited time. However, any good solution to the consensus problem in this model would have an upper bound L on the amount of time after GST required for consensus to be reached; in this case it is not really necessary that the bound Δ hold forever after time GST, but only up to time GST + L. We find it technically convenient to avoid explicit mention of the interval length L in the model, but will instead present the appropriate upper bounds on time for each of our algorithms.

Several subsequent authors applying the partially synchronous model to block chains appear to have forgotten this context. In particular, the argument depends on the protocol completing soon after GST. Obviously a block-chain protocol does not satisfy this assumption; it is not a “one‑shot” consensus problem.

This assumption could be removed, but some authors of papers about block-chain protocols have taken it to be an essential aspect of modelling partial synchrony. I believe this is contrary to the intent of [DLS1988]:

Instead of requiring that the consensus problem be solvable in the GST model, we might think of separating the correctness conditions into safety and termination properties. The safety conditions are that no two correct processors should ever reach disagreement, and that no correct processor should ever make a decision that is contrary to the specified validity conditions. The termination property is just that each correct processor should eventually make a decision. Then we might require an algorithm to satisfy the safety conditions no matter how asynchronously the message system behaves, that is, even if Δ does not hold eventually. On the other hand, we might only require termination in case Δ holds eventually. It is easy to see that these safety and termination conditions are [for the consensus problem] equivalent to our GST condition: If an algorithm solves the consensus problem when Δ holds from time GST onward, then that algorithm cannot possibly violate a safety property even if the message system is completely asynchronous. This is because safety violations must occur at some finite point in time, and there would be some continuation of the violating execution in which Δ eventually holds.

This argument is correct as stated, i.e. for the one‑shot consensus problem. Subtly, essentially the same argument can be adapted to protocols with safety properties that need to be satisfied continuously. However, it cannot correctly be applied to liveness properties of non‑terminating protocols. The authors (Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer) would certainly have known this: notice how they carefully distinguish “the GST model” from “partial synchrony”. They cannot plausibly have intended this GST formalization to be applied unmodified to analyze liveness in such protocols; yet that seems to be common in the block‑chain literature, including in the Ebb‑and‑Flow paper [NTT2020] and the Streamlet paper [CS2020]. (The latter does refer to “periods of synchrony”, which indicates awareness of the issue, but then it uses the unmodified GST model in the proofs.)

This provides further motivation to avoid taking the GST formalization of partial synchrony as a basic assumption.

For simplicity, we assume that all events occur at global times in a total ordering. This assumption is not realistic in an asynchronous communication model, but it is not essential to the design or analysis and could be removed (essentially: replace times with events and use a partial happens-before ordering on events, in place of a total ordering on times).

A ⋆‑execution is the complete set of events (message sends/receives and decisions by protocol participants) that occur in a particular run of Π⋆ from its initiation up to a given time. A prefix of a ⋆‑execution is also a ⋆‑execution. Since executions always start from protocol initiation, a strict suffix of a ⋆‑execution is not a ⋆‑execution.

If Π⋆ maintains a (single) block chain, 𝒪⋆ refers to the genesis block of a ⋆‑chain.

Let ⋆‑txns(C) be the sequence of transactions in the given chain C, starting from genesis.

For convenience we conflate ⋆‑blocks with ⋆‑chains; that is, we identify a chain with the block at its tip. This is justified because, assuming that the hash function used for parent links is collision‑resistant, there can be only one ⋆‑chain corresponding to a ⋆‑block.

If C is a ⋆‑chain, C ⌈k means C with the last k blocks pruned, except that if len(C) ≤ k, the result is the ⋆‑chain consisting only of 𝒪⋆.

The block at depth k in a ⋆‑chain C is defined to be the tip of C ⌈k. Thus the block at depth σ in a chain is the last one that cannot be affected by a rollback of length σ.

Terminology note

Our usage of “depth” is different from [NTT2020], which uses “depth” to refer to what Bitcoin and Zcash call “height”. It also differs by one from the convention for confirmation depths in zcashd, where the tip is considered to be at depth 1, rather than 0.

For ⋆‑blocks B and B′,

  • the notation B ⪯⋆ B′ means that the ⋆‑chain with tip B is a prefix of the one with tip B′. This includes the case B = B′.
  • the notation B ⪯⪰⋆ B′ means that either B ⪯⋆ B′ or B′ ⪯⋆ B. That is, “one of B and B′ is a prefix of the other”.

We also use ⪯ (without a subscript on ⪯) to mean that the transaction ledger L is a prefix of the transaction ledger L′. Similarly to above, L ⪯⪰ L′ means that either L ⪯ L′ or L′ ⪯ L; that is, “one of L and L′ is a prefix of the other”.

The notation ⟨ f(B) for B ⪯⋆ C ⟩ means the sequence of f(B) for each ⋆‑block B in chain order from genesis up to and including C. (B is a bound variable within this construct.)
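As a concrete illustration of these conventions, here is a minimal Python sketch (all names are illustrative rather than part of the construction), representing a ⋆‑chain as a list of blocks from genesis to tip:

```python
# Illustrative sketch of the chain conventions above. A *-chain is a list of
# blocks from genesis to tip; per the conflation convention, a chain also
# stands for the block at its tip. All names are illustrative.

GENESIS = "O"          # stands in for the genesis block

def prune(chain, k):
    """C pruned by k: C with the last k blocks removed; if len(C) <= k, the
    chain consisting only of the genesis block."""
    return chain[:-k] if k < len(chain) else chain[:1]

def block_at_depth(chain, k):
    """The block at depth k is the tip of the pruned chain."""
    return prune(chain, k)[-1]

def is_prefix(c1, c2):
    """The chain c1 is a prefix of the chain c2 (includes c1 == c2)."""
    return c2[:len(c1)] == c1

def one_is_prefix(c1, c2):
    """One of c1 and c2 is a prefix of the other."""
    return is_prefix(c1, c2) or is_prefix(c2, c1)

chain = [GENESIS, "B1", "B2", "B3", "B4"]
assert prune(chain, 10) == [GENESIS]
assert block_at_depth(chain, 2) == "B2"
assert one_is_prefix(prune(chain, 2), chain)
```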

Subprotocols

As in Snap‑and‑Chat, we depend on a BFT protocol Πorigbft, and a best‑chain protocol Πorigbc.

Info

See this terminology note for why we do not call Πorigbc a “longest‑chain” protocol.

We modify Πorigbft (resp. Πorigbc) to give Πbft (resp. Πbc) by adding structural elements, changing validity rules, and potentially changing the specified behaviour of honest nodes.

A Crosslink node must participate in both Πbft and Πbc; that is, it must maintain a view of the state of each protocol. Acting in more specific roles such as bft‑proposer, bft‑validator, or bc‑block‑producer is optional, but we assume that all such actors are Crosslink nodes.

Model for BFT protocols (Π{origbft,bft})

A player’s view in Π{origbft,bft} includes a set of bft‑block chains, each rooted at a fixed genesis bft‑block 𝒪bft. There is a bft‑block‑validity rule (specified below), which depends only on the content of the block and its ancestors. A non‑genesis block can only be bft‑block‑valid if its parent is bft‑block‑valid. A bft‑valid‑chain is a chain of bft‑block‑valid blocks.

Execution proceeds in a sequence of epochs. In each epoch, an honest proposer for that epoch may make a bft‑proposal.

A bft‑proposal refers to a parent bft‑block, and specifies the proposal’s epoch. The content of a proposal is signed by the proposer using a strongly unforgeable signature scheme. We consider the proposal to include this signature. There is a bft‑proposal‑validity rule, depending only on the content of the proposal and its parent block, and the validity of the proposer’s signature.

Info

We will shorten “bft‑block‑valid bft‑block” to “bft‑valid‑block”, and “bft‑proposal‑valid bft‑proposal” to “bft‑valid‑proposal”.

For each epoch, there is a fixed number of voting units distributed between the players, which they use to vote for a bft‑proposal. We say that a voting unit has been cast for a bft‑proposal P at a given time in a bft‑execution, if and only if P is bft‑proposal‑valid and a ballot for P authenticated by the holder of the voting unit exists at that time.

Using knowledge of ballots cast for a bft‑proposal P that collectively satisfy a notarization rule at a given time in a bft‑execution, and only with such knowledge, it is possible to obtain a valid bft‑notarization‑proof for P. The notarization rule must require at least a two‑thirds absolute supermajority of voting units in P’s epoch to have been cast for P. It may also require other conditions.
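As a sketch of the supermajority part of such a notarization rule (the Ballot type and the threshold arithmetic are illustrative assumptions, not a specification):

```python
# Sketch of the supermajority part of a notarization rule. `ballots` is the
# set of ballots known for a proposal; a real rule may impose further
# conditions. All names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Ballot:
    voting_unit: int
    proposal: str

def meets_supermajority(ballots, proposal, total_voting_units):
    """True iff at least two thirds of all voting units for the proposal's
    epoch have been cast for it, counting each voting unit at most once
    (so duplicate ballots for the same unit are not double-counted)."""
    units = {b.voting_unit for b in ballots if b.proposal == proposal}
    # "Absolute" supermajority: the threshold is a fraction of *all* voting
    # units for the epoch, not only of those units that voted.
    return 3 * len(units) >= 2 * total_voting_units

votes = [Ballot(u, "P") for u in range(7)] + [Ballot(0, "P")]  # one duplicate
assert meets_supermajority(votes, "P", 10)      # 7 of 10 units >= 2/3
assert not meets_supermajority(votes, "P", 12)  # 7 of 12 units < 2/3
```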

A voting unit is cast non‑honestly for an epoch’s proposal iff:

  • it is cast other than by the holder of the unit (due to key compromise or any flaw in the voting protocol, for example); or
  • it is double‑cast (i.e. there are at least two ballots casting it for distinct proposals); or
  • the holder of the unit, following the conditions for honest voting in Π{origbft,bft} according to its view, should not have cast that vote.

Definition: One‑third bound on non‑honest voting

An execution of Π{origbft,bft} has the one‑third bound on non‑honest voting property iff for every epoch, strictly fewer than one third of the total voting units for that epoch are ever cast non‑honestly.

Info

It may be the case that a ballot cast for P is not in honest view when it is used to create a notarization proof for P. Since we are not assuming synchrony, it may also be the case that such a ballot is in honest view but that any given node has not received it (and perhaps will never receive it).

There may be multiple distinct ballots or distinct ballot messages attempting to cast a given voting unit for the same proposal; this is undesirable for bandwidth usage, but it is not necessary to consider it to be non‑honest behaviour for the purpose of security analysis, as long as such ballots are not double‑counted toward the two‑thirds threshold.

Security caveat

The one‑third bound on non‑honest voting property considers all ballots cast in the entire execution. In particular, it is possible that a validator’s key is compromised and then used to cast its voting units for a proposal of an epoch long finished. If the number of voting units cast non-honestly for any epoch ever reaches one third of the total voting units for that epoch during an execution, then the one‑third bound on non‑honest voting property is violated for that execution.

Therefore, validator keys of honest nodes must remain secret indefinitely. Whenever a key is rotated, the old key must be securely deleted. For further discussion and potential improvements, see tfl-book issue #140.

A bft‑block consists of (P, proof) re‑signed by the same proposer using a strongly unforgeable signature scheme, where P is a bft‑proposal and proof is a bft‑notarization‑proof for P. It is bft‑block‑valid iff:

  • P is bft‑proposal‑valid; and
  • proof is a valid proof that some subset of ballots cast for P are sufficient to satisfy the notarization rule; and
  • the proposer’s outer signature on (P, proof) is valid.

A bft‑proposal’s parent reference hashes the entire parent bft‑block, i.e. proposal, proof, and outer signature.

Info

Neither the bft‑notarization‑proof nor the proposer’s outer signature is unique for a given proposal P. The proposer’s outer signature is however third‑party nonmalleable, by definition of a strongly unforgeable signature scheme. An “honest bft‑proposal” is a bft‑proposal made for a given epoch by a proposer who is honest in that epoch. Such a proposer will only create one proposal and only sign at most once for each epoch, and so there will be at most one “honestly submitted” bft‑block for each epoch.

It is possible for there to be multiple bft‑valid‑blocks for the same proposal, with different notarization proofs and/or outer signatures, if the proposer is not honest. However, the property that there will be at most one “honestly submitted” bft‑block for each epoch is important for liveness, even though we cannot guarantee that any particular proposer for an epoch is honest. ==TODO check that we are correctly using this in the liveness analysis.==

There is an efficiently computable function bft‑last‑final. For a bft‑block‑valid input block B, this function outputs the last ancestor of B that is final in the context of B.

Info

The chain of ancestors is unambiguously determined because a bft‑proposal’s parent reference hashes the entire parent bft‑block; each bft‑block commits to a proposal; and the parent hashes are collision-resistant. This holds despite the caveat mentioned above that there may be multiple bft‑valid‑blocks for the same proposal.

bft‑last‑final must satisfy all of the following:

  • ⊥ is not bft‑block‑valid.
  • If B is bft‑block‑valid, then:
    • bft‑last‑final(B) ⪯bft B (and therefore it must also be bft‑block‑valid);
    • for all bft‑valid‑blocks B′ such that B ⪯bft B′, we have bft‑last‑final(B) ⪯bft bft‑last‑final(B′).
  • bft‑last‑final(𝒪bft) = 𝒪bft.

Info

It is correct to talk about the “last final block” of a given chain (that is, each bft‑valid‑block B unambiguously determines a bft‑valid‑block bft‑last‑final(B)), but it is not correct to refer to a given bft‑block as objectively “bft‑final”.

A particular BFT protocol might need adaptations to fit it into this model for Πorigbft, before we apply the Crosslink modifications to obtain Πbft. Any such adaptations are necessarily protocol‑specific. In particular,

  • origbft‑proposal‑validity should correspond to the strongest property of an origbft‑proposal that is objectively and feasibly verifiable from the content of the proposal and its parent origbft‑block at the time the proposal is made. It must include verification of the proposer’s signature.
  • origbft‑block‑validity should correspond to the strongest property of an origbft‑block that is objectively and feasibly verifiable from the content of the block and its ancestors at the time the block is added to an origbft‑chain. It should typically include all of the relevant checks from origbft‑proposal‑validity that apply to the created block (or equivalent checks). It must also include verification of the notarization proof and the proposer’s outer signature.
  • If a node observes an origbft‑valid block B, then it should be infeasible for an adversary to cause a rollback in that node’s view past origbft‑last‑final(B), and the view of the chain up to origbft‑last‑final(B) should agree with that of all other honest nodes. This is formalized in the next section.

Safety of Π{origbft,bft}

The intuition behind the following safety property is that:

  • For Π{origbft,bft} to be safe, it should never be the case that two honest nodes observe (at any time) bft‑blocks B and B′ respectively that they each consider final in some context, but B ⪯⪰bft B′ does not hold.
  • By definition, an honest node observes a bft‑block B to be final in the context of another bft‑block C, iff B ⪯bft bft‑last‑final(C).

We say that a bft‑block is “in honest view” if a party observes it at some time at which that party is honest.

Definition: Final Agreement

An execution of Π{origbft,bft} has Final Agreement iff for all bft‑valid blocks B in honest view at time t and B′ in honest view at time t′, we have bft‑last‑final(B) ⪯⪰bft bft‑last‑final(B′).

Note that it is possible for this property to hold for an execution of a BFT protocol in an asynchronous communication model. The following caveat applies: if the one‑third bound on non‑honest voting property is ever broken at any time in an execution, then it may not be possible to maintain Final Agreement from that point on. This is an area of possible improvement in the design and analysis, left for future work.

Adapting the Streamlet BFT protocol.

Streamlet as described in [CS2020] has three possible states of a block in a player’s view:

  • “valid” (but not notarized or final);
  • “notarized” (but not final);
  • “final”.

By “valid” the Streamlet paper means just that it satisfies the structural property of being part of a block chain with parent hashes. The role of bft‑block‑validity in our model corresponds roughly to Streamlet’s “notarized”. It turns out that with some straightforward changes relative to Streamlet, we can identify “origbft‑block‑valid” with “notarized” and consider an origbft‑valid‑chain to only consist of notarized blocks. This is not obvious, but is a useful simplification.

Here is how the paper defines “notarized”:

When a block gains votes from at least 2n/3 distinct players, it becomes notarized. A chain is notarized if its constituent blocks are all notarized.

This implies that blocks can be added to chains independently of notarization. However, the paper also says that a leader always proposes a block extending from a notarized chain. Therefore, only notarized chains really matter in the protocol.

In unmodified Streamlet, the order in which a player sees signatures might cause it to view blocks as notarized out of order. Streamlet’s security analysis is in a synchronous model, and assumes for liveness that any vote will have been received by all players within two epochs.

In Crosslink, however, we need origbft‑block‑validity to be an objectively and feasibly verifiable property. We also would prefer reliable message delivery within bounded time not to be a basic assumption of our communication model. (This does not dictate what assumptions about message delivery are made for particular security analyses.) If we did not make a modification to the protocol to take this into account, then some Crosslink nodes might receive a two‑thirds absolute supermajority of voting messages and consider a BFT block to be notarized, while others might never receive enough of those messages.

Obviously a proposal cannot include signatures on itself — but the block formed from it can include proofs about the proposal and signatures. We can therefore say that when a proposal gains a two‑thirds absolute supermajority of signatures, a block is created from it that contains a proof (such as an aggregate signature) that it had such a supermajority. For example, we can have the proposer itself make this proof once it has enough votes, sign the resulting (proposal, proof) pair to create a block, then submit that block in a separate message. (The proposer has most incentive to do this in order to gain whatever reward attaches to a successful proposal; it can outsource the proving task if needed.) Then the origbft‑block‑validity rule can require a valid supermajority proof, which is objectively and feasibly verifiable. Players that see an origbft‑block‑valid block can immediately consider it notarized.

Note that for the liveness analysis to be unaffected, we need to assume that the combined latency of messages, of collecting and aggregating signatures, and of block submission is such that all players will receive a notarized block corresponding to a given proposal (rather than just all of the votes for the proposal) within two epochs. Alternatively we could re‑do the timing analysis.

With this change, “origbft‑block‑valid” and “notarized” do not need to be distinguished.

Streamlet’s finality rule is:

If in any notarized chain, there are three adjacent blocks with consecutive epoch numbers, the prefix of the chain up to the second of the three blocks is considered final. When a block becomes final, all of its prefix must be final too.

We can straightforwardly express this as an origbft‑last‑final function of a context block, as required by the model:

For an origbft‑valid block C, origbft‑last‑final(C) is the last origbft‑valid block B ⪯origbft C such that either B = 𝒪origbft or B is the second block of a group of three adjacent blocks with consecutive epoch numbers.

Note that “When a block becomes final, all of its prefix must be final too.” is implicit in the model.
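Here is an illustrative Python sketch of this origbft‑last‑final rule, operating on the epoch numbers of a notarized chain (the representation is an assumption made for illustration):

```python
# Sketch of the adapted-Streamlet last-final rule described above. A chain
# is given as the list of epoch numbers of its blocks, from genesis to the
# context block C. Names are illustrative.

def last_final(chain_epochs):
    """Return the index of the last block that is final in the context of C:
    the second block of the last group of three adjacent blocks with
    consecutive epochs, or the genesis block (index 0) if there is none."""
    for i in range(len(chain_epochs) - 3, -1, -1):
        e0, e1, e2 = chain_epochs[i:i + 3]
        if e2 == e1 + 1 == e0 + 2:
            return i + 1  # the second block of the group
    return 0             # genesis is final "from the start"

assert last_final([0, 1, 2, 3, 5]) == 2   # epochs 1,2,3 are consecutive
assert last_final([0, 1, 3, 5, 7]) == 0   # no such group: only genesis
```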

Model for best-chain protocols (Π{origbc,bc})

A node’s view in Π{origbc,bc} includes a set of bc‑block chains, each rooted at a fixed genesis bc‑block 𝒪bc. There is a bc‑block‑validity rule (often described as a collection of “consensus rules”), depending only on the content of the block and its ancestors. A non‑genesis block can only be bc‑block‑valid if its parent is bc‑block‑valid. By “bc‑valid‑chain” we mean a chain of bc‑block‑valid blocks.

The definition of bc‑block‑validity is such that it is hard for a block producer to extend a bc‑valid‑chain unless they are selected by a random process that chooses a block producer in proportion to their resources with an approximately known and consistent time distribution, subject to some assumption about the total proportion of resources held by honest nodes.

There is a function score on bc‑valid‑chains, with a strict total ordering on scores. An honest node will choose one of the bc‑valid‑chains with highest score as the bc‑best‑chain in its view. Any rule can be specified for breaking ties.

The score function is required to satisfy score(C) > score(C ⌈1) for any non‑genesis bc‑valid‑chain C.

Info

For a Proof‑of‑Work protocol, the score of a bc‑chain should be its accumulated work.

Unless an adversary is able to censor knowledge of other chains from the node’s view, it should be difficult to cause the node to switch to a chain with a last common ancestor more than σ blocks back from the tip of their previous bc‑best‑chain.

Following [NTT2020], we use the notation ch_i^t for node i’s bc‑best‑chain at time t.

A bc‑context allows testing whether a given transaction is contextually valid (“bc‑context‑valid”), and adding it to the context if it is. Given a context, the resulting sequence of contextually valid transactions since genesis can be obtained from it. It is possible to obtain a bc‑context corresponding to the state after the genesis bc‑block.

We assume that the only reasons for a transaction to be contextually invalid are that it double‑spends, or that an input is missing.

Why is this assumption needed?

It is needed for equivalence of the following two ways to construct LOG^t_{fin,i}:

  1. Concatenate the transactions from each bft‑block snapshot of a bc‑chain, and sanitize the resulting transaction sequence by including each transaction iff it is contextually valid.
  2. Concatenate the bc‑blocks from each bft‑block snapshot of a bc‑chain, remove duplicate blocks, and only then sanitize the resulting transaction sequence by including each transaction iff it is contextually valid.

We want these to be equivalent to enable a crucial optimization. In practice there will be many duplicate blocks from chain prefixes in the input to sanitization, and so a literal implementation of the first variant would have to recheck all duplicate transactions for contextual validity. That would have at least Ω(n²) complexity (more likely Θ(n²)) in the length n of the block chain, because the length of each final snapshot grows with n. But in the second variant, we can just concatenate suffixes of each snapshot, omitting any bc‑blocks that are common to a previous snapshot.

Given that the only reasons for a transaction to be contextually invalid are double‑spends and missing inputs, the argument for equivalence is that:

  • If a transaction is omitted due to a double‑spend, then any subsequent time it is checked, that input will still have been double‑spent.
  • If a transaction is omitted due to a missing input, this can only be because an earlier transaction in the input to sanitization was omitted. So the structure of omitted transactions forms a DAG in which parent links must be to earlier omitted transactions. The roots of the DAG are at double-spending transactions, which cannot be reinstated (that is, included after they had previously been excluded). A child cannot be reinstated until its parents have been reinstated. Therefore, no transactions are reinstated.

It might be possible to relax this assumption, but it would require additional analysis to ensure that the above equivalence still holds.
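The following sketch illustrates the two constructions on a toy transaction model in which the only contextual failures are double‑spends (including duplicates, whose outputs already exist) and missing inputs; under that restriction the two variants produce the same ledger. The representation and all names are illustrative:

```python
# Sketch of the two sanitization variants described above. A transaction is
# a pair (inputs, outputs) of frozensets of coin ids. A transaction is
# contextually invalid iff an input is not currently unspent (double-spend
# or missing input) or an output already exists (a duplicate).

def sanitize(txs):
    utxo, created, ledger = set(), set(), []
    for inputs, outputs in txs:
        if inputs <= utxo and outputs.isdisjoint(created):
            utxo = (utxo - inputs) | outputs   # contextually valid: update
            created |= outputs
            ledger.append((inputs, outputs))
    return ledger

def variant1(snapshots):
    """Variant 1: concatenate the transactions of every snapshot, duplicate
    blocks included, and sanitize the whole sequence."""
    return sanitize([tx for chain in snapshots for block in chain for tx in block])

def variant2(snapshots):
    """Variant 2: skip blocks already seen in an earlier snapshot, then
    sanitize. Each transaction is checked only once."""
    seen, txs = set(), []
    for chain in snapshots:
        for block in chain:
            if block not in seen:
                seen.add(block)
                txs.extend(block)
    return sanitize(txs)

coinbase = (frozenset(), frozenset({"a"}))
spend    = (frozenset({"a"}), frozenset({"b"}))
snap1, snap2 = [(coinbase,)], [(coinbase,), (spend,)]
assert variant1([snap1, snap2]) == variant2([snap1, snap2])
```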

A bc‑block logically contains a sequence of transactions. In addition to other rules, a block is only bc‑block‑valid if its transactions, taken in order, are all bc‑context‑valid given previous blocks and previous transactions in the block.

Detail of ‘logically contains’ for Zcash.

In Zcash, a block logically contains its transactions by having the block header commit to a Merkle tree over txids, and another Merkle tree (in the same transaction order) over witness hashes. A txid and witness hash together authenticate all data fields of a transaction.

For v5 transactions, the txid commits to effecting data (that determines the effect of the transaction) and the witness hash commits to witness data (signatures and proofs). For earlier transaction versions, the witness hash is null and the txid commits to all transaction data.

When a block is downloaded, its transactions are parsed and its header is checked against the transaction data. Assuming no bugs or errors in the design, this is effectively equivalent to the block data being directly authenticated by the header.

Is this model of contextual validity sufficient for Zcash?

There are several Zcash consensus rules that mention block height or position within a block, or that depend on other transactions in a block:

Transaction consensus rule:

  • A coinbase transaction for a non‑genesis block MUST have a script that, as its first item, encodes the block height [in a specified way].

In addition, a coinbase transaction is implicitly able to spend the total amount of fees of other transactions in the block.

Block consensus rule:

  • The first transaction in a block MUST be a coinbase transaction, and subsequent transactions MUST NOT be coinbase transactions.

Payment of Founders’ Reward:

  • [Pre‑Canopy] A coinbase transaction at block height height ∈ {1..FoundersRewardLastBlockHeight} MUST include at least one output that pays exactly FoundersReward(height) zatoshi with [details of script omitted].

Payment of Funding Streams:

  • [Canopy onward] The coinbase transaction at block height height MUST contain at least one output per funding stream fs active at height, that pays fs.Value(height) zatoshi in the prescribed way to the stream’s recipient address [...]

However, none of these need to be treated as contextual validity rules.

The following transaction consensus rule must be modified:

  • A transaction MUST NOT spend a transparent output of a coinbase transaction from a block less than 100 blocks prior to the spend. Note that transparent outputs of coinbase transactions include Founders’ Reward outputs and transparent funding stream outputs.

To achieve the intent, it is sufficient to change this rule to only allow coinbase outputs to be spent if their coinbase transaction has been finalized.

If we assume that coinbase block subsidies and fees, and the position of coinbase transactions as the first transaction in each block have already been checked as bc‑block‑validity rules, then the model is sufficient.

A “coinbase transaction” is a bc‑transaction that only distributes newly issued funds and has no inputs.

Define a predicate coinbase‑only(B) that holds iff the bc‑block B has exactly one transaction that is a coinbase transaction.

Each bc‑block is summarized by a bc‑header that commits to the block. There is a notion of bc‑header‑validity that is necessary, but not sufficient, for validity of the block. We will only make the distinction between bc‑headers and bc‑blocks when it is necessary to avoid ambiguity.

Header validity for Proof‑of‑Work protocols.

In a Proof‑of‑Work protocol, it is normally possible to check the Proof‑of‑Work of a block using only the header. There is a difficulty adjustment function that determines the target difficulty for a block based on its parent chain. So, checking that the correct difficulty target has been used relies on knowing that the header’s parent chain is valid.

Checking header validity before expending further resources on a purported block can be relevant to mitigating denial‑of‑service attacks that attempt to inflate validation cost.

Typically, Bitcoin‑derived best chain protocols do not need much adaptation to fit into this model. The model still omits some details that would be important to implementing Crosslink, but distracting for this level of abstraction.

Safety of Π{origbc,bc}

We make an assumption on executions of Π{origbc,bc} that we will call Prefix Consistency (introduced in [PSS2016, section 3.3] as just “consistency”):

Definition: Prefix Consistency

An execution of Π{origbc,bc} has Prefix Consistency at confirmation depth σ, iff for all times t ≤ t′ and all nodes i, j (potentially the same) such that i is honest at time t and j is honest at time t′, we have that ch_i^t ⌈σ ⪯ ch_j^{t′}.

Explain the confusion in the literature about what variants of this property are called.

The literature uses the same name, “common‑prefix property”, for two different properties of very different strength.

[PSS2016, section 3.3] introduced the stronger variant. That paper first describes the weaker variant, calling it the “common‑prefix property by Garay et al [GKL2015].” Then it explains what is essentially a bug in that variant, and describes the stronger variant which it just calls “consistency”:

The common‑prefix property by Garay et al [GKL2015], which was already considered and studied by Nakamoto [Nakamoto2008], requires that in any round r, the record chains of any two honest players i, j agree on all, but potentially the last T, records. We note that this property (even in combination with the other two desiderata [of Chain Growth and Chain Quality]) provides quite weak guarantees: even if any two honest parties perfectly agree on the chains, the chain could be completely different on, say, even rounds and odd rounds. We here consider a stronger notion of consistency which additionally stipulates players should be consistent with their “future selves”.

Let consistency^T be the property that, for all rounds t ≤ t′, and all players i, j (potentially the same) such that i is honest at t and j is honest at t′, we have that the prefixes of ch_i^t and ch_j^{t′} consisting of the first |ch_i^t| − T records are identical.

Unfortunately, [GKL2020], which is a revised version of [GKL2015], switches to the stronger variant without changing the name. (The eprint version history may be useful; the change was made in version 20181013:200033, page 17.)

Note that [GKL2020] uses an adaptive‑corruption model, “meaning that the adversary is allowed to take control of parties on the fly”, and so their wording in Definition 3:

... for any pair of honest players P₁, P₂ adopting the chains 𝒞₁, 𝒞₂ at rounds r₁ ≤ r₂ in VIEW respectively, it holds that 𝒞₁ ⌈k ⪯ 𝒞₂.

is intended to mean the same as our

... for all times t ≤ t′ and all nodes i, j (potentially the same) such that i is honest at time t and j is honest at time t′, we have that ch_i^t ⌈σ ⪯ ch_j^{t′}.

(The latter is closer to [PSS2016].)

Incidentally, I cannot find any variant of this property in [Nakamoto2008]. Maybe implicitly, but it’s a stretch.

Discussion of [GKL2020]’s communication model and network partition.

Prefix Consistency implies that, in the relevant executions, the network of honest nodes is never partitioned — unless (roughly speaking) any partition lasts only for a short length of time relative to block times. If node i is on one side of a full partition and node j is on the other, then after node i’s best chain has been extended by more than σ blocks, ch_i^t ⌈σ will contain information that has no way to get to node j. And even if the partition is incomplete, we cannot guarantee that the Prefix Consistency property will hold for any given pair of nodes.

And yet, [GKL2020] claims to prove this property from other assumptions. So we know that those assumptions must also rule out a long partition between honest nodes. In fact the required assumption is implicit in the communication model:

  • A synchronous network cannot be partitioned.
  • A partially synchronous network —that is, providing reliable delivery with bounded but unknown delay— cannot be partitioned for longer than the delay.

We might be concerned that these implicit assumptions are stronger than we would like. In practice, the peer‑to‑peer network protocol of Bitcoin and Zcash attempts to flood blocks to all nodes. This protocol might have weaknesses, but it is not intended to (and plausibly does not) depend on all messages being received. (Incidentally, Streamlet also implicitly floods messages to all nodes.)

Also, Streamlet and many other BFT protocols do not assume for safety that the network is not partitioned. That is, BFT protocols can be safe in a fully asynchronous communication model with unreliable messaging. That is why we avoid taking synchrony or partial synchrony as an implicit assumption of the communication model, or else we could end up with a protocol with weaker safety properties than Πbft alone.

This leaves the question of whether the Prefix Consistency property is still too strong, even if we do not rely on it for the analysis of safety when Πbft has not been subverted. In particular, if a particular node i is not well‑connected to the rest of the network, then that will inevitably affect node i’s security, but should not affect other honest nodes’ security.

Fortunately, it is not the case that disconnecting a single node from the network causes the security assumption to be voided. The solution is to view node i as not honest in that case (even though it would follow the protocol if it could). This achieves the desired effect within the model, because other nodes can no longer rely on i’s honest input. Although viewing i as potentially adversarial might seem conservative from the point of view of other nodes, bear in mind that an adversary could censor an arbitrary subset of incoming and outgoing messages from the node, and this may be best modelled by considering it to be effectively controlled by the adversary.

Prefix Consistency compares the σ‑truncated chain of some node i with the untruncated chain of node j. For our analysis of safety of the derived ledgers, we will also need to make an assumption on executions of Π{origbc,bc} that at any given time, any two honest nodes i and j agree on their confirmed prefixes — with only the caveat that one may have observed more of the chain than the other. That is:

Definition: Prefix Agreement

An execution of Π{origbc,bc} has Prefix Agreement at confirmation depth σ, iff for all times t and t′, and all nodes i, j (potentially the same) such that i is honest at time t and j is honest at time t′, we have ch_i^t ⌈σ ⪯⪰ ch_j^{t′} ⌈σ.

Why are this property, and Prefix Consistency above, stated as unconditional properties of protocol executions, rather than as probabilistic assumptions?

Our security arguments that depend on these properties will all be of the form “in an execution where (safety properties) are not violated, (undesirable thing) cannot happen”.

It is not necessary to involve probability in arguments of this form. Any probabilistic reasoning can be done separately.

In particular, if a statement of this form holds, and (safety properties) are violated with probability at most ε under certain conditions, then it immediately follows that under those conditions (undesirable thing) happens with probability at most ε. Furthermore, (undesirable thing) can only happen after (safety properties) have been violated, because the execution up to that point has been an execution in which (safety properties) are not violated.

With few exceptions, involving probability in a security argument is best done only to account for nondeterministic choices in the protocol itself. This is opinionated advice, but I think a lot of security proofs would be simpler if inherently probabilistic arguments were more distinctly separated from unconditional ones.

In the case of the Prefix Agreement property, an alternative approach would be to prove that Prefix Agreement holds with some probability given Prefix Consistency and some other chain properties. This is what [NTT2020] does in its Theorem 2, which essentially says that under certain conditions Prefix Agreement holds except with a probability that decays in the confirmation depth σ.

The conclusions that can be obtained from this approach are necessarily probabilistic, and depending on the techniques used, the proof may not be tight; that is, the proof may obtain a bound on the probability of failure that is (either asymptotically or concretely) higher than needed. This is the case for [NTT2020, Theorem 2]; footnote 10 in that paper points out that the expression for the probability can be asymptotically improved:

Using the recursive bootstrapping argument developed in [DKT+2020, Section 4.2], it is possible to bring the error probability as close to an exponential decay as possible. In this context, for any ε > 0, it is possible to find constants a_ε, A_ε such that Πlc is secure after C with confirmation time [...] except with probability [...].

(Here p is the probability that any given node gets to produce a block in any given time slot.)

In fact none of the proofs of security properties for Snap‑and‑Chat depend on the particular expression for this probability; for example in the proofs of Lemma 5 and Theorem 1, this probability just “passes through” the proof from the premisses to the conclusion, because the argument is not probabilistic. The same will be true of our safety arguments.

Talking about what is possible in particular executions has further advantages:

  • It sidesteps the issue of how to interpret results in the partially synchronous model, when we do not know what C is. See also the critique of applying the partially synchronous model to block-chain protocols under “Discussion of [GKL2020]’s communication model and network partition” above.
  • We do not require Πbc to be a Nakamoto‑style Proof‑of‑Work block chain protocol. Some other kind of protocol could potentially satisfy Prefix Consistency and Prefix Agreement.
  • It is not clear whether a given asymptotic probability of failure would be concretely adequate. That would depend on the value of σ and the constants hidden by the asymptotic notation. The asymptotic property tells us whether a sufficiently large σ could be chosen, but we are more interested in what needs to be assumed for a given concrete choice of σ.
  • If a violation of a required safety property occurs in a given execution, then the safety argument for Crosslink that depended on the property fails for that execution, regardless of what the probability of that occurrence was. This approach therefore more precisely models the consequences of such violations.

Why, intuitively, should we believe that Prefix Agreement and Prefix Consistency for a large enough confirmation depth hold with high probability for executions of a PoW‑based best‑chain protocol?

Roughly speaking, the intuition behind both properties is as follows:

Honest nodes are collectively able to find blocks faster than an adversary, and communication between honest nodes is sufficiently reliable that they act as a combined network racing against that adversary. Then by the argument in [Nakamoto2008], modified by [GP2020] to correct an error in the concrete analysis, a private mining attack that attempts to cause a σ‑block rollback will, with high probability, fail for large enough σ. A private mining attack is optimal by the argument in [DKT+2020].

Any further analysis of the conditions under which these properties hold should be done in the context of a particular .

Why is the quantification over two different times t and t′?

This strengthens the security property, relative to quantifying over a single time. The question can then be split into several parts:

  1. What does the strengthened property mean, intuitively? Consider the full tree of bc‑blocks seen by honest nodes at any times during the execution. This property holds iff, when we strip off all branches of length up to and including σ blocks, the resulting tree is linear (see the sketch after this list).
  2. Why is the strengthening needed? Suppose that time were split into periods such that honest nodes agreed on one chain in odd periods, and a completely different chain in even periods. This would obviously not satisfy the intent, but it would satisfy a version of the property that did not quantify over different times t and t′.
  3. Why should we expect the strengthened property to hold? If node j were far ahead, i.e. t ≪ t′, then it is obvious that ch_i^t ⌈σ ⪯ ch_j^{t′} ⌈σ should hold. Conversely, if node i were far ahead then it is obvious that ch_j^{t′} ⌈σ ⪯ ch_i^t ⌈σ should hold. The case t = t′ is the same as quantifying over a single time. By considering intermediate cases where t and t′ converge from the extremes or where they diverge from being equal, you should be able to convince yourself that the property holds for any relative values of t and t′, in executions of a reasonable best‑chain protocol.
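Here is an illustrative sketch of the check described in intuition 1: strip the last σ blocks from every chain observed by an honest node at any time, and test whether the surviving prefixes are totally ordered by the prefix relation:

```python
# Sketch of intuition 1 above: the property holds iff, after pruning up to
# sigma blocks from the end of every observed chain, what remains of the
# block tree is linear. Chains are lists from genesis to tip; names are
# illustrative.

def prune(chain, k):
    return chain[:-k] if k < len(chain) else chain[:1]

def is_prefix(c1, c2):
    return c2[:len(c1)] == c1

def prefix_agreement(observed_chains, sigma):
    pruned = [prune(c, sigma) for c in observed_chains]
    return all(is_prefix(a, b) or is_prefix(b, a)
               for a in pruned for b in pruned)

# Two honest views forking only in the last 2 blocks agree at depth 2:
view_i = ["O", "B1", "B2", "B3", "B4"]
view_j = ["O", "B1", "B2", "B3x", "B4x"]
assert prefix_agreement([view_i, view_j], sigma=2)
assert not prefix_agreement([view_i, view_j], sigma=1)
```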

Safety Mode

Let LOG^t_{fin,i} be the finalized ledger seen by node i at time t, and let LOG^t_{bda,i,μ} be the more‑available ledger at bc‑confirmation‑depth μ seen by node i at time t.

The following definition is rough and only intended to provide intuition.

Consider, at a point in time t, the number of bc‑blocks of transactions that have entered LOG^t_{bda,i,μ} but have not (yet) entered LOG^t_{fin,i}. We call this the “finality gap” at time t. Under an assumption about the distribution of bc‑block intervals, if this gap stays roughly constant then it corresponds to the approximate time that transactions added to LOG_bda have taken to enter LOG_fin (if they do so at all) just prior to time t.

As explained in detail by The Arguments for Bounded Dynamic Availability and Finality Overrides, if this gap exceeds a reasonable threshold then it likely signals an exceptional or emergency condition, in which it is undesirable to keep accepting user transactions that spend funds into LOG_bda.

The condition that the network enters in such cases will be called “Safety Mode”. For a given higher‑level transaction protocol, we can define a policy for which bc‑blocks will be accepted in Safety Mode. This will be modelled by a predicate is‑safety‑block. A bc‑block B for which is‑safety‑block(B) returns true is called a “safety block”.

Note that a bc‑block producer is only constrained to produce safety blocks while, roughly speaking, its view of the finalization point remains stalled. In particular an adversary that has subverted the BFT protocol in a way that does not keep it in a stalled state can always avoid being constrained by Safety Mode.
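As a rough illustrative sketch (using block counts for the gap, and the finalization gap bound L defined under Parameters below; the helper names are hypothetical):

```python
# Rough sketch of the Safety Mode trigger. The heights count bc-blocks of
# transactions that have entered each ledger in this node's view; L is the
# finalization gap bound (see Parameters). All names are hypothetical.

def finality_gap(bda_height, fin_height):
    """bc-blocks of transactions in LOG_bda but not (yet) in LOG_fin."""
    return bda_height - fin_height

def in_safety_mode(bda_height, fin_height, L):
    """Roughly: a block producer whose view of the finalization point is
    stalled more than L blocks behind is constrained to safety blocks."""
    return finality_gap(bda_height, fin_height) > L

assert not in_safety_mode(bda_height=150, fin_height=100, L=100)
assert in_safety_mode(bda_height=250, fin_height=100, L=100)
```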

The desired properties of safety blocks and a possible Safety Mode policy for Zcash are discussed in the How to block hazards section of The Arguments for Bounded Dynamic Availability and Finality Overrides.

Parameters

Crosslink is parameterized by a bc‑confirmation‑depth σ (as in Snap‑and‑Chat), and also a finalization gap bound L, with L significantly greater than σ.

Each node always uses the fixed confirmation depth σ to obtain its view of the finalized ledger LOG^t_{fin,i}.

Each node i chooses a potentially different bc‑confirmation‑depth μ to obtain its view of the bounded‑dynamically‑available ledger LOG^t_{bda,i,μ}. We will assume that μ ≤ σ, since there is no reason to choose μ > σ. Choosing μ < σ is at the node’s own risk and may increase the risk of rollback attacks against LOG^t_{bda,i,μ} (it does not affect LOG^t_{fin,i}). The default should be μ = σ.

Info

The definition of LOG^t_{bda,i,μ} is also used internally to the protocol with μ = σ.

Structural additions

  1. Each bc‑header has, in addition to origbc‑header fields, a context_bft field that commits to a bft‑block.
  2. Each bft‑proposal has, in addition to origbft‑proposal fields, a headers_bc field containing a sequence of exactly σ + 1 bc‑headers (zero‑indexed, deepest first).
  3. Each non‑genesis bft‑block has, in addition to origbft‑block fields, a headers_bc field containing a sequence of exactly σ + 1 bc‑headers (zero‑indexed, deepest first). The genesis bft‑block has headers_bc = ⊥.
  4. Each bc‑transaction has, in addition to origbc‑transaction fields, a field labelling it with the bc‑block that it comes from. (This is not used directly by Crosslink but it may be needed to check consensus rules.)

For a bft‑block or bft‑proposal B, define snapshot(B) to be the bc‑block committed to by B.headers_bc[0], except that snapshot(𝒪bft) is defined to be 𝒪bc.

Use of the headers_bc field, and its relation to the ch field in Snap‑and‑Chat.

For a bft‑proposal or bft‑block B, the role of the bc‑chain snapshot referenced by B.headers_bc[0] is comparable to the snapshot referenced by the ch field in the Snap‑and‑Chat construction from [NTT2020]. The motivation for the additional σ headers is to demonstrate, to any party that sees a bft‑proposal (resp. bft‑block), that the snapshot had been confirmed when the proposal (resp. the block’s proposal) was made.

Typically, a node that is validating an honest bft‑proposal or bft‑block will have seen at least the snapshotted bc‑block (and possibly some of the subsequent bc‑blocks in the chain) before. For this not to be the case, the validator’s bc‑best‑chain would have to be more than σ bc‑blocks behind the honest proposer’s bc‑best‑chain at a given time, which would violate the Prefix Consistency property of Πbc.

If the headers do not connect to any bc‑valid‑chain known to the validator, then the validator should be suspicious that the proposer might not be honest. It can assign a lower priority to validating the proposal in this case, or simply drop it. The latter option could drop a valid proposal, but this does not in practice cause a problem as long as a sufficient number of validators are properly synced (so that Prefix Consistency holds for them).

If the headers do connect to a known bc‑valid‑chain, it could still be the case that the whole header chain up to and including B.headers_bc[σ] is not a bc‑valid‑chain. Therefore, to limit denial‑of‑service attacks the validator should first check the Proofs‑of‑Work and difficulty adjustment —which it can do locally using only the headers— before attempting to download and validate any bc‑blocks that it has not already seen. This is why we include the full headers rather than just the block hashes.

Why is a distinguished value needed for the headers_bc field in the genesis bft‑block?

It would be conceptually nice for 𝒪bft.headers_bc to refer to 𝒪bc, as well as for 𝒪bc.context_bft to be 𝒪bft, so that snapshot(𝒪bft) = 𝒪bc. That reflects the fact that we know “from the start” that neither genesis block can be rolled back.

This is not literally implementable using block hashes because it would involve a hash cycle, but we achieve the same effect by defining the snapshot function to “patch” snapshot(𝒪bft) to be 𝒪bc. We do it this way around rather than “patching” the link from a bc‑block to a bft‑block, because the genesis bft‑block already needs a special case, since there are not σ + 1 bc‑headers available.

Why is the context_bft field needed? Why not use a final_bft field to refer directly to the last final bft‑block before the context block?

The finality of some bft‑block is only defined in the context of another bft‑block. One possible design would be for a bc‑block to have both context_bft and final_bft fields, so that the finality of final_bft could be checked objectively in the context of context_bft.

However, specifying just the context block is sufficient information to determine its last final ancestor. There would never be any need to give a context block and a final ancestor that is not the last one. The bft‑last‑final function can be computed efficiently for typical BFT protocols. Therefore, having just the context_bft field is sufficient.

Πbft changes from Πorigbft

Πbft proposal and block validity

𝒪bft is bft‑valid.

A bft‑proposal (resp. non‑genesis bft‑block) B is bft‑proposal‑valid (resp. bft‑block‑valid) iff all of the following hold:

  • Inherited origbft rules: The corresponding origbft‑proposal‑validity (resp. origbft‑block‑validity) rules hold for B.
  • Increasing Score rule: Either B.headers_bc = B.parent.headers_bc, or score(snapshot(B)) > score(snapshot(B.parent)).
  • Tail Confirmation rule: The headers in B.headers_bc form the (σ + 1)‑block tail of a bc‑valid‑chain.

The “corresponding validity rules” are assumed to include the Parent rule that B’s parent is bft‑valid.

Note: origbft‑block‑validity rules may be different to origbft‑proposal‑validity rules. For example, in adapted Streamlet, an origbft‑block needs evidence that it was voted for by a supermajority, and an origbft‑proposal doesn’t. Such differences also apply to bft‑block‑validity vs bft‑proposal‑validity.
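The following sketch illustrates the two added rules on a toy representation (the types and helpers are assumptions made for illustration; the inherited origbft rules are elided):

```python
# Illustrative sketch of the added validity rules. Headers carry the
# cumulative score of the chain ending at them; the inherited origbft
# checks are elided. All types and names are assumptions for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Header:
    parent: Optional["Header"]
    score: int          # cumulative score of the chain ending here

@dataclass(frozen=True)
class BftBlock:
    parent: Optional["BftBlock"]
    headers_bc: tuple   # sigma + 1 Headers, deepest first

def snapshot(B):
    """The bc-block (represented here by its header) that B snapshots."""
    return B.headers_bc[0]

def increasing_score_ok(B):
    """Increasing Score rule: keep the parent's snapshot, or strictly
    increase the snapshot's score."""
    return (B.headers_bc == B.parent.headers_bc
            or snapshot(B).score > snapshot(B.parent).score)

def tail_confirmation_ok(B, sigma):
    """Tail Confirmation rule: headers_bc is a (sigma + 1)-header chain,
    each header being the parent of the next. The bc-validity of the
    corresponding blocks is checked separately."""
    hs = B.headers_bc
    return (len(hs) == sigma + 1
            and all(hs[i + 1].parent is hs[i] for i in range(sigma)))

h0 = Header(None, 10); h1 = Header(h0, 11); h2 = Header(h1, 12)
parent = BftBlock(None, (h0, h1, h2))
child  = BftBlock(parent, (h0, h1, h2))   # same snapshot: allowed
assert increasing_score_ok(child) and tail_confirmation_ok(child, sigma=2)
```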

What about making the bc‑block producer the bft‑proposer?

If this were enforced, it could be an alternative way of ensuring that every bft‑proposal snapshots a new bc‑block with a higher score than previous snapshots, potentially making the Increasing Score rule redundant. However, it would require merging bc‑block producers and bft‑proposers, which could have concerning knock‑on effects (such as concentrating security into fewer participants).

For a contrary view, see What about making the bc‑block‑producer the bft‑proposer? in Potential changes to Crosslink.

Why have validity rules been separated from the honest voting condition below?

The reason to separate the validity rules from the honest voting condition, is that the validity rules are objective: they don’t depend on an observer’s view of the bc‑best‑chain. Therefore, they can be checked independently of validator signatures. Even a proposal voted for by 100% of validators will not be considered bft‑proposal‑valid by other nodes unless it satisfies the above rules. If more than two thirds of voting units are cast for an invalid proposal, something is seriously and visibly wrong; in any case, the block will not be accepted as a valid bft‑block. Importantly, a purportedly valid bft‑block will not be recognized as such by any honest Crosslink node even if it includes a valid notarization proof, if it does not meet other bft‑block‑validity rules.

This is essential to making Crosslink safe against a flaw in Πorigbft or its security assumptions (even, say, a complete break of the validator signature algorithm), as long as Πorigbc remains safe.

What does the Increasing Score rule do?

This rule ensures that each snapshot in a bft‑valid‑chain is strictly “better” than the last distinct snapshot (and therefore any earlier distinct snapshot), according to the same metric used to choose the bc‑best‑chain.

This rule has several positive effects:

  • It prevents potential attacks that rely on proposing a bc‑valid‑chain that forks from a much earlier block. This is necessary because the difficulty (or stake threshold) at that point could have been much lower.
  • It limits the extent of disruption an adversary can feasibly cause to LOG_fin, even if it has subverted Πbft. In particular, if transactions are inserted into LOG_fin at the finalization point, this rule ensures that they can only come from a bc‑valid‑chain that has a higher score than the previous snapshot. And since the adversary must prove that its snapshot is confirmed, there must be at least σ blocks’ worth of Proof‑of‑Work on top of that snapshot, at a difficulty close to the current network difficulty. Note that the adversary could take advantage of an “accidental” fork and start its attack from the base of that fork, so that not all of this work is done by it alone. This is also possible in the case of a standard “private mining” attack, and is not so much of a problem in practice because accidental forks are expected to be short. In any case, σ should be chosen to take it into account.
  • A snapshot with a higher score than any previous snapshot cannot be a prefix of a previous snapshot (because score strictly increases within a bc‑valid‑chain). So, this excludes proposals that would be a no‑op for that reason.

The increase in score is intentionally always relative to the snapshot of the parent bft‑block, even if it is not final in the context of the current bft‑block. This is because the rule needs to hold if and when it becomes final in the context of some descendant bft‑block.

==PoS Desideratum: we want leader selection with good security / performance properties that will be relevant to this rule. (Suggested: PoSAT.)==

Why does the Increasing Score rule allow keeping the same snapshot as the parent?

This is necessary in order to preserve liveness of Πbft relative to Πorigbft. Liveness of Πorigbft might require honest proposers to make proposals at a minimum rate. That requirement could be consistently violated if it were not always possible to make a valid proposal. But given that it is allowed to repeat the same snapshot as in the parent bft‑block, neither the Increasing Score rule nor the Tail Confirmation rule can prevent making a valid proposal — and all other rules of Πbft affecting the ability to make valid proposals are the same as in Πorigbft. (In principle, changes to voting in Πbft could also affect its liveness; we’ll discuss that in the liveness proof later.)

For example, Streamlet requires three notarized blocks in consecutive epochs in order to finalize a block [CS2020, section 1.1]. Its proof of liveness depends on the assumption that in each epoch for which the leader is honest, that leader will make a proposal, and that during a “period of synchrony” this proposal will be received by every node [CS2020, section 3.6]. This argument can also be extended to adapted‑Streamlet.

We could alternatively have allowed a proposer to always make a “null” proposal, rather than to always make a proposal with the same snapshot as the parent. We prefer the latter because the former would require specifying the rules for null proposals in Πbft.

As a clarification, no BFT protocol that uses leader election can require a proposal in each epoch, because the leader might be dishonest. The above issue concerns liveness of the protocol when assumptions about the attacker’s share of bft‑validators or stake are met, so that it can be assumed that sufficiently long periods with enough honest leaders to make progress (5 consecutive epochs in the case of Streamlet) will occur with significant probability.

Why is it not allowed to switch between snapshots with the same score?

Consider the following variant of the Increasing Score rule: score(snapshot(B)) ≥ score(snapshot(B.parent)).

This would allow keeping the same snapshot as the parent as discussed in the previous answer. However, it would also allow continually cycling within a given set of snapshots, without making progress and without needing any new Proof‑of‑Work to be performed. This is worse than not making progress due to the same snapshot continually being proposed, because it increases the number of snapshots that need to be considered for sanitization, and therefore it could potentially be used for a denial‑of‑service.

Πbft block finality in context

The finality rule for bft‑blocks in a given context is unchanged from origbft‑finality. That is, bft‑last‑final is defined in the same way as origbft‑last‑final (modulo referring to bft‑block‑validity and to 𝒪bft).

Πbft honest proposal

An honest proposer of a bft‑proposal P chooses P.headers_bc as the (σ + 1)‑block tail of its bc‑best‑chain, provided that this is consistent with the Increasing Score rule. If it would not be consistent with that rule, it sets P.headers_bc to the same field as in P’s parent bft‑block. It does not make proposals until its bc‑best‑chain is at least σ + 1 blocks long. (A sketch of this rule is given below.)

Why σ + 1?

If the length were less than σ + 1 blocks, it would be impossible to construct the headers_bc field of the proposal.

Note that when the length of the proposer’s bc‑best‑chain is exactly σ + 1 blocks, the snapshot must be of 𝒪bc. But this does not violate the Increasing Score rule, because it matches the previous snapshot given by snapshot(𝒪bft) = 𝒪bc.

How is it possible that the Increasing Score rule would not be satisfied by choosing headers from the proposer’s bc‑best‑chain?

Assume for this discussion that Πbc uses PoW.

Depending on the value of σ, the timestamps of bc‑blocks, and the difficulty adjustment rule, it could be the case that the difficulty on the new bc‑best‑chain increases relative to the chain of the previous snapshot. In that case, when there is a fork, the new chain could reach a higher score than the previous chain in fewer than σ blocks from the fork point, and so its σ‑confirmed snapshot could be before the previous snapshot.

(For Zcash’s difficulty adjustment algorithm, the difficulty of each block is adjusted based on the median timestamps and difficulty target thresholds over a range of 17 blocks, where each median is taken over 11 blocks. Other damping factors and clamps are applied in order to prevent instability and to reduce the influence that adversarially chosen timestamps can have on difficulty adjustment. This makes it unlikely that an adversary could gain a significant advantage by manipulating the difficulty adjustment.)

For a variation on the Increasing Score rule that would be automatically satisfied by choosing headers from the proposer’s bc‑best‑chain, see Potential changes to Crosslink.
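An illustrative sketch of the honest proposer rule above (headers carry a cumulative score, with pruning semantics as in the conventions; all names are hypothetical):

```python
# Illustrative sketch of the honest proposer rule. A chain is a list of
# (name, cumulative_score) headers from genesis to tip; all names are
# hypothetical.

def choose_headers(best_chain, parent_headers, sigma, score):
    """The (sigma + 1)-block tail of our bc-best-chain if consistent with
    the Increasing Score rule, else the parent bft-block's headers_bc.
    Returns None while our bc-best-chain is too short to propose."""
    if len(best_chain) < sigma + 1:
        return None
    tail = best_chain[-(sigma + 1):]   # deepest first; tail[0] is the snapshot
    if tail == parent_headers or score(tail[0]) > score(parent_headers[0]):
        return tail
    return parent_headers

score = lambda header: header[1]
best = [("O_bc", 0), ("B1", 1), ("B2", 2), ("B3", 3)]
parent_hs = [("O_bc", 0), ("B1", 1), ("B2", 2)]
assert choose_headers(best, parent_hs, sigma=2, score=score) == best[-3:]
```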

Πbft honest voting

An honest validator considering a proposal P first updates its view of both subprotocols with the bc‑headers given in P.headers_bc, downloading bc‑blocks for these headers and checking their bc‑block‑validity.

For each downloaded bc‑block, the bft‑chain referenced by its context_bft field might need to be validated if it has not been seen before.

Wait what, how much validation is that?

In general the entire referenced bft‑chain needs to be validated, not just the referenced block — and for each bft‑block, the bc‑chain referenced by its headers_bc field needs to be validated, and so on recursively. If this sounds overwhelming, note that:

  • We should first check the requirement that a bft‑valid‑block must have been voted for by a two‑thirds absolute supermajority of validators, along with any other non‑recursive bft‑validity rules.
  • Before validating a bc‑chain referenced by a headers_bc field, we check that it connects to an already‑validated bc‑chain and that the Proofs‑of‑Work are valid. This implies that the amount of bc‑block validation is constrained by how fast the network can find valid Proofs‑of‑Work.

In other words, the order of validation is important to avoid denial‑of‑service. But it already is in Bitcoin and Zcash.
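A sketch of this validation ordering follows; the `view` object and its accessors are hypothetical, and the point is the order of the checks rather than their details:

```python
def validate_bft_chain(view, bft_chain) -> bool:
    for block in bft_chain:
        # 1. Cheap, non-recursive bft-validity rules first: e.g. a two-thirds
        #    absolute supermajority of validator votes.
        if 3 * view.votes_for(block) < 2 * view.validator_count():
            return False
        # 2. Only then consider the bc-chain referenced by its headers_bc
        #    field. Check that it connects to an already-validated chain and
        #    that the Proofs-of-Work are valid BEFORE any expensive bc-block
        #    download/validation, so that the total work is bounded by how
        #    fast valid Proofs-of-Work can be found.
        bc_chain = view.referenced_bc_chain(block)
        if not view.connects_to_validated(bc_chain):
            return False
        if not view.proofs_of_work_valid(bc_chain):
            return False
        for bc_block in bc_chain:
            if not view.validate_bc_block(bc_block):  # may recurse via context_bft
                return False
    return True
```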

After updating its view, the validator will vote for a proposal only if:

  • Valid proposal criterion: it is bft‑proposal‑valid, and
  • Confirmed best‑chain criterion: the snapshot confirmed by P.headers_bc is part of the validator’s bc‑best‑chain at a bc‑confirmation‑depth of at least σ.
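Putting the two criteria together, a hedged sketch of the voting check might look like this (the `view` object and its methods are hypothetical; SIGMA is σ as in the earlier sketches):

```python
SIGMA = 96  # illustrative value only

def should_vote(view, proposal) -> bool:
    # Update our view of both subprotocols from the proposal's headers first.
    view.update_with(proposal.headers_bc)
    # Valid proposal criterion: objective bft-proposal-validity.
    if not view.is_bft_proposal_valid(proposal):
        return False
    # Confirmed best-chain criterion: the proposed snapshot must be in OUR
    # bc-best-chain at a bc-confirmation-depth of at least σ.
    snapshot = view.snapshot_of(proposal)
    return (view.in_best_chain(snapshot)
            and view.confirmation_depth(snapshot) >= SIGMA)
```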

Blocks in a bc‑best‑chain are by definition bc‑block‑valid. If we’re checking the Confirmed best‑chain criterion, why do we need to have separately checked that the blocks referenced by the headers are bc‑block‑valid?

The Confirmed best‑chain criterion is quite subtle. It ensures that the snapshot is bc‑block‑valid and has at least σ bc‑block‑valid blocks after it in the validator’s bc‑best‑chain. However, it need not be the case that every block referenced by P.headers_bc is part of the validator’s bc‑best‑chain after it updates its view. That is, the chain could fork after the snapshot.

The bft‑proposal‑validity rule must be objective; it cannot depend on what the validator’s bc‑best‑chain is. The validator’s bc‑best‑chain may have been updated to the chain given by P.headers_bc (if that chain has the highest score), but it also may not have been.

However, if the validator’s bc‑best‑chain was updated, that makes it more likely that it will be able to vote for the proposal.

In any case, if the validator does not check that all of the blocks referenced by the headers are bc‑block‑valid, then its vote may be invalid.

How does this compare to Snap‑and‑Chat?

Snap‑and‑Chat already had the voting condition:

An honest node only votes for a proposed BFT block if it views the block’s referenced snapshot as confirmed.

but it did not give the headers potentially needed to update the validator’s view, and it did not require a proposal to be for an objectively confirmed snapshot as a matter of validity.

If a Crosslink‑like protocol were to require an objectively confirmed snapshot but without including the bc‑headers in the proposal, then validators would not immediately know which bc‑blocks to download to check its validity. This would increase latency, and would be likely to lead proposers to be more conservative and only propose blocks that they think will already be in at least a two‑thirds absolute supermajority of validators’ best chains.

That is, showing the headers to all of the validators is advantageous to the proposer, because the proposer does not have to guess which blocks the validators might have already seen. It is also advantageous for the protocol goals in general, because it improves the trade‑off between finalization latency and security.

Πbc changes from Πorigbc

Definitions of LOG^t_{fin,i} and LOG^t_{bda,i,μ}

In Snap‑and‑Chat, a node i at time t obtains LOG^t_{fin,i} from its latest view of bft‑finality.

In Crosslink, it obtains LOG^t_{fin,i} from the view of bft‑finality given by the context_bft field of the block at depth σ in node i’s bc‑best‑chain at time t.

Specifically, LOG^t_{fin,i} is obtained by sanitizing the concatenation of the bc‑chain snapshots referenced by the last‑final bft‑chain in that view, and LOG^t_{bda,i,μ} is obtained by sanitizing that same concatenation followed by the transactions of node i’s μ‑confirmed bc‑best‑chain.

For the definitions that use it, μ is a bc‑confirmation depth.
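As a rough sketch of how these ledgers can be assembled: all accessors below are hypothetical, SIGMA is σ as in the earlier sketches, and `san` is the sanitization fold sketched in the next subsection.

```python
SIGMA = 96  # illustrative confirmation depth σ

def log_fin(view, i, t):
    # The bft view comes from the context_bft field of the bc-block at
    # depth σ in node i's bc-best-chain at time t.
    context = view.best_chain_block_at_depth(i, t, SIGMA).context_bft
    # Concatenate the bc-chain snapshots referenced by the last-final
    # bft-chain in that view, then sanitize.
    snapshots = view.snapshot_chains(view.bft_last_final(context))
    return san([tx for chain in snapshots for tx in chain])

def log_bda(view, i, t, mu):
    # Append the transactions of node i's μ-confirmed bc-best-chain, then
    # sanitize; the Ledger prefix property below follows from this shape.
    suffix = view.best_chain_transactions(i, t, confirmation_depth=mu)
    return san(log_fin(view, i, t) + suffix)
```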

Security caveat

Using small values of μ is not recommended. The smallest value is allowed only because it is used in the definition of contextual validity in the next section.

Sanitization is the same as in Snap‑and‑Chat: it does a left fold over an input sequence of transactions, adding each of them to a running bc‑context if they are bc‑context‑valid. The initial bc‑context for this fold corresponds to the state after the genesis bc‑block. This process can be optimized by immediately discarding duplicate bc‑blocks in the concatenated snapshot chains after their first occurrence.
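A minimal sketch of this fold, assuming a hypothetical Context type with genesis() and try_add(tx), the latter returning the new context, or None if the transaction is not bc‑context‑valid:

```python
def san(txs):
    ctx = Context.genesis()       # state after the genesis bc-block
    out = []
    for tx in txs:
        new_ctx = ctx.try_add(tx)
        if new_ctx is not None:   # keep tx iff it is bc-context-valid
            ctx = new_ctx
            out.append(tx)
    return out
```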

Implementation advice

Memoize the sanitization function. If you see an input that extends a previous input (either adding a new snapshot, or adding blocks to the last snapshot), compute the result incrementally from the previous result.
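A sketch of this incremental memoization, under the same hypothetical Context type as above:

```python
class IncrementalSan:
    def __init__(self):
        self.prefix = []              # longest input sequence processed so far
        self.ctx = Context.genesis()
        self.out = []

    def san(self, txs):
        n = len(self.prefix)
        if txs[:n] != self.prefix:
            # Not an extension of the previous input: recompute from scratch.
            self.prefix, self.ctx, self.out = [], Context.genesis(), []
            n = 0
        for tx in txs[n:]:            # fold only over the newly added suffix
            new_ctx = self.ctx.try_add(tx)
            if new_ctx is not None:
                self.ctx = new_ctx
                self.out.append(tx)
        self.prefix = list(txs)
        return list(self.out)
```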

Theorem: Ledger prefix property

For all nodes i, times t, and confirmation depths μ, LOG^t_{fin,i} ⪯ LOG^t_{bda,i,μ}.

Proof: By construction of LOG^t_{fin,i} and LOG^t_{bda,i,μ}.

Πbc contextual validity change

In Crosslink or Snap‑and‑Chat, contextual validity is used in three different cases:

a) sanitization of the constructed ledgers LOG^t_{fin,i} and LOG^t_{bda,i,μ};

b) checking whether a block is bc‑block‑valid in order to append it to a chain;

c) checking whether a transaction in the mempool could be included in a block.

Note that the model we presented earlier considers only adding a transaction to an origbc‑context. It is straightforward to apply this to the sanitization use case a), as described in the previous section. However, the other two use cases raise the question of which starting bc‑context to use.

We resolve this by saying that new blocks and mempool transactions are checked in the bc‑context obtained by taking the bc‑best‑chain to end at H, where H is the relevant parent bc‑block.

Πbc block validity

Genesis rule: For the genesis bc‑block we must have that its context_bft field is the genesis bft‑block, and therefore bft‑last‑final of that context is also the genesis bft‑block.

A bc‑block is bc‑block‑valid iff all of the following hold:

  • Inherited origbc rules: H satisfies the corresponding origbc‑block‑validity rules.
  • Valid context rule: H.context_bft is bft‑block‑valid.
  • Extension rule: bft‑last‑final(H⌈1bc.context_bft) ⪯bft bft‑last‑final(H.context_bft).
  • Finality depth rule: Define finality‑depth(H) to be the number of bc‑blocks from the last common ancestor of H and the last snapshot referenced by bft‑last‑final(H.context_bft), to H (as explained below). Then either finality‑depth(H) is within the allowed finalization gap bound, or H must be a safety block.

Explain the definition of finality‑depth.

We need a way to measure how far ahead a given bc‑block is relative to the corresponding finalization point. The finalized ledger is a sequence of transactions, not blocks, so it is not entirely obvious how to do this.

Let H be the tip of node i’s bc‑best‑chain at time t.

Then bft‑last‑final(H.context_bft) is the bft‑block providing the last snapshot that will be used to construct LOG^t_{fin,i}. Call that snapshot S, and let P be the last common ancestor of S and H.

The idea behind the above definition is that:

  • If S is an ancestor of H, then P is equal to S, and that is the last bc‑block that contributes to LOG^t_{fin,i}. If the best chain were to continue from H without a rollback, then the bc‑blocks to be finalized next would be the ones from S to H, and so the number of those blocks gives the finality depth.
  • Otherwise, S must be on a different fork, and the bc‑blocks to be finalized next would resume from the fork point P toward H, unless some of those blocks’ transactions have already been included, as discussed below. LOG^t_{fin,i} will definitely contain transactions from blocks up to P, but usually not subsequent transactions on S’s fork. So it is still reasonable to measure the finality depth from P to H.

Strictly speaking, it is possible that a previous bft‑block took a snapshot S′ that is between P and H. This can only happen if there have been at least two rollbacks longer than σ blocks (i.e. we went more than σ blocks down H’s fork from P, taking the snapshot S′; then reorged to more than σ blocks down S’s fork, taking S; then reorged again to H’s fork). In that case, the finalized ledger would already have the non‑conflicting transactions from blocks between P and S′, and it could be argued that the correct definition of finality depth in such cases is the number of blocks from S′ to H, not from P to H.

However,

  • The definition above is simpler and easier to compute.
  • The effect of overestimating the finality depth in such corner cases would only be to enforce Safety Mode slightly sooner, which seems fine (and even desirable) in any situation where there have been at least two rollbacks longer than σ blocks.
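To make the definition concrete, here is a sketch of the computation; the `chain` object and its accessors are hypothetical:

```python
def finality_depth(chain, tip, snapshot) -> int:
    # P: last common ancestor of the last-final snapshot S and the tip H.
    p = chain.last_common_ancestor(snapshot, tip)
    # Count the bc-blocks from P to the tip. If S is an ancestor of H then
    # P = S and this is exactly the number of blocks awaiting finalization.
    # In the S' corner case above, this may overestimate the depth, which
    # only enforces Safety Mode slightly sooner.
    return chain.height(tip) - chain.height(p)
```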

By the way, the “tailhead” of a tailed animal is the area where the posterior of the tail joins the rump (also called the “dock” in some animals).

Πbc honest block production

An honest producer of a bc‑block must follow the consensus rules under block validity above. In particular, it must produce a safety block if required to do so by the Finality depth rule. It also must only include transactions that are valid in the context specified under contextual validity change above.

To choose H.context_bft, the producer considers the subset of the tips of bft‑valid‑chains in its view that would satisfy the Extension rule: it chooses one of the longest of these chains, C, breaking ties by maximizing the score of the snapshot referenced by bft‑last‑final(C). If there is still a tie then it is broken arbitrarily.

The honest block producer then sets the new block’s context_bft field to the tip of C.
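A sketch of this choice (the `view` object and its accessors are hypothetical; any remaining tie is broken arbitrarily, here by list order):

```python
def choose_context_bft(view, parent):
    # Consider only tips T of bft-valid-chains that would satisfy the
    # Extension rule relative to the parent bc-block.
    tips = [t for t in view.bft_valid_chain_tips()
            if view.descends(view.bft_last_final(t),
                             view.bft_last_final(parent.context_bft))]
    longest = max(view.chain_length(t) for t in tips)
    candidates = [t for t in tips if view.chain_length(t) == longest]
    # Tie-break by the score of the snapshot referenced by bft-last-final(T),
    # to inhibit finalization of an adversary chain with a lesser score.
    return max(candidates,
               key=lambda t: view.score(view.snapshot_of(view.bft_last_final(t))))
```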

Why not choose T such that H⌈1bc.context_bft ⪯bft bft‑last‑final(T)?

The effect of this would be to tend to follow, more often, the last bft‑block seen by the producer of the parent bc‑block, if there is a choice. It is not always possible to do so, though: the resulting set of candidates for T might be empty.

Also, it is not clear that giving the parent bc‑block‑producer the chance to “guide” what bft‑block should be chosen next is beneficial, since that producer might be adversarial and the resulting incentives are difficult to reason about.

Why choose the longest C, rather than the longest bft‑last‑final(C)?

We could have instead chosen C to maximize the length of bft‑last‑final(C). The rule we chose follows Streamlet, which builds on the longest notarized chain, not the longest finalized chain. This may call for more analysis specific to the chosen BFT protocol.

Why this tie‑breaking rule?

Choosing the bft‑chain that has the last final snapshot with the highest score tends to inhibit an adversary’s ability to finalize its own chain if it has a lesser score. (If it has a greater score, then the adversary has already won a hash race and we cannot stop its chain from being finalized.)

If we switch to using the Increasing Tip Score rule, then it would be more consistent to also change this tie‑breaking rule to use the tip score, i.e. to maximize the score of the snapshot referenced by the tip of C.

At this point we have completed the definition of Crosslink. In Security Analysis of Crosslink, we will prove it secure.