Syncing...

We put all of Bitcoin’s proof of work into a single zkSNARK. By using the power of recursive SNARKs and massively parallel compute, we create a single succinct proof of the total amount of work for the Bitcoin chain’s full history.

In a blockchain such as Bitcoin, new blocks of transactions may only be added through Proof of Work: a nonce must be discovered (or “mined”) such that the header’s double SHA256 hash falls below the difficulty target. As such, the “work” required to mine each block can be estimated as the size of the hash output space (2^256) divided by the difficulty target. The Bitcoin chain has “probabilistic finality”, meaning that the current tip is determined by the “heaviest chain”, i.e. the chain of blocks with the most accumulated “work”.
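To make the work estimate concrete, here is a small Python sketch (illustrative only, separate from our circuits) that decodes the compact “bits” difficulty encoding from a header and computes the expected work per block; the formula 2^256 / (target + 1) is the one Bitcoin Core uses for chainwork:

```python
# Sketch (illustrative, not circuit code): decode the compact "bits"
# difficulty encoding and estimate expected work per block. The work
# formula 2**256 // (target + 1) matches Bitcoin Core's GetBlockProof.

def bits_to_target(bits: int) -> int:
    """Decode the compact nBits encoding into a 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def block_work(bits: int) -> int:
    """Expected number of hash attempts to find a header at or below target."""
    target = bits_to_target(bits)
    return 2**256 // (target + 1)

genesis_bits = 0x1D00FFFF          # nBits of the Bitcoin genesis block
print(hex(bits_to_target(genesis_bits)))
print(block_work(genesis_bits))    # roughly 2**32 hash attempts
```

Summing `block_work` over every header in the chain gives exactly the “heaviest chain” quantity our proof attests to.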

Fortunately, the header of each block encodes all this information in a fairly simple way, along with the hash of the parent block. This means that we can use the headers alone to prove the heaviest chain!
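For illustration, the 80-byte header layout can be parsed in a few lines of plain Python (again, not our circuit code); the constant below is the well-known Bitcoin genesis header, used purely as example input:

```python
import struct

# The 80-byte Bitcoin genesis block header (well-known constant),
# used here purely as example input.
GENESIS_HEADER = bytes.fromhex(
    "01000000" + "00" * 32
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    + "29ab5f49" + "ffff001d" + "1dac2b7c"
)

def parse_header(h: bytes) -> dict:
    """Split an 80-byte header into its fields (all stored little-endian)."""
    assert len(h) == 80
    (version,) = struct.unpack("<I", h[0:4])
    prev_hash = h[4:36][::-1].hex()    # reverse for the usual big-endian display
    merkle_root = h[36:68][::-1].hex()
    time, bits, nonce = struct.unpack("<III", h[68:80])
    return {"version": version, "prev_hash": prev_hash,
            "merkle_root": merkle_root, "time": time,
            "bits": bits, "nonce": nonce}

hdr = parse_header(GENESIS_HEADER)
print(hex(hdr["bits"]), hdr["time"], hdr["nonce"])
```

The `prev_hash` field is what chains headers together, and `bits` carries the difficulty target, so the header alone carries everything needed to verify and accumulate work.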

Light clients essentially do this: they only keep track of block headers and verify proof of work. Light clients are useful for compute-constrained environments (like Ethereum). However, this still requires downloading all block headers. It’d be nice if a light client could sync and keep up to date with the chain without having to download and verify proof of work for every single header.
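The per-header check a light client performs, double SHA256 plus a comparison against the target decoded from the header’s own “bits” field, can be sketched in plain Python (using the genesis header as example input):

```python
import hashlib

# Illustrative light-client check (plain Python, not circuit code).
GENESIS_HEADER = bytes.fromhex(
    "01000000" + "00" * 32
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    + "29ab5f49" + "ffff001d" + "1dac2b7c"
)

def header_pow_ok(header: bytes) -> bool:
    """Check that double-SHA256(header) is at or below the header's target."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    # Header hashes are serialized little-endian; interpret accordingly.
    hash_int = int.from_bytes(digest, "little")
    bits = int.from_bytes(header[72:76], "little")
    target = (bits & 0x007FFFFF) << (8 * ((bits >> 24) - 3))
    return hash_int <= target

print(header_pow_ok(GENESIS_HEADER))  # True
```

A traditional light client repeats this check for every one of Bitcoin’s 800,000+ headers; BTC Warp replaces all of those repetitions with one proof verification.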

⚡ BTC Warp ⚡ is a single succinct proof that attests to the entire history of work in the Bitcoin chain. We use recursive SNARKs to generate a succinct proof that a particular header has a specified amount of work attached to it. Light clients can efficiently verify this single SNARK proof to keep updated with the tip of the chain without having to download all block headers.

Looking to the future, this project is notable because with EVM verification of plonky2 proofs, you can zero-shot sync a Bitcoin light client running in an Ethereum smart contract. This opens the door for a trust-minimized BTC <> Ethereum bridge. There’s no longer a need to verify every header’s proof of work; we can verify a single proof to “jump” multiple headers at a time. Other potential use cases include: (1) ultralight clients for mobile and in-browser wallets, (2) zero-shot Bitcoin node syncing, and (3) integration with Zcash-style zkSNARKs of Bitcoin’s state transitions to get the security of an honest full node.

First, we generate a succinct proof that a particular header has a certain amount of “work” associated with it. We use the plonky2 framework in Rust to generate these proofs. The core challenges: SHA256 is very SNARK-unfriendly arithmetic; the big-integer comparison checking that the hash is indeed below the difficulty target is hard to express with a SNARK’s bit-level math; and we need to reconcile the little-endian byte order of block headers.
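As an illustration of the comparison the circuit must express, here is a Python sketch that splits the 256-bit hash and target into 32-bit big-endian limbs and compares them most-significant limb first, the kind of small-word decomposition of big-integer “less than” a SNARK circuit reduces to (the 32-bit limb width here is an assumption for illustration, not necessarily what our plonky2 circuits use):

```python
# Illustration of the comparison the circuit must express: split hash
# and target into 32-bit big-endian limbs and compare lexicographically.
# The 32-bit limb width is an assumption for illustration only.

def to_limbs(value: int, n_limbs: int = 8, width: int = 32):
    """Split a 256-bit integer into n_limbs big-endian words of `width` bits."""
    mask = (1 << width) - 1
    return [(value >> (width * (n_limbs - 1 - i))) & mask for i in range(n_limbs)]

def limb_lt(a, b) -> bool:
    """Big-int 'less than' via most-significant-limb-first comparison."""
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False

# Genesis block hash (big-endian) and its target, as 256-bit integers.
hash_int = 0x000000000019D6689C085AE165831E934FF763AE46A2A6C172B3F1B60A8CE26F
target = 0xFFFF << 208
print(limb_lt(to_limbs(hash_int), to_limbs(target)))  # True
```

In the circuit this also interacts with endianness: the hash limbs must be assembled from the little-endian byte order that SHA256 and header serialization produce.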

Then, since Bitcoin has far too many blocks to prove in a single circuit, we leverage recursive SNARKs to parallelize the proving: we generate proofs of batches of blocks, then proofs of batches of those proofs, and so on, like a tree. Notably, we need a different circuit for each layer, and each layer’s circuit must verify the proofs of the previous layer (or the leaf headers).
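The shape of the proof tree can be sketched with stand-in prover functions (toy placeholders, not plonky2 APIs): leaves “prove” batches of headers, and each inner layer “proves” a batch of child proofs, so a single root proof ends up covering the whole chain.

```python
# Toy sketch of the proof tree; prove_leaf/prove_inner are stand-ins,
# not plonky2 APIs.

def prove_leaf(headers):
    """Leaf circuit: proves PoW for a batch of headers directly."""
    return {"layer": 0, "headers_covered": len(headers)}

def prove_inner(layer, children):
    """Inner circuit: a real circuit here verifies each child proof in-circuit."""
    return {"layer": layer,
            "headers_covered": sum(c["headers_covered"] for c in children)}

def prove_chain(headers, batch=4):
    """Build the tree bottom-up until one root proof remains."""
    proofs = [prove_leaf(headers[i:i + batch])
              for i in range(0, len(headers), batch)]
    layer = 1
    while len(proofs) > 1:
        proofs = [prove_inner(layer, proofs[i:i + batch])
                  for i in range(0, len(proofs), batch)]
        layer += 1
    return proofs[0]

root = prove_chain(list(range(100)), batch=4)
print(root)  # one proof covering all 100 toy "headers"
```

Because the proofs within each layer are independent, every layer can be generated in parallel, which is what makes the AWS fan-out described below possible.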

Finally, even after the circuits are written, we need to set up massively parallel computation to generate the recursive proof for the entire chain, which we did on AWS.

Want to check out the code?