
# Zero Knowledge: SNARKs vs. STARKs

## Introduction

With the rise of cryptocurrencies like Ethereum, zero-knowledge proof technology has grown in popularity due to the variety of its applicable use cases, such as verifiable computation and privacy preservation. In this article, we review a class of zero-knowledge proof constructions introduced by Ben-Sasson, Bentov, Horesh and Riabzev (BBHR18) in 2018 that overcomes the weaknesses of earlier systems discussed below: ZK-STARKs (Zero-Knowledge Scalable Transparent Arguments of Knowledge).

ZK-STARKs are the class of argument systems that satisfy the following properties:

• Zero-Knowledge: Private inputs are shielded
• Scalable: Proofs of computational integrity for computations lasting $T$ cycles are generated in time quasi-linear in $T$ (roughly $T\log T$ cycles) and verified in time exponentially smaller than $T$ (roughly $\log T$ cycles)
• Transparent: No trusted setup, verifier messages are random coins
• Argument of Knowledge: An efficient procedure can extract the secrets from a prover (more informally, proofs can only be generated by parties with knowledge of the private inputs)

## ZK-SNARKs

Proposed by Joe Kilian and Silvio Micali in the 1990s, Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (ZK-SNARKs) are a class of computationally-sound ZK proof systems that were long considered intriguing yet impractical, and were invoked mainly for their theoretical applications. In particular, the feasibility of the prover and verifier runtimes was a major concern.

Recently, new ideas and schemes have led to both improved theoretical foundations and fully working ZK-SNARK implementations, such as the cryptocurrency Zcash. These recent ZK-SNARKs are instantiated differently from the ones originally conceived by Kilian and Micali: they rely primarily on the Knowledge-of-Exponent assumption (KEA1), and they reduce the computation to circuits (and constraints on them), then further to polynomials, before setting up a linear PCP (discussed later) and realizing it as a ZK-SNARK.

The ZK-STARK implementation discussed in this article has roots in the early ZK-SNARK system described by Kilian and Micali, with large improvements in feasibility and efficiency obtained by generalizing the original proofs used (“PCPs”) into IOPs (and, further, IOPPs), which will be discussed later. Although ZK-STARKs are closer in construction to the Kilian-Micali ZK-SNARK, we aim to compare ZK-STARKs against the ZK-SNARKs that have concrete implementations, namely the KEA1-based class, and show where ZK-STARKs offer concrete gains in performance and security guarantees. Hence, from this point forth we will refer to all such KEA1-based ZK-SNARKs simply as ZK-SNARKs. Such ZK-SNARKs have three primary weaknesses that we wish to examine:

• Reliance on a trusted setup mechanism, such as pre-generated keys or a multiparty computation
• Limited scalability when verifying the computational integrity of proofs
• Reliance on elliptic curves, pairings and the Knowledge-of-Exponent assumption, which makes them susceptible to quantum attacks

We now give an overview of the primary novel components (and the intuition behind them) that form the basis of ZK-STARKs. The novel construction presented in the paper is a scalable and transparent ZK system in the IOP model, realized as a ZK-STARK, which consists of the following high-level components:

• Arithmetization to Reed-Solomon (RS) Proximity Testing (RPT) Problems
• Interactive Oracle Proof of Knowledge (IOP) via Fast Reed-Solomon IOP of Proximity (FRI)
• Argument of Knowledge (ZK-STARK)

### Motivation for ZK-STARKs

In the paper by BBHR18, a DNA Profile Match (DPM) problem scenario was used to illustrate the motivation (and application) of ZK-STARKs. In short, the DPM scenario is as follows: suppose that a candidate has been offered a position as a high-ranking government official, conditioned on having committed no serious crimes. The police maintain a large database of criminal DNA records, and the candidate argues that if they were guilty, their DNA would appear in it. If we accept the candidate’s argument, then we would need to check that every criminal DNA entry in the database differs from the candidate’s DNA. Naively, the DNA sample would have to be exposed for public verification (no privacy preserved), and checking the database would take a very long time due to its size.

The DPM scenario poses several problems that motivate ZK-STARKs:

• Computational Integrity: A prover $P$ executing a public computation $C$ on a dataset $D$ may be dishonest. How do we ensure $P$ reports the correct output $\alpha_P=C(D)$?
• Scalable Verification: With a public dataset $D$, any verifier $V$ can verify that $\alpha_P=C(D)$ naively, but it does not scale because $V$ must read $D$ in its entirety and the verifier runtime is equal to the runtime of $C$. How do we make the verification scalable?
• Confidentiality: If $D$ is confidential, then $D$ cannot be made public and any broker $P$ computing $C(D)$ may violate computational integrity under the guise of confidentiality. How do we achieve confidentiality without compromising computational integrity?

## Probabilistically-Checkable Proofs (PCPs)

ZK-based proof and argument systems replace the need for a third-party “auditor” as a way to guarantee computational integrity over some confidential data. More formally:

A Zero-Knowledge (ZK) system $S$ for a publicly computable function $C$ is a pair of randomized algorithms $S=(P,V)$ where the prover $P$ is the algorithm/procedure used to generate proofs of computational integrity, and the verifier $V$ is the algorithm used to check such proofs. $P$ can efficiently generate proofs that $V$ accepts for all true statements, while proofs of false statements are accepted only with negligible probability.

Early ZK systems with scalable verification used Probabilistically Checkable Proofs (PCPs). The PCP setting is to prove that $x \in L$ for some language $L\in\textbf{NTIME}(T(n))$, where:

• Verifier has oracle access to a PCP $\pi$ and runs in time $\text{poly}(n+\log(T(n)))$
• If $x\in L$ then there exists a proof $\pi$ that is accepted with probability 1
• If $x\notin L$ then every proof $\pi^*$ is rejected with probability at least $1/2$

This offered a trade-off: the prover runtime scales polynomially with the naive computation time of $C(D)$, while the verifier runtime is exponentially smaller than the prover runtime – i.e. $T_P=T_C^{\mathcal{O}(1)}$, $T_V=\log^{\mathcal{O}(1)}T_C$. ZK systems based on PCPs (“ZK-PCPs”) offer six primary guarantees:

• Post-Quantum Resilience. The collision-resistant hash functions and commonly accessible random functions (in the Random Oracle Model) that these systems rely on are not known to be susceptible to attacks by quantum computers.
• Proof-of-Knowledge (POK). ZK-PCPs provide a POK, or when realized in the DPM scenario, an Argument-of-Knowledge (ARK).
• Transparency. This is achieved via the verifier’s public randomness, which removes the need for an external trusted setup phase. This is particularly important since the power of even very sophisticated cheating provers is severely limited as long as public randomness is maintained. Such transparency is realized through Arthur-Merlin protocols.
• Universality. ZK-PCPs can be applied to any efficient publicly computable $C$.
• Confidentiality. Private inputs are not compromised.
• Scalable Verification, as discussed in the trade-off above.

Hence, ZK-PCPs are very useful for publicly trusted computational integrity over confidential data. However, until the ZK-STARK paper, no ZK-PCP had been realized in code, since the proofs were very convoluted. More recent implementations of general-purpose ZK systems fail to achieve all of the abovementioned properties simultaneously.
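The $1/2$ soundness bound above may look weak, but repeating the probabilistic check $k$ times with independent randomness drives a false proof’s acceptance probability below $2^{-k}$. A minimal sketch, with the check itself abstracted away as a coin flip:

```python
import random

def verify_once(proof_is_valid):
    """One probabilistic check: a valid proof always passes, while an
    invalid one slips through with probability at most 1/2 (the PCP
    soundness bound)."""
    return proof_is_valid or random.random() < 0.5

def verify(proof_is_valid, k=40):
    """Repeat the check k times with fresh randomness; an invalid proof
    now survives with probability at most 2**-k."""
    return all(verify_once(proof_is_valid) for _ in range(k))
```

With $k=40$ a cheating prover succeeds with probability about $10^{-12}$, which is why a per-query soundness of $1/2$ is enough in practice.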

## Interactive Oracle Proofs (IOPs)

In a bid to improve the scalability of the prover while maintaining the properties of ZK-PCPs, a generalization of the PCP (and other similar models) was suggested: the Interactive Oracle Proof (IOP). By definition, PCP systems are transparent 1-round IOP systems. As with a PCP, the IOP verifier does not need to read the entire prover message and instead samples random locations; unlike a PCP, the prover and verifier interact over several rounds. An IOP is set up to prove, for some language $L\in\textbf{NTIME}(T(n))$, that some $x\in L$:

• Verifier ($V$) has oracle access to first oracle proof $\pi_1$
• $V$ sends public random token $r_1$
• $V$ has oracle access to second oracle proof $\pi_2$
• $\ldots$
• $V$ has oracle access to the $t$-th oracle proof $\pi_t$
• $V$ sends public random token $r_t$
• Verifier runtime is $\text{poly}(n+\log(T(n)))$
• If $x\in L$ there exist proofs $\pi_1,\pi_2,\ldots,\pi_t$ that are accepted with probability 1
• If $x\notin L$ then all proofs $\overrightarrow{\pi}$ are rejected with probability at least $1/2$
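The interaction above can be sketched as a toy driver loop. Everything here is a hypothetical simplification: oracles are modeled as plain lists the verifier may index into, and the verifier’s final decision is abstracted to spot-checking a few positions per oracle rather than reading any oracle in full.

```python
import random

def run_iop(prover, num_rounds, queries_per_oracle=3):
    """Toy IOP driver (hypothetical interface). Each round the prover
    emits an oracle given the public coins so far, and the verifier
    replies with a fresh public random token. At the end the verifier
    samples only a few positions per oracle."""
    oracles, coins = [], []
    for _ in range(num_rounds):
        pi = prover(coins)                     # prover's next oracle message
        oracles.append(pi)
        coins.append(random.randrange(2**32))  # public random token
    # Sub-linear verification: query a handful of random positions.
    samples = [
        [(j, pi[j]) for j in random.sample(range(len(pi)), queries_per_oracle)]
        for pi in oracles
    ]
    return coins, samples
```

The key structural point is that the verifier’s work scales with the number of queries, not with the length of the oracles.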

IOPs with perfect ZK and scalable verifiers for languages in the classes $\textbf{NP}$ and $\textbf{NEXP}$ were recently demonstrated, with prover running time shown to be bounded by $T_C\cdot\log^{\mathcal{O}(1)}T_C$ (referred to in the ZK-STARK paper as “scalable proving time”).

## Scalable Transparent IOP of Knowledge (STIK)

Scalable ZK System: A ZK system is scalable if, for some publicly computable function $C$ that runs in time $T_C$, the prover runtime is bounded by $T_C\cdot\log^{\mathcal{O}(1)}T_C$ (quasi-linear in $T_C$) and the verifier runtime is bounded by $\log^{\mathcal{O}(1)}T_C$ (poly-logarithmic in $T_C$).

ZK-STIK: A ZK system in the Scalable Transparent IOP of Knowledge (STIK) model for a language $L\in\textbf{NTIME}(T(n))$ is

• Scalable, as per the definition above
• Transparent, through the use of public random coins for all verifier messages
• An IOP of Knowledge, if there is a poly-time extractor that, given access to any prover that convinces the verifier, extracts a witness $w$ for membership of $x\in L$

In short, a ZK system in the IOP model (ZK-IOP) that has the six primary guarantees of a ZK-PCP as well as the scalability described above (Scalable ZK) is called a Scalable Transparent IOP of Knowledge (ZK-STIK).

Theoretical constructions of ZK-STIKs have surfaced recently, but the ZK-STARK paper is the only one so far that demonstrates their efficiency and applicability.

## Arithmetization

Arithmetization is a commonly used technique in numerous modern ZK systems, which involves the reduction of computational problems to algebraic ones. Such algebraic problems involve a polynomial over some finite field $\mathbb{F}$, where the degree of the polynomial is much smaller than the size of $\mathbb{F}$ (low-degree).

Why do we need such reductions? The math is similar to the polynomial oversampling used in erasure coding (which makes data fault-tolerant). If we encode some data set as a polynomial of degree $d=10^6$, we can sample $2d$ points from the polynomial; any $d+1$ of those points then allow us to recover the original data set via polynomial interpolation. Conversely, any single error in the original data set is amplified: it changes at least $d$ points on the oversampled polynomial. In a similar fashion, algorithms in ZK-STARKs use polynomial oversampling and interpolation for this error-amplification effect.
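The error-amplification effect can be demonstrated concretely with Lagrange interpolation over a small prime field (a toy stand-in for the large fields used in practice, with degree 4 instead of $10^6$):

```python
# Encode a data set as a polynomial over F_97, oversample it, and watch
# a single-symbol error spread across the evaluations.
P = 97

def interpolate(points):
    """Return a function evaluating, over F_P, the unique polynomial of
    degree < len(points) passing through the given (x, y) points."""
    def evaluate(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
        return total
    return evaluate

data = [3, 1, 4, 1, 5]                            # 5 symbols -> degree-4 polynomial
f = interpolate(list(enumerate(data)))
codeword = [f(x) for x in range(2 * len(data))]   # oversample to 10 points

corrupt = list(data)
corrupt[0] = (corrupt[0] + 1) % P                 # flip a single symbol
g = interpolate(list(enumerate(corrupt)))
diff = sum(f(x) != g(x) for x in range(2 * len(data)))
# Two distinct degree-4 polynomials agree on at most 4 points, so the
# two codewords must differ on at least 10 - 4 = 6 positions.
```

A one-symbol change in the data thus becomes a disagreement on most of the oversampled codeword, which is exactly what makes cheating detectable with few queries.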

In a ZK system, arithmetization begins with a computational integrity claim that some prover $P$ wishes to prove, e.g. “$\alpha$ is the result of executing $T$ steps of $C$ on input $x$”. The claim is converted into a polynomial (via an algebraic intermediate representation, or AIR), and the task becomes determining whether it is proximal to the expected low-degree polynomial. The problem would be easy if we could look at every point on the polynomial, but that is not viable for large polynomials in practice – hence the number of queries is limited. More concretely, given a polynomial of very high degree $d$, we want to verify the degree bound with far fewer than $d$ queries. This is very difficult to do directly; however, if we sample a small subset of points and perform a proximity test (there is always the possibility that most points lie on the low-degree polynomial while some do not), we can reduce the false-positive rate to a negligible value. The exact arithmetization is much more complex, but this hopefully provides some high-level intuition.
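A minimal sketch of such a query-limited degree test, under toy parameters: interpolate the oracle on $d+1$ sampled points, then spot-check a handful of extra random points against the interpolant. The function names and parameters here are illustrative, not from the paper.

```python
import random

P = 97  # toy prime field

def lagrange_at(points, x):
    """Evaluate at x, over F_P, the unique polynomial through the points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def looks_degree_at_most(oracle, domain, d, extra_queries=10):
    """Hypothetical proximity test using far fewer than |domain| queries:
    interpolate the oracle's values on d+1 sampled points, then check that
    a few additional random points agree with the interpolant. An oracle
    far from every degree-<=d polynomial fails each extra check with
    probability proportional to its distance, so repetition makes the
    false-positive rate negligible."""
    pts = random.sample(domain, d + 1 + extra_queries)
    base, checks = pts[:d + 1], pts[d + 1:]
    support = [(x, oracle(x)) for x in base]
    return all(lagrange_at(support, x) == oracle(x) for x in checks)
```

A genuine low-degree oracle always passes, while a table of random values is rejected except with vanishing probability.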

The Reed-Solomon Proximity Testing problem is the reduction target of computational problems in ZK-STARKs.

Reed-Solomon Codes: For an $n$-element evaluation set $S$ in a finite field $\mathbb{F}$ ($S\subset\mathbb{F}$, $|S|=n$) and rate parameter $\rho\in(0,1]$, $\textbf{RS}[\mathbb{F},S,\rho]$ is the set of Reed-Solomon codewords $f:S\rightarrow\mathbb{F}$ that are evaluations of polynomials of degree $d<\rho n$.

Reed-Solomon Proximity (RP) Problem: Given oracle access to a function $f:S\rightarrow\mathbb{F}$, the Reed-Solomon Proximity Problem asks that a verifier $V$ distinguish between two cases with high probability: (1) $f\in \textbf{RS}[\mathbb{F},S,\rho]$ and (2) $f$ is $\delta$-far in relative Hamming distance from every $f^\prime\in\textbf{RS}[\mathbb{F},S,\rho]$.

Reed-Solomon Proximity Testing (RPT) Problem: RPT extends the RP Problem with access to a proof of proximity (such as an interactive oracle proof of proximity (IOPP)) and its oracle.
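The definitions above can be made concrete over a toy prime field (the paper itself favors binary fields for efficiency; $\mathbb{F}_{97}$ here is purely illustrative):

```python
# A codeword of RS[F, S, rho] is the evaluation table on S of a polynomial
# of degree d < rho*n, and "delta-far" is measured in relative Hamming
# distance (the fraction of disagreeing positions).
P = 97

def rs_encode(coeffs, S):
    """Evaluate the polynomial (coefficients low-to-high) on the set S."""
    def horner(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [horner(x) for x in S]

def relative_distance(f, g):
    """Fraction of positions on which two equal-length words disagree."""
    return sum(a != b for a, b in zip(f, g)) / len(f)

S = list(range(16))                      # evaluation set, n = 16
rho = 0.25                               # rate: degree must satisfy d < rho*n = 4
codeword = rs_encode([7, 3, 0, 1], S)    # degree 3, so a member of RS[F, S, rho]
```

The RP problem then asks a verifier with oracle access and few queries to distinguish such a codeword from a word that is $\delta$-far from every member of the code.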

What’s novel in the BBHR18 paper is that the authors reduced the many computational bottlenecks in arithmetization, which is key to making the system scalable:

• A new protocol, the Fast RS IOPP (FRI), is used to solve the RPT problem. FRI significantly improves the asymptotic and concrete runtimes of previous RPT solutions, reducing the prover’s arithmetic complexity from quasi-linear to strictly linear.
• The binary field $\mathbb{F}_{2^{64}}$ is chosen for its efficiency in arithmetic operations, in conjunction with cryptographic hash functions that are efficient on binary fields (e.g. SHA2, AES128-DM).
• Reducing the maximal degree of the resultant “arithmetized” polynomial. The main bottleneck dominating prover space and time complexity is the cost of performing polynomial interpolations and evaluations, which in turn is dominated by the maximal degree of the polynomials in question. This is achieved through linear-algebraic tricks and computation-structure-specific optimizations, giving a large multiplicative improvement ($>5000$x) over existing arithmetization methods in ZK systems.
• Using a more recent and novel additive FFT for polynomial interpolation and evaluation, which has strictly quasi-linear arithmetic complexity.
• Using the Davies-Meyer hash with AES (AES-DM) as the hash function of choice for the commit-reveal scheme, since AES is available as a native instruction set on many modern processors.
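To illustrate FRI’s core step, here is a sketch of a single folding round on coefficient vectors over a toy prime field. The real protocol works over binary fields such as $\mathbb{F}_{2^{64}}$ and operates on Merkle-committed evaluation tables, so this is intuition only: writing $f(x)=f_e(x^2)+x\,f_o(x^2)$ and combining the halves with a random verifier challenge $\alpha$ yields $g(y)=f_e(y)+\alpha f_o(y)$, a polynomial of half the degree. Repeating about $\log d$ times reduces the proximity claim to one of constant degree that the verifier can check directly.

```python
P = 97  # toy prime field standing in for the paper's binary fields

def fri_fold(coeffs, alpha):
    """One FRI folding round on a coefficient vector (low-order first):
    split f into even/odd coefficient halves and combine them with the
    verifier challenge alpha, halving the degree."""
    even, odd = coeffs[0::2], coeffs[1::2]
    folded = [(e + alpha * o) % P for e, o in zip(even, odd)]
    folded += even[len(odd):]  # leftover coefficient when len(coeffs) is odd
    return folded

f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree-7 polynomial
g = fri_fold(f, alpha=11)      # degree drops to 3
```

The fold is consistent in the sense that $g(x^2)=f_e(x^2)+\alpha f_o(x^2)$ can be recomputed from $f(x)$ and $f(-x)$, which is what the verifier spot-checks at random points.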

## From STIK to STARK

After a computational integrity claim is converted into an algebraic representation, an IOP is set up for the proximity proof of the algebraic representation (polynomial). Such a setup is also known as an Interactive Oracle Proof of Proximity, or IOPP – it differs from an IOP in that, instead of proving some $x\in L$ for $L\in\textbf{NTIME}(T(n))$, we want to prove that some oracle $f$ (in this case, a Reed-Solomon (RS) code) is proximal to some code $\mathbb{W} \subseteq \mathbb{F}^n$ from a family of codes (such as $\textbf{RS}[\mathbb{F},S,\rho]$). If $f\notin \mathbb{W}$ then all proofs $\overrightarrow{\pi}$ are rejected with probability at least $s(\Delta(f, \mathbb{W}))$, where $\Delta$ is a distance function and $s$ is a soundness function that we wish to maximize. Once we are done with all the transformations to a scalable and transparent IOPP model (STIK), there are two main ways to transform the system into a STARK:

• Interactive STARK (ZK-iSTARK): Collision-resistant hash functions can be used to convert a STIK into an interactive argument system (as shown by Kilian). If the STIK has perfect ZK (as defined before) then the argument system has computational ZK.
• Non-Interactive STARK (ZK-nSTARK): Via the PCP and IOP models, any STIK can be compiled into a non-interactive argument system in the Random Oracle Model (ROM).
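For the non-interactive variant, each verifier coin is derived by hashing the transcript so far – the Fiat-Shamir heuristic, with a concrete hash function standing in for the random oracle. A minimal sketch (SHA-256 and the message names here are illustrative choices, not the paper’s):

```python
import hashlib

def fiat_shamir_coin(transcript: bytes, round_index: int) -> int:
    """Derive one public verifier coin by hashing the transcript so far;
    SHA-256 stands in for the random oracle."""
    data = transcript + round_index.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Toy usage: the prover's oracle commitments replace interactive messages,
# and each round's "random" challenge is recomputed from the transcript.
transcript = b""
coins = []
for i, commitment in enumerate([b"oracle-commitment-1", b"oracle-commitment-2"]):
    transcript += commitment
    coins.append(fiat_shamir_coin(transcript, i))
```

Because anyone can recompute the coins from the proof itself, no interaction is needed; soundness then rests on modeling the hash as a random oracle.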

## ZK-STARKs vs. ZK-SNARKs

ZK-SNARKs have some problems that prevent them from gaining traction in mainstream cryptographic use, such as in blockchain technology and cryptocurrencies:

• There is a trusted setup phase that can be compromised. For example, the cryptocurrency Zcash, which uses ZK-SNARKs, has a Genesis block that must be set up under the assumption that the setup ceremony was secure.
• The scalability can be improved in terms of proof generation and verification.
• For reasons discussed below, ZK-SNARKs are vulnerable to quantum attacks.

### Transparency

The largest problem with ZK-SNARKs is that the user base has to trust the initial setup phase (and in turn, the parties involved in setting up the system honestly). The users will never know if the setup phase was compromised from the get-go (or in the future). Contrast this with ZK-STARKs, where there is no external trusted setup phase: all randomness is publicly verifiable, so no party can (without being discovered) obtain the setup parameters and generate false accepting proofs.

### Scalability

ZK-STARKs are far more scalable than ZK-SNARKs (initial benchmarks from the paper show promising results), improving many of the bottlenecks in ZK-SNARKs:

• Arithmetic complexity. The reduction to algebraic representations and their relevant computations are key to proof generation and verification, and the authors have made many optimizations to reduce the computational and arithmetic complexity.
• Prover and verifier complexity. Due to the reductions in arithmetic complexity, ZK-STARK proof generation is faster than ZK-SNARKs by a factor of roughly 8–10x with respect to computation size. The FRI protocol and a newer additive FFT substantially speed up the verifier runtime in ZK-STARKs.
• Communication complexity. The number of messages required between $n$ parties to solve a problem (here, verifying a proof) grows linearly in $n$ for a ZK-SNARK; in contrast, ZK-STARK communication grows at a much slower rate.

While ZK-STARKs have the above scalability benefits, ZK-SNARKs have a much smaller proof size. In this regard, ZK-STARKs have a scalability limitation of their own.

### Post-Quantum Resilience

ZK-SNARKs rely on public-key cryptography – elliptic curves, pairings and assumptions such as KEA1 – for their security. With the (upcoming) advent of quantum computing, there exist algorithms that break such cryptosystems: for example, Shor’s algorithm efficiently solves the integer factorization and discrete logarithm problems, allowing private keys to be derived from their public keys. In contrast, ZK-STARKs adopt collision-resistant hash functions for ZK-iSTARKs, and the random oracle model for ZK-nSTARKs. In the IOPP of a ZK-STARK, Merkle trees (which are post-quantum resilient) are used. All these cryptographic primitives make ZK-STARKs resistant to known quantum computation attacks.
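A sketch of the Merkle-tree commitment underlying the IOPP, using SHA-256 as an illustrative collision-resistant hash: the prover commits to an entire oracle with a single root, and each verifier query is answered with one leaf plus a logarithmic-size authentication path.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Collision-resistant hash (SHA-256 here) -- the only cryptographic
    assumption the commitment needs, with no known quantum break beyond
    a generic Grover-style speedup."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of oracle entries (length a power of two)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Authentication path: one sibling hash per tree level."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(root, leaf, index, path):
    """Recompute the root from one revealed leaf plus its path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) for i in range(8)]
root = merkle_root(leaves)
```

Security rests only on collision resistance of the hash, which is why this commitment (unlike pairing-based ones) is believed to survive quantum attacks.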

## Conclusion

ZK-STARKs are still very new and in the process of commercialization. If realized, they could be leveraged across many use cases. The added benefits of scalability and transparency with universal applicability create better trust in the technology. ZK-STARKs provide a safer alternative to ZK-SNARKs, with a simpler structure in terms of the underlying cryptographic assumptions.

## References

[BBHR18] Eli Ben-Sasson, Iddo Bentov, Yinon Horesh and Michael Riabzev. Scalable, transparent, and post-quantum secure computational integrity. In Cryptology ePrint Archive, Report 2018/046. https://eprint.iacr.org/2018/046

[Kil92] Joe Kilian. A note on efficient zero-knowledge proofs and arguments (extended abstract). In STOC ’92: Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, pages 723–732, May 1992.

[Mic94] Silvio Micali. CS proofs (extended abstract). In 35th FOCS, pages 436–453. IEEE Computer Society Press, November 1994.

[BBHR17] Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev. Fast Reed-Solomon interactive oracle proofs of proximity (2nd revision). Electronic Colloquium on Computational Complexity (ECCC), 24:134, 2017.