Introduction

Stwo is a state-of-the-art framework for creating ZK proofs that brings many compelling features to the ZK ecosystem, including:

  • Provides both the frontend and backend for creating ZK proofs
  • Frontend is designed to be flexible to allow you to express your own proof system
  • Backend leverages Circle STARKs over the Mersenne31 prime field for fast prover performance
  • Seamlessly integrated with Cairo

This book explains why you might want to build a proof system with Stwo, guides you through the process of creating your own proof system, and provides in-depth explanations of the inner workings of Stwo.

Why Use a Proof System?

At its core, a proof system can prove the validity of a statement \(C(x)=y\), where \(C\) is a representation of some logic, \(x\) is an input, and \(y\) is the output of that logic. (Assuming that we are only dealing with logic that can be expressed as a computation, we will henceforth refer to this logic as a computation.) This means that we can check the validity of such a statement either by running the computation directly, or by verifying the proof produced by the proof system. The verifier benefits from the second option in terms of time and space if verifying the proof is faster than running the computation, or if the proof is smaller than the input to the statement.

This property of a proof system is often referred to as succinctness, and it is exactly why proof systems have seen wide adoption in the blockchain space, where on-chain computation is much more expensive than off-chain computation. Using a proof system, a large batch of computation that would otherwise be executed on-chain can be replaced with a proof of its execution, which is then verified on-chain. This way, proof generation can be handled off-chain on large machines, and only the proof verification needs to be done on-chain.

But there are applications of proof systems beyond blockchains. Since a proof is verifiable as well as succinct, it can also be used as auxiliary data to verify that the computation of an untrusted party was done correctly. For example, when we delegate computation to an untrusted server, we can ask it to return, along with the result, a proof that the result indeed came from running a specific computation. Another example is asking a server running an ML model to provide a proof that it ran inference on the correct model. The size of the accompanying proof and the time to verify it are negligible compared to the cost of running the computation, yet we gain the guarantee that the computation was done correctly.

Another optional feature of proof systems is zero-knowledge, which means that the proof reveals nothing about the computation other than its validity. In general, the output \(y\) of the computation \(C(x)=y\) will be public (i.e. revealed to the verifier), but the input \(x\) may be split into public and private parts, so that the verifier does not learn anything about the private part. With this feature, the intermediate values computed by the prover while computing \(C(x)\) will also be hidden from the verifier.

Why Stwo?

Before we dive into why we should choose Stwo, let's define some terminology. When we talked about proof systems in the previous section, we were only referring to the part that takes a statement and outputs a proof. In reality, however, we first need to structure the statement in a way that it can be proven by the proof system. This structuring part is often referred to as the frontend, and the proof system is commonly called the backend.

With that out of the way, let's dive into some of the advantages of using Stwo.

First, Stwo is a standalone framework that provides both the frontend and backend and therefore handles the entire proving process. Other frameworks provide only a frontend or only a backend; that modular structure has its advantages, since it makes it possible to pick and choose a frontend or backend of one's liking. However, having a single integrated frontend and backend reduces the complexity of the system and is also easier to maintain.

In addition, Stwo's frontend structures statements in the Algebraic Intermediate Representation (AIR), which is a representation that is especially useful for proving statements that are repetitive (e.g. the CPU in a VM, which essentially repeats the same fetch-decode-execute over and over again).

Stwo's backend is also optimized for prover performance. This is largely due to three factors.

  1. It implements STARKs, or hash-based SNARKs, which boast faster provers than elliptic curve-based SNARKs like Groth16 or PLONK. This improvement comes mainly from running the majority of the computation in a small prime field (32 bits); elliptic curve-based SNARKs, on the other hand, need to use large prime fields (e.g. 254-bit prime fields), which incur a lot of overhead since most computation does not require that many bits.

  2. Even among STARK backends, Stwo provides state-of-the-art prover performance by using the Mersenne-31 prime field (modulo \(2^{31} - 1\)), which is faster than other popular 32-bit prime fields like BabyBear (modulo \(2^{31} - 2^{27} + 1\)). We suggest going through this post for a breakdown of why this is the case.

  3. Finally, Stwo offers various CPU and GPU optimizations that improve prover performance, as shown in Figure 1 below. It can also be compiled to WASM, allowing for fast proving in web environments.

Figure 1: Prover performance optimizations in Stwo

One of the drawbacks of STARKs is that they have a larger proof size compared to elliptic curve-based SNARKs. One way to mitigate this drawback is by batching multiple proofs together to form a single proof.

Note

On zero-knowledge:

As of this writing, Stwo does not provide the "zero-knowledge" feature. "Zero-knowledge" here refers to the fact that the proof should not reveal any information beyond the validity of the statement. This does not hold for Stwo, as it reveals to the verifier commitments to its witness values without hiding them by e.g. adding randomness. This leaks some information about the witness values, which may be used in conjunction with other information to infer the witness values.

AIR Development

This section is intended for developers who want to create custom proofs using Stwo (proofs of custom VMs, ML inference, etc.). It assumes that the reader is familiar with Rust and has some background knowledge of cryptography (e.g. finite fields). It also assumes that the reader is familiar with the concept of zero-knowledge proofs and knows what they want to create a zero-knowledge proof for, but it does not assume any firsthand experience with zero-knowledge proof systems.

Note

All the code that appears throughout this section is available here.

Writing a Simple AIR

Figure 1: Proving lifecycle in Stwo

Welcome to the guide for writing AIRs in Stwo!

In this "Writing a Simple AIR" section, we will go through the process of writing a simple AIR from scratch. This requires some understanding of the proving lifecycle in Stwo, so we added a diagram showing a high-level overview of the whole process. As we go through each step, please note that the diagram may contain more steps than the code. This is because there are steps that are abstracted away by the Stwo implementation, but is necessary for understanding the code that we need to write when creating an AIR.

Hello (ZK) World

Let's first set up a Rust project with Stwo.

$ cargo new stwo-example

We need to specify the nightly Rust compiler to use Stwo.

$ echo -e "[toolchain]\nchannel = \"nightly-2025-01-02\"" > rust-toolchain.toml

Now let's edit the Cargo.toml file as follows:

[package]
name = "stwo-examples"
version = "0.1.0"
edition = "2021"
license = "MIT"

[dependencies]
stwo-prover = { git = "https://github.com/starkware-libs/stwo.git", rev = "92984c060b49d0db05e021883755fac0a71a2fa7" }
num-traits = "0.2.17"
itertools = "0.12.0"
rand = "0.8.5"

We are all set!

Writing a Spreadsheet

Figure 1: Prover workflow: Create a table

In order to create a proof, we first need to build a table of rows and columns. This is no different from writing integers into an Excel spreadsheet, as we can see in Figure 1.

Figure 2: From spreadsheet to table

But there is a slight caveat to consider when creating the table. Stwo implements SIMD operations to speed up the prover on the CPU, but this requires providing the table cells in chunks of 16 rows. Simply put, this is because Stwo's SIMD backend operates on 16 lanes of 32-bit integers, which means the same instruction can run simultaneously on 16 different values.

Thus, for our table, we will need to create 14 dummy rows to bring the total number of rows to 16, as shown in Figure 3. For the sake of simplicity, however, we will omit the dummy rows in the diagrams of the following sections.

Figure 3: Creating a table with 16 rows

Given all that, let's create this table using Stwo.

use stwo_prover::core::{
    backend::{
        simd::{column::BaseColumn, m31::N_LANES},
        Column,
    },
    fields::m31::M31,
};

fn main() {
    let num_rows = N_LANES;

    let mut col_1 = BaseColumn::zeros(num_rows as usize);
    col_1.set(0, M31::from(1));
    col_1.set(1, M31::from(7));

    let mut col_2 = BaseColumn::zeros(num_rows as usize);
    col_2.set(0, M31::from(5));
    col_2.set(1, M31::from(11));
}

As mentioned above, we instantiate the num_rows of our table as N_LANES=16 to accommodate SIMD operations. Then we create a BaseColumn of N_LANES=16 rows for each column and populate the first two rows with our values and the rest with dummy values.

Note that the values in the BaseColumn need to be of type M31, which refers to the Mersenne-31 prime field that Stwo uses. This means that the integers in the table must be in the range \([0, 2^{31}-2]\), i.e. reduced modulo \(2^{31}-1\).
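To see what this means in practice, here is a minimal sketch of M31 arithmetic (assuming, as in the code above, that M31 values can be constructed via M31::from and support the usual arithmetic and comparison operators):

use stwo_prover::core::fields::m31::M31;

fn main() {
    // P = 2^31 - 1, so field elements live in [0, P - 1] = [0, 2^31 - 2].
    let a = M31::from(2147483646); // P - 1, the largest field element
    let b = M31::from(1);

    // Addition wraps around the modulus: (P - 1) + 1 = 0 (mod P).
    assert_eq!(a + b, M31::from(0));

    // Multiplication is reduced modulo P as well.
    assert_eq!(M31::from(7) * M31::from(11), M31::from(77));
}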

Now that we have our table, let's move on!

From Spreadsheet to Trace Polynomials

Figure 1: Prover workflow: Trace polynomials

In the previous section, we created a table (aka spreadsheet). In this section, we will convert the table into something called trace polynomials.

Figure 2: From spreadsheet to trace polynomials

As we can see in Figure 2, the cells in each column of the table can be seen as evaluations of a polynomial. A characteristic of Lagrange interpolation is that interpolating \(n\) distinct points results in a unique polynomial of degree at most \(n-1\). So if we consider each row value \(f(x_i)\) as the evaluation of the column's polynomial at a distinct point \(x_i\), each column determines a unique polynomial of degree \(num\_rows-1\), which is also known as a trace polynomial.
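As a reminder (this is standard Lagrange interpolation, not anything Stwo-specific), the unique polynomial of degree at most \(n-1\) passing through the points \((x_1, f(x_1)), \dots, (x_n, f(x_n))\) can be written as:

\(f(x) = \sum_{i=1}^{n} f(x_i) \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}\)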

We will explain why using a polynomial representation is useful in the next section, but for now, let's see how we can create trace polynomials for our code. Note that we are building upon the code from the previous section, so there's not much new code here.

fn main() {
    // --snip--

    // Convert table to trace polynomials
    let domain = CanonicCoset::new(log_num_rows).circle_domain();
    let _trace: ColumnVec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> =
        vec![col_1, col_2]
            .into_iter()
            .map(|col| CircleEvaluation::new(domain, col))
            .collect();
}

Here, domain refers to the \(x_i\) values at which the trace polynomials are evaluated. For example, the \(x_1, x_2\) values in Figure 2 are the domain values for our example (in reality, the size of the domain needs to be 16 as we have 16 rows). We can ignore terms like CanonicCoset and .circle_domain() for now, but note that the log_num_rows in CanonicCoset::new(log_num_rows).circle_domain() needs to be equal to the log of the actual number of rows used in the table.
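For example, log_num_rows can be derived directly from the number of rows (a hypothetical one-liner for illustration; the full example code may simply define the constant directly):

    // num_rows = N_LANES = 16, so log_num_rows = 4.
    let log_num_rows = num_rows.ilog2();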

Now that we have created 2 trace polynomials for our 2 columns, let's move on to the next section where we commit to those polynomials!

Committing to the Trace Polynomials

Figure 1: Prover workflow: Commitment

Now that we have created the trace polynomials, we need to commit to them.

As we can see in Figure 1, Stwo commits to the trace polynomials by first expanding the trace polynomials (i.e. adding more evaluations) and then committing to the expanded evaluations using a merkle tree.

The rate of expansion (commonly referred to as the blowup factor) is a parameter of FRI and readers who are interested in learning more about how to set this parameter can refer to the Circle-STARKs paper (We are also working on adding an explanation in the book).

For the purposes of this tutorial, we will use the default values for the blowup factor.

const CONSTRAINT_EVAL_BLOWUP_FACTOR: u32 = 1;

fn main() {
    // --snip--

    // Config for FRI and PoW
    let config = PcsConfig::default();

    // Precompute twiddles for evaluating and interpolating the trace
    let twiddles = SimdBackend::precompute_twiddles(
        CanonicCoset::new(
            log_num_rows + CONSTRAINT_EVAL_BLOWUP_FACTOR + config.fri_config.log_blowup_factor,
        )
        .circle_domain()
        .half_coset,
    );

    // Create the channel and commitment scheme
    let channel = &mut Blake2sChannel::default();
    let mut commitment_scheme =
        CommitmentSchemeProver::<SimdBackend, Blake2sMerkleChannel>::new(config, &twiddles);

    // Commit to the preprocessed trace
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(vec![]);
    tree_builder.commit(channel);

    // Commit to the size of the trace
    channel.mix_u64(log_num_rows as u64);

    // Commit to the original trace
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(trace);
    tree_builder.commit(channel);
}

We begin with some setup. First, we create a default PcsConfig instance, which sets the parameters for the FRI and PoW operations. Setting non-default values is related to the security of the proof, which is out of scope for this tutorial.

Next, we precompute twiddles for the domain over which the rows in the table are evaluated. The log size of this domain is set to log_num_rows + CONSTRAINT_EVAL_BLOWUP_FACTOR + config.fri_config.log_blowup_factor, which is the maximum log size of the domain needed throughout the proving process. Why we need to add CONSTRAINT_EVAL_BLOWUP_FACTOR will be explained in the next section.

The final setup is creating a commitment scheme and a channel. The commitment scheme will be used to commit to the trace polynomials as merkle trees, while the channel will be used to keep a running hash of all data in the proving process (i.e. transcript of the proof). This is part of the Fiat-Shamir transformation where randomness can be generated safely even in a non-interactive setting. Here, we use the Blake2sChannel and Blake2sMerkleChannel for the channel and commitment scheme, respectively, but we can also use the Poseidon252Channel and Poseidon252MerkleChannel pair.

Now that we have our setup, we can commit to the trace polynomials. But before we do so, we need to first commit to an empty vector called a preprocessed trace, which doesn't do anything but is required by Stwo. Then, we need to commit to the size of the trace, which is a vital part of the proof system that the prover should not be able to cheat on. Once we have done these, we can finally commit to the original trace polynomials.

Now that we have committed to the trace polynomials, we can move on to how we can create constraints over the trace polynomials!

Evaluating Constraints Over Trace Polynomials

Figure 1: Prover workflow: Constraints

Proving Spreadsheet Functions

When we want to perform computations over the cells in a spreadsheet, we don't want to manually fill in the computed values. Instead, we leverage spreadsheet functions to autofill cells based on a given computation.

We can do the same thing with our table, except in addition to autofilling cells, we can also create a constraint that the result was computed correctly. Remember that the purpose of using a proof system is that the verifier can verify a computation was executed correctly without having to execute it themselves? Well, that's exactly why we need to create a constraint.

Now let's say we want to add a new column C to our spreadsheet that computes the product of the two previous columns plus the first column. We can set C1 to A1 * B1 + A1, as in Figure 2.

In the same vein, we can create a new column in our table that computes the same function of the two previous columns. We can then constrain the value of the third column by creating an equation that must equal 0: col1_row1 * col2_row1 + col1_row1 - col3_row1 = 0. For example, with the values from our table, the first row gives 1 * 5 + 1 = 6, so the third column's first row must be 6 for the constraint to hold.

Figure 2: Proving spreadsheet functions as constraints

Identical Constraints Every Row

As can be seen in Figure 2, our new constraint is identical for every row in the table. This means that instead of writing a separate constraint for each row, we can express a single constraint over the columns, i.e. over the trace polynomials.

Thus, col1_row1 * col2_row1 + col1_row1 - col3_row1 = 0 becomes \(f_1(x) \cdot f_2(x) + f_1(x) - f_3(x) = 0\).

Note

The idea that all rows must have the same constraint may seem restrictive, compared to say a spreadsheet where we can define different functions for different rows. However, we will show in later sections how to handle such use-cases.

(Spoiler alert: it involves selectors and components)

Composition Polynomial

We will now give a name to the polynomial that expresses the constraint: a composition polynomial.

\(C(x) = f_1(x) \cdot f_2(x) + f_1(x) - f_3(x)\)

Basically, in order to prove that the constraints are satisfied, we need to show that the composition polynomial evaluates to 0 over the original domain (i.e. the domain whose size is the number of rows in the table).

But first, as can be seen in Figure 1, we need to expand the evaluations of the trace polynomials by a factor of 2. This is because the composition polynomial has degree 2 while the trace polynomials have degree 1, and thus we need more evaluations to uniquely determine the composition polynomial.

Once we have the expanded evaluations, we can evaluate the composition polynomial. Checking that the composition polynomial evaluates to 0 over the original domain is done in FRI, so once again we need to expand the composition polynomial evaluations by a factor of 2 and commit to them.

We'll see in the code below how this is implemented.

Code

struct TestEval {
    log_size: u32,
}

impl FrameworkEval for TestEval {
    fn log_size(&self) -> u32 {
        self.log_size
    }

    fn max_constraint_log_degree_bound(&self) -> u32 {
        self.log_size + CONSTRAINT_EVAL_BLOWUP_FACTOR
    }

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let col_1 = eval.next_trace_mask();
        let col_2 = eval.next_trace_mask();
        let col_3 = eval.next_trace_mask();
        eval.add_constraint(col_1.clone() * col_2.clone() + col_1.clone() - col_3.clone());
        eval
    }
}

fn main() {
    // --snip--

    let mut col_3 = BaseColumn::zeros(num_rows);
    col_3.set(0, col_1.at(0) * col_2.at(0) + col_1.at(0));
    col_3.set(1, col_1.at(1) * col_2.at(1) + col_1.at(1));

    // Convert table to trace polynomials
    let domain = CanonicCoset::new(log_num_rows).circle_domain();
    let trace: ColumnVec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> =
        vec![col_1, col_2, col_3]
            .into_iter()
            .map(|col| CircleEvaluation::new(domain, col))
            .collect();

    // --snip--

    // Create a component
    let _component = FrameworkComponent::<TestEval>::new(
        &mut TraceLocationAllocator::default(),
        TestEval {
            log_size: log_num_rows,
        },
        QM31::zero(),
    );
}

First, we add a new column col_3 that contains the result of the computation: col_1 * col_2 + col_1.

Then, to create a constraint over the trace polynomials, we first create a TestEval struct that implements the FrameworkEval trait. Then, we add our constraint logic in the FrameworkEval::evaluate function. Note that this function is called for every row in the table, so we only need to define the constraint once.

Inside FrameworkEval::evaluate, we call eval.next_trace_mask() consecutively three times, retrieving the cell values of all three columns (see Figure 3 below for a visual representation). Once we retrieve all three column values, we add a constraint of the form col_1 * col_2 + col_1 - col_3, which should equal 0.

Figure 3: Evaluate function

We also need to implement FrameworkEval::max_constraint_log_degree_bound(&self) for FrameworkEval. As mentioned in the Composition Polynomial section, we need to expand the trace polynomial evaluations because the degree of our composition polynomial is higher than that of the trace polynomials. Expanding by the minimum value CONSTRAINT_EVAL_BLOWUP_FACTOR=1 is sufficient for our example since the degree of our composition polynomial is not very high, so we can return self.log_size + CONSTRAINT_EVAL_BLOWUP_FACTOR. For those who are interested in how to set this value in general, we leave a detailed note below.

Note

What value to set for max_constraint_log_degree_bound(&self)?

self.log_size + max(1, ceil(log2(max_degree - 1))), where max_degree is the maximum degree of all defined constraint polynomials.

e.g.

  • degree 1 - 3: self.log_size + 1
  • degree 4 - 5: self.log_size + 2
  • degree 6 - 9: self.log_size + 3
  • degree 10 - 17: self.log_size + 4
  • ...
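As a sanity check, the rule above can be mirrored in a small standalone helper (a hypothetical function for illustration only, not part of Stwo's API):

fn constraint_log_degree_bound(log_size: u32, max_degree: u32) -> u32 {
    // Extra log size needed so the higher-degree composition polynomial
    // still has enough evaluation points to be uniquely determined.
    let expansion = ((max_degree - 1) as f64).log2().ceil() as u32;
    log_size + expansion.max(1)
}

fn main() {
    assert_eq!(constraint_log_degree_bound(4, 2), 4 + 1); // our degree-2 constraint
    assert_eq!(constraint_log_degree_bound(4, 5), 4 + 2);
    assert_eq!(constraint_log_degree_bound(4, 9), 4 + 3);
}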

Note

Now that we know the degree of the composition polynomial, we can also explain the following code:

    // Precompute twiddles for evaluating and interpolating the trace
    let twiddles = SimdBackend::precompute_twiddles(
        CanonicCoset::new(
            log_num_rows + CONSTRAINT_EVAL_BLOWUP_FACTOR + config.fri_config.log_blowup_factor,
        )
        .circle_domain()
        .half_coset,
    );

Why is the log_size of the domain set to log_num_rows + CONSTRAINT_EVAL_BLOWUP_FACTOR + config.fri_config.log_blowup_factor here? As we can see in Figure 1, once we have the composition polynomial, we need to expand it once more before committing to it for the FRI step. Thus, the maximum domain size needed in the entire proving process is the domain of the composition polynomial further expanded by the FRI blowup factor. For example, with log_num_rows = 4, CONSTRAINT_EVAL_BLOWUP_FACTOR = 1, and a FRI log blowup factor of 1, the largest domain has log size 4 + 1 + 1 = 6, i.e. 64 points.

Using the new TestEval struct, we can create a new FrameworkComponent::<TestEval> component, which the prover will use to evaluate the constraint. For now, we can ignore the other parameters of the FrameworkComponent::<TestEval> constructor.

We now move on to the final section where we finally create and verify a proof.

Definition

Finally, we can break down what an Algebraic Intermediate Representation (AIR) means.

Algebraic means that we are using polynomials to represent the constraints.

Intermediate Representation means that this is a modified representation of our statement so that it can be proven by a proof system.

So AIR is just another way of saying that we are representing statements to be proven as constraints over polynomials.

Proving and Verifying an AIR

Figure 1: Prover workflow: perform FRI and PoW

We're finally ready to take the final step: prove and verify an AIR!

Since the code is relatively short, let us present it first and then go over the details.

fn main() {
    // --snip--

    // Prove
    let proof = prove(&[&component], channel, commitment_scheme).unwrap();

    // Verify
    let channel = &mut Blake2sChannel::default();
    let commitment_scheme = &mut CommitmentSchemeVerifier::<Blake2sMerkleChannel>::new(config);
    let sizes = component.trace_log_degree_bounds();

    commitment_scheme.commit(proof.commitments[0], &sizes[0], channel);
    channel.mix_u64(log_num_rows as u64);
    commitment_scheme.commit(proof.commitments[1], &sizes[1], channel);

    verify(&[&component], channel, commitment_scheme, proof).unwrap();
}

Prove

As you can see, only a single line of code is needed to create the proof. The prove function performs the FRI and PoW operations under the hood; technically, the constraint-related steps in Figure 1 were also not performed in the previous section and are only performed once prove is called.

Verify

In order to verify our proof, we need to check that the constraints are satisfied using the commitments from the proof. To do that, we set up a Blake2sChannel and a CommitmentSchemeVerifier<Blake2sMerkleChannel>, along with the same PcsConfig that we used when creating the proof. Then, we recreate the running hash channel by passing the merkle tree commitments and the log_num_rows to the CommitmentSchemeVerifier instance by calling commit (remember, the order is important!). Finally, we can verify the proof using the verify function.

Exercise

Try setting the dummy values in the table to 1 instead of 0. Does it fail? If so, can you see why?

Congratulations! We have come full circle. We now know how to create a table, convert it to trace polynomials, commit to them, create constraints over the trace polynomials, and prove and verify the constraints (i.e. an AIR). In the following sections, we will go over some more complicated AIRs to explain Stwo's other features.

Preprocessed Trace

This section and the following sections are intended for developers who have completed the Writing a Simple AIR section or are already familiar with the workflow of creating an AIR. If you have not gone through the Writing a Simple AIR section, we recommend doing so first, as the following sections gloss over a lot of boilerplate code.

For those of you who have completed the Writing a Simple AIR tutorial, you should now be familiar with the concept of a trace as a table of integers that are filled in by the prover (we will now refer to this as the original trace).

In addition to the original trace, Stwo also has a concept of a preprocessed trace, which is a table whose values are fixed and therefore cannot be arbitrarily chosen by the prover. In other words, these are columns whose values are known in advance of creating a proof and essentially agreed upon by both the prover and the verifier.

One of the use cases of the preprocessed trace is as a selector for different constraints. Remember that in an AIR, the same constraints are applied to every row of the trace? If we go back to the spreadsheet analogy, this means that we can't create a spreadsheet that runs different computations for different rows. We can get around this issue by composing multiple constraints using a selector column as part of the preprocessed trace. For example, let's say we want to run one computation for the first 2 rows and another for the next 2 rows. We can use a preprocessed column that has value 1 for the first 2 rows and 0 for the next 2 rows, essentially a selector for the first 2 rows. The resulting single constraint composes the two constraints by adding them together: \(\text{selector} \cdot \text{constraint}_1 + (1 - \text{selector}) \cdot \text{constraint}_2 = 0\), so that the first constraint is enforced on the first 2 rows and the second constraint on the remaining rows.

Figure 1: Preprocessed trace as a selector

Another use case is to use the preprocessed trace for expressing constant values used in the constraints. For example, when creating a hash function in an AIR, we often need to use round constants, which the verifier needs to be able to verify or the resulting hash may be invalid. We can also "look up" the constant values as an optimization technique, which we will discuss in more detail in the next section.

In this section, we will explore how to implement a preprocessed trace as a selector, and we will implement the simplest form: a single isFirst column, where the value is 1 for the first row and 0 for all other rows.

Note

Boilerplate code is omitted for brevity. Please refer to the full example code for the full implementation.

struct IsFirstColumn {
    pub log_size: u32,
}

#[allow(dead_code)]
impl IsFirstColumn {
    pub fn new(log_size: u32) -> Self {
        Self { log_size }
    }

    pub fn gen_column(&self) -> CircleEvaluation<SimdBackend, M31, BitReversedOrder> {
        let mut col = BaseColumn::zeros(1 << self.log_size);
        col.set(0, M31::from(1));
        CircleEvaluation::new(CanonicCoset::new(self.log_size).circle_domain(), col)
    }

    pub fn id(&self) -> PreProcessedColumnId {
        PreProcessedColumnId {
            id: format!("is_first_{}", self.log_size),
        }
    }
}

First, we define an IsFirstColumn struct that will be used as a preprocessed trace. Its gen_column() function generates a CircleEvaluation struct that is 1 for the first row and 0 for all other rows. The id() function is needed to identify this column when evaluating the constraints.

fn main() {
    ...
    // Create and commit to the preprocessed trace
    let is_first_column = IsFirstColumn::new(log_size);
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(vec![is_first_column.gen_column()]);
    tree_builder.commit(channel);

    // Commit to the size of the trace
    channel.mix_u64(log_size as u64);

    // Create and commit to the original trace
    let trace = gen_trace(log_size);
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(trace);
    tree_builder.commit(channel);
    ...
}

Then, in our main function, we will create and commit to the preprocessed and original traces. For those of you who are curious about why we need to commit to the trace, please refer to the Committing to the Trace Polynomials section.

struct TestEval {
    is_first_id: PreProcessedColumnId,
    log_size: u32,
}

impl FrameworkEval for TestEval {
    fn log_size(&self) -> u32 {
        self.log_size
    }

    fn max_constraint_log_degree_bound(&self) -> u32 {
        self.log_size + CONSTRAINT_EVAL_BLOWUP_FACTOR
    }

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let is_first = eval.get_preprocessed_column(self.is_first_id.clone());

        let col_1 = eval.next_trace_mask();
        let col_2 = eval.next_trace_mask();
        let col_3 = eval.next_trace_mask();

        // If is_first is 1, then the constraint is col_1 * col_2 - col_3 = 0
        // If is_first is 0, then the constraint is col_1 * col_2 + col_1 - col_3 = 0
        eval.add_constraint(
            (col_1.clone() * col_2.clone() - col_3.clone()) * is_first.clone()
                + (col_1.clone() * col_2.clone() + col_1.clone() - col_3.clone())
                    * (E::F::from(M31::from(1)) - is_first.clone()),
        );

        eval
    }
}

Now that we have the traces, we need to create a struct that contains the logic for evaluating the constraints. As mentioned before, we need to use the is_first_id field to retrieve the row value of the IsFirstColumn struct. Then, we compose two constraints using the IsFirstColumn row value as a selector and adding them together.

If you're unfamiliar with how max_constraint_log_degree_bound(&self) should be implemented, please refer to this note.

fn main() {
    ...
    // Create a component
    let component = FrameworkComponent::<TestEval>::new(
        &mut TraceLocationAllocator::default(),
        TestEval {
            is_first_id: is_first_column.id(),
            log_size,
        },
        QM31::zero(),
    );

    // Prove
    let proof = prove(&[&component], channel, commitment_scheme).unwrap();

    // Verify
    let channel = &mut Blake2sChannel::default();
    let commitment_scheme = &mut CommitmentSchemeVerifier::<Blake2sMerkleChannel>::new(config);
    let sizes = component.trace_log_degree_bounds();

    commitment_scheme.commit(proof.commitments[0], &sizes[0], channel);
    channel.mix_u64(log_size as u64);
    commitment_scheme.commit(proof.commitments[1], &sizes[1], channel);

    verify(&[&component], channel, commitment_scheme, proof).unwrap();
}

Finally, we can create a FrameworkComponent using the TestEval struct and then prove and verify the component.

Static Lookups

In the previous section, we showed how to create a preprocessed trace. In this section, we will introduce the concept of static lookups, where we will create columns that look up values from a preprocessed trace.

Note

Readers who are unfamiliar with the concept of lookups can refer to the Lookups section for a quick introduction.

Specifically, we will implement a range-check. A range-check is a technique used to check that a certain value is within a given range. This proves especially useful in proof systems like Stwo that use finite fields, because it allows checking for underflow and overflow. For example, in M31, computing 0 - 1 wraps around to \(2^{31}-2\), so constraining values to a small range rules out such wraparounds.

A range-check checks that all values in a column are within a certain range. For example, as in Figure 1, we can check that all values in the lookup columns are between 0 and 3. We do this by creating a multiplicity column that counts the number of times each value in the preprocessed trace appears in the lookup columns.

Then, we create two LogUp columns. The first column contains in each row a fraction whose numerator is the multiplicity and whose denominator is the random linear combination of the value in the range-check column. For example, for row 1, the fraction should be \(\dfrac{2}{X-0}\). Note that in Figure 1, the numerator is actually \(-2\), i.e. we negate the multiplicity, so that the sum of the first column cancels against the sum of the second column.

The second column contains fractions whose numerator is always 1 and whose denominator is the random linear combination of the value in the lookup column. Note that a single row batches the two lookup columns by adding their two fractions together.

Figure 1: Range-check lookup

If you stare at the LogUp columns hard enough, you'll notice that adding all the fractions in the two columns together gives 0. This is no coincidence! The prover will provide the sum of the LogUp columns, and the verifier checks in the open that this value is indeed 0.
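Written out in the notation of Figure 1, the check the verifier performs is that the following sum, taken over all rows of both LogUp columns, equals zero:

\(\sum_{v \,\in\, \text{range-check column}} \frac{-m_v}{X - v} \;+ \sum_{u \,\in\, \text{lookup columns}} \frac{1}{X - u} = 0\)

where \(m_v\) is the multiplicity of the value \(v\) and \(X\) stands for the randomness used in the random linear combination.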

Now let's move on to the implementation.

struct RangeCheckColumn {
    pub log_size: u32,
}

#[allow(dead_code)]
impl RangeCheckColumn {
    pub fn new(log_size: u32) -> Self {
        Self { log_size }
    }

    pub fn gen_column(&self) -> CircleEvaluation<SimdBackend, M31, BitReversedOrder> {
        let col = BaseColumn::from_iter((0..(1 << self.log_size)).map(|i| M31::from(i)));
        CircleEvaluation::new(CanonicCoset::new(self.log_size).circle_domain(), col)
    }

    pub fn id(&self) -> PreProcessedColumnId {
        PreProcessedColumnId {
            id: format!("range_check_{}_bits", self.log_size),
        }
    }
}

First, we need to create the range-check column as a preprocessed column. This should look familiar to the code from the previous section.

fn gen_trace(log_size: u32) -> Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> {
    // Create a table with random values
    let mut rng = rand::thread_rng();
    let mut lookup_col_1 =
        BaseColumn::from_iter((0..(1 << log_size)).map(|_| M31::from(rng.gen_range(0..16))));
    let mut lookup_col_2 =
        BaseColumn::from_iter((0..(1 << log_size)).map(|_| M31::from(rng.gen_range(0..16))));

    let mut multiplicity_col = BaseColumn::zeros(1 << log_size);
    lookup_col_1
        .as_mut_slice()
        .iter()
        .chain(lookup_col_2.as_mut_slice().iter())
        .for_each(|value| {
            let index = value.0 as usize;
            multiplicity_col.set(index, multiplicity_col.at(index) + M31::from(1));
        });

    // Convert table to trace polynomials
    let domain = CanonicCoset::new(log_size).circle_domain();
    vec![
        lookup_col_1.clone(),
        lookup_col_2.clone(),
        multiplicity_col.clone(),
    ]
    .into_iter()
    .map(|col| CircleEvaluation::new(domain, col))
    .collect()
}

Next, we create the trace columns. The first two columns contain random values in the range \([0, 15]\), and the third column counts how many times each value of the range-check column appears in the two lookup columns.

relation!(LookupElements, 1);

fn gen_logup_trace(
    log_size: u32,
    range_check_col: &BaseColumn,
    lookup_col_1: &BaseColumn,
    lookup_col_2: &BaseColumn,
    multiplicity_col: &BaseColumn,
    lookup_elements: &LookupElements,
) -> (
    Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>>,
    SecureField,
) {
    let mut logup_gen = LogupTraceGenerator::new(log_size);

    let mut col_gen = logup_gen.new_col();
    for simd_row in 0..(1 << (log_size - LOG_N_LANES)) {
        let numerator: PackedSecureField = PackedSecureField::from(multiplicity_col.data[simd_row]);
        let denom: PackedSecureField = lookup_elements.combine(&[range_check_col.data[simd_row]]);
        col_gen.write_frac(simd_row, -numerator, denom);
    }
    col_gen.finalize_col();

    let mut col_gen = logup_gen.new_col();
    for simd_row in 0..(1 << (log_size - LOG_N_LANES)) {
        let lookup_col_1_val: PackedSecureField =
            lookup_elements.combine(&[lookup_col_1.data[simd_row]]);
        let lookup_col_2_val: PackedSecureField =
            lookup_elements.combine(&[lookup_col_2.data[simd_row]]);
        // 1 / denom1 + 1 / denom2 = (denom1 + denom2) / (denom1 * denom2)
        let numerator = lookup_col_1_val + lookup_col_2_val;
        let denom = lookup_col_1_val * lookup_col_2_val;
        col_gen.write_frac(simd_row, numerator, denom);
    }
    col_gen.finalize_col();

    logup_gen.finalize_last()
}

fn main() {
    ...
    // Draw random elements to use when creating the random linear combination of lookup values in the LogUp columns
    let lookup_elements = LookupElements::draw(channel);

    // Create and commit to the LogUp columns
    let (logup_cols, claimed_sum) = gen_logup_trace(
        log_size,
        &range_check_col,
        &trace[0],
        &trace[1],
        &trace[2],
        &lookup_elements,
    );
    ...
}

Now we need to create the LogUp columns.

First, note that we are creating a LookupElements instance using the macro relation!. This macro creates an API for performing random linear combinations. Under the hood, it draws two random values \(z, \alpha\) that can be used to create a random linear combination of an arbitrary number of elements. In our case, we only need to combine one value (a value in \([0,15]\)), which is why we pass in 1 to the macro.
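Conceptually, combining values \((v_0, v_1, \dots, v_{k-1})\) produces something of the form below (treat this as a sketch; the exact sign convention is an implementation detail of Stwo):

\(v_0 + \alpha \cdot v_1 + \dots + \alpha^{k-1} \cdot v_{k-1} - z\)

With a single value this reduces to \(v_0 - z\), which plays the role of the \(X - v\) denominators in the previous figures.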

Inside gen_logup_trace, we create a LogupTraceGenerator instance. This is a helper class that allows us to create LogUp columns. Every time we create a new column, we need to call new_col() on the LogupTraceGenerator instance.

You may notice that we are iterating over BaseColumn in chunks of 16, or 1 << LOG_N_LANES values. This is because we are using the SimdBackend, which runs 16 lanes simultaneously, so we need to preserve this structure. The Packed in PackedSecureField means that it packs 16 values into a single value.

You may also notice that we are using a SecureField instead of just the base field M31. This is because the random values drawn into LookupElements live in the degree-4 extension field \(\mathbb{F}_{p^4}\). Interested readers can refer to the Mersenne Primes section for more details.

Once we set the fractions for each simd_row, we need to call finalize_col() to finalize the column. This process modifies the LogUp columns from individual fractions to cumulative sums of the fractions as shown in Figure 2.

Figure 2: Finalizing each LogUp column

Finally, we need to call finalize_last() on the LogupTraceGenerator instance to finalize the LogUp columns, which will return the LogUp columns as well as the sum of the fractions in the LogUp columns.

struct TestEval {
    range_check_id: PreProcessedColumnId,
    log_size: u32,
    lookup_elements: LookupElements,
}

impl FrameworkEval for TestEval {
    fn log_size(&self) -> u32 {
        self.log_size
    }

    fn max_constraint_log_degree_bound(&self) -> u32 {
        self.log_size + CONSTRAINT_EVAL_BLOWUP_FACTOR
    }

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let range_check_col = eval.get_preprocessed_column(self.range_check_id.clone());

        let lookup_col_1 = eval.next_trace_mask();
        let lookup_col_2 = eval.next_trace_mask();
        let multiplicity_col = eval.next_trace_mask();

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            -E::EF::from(multiplicity_col),
            &[range_check_col],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[lookup_col_1],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[lookup_col_2],
        ));

        eval.finalize_logup_batched(&vec![0, 1, 1]);

        eval
    }
}

The last piece of the puzzle is to create the constraints. We use the same TestEval struct as in the previous sections, but the evaluate function will look slightly different. Instead of calling add_constraint on the EvalAtRow instance, we will call add_to_relation, which recreates the fractions that we added in the LogUp columns using values in the range-check, lookup, and multiplicity columns.

Once we add the fractions as constraints, we call the finalize_logup_batched function, which indicates how we want to batch the fractions. In our case, we added 3 fractions but want to create batches where the last two fractions are batched together, so we pass in &vec![0, 1, 1].

    // Verify
    assert_eq!(claimed_sum, SecureField::zero());

    let channel = &mut Blake2sChannel::default();
    let commitment_scheme = &mut CommitmentSchemeVerifier::<Blake2sMerkleChannel>::new(config);
    let sizes = component.trace_log_degree_bounds();

    commitment_scheme.commit(proof.commitments[0], &sizes[0], channel);
    channel.mix_u64(log_size as u64);
    commitment_scheme.commit(proof.commitments[1], &sizes[1], channel);
    commitment_scheme.commit(proof.commitments[2], &sizes[2], channel);

    verify(&[&component], channel, commitment_scheme, proof).unwrap();

When we verify the proof, as promised, we check that the claimed_sum, which is the sum of the fractions in the LogUp columns, is 0.

And that's it! We have successfully created a static lookup for a range-check.

Dynamic Lookups

In the last section, we implemented a static lookup. A dynamic lookup is the same as a static lookup except that the values that are being looked up are not known before the proving process (i.e. they are not preprocessed columns but trace columns).

In this section, we will implement one of the simplest dynamic lookups: a permutation check.

A permutation check simply checks that two sets of values have the same elements, but not necessarily in the same order. For example, the values [1, 2, 3] and [3, 1, 2] are a permutation of each other, but [1, 2, 3] and [1, 2] are not.

If you went through the previous section, you should have a good intuition for how to implement this. First, create two LogUp columns, where the first column contains the values of the original set with multiplicity \(1\) and the second column contains the values of the second set with multiplicity \(-1\). Then, check that the claimed_sum, i.e. the sum of the fractions in the two LogUp columns, is \(0\).

We can optimize further by batching the two columns into a single LogUp column, so that each LogUp row looks something like \(\frac{1}{col_1} - \frac{1}{col_2}\).
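The reason this works is the following identity over formal fractions. Using the same \(X - v\) notation as before,

\(\sum_{i} \left( \frac{1}{X - a_i} - \frac{1}{X - b_i} \right) = 0\)

holds as a rational function of \(X\) exactly when the multiset of values \(\{a_i\}\) in the first column equals the multiset \(\{b_i\}\) in the second column. Evaluating at the random \(X\) drawn via Fiat-Shamir turns this into a probabilistic check that fails with only negligible probability.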

fn gen_trace(log_size: u32) -> Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> {
    // Create a table with values [0, 1 << log_size)
    let mut rng = rand::thread_rng();
    let values = (0..(1 << log_size)).map(|i| i).collect::<Vec<_>>();
    let original_col = BaseColumn::from_iter(values.iter().map(|v| M31::from(*v)));

    // Create a random permutation of the values
    let mut permutation = values.clone();
    permutation.shuffle(&mut rng);
    let permuted_col = BaseColumn::from_iter(permutation.iter().map(|v| M31::from(*v)));

    // Convert table to trace polynomials
    let domain = CanonicCoset::new(log_size).circle_domain();
    vec![original_col, permuted_col]
        .into_iter()
        .map(|col| CircleEvaluation::new(domain, col))
        .collect()
}

fn gen_logup_trace(
    log_size: u32,
    original_col: &BaseColumn,
    permuted_col: &BaseColumn,
    lookup_elements: &LookupElements,
) -> (
    Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>>,
    SecureField,
) {
    let mut logup_gen = LogupTraceGenerator::new(log_size);

    let mut col_gen = logup_gen.new_col();
    for row in 0..(1 << (log_size - LOG_N_LANES)) {
        // 1 / original - 1 / permuted = (permuted - original) / (original * permuted)
        let original_val: PackedSecureField = lookup_elements.combine(&[original_col.data[row]]);
        let permuted_val: PackedSecureField = lookup_elements.combine(&[permuted_col.data[row]]);
        col_gen.write_frac(
            row,
            permuted_val - original_val,
            original_val * permuted_val,
        );
    }
    col_gen.finalize_col();

    logup_gen.finalize_last()
}

fn main() {
    ...
    // Create and commit to the trace columns
    let trace = gen_trace(log_size);
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(trace.clone());
    tree_builder.commit(channel);

    // Draw random elements to use when creating the random linear combination of lookup values in the LogUp columns
    let lookup_elements = LookupElements::draw(channel);

    // Create and commit to the LogUp columns
    let (logup_cols, claimed_sum) =
        gen_logup_trace(log_size, &trace[0], &trace[1], &lookup_elements);
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals(logup_cols);
    tree_builder.commit(channel);
    ...
}

Looking at the code above, we can see that it is very similar to the implementation in the previous section. Instead of creating a preprocessed column, we create a trace column that contains the values [0, 1 << log_size) in order. Then, we create a random permutation of those values and set it as the second trace column. Note that this is equivalent to "looking up" every value in the first trace column once. And since all the values are looked up exactly once, we do not need a separate multiplicity column.

Then, we create a LogUp column that contains the values \(\frac{1}{original} - \frac{1}{permuted}\).

struct TestEval {
    log_size: u32,
    lookup_elements: LookupElements,
}

impl FrameworkEval for TestEval {
    fn log_size(&self) -> u32 {
        self.log_size
    }

    fn max_constraint_log_degree_bound(&self) -> u32 {
        self.log_size + CONSTRAINT_EVAL_BLOWUP_FACTOR
    }

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let original_col = eval.next_trace_mask();
        let permuted_col = eval.next_trace_mask();

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[original_col],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            -E::EF::one(),
            &[permuted_col],
        ));

        eval.finalize_logup_in_pairs();

        eval
    }
}

The TestEval struct is also very similar to the one in the previous section. The only difference is that we call add_to_relation twice and add them together by calling finalize_logup_in_pairs() on the TestEval instance. This is equivalent to calling the finalize_logup_batched function with &vec![0, 0].

Local Row Constraints

Info

  1. Change the order of elements in BaseColumn in-place via bit_reverse_coset_to_circle_domain_order before creating a CircleEvaluation instance.
  2. In the first row, the reference to the previous row wraps around to the last row, so you may need to disable the constraint for the first row.

Until now, we have only considered constraints that apply over values in a single row. But what if we want to express constraints over multiple rows? For example, we may want to ensure that the difference between the values in two adjacent rows is always the same.

Turns out we can implement this as an AIR constraint, as long as the same constraints are applied to all rows. In this section, we will see how to implement this.

We will build upon the example in the previous section, where we created two columns and proved that they are permutations of each other by asserting that the second column looks up all values in the first column exactly once.

Here, we will create two columns and prove not only that they are permutations of each other, but also that the second column is a sorted version of the first column.

More specifically, the sorted column will contain the values \([0, num\_rows)\) in order, which means that the difference between every current row and the previous row should be \(1\).

We will go through three iterations, fixing an issue in each iteration.

First Try

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let unsorted_col = eval.next_trace_mask();
        let [sorted_col_prev_row, sorted_col_curr_row] =
            eval.next_interaction_mask(ORIGINAL_TRACE_IDX, [-1, 0]);

        // New constraint
        eval.add_constraint(
            E::F::one() - (sorted_col_curr_row.clone() - sorted_col_prev_row.clone()),
        );

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[unsorted_col],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            -E::EF::one(),
            &[sorted_col_curr_row],
        ));

        eval.finalize_logup_in_pairs();

        eval
    }

The logic for creating the trace and LogUp columns is basically the same as in the previous section, so we omit it for brevity.

What does change is the evaluate function, where we call next_interaction_mask on the EvalAtRow instance. This function can retrieve values from arbitrary row offsets, which means we can access the previous row value using -1. Since we call this function with offsets [-1, 0], we will retrieve the values for the previous and current rows.

Once we have these values, we can now assert that the difference between the current and previous row is always 1 with the constraint: E::F::one() - (sorted_col_curr_row.clone() - sorted_col_prev_row.clone()).

But this will fail with a ConstraintsNotSatisfied error, can you see why? (You can try running it yourself here)

Second Try

The issue is that when evaluate is called on the first row of our trace, the previous row value wraps around to the last row, because there are no negative indices.

This means that in our example, we are expecting the constraint 0 - 15 = 1 to hold in the first row, which is clearly not true.

To fix this, we can use the IsFirstColumn preprocessed column that we created in the Preprocessed Trace section. So we will copy over the same code for creating the preprocessed column and modify our new constraint as follows:

        let is_first_col = eval.get_preprocessed_column(self.is_first_id.clone());

        eval.add_constraint(
            (E::F::one() - is_first_col.clone())
                * (E::F::one() - (sorted_col_curr_row.clone() - sorted_col_prev_row.clone())),
        );

Now, we have a constraint that is disabled for the first row, which is exactly what we want.

Still, however, this will fail with the same ConstraintsNotSatisfied error. (You can run it here)

Third Try

When we created CircleEvaluation instances from our BaseColumn instances, the order of the elements we supplied was actually not the order that Stwo expects. Stwo assumes that the values are in bit-reversed, circle domain order. It's not important to understand exactly what this order is, but it does mean that when Stwo looks up the -1 offset during evaluate, it finds the "previous" value under a different ordering, which breaks our constraint.

Thus, every time we create a CircleEvaluation instance, we need to convert the order of the values in the BaseColumn beforehand.

impl IsFirstColumn {
    ...
    pub fn gen_column(&self) -> CircleEvaluation<SimdBackend, M31, BitReversedOrder> {
        let mut col = BaseColumn::zeros(1 << self.log_size);
        col.set(0, M31::from(1));

        //////////////////////////////////////////////////////////////
        // Convert the columns to bit-reversed circle domain order
        bit_reverse_coset_to_circle_domain_order(col.as_mut_slice());
        //////////////////////////////////////////////////////////////

        CircleEvaluation::new(CanonicCoset::new(self.log_size).circle_domain(), col)
    }
    ...
}

fn gen_trace(log_size: u32) -> Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>> {
    // Create a table with random values
    let mut rng = rand::thread_rng();
    let sorted_values = (0..(1 << log_size)).map(|i| i).collect::<Vec<_>>();
    let mut unsorted_values = sorted_values.clone();
    unsorted_values.shuffle(&mut rng);

    let mut unsorted_col = BaseColumn::from_iter(unsorted_values.iter().map(|v| M31::from(*v)));
    let mut sorted_col = BaseColumn::from_iter(sorted_values.iter().map(|v| M31::from(*v)));

    // Convert table to trace polynomials
    let domain = CanonicCoset::new(log_size).circle_domain();

    ////////////////////////////////////////////////////////////////////
    // Convert the columns to bit-reversed circle domain order
    bit_reverse_coset_to_circle_domain_order(unsorted_col.as_mut_slice());
    bit_reverse_coset_to_circle_domain_order(sorted_col.as_mut_slice());
    ////////////////////////////////////////////////////////////////////

    vec![unsorted_col, sorted_col]
        .into_iter()
        .map(|col| CircleEvaluation::new(domain, col))
        .collect()
}

And voilà, we have successfully implemented the constraint. You can run it here.

Components

So now that we know how to create a self-contained AIR, the inevitable question arises: How do we make this modular?

Fortunately, Stwo provides an abstraction called a component that allows us to create independent AIRs and compose them together. In other proving frontends, this is also commonly referred to as a chip, but the idea is the same.

One of the most common use cases of components is to separate frequently used functions (e.g. a hash function) from the main component into a separate component and reuse it, avoiding trace column bloat. Even if the function is not frequently used, it can be useful to separate it into a component to keep the degree of the constraints from becoming too high. This second point works because a new component is connected to the old one via lookups, so the constraints of the new component do not add to the degree of the old component's constraints.

Example

To illustrate how to use components, we will create two components, where the main component calls a hash function component. For simplicity, instead of an actual hash function, the second component will compute \(x^5 + 1\) from an input \(x\). This component has three columns in total: [input, intermediate, output], which correspond to the values \([x, x^3, x^5 + 1]\). Our main component, on the other hand, has two columns, [input, output], which correspond to the values \([x, x^5 + 1]\).
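For intuition, one way the computing component's constraints could be expressed over its three columns (a sketch of the idea, not necessarily the exact constraints used in the example code) is:

\(\text{intermediate} - \text{input}^3 = 0\)

\(\text{output} - (\text{intermediate} \cdot \text{input}^2 + 1) = 0\)

Splitting \(x^5 + 1\) across an intermediate column keeps the degree of each individual constraint low (3 instead of 5).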

We'll now refer to the main component as the scheduling component and the hash function component as the computing component: the main component essentially schedules the hash function component to run its function on a given input, and the hash function component performs the computation on the provided input. As can be seen in Figure 1, the inputs and outputs of each component are connected by lookups.

Figure 1: Scheduling and Computing components

Design

Figure 2: Traces of each component

When we implement this in Stwo, the traces of each component will look like Figure 2 above. Each component has its own original and LogUp traces, and the inputs and outputs of the components are connected by lookups. Since the scheduling component adds the inputs as positive values and the outputs as negative values, while the computing component adds the inputs as negative values and the outputs as positive values, the verifier can simply check that the sum of the two components' LogUp columns is zero.
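Concretely, for a row with input \(x\) and output \(x^5 + 1\) (omitting the random linear combination for readability), the two components contribute:

\(\left(\frac{1}{x} - \frac{1}{x^5+1}\right)_{\text{scheduling}} + \left(-\frac{1}{x} + \frac{1}{x^5+1}\right)_{\text{computing}} = 0\)

so every input/output pair cancels out and the total claimed sum is zero.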

Code

fn main() {
    // --snip--

    // Create trace columns
    let scheduling_trace = gen_scheduling_trace(log_size);
    let computing_trace = gen_computing_trace(log_size, &scheduling_trace[0], &scheduling_trace[1]);

    // Statement 0
    let statement0 = ComponentsStatement0 { log_size };
    statement0.mix_into(channel);

    // Commit to the trace columns
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals([scheduling_trace.clone(), computing_trace.clone()].concat());
    tree_builder.commit(channel);

    // Draw random elements to use when creating the random linear combination of lookup values in the LogUp columns
    let lookup_elements = ComputationLookupElements::draw(channel);

    // Create LogUp columns
    let (scheduling_logup_cols, scheduling_claimed_sum) = gen_scheduling_logup_trace(
        log_size,
        &scheduling_trace[0],
        &scheduling_trace[1],
        &lookup_elements,
    );
    let (computing_logup_cols, computing_claimed_sum) = gen_computing_logup_trace(
        log_size,
        &computing_trace[0],
        &computing_trace[2],
        &lookup_elements,
    );

    // Statement 1
    let statement1 = ComponentsStatement1 {
        scheduling_claimed_sum,
        computing_claimed_sum,
    };
    statement1.mix_into(channel);

    // Commit to the LogUp columns
    let mut tree_builder = commitment_scheme.tree_builder();
    tree_builder.extend_evals([scheduling_logup_cols, computing_logup_cols].concat());
    tree_builder.commit(channel);

    let components = Components::new(&statement0, &lookup_elements, &statement1);

    let stark_proof = prove(&components.component_provers(), channel, commitment_scheme).unwrap();

    let proof = ComponentsProof {
        statement0,
        statement1,
        stark_proof,
    };

    // --snip--
}

The code above for proving the components should look pretty familiar by now. Since we now need to handle two components instead of one, we wrap their statements, component instances, and proof in structs like ComponentsStatement0, ComponentsStatement1, Components, and ComponentsProof, but the main logic is the same.

Let's take a closer look at how the LogUp columns are generated.

fn gen_scheduling_logup_trace(
    log_size: u32,
    scheduling_col_1: &CircleEvaluation<SimdBackend, M31, BitReversedOrder>,
    scheduling_col_2: &CircleEvaluation<SimdBackend, M31, BitReversedOrder>,
    lookup_elements: &ComputationLookupElements,
) -> (
    Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>>,
    SecureField,
) {
        // --snip--

        let scheduling_input: PackedSecureField =
            lookup_elements.combine(&[scheduling_col_1.data[row]]);
        let scheduling_output: PackedSecureField =
            lookup_elements.combine(&[scheduling_col_2.data[row]]);
        col_gen.write_frac(
            row,
            scheduling_output - scheduling_input,
            scheduling_input * scheduling_output,
        );

        // --snip--
}

fn gen_computing_logup_trace(
    log_size: u32,
    computing_col_1: &CircleEvaluation<SimdBackend, M31, BitReversedOrder>,
    computing_col_3: &CircleEvaluation<SimdBackend, M31, BitReversedOrder>,
    lookup_elements: &ComputationLookupElements,
) -> (
    Vec<CircleEvaluation<SimdBackend, M31, BitReversedOrder>>,
    SecureField,
) {
        // --snip--

        let computing_input: PackedSecureField =
            lookup_elements.combine(&[computing_col_1.data[row]]);
        let computing_output: PackedSecureField =
            lookup_elements.combine(&[computing_col_3.data[row]]);
        col_gen.write_frac(
            row,
            computing_input - computing_output,
            computing_input * computing_output,
        );

        // --snip--
}

As you can see, the LogUp values of the input and output columns of both the scheduling and computing components are batched together, but in the scheduling component, the output LogUp value is subtracted from the input LogUp value, while in the computing component, the input LogUp value is subtracted from the output LogUp value. This means that when the LogUp sums from both components are added together, they should cancel out and equal zero.
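To see why, write the combined lookup value of a trace value \(v\) simply as \(z - v\), where \(z\) is the drawn lookup element (the exact combination used by the lookup elements may differ in sign and fold in an additional random element, but the cancellation argument is unchanged). A single row with input \(x\) and output \(y = x^5 + 1\) then contributes

\[ \underbrace{\frac{1}{z - x} - \frac{1}{z - y}}_{\text{scheduling}} + \underbrace{\frac{1}{z - y} - \frac{1}{z - x}}_{\text{computing}} = 0 \]

so summing the scheduling and computing claimed sums over all rows yields zero, which is exactly what the verifier checks.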

Next, let's check how the constraints are created.

impl FrameworkEval for SchedulingEval {
    // --snip--

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let input_col = eval.next_trace_mask();
        let output_col = eval.next_trace_mask();

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[input_col],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            -E::EF::one(),
            &[output_col],
        ));

        eval.finalize_logup_in_pairs();

        eval
    }
}

impl FrameworkEval for ComputingEval {
    // --snip--

    fn evaluate<E: EvalAtRow>(&self, mut eval: E) -> E {
        let input_col = eval.next_trace_mask();
        let intermediate_col = eval.next_trace_mask();
        let output_col = eval.next_trace_mask();

        eval.add_constraint(
            intermediate_col.clone() - input_col.clone() * input_col.clone() * input_col.clone(),
        );
        eval.add_constraint(
            output_col.clone()
                - intermediate_col.clone() * input_col.clone() * input_col.clone()
                - E::F::one(),
        );

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            -E::EF::one(),
            &[input_col],
        ));

        eval.add_to_relation(RelationEntry::new(
            &self.lookup_elements,
            E::EF::one(),
            &[output_col],
        ));

        eval.finalize_logup_in_pairs();

        eval
    }
}

As you can see, we define the LogUp constraints for each component, and we also add two constraints that make sure the computations \(x^3\) and \(x^5 + 1\) are correct.

fn main() {
    // --snip--

    // Verify claimed sums
    assert_eq!(
        scheduling_claimed_sum + computing_claimed_sum,
        SecureField::zero()
    );

    // Unpack proof
    let statement0 = proof.statement0;
    let statement1 = proof.statement1;
    let stark_proof = proof.stark_proof;

    // Create channel and commitment scheme
    let channel = &mut Blake2sChannel::default();
    let commitment_scheme = &mut CommitmentSchemeVerifier::<Blake2sMerkleChannel>::new(config);
    let log_sizes = statement0.log_sizes();

    // Preprocessed columns.
    commitment_scheme.commit(stark_proof.commitments[0], &log_sizes[0], channel);

    // Commit to statement 0
    statement0.mix_into(channel);

    // Trace columns.
    commitment_scheme.commit(stark_proof.commitments[1], &log_sizes[1], channel);

    // Draw lookup element.
    let lookup_elements = ComputationLookupElements::draw(channel);

    // Commit to statement 1
    statement1.mix_into(channel);

    // Interaction columns.
    commitment_scheme.commit(stark_proof.commitments[2], &log_sizes[2], channel);

    // Create components
    let components = Components::new(&statement0, &lookup_elements, &statement1);

    verify(
        &components.components(),
        channel,
        commitment_scheme,
        stark_proof,
    )
    .unwrap();
}

Finally, we verify the components!

Additional Examples

| Example | Preprocessed Columns | Trace Columns | LogUp Columns |
|---|---|---|---|
| Permutation argument check | | unordered list, ordered list | 1 / unordered list - 1 / ordered list |
| Range Check (0 <= a < 2^bits) | [0, 2^bits) rows | lookup columns, multiplicities column | 1 / lookup, multiplicity / preprocessed |
| Comparator check (a > b) | [0, 2^bits) rows | a, b, multiplicities column | 1 / (a - b), multiplicity / preprocessed |
| IsZero check (a == 0) | | a, a_inv | |
| XOR operations | valid XOR operations | lookup columns, multiplicities column | |
| Selectors | columns of 0s and 1s | | |
| Checking whether current row is first row or not | single column (first row = 1, other rows = 0) | | |
| Connecting multiple components (output of Component A is input of Component B) | | | 1 / output, 1 / input * (-1) |
| Public Input/Output | | | 1 / input + 1 / output |

Above is a list of additional examples that you can implement as an AIR using Stwo, some of which we have already implemented in the previous sections.

Cairo as a Stwo AIR

🚧

How Does It Work?

This section is for those who want an in-depth explanation of various components of Stwo.

Mersenne Primes

Proof systems typically rely on finite field operations, where efficient field arithmetic is crucial for optimizing proof generation. In STARK protocols, there is no direct dependency between the security level of the proof system and the field size. This allows the use of small fields with highly efficient arithmetic, such as Mersenne prime fields.

A Mersenne prime is defined as a prime number that is one less than a power of two, expressed as \( p = 2^k -1 \).

Consider the Mersenne prime field \( \mathbb{F}_p \) where \( p = 2^{31} - 1 \). Our objective is to perform field multiplication \( a \cdot b \), where \( a, b \in \mathbb{F}_p \). This operation involves a 31-bit integer multiplication, producing a 62-bit intermediate result, which is then reduced modulo \( p \).

Let \( x = a \cdot b \), where \( a, b \) are 31-bit values, resulting in a 62-bit product \( x \). We can decompose \( x \) into two 31-bit values \( b \) and \( s \) (here \( b \) denotes the high 31 bits of the product, not the multiplicand above), such that \( x = 2^{31} \cdot b + s \), as shown in the following figure.

Mersenne Prime Multiplication

To perform modular reduction, we start with: \[ x \equiv 2^{31} \cdot b + s \pmod{2^{31} - 1} \] Substituting \( 2^{31} \equiv 1 \pmod{2^{31} - 1} \) gives: \[ x \equiv b + s \pmod{2^{31} - 1} \]

Since \( b \) and \( s \) are both 31-bit values, they can be directly represented as field elements. Consequently, modular reduction is performed with a single field addition. This makes arithmetic over Mersenne primes exceptionally fast, making them an ideal choice for our STARK protocol.
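As a concrete illustration, here is a minimal sketch in plain Rust of this reduction (our own helper, not Stwo's M31 implementation; inputs are assumed to already be reduced field elements):

// Multiply two M31 field elements using the split-and-add reduction described above.
fn mul_m31(a: u64, b: u64) -> u64 {
    const P: u64 = (1 << 31) - 1; // 2^31 - 1
    let x = a * b; // 62-bit product
    let hi = x >> 31; // high 31 bits (the `b` in the text above)
    let lo = x & P; // low 31 bits (the `s` in the text above)
    let sum = hi + lo; // b + s, at most 32 bits
    if sum >= P { sum - P } else { sum } // a single conditional subtraction completes the reduction
}

// For example, mul_m31(3, (1 << 31) - 2) == (1 << 31) - 4, i.e. 3 * (-1) = -3 mod p.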

However, STARK protocols are instantiated over an FFT-friendly field, meaning a field whose multiplicative group contains a subgroup whose order is a large power of two (commonly referred to as a smooth subgroup).

\[ |\mathbb{F}_p^*| = p-1 = 2^k-2\]

As shown above, Mersenne prime fields lack a smooth subgroup whose order is a large power of two, because no large power of two divides \( |\mathbb{F}_{p}^*| \): indeed, \( p - 1 = 2^k - 2 = 2 \cdot (2^{k-1} - 1) \), so the largest power of two dividing it is \( 2 \). In other words, there does not exist a sufficiently large \( n \) such that \( 2^n \, | \, p - 1 \).

Extensions of Mersenne Prime Field

To make Mersenne prime fields compatible with STARKs, we use a degree-2 extension of \( \mathbb{F}_p \), defined as follows:

\[ \mathbb{F}_{p^2} = \mathbb{F}_p[X]/(X^2 + 1) \]

This extension forms a field of size \( p^2 \), where elements can be represented as \( (a, b) \) or \[ a + i \cdot b \] where \( a, b \in \mathbb{F}_p \) and \( i \) is the root of the polynomial \( X^2 + 1 \) i.e. \( i^2 + 1 = 0\).
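To make the arithmetic concrete, here is a minimal sketch in plain Rust of multiplication in this extension (our own type and method names, not Stwo's implementation), using \( i^2 = -1 \):

// An element a + i*b of F_{p^2}, with a, b in F_p.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Cm31 {
    a: u64,
    b: u64,
}

const P: u64 = (1 << 31) - 1;

impl Cm31 {
    // (a + i*b) * (c + i*d) = (a*c - b*d) + i*(a*d + b*c), since i^2 = -1.
    fn mul(self, rhs: Cm31) -> Cm31 {
        let ac = self.a * rhs.a % P;
        let bd = self.b * rhs.b % P;
        let ad = self.a * rhs.b % P;
        let bc = self.b * rhs.a % P;
        Cm31 {
            a: (ac + P - bd) % P, // a*c - b*d, adding P to avoid underflow
            b: (ad + bc) % P,
        }
    }
}

// For example, i * i = -1: Cm31 { a: 0, b: 1 }.mul(Cm31 { a: 0, b: 1 }) == Cm31 { a: P - 1, b: 0 }.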

The order of the multiplicative group of this extended field is calculated as follows:

\[ |\mathbb{F}_{p^2}^*| = p^2 - 1 = (p-1) \cdot (p+1)\]

For Mersenne primes of the form \( p = 2^k - 1 \), this becomes:

\[ |\mathbb{F}_{p^2}^*| = (2^k-2) \cdot (2^k)\]

As shown above, \( 2^k \, | \, |\mathbb{F}_{p^2}^*| \) i.e. \( \mathbb{F}_{p^2}^* \) contains a subgroup of size that is a large power of two. This makes it suitable for instantiating STARKs. This subgroup is what we refer to as the Circle group (explored further in the next section).

Secure Field

For the soundness of the protocol, it is crucial that the verifier samples random challenges from a sufficiently large field to ensure that an adversary cannot guess or brute-force the challenges and generate a proof that passes verification without knowledge of the witness.

If we use \( p = 2^{31} -1 \), then 31-bit random challenges are not sufficient to maintain the security of the protocol. To address this, the verifier draws random challenges from a degree-4 extension of \( \mathbb{F}_{p} \), which is equivalent to degree-2 extension of \( \mathbb{F}_{p^2} \), denoted as \[ \mathbb{F}_{p^4} = \mathbb{F}_{p^2}[X]/(X^2 - 2 - i) \]

The elements of \( \mathbb{F}_{p^4} \) can be represented as \( (r, s) \) or \[ r + u \cdot s \] where \( r, s \in \mathbb{F}_{p^2} \) and \( u \) is the root of the polynomial \( X^2 - 2 - i \) i.e. \( u^2 - 2 - i = 0\).

Alternatively, the elements of \( \mathbb{F}_{p^4} \) can also be represented as four elements of \( \mathbb{F}_{p} \) i.e. \( ((a, b), (c, d)) \) or \[ (a + i \cdot b) + (c + i \cdot d) \cdot u \]

where \( a, b, c, d \in \mathbb{F}_p \). With four elements from \( \mathbb{F}_{p} \), the challenge space consists of 124-bit values, offering a sufficiently large \( 2^{124} \) possibilities to sample a random challenge.
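As a small worked step, multiplication of two secure field elements reduces to \( \mathbb{F}_{p^2} \) arithmetic by substituting \( u^2 = 2 + i \) from the defining polynomial:

\[ (r + u \cdot s) \cdot (r' + u \cdot s') = \big(r r' + (2 + i) \cdot s s'\big) + u \cdot (r s' + r' s) \]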

Circle Group

As discussed in the previous section, Mersenne prime fields \( \mathbb{F}_p \) lack a smooth subgroup whose order is a large power of two. This property makes such fields unsuitable for instantiating STARK protocols. To address this, we consider extensions of \( \mathbb{F}_p \) that have smooth subgroups, which are suitable for performing FFTs and implementing the FRI protocol.

For a field extension \( F \) of \( \mathbb{F}_p \), we define the circle curve \( C(F) \) as the set of points \( (x, y) \in F^2 \) satisfying the relation: \[ x^2 + y^2 = 1 \]

In the Stwo implementation, a point on the circle is defined as follows:

pub struct CirclePoint<F> {
    pub x: F,
    pub y: F,
}

The set \( C(F) \) forms a cyclic group under the operation defined by: \[ (x,y) + (x', y') = (xx' - yy', xy' + x'y) \]

Here, the group is defined additively, which differs from the multiplicative notation used in the Circle STARKs paper. In this documentation, we adopt the additive notation for consistency with the implementation. The above group operation is implemented as:

    fn add(self, rhs: Self) -> Self::Output {
        let x = self.x.clone() * rhs.x.clone() - self.y.clone() * rhs.y.clone();
        let y = self.x * rhs.y + self.y * rhs.x;
        Self { x, y }
    }

The identity element in this group is \( (1, 0) \), implemented as:

    pub fn zero() -> Self {
        Self {
            x: F::one(),
            y: F::zero(),
        }
    }

Negation in the circle group corresponds to the conjugation map \( J \), defined by: \[ J(x, y) = (x, -y) \] This is analogous to conjugation of complex numbers. In Stwo, the conjugate of a CirclePoint is computed as:

    pub fn conjugate(&self) -> CirclePoint<F> {
        Self {
            x: self.x.clone(),
            y: -self.y.clone(),
        }
    }

The total number of points in the circle group \( C(F) \) is given by \( |F| + 1 \). Specifically, for \( C(\mathbb{F}_p) \), the number of points is \( p + 1 \), which, as discussed earlier, is a large power of two and can thus be used in STARK protocol instantiations. This result is proven in Lemma 1 of the Circle STARKs paper.

In the Stwo implementation, the generator \( g \) of the group \( C(\mathbb{F}_p) \) is defined as: \[ g = (2, 1268011823) \] where \( p = 2^{31} -1\). Subgroups of \( C(\mathbb{F}_p) \) of size \( 2^n \) can be generated using the following function:

    pub fn subgroup_gen(n: u32) -> CirclePoint<F> {
        assert!(n <= M31_CIRCLE_LOG_ORDER); // M31_CIRCLE_LOG_ORDER = 31
        let s = 1 << (M31_CIRCLE_LOG_ORDER - n);
        M31_CIRCLE_GEN.mul(s) // M31_CIRCLE_GEN = g = (2, 1268011823)
    }

To generate a subgroup \( \langle g_n \rangle \) of size \( 2^n \), the function computes \( 2^{31 - n} \cdot g \), i.e. it adds the generator \( g \) to itself \( 2^{31 - n} \) times (implemented efficiently with double-and-add), as shown below:

    pub fn mul(&self, mut scalar: u128) -> CirclePoint<F> {
        let mut res = Self::zero();
        let mut cur = self.clone();
        while scalar > 0 {
            if scalar & 1 == 1 {
                res = res + cur.clone();
            }
            cur = cur.double();
            scalar >>= 1;
        }
        res
    }

Hence, the point \( 2^{31-n} \cdot g \) serves as a generator of a subgroup \( \langle g_n \rangle \) of order \( 2^n \).

Circle Domain

In a STARK protocol, the computation trace is interpolated as a low-degree polynomial over a domain using FFT. For Circle STARKs, this domain consists of points on the circle curve and is referred to as the circle domain. The circle domain \( D \) is constructed as the union of two disjoint cosets: \[ D = q + \langle g_{n-1} \rangle \cup -q + \langle g_{n-1} \rangle \] Here, \( \langle g_{n-1} \rangle \) is a subgroup of size \( 2^{n-1} \), and \( q \) is the coset offset. This union is also called the twin-coset. The second coset in the union can be viewed as the negation (or conjugation) of the first: \[ J(q + \langle g_{n-1} \rangle) = -q + \langle g_{n-1} \rangle \] Therefore, it suffices to store only the half coset \( q + \langle g_{n-1} \rangle \), and generate the full domain via its conjugates. The circle domain is defined in Stwo as:

pub struct CircleDomain {
    pub half_coset: Coset,
}

The following animation shows a circle domain of size 8. It is constructed from the half coset \( q + \langle g_2 \rangle \) of size 4 (shown as red points) and its negation \( -q + \langle g_2 \rangle \) (shown as blue points).

To iterate over all points in the circle domain, we can iterate over the half coset and its conjugates:

    pub fn iter(&self) -> CircleDomainIterator {
        self.half_coset
            .iter()
            .chain(self.half_coset.conjugate().iter())
    }

Canonic Coset

For a specific choice of offset \( q \), the twin-coset \( D \) becomes a coset of a larger subgroup. In particular, if \( q \) is a generator of a subgroup of order \( 2^{n+1} \), then: \[ D = q + \langle g_n \rangle = q + \langle g_{n-1} \rangle \cup -q + \langle g_{n-1} \rangle \] This result is proven in Proposition 1 of the Circle STARKs paper. Such domains are called standard position cosets, or canonic cosets. They are implemented as follows:

pub struct CanonicCoset {
    pub coset: Coset,
}

Here, CanonicCoset represents the full coset \( q + \langle g_n \rangle \), while CircleDomain is represented with its half coset \( q + \langle g_{n-1} \rangle \). Thus to compute the CircleDomain from the CanonicCoset, first calculate the half coset \( q + \langle g_{n-1} \rangle \), which will be used to initialize the CircleDomain as shown below:

    pub fn circle_domain(&self) -> CircleDomain {
        CircleDomain::new(self.half_coset())
    }

The following animation shows a canonic coset of size 8. It is constructed from the coset \( \langle g_3 \rangle \) of size 8 followed by an offset by \( q \), where \( q \) is the generator of subgroup \( \langle g_4 \rangle \).

We can verify whether a given CircleDomain is canonic by checking the step size of the half coset against the initial coset offset. In the CircleDomain implementation, only the half coset \( q + \langle g_{n-1} \rangle \) is explicitly stored. If CircleDomain is canonic, \( q \) must be a generator of the subgroup \( \langle g_{n+1} \rangle \), which has order \( 2^{n+1} \) i.e. \( q = 2^{31 - (n+1)} \cdot g \). Recall that the generator of the subgroup \( \langle g_{n-1} \rangle \) is \( 2^{31 - (n-1)} \cdot g \).

Thus, the step size between consecutive elements in the half coset is \( 2^{31 - (n-1)} \cdot g \), and the initial point is \( q = 2^{31 - (n+1)} \cdot g \). Therefore, the ratio between the step size and the initial coset offset is: \[ \frac{2^{31 - (n-1)}}{2^{31 - (n+1)}} = 2^2 = 4 \] This means that in a canonic coset, the step size is exactly four times the initial coset offset. This condition is used to check whether a CircleDomain is canonic, as shown below:

    pub fn is_canonic(&self) -> bool {
        self.half_coset.initial_index * 4 == self.half_coset.step_size
    }

In the next section, we will dive into polynomials defined over the circle.

Lookups

Lookups are simply a way to connect one part of the table to another. When we "look up" a value, we are doing nothing more than creating a constraint that allows us to use that value in another part of the table without breaking soundness.

Design

We will walk through four steps to incrementally build up the design of lookups.

Step 1: Suppose we want to have two columns with the same values.

We can do this by creating two columns with the exact same values and adding a constraint over them: col_1 - col_2 = 0.

Step 2: We want to check that the two columns have the same values but in a different order.

We can use the idea that two sets of values will have the same cumulative product if they are permutations of each other. So we add new columns, col_1_cumprod for col_1 and col_2_cumprod for col_2, which contain the running cumulative product of col_1 and col_2, respectively. The new constraints will check that each of these new columns does indeed contain the cumulative product values and that their last values are the same. We can optimize this by creating just one new column that keeps a running cumulative product of the fraction col_1 / col_2.
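Below is a rough sketch of this running-product check in plain Rust (our own helper names). Note that in practice each value is first shifted by a random challenge \(z\), as in the Grand Product Check described later in this section, so that equal products imply a permutation with overwhelming probability:

// Check that col_1 is a permutation of col_2 by comparing the last entries of their
// cumulative-product columns, where each value v is first mapped to (z - v) mod p.
fn permutation_check(col_1: &[u64], col_2: &[u64], z: u64) -> bool {
    const P: u64 = (1 << 31) - 1;
    let running_product = |col: &[u64]| {
        col.iter()
            .scan(1u64, |acc, &v| {
                *acc = *acc * ((z + P - v % P) % P) % P; // multiply in (z - v) mod p
                Some(*acc)
            })
            .collect::<Vec<_>>() // this is the extra "cumulative product" column
    };
    // An AIR would constrain every entry of the cumulative-product columns;
    // here we only compare their final values.
    running_product(col_1).last() == running_product(col_2).last()
}

// For example, permutation_check(&[3, 1, 2], &[1, 2, 3], 123_456) returns true.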

Step 3: We want to check that all values in col_2 are in col_1, but each value appears an arbitrary number of times.

(Note that this is a generalization of the second step: in the second step, all values in col_2 appear exactly once in col_1.)

Supporting this third step is actually pretty simple: when creating the running cumulative product, we need to raise each value in col_1 to its multiplicity, or the number of times it appears in col_2. The rest of the constraints do not need to be changed.

Step 4: We want to check that all values in [col_2, col_3, ...] are in col_1 with arbitrary multiplicities

Finally, we want to allow many more columns, [col_2, col_3, ...], that contain values from col_1. Fortunately, we can use the same idea as the third step: when creating the running cumulative product, we raise each value in col_1 to the power of the number of times it appears in [col_2, col_3, ...].

Note

In summary, lookups support the following use-cases:

  1. Prove equality: we want to prove that the values of the first column are equal to the values of the second column.
  2. Prove permutation: we want to prove that the values of the first column are a permutation of the values of the second column.
  3. Prove permutation with multiplicities: we want to prove that each value of the first column appears a certain number of times over multiple columns.

Technique: LogUp

LogUp is a technique used to constrain lookups. It's a successor to Plookup, and is especially useful for proving permutation with multiplicities. Here, we'll briefly explain why this is the case.

Plookup and its variants use a technique called the Grand Product Check to prove permutation.

$$ \prod_{i=0}^{n-1} (X - a_i) = \prod_{i=0}^{n-1} (X - b_i) $$

In the equation above, we can check that the set $\{a_0,\dots,a_{n-1}\}$ is a permutation of the set $\{b_0,\dots,b_{n-1}\}$ by setting $X$ to a random value provided by the verifier.

However, this becomes inefficient when we have multiplicities since we need to encode the multiplicities as powers of each lookup polynomial, and thus the degree of the polynomial increases linearly with the number of multiplicities.

$$ \prod_{i=0}^{n-1} (X - a_i) = \prod_{i=0}^{n-1} (X - b_i)^{m_i} $$

On the other hand, LogUp uses the logarithmic derivative of the Grand Product Check:

$$ \sum_{i=0}^{n-1} \frac{1}{X - a_i} = \sum_{i=0}^{n-1} \frac{m_i}{X - b_i} $$

In this approach, each lookup polynomial is represented as a rational function with the multiplicity as the numerator. This transformation is significant because the degree of the polynomial remains constant regardless of the number of multiplicities, making LogUp more efficient for handling multiple lookups of the same value.
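Below is a toy numeric check of this identity in plain Rust (our own helper names; the values, multiplicities, and challenge are arbitrary): the sum of $\frac{1}{X - a_i}$ over the looked-up values equals the sum of $\frac{m_i}{X - b_i}$ over the table values, with all arithmetic done in the M31 field.

const P: u64 = (1 << 31) - 1;

// Modular exponentiation by squaring.
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut res = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            res = res * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    res
}

// Inverse via Fermat's little theorem: v^(p-2) = v^(-1) mod p (v must be nonzero).
fn inv(v: u64) -> u64 {
    pow(v % P, P - 2)
}

fn main() {
    let x = 123_456_789u64; // the random challenge X
    let table = [(5u64, 3u64), (7, 1)]; // (table value b_i, multiplicity m_i)
    let lookups = [5u64, 7, 5, 5]; // each b_i repeated m_i times, in any order

    // Left-hand side: sum of 1 / (X - a_i) over the looked-up values.
    let lhs = lookups
        .iter()
        .fold(0u64, |acc, &a| (acc + inv((x + P - a) % P)) % P);
    // Right-hand side: sum of m_i / (X - b_i) over the table values.
    let rhs = table
        .iter()
        .fold(0u64, |acc, &(b, m)| (acc + m % P * inv((x + P - b) % P)) % P);
    assert_eq!(lhs, rhs); // both sides of the LogUp sum agree
}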

Implementation

The following figures show how lookups are implemented in Stwo: values are looked up from a preprocessed trace and constrained using the LogUp technique.

Figure 1: Create trace columns that look up values from a preprocessed trace
Figure 2: Add a multiplicity column
Figure 3: Create LogUp columns

Awesome Stwo

We refer you to the Awesome Stwo repo for additional resources on Stwo and a list of awesome projects building on Stwo.

Benchmarks Report

  • Overview of zkVM and Proof Systems
| zkVM / Proof System | Architecture | Frontend | Backend | Security Bits |
|---|---|---|---|---|
| RISC Zero | RISC-V | Rust | STARK-based | 96 bits |
| SP1 | RISC-V | Rust | STARK-based | 100 bits |
| OpenVM | RISC-V | Rust | STARK-based | 100 bits |
| Jolt | RISC-V | Rust | Lookup-based | - |
| Stone | Cairo VM | Cairo | STARK-based | 100 bits |
| Stwo | Cairo VM | Cairo | STARK-based | 96 bits |
  • All benchmarks presented here are CPU-only and do not utilize any GPU acceleration.
  • Stone benchmarks were generated using the dynamic layout with these configurations.
  • The benchmarks for SP1, R0 and OpenVM use the compressed or succinct prover type, which aggregates all the STARK proofs into a single STARK proof.
  • Benchmarks which ran out of memory are indicated by 💾, and benchmarks which produced errors during proof generation are marked separately in the tables.

Time and Commit Hash

  • Commit Hash: f27856ec17fbd9e85ec31cba9c2cb9e96d8dd08f
  • Timestamp: Thursday, July 03, 2025 20:52:50 UTC

System Information

OS Version

Ubuntu 24.04.2 LTS

CPU Info

  • Architecture: x86_64

  • CPU(s): 48

  • Model name: AMD EPYC-Rome Processor

  • Thread(s) per core: 2

  • Core(s) per socket: 24

  • Socket(s): 1

  • L3 cache: 16 MiB (1 instance)

Memory Info

  • MemTotal: 184.25 GB

  • MemFree: 166.19 GB

  • MemAvailable: 178.69 GB

Fibonacci

Benchmark n Fibonacci iterations.

Prover Time (s)

njoltsp1openvmr0stonestwo
327683.20815.00334.29117.12856.32513.198
655365.18318.25242.40329.24195.46811.372
1310729.1724.98257.97852.298💾12.886
26214415.77139.32792.22986.743💾12.587
52428828.1654.812158.128167.607💾16.166
104857654.4272.028320.082306.386💾18.612
2097152107.89128.215515.567605.92💾31.047
4194304222.3321042.581206.76💾60.939

Verifier Time (ms)

njoltsp1openvmr0stonestwo
3276853.01081422392.041
6553677.010713910113.016
13107258.010114023💾18
262144103.010014122💾89
52428856.08314024💾14
1048576104.0835723💾414
209715298.0845710💾367
4194304835723💾10

Proof Size (KB)

njoltsp1openvmr0stonestwo
32768187.6151315.531708.31223.234125.608789.578
65536197.7991315.531708.31223.234129.8783.63
131072208.3991315.531708.31223.234💾783.834
262144219.4151315.531708.31223.234💾802.678
524288230.8471315.531708.31223.234💾801.194
1048576242.6951315.53852.937223.234💾808.038
2097152254.9591315.53852.937223.234💾819.122
41943041315.53816.649223.234💾866.882

Cycle Count

njoltsp1r0stonestwo
32768196974168808166215229390262143
65536393582332648330055458766524287
131072786798660328657735💾1048575
262144157323013156881313095💾2097151
524288314604126264082623815💾4194303
1048576629182252478485245255💾8388607
2097152125832931049072810488135💾16777215
41943042097648820973895💾33554431

Peak Memory (GB)

njoltsp1openvmr0stonestwo
327686.445.1272.359.4411.08
655365.836.425.434.58118.5411.46
1310728.799.185.439.15💾12.2
26214414.7314.36.579.17💾14.01
52428826.9515.1212.229.17💾17.57
104857650.8525.9412.229.17💾24.57
209715298.9335.6112.429.18💾38.62
419430451.2512.439.18💾72.26

Sha2

Benchmark Sha256 hash of n bytes. For Stone, the Cairo implementation of Sha256 by Cartridge was used; for the other zkVMs, the sha2 Rust crate was used.

Prover Time (s)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
2561.65712.7128.94311.19610.83517.97834.0198.31644.301
5121.71113.18429.56111.30619.8815.05133.99111.3343.728
10242.17313.86330.60917.13631.02116.83333.8311.20144.897
20483.32415.02134.18428.93431.29417.72634.53117.17444.683
409616.8839.3852.48766.53914.08434.41828.98247.701
819224.03750.53986.688116.46714.97935.56452.4848.995
1638437.95878.057167.856💾21.40238.267110.88752.865
3276852.566129.842300.572💾16.55546.209184.87360.148
6553672.136249.403589.84💾18.38460.138357.71679.957
131072136.592532.7961172.21💾27.87187.29709.726124.996
262144257.02916.6032338.03💾27.362121.0221415.34219.654
524288513.7261790.294673.32💾43.242233.9592825.46393.678

Verifier Time (ms)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
25650.010814110428.0108322142
51249.011314420418.01110623141
102449.010214023426.0119015141
204875.09714022440.01110922142
409610613922454.0119923141
819210914122525.0118722142
163848214422💾429223142
327689414023💾4910215142
655368314123💾1439523141
131072835724💾4418324141
262144835624💾2368323142
524288835623💾128224141

Proof Size (KB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
256172.1431315.561708.31223.482104.104986.3941315.56223.4821784.35
512172.1431315.561708.31223.482114.1521000.611315.56223.4821784.35
1024181.4951315.561708.31223.482117.2241011.161315.56223.4821784.35
2048191.2631315.561708.31223.482120.041011.711315.56223.4821784.35
40961315.561708.31223.482126.6321007.771315.56223.4821784.35
81921315.561708.31223.482130.8241013.421315.56223.4821784.35
163841315.561708.31223.482💾1005.311315.56223.4821784.35
327681315.561708.31223.482💾1018.491315.56223.4821784.35
655361315.561708.31223.482💾1017.721315.56223.4821784.35
1310721315.56852.937223.482💾1037.151315.56223.4821784.35
2621441315.56852.937223.482💾1045.931315.56223.4821784.35
5242881315.56816.649223.482💾1092.141315.56223.4821784.35

Cycle Count

njoltsp1r0stonestwosp1-precompiler0-precompile
25635325.0339115128014791.01310711149231643
51260893.0527439084829233.01310711582055467
10241120299040716998446295.013107124476103133
204821430116573532825680419.013107141788198463
409631639164480016048926214376412389106
81926177031277888308807524287145660770392
1638412203272544064💾10485752841561532981
3276824255755076416💾20971515611483058159
65536483607110141120💾419430311151326108532
131072965706320270528💾8388607222310012209241
2621441929904740529344💾16777215443903624410676
5242883858301581046976💾33554431887090848813546

Peak Memory (GB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
2565.094.295.451.439.6310.797.841.435.44
5124.94.425.441.4218.910.828.181.425.45
10244.94.545.442.335.7410.868.011.425.44
20484.915.025.444.5835.7310.878.12.35.44
40966.285.449.1571.4611.038.294.595.44
81928.65.449.17142.6811.348.539.165.44
1638413.325.449.18💾11.948.799.175.44
3276815.318.789.18💾13.488.639.185.44
6553629.1916.649.18💾16.5212.99.185.44
13107235.421.69.2💾22.521.269.196.89
26214445.3422.779.21💾34.5234.159.2112.86
52428854.8825.599.25💾63.8949.579.2324.78

Sha2-Chain

Benchmark Sha256 hash of 32 bytes for n iterations.

Prover Time (s)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
82.16412.79829.44411.30311.08311.48834.6066.86243.511
162.97313.87530.34911.25211.11518.58334.3398.35244.295
324.91214.6233.81417.15611.99111.93834.2038.30543.893
648.28517.61738.29428.94810.63114.15833.86211.28944.197
12814.46924.72849.58252.45719.20610.33535.13411.31545.006
25625.90937.47173.57474.81826.18414.05736.23417.42242.793
51250.06452.895126.086144.23456.52922.01240.64329.05746.573
102496.90171.752231.888285.01295.87820.84650.0252.52252.862
2048132.275516.789574.819💾14.06177.808110.07161.761
4096227.325905.7061110.77💾17.799103.011191.20881.617
8192444.0281814.442210.89💾18.607193.572365.446122.31
16384872.6383596.934404.24💾20.569319.33723.963212.16
327681698.857067.268791.74💾25.041628.7121443.04388.56

Verifier Time (ms)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
849.08214022432.0108323141
1655.010114023398.0109320142
3286.010814022402.0109421141
64102.08814023375.0108722141
12878.010814123413.0108421141
25660.08313922422.0118323141
51260.08414022420.0378322142
1024103.08313923456.0418322141
2048835723💾408310141
4096865622💾1198323141
8192835622💾1168310142
16384845624💾1828623141
32768835624💾2628323141

Proof Size (KB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
8181.4951315.561708.31223.482109.544952.0941315.56223.4821784.35
16191.2631315.561708.31223.482106.728971.1541315.56223.4821784.35
32201.4471315.561708.31223.482108.776955.5541315.56223.4821784.35
64212.0471315.561708.31223.482106.568962.7981315.56223.4821784.35
128223.0631315.561708.31223.482112.36946.1581315.56223.4821784.35
256234.4951315.561708.31223.482117.992969.6981315.56223.4821784.35
512246.3431315.561708.31223.482124.168947.7381315.56223.4821784.35
1024258.6071315.561708.31223.482129.512948.8741315.56223.4821784.35
20481315.56852.937223.482💾959.4141315.56223.4821784.35
40961315.56852.937223.482💾966.0741315.56223.4821784.35
81921315.56816.649223.482💾976.1421315.56223.4821784.35
163841315.56816.649223.482💾986.1861315.56223.4821784.35
327681315.56852.937223.482💾1001.511315.56223.4821784.35

Cycle Count

njoltsp1r0stonestwosp1-precompiler0-precompile
869290.048118459523008.0655351349514624
1613628285686834325872.0655352028720776
3227026616082215839211600.0655353387133080
6453823431109430831223056.0655356103957688
128107418761163860815245968.065535115375106904
25621460591212726120783291792.0131071224047205336
512428980324149022407192183440262143441391402200
1024857729148192544805912366736524287876079795928
204896279589603352💾104857517454551583384
40961924536619198232💾209715134842073158296
81923848018238387992💾419430369617116308120
163847694981476767512💾83886071391671912607768
32768153889078153526552💾167772152782673525207064

Peak Memory (GB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompile
85.14.315.441.429.110.747.811.445.44
164.914.545.441.429.0410.748.041.435.44
325.924.985.442.39.0410.758.51.435.44
648.696.195.444.589.0510.778.051.425.44
12814.398.815.449.1417.8710.788.291.425.44
25626.3613.275.449.1729.6610.838.522.35.44
51248.3214.78.579.1759.3310.958.634.595.44
102493.0327.216.249.17118.6211.1811.089.155.44
204840.420.599.18💾11.6418.619.175.44
409644.0420.69.19💾12.8835.189.175.44
819255.1420.629.2💾15.3153.39.176.64
1638458.4720.669.2💾20.0852.239.1812.34
3276860.2220.89.27💾29.7163.039.1823.74

Sha3

Benchmark Keccak256 hash of n bytes. For Stone, both the stdlib implementation of Keccak256 and the builtin were benchmarked.

Prover Time (s)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
2561.71913.32831.47111.34820.26619.74360.34333.957.92518.361
5122.10913.62732.77317.20530.97210.89760.38736.85359.35417.21
10243.1515.27534.77717.17965.92523.25860.19342.78958.74818.037
20484.92417.56439.92929.094115.74616.66160.69643.04759.41916.58
409624.09645.14252.54💾11.69360.07354.79261.50327.567
819237.85160.927110.08💾16.39261.47778.20363.79843.043
1638450.15894.74225.876💾15.42263.416136.10370.00386.217
3276866.943165.212422.471💾16.52870.234251.26379.62151.873
65536109.098308.693841.045💾21.26984.047437.697102.599💾
131072207.231753.4941671.26💾32.63196.357856.136151.591💾
262144367.8671526.343310.6💾59.078133.311691.49252.934💾
524288701.993048.736615.82💾106.153191.3523376.32454.067💾

Verifier Time (ms)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
25677.010713923505.01111624141269.0
51266.010713923494.01110823141213.0
102458.08413922495.0118823141244.0
204861.010914024544.01110422141217.0
409611014022💾1111422142229.0
819210813923💾118922142254.0
163848314024💾389123142262.0
327688314024💾1398822141273.0
655368214024💾19811123141💾
131072835724💾2018822141💾
262144835623💾6318823141💾
524288835622💾1918823141💾

Proof Size (KB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
256170.8231315.561708.31223.482113.641002.791477.23223.4821787.55108.008
512180.1751315.561708.31223.482119.272991.4941477.23223.4821787.55107.752
1024189.9431315.561708.31223.482126.921001.151477.23223.4821787.55110.312
2048200.1271315.561708.31223.482130.5361001.41477.23223.4821787.55110.056
40961315.561708.31223.482💾1016.951477.23223.4821787.55114.92
81921315.561708.31223.482💾1012.451477.23223.4821787.55119.848
163841315.561708.31223.482💾1027.41477.23223.4821787.55126.088
327681315.561708.31223.482💾1014.611477.23223.4821787.55132.872
655361315.561708.31223.482💾1037.781477.23223.4821787.55💾
1310721315.56852.937223.482💾1073.611477.23223.4821787.55💾
2621441315.56816.649223.482💾1154.341477.23223.4821787.55💾
5242881315.56852.937223.482💾1179.281477.23223.4821787.55💾

Cycle Count

njoltsp1r0stonestwosp1-precompiler0-precompilestone-precompile
25646732.0476536365119431.01310711533736390998.0
51290274.08471412192538611.013107120082639101762.0
102417735815882323847359058.0262143295591189503286.0
2048351574307036471569117859524287485082290306334.0
4096586913921296💾10485758601544810212241.0
819211466541820750💾209715116101688624624055.0
1638422661473619658💾2097151311029176253447683.0
3276845051107217474💾4194303611032351511094930.0
65536899965314429571💾838860712114977021350💾
1310721798871428853765💾16777215241240214038604💾
2621443596682357702153💾33554431481419928068169💾
52428871923036115398929💾67108863961778856132428💾

Peak Memory (GB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
2565.434.55.441.4219.0210.8211.563.465.4911.13
5124.94.545.442.335.7410.8511.593.455.4911.07
10244.95.015.442.371.4110.9711.553.455.4911.07
20485.86.165.444.59142.5411.2111.623.455.4911.07
40968.75.449.17💾11.911.794.595.4921.97
819213.655.449.17💾13.1911.969.165.4943.91
1638412.245.859.18💾13.7311.879.175.4987.17
3276824.0310.779.18💾1712.239.175.48174.24
655363920.539.19💾23.5412.229.185.49💾
13107252.2321.239.2💾42.0817.099.188.19💾
26214452.2822.779.2💾78.9727.259.2115.42💾
52428857.6722.939.27💾151.1655.689.2229.87💾

Sha3-Chain

Benchmark Keccak256 hash of 32 bytes for n iterations.

Prover Time (s)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
83.15114.69131.42717.266.29113.55259.26634.11458.80616.827
165.14217.43134.81828.941115.2818.4760.35337.32259.32918.279
328.5723.60343.13552.437💾13.47560.40936.97358.91327.208
6415.08237.42257.01774.883💾14.5560.37742.97859.97143.739
12829.2550.15190.378132.931💾18.69959.43354.81461.48785.956
25655.49666.364156.377261.039💾22.32961.84378.50365.217151.067
512105.689104.036297.659515.447💾29.07865.63492.15172.255💾
1024192.377715.3261037.74💾52.1473.4176.44188.201💾
2048357.8851474.132034.16💾94.35891.595348.058115.792💾
4096681.1382909.764083.82💾109.247665.897171.808💾
81921316.415795.098122.75💾183.781286.1510.264💾
163842597.8611684.616241.2💾353.9352565.25870.635💾
327685192.0423068.532449.9💾657.3835090.451712.06💾

Verifier Time (ms)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
883.010713923526.011.012220142252.0
16114.08613923555.011.011623142254.0
3275.011113923💾11.08823142256.0
6479.010213923💾11.011623142265.0
12857.09913923💾11.09922141270.0
25664.08313922💾42.09020141288.0
512118.08313923💾44.010323141💾
1024835623💾49.010623144💾
2048845623💾572.08824142💾
4096835623💾8823141💾
8192845623💾872256💾
16384835520💾882457💾
32768845723💾882255💾

Proof Size (KB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
8191.2631315.561708.31223.482124.616995.5541477.23223.4821787.55109.8
16201.4471315.561708.31223.482129.961006.3021477.23223.4821787.55108.264
32212.0471315.561708.31223.482💾1028.5581477.23223.4821787.55115.944
64223.0631315.561708.31223.482💾1023.4581477.23223.4821787.55122.856
128234.4951315.561708.31223.482💾1032.0581477.23223.4821787.55123.304
256246.3431315.561708.31223.482💾1031.811477.23223.4821787.55131.944
512258.6071315.561708.31223.482💾1084.7461477.23223.4821787.55💾
10241315.56852.937223.482💾1155.0541477.23223.4821787.55💾
20481315.56816.649223.482💾1182.0621477.23223.4821787.55💾
40961315.56852.937223.482💾1477.23223.4821787.55💾
81921315.56816.649223.482💾1477.23223.482852.937💾
163841315.56816.649223.482💾1477.23223.482852.937💾
327681315.56852.937223.482💾1477.23223.482816.649💾

Cycle Count

njoltsp1r0stonestwosp1-precompiler0-precompilestone-precompile
820333114784114732857648.026214318577278053480.0
1640436328435328623211518552428725825436936904.0
32806427557377564040💾1048575403217546913752.0
64161055511034251119656💾20971516931313902127448.0
128321882821955212230888💾419430312729726612554840.0
256643534043797134453352💾8388607243265520333109624
5121286836487480978898280💾167772154752011028749💾
10241748486517788136💾335544319390732050355💾
20483495840135567848💾6710886318668174093575💾
40966990547371127272💾37223058174812💾
8192139799617142246120💾743328116338156💾
16384279587905284483816💾1485523332669621💾
32768559164481568959208💾2969913765327886💾

Peak Memory (GB)

njoltsp1openvmr0stonestwosp1-precompiler0-precompileopenvm-precompilestone-precompile
85.095.145.442.371.4910.9511.613.455.4911.07
165.856.235.444.58142.5311.1411.433.455.4911.07
328.889.035.449.14💾11.8311.953.455.4921.98
6414.8913.135.449.17💾13.211.63.455.4943.91
12827.1312.025.79.18💾15.611.494.575.4987.26
25651.2421.9810.489.18💾20.6611.149.135.49174.37
512100.039.8419.959.18💾36.3611.929.165.49💾
102441.1819.959.19💾67.5711.249.165.49💾
204847.419.969.2💾128.3914.179.166.28💾
409659.5120.319.22💾30.39.1711.39💾
819261.1620.59.26💾53.859.1820.72💾
1638461.4320.759.34💾57.239.220.8💾
3276863.5820.939.49💾63.679.2220.87💾

BLAKE

Benchmark Blake2s256 hash of n bytes. For the Stwo benchmarks, a dedicated opcode for BLAKE was used, as illustrated in this Cairo source file. SP1 and R0 do not support precompiles for BLAKE.

Prover Time (s)

nsp1r0stwo-precompile
204814.27228.81735.206
409614.77352.3334.752
819217.67865.6730.911
1638424.308126.21932.214
3276837.485240.84330.324
6553655.119476.27338.614
131072101.786947.30635.269
262144162.8631888.1533.434

Verifier Time (ms)

nsp1r0stwo-precompile
20481021011
40961052311
81921112211
16384992411
327681042312
65536812212
131072812337
262144812248

Proof Size (KB)

nsp1r0stwo-precompile
20481315.56223.4821117.14
40961315.56223.4821138.83
81921315.56223.4821134.37
163841315.56223.4821121.81
327681315.56223.4821144.87
655361315.56223.4821158.19
1310721315.56223.4821141.67
2621441315.56223.4821148.66

Cycle Count

nsp1r0stwo-precompile
204810099726451565535
409619306152176365535
8192377189103625965535
16384745445206525165535
3276814819574123235131071
6553629549818239203262143
131072590102916471139524287
26214411793125329350111048575

Peak Memory (GB)

nsp1r0stwo-precompile
20484.554.5819.98
409659.1419.94
81926.099.1719.98
163848.839.1820.02
3276813.839.1820.17
6553616.599.1820.5
131072389.1920.88
26214445.319.1921.97

BLAKE-Chain

Benchmark Blake2s256 hash of 32 bytes for n iterations. For the Stwo benchmarks, a dedicated opcode for BLAKE was used, as illustrated in this Cairo source file. SP1 and R0 do not support precompiles for BLAKE.

Prover Time (s)

nsp1r0stwo-precompile
12817.31928.7630.885
25624.27952.07432.974
51238.586.18133.26
102453.129166.70630.723
204889.423316.02938.153
4096145.299628.11831.717
8192259.231260.1437.422
16384495.8782521.1735.82

Verifier Time (ms)

nsp1r0stwo-precompile
1281052211
2561042211
5121082311
1024842311
2048812341
4096852340
8192812349
16384812239

Proof Size (KB)

nsp1r0stwo-precompile
1281315.56223.4821119.07
2561315.56223.4821121.09
5121315.56223.4821128.54
10241315.56223.4821119.77
20481315.56223.4821127.81
40961315.56223.4821127.26
81921315.56223.4821134
163841315.56223.4821143.83

Cycle Count

nsp1r0stwo-precompile
12837624935650765535
25674284170249165535
51214760251394459131071
102429423932778395262143
204858751295546267524287
409611740601110820111048575
819223471545221534992097151
1638446933433442964754194303

Peak Memory (GB)

nsp1r0stwo-precompile
1286.164.5819.99
2569.129.1520.01
51213.179.1720.13
102415.969.1720.39
204832.889.1720.72
409644.69.1921.54
819250.579.1923.31
1638453.259.2127.19

Matrix Multiplication

Benchmark multiplication of two matrices of size n x n.

Prover Time (s)

njoltsp1openvmr0stonestwo
41.19613.17929.5086.76310.92212.72
81.64812.86230.1198.219.87919.566
164.97915.25233.46717.13526.38416.531
3227.70226.49158.10265.806💾14.655
64135.352292.125480.724💾21.199
128881.292671.683809.76💾76.061

Verifier Time (ms)

njoltsp1openvmr0stonestwo
444.011014023111.010
851.011014022120.010
1686.010213923139.010
3284.010914025💾18
648514022💾41
128845623💾45

Proof Size (KB)

njoltsp1openvmr0stonestwo
4144.6471315.531708.31223.234109.032815.674
8169.6071315.531708.31223.234108.52833.086
16198.9111315.531708.31223.234116.456844.238
32233.5911315.531708.31223.234💾844.914
641315.531708.31223.234💾876.354
1281315.53852.937223.234💾981.178

Cycle Count

njoltsp1r0stonestwo
47181.0843058202181.032767
842753.0239512135213521.032767
1631478213984413728094569.0131071
32245771010443941041904💾1048575
6482110628208720💾8388607
1286530663865304592💾67108863

Peak Memory (GB)

njoltsp1openvmr0stonestwo
45.434.455.431.429.0910.73
84.914.295.431.439.0310.73
165.85.125.432.2929.6810.82
3226.4510.345.439.17💾11.61
6439.2819.699.17💾19.92
12860.0120.099.19💾101.6

Elliptic Curve Addition

Benchmark n point doublings for secp256k1.

Prover Time (s)

njoltsp1openvmr0stonestwo
25650.4170.731158.631283.94118.86911.993
51296.814116.11292.691531.30930.52919.268
1024211.962582.3561037.1665.69315.356
2048394.0221048.012072.757114.31214.318
4096741.1692029.724079.796💾14.407
81921426.644046.328181.789💾16.635
163842844.327953.88💾24.382
327685628.3915793.7💾28.304

Verifier Time (ms)

njoltsp1openvmr0stonestwo
25667.08314022.0188.010
51260.08314023.0181.010
1024835623.0187.034
2048835625.0238.039
4096835723.0💾41
8192835611.0💾36
163848356💾130
327688355💾15

Proof Size (KB)

njoltsp1openvmr0stonestwo
256249.3991315.561708.31223.482111.592918.41
512261.6631315.561708.31223.482120.04897.182
10241315.56852.937223.482127.784889.49
20481315.56852.937223.482131.976896.15
40961315.56816.649223.482💾901.758
81921315.56816.649223.482💾910.154
163841315.56852.937💾929.282
327681315.56816.649💾931.042

Cycle Count

njoltsp1r0stonestwo
25647607604715656471191359936.0131071
512929016891695449165801119840262143
10241807732018073577239648262143
20483589287235889129479264524287
40967152397671520233💾1048575
8192142786184142782441💾2097151
16384285310600💾4194303
32768570359432💾8388607

Peak Memory (GB)

njoltsp1openvmr0stonestwo
25648.6223.610.379.1817.9310.9
51293.947.3819.89.1935.711.08
102444.3119.839.1971.4711.27
204855.8219.849.19142.6811.81
409662.4919.869.22💾13.22
819263.1419.949.26💾16.01
1638466.4819.95💾21.62
3276866.9719.96💾32.64