
The Native Rollups Book

What are native rollups?

A native rollup is a new type of EVM-based rollup that directly makes use of Ethereum’s execution environment for its own state transitions, removing the need for complex and hard-to-maintain custom proof systems.

Problem statement

Governance risk

Today, EVM-based rollups must trade off security against L1 equivalence. Every time Ethereum forks, EVM-based rollups need to go through bespoke governance processes to upgrade their contracts and maintain equivalence with L1 features. A rollup’s governance system cannot be forced to follow Ethereum’s governance decisions, and is therefore free to diverge from them arbitrarily. Because of this, the best that an EVM-based rollup striving for L1 equivalence can do is provide a long exit window for its users, protecting them from its governance going rogue.

Exit windows introduce yet another trade-off:

  • They can be short, reducing the time users spend out of equivalence with L1, but also narrowing the cases in which the exit window is effective. Protocols that require sufficiently long time delays (e.g. vesting contracts, staking contracts, timelocked governance) would not be protected by a short exit window.
  • They can be long, protecting more cases at the cost of a longer period out of equivalence. Note, though, that no finite (and reasonable) exit window length can protect all possible applications.

The only way to avoid governance risk today is to give up upgrades, remain immutable and accept that the rollup will increasingly diverge from L1 over time.

Bug risk

EVM-based rollups need to implement complex proof systems just to support what Ethereum already provides on L1. Even though these proof systems are getting faster and cheaper over time, they are still not considered safe to use in production in a permissionless setting. Rollups today mitigate this by running multiple independent proof systems that must agree before a state transition is considered valid, which increases protocol cost and complexity.

The EXECUTE precompile

Native rollups solve these problems by replacing complex proof systems with a call to the EXECUTE precompile, which under the hood implements a recursive call to Ethereum’s own execution environment. As a consequence, every time Ethereum forks, native rollups automatically adopt the new features without the need for dedicated governance processes. Moreover, the EXECUTE precompile is “bug-free” by construction, in the sense that any bug in the precompile is also a bug in Ethereum itself, which the Ethereum community will always fork and fix. At the same time, the Ethereum community will keep hardening the guarantees of L1 execution with extensive testing, multiple client implementations and formal verification, and native rollups will benefit from all these efforts automatically.

Purpose of this book

This book is designed to serve as a comprehensive resource for understanding and contributing to our work on native rollups.

Goals of this book include:

  • Provide in-depth explanations of the inner workings of the EXECUTE precompile.
  • Provide technical guidance on how native rollups can be built around the precompile.
  • Educate readers on the benefits of native execution and how the proposal compares to other scalability solutions.
  • Provide a starting point for community members to discuss and contribute to the design and implementation of native rollups.

Native rollup verification

Re-execution vs ZK enforcement

The original proposal for the EXECUTE precompile presented two possible enforcement mechanisms: re-execution and ZK proofs. While the latter requires the L1 ZK-EVM upgrade to take place, the former can potentially be implemented beforehand and set the stage for the ZK version, much as proto-danksharding was introduced before PeerDAS.

The re-execution variant could only support optimistic rollups that are EVM-equivalent and have a bisection protocol that narrows disputes down to steps of one (or a few) L2 blocks. Today three stacks have working bisection protocols: the Orbit stack (Arbitrum), the OP stack (Optimism), and Cartesi. Cartesi is built to run a Linux VM, so it could not use the precompile. Orbit supports Stylus, which makes it not fully EVM-equivalent; a Stylus-less version could be implemented, but even then it could not support Arbitrum One. The OP stack is mostly EVM-equivalent, but would still require heavy modifications to support native execution. It is therefore unclear whether implementing the re-execution version of the precompile is worth it, or whether it is better to wait for the more powerful ZK version.

While the L1 ZK-EVM upgrade is not needed for the re-execution version, statelessness is, as we want L1 validators to be able to verify the precompile without having to hold all rollups’ state. It’s not clear whether the time interval between statelessness and L1 ZK-EVM will be long enough to justify the implementation of the re-execution variant, or whether statelessness will be implemented before the L1 ZK-EVM in the first place.

Both variants are specified in this document. The re-execution spec comes first because it is simpler and helps explain the progression to ZK: the core function being executed or proven is the same (verify_stateless_new_payload), and the contract patterns (state management, messaging, anchoring) are shared. The ZK spec then shows what changes when re-execution is replaced by proof verification.

Design principles

The core principle is to re-use as many L1 components as possible. L2 operators run the same proving infrastructure as L1 provers: they prove the same program (verify_stateless_new_payload) with the same keys. L1 nodes verify L2 proofs using the same EIP-8025 infrastructure they use for L1 block proofs. In the re-execution variant, L1 validators run the same stateless validation function directly.

This means native rollups inherit whatever the L1 EVM supports: no custom transaction types, precompiles, or fee markets on the L2 side. Any such change would require modifying the shared program, making the proposal more complex and harder to accept. The L2-specific logic (blob transaction filtering, L1 anchoring) is kept outside the standard function, in a thin preprocessing layer.

Significant parts of the design depend on features that are still in development. The specification is written as if those features were already implemented, without trying to predict their exact details; significant changes are therefore expected as they mature.

Data layout

Both variants execute or prove the same function (verify_stateless_new_payload) and therefore share the same L2 block structure. The following tables describe how native rollup blocks map to the standard spec types.

Fields marked constrained are validated during execution (wrong value = proof/execution fails). Fields marked unconstrained are free inputs chosen by the operator. Fields marked fixed have a constant value for L2.

The unconstrained fields (fee_recipient, prev_randao, parent_beacon_block_root) correspond to the PayloadAttributes that on L1 are trusted to come from the consensus layer. The EL never validates them; it accepts whatever the CL provides. Since native rollups have no CL, these become free inputs for the operator. timestamp is also CL-provided on L1 but additionally constrained by the EL (> parent_header.timestamp).

StatelessInput

StatelessInput: input to verify_stateless_new_payload.

| Field | Expected source | Notes |
|---|---|---|
| new_payload_request | | see below |
| witness | calldata (re-execution) / offchain (ZK) | ExecutionWitness: MPT node preimages, contract bytecodes, ancestor headers |
| chain_config | storage (chain_id) | ChainConfig: L2 chain configuration |
| public_keys | calldata (re-execution) / offchain (ZK) | Pre-recovered ECDSA keys to avoid expensive recovery in the ZK circuit |

NewPayloadRequest

NewPayloadRequest (inside StatelessInput):

| Field | Constrained | Expected source | Notes |
|---|---|---|---|
| execution_payload | | | See below |
| versioned_hashes | yes | ZK: BLOBHASH / re-execution: empty | Ordered list of blob versioned hashes. On L1, the first payload_blob_count entries are payload blobs (EIP-8142, carrying block data) and the rest are from type-3 blob transactions. On L2, since blob transactions are not supported, the list contains only payload blob hashes. In ZK, read via BLOBHASH from the proof-carrying transaction’s blobs. Empty in re-execution (no blobs) |
| parent_beacon_block_root | no | computed onchain | Repurposed as the L1 anchor on L2. The existing EIP-4788 system transaction inside apply_body writes this value to the beacon roots predeploy, making it available to L2 contracts. The rollup contract chooses what to pass in this field (e.g. an L1 block hash, a message queue commitment, or any other value useful for L1->L2 communication). See L1 anchoring and L1->L2 messaging |
| execution_requests | fixed | constant | Empty for L2 |

ExecutionPayload

ExecutionPayload (inside NewPayloadRequest):

| Field | Constrained | Expected source | Notes |
|---|---|---|---|
| parent_hash | yes | storage | Hash of the previous L2 block. Chain continuity link |
| fee_recipient | no | various | Fee recipient (coinbase). Could come from calldata, or be a fixed address in storage (e.g. a DAO treasury) |
| state_root | yes | calldata | Post-state root |
| receipts_root | yes | calldata | Post-receipts root |
| logs_bloom | yes | calldata | Computed during execution |
| prev_randao | no | various | See L1 vs L2 diff: RANDAO |
| block_number | yes | storage | Must equal parent_header.number + 1 |
| gas_limit | yes | storage | Bounds check against parent (1/1024 rule). TBD: ZK gas handling |
| gas_used | yes | calldata | Computed during execution |
| timestamp | yes | calldata | Must be > parent_header.timestamp |
| extra_data | no | various | Max 32 bytes |
| base_fee_per_gas | yes | calldata | Must match EIP-1559 formula from parent header |
| block_hash | yes | calldata | Computed from header |
| transactions | yes | calldata or blobs | Re-execution: full transactions in calldata. ZK: transactions_root in calldata, full transactions in blobs (EIP-8142) |
| withdrawals | fixed | constant | Empty for L2 |
| blob_gas_used | fixed | constant | 0 for L2 |
| excess_blob_gas | fixed | constant | 0 for L2 |
| block_access_list | yes | TBD | TBD |
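
Two of the constrained checks above (base_fee_per_gas and gas_limit) are fully determined by the parent header. The following is a minimal Python sketch of the EIP-1559 base-fee formula and the 1/1024 gas-limit rule as specified for L1; the function names are illustrative, not the execution-specs identifiers:

```python
ELASTICITY_MULTIPLIER = 2
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
GAS_LIMIT_ADJUSTMENT_FACTOR = 1024
GAS_LIMIT_MINIMUM = 5000

def expected_base_fee_per_gas(parent_gas_limit: int,
                              parent_gas_used: int,
                              parent_base_fee: int) -> int:
    # EIP-1559: the child block's base fee is fully determined by the
    # parent header, so base_fee_per_gas is a constrained field.
    gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    if parent_gas_used == gas_target:
        return parent_base_fee
    if parent_gas_used > gas_target:
        delta = parent_gas_used - gas_target
        increase = max(parent_base_fee * delta // gas_target
                       // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
        return parent_base_fee + increase
    delta = gas_target - parent_gas_used
    decrease = (parent_base_fee * delta // gas_target
                // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return parent_base_fee - decrease

def check_gas_limit(gas_limit: int, parent_gas_limit: int) -> bool:
    # The 1/1024 rule: each block may move the gas limit by at most
    # parent_gas_limit // 1024 in either direction.
    max_delta = parent_gas_limit // GAS_LIMIT_ADJUSTMENT_FACTOR
    return (gas_limit < parent_gas_limit + max_delta
            and gas_limit > parent_gas_limit - max_delta
            and gas_limit >= GAS_LIMIT_MINIMUM)
```

A fully used block (gas_used = gas_limit) raises the base fee by 1/8; an empty block lowers it by 1/8.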

Re-execution specification

Overview

The re-execution variant uses an EXECUTE precompile that performs L2-specific preprocessing and then calls the standard stateless validation function. This is the simplest enforcement mechanism: the L1 EL directly re-executes the L2 state transition.

This variant is not intended for production. It is specified for:

  • Testing: validating the native rollup contract patterns without needing ZK infrastructure.
  • Understanding: providing a concrete, executable reference for the verification flow.
  • Progression: showing the stepping stone from re-execution to ZK (see From re-execution to ZK).

The ethrex project implements a version of this approach (PR #6186). Our spec follows the same pattern but wraps the standard verify_stateless_new_payload directly, rather than using a custom apply_body variant with individual ABI-encoded fields.

The EXECUTE precompile

The EXECUTE precompile is the precompile equivalent of run_stateless_guest, the ZK guest entry point that takes SSZ-serialized input, runs verify_stateless_new_payload, and returns SSZ-serialized output. The precompile adds L2-specific preprocessing (fixed-field checks) before calling the same function. The pseudocode uses types and functions from the execution-specs:

from ethereum_types.numeric import U64

from ethereum.forks.amsterdam.vm import Evm
from ethereum.forks.amsterdam.vm.gas import charge_gas
from ethereum.forks.amsterdam.vm.exceptions import ExceptionalHalt, InvalidParameter

from ethereum.forks.amsterdam.stateless import (
    verify_stateless_new_payload,
)
from ethereum.forks.amsterdam.stateless_guest import (
    deserialize_stateless_input,
    serialize_stateless_output,
)


def execute(evm: Evm) -> None:
    data = evm.message.data
    stateless_input = deserialize_stateless_input(data)

    charge_gas(evm, stateless_input.new_payload_request.execution_payload.gas_used)

    # L2-specific preprocessing: enforce fixed fields.
    payload = stateless_input.new_payload_request.execution_payload
    request = stateless_input.new_payload_request
    if payload.blob_gas_used != U64(0) or payload.excess_blob_gas != U64(0):
        raise InvalidParameter
    if len(payload.withdrawals) != 0:
        raise InvalidParameter
    if len(request.execution_requests) != 0:
        raise InvalidParameter

    # Standard stateless validation (identical to L1).
    result = verify_stateless_new_payload(stateless_input)

    if not result.successful_validation:
        raise ExceptionalHalt

    evm.output = serialize_stateless_output(result)

Input: The precompile reads its input from evm.message.data, which contains an SSZ-serialized StatelessInput, decoded via deserialize_stateless_input.

Output:

  • On success: evm.output contains the SSZ-serialized StatelessValidationResult, encoded via serialize_stateless_output. This includes new_payload_request_root, successful_validation, and chain_config (with chain_id).
  • On failure: raises InvalidParameter for bad fixed fields, or ExceptionalHalt if validation fails. Both consume all gas and cause the STATICCALL to return success = false with empty output.

L2-specific preprocessing

The preprocessing steps before verify_stateless_new_payload enforce that all fixed fields are zero/empty:

  • blob_gas_used = 0 and excess_blob_gas = 0: L2 does not support type-3 (blob-carrying) transactions. Rather than iterating through all transactions to detect them, the precompile asserts these two fields are zero. Since the STF verifies that the computed blob_gas_used matches the header claim, any blob transaction would cause a mismatch and be rejected inside verify_stateless_new_payload itself. This makes the check O(1) instead of O(n). See also EIP-8079 which defines a similar check.
  • withdrawals empty: L2 has no beacon chain, so withdrawals must be empty. Without this check, an operator could mint arbitrary ETH on L2 via fake withdrawals.
  • execution_requests empty: L2 has no validator operations (deposits, exits, consolidations).

L1 anchoring does not require preprocessing. The parent_beacon_block_root field of NewPayloadRequest is repurposed to carry an L1 anchor value chosen by the rollup contract. The existing EIP-4788 system transaction inside apply_body writes this value to the beacon roots predeploy, making it available to L2 contracts. This happens inside verify_stateless_new_payload, so the anchor write is covered by the L2 execution proof (or re-execution). No separate system transaction or predeploy is needed. The format of the anchor is not prescribed: rollups can pass an L1 block hash, a message queue commitment, or any other value useful for L1->L2 communication. See L1 anchoring for more details.
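
To make the anchoring mechanism concrete, here is a minimal Python model of the EIP-4788 ring buffer that the system transaction writes to. The class and method names are illustrative (the real predeploy is a small EVM contract), but the buffer layout, keyed by timestamp modulo 8191, follows EIP-4788:

```python
HISTORY_BUFFER_LENGTH = 8191  # ring buffer size from EIP-4788

class BeaconRootsPredeploy:
    """Minimal model of the beacon roots predeploy on L2.

    On a native rollup the stored 'root' is whatever L1 anchor the
    rollup contract passed as parent_beacon_block_root."""

    def __init__(self) -> None:
        self.storage: dict[int, object] = {}

    def set_anchor(self, timestamp: int, anchor: bytes) -> None:
        # Performed by the EIP-4788 system transaction inside apply_body.
        idx = timestamp % HISTORY_BUFFER_LENGTH
        self.storage[idx] = timestamp
        self.storage[idx + HISTORY_BUFFER_LENGTH] = anchor

    def get_anchor(self, timestamp: int) -> bytes:
        # What an L2 contract reads when it queries the predeploy: the
        # lookup fails unless the stored timestamp matches, so an anchor
        # overwritten by a later slot cannot be returned for an old query.
        idx = timestamp % HISTORY_BUFFER_LENGTH
        if self.storage.get(idx) != timestamp:
            raise KeyError("no anchor stored for this timestamp")
        return self.storage[idx + HISTORY_BUFFER_LENGTH]
```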

NativeRollup contract (re-execution)

The following contract is a proof of concept showing how the EXECUTE precompile can be used. It calls the precompile and updates its onchain state. Like the ZK variant, the contract constructs the precompile input from a mix of storage (chain state that the contract enforces) and calldata (operator-provided fields). The contract stores:

  • stateRoot, blockNumber, blockHash, gasLimit: current L2 chain head
  • chainId: L2 chain identifier (part of ChainConfig)
  • stateRootHistory: mapping of block numbers to state roots (for L2->L1 messaging via state proofs)

contract NativeRollup {

    struct BlockParams {
        // Constrained fields (validated by re-execution)
        bytes32 stateRoot;
        bytes32 receiptsRoot;
        bytes   logsBloom;
        uint256 gasUsed;
        uint256 timestamp;
        uint256 baseFeePerGas;
        bytes32 blockHash;
        // Unconstrained fields (free operator inputs)
        address feeRecipient;
        bytes32 prevRandao;
        bytes32 extraData;
    }

    // L2 chain state tracked onchain
    bytes32 public blockHash;
    bytes32 public stateRoot;
    uint256 public blockNumber;
    uint256 public gasLimit;
    uint256 public chainId;

    // L2 state root history (for L2->L1 messaging via state proofs)
    mapping(uint256 => bytes32) public stateRootHistory;

    // L1->L2 message queue. Messages are stored in this contract's
    // storage and become accessible on L2 via storage proofs against
    // the anchored L1 block hash.
    bytes32[] public pendingL1Messages;

    // EXECUTE precompile address (TBD)
    address constant EXECUTE = address(0xTBD);

    // Queue an L1->L2 message. The message hash is stored in this
    // contract's storage. On L2, a relayer provides a storage proof
    // against the anchored L1 block hash to prove the message exists.
    function sendMessage(address to, bytes calldata data) external payable {
        bytes32 messageHash = keccak256(
            abi.encodePacked(msg.sender, to, msg.value, keccak256(data), pendingL1Messages.length)
        );
        pendingL1Messages.push(messageHash);
    }

    function advance(
        BlockParams calldata params,
        bytes calldata transactions,
        bytes calldata witness,
        bytes calldata publicKeys
    ) external {
        // 1. Compute the L1 anchor.
        //    Passed as parent_beacon_block_root; the EIP-4788 system
        //    transaction inside apply_body writes it to the beacon
        //    roots predeploy, making it available to L2 contracts.
        //    The anchor format is up to the rollup. This example uses
        //    the L1 block hash, which lets L2 contracts use storage
        //    proofs to access any L1 state.
        //    See: l1_anchoring.md, l1_l2_messaging.md
        bytes32 l1Anchor = blockhash(block.number - 1);

        // 2. Call EXECUTE precompile with SSZ-serialized StatelessInput.
        bytes memory input = SSZ.encodeStatelessInput(
            NewPayloadRequest(
                ExecutionPayload(
                    blockHash,                  // parent_hash (from storage)
                    params.feeRecipient,
                    params.stateRoot,
                    params.receiptsRoot,
                    params.logsBloom,
                    params.prevRandao,
                    blockNumber + 1,            // block_number (from storage)
                    gasLimit,                   // gas_limit (from storage)
                    params.gasUsed,
                    params.timestamp,
                    params.extraData,
                    params.baseFeePerGas,
                    params.blockHash,
                    transactions,
                    new bytes[](0),             // withdrawals (empty for L2)
                    0,                          // blob_gas_used (zero for L2)
                    0                           // excess_blob_gas (zero for L2)
                ),
                new bytes32[](0),               // versioned_hashes (empty for re-execution)
                l1Anchor,                       // parent_beacon_block_root
                new bytes[](0)                  // execution_requests (empty for L2)
            ),
            witness,
            chainId,
            publicKeys
        );
        (bool success, bytes memory result) = EXECUTE.staticcall(input);
        require(success, "EXECUTE failed");

        // 3. Decode and verify result (SSZ-encoded StatelessValidationResult).
        (bytes32 newPayloadRequestRoot, bool validationSuccessful, uint64 provenChainId) =
            SSZ.decodeStatelessValidationResult(result);
        require(validationSuccessful, "L2 validation failed");
        require(provenChainId == chainId, "chain_id mismatch");

        // 4. Update onchain state
        blockHash = params.blockHash;
        stateRoot = params.stateRoot;
        blockNumber = blockNumber + 1;
        stateRootHistory[blockNumber] = params.stateRoot;
    }
}

ZK specification

Overview

The ZK variant replaces the EXECUTE precompile with proof-carrying transactions and a PROOFROOT opcode. Instead of re-executing the L2 state transition on L1, the rollup operator generates a ZK proof and the consensus layer validates it. The rollup contract computes the expected commitment onchain and checks it against the proof.

This follows the same EL/CL split pattern as EIP-4844 blob transactions: the EL references a commitment (the validation_result_root, a hash of the proof’s full public output), and the CL validates the corresponding proof.

The specification follows the stateless execution model from the execution-specs, the Block-in-Blobs (EIP-8142) pattern for transaction data availability, and the EIP-8025 proof validation infrastructure from the consensus-specs.

The flow:

  1. The rollup operator builds an L2 block and generates an execution proof by proving verify_stateless_new_payload, the same program that L1 provers prove for L1 blocks.
  2. The operator submits a proof-carrying transaction to L1. The transaction body includes the validation_result_root (accessible to the EVM via PROOFROOT) and blob_versioned_hashes. The sidecar carries the ZK proof and blobs (EIP-8142-encoded L2 block data).
  3. The rollup contract reconstructs the expected validation_result_root and checks it against PROOFROOT (see Root computation).
  4. The consensus layer validates the proof from the sidecar (see Transaction processing). If the proof is invalid, the L1 block is rejected.
  5. The rollup contract updates its onchain state (block hash, state root, etc.).

No precompile is needed. The rollup contract is the “programmable consensus layer” that decides which fields come from storage, which from the operator, and which are fixed.

There are two possible strategies for how L2 proofs are validated relative to the L1 block proof: separate proofs (nodes validate L2 proofs independently, 1 + N proofs per block) and recursive proofs (the L1 prover recursively verifies L2 proofs, 1 proof per block). This document describes the separate proofs approach. See Proof strategies for details on both.

From re-execution to ZK

The ZK variant builds directly on the re-execution spec. The core function being verified is the same: verify_stateless_new_payload. What changes is how that function is verified and where the data lives.

| Aspect | Re-execution | ZK |
|---|---|---|
| Enforcement | EXECUTE precompile re-runs L2 STF | CL validates ZK proof |
| L2 data | Calldata (txs + witness + public keys) | Blobs for txs (EIP-8142), witness + public keys offchain |
| EVM access | Precompile return value | BLOBHASH + PROOFROOT opcodes |
| Contract role | Calls precompile, checks return | Computes validation_result_root, checks PROOFROOT |
| CL involvement | None (pure EL) | Validates proof (EIP-8025) |
| L1 proves | Full L2 execution (re-execution) | Only L2-specific preprocessing |
| Requires | Statelessness | Statelessness + L1 ZK-EVM |

What stays the same:

  • NativeRollup contract pattern (state management, messaging)
  • L1 anchoring mechanism
  • L1->L2 and L2->L1 messaging patterns
  • The function being proven/executed: verify_stateless_new_payload
  • The StatelessValidationResult as the proof’s public output (which contains new_payload_request_root)

What changes:

The main change is where L2 block data lives and who processes it:

  • Transactions and block access list: in re-execution, the full transaction list is passed as calldata to the EXECUTE precompile, which re-executes them. In ZK, the full data moves to blobs following EIP-8142: the block access list and RLP-encoded transactions are packed into blobs via execution_payload_data_to_blobs. The contract only receives transactions_root in calldata, a constrained field validated by the L2 proof. The blobs ensure data availability so that L2 nodes and provers can reconstruct the block.
  • Witness and public keys: in re-execution, the ExecutionWitness (trie node preimages, contract codes, ancestor headers) and pre-recovered public keys are passed as calldata because the EL needs them to re-execute. In ZK, neither is needed onchain: the prover uses them offchain to generate the proof, and they are not posted to L1.
  • Block parameters: remain in calldata in both variants. The contract needs them to either call the precompile (re-execution) or compute the validation_result_root onchain (ZK).
  • Verification: the EXECUTE precompile is replaced by proof-carrying transactions + PROOFROOT. The CL validates the proof; the L1 EL no longer re-executes the L2 STF. The L1 block proof only covers the contract’s Root computation and PROOFROOT check, not the full L2 execution.
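
The data movement in the first bullet can be sketched in Python. This is a hypothetical packer in the spirit of EIP-8142’s execution_payload_data_to_blobs, not its actual encoding: it length-prefixes the data and packs it into 31-byte slices so that every 32-byte field element stays below the BLS12-381 modulus:

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES = 31  # keep each field element below the BLS12-381 modulus

def data_to_blobs(data: bytes) -> list[bytes]:
    # Hypothetical packing: 8-byte length prefix, then 31-byte chunks,
    # each left-padded with a zero byte to a full field element.
    payload = len(data).to_bytes(8, "big") + data
    per_blob = USABLE_BYTES * FIELD_ELEMENTS_PER_BLOB
    blobs = []
    for off in range(0, len(payload), per_blob):
        chunk = payload[off:off + per_blob]
        blob = b""
        for i in range(0, len(chunk), USABLE_BYTES):
            blob += b"\x00" + chunk[i:i + USABLE_BYTES].ljust(USABLE_BYTES, b"\x00")
        # Pad the final blob to the full 128 KiB blob size.
        blob = blob.ljust(FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT, b"\x00")
        blobs.append(blob)
    return blobs
```

The operator runs this packing offchain; onchain, the contract only sees the resulting blob versioned hashes and the transactions_root constrained by the proof.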

Proof-carrying transactions

Proof-carrying transactions are a new EIP-2718 transaction type that extends EIP-4844 blob transactions with a proof. Like blob transactions, they declare blob_versioned_hashes in their body and carry blobs in their sidecar. Additionally, they declare a validation_result_root and carry a ZK proof in the sidecar.

The blobs contain the L2 block data (EIP-8142 encoded). The proof attests to the correctness of that block’s execution. Bundling both in one transaction ensures that data availability (via DAS) and execution correctness (via the proof) are guaranteed atomically: if the blobs are unavailable, the L1 block is invalid; if the proof is invalid, the L1 block is also invalid.

Each proof-carrying transaction proves exactly one L2 block.

TransactionType: PROOF_TX_TYPE

TransactionPayloadBody:
[chain_id, nonce, max_priority_fee_per_gas, max_fee_per_gas, gas_limit,
 to, value, data, access_list, max_fee_per_blob_gas,
 blob_versioned_hashes, validation_result_root,
 y_parity, r, s]

During transaction gossip responses (PooledTransactions), the transaction payload is wrapped to include blobs, KZG commitments, KZG proofs, and the execution proof:

rlp([tx_payload_body, blobs, kzg_commitments, kzg_proofs, execution_proof])

The validation_result_root is a hash of the proof’s public output (see Proof validation). The blob_versioned_hashes commit to the EIP-8142-encoded L2 block data. The execution_proof is validated by the CL (see Transaction processing).

The pattern extends EIP-4844:

| | Blob transactions (EIP-4844) | Proof-carrying transactions |
|---|---|---|
| Blobs | Transaction data | L2 block data (EIP-8142) |
| Onchain reference | blob_versioned_hashes | blob_versioned_hashes + validation_result_root |
| EVM access | BLOBHASH opcode | BLOBHASH + PROOFROOT opcodes |
| CL validation | Blob availability (KZG, DAS) | Blob availability + proof validity (EIP-8025) |
| EL validation | is_valid_versioned_hashes (protocol) | validation_result_root != 0 (protocol) + contract checks PROOFROOT |

See also Proof-carrying transactions for more context on the proof format and the two-proof model.

The PROOFROOT opcode

A new opcode PROOFROOT provides EVM access to the validation_result_root declared in the current proof-carrying transaction. It is simpler than BLOBHASH since there is exactly one root per transaction (no index input).

| Attribute | Value |
|---|---|
| Opcode | 0x4b (TBD) |
| Stack input | none |
| Stack output | validation_result_root (bytes32) |
| Gas cost | G_base (2) |

The gas cost follows the same pricing as other zero-input environment opcodes like ORIGIN, GASPRICE, and BLOBBASEFEE, which all cost G_base (2 gas).

def proofroot(evm: Evm) -> None:
    charge_gas(evm, GAS_BASE)

    root = evm.message.tx_env.validation_result_root
    push(evm.stack, root)

    evm.pc += Uint(1)

If the transaction is not a proof-carrying transaction, PROOFROOT returns bytes32(0), analogous to how BLOBHASH returns zero for non-blob transactions.

Transaction processing

Proof-carrying transactions are processed like blob transactions with one additional field (validation_result_root) and one additional CL validation step (proof verification). The EL treats the proof as opaque; the CL handles all proof validation via the existing EIP-8025 ProofEngine.

EL: transaction decoding and validation. A new transaction type (e.g. PROOF_TX_TYPE = 0x05) is added to transactions.py. The ProofCarryingTransaction class extends BlobTransaction with validation_result_root: Hash32. The signing hash includes validation_result_root, so the sender commits to which L2 block is being proven.

check_transaction applies the same validation as blob transactions (blob count, version byte, max_fee_per_blob_gas >= blob_gas_price, balance coverage including blob gas) plus:

  • validation_result_root != bytes32(0)

The blobs use the existing blob gas market. How to price the proof verification cost is an open question (see Open questions).
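
A sketch of the additional EL check layered on the EIP-4844 validation; the dict-based tx object and the field names as dict keys are illustrative stand-ins for the spec types:

```python
VERSIONED_HASH_VERSION_KZG = b"\x01"
ZERO_ROOT = b"\x00" * 32

def check_proof_carrying_transaction(tx: dict, blob_gas_price: int) -> None:
    # Checks shared with EIP-4844 blob transactions:
    if len(tx["blob_versioned_hashes"]) == 0:
        raise ValueError("no blobs")
    for vh in tx["blob_versioned_hashes"]:
        if vh[:1] != VERSIONED_HASH_VERSION_KZG:
            raise ValueError("bad versioned hash version byte")
    if tx["max_fee_per_blob_gas"] < blob_gas_price:
        raise ValueError("insufficient blob fee")
    # The one proof-carrying addition:
    if tx["validation_result_root"] == ZERO_ROOT:
        raise ValueError("validation_result_root must be non-zero")
```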

EL: transaction environment. TransactionEnvironment is extended with validation_result_root: Hash32 (set from the transaction in process_transaction, or bytes32(0) for non-proof-carrying txs). This is what the PROOFROOT opcode reads, mirroring how blob_versioned_hashes in TransactionEnvironment is what BLOBHASH reads.

EL: engine API. is_valid_versioned_hashes validates that versioned hashes from the CL match blob transaction hashes in the payload. It would need to be updated to also extract blob_versioned_hashes from ProofCarryingTransaction instances (currently it only handles BlobTransaction). Beyond this, no additional engine API changes are needed: the EL does not see or validate the proof, just as it does not see blob contents.

CL: proof validation. The consensus layer extracts the execution_proof from the proof-carrying transaction’s sidecar and validates it using proof_engine.verify_execution_proof. Unlike L1 block proofs, which are delivered as SignedExecutionProof messages signed by active validators and processed via process_execution_proof, L2 proofs are delivered via the transaction sidecar and do not require a validator signature — the CL calls verify_execution_proof directly. The proof’s public output must match the validation_result_root declared in the transaction body. If the proof is invalid, the L1 block is rejected, analogous to how an invalid KZG proof in a blob sidecar invalidates the block.

CL: data availability. Blob availability is handled by the existing DAS mechanism (EIP-4844 / EIP-7594 PeerDAS). The CL treats proof-carrying transaction blobs identically to any other blobs: they are included in blob_kzg_commitments, sampled via DAS, and their versioned hashes are passed to the EL for cross-verification.

P2P: gossip. During transaction gossip (PooledTransactions), the network representation wraps the transaction body with blobs, KZG data, and the execution proof (as shown in the sidecar format above). Receiving nodes validate:

  • Blob/commitment/proof counts match blob_versioned_hashes (same as blob txs)
  • kzg_to_versioned_hash(commitments[i]) == blob_versioned_hashes[i] (same as blob txs)
  • KZG proofs are valid for the blobs (same as blob txs, via verify_blob_kzg_proof_batch)
  • The execution proof is valid for the declared validation_result_root (via proof_engine.verify_execution_proof). Note: this requires the EL gossip layer to have access to the proof engine, which is currently a CL component. How this cross-layer validation works at gossip time is TBD

Summary of changes by layer:

| Layer | Blob transactions (EIP-4844) | Proof-carrying transactions (additions) |
|---|---|---|
| EL: tx type | BlobTransaction (0x03) | ProofCarryingTransaction (0x05), adds validation_result_root |
| EL: validation | Blob count, version byte, fee checks | Same + validation_result_root != 0 |
| EL: tx env | blob_versioned_hashes | Same + validation_result_root |
| EL: engine API | is_valid_versioned_hashes | Updated to also handle ProofCarryingTransaction |
| CL: validation | is_data_available (DAS) | Same + proof_engine.verify_execution_proof |
| P2P: sidecar | [tx, blobs, commitments, kzg_proofs] | Same + execution_proof |

Proof validation

An L2 execution proof uses the same ExecutionProof structure as an L1 block proof. It proves that verify_stateless_new_payload succeeded for the L2 block. The proof’s public output is the full StatelessValidationResult.

The validation_result_root declared in the proof-carrying transaction is a hash of this full StatelessValidationResult. Unlike L1 proofs, which are gossiped as SignedExecutionProof messages signed by active validators, L2 proofs are delivered via the proof-carrying transaction sidecar and do not require a validator signature. The CL validates them using the same proof_engine.verify_execution_proof function (see Transaction processing).

Together, the EL check (contract reconstructs expected root and matches PROOFROOT, see Root computation) and the CL check (valid proof for that root) guarantee that the L2 state transition was executed correctly.

chain_id and proof binding. The chain_id is part of StatelessInput.chain_config but not part of NewPayloadRequest or the block header. If the public output were only the new_payload_request_root, the prover could freely choose chain_id as a private input, enabling cross-chain transaction replay: for typed transactions (EIP-2930 and later), recover_sender uses the transaction’s own tx.chain_id for signature recovery, not block_env.chain_id, so transactions from any chain would execute successfully. By including chain_config in StatelessValidationResult (PR #2342), the proof attests to which chain_id was used, and the contract can verify it matches its stored value by reconstructing the full StatelessValidationResult before hashing.
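Since the root covers the full result, two otherwise-identical proofs for different chain_ids hash to different roots. A minimal sketch, using a stand-in hash in place of SSZ hash_tree_root and a bare chain_id in place of the full chain_config (both assumptions; the field layout is illustrative):

```python
from dataclasses import dataclass
import hashlib

@dataclass
class StatelessValidationResult:
    # Stand-in for the spec's SSZ container; chain_id stands in for
    # the full chain_config.
    new_payload_request_root: bytes
    successful_validation: bool
    chain_id: int

def validation_result_root(result: StatelessValidationResult) -> bytes:
    # Stand-in for SSZ hash_tree_root over SszStatelessValidationResult.
    preimage = (
        result.new_payload_request_root
        + result.successful_validation.to_bytes(1, "big")
        + result.chain_id.to_bytes(32, "big")
    )
    return hashlib.sha256(preimage).digest()
```

A proof whose public output commits to a different chain_id cannot match the root the contract reconstructs from its stored chainId.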

Root computation

The rollup contract must reconstruct the expected validation_result_root and check it against PROOFROOT. This requires two steps:

  1. Compute new_payload_request_root from block header fields via compute_new_payload_request_root. The hashing scheme is SSZ hash_tree_root over the SszNewPayloadRequest container (see stateless_ssz.py). The contract only has header-level data (roots and scalar fields, not full transaction lists), so the scheme must allow reconstruction from NewPayloadRequestHeader fields. This is the same requirement that the CL has in process_execution_payload, where the proof engine verifies a header against stored proofs.

  2. Hash the full StatelessValidationResult: compute hash_tree_root of the SszStatelessValidationResult containing new_payload_request_root (from step 1), successful_validation = true, and chain_config (with chain_id from storage). The result is the validation_result_root.

The contract has access to every field needed for step 1:

| Leaf | Expected source |
|---|---|
| Scalar header fields (parent_hash, block_number, etc.) | Contract storage and operator calldata |
| transactions_root | Operator calldata (constrained: proven by the L2 proof) |
| withdrawals_root | Known constant (empty for L2) |
| versioned_hashes | BLOBHASH from the proof-carrying transaction |
| parent_beacon_block_root | Computed onchain (L1 anchor) |
| execution_requests | Known constant (empty for L2) |

transactions_root is a constrained field: if the operator provides a wrong value, the L2 proof fails and the CL rejects the L1 block. This is the same trust model as state_root and receipts_root, which are also claimed by the operator and validated by the proof.

NativeRollup contract (ZK)

The ZK contract implements the Root computation steps and checks the result against PROOFROOT. The operator submits a proof-carrying transaction where the blobs contain L2 block data, the calldata contains block parameters, and the sidecar contains the execution proof.

contract NativeRollup {

    struct BlockParams {
        // Constrained fields (validated by the L2 proof)
        bytes32 stateRoot;
        bytes32 receiptsRoot;
        bytes   logsBloom;
        uint256 gasUsed;
        uint256 timestamp;
        uint256 baseFeePerGas;
        bytes32 blockHash;
        bytes32 transactionsRoot;
        uint256 payloadBlobCount;
        // Unconstrained fields (free operator inputs)
        address feeRecipient;
        bytes32 prevRandao;
        bytes32 extraData;
    }

    // L2 chain state tracked onchain
    bytes32 public blockHash;
    bytes32 public stateRoot;
    uint256 public blockNumber;
    uint256 public gasLimit;
    uint256 public chainId;

    // L2 state root history (for L2->L1 messaging via state proofs)
    mapping(uint256 => bytes32) public stateRootHistory;

    // L1->L2 message queue (same as re-execution variant)
    bytes32[] public pendingL1Messages;

    function sendMessage(address to, bytes calldata data) external payable {
        bytes32 messageHash = keccak256(
            abi.encodePacked(msg.sender, to, msg.value, keccak256(data), pendingL1Messages.length)
        );
        pendingL1Messages.push(messageHash);
    }

    function advance(BlockParams calldata params) external {
        // 1. Compute the L1 anchor.
        //    Same pattern as the re-execution variant (see above).
        bytes32 l1Anchor = blockhash(block.number - 1);

        // 2. Compute new_payload_request_root from
        //    storage + calldata + versioned hashes + computed anchor.
        //    Uses header-level fields: transactions_root instead of
        //    full transactions list.
        //    Hashing scheme is SSZ hash_tree_root. TBD: onchain library.
        bytes32 npRoot = computeNewPayloadRequestRoot({
            // ExecutionPayloadHeader fields
            parentHash:          blockHash,              // from storage
            feeRecipient:        params.feeRecipient,
            stateRoot:           params.stateRoot,
            receiptsRoot:        params.receiptsRoot,
            logsBloom:           params.logsBloom,
            prevRandao:          params.prevRandao,
            blockNumber:         blockNumber + 1,        // from storage
            gasLimit:            gasLimit,               // from storage
            gasUsed:             params.gasUsed,
            timestamp:           params.timestamp,
            extraData:           params.extraData,
            baseFeePerGas:       params.baseFeePerGas,
            blockHash:           params.blockHash,
            transactionsRoot:    params.transactionsRoot, // constrained (proven)
            withdrawalsRoot:     bytes32(0),              // empty for L2
            blobGasUsed:         0,                       // fixed for L2
            excessBlobGas:       0,                       // fixed for L2
            payloadBlobCount:    params.payloadBlobCount,
            // NewPayloadRequest fields
            versionedHashes:     getVersionedHashes(params.payloadBlobCount),
            parentBeaconBlockRoot: l1Anchor,              // computed onchain
            executionRequests:   bytes32(0)               // empty for L2
        });

        // 3. Hash the full StatelessValidationResult:
        //    (new_payload_request_root, successful_validation, chain_config)
        //    This implicitly verifies chain_id (from storage) and
        //    successful_validation (always true for a valid proof).
        bytes32 validationResultRoot = hash(
            npRoot,
            true,               // successful_validation
            chainId             // from storage (chain_config)
        );

        // 4. Verify the computed root matches the proof-carrying tx's
        //    declared root. The CL validates the proof for this root.
        require(validationResultRoot == PROOFROOT, "root mismatch");

        // 5. Update onchain state.
        blockHash = params.blockHash;
        stateRoot = params.stateRoot;
        blockNumber = blockNumber + 1;
        stateRootHistory[blockNumber] = params.stateRoot;
    }

    function getVersionedHashes(uint256 count) internal view returns (bytes32[] memory) {
        // Read blob versioned hashes from the proof-carrying tx via
        // BLOBHASH. Since L2 has no type-3 blob transactions, all blobs
        // in the tx are payload blobs (EIP-8142 encoded L2 block data).
        // DAS guarantees these blobs are available.
        bytes32[] memory hashes = new bytes32[](count);
        for (uint256 i = 0; i < count; i++) {
            hashes[i] = blobhash(i);
        }
        return hashes;
    }

}

See also: L1 anchoring, L1->L2 messaging, L2->L1 messaging

Blob encoding

L2 block data (transactions + block access list) is encoded into blobs following EIP-8142. The operator calls execution_payload_data_to_blobs to produce an ordered list of blobs, which are included in the proof-carrying transaction’s sidecar.
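As a sketch of the chunking step, assuming simple zero-padding (the real EIP-8142 encoding also handles field-element validity and length framing, omitted here):

```python
BLOB_SIZE = 4096 * 32  # FIELD_ELEMENTS_PER_BLOB * 32 bytes per field element

def data_to_blobs(payload: bytes) -> list:
    # Zero-pad the payload and split it into fixed-size blobs.
    # Stand-in for execution_payload_data_to_blobs (EIP-8142).
    padded = payload + b"\x00" * (-len(payload) % BLOB_SIZE)
    return [padded[i:i + BLOB_SIZE] for i in range(0, len(padded), BLOB_SIZE)]
```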

Data availability guarantee. The proof-carrying transaction carries both the blobs and the ZK proof. The blob_versioned_hashes in the transaction body commit to the blob data via KZG. The validation_result_root commits (through the new_payload_request_root) to a NewPayloadRequest that includes those same versioned_hashes. The contract reads the blob hashes via BLOBHASH and includes them in the root computation, binding the proof to the blob data. DAS ensures the blobs are available. An operator cannot withhold L2 data: if the blobs are missing, the L1 block is invalid (same as any blob transaction); if the versioned hashes don’t match, the root check fails.

Proof strategies

Each L1 block that contains native rollup state transitions involves two categories of proofs: one or more L2 execution proofs (one per proof-carrying transaction) and an L1 block proof. There are two strategies for how these proofs are validated.

Separate proofs

In this approach, L2 proofs and the L1 block proof are validated independently by the CL:

  1. N L2 execution proofs (generated by rollup operators): each proves verify_stateless_new_payload for one L2 block. Carried in proof-carrying transaction sidecars and validated by the CL independently.

  2. 1 L1 block proof (generated by the L1 prover): proves the L1 block execution, including the rollup contract’s root computation and PROOFROOT check. Does not re-execute any L2 state transitions, as those are already proven by the operators.

A node validates 1 + N proofs per L1 block: the L1 block proof plus one L2 proof per proof-carrying transaction. This is the approach described in this document.
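The per-block validation loop under this strategy can be sketched as follows, assuming a proof_engine object exposing verify_execution_proof as in the spec (the proof objects here are opaque placeholders):

```python
def validate_block_proofs(l1_block_proof, l2_proofs, proof_engine) -> bool:
    # Separate-proofs strategy: the CL validates 1 L1 block proof plus
    # one L2 execution proof per proof-carrying transaction (1 + N).
    if not proof_engine.verify_execution_proof(l1_block_proof):
        return False
    return all(proof_engine.verify_execution_proof(p) for p in l2_proofs)
```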

Recursive proofs (TBD)

In this approach, the L1 prover recursively verifies the L2 proofs as part of proving the L1 block:

  1. The rollup operator generates the L2 execution proof and makes it available to the L1 prover.
  2. The L1 prover, when proving the L1 block, also proves the verification of each L2 proof it encounters. The resulting L1 block proof recursively attests to the correctness of all L2 state transitions within that block.
  3. A node validates only 1 proof per L1 block, regardless of how many L2 blocks were proven.

This reduces per-block verification cost from 1 + N to 1, at the expense of increased complexity for the L1 prover (which must support recursive proof verification). The exact mechanism, including how L2 proofs are delivered to the L1 prover and how the recursive verification circuit is structured, is TBD. See also Recursive L1+L2 proofs.

Open questions

  1. Blob transaction restriction: L2 does not support type-3 (blob-carrying) transactions. EIP-8079 explicitly checks for this in the precompile. In the current design, this could be enforced via the L2 ChainConfig (the proof circuit rejects blob txs), or via additional onchain checks. The exact mechanism is TBD.

  2. block_access_list handling: With EIP-8142, block access lists may also be encoded in blobs. If so, the operator would provide the block_access_list root as calldata (same pattern as transactions_root).

  3. Proof-carrying transaction pricing: Proof verification imposes a cost on the CL (and on L1 provers in the ZK L1 model). Whether this requires a separate proof gas market (analogous to the blob gas market), a flat fee, or is folded into the existing gas model is TBD. Related: how the overall gas model works depends on the L1 ZK-EVM design. See tech dependencies.

  4. Root computation library: The rollup contract needs to compute new_payload_request_root onchain via SSZ hash_tree_root (over SszNewPayloadRequest) and then hash_tree_root the full SszStatelessValidationResult. The availability and gas cost of an SSZ hash_tree_root library in Solidity is a practical consideration.

  5. Re-execution data encoding: The EXECUTE precompile takes an SSZ-serialized StatelessInput as calldata. The encoding must be efficient given the potentially large witness size.

  6. Re-execution gas cost: The gas cost of the EXECUTE precompile depends on the L2 block complexity. The gas metering model is TBD.

  7. Sequence-first-prove-later: The current spec requires blobs and proof to be in the same transaction, so the operator must have the proof ready at data posting time. Supporting sequence-first-prove-later (post data first, prove later) would require a mechanism to reference past blobs from the proof-carrying transaction. BLOBHASH only accesses blobs in the current transaction. Possible approaches include a new opcode or precompile that can attest to blob availability from past blocks (within the DAS availability window), or a contract-level registry of blob commitments.

  8. Recursive proof delivery: In the recursive proofs strategy, how L2 proofs are delivered to the L1 prover, and the structure of the recursive verification circuit, are TBD.

Tech dependencies

Table of Contents

Statelessness (EIP-7864)

L1 validators shouldn’t store the state of all rollups, so the EXECUTE precompile requires its verification to be stateless. The statelessness upgrade, with all its associated EIPs, is therefore required.

Some adjacent EIPs that are relevant in this context are:

  • EIP-2935: Serve historical block hashes from state (live with Pectra).
  • EIP-7709: Read BLOCKHASH from storage and update cost (SFI in Fusaka).

L1 ZK-EVM

The ZK version of the EXECUTE precompile requires the L1 ZK-EVM upgrade to take place first, and that upgrade will influence how exactly the precompile is implemented:

  • Offchain vs onchain proofs: influences whether the precompile needs to take a ZK proof (or multiple proofs) as input.
  • Gas limit handling: influences whether the precompile needs to take a gas limit as an input or not. Some L1 ZK-EVM proposals suggest the complete removal of the gas limit, as long as the block proposer itself is also required to provide the ZK proof (see Prover Killers Killer: You Build it, You Prove it).

Relevant EIPs:

FOCIL (EIP-7805)

While not strictly required, the addition of FOCIL would help simplify the design of forced transaction mechanisms, as described in the FOCIL section of the Forced transactions page.

RISC-V (or equivalent)

⚠️ This is only to be considered for future versions of native rollups. General compatibility should still be kept in mind.

Non-EVM-based native rollups can be supported if L1 migrates its low-level architecture to RISC-V or an equivalent ISA. At that point, L1 execution can provide two services to native rollups:

  • A RISC-V or equivalent ISA that can be accessed directly. L1 will provide multi-proofs, audits, formal verification, and a socially “bug-free” implementation.
  • An EVM host program that sits on top, which can again be considered socially bug-free and is automatically upgraded through the L1 governance process. Under the hood, the RISC-V or equivalent infrastructure is used.

One idea is to then split EXECUTE into two versions: an EVMEXECUTE and a RISCVEXECUTE precompile, where non-EVM rollups would choose to call the second one with a custom host program. Note that this is highly speculative and heavily depends on the specific implementation of the RISC-V proposal. Open questions remain around how to guarantee availability of host programs to be able to detect bugs in the ZK verification process.

L1 Anchoring

Table of Contents

Overview

To allow messaging from L1 to L2, a rollup needs to be able to obtain some information from the L1 chain, the most general piece of information being an L1 block hash. The process of placing such a “cross-chain validity reference” is typically called “anchoring”. In practice, projects relay various types of information from L1 depending on their specific needs.

Current approaches

We first discuss how some existing rollups handle the L1 anchoring problem to better inform the design of the EXECUTE precompile.

OP stack

[spec] A special L1Block contract is predeployed on L2 which processes “L1 attributes deposited transactions” during derivation. The contract stores L1 information such as the latest L1 block number, hash, timestamp, and base fee. A deposited transaction is a custom transaction type that is derived from the L1, does not include a signature and does not consume L2 gas.

It’s important to note that reception of L1 to L2 messages on the L2 side does not depend on this contract, but rather on “user-deposited transactions” that are derived from events emitted on L1, which again are implemented through the custom transaction type.

Linea

Linea, in the L2MessageService contract on L2, adds a function that allows a permissioned relayer to send information from L1 to L2:

function anchorL1L2MessageHashes(
    bytes32[] calldata _messageHashes,
    uint256 _startingMessageNumber,
    uint256 _finalMessageNumber,
    bytes32 _finalRollingHash
) external whenTypeNotPaused(PauseType.GENERAL) onlyRole(L1_L2_MESSAGE_SETTER_ROLE)

The permissioned relayer is supposed to only relay rolling hashes that are associated with L1 blocks that are finalized. On L1, a wrapper around the STF checks that the “rolling hash” being relayed is correct, otherwise proof verification fails. Since anchoring is done through regular transactions, the function is permissioned, otherwise any user could send a transaction with an invalid rolling hash, which would be accepted by the L2 but rejected during settlement. In other words, blocks containing invalid anchor transactions are not considered no-ops.

Taiko

[docs] An anchorV3 function is implemented in the TaikoAnchor contract which allows a GOLDEN_TOUCH_ADDRESS to relay an L1 state root to L2. The private key of the GOLDEN_TOUCH_ADDRESS is publicly known, but the node guarantees that the first transaction is always an anchor transaction, and that other transactions present in the block revert.

function anchorV3(
    uint64 _anchorBlockId,
    bytes32 _anchorStateRoot,
    uint32 _parentGasUsed,
    LibSharedData.BaseFeeConfig calldata _baseFeeConfig,
    bytes32[] calldata _signalSlots
)
    external
    nonZeroBytes32(_anchorStateRoot)
    nonZeroValue(_anchorBlockId)
    nonZeroValue(_baseFeeConfig.gasIssuancePerSecond)
    nonZeroValue(_baseFeeConfig.adjustmentQuotient)
    onlyGoldenTouch
    nonReentrant

Since proposing blocks in Taiko is untrusted, some additional checks on the validity of anchor blocks are performed on L1. In particular, it is checked that the anchor block number is not more than 96 blocks in the past, that it is less than the current block number, and that it is greater than the latest anchor block.
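These checks can be sketched as follows (function and parameter names are illustrative, not Taiko's):

```python
def is_valid_anchor_block(anchor_block_id: int, current_block: int,
                          latest_anchor_id: int, max_lag: int = 96) -> bool:
    # The anchor must be recent, strictly in the past,
    # and monotonically increasing.
    return (current_block - anchor_block_id <= max_lag
            and anchor_block_id < current_block
            and anchor_block_id > latest_anchor_id)
```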

The validity of the _anchorStateRoot value is explicitly checked by Taiko’s proof system. L2 blocks containing an invalid anchor block are skipped.

Orbit stack

Orbit stack chains relay information from L1 to L2 per message, similarly to the OP stack. New transaction types without signatures are introduced which are derived and authenticated by L1. In particular, the following types are added:

ArbitrumDepositTxType         = 0x64
ArbitrumUnsignedTxType        = 0x65
ArbitrumContractTxType        = 0x66
ArbitrumRetryTxType           = 0x68
ArbitrumSubmitRetryableTxType = 0x69
ArbitrumInternalTxType        = 0x6A
ArbitrumLegacyTxType          = 0x78

ArbOS handles the translation from message types to transaction types. For example, an L1MessageType_L2FundedByL1 message generates two transactions: one of type ArbitrumDepositTxType for funding and one of type ArbitrumUnsignedTxType for the actual message.

As opposed to other chains, retryable messages are implemented as a new transaction type instead of being implemented within smart contract logic.

Proposed design

An L1_ANCHOR system contract is predeployed on L2 that receives an arbitrary bytes32 value from L1 to be saved in its storage. The contract is intended to be used for L1->L2 messaging without being tied to any specific format, as long it is encoded as a bytes32 value. Validation of this value is left to the rollup contract on L1. The exact implementation of the contract is TBD, but EIP-2935 can be used as a reference. A messaging system can be implemented on top of this by passing roots and providing proofs of inclusions on the L2. Such mechanisms are better discussed in L1 to L2 messaging.
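A sketch of the predeploy's semantics, modeled on EIP-2935's ring buffer (the window size, interface, and even the use of a ring buffer are assumptions; the spec leaves the implementation TBD):

```python
class L1Anchor:
    # Sketch of the proposed L1_ANCHOR predeploy on L2: stores one
    # bytes32 anchor value per L2 block, retrievable by block number.
    WINDOW = 8191  # EIP-2935's HISTORY_SERVE_WINDOW, used as a reference

    def __init__(self):
        self.slots = {}

    def set(self, block_number: int, anchor: bytes) -> None:
        # Called by a system transaction at the start of each L2 block.
        assert len(anchor) == 32
        self.slots[block_number % self.WINDOW] = (block_number, anchor)

    def get(self, block_number: int) -> bytes:
        # Returns b"" for unset or overwritten (out-of-window) entries.
        number, anchor = self.slots.get(block_number % self.WINDOW, (None, b""))
        return anchor if number == block_number else b""
```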

Other approaches

One approach consists in re-using the parent_beacon_block_root field to pass an arbitrary bytes32 value, which is saved in the BEACON_ROOTS_ADDRESS predeploy on L2 as defined in EIP-4788. This would avoid both an additional system transaction in the EXECUTE precompile and an additional predeploy, at the cost of changing the semantics of parent_beacon_block_root when data that is not a beacon block root is passed. Some projects might want to pass both the beacon block root and a custom L1 anchor value.

Another proposed design suggests passing arbitrary bytes as in-memory context instead of a bytes32 that gets saved in storage. This requires an additional precompile on L2 to be able to read such context, which would not be usable on L1.

L1 to L2 messaging

Table of Contents

Current approaches

L1 to L2 messaging systems are built on top of the L1 anchoring mechanism. We first discuss how some existing rollups handle L1 to L2 messaging to better understand how similar mechanisms can be implemented on top of the L1 anchoring mechanism proposed here for native rollups.

OP stack

There are two ways to send messages from L1 to L2, either by using the low-level API of deposited transactions, or by using the high-level API of the “Cross Domain Messenger” contracts, which are built on top of the low-level API.

Deposited transactions are derived from TransactionDeposited events emitted in the OptimismPortal contract on L1. They are a new transaction type with prefix 0x7E added to the OP stack STF: they are fully derived on L1, cannot be sent to L2 directly, and do not contain signatures, as authentication is already performed on L1. On L2, the deposited transaction sets tx.origin and msg.sender to the msg.sender of the L1 transaction that emitted the TransactionDeposited event if it is an EOA; otherwise, the aliased msg.sender is used to prevent conflicts with L2 contracts that might have the same address. Moreover, the mechanism mints L2 gas tokens based on the value sent on L1.
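The aliasing mentioned above adds a fixed offset to the L1 address modulo 2^160, using the same constant in the OP stack and Arbitrum:

```python
ALIAS_OFFSET = 0x1111000000000000000000000000000000001111

def apply_l1_to_l2_alias(l1_address: int) -> int:
    # L1 contract senders are shifted by a fixed offset so they cannot
    # collide with L2 contracts deployed at the same address.
    return (l1_address + ALIAS_OFFSET) % 2**160
```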

The Cross Domain Messengers are contracts built on top of this mechanism. The sendMessage function on L1 calls OptimismPortal.depositTransaction, and will therefore be the (aliased) msg.sender on the L2 side. The actual caller of sendMessage is passed as opaque bytes to be unpacked later. On L2, the corresponding Cross Domain Messenger contract receives a call to the relayMessage function, which checks that the msg.sender is the aliased L1 Cross Domain Messenger. A special xDomainMsgSender storage variable saves the actual L1 cross-domain caller, and the call is finally executed. The application on the receiving side can then read the xDomainMsgSender variable to learn who sent the message, while msg.sender is the Cross Domain Messenger contract on L2. If the sender on L1 was a contract, its address is not aliased and doesn’t need to be: checking xDomainMsgSender already scopes callers to L1 callers, so no conflict with L2 contracts can happen.

It’s important to note that such messaging mechanism is completely disconnected from the onchain L1 anchoring mechanism that saves the L1 block information in the L2 L1Block contract, as it is fully handled by the derivation logic.

Linea

The sendMessage function is called on the LineaRollup contract on L1, also identified by others as the “message service” contract. A numbered “rolling hash” is saved in a mapping together with the content of the message to be sent to L2. During Linea’s anchoring process, this rolling hash is relayed to the L2 together with all the message hashes that make it up, which are then saved in the inboxL1L2MessageStatus mapping. The message is finally executed by calling the claimMessage function on the L2MessageService, which references the message status mapping. The destination contract can call the sender() function on the L2MessageService to check who the original sender of the message on L1 was. The value is set only for the duration of the call and is reset to default values after the call returns. If the sender on L1 was a contract, its address is not aliased and doesn’t need to be: checking sender() already scopes callers to L1 callers, so no conflict with L2 contracts can happen.

Taiko

To send a message from L1 to L2, the sendSignal function is called on the SignalService contract on L1, which stores message hashes in its storage at slots computed from the message itself. On the L2 side, after anchoring of the L1 block state root, the proveSignalReceived function is called on the L2 SignalService contract with Merkle proofs that, starting from the relayed state root, reach the message hashes saved in the storage of the L1 SignalService contract.

A higher-level Bridge contract performs the actual contract call through the processMessage function, given the information received by the L2 SignalService contract. The destination contract can call the context() function on the Bridge to check the origin chain and the origin sender of the message. The context() is set only for the duration of the call and is reset to default values after the call returns. If the sender on L1 was a contract, its address is not aliased and doesn’t need to be: checking context() already scopes callers to L1 callers, so no conflict with L2 contracts can happen.

Orbit stack

Messages are sent from L1 to L2 by enqueuing “delayed messages” on the Bridge contract using authorized Inbox contracts. Those messages can have different “kinds” based on their purposes:

uint8 constant L2_MSG = 3;
uint8 constant L1MessageType_L2FundedByL1 = 7;
uint8 constant L1MessageType_submitRetryableTx = 9;
uint8 constant L1MessageType_ethDeposit = 12;
uint8 constant L1MessageType_batchPostingReport = 13;
uint8 constant L2MessageType_unsignedEOATx = 0;
uint8 constant L2MessageType_unsignedContractTx = 1;

Those message kinds internally correspond to different transaction types, as listed in Orbit’s L1 Anchoring section. For those messages to be included on the L2, the permissioned sequencer needs to either explicitly include them in an L2 block, or, if they are not processed within some time, they can be force-included by the user. On L2, the transactions magically appear without signatures and without the need to explicitly claim them, and carry the proper msg.sender from L1, which is aliased if the sender on L1 is a contract.

Proposed design

Designs can be classified into two categories: those that support L1 to L2 messages with the proper msg.sender on the L2, and those that don’t. Using the proper L1 msg.sender (aliased if the sender is a contract) for the L2 transaction has the advantage that many contracts don’t need to be modified to explicitly support L1 to L2 messages, as access control works in the usual way by checking the msg.sender. The downside is that this requires the addition of a new transaction type without a signature, which needs to be scoped to native rollup usage only and prohibited on L1.

Following the design principles, and the fact that existing projects can already handle L1 to L2 messaging without an additional transaction type, it is preferred not to add a new transaction type. The downside is that now contracts need to be explicitly modified to support the L1 to L2 message interface for crosschain message authentication. Many projects already do this, and effort can be made to standardize the interface across projects.

Messages need to be claimed against the hashes relayed during the anchoring process using inclusion proofs, and contextual information can be saved in the contract state for the duration of the call, as already done in the projects discussed above, or alternatively passed directly to the destination contract.
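A sketch of such a claim flow, mirroring the message-hash packing of the NativeRollup contract's sendMessage (sha256 stands in for keccak256; class and method names are illustrative):

```python
import hashlib

class L2MessageInbox:
    # Sketch of claiming L1->L2 messages against hashes relayed
    # during anchoring.
    def __init__(self):
        self.anchored = set()
        self.consumed = set()

    def anchor(self, message_hashes):
        # Hashes relayed from L1 during the anchoring process.
        self.anchored.update(message_hashes)

    @staticmethod
    def message_hash(sender: bytes, to: bytes, value: int,
                     data: bytes, nonce: int) -> bytes:
        # Mirrors the packing in the rollup contract's sendMessage.
        return hashlib.sha256(
            sender + to + value.to_bytes(32, "big")
            + hashlib.sha256(data).digest()
            + nonce.to_bytes(32, "big")
        ).digest()

    def claim(self, sender: bytes, to: bytes, value: int,
              data: bytes, nonce: int) -> bool:
        # A message can be claimed once, and only if it was anchored.
        h = self.message_hash(sender, to, value, data, nonce)
        if h in self.anchored and h not in self.consumed:
            self.consumed.add(h)
            return True
        return False
```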

L2 to L1 messaging

Table of Contents

Current approaches

While L2 -> L1 messaging can be built on top of the state root that the EXECUTE precompile already exposes, some projects expose a shallower interface to make it easier to provide inclusion proofs. We first discuss how existing projects implement L2 -> L1 messaging, to better understand how similar mechanisms can be implemented in native rollups.

OP stack

[spec] The piece of data that is used on the L1 side of the L2->L1 messaging bridge is a “block output root”, which is defined as:

struct BlockOutput {
  bytes32 version;
  bytes32 stateRoot;
  bytes32 messagePasserStorageRoot;
  bytes32 blockHash;
}

Inclusion proofs are verified against the messagePasserStorageRoot instead of the stateRoot, which represents the storage root of the L2ToL1MessagePasser contract on L2. On the L2 side, the L2ToL1MessagePasser contract takes a message, hashes it, and stores it in a mapping.

Linea

[docs] Linea uses a custom merkle tree of messages which is then provided as an input during settlement and verified as part of the validity proof. On the L2 side, a MessageSent event is emitted for each L2->L1 message.

Taiko

[docs] Taiko uses the same mechanism as L1->L2 messaging with a SignalService. The protocol is general enough to support providing proofs either against a contract storage root or against a state root, by also providing an account proof.

Orbit stack

WIP.

Proposed design

At this point it’s not clear whether it is possible to easily expose a custom data structure from L2 to L1. The EXECUTE precompile naturally exposes the state root, and the block_output can also expose the receipts_trie in some form, for example by exposing its root.

In principle, EIP-7685: General purpose execution layer requests could be used, but this would require overloading its semantics from EL->CL requests to L2->L1 requests, and adding a new type of request that also “pollutes” the L1 execution environment.

On the other hand, it is expected that statelessness will help in reducing the cost of providing inclusion proofs directly against a state root, which might remove the need to provide a shallower interface.

Gas token deposits

Table of Contents

Overview

Rollup users need a way to obtain the gas token to be able to send transactions on the L2. Existing solutions fall into two approaches: either an escrow contract contains preminted tokens that are unlocked through the L1 to L2 messaging channel, or a new transaction type that is able to mint the gas token is added to the STF. This page also discusses two more approaches that are currently not used in any project.

Current approaches

OP stack

The custom DepositTransaction type allows minting the gas token based on TransactionDeposited event fields. On L2, the gas token magically appears in the user’s balance.

Linea

Linea uses preminted tokens in the L2MessageService contract, which are then unlocked when L1 to L2 messages are processed on the L2. No new transaction type that can mint gas token is added to the STF.

Taiko

Taiko uses preminted tokens in the L2 Bridge contract, which are then unlocked when L1 to L2 messages are processed on the L2. No new transaction type that can mint gas token is added to the STF.

Orbit stack

Orbit stack uses the custom ArbitrumDepositTx transaction type, which is able to mint the gas token. On L2, the gas token magically appears in the user’s balance.

Other approaches

Manual state manipulation

Before (and after) calling the EXECUTE precompile, projects are free to modify the L2 state root directly with custom execution, including dedicated proving systems. This can be used to touch balances, but it requires doing all updates either before or after block execution. This strategy cannot be used to support arbitrary intra-block gas-token deposits.

Beacon chain withdrawals

Another possible mechanism is to reuse the beacon chain withdrawal mechanism, which is what mints the gas token on L1. Withdrawals are processed at the end of a block, so they cannot be used to process deposits intra-block. As of now, no existing project uses beacon chain withdrawals for gas token deposits, but the mechanism can be left open for use.

Proposed design

Following the design principles, it is preferred not to add a new transaction type that can mint gas tokens, as existing projects already handle gas token deposits through other means. The preferred approach is to use a predeployed contract that contains preminted tokens, which are then unlocked when L1 to L2 messages are processed on the L2. This design fully supports custom gas tokens, as it is not opinionated about what type of message unlocks the gas token on L2, be it ETH, an ERC20, an NFT, or a mining mechanism.
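
The preferred design can be sketched as a minimal predeployed escrow, assuming a message service that calls it once an L1 message is proven. All names here are illustrative, not from any project:

```python
# Minimal sketch of a predeployed escrow holding preminted gas tokens,
# unlocked by L1->L2 messages (names and structure are illustrative).
class GasTokenEscrow:
    def __init__(self, preminted: int):
        self.locked = preminted            # preminted supply, not yet claimable
        self.balances: dict[str, int] = {} # stand-in for L2 account balances
        self.processed: set[bytes] = set() # replay protection per message

    def process_message(self, message_hash: bytes, to: str, amount: int) -> None:
        """Called by the L2 message service once the L1 message is proven."""
        if message_hash in self.processed:
            raise ValueError("message already claimed")
        if amount > self.locked:
            raise ValueError("escrow underfunded")
        self.processed.add(message_hash)
        self.locked -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

Note that nothing here depends on what happened on L1 to produce the message: locking ETH, burning an ERC20, or an NFT claim all reduce to the same unlock call.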

The first deposit problem

WIP.

To be discussed:

  • How can a user claim on L2 the first deposit if they don’t have any gas token to pay for the transaction fees?

L2 fee market

Fee collection

The EXECUTE precompile exposes the coinbase address as an input parameter so that projects can decide by themselves how to collect priority fees on the L2.

While L1 burns the base fee, most L2s in production today redirect it to a dedicated address. Native rollups cannot support this out of the box; it requires additional changes to the L1 protocol.

One proposal consists in exposing in the block_output the cumulative fee burned in the block, both via the effective_gas_fee and the blob_gas_fee. The lines prefixed with + below are the two additions needed in the BlockOutput class and in the process_transaction function to support this feature.

+++ vm/__init__.py
class BlockOutput:
    block_gas_used
    transactions_trie
    receipts_trie
    receipt_keys
    block_logs
    withdrawals_trie
    blob_gas_used
    requests
+   burned_fees
+++ fork.py
def process_transaction(
    block_env: ethereum.osaka.vm.BlockEnvironment,
    block_output: ethereum.osaka.vm.BlockOutput,
    tx: Transaction,
    index: Uint,
) -> None:
    """
    Execute a transaction against the provided environment.
    This function processes the actions needed to execute a transaction.
    It decrements the sender's account after calculating the gas fee and
    refunds them the proper amount after execution. Calling contracts,
    deploying code, and incrementing nonces are all examples of actions that
    happen within this function or from a call made within this function.
    Accounts that are marked for deletion are processed and destroyed after
    execution.
    Parameters
    ----------
    block_env :
        Environment for the Ethereum Virtual Machine.
    block_output :
        The block output for the current block.
    tx :
        Transaction to execute.
    index:
        Index of the transaction in the block.
    """
    trie_set(
        block_output.transactions_trie,
        rlp.encode(index),
        encode_transaction(tx),
    )
    intrinsic_gas, calldata_floor_gas_cost = validate_transaction(tx)
    (
        sender,
        effective_gas_price,
        blob_versioned_hashes,
        tx_blob_gas_used,
    ) = check_transaction(
        block_env=block_env,
        block_output=block_output,
        tx=tx,
    )
    sender_account = get_account(block_env.state, sender)
    if isinstance(tx, BlobTransaction):
        blob_gas_fee = calculate_data_fee(block_env.excess_blob_gas, tx)
    else:
        blob_gas_fee = Uint(0)
    effective_gas_fee = tx.gas * effective_gas_price
    gas = tx.gas - intrinsic_gas
    increment_nonce(block_env.state, sender)
    sender_balance_after_gas_fee = (
        Uint(sender_account.balance) - effective_gas_fee - blob_gas_fee
    )
    set_account_balance(
        block_env.state, sender, U256(sender_balance_after_gas_fee)
    )
    access_list_addresses = set()
    access_list_storage_keys = set()
    access_list_addresses.add(block_env.coinbase)
    if isinstance(
        tx,
        (
            AccessListTransaction,
            FeeMarketTransaction,
            BlobTransaction,
            SetCodeTransaction,
        ),
    ):
        for access in tx.access_list:
            access_list_addresses.add(access.account)
            for slot in access.slots:
                access_list_storage_keys.add((access.account, slot))
    authorizations: Tuple[Authorization, ...] = ()
    if isinstance(tx, SetCodeTransaction):
        authorizations = tx.authorizations
    tx_env = vm.TransactionEnvironment(
        origin=sender,
        gas_price=effective_gas_price,
        gas=gas,
        access_list_addresses=access_list_addresses,
        access_list_storage_keys=access_list_storage_keys,
        transient_storage=TransientStorage(),
        blob_versioned_hashes=blob_versioned_hashes,
        authorizations=authorizations,
        index_in_block=index,
        tx_hash=get_transaction_hash(encode_transaction(tx)),
    )
    message = prepare_message(block_env, tx_env, tx)
    tx_output = process_message_call(message)
    # For EIP-7623 we first calculate the execution_gas_used, which includes
    # the execution gas refund.
    tx_gas_used_before_refund = tx.gas - tx_output.gas_left
    tx_gas_refund = min(
        tx_gas_used_before_refund // Uint(5), Uint(tx_output.refund_counter)
    )
    tx_gas_used_after_refund = tx_gas_used_before_refund - tx_gas_refund
    # Transactions with less execution_gas_used than the floor pay at the
    # floor cost.
    tx_gas_used_after_refund = max(
        tx_gas_used_after_refund, calldata_floor_gas_cost
    )
    tx_gas_left = tx.gas - tx_gas_used_after_refund
    gas_refund_amount = tx_gas_left * effective_gas_price
    # For non-1559 transactions effective_gas_price == tx.gas_price
    priority_fee_per_gas = effective_gas_price - block_env.base_fee_per_gas
    transaction_fee = tx_gas_used_after_refund * priority_fee_per_gas
    # refund gas
    sender_balance_after_refund = get_account(
        block_env.state, sender
    ).balance + U256(gas_refund_amount)
    set_account_balance(block_env.state, sender, sender_balance_after_refund)
    # transfer miner fees
    coinbase_balance_after_mining_fee = get_account(
        block_env.state, block_env.coinbase
    ).balance + U256(transaction_fee)
    if coinbase_balance_after_mining_fee != 0:
        set_account_balance(
            block_env.state,
            block_env.coinbase,
            coinbase_balance_after_mining_fee,
        )
    elif account_exists_and_is_empty(block_env.state, block_env.coinbase):
        destroy_account(block_env.state, block_env.coinbase)
    for address in tx_output.accounts_to_delete:
        destroy_account(block_env.state, address)
    block_output.block_gas_used += tx_gas_used_after_refund
    block_output.blob_gas_used += tx_blob_gas_used
+   block_output.burned_fees += (
+       effective_gas_fee - gas_refund_amount - transaction_fee + blob_gas_fee
+   )
    receipt = make_receipt(
        tx, tx_output.error, block_output.block_gas_used, tx_output.logs
    )
    receipt_key = rlp.encode(Uint(index))
    block_output.receipt_keys += (receipt_key,)
    trie_set(
        block_output.receipts_trie,
        receipt_key,
        receipt,
    )
    block_output.block_logs += tx_output.logs

This value can be exposed as an output of the EXECUTE precompile for projects to decide how to handle it. For example, for those projects whose gas token is bridged through L1, the L1 bridge can decide to credit the burned fees to a dedicated address.
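
To make the accounting concrete, here is a toy check (with made-up numbers, not from any chain) that the value added to burned_fees reduces to the base fee times the gas used, plus the blob fee:

```python
# Toy check of the proposed burned_fees accounting, mirroring the spec diff
# above. All values are illustrative.
base_fee_per_gas = 10
priority_fee_per_gas = 2
effective_gas_price = base_fee_per_gas + priority_fee_per_gas

tx_gas = 100_000   # gas limit, charged upfront
gas_used = 60_000  # gas actually consumed after refunds
blob_gas_fee = 0   # non-blob transaction

effective_gas_fee = tx_gas * effective_gas_price
gas_refund_amount = (tx_gas - gas_used) * effective_gas_price
transaction_fee = gas_used * priority_fee_per_gas  # goes to the coinbase

burned_fees = effective_gas_fee - gas_refund_amount - transaction_fee + blob_gas_fee

# The burned amount is exactly base_fee * gas_used (plus the blob fee).
assert burned_fees == gas_used * base_fee_per_gas
```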

Pricing

WIP.

To be discussed:

  • handling of DA costs in the L2 transactions fee market.

Forced transactions

Overview

Rollups with centralized sequencers must implement a forced transaction mechanism if they wish to preserve L1 censorship resistance properties. Users need to be able to permissionlessly send transactions to L1 and have the guarantee that they will eventually be included on L2 if they are accepted on L1.

Fundamentally, forced transactions do not need to include an L2 transaction signature because they can be authenticated using the L1 transaction signature, assuming the sender is meant to be the same address on L1 and L2. In existing implementations, forced transactions are usually pushed through calldata on L1, given that using a blob would leave most of the space unused and might therefore be cost-inefficient. Forced transaction mechanisms should be designed not to interfere with centralized preconfirmations where possible.

Brainstorming

⚠️ The following are just examples to demonstrate that forced transaction mechanisms are compatible with native rollups and that their implementation isn’t unrealistic. Projects are free to design their own mechanisms, and the precompile aims to be flexible enough to accommodate them.

The EXECUTE precompile can only support transactions with signatures, so forced transactions must include a signature too. On L1, one exception is made for withdrawals from the beacon chain, which are not authenticated on the execution layer. The limitation of withdrawals is that they only cover ETH minting and they cannot be used as a replacement for general message passing.

One approach consists in having a mechanism to detect whether sequenced blocks contain individual forced transactions from a queue on L1 that are older than a certain time threshold, and if not, revert the block submission. It’s unclear whether proving inclusion of arbitrary bytes in blobs is feasible directly on L1 or whether it requires a dedicated ZK verifier. This design alone doesn’t handle a sequencer that is not merely censoring but completely offline, so an additional fallback mechanism that removes the sequencer whitelist might be needed.
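
The include-or-revert check just described can be sketched abstractly as follows; the threshold, data structures, and function names are all illustrative, and the hard part elided here is proving (non-)inclusion against blob data on L1:

```python
# Sketch of the include-or-revert rule: a block submission is rejected if any
# queued forced transaction older than the threshold is missing from it.
FORCE_INCLUSION_WINDOW = 24 * 60 * 60  # seconds; an assumed threshold

def validate_forced_inclusion(queue, included, now):
    """queue: list of (tx_hash, queued_at) pairs; included: set of tx hashes
    proven to be in the sequenced blocks; now: current L1 timestamp."""
    for tx_hash, queued_at in queue:
        if now - queued_at > FORCE_INCLUSION_WINDOW and tx_hash not in included:
            return False  # the L1 contract would revert the block submission
    return True

queue = [(b"tx1", 0), (b"tx2", 90_000)]
assert validate_forced_inclusion(queue, {b"tx1"}, now=100_000)
assert not validate_forced_inclusion(queue, set(), now=100_000)
```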

Another solution is to allow the EXECUTE precompile to reference not only blobs, but also storage or calldata. In this way users can save their forced transactions in storage, and the contract around the EXECUTE calls can force the transaction input to be read from that storage if forced transactions older than a certain time threshold are present.

FOCIL (EIP-7805)

FOCIL introduces a new inclusion_list_transactions parameter to the state_transition function and the apply_body function, that conditions the validity of the block with a validate_inclusion_list_transactions function. In particular, it is checked that block transactions include all valid transactions from the IL that can fit in the block. The execution spec diff on top of osaka can be found here.

Native rollups can re-use such logic where the IL comes from a smart contract as opposed to the CL. Custom logic can be applied that acts as an additional filter, or that adds delays to preserve the validity of already issued preconfirmations. The IL would therefore become another input to the EXECUTE precompile. In this case, FOCIL inclusion on L1 becomes an obvious tech dependency.

It is still an open question how to properly manage the equivalent of a mempool within a smart contract, potential DoS attacks and re-submissions. Some of these problems are not unique to this “L2 FOCIL” design and might be present in existing forced transaction mechanisms too.

L1 vs L2 diff

Blob-carrying transactions

Since rollups are not (supposed to be) connected to a dedicated consensus layer that handles blobs, they cannot support blob-carrying transactions and related functionality. This is solved by the EXECUTE precompile by filtering out all type-3 transactions before calling the state transition function.

As a consequence, blocks will simply not contain any blob-carrying transactions, which allows keeping the BLOBHASH and point evaluation operations untouched, since they behave the same as in an L1 block with no blob-carrying transactions.

Since the EXECUTE precompile performs a recursive call to apply_body and not state_transition, header checks are skipped, and block_env values can either be passed as inputs or re-use the values from L1. Since no blob-carrying transactions are present, the excess_blob_gas would default to zero, unless another value is passed from L1. It’s important to note that L1 exposes block.blobbasefee and not excess_blob_gas, so some translation would be needed to produce the proper input for block_env, or some other change on L1 needs to be made.
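
The translation problem can be made concrete. fake_exponential below is the integer approximation from EIP-4844 that maps excess_blob_gas to the blob base fee; going the other way (from block.blobbasefee back to an excess_blob_gas input for block_env) requires inverting it, sketched here with a binary search under an assumed search bound. The mapping is monotonic but not injective, so only a representative value can be recovered:

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator), per EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # Cancun parameter

def excess_blob_gas_for(blob_base_fee: int, hi: int = 1 << 27) -> int:
    """Smallest excess_blob_gas whose derived blob base fee reaches blob_base_fee.
    The search bound `hi` is an assumption of this sketch."""
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        fee = fake_exponential(
            MIN_BASE_FEE_PER_BLOB_GAS, mid, BLOB_BASE_FEE_UPDATE_FRACTION
        )
        if fee < blob_base_fee:
            lo = mid + 1
        else:
            hi = mid
    return lo

assert fake_exponential(1, 0, BLOB_BASE_FEE_UPDATE_FRACTION) == 1
assert excess_blob_gas_for(1) == 0
```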

RANDAO

The block.prevrandao behaviour across existing rollups varies. Orbit stack chains return the constant 1. OP stack chains return the value from the latest synced L1 block on L2. Linea returns the constant 2. Scroll returns the constant 0. ZKsync returns the constant 2500000000000000.

The current proposal is to leave the field as an input to the EXECUTE precompile so that projects can decide by themselves how to handle it.

Beacon roots storage

Not all projects support EIP-4788: Beacon block root in the EVM as rollups are not directly connected to the beacon chain. The EXECUTE precompile leaves the parent_beacon_block_root as an input so that projects can decide by themselves how to handle it.

Open problems

apply_body mutability

The apply_body interface is not immutable: it has changed in the past with the addition of withdrawals and will change in the future with the addition of inclusion_list_transactions in FOCIL. Using state_transition instead of apply_body might appear more stable at first, as it just takes a Blockchain and a Block, but FOCIL adds inclusion_list_transactions there too.

Every time the apply_body interface changes, the EXECUTE precompile needs to be updated, potentially adding “default values” that leave the EXECUTE interface untouched. As a consequence, native rollups would not automatically benefit from new features that are added outside the apply_body function but that affect it. It’s an open question whether it is always possible to add default values for new parameters.

Past blobs references

WIP.

Customization

Custom gas tokens

We present PoCs for custom gas token implementations using Linea’s messaging bridge as inspiration, which is reported here:

  /**
   * @notice Adds a message for sending cross-chain and emits MessageSent.
   * @dev The message number is preset (nextMessageNumber) and only incremented at the end if successful for the next caller.
   * @dev This function should be called with a msg.value = _value + _fee. The fee will be paid on the destination chain.
   * @param _to The address the message is intended for.
   * @param _fee The fee being paid for the message delivery.
   * @param _calldata The calldata to pass to the recipient.
   */
  function sendMessage(
    address _to,
    uint256 _fee,
    bytes calldata _calldata
  ) external payable whenTypeAndGeneralNotPaused(PauseType.L1_L2) {
    if (_to == address(0)) {
      revert ZeroAddressNotAllowed();
    }

    if (_fee > msg.value) {
      revert ValueSentTooLow();
    }

    uint256 messageNumber = nextMessageNumber++;
    uint256 valueSent = msg.value - _fee;

    bytes32 messageHash = MessageHashing._hashMessage(msg.sender, _to, _fee, valueSent, messageNumber, _calldata);

    _addRollingHash(messageNumber, messageHash);

    emit MessageSent(msg.sender, _to, _fee, valueSent, messageNumber, _calldata, messageHash);
  }
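
The rolling-hash bookkeeping behind _addRollingHash can be sketched as follows. Each new message hash is folded into the previous rolling hash, so the L2 can verify it received every message in order. sha256 stands in for keccak256 here, and the flat module-level state is a simplification of the actual contract storage:

```python
import hashlib

# Illustrative rolling-hash accumulator (sha256 replaces keccak256).
rolling_hash = b"\x00" * 32
message_hashes: list[bytes] = []

def add_rolling_hash(message_hash: bytes) -> bytes:
    """Fold a new message hash into the running accumulator."""
    global rolling_hash
    rolling_hash = hashlib.sha256(rolling_hash + message_hash).digest()
    message_hashes.append(message_hash)
    return rolling_hash

h1 = add_rolling_hash(b"\x01" * 32)
h2 = add_rolling_hash(b"\x02" * 32)
assert h1 != h2 and len(h2) == 32
```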

Arbitrary ERC20

Instead of using msg.value as the valueSent, we use the ERC20 transfer amount.

function sendMessage(
  address _to,
  uint256 _fee,
  uint256 _value,
  bytes calldata _calldata
) external whenTypeAndGeneralNotPaused(PauseType.L1_L2) {
  if (_to == address(0)) {
    revert ZeroAddressNotAllowed();
  }

  uint256 totalAmount = _value + _fee;

  bool success = gasToken.transferFrom(msg.sender, address(this), totalAmount);
  if (!success) {
    revert TokenTransferFailed();
  }

  uint256 messageNumber = nextMessageNumber++;

  bytes32 messageHash = MessageHashing._hashMessage(
    msg.sender,
    _to,
    _fee,
    _value,
    messageNumber,
    _calldata
  );

  _addRollingHash(messageNumber, messageHash);

  emit MessageSent(msg.sender, _to, _fee, _value, messageNumber, _calldata, messageHash);
}

Special considerations apply when tokens with non-standard decimals or transfer logic are used.

ETH burn

Instead of collecting deposits, the ETH sent is burned to a designated burn address.

function sendMessage(
  address _to,
  uint256 _fee,
  bytes calldata _calldata
) external payable whenTypeAndGeneralNotPaused(PauseType.L1_L2) {
  if (_to == address(0)) {
    revert ZeroAddressNotAllowed();
  }

  if (_fee > msg.value) {
    revert ValueSentTooLow();
  }

  uint256 messageNumber = nextMessageNumber++;
  uint256 valueSent = msg.value - _fee;

  // Burn the full amount
  (bool success, ) = address(BURN_ADDRESS).call{value: msg.value}("");
  if (!success) {
    revert BurnFailed();
  }

  bytes32 messageHash = MessageHashing._hashMessage(msg.sender, _to, _fee, valueSent, messageNumber, _calldata);

  _addRollingHash(messageNumber, messageHash);

  emit MessageSent(msg.sender, _to, _fee, valueSent, messageNumber, _calldata, messageHash);
}

NFT-gated gas credits

NFT holders can claim free gas tokens on L2 once per NFT.

function sendMessage(
  address _to,
  uint256 _tokenId,
  bytes calldata _calldata
) external whenTypeAndGeneralNotPaused(PauseType.L1_L2) {
  if (_to == address(0)) {
    revert ZeroAddressNotAllowed();
  }

  if (NFT_COLLECTION.ownerOf(_tokenId) != msg.sender) {
    revert NotNFTOwner();
  }

  if (claimed[_tokenId]) {
    revert AlreadyClaimed();
  }

  claimed[_tokenId] = true;

  uint256 messageNumber = nextMessageNumber++;
  uint256 valueSent = GAS_CREDIT_AMOUNT;

  bytes32 messageHash = MessageHashing._hashMessage(msg.sender, _to, 0, valueSent, messageNumber, _calldata);

  _addRollingHash(messageNumber, messageHash);

  emit MessageSent(msg.sender, _to, 0, valueSent, messageNumber, _calldata, messageHash);
}

Custom sequencing

TODO

Custom VMs

TODO

Proofs

⚠️ This is a heavy work in progress. Details are likely to change as the ZK L1 upgrade effort progresses.

Problem statement

The ZK version of the EXECUTE precompile needs to provide a ZK proof for nodes to verify that its execution was correct. The exact mechanism used should resemble the way that proofs are provided and verified for L1 blocks. Some information on the current effort for the ZK L1 upgrade can be found here.

Background

For L1, the current ZK interface candidate looks as follows:

  • Verifier: takes as input the block hash, the parent hash, and a boolean saying whether the STF is valid or not. The block hash already commits to the parent hash, but we need to check that the parent hash we already have matches the one of the new block being validated.
  • Prover: takes a block and an execution witness. The execution witness is computed during payload execution and is made up of all the state trie node preimages required for execution, the list of all contract code preimages required for execution, all account and storage key preimages required for execution, and the state root of the previous block header, which contains the pre-state and the parent header info required for validation.

Given that the inputs reflect those of the state_transition function, the same verification keys can potentially be used both for L1 blocks and for native rollup blocks.

Non-enshrined vs enshrined proofs

It is expected that the first version of the ZK L1 upgrade will not enshrine any particular quorum of proof systems, but nodes will be able to choose which proof system to use. This means that every computation that gets ZK proven needs to somehow make its witness available for arbitrary prover nodes to pick it up and generate proofs. As a consequence, the EXECUTE precompile is forced to check for availability of transaction commitments, i.e. of blobs. For this reason, calls to the precompile should be able to directly reference a blob, rather than just pass an arbitrary commitment, which is the way existing rollup verifiers work.

If enough confidence is gained in the proof systems being used, a second version might enshrine a quorum of fixed proof systems. In this case, as long as some computation provides a proof, nodes will be able to verify it, regardless of whether the witness is available or not. In this scenario, it will be possible to support native alt-DA L2s too.

Range proofs

While ideally the EXECUTE precompile would reuse the same verification keys as those used for L1 blocks, the downside is that a precompile call would only be able to verify one block at a time. If L1 provides a verification key that is able to verify multiple blocks within a single proof, then projects would not be forced to perform one EXECUTE call per block.

Proof-carrying transactions

Since calls to the precompile need to provide a proof, and we don’t want the proof to be sent onchain, at least with the first version of the ZK L1 upgrade, we introduce a new EIP-2718 transaction type, “proof-carrying transaction”, where the TransactionType is PROOF_TX_TYPE and the TransactionPayload is the RLP serialization of the following TransactionPayloadBody:

[chain_id, nonce, max_priority_fee_per_gas, max_fee_per_gas, gas_limit, to, value, data, access_list, y_parity, r, s]

Similarly to EIP-4844, proof-carrying transactions have two network representations. During transaction gossip responses (PooledTransactions), the EIP-2718 TransactionPayload of the proof-carrying transaction is wrapped to become:

rlp([tx_payload_body, proofs])

Proofs are validated on the consensus layer, similarly to how they’ll be validated for L1 blocks. Multiple proofs might be needed to convince enough validators, since each one of them might be subscribed to receive proofs from different proof systems.
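
The network wrapping above can be sketched with a minimal RLP encoder. PROOF_TX_TYPE and the payload fields below are placeholders: no type id has been assigned, and the field values are stand-ins:

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP encoder covering bytes and lists (enough for this sketch)."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # single byte below 0x80 encodes as itself
        return _length_prefix(len(item), 0x80) + item
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _length_prefix(len(payload), 0xC0) + payload
    raise TypeError(f"unsupported item: {item!r}")

def _length_prefix(length: int, offset: int) -> bytes:
    if length < 56:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

PROOF_TX_TYPE = 0x42  # placeholder type id, not assigned anywhere
tx_payload_body = [b"\x01", b"", b"\x05"]  # stand-in for the fields listed above
proofs = [b"proof-bytes-1"]

# Gossip representation: type byte followed by rlp([tx_payload_body, proofs]).
network_tx = bytes([PROOF_TX_TYPE]) + rlp_encode([tx_payload_body, proofs])
```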

Recursive L1+L2 proofs

There are two strategies that can be employed to verify L1 and EXECUTE precompile proofs:

  1. Separate proofs: the L1 block proof covers everything, including the EXECUTE call, up until the point where the state_transition function is recursively called. In this case, the node is required to verify an additional proof separately.
  2. Recursive proofs: the L1 state_transition proof and the L2 state_transition proof(s) are combined into a single proof. This might introduce additional latency and complexity, but reduces the verification cost for each block.

NR vs sharding

WIP.

To be discussed:

  • how native rollups compare with other sharding designs, e.g. Near, Anoma.

Stacks review

WIP

Orbit stack

Sequencing

Figure: Arbitrum sequencing data flow.

The main function used to sequence blobs in the Orbit stack is addSequencerL2BatchFromBlobsImpl, whose interface is as follows:

function addSequencerL2BatchFromBlobsImpl(
    uint256 sequenceNumber,
    uint256 afterDelayedMessagesRead,
    uint256 prevMessageCount,
    uint256 newMessageCount
) internal {

Example calls:

  1. Block: 22866981 (link)

    • sequenceNumber: 934316
    • afterDelayedMessagesRead: 2032069
    • prevMessageCount: 332968910
    • newMessageCount: 332969371 (+461)
  2. Block: 22866990 (+9) (link)

    • sequenceNumber: 934317 (+1)
    • afterDelayedMessagesRead: 2032073 (+4)
    • prevMessageCount: 332969371 (+0)
    • newMessageCount: 332969899 (+528)
  3. Block: 22867001 (+11) (link)

    • sequenceNumber: 934318 (+1)
    • afterDelayedMessagesRead: 2032073 (+0)
    • prevMessageCount: 332969899 (+0)
    • newMessageCount: 332970398 (+499)

It’s important to note that when a batch is submitted, a “batch spending report” is also submitted with the purpose of reimbursing the batch poster on the L2. This function will be analyzed later on.

The formBlobDataHash function is called to prepare the data that is then saved in storage. Its interface is as follows:

function formBlobDataHash(
    uint256 afterDelayedMessagesRead
) internal view virtual returns (bytes32, IBridge.TimeBounds memory, uint256)

First, the function fetches the blob hashes of the current transaction using a Reader4844 Yul contract. Then it creates a “packed header” using the packHeader function, which is defined as follows:

function packHeader(
    uint256 afterDelayedMessagesRead
) internal view returns (bytes memory, IBridge.TimeBounds memory) {

The function takes the rollup’s bridge “time bounds” and computes the appropriate bounds given the maxTimeVariation values, the current timestamp and block number. Such values are then returned together with the afterDelayedMessagesRead value.

A time bounds struct is defined as follows:

struct TimeBounds {
    uint64 minTimestamp;
    uint64 maxTimestamp;
    uint64 minBlockNumber;
    uint64 maxBlockNumber;
}

and maxTimeVariation is a set of four values representing how far in the past or in the future the timestamp or block number can be from the current time and block number. This is done to prevent reorgs from invalidating sequencer preconfirmations, while still establishing some bounds.

For Arbitrum One, these values are set to:

  • delayBlocks: 7200 blocks (24 hours at 12s block time)
  • futureBlocks: 64 blocks (12.8 minutes at 12s block time)
  • delaySeconds: 86400 seconds (24 hours)
  • futureSeconds: 768 seconds (12.8 minutes)
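
Using the Arbitrum One values above, the resulting bounds can be sketched as follows; the field names follow the TimeBounds struct, while the saturation at zero and the helper itself are assumptions of this sketch rather than the contract's exact code:

```python
# Illustrative derivation of packHeader-style time bounds from the
# maxTimeVariation values listed above (Arbitrum One).
DELAY_SECONDS = 86_400
FUTURE_SECONDS = 768
DELAY_BLOCKS = 7_200
FUTURE_BLOCKS = 64

def time_bounds(block_timestamp: int, block_number: int) -> dict:
    return {
        "minTimestamp": max(block_timestamp - DELAY_SECONDS, 0),
        "maxTimestamp": block_timestamp + FUTURE_SECONDS,
        "minBlockNumber": max(block_number - DELAY_BLOCKS, 0),
        "maxBlockNumber": block_number + FUTURE_BLOCKS,
    }

bounds = time_bounds(1_700_000_000, 22_866_981)
assert bounds["minTimestamp"] == 1_700_000_000 - 86_400
assert bounds["maxBlockNumber"] == 22_866_981 + 64
```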

The formBlobDataHash function then computes the blob cost by taking the current blob base fee, the (fixed) amount of gas used per blob, and the number of blobs. Right after, the following value is returned:

return (
    keccak256(bytes.concat(header, DATA_BLOB_HEADER_FLAG, abi.encodePacked(dataHashes))),
    timeBounds,
    block.basefee > 0 ? blobCost / block.basefee : 0
);

Now that the dataHash is computed, the addSequencerL2BatchImpl function is called, which is defined as follows:

function addSequencerL2BatchImpl(
    bytes32 dataHash,
    uint256 afterDelayedMessagesRead,
    uint256 calldataLengthPosted,
    uint256 prevMessageCount,
    uint256 newMessageCount
)
    internal
    returns (uint256 seqMessageIndex, bytes32 beforeAcc, bytes32 delayedAcc, bytes32 acc)

The function, after some checks, calls the enqueueSequencerMessage on the Bridge contract, passing the dataHash, afterDelayedMessagesRead, prevMessageCount and newMessageCount values.

The enqueueSequencerMessage function is defined as follows:

function enqueueSequencerMessage(
    bytes32 dataHash,
    uint256 afterDelayedMessagesRead,
    uint256 prevMessageCount,
    uint256 newMessageCount
)
    external
    onlySequencerInbox
    returns (uint256 seqMessageIndex, bytes32 beforeAcc, bytes32 delayedAcc, bytes32 acc)

The function, after some checks, fetches the previous “accumulated” hash and merges it with the new dataHash and the “delayed inbox” accumulated hash given the current afterDelayedMessagesRead. This new hash is then pushed to the sequencerInboxAccs array, which represents the canonical list of inputs to the rollup state transition function.
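
The accumulator can be sketched as follows. The real Bridge uses keccak256 and hashes more fields (message counts, time bounds); sha256 and the three-field hash here are simplifications for illustration:

```python
import hashlib

# Illustrative sequencer inbox accumulator: each entry commits to the
# previous accumulator, the new batch dataHash, and the delayed inbox
# accumulator at that point.
sequencer_inbox_accs: list[bytes] = []

def enqueue_sequencer_message(data_hash: bytes, delayed_acc: bytes) -> bytes:
    before_acc = sequencer_inbox_accs[-1] if sequencer_inbox_accs else b"\x00" * 32
    acc = hashlib.sha256(before_acc + data_hash + delayed_acc).digest()
    sequencer_inbox_accs.append(acc)  # canonical list of STF inputs
    return acc
```

Because each entry commits to its predecessor, checking the latest accumulator against a recomputed one (as the fraud proof does) transitively authenticates the whole input history.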

Where are these inputs used?

  • When creating a new assertion, to ensure that the caller is creating an assertion on the expected inputs;
  • When “fast confirming” assertions (in the context of AnyTrust);
  • When validating the inputs in the last step of a fraud proof, specifically inside the OneStepProverHostIo contract.

Batch spending report

TODO

Proving

State roots are proved using an optimistic proof system involving an interactive bisection protocol and a final onchain one-step execution. In particular, a bisection can conclude with a call to the confirmEdgeByOneStepProof function, which ultimately references the inputs that have been posted onchain.

The bisection protocol is divided into 3 “levels” depending on the size of the step: block level, big step level, and small step level. The interaction between these levels is non-trivial and also affects its economic guarantees.

A dedicated smart contract, the OneStepProofEntry, manages a set of sub-contracts depending on the type of step that needs to be executed onchain, and finally returns the post-execution state hash to the caller.

The proveOneStep function in OneStepProofEntry is defined as follows:

function proveOneStep(
    ExecutionContext calldata execCtx,
    uint256 machineStep,
    bytes32 beforeHash,
    bytes calldata proof
) external view override returns (bytes32 afterHash)

The ExecutionContext struct is defined as follows:

struct ExecutionContext {
    uint256 maxInboxMessagesRead;
    IBridge bridge;
    bytes32 initialWasmModuleRoot;
}

where the maxInboxMessagesRead is set to the nextInboxPosition of the previous assertion, which can be seen as the “inbox” target set by the previous assertion to the new assertion. This value should at least be one more than the inbox position covered by the previous assertion, and is set to the current sequencer message count for the next assertion. If an assertion reaches the maximum number of blocks allowed but doesn’t reach the nextInboxPosition, it is considered an “overflow” assertion which has its own specific checks.

The OneStepProofEntry contract populates the machine value and frame stacks and registries given the proof. A machine hash is computed using these values and the wasmModuleRoot, which determines the program to execute. Instructions and necessary merkle proofs are deserialized from the proof. Based on the opcode to be executed onchain, a sub-contract is selected to actually execute the step. The ones that require referencing the inbox inputs are the ones that require calling the OneStepProverHostIo contract. An Instruction is simply defined as:

struct Instruction {
    uint16 opcode;
    uint256 argumentData;
}

If the instruction is READ_INBOX_MESSAGE, the executeReadInboxMessage function is called, which references either the sequencer inbox or the delayed inbox, depending on whether the argument is INBOX_INDEX_SEQUENCER or INBOX_INDEX_DELAYED. The function computes the appropriate accumulated hash given its inputs, fetched from sequencerInboxAccs or delayedInboxAccs, and checks that it matches the expected one.

The executeReadPreImage function is instead used to execute a “read” out of either a keccak256 preimage or a blob hash preimage, using the EIP-4844 point evaluation precompile. It is made sure that the correct point is used for the evaluation.

L1 to L2 messaging

Different types of messages can be sent from L1 to L2, and each of them is identified by a “kind” value, as follows:

uint8 constant L2_MSG = 3;
uint8 constant L1MessageType_L2FundedByL1 = 7;
uint8 constant L1MessageType_submitRetryableTx = 9;
uint8 constant L1MessageType_ethDeposit = 12;
uint8 constant L1MessageType_batchPostingReport = 13;
uint8 constant L2MessageType_unsignedEOATx = 0;
uint8 constant L2MessageType_unsignedContractTx = 1;

uint8 constant ROLLUP_PROTOCOL_EVENT_TYPE = 8;
uint8 constant INITIALIZATION_MSG_TYPE = 11;

In ArbOS, other message types can be found, whose purpose is TBR (To Be Researched):

const (
	L1MessageType_L2Message             = 3
	L1MessageType_EndOfBlock            = 6
	L1MessageType_L2FundedByL1          = 7
	L1MessageType_RollupEvent           = 8
	L1MessageType_SubmitRetryable       = 9
	L1MessageType_BatchForGasEstimation = 10 // probably won't use this in practice
	L1MessageType_Initialize            = 11
	L1MessageType_EthDeposit            = 12
	L1MessageType_BatchPostingReport    = 13
	L1MessageType_Invalid               = 0xFF
)

Gas token deposit (ETH)

To deposit ETH on the L2 to be used as a gas token, the depositEth function on the Inbox contract (also called “delayed inbox”) is used, which is defined as follows:

function depositEth() public payable whenNotPaused onlyAllowed returns (uint256)

Here’s an example transaction.

The onlyAllowed modifier checks an “allow list”, if enabled. The control passes to the _deliverMessage function, which is defined as follows:

function _deliverMessage(
    uint8 _kind,
    address _sender,
    bytes memory _messageData,
    uint256 amount
) internal returns (uint256)

The message kind used here is L1MessageType_ethDeposit. Ultimately, the enqueueDelayedMessage function is called on the Bridge contract. The function ultimately pushes an accumulated hash to the delayedInboxAccs array.

TODO

Derivation

The derivation logic for Arbitrum and Orbit chains is defined in the nitro node.

The L1 node connection is done through the L1Reader in the getL1Reader function in arbnode/node.go. All necessary addresses are fetched from the /cmd/chaininfo/arbitrum_chain_info.json file. For example, here’s the configuration for Arbitrum One:

{
    "chain-name": "arb1",
    "parent-chain-id": 1,
    "parent-chain-is-arbitrum": false,
    "sequencer-url": "https://arb1-sequencer.arbitrum.io/rpc",
    "secondary-forwarding-target": "https://arb1-sequencer-fallback-1.arbitrum.io/rpc,https://arb1-sequencer-fallback-2.arbitrum.io/rpc,https://arb1-sequencer-fallback-3.arbitrum.io/rpc,https://arb1-sequencer-fallback-4.arbitrum.io/rpc,https://arb1-sequencer-fallback-5.arbitrum.io/rpc",
    "feed-url": "wss://arb1-feed.arbitrum.io/feed",
    "secondary-feed-url": "wss://arb1-delayed-feed.arbitrum.io/feed,wss://arb1-feed-fallback-1.arbitrum.io/feed,wss://arb1-feed-fallback-2.arbitrum.io/feed,wss://arb1-feed-fallback-3.arbitrum.io/feed,wss://arb1-feed-fallback-4.arbitrum.io/feed,wss://arb1-feed-fallback-5.arbitrum.io/feed",
    "has-genesis-state": true,
    "block-metadata-url": "https://arb1.arbitrum.io/rpc",
    "track-block-metadata-from": 327000000,
    "chain-config": {
      "chainId": 42161,
      "homesteadBlock": 0,
      "daoForkBlock": null,
      "daoForkSupport": true,
      "eip150Block": 0,
      "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
      "eip155Block": 0,
      "eip158Block": 0,
      "byzantiumBlock": 0,
      "constantinopleBlock": 0,
      "petersburgBlock": 0,
      "istanbulBlock": 0,
      "muirGlacierBlock": 0,
      "berlinBlock": 0,
      "londonBlock": 0,
      "clique": {
        "period": 0,
        "epoch": 0
      },
      "arbitrum": {
        "EnableArbOS": true,
        "AllowDebugPrecompiles": false,
        "DataAvailabilityCommittee": false,
        "InitialArbOSVersion": 6,
        "InitialChainOwner": "0xd345e41ae2cb00311956aa7109fc801ae8c81a52",
        "GenesisBlockNum": 0
      }
    },
    "rollup": {
      "bridge": "0x8315177ab297ba92a06054ce80a67ed4dbd7ed3a",
      "inbox": "0x4dbd4fc535ac27206064b68ffcf827b0a60bab3f",
      "rollup": "0x5ef0d09d1e6204141b4d37530808ed19f60fba35",
      "sequencer-inbox": "0x1c479675ad559dc151f6ec7ed3fbf8cee79582b6",
      "validator-utils": "0x9e40625f52829cf04bc4839f186d621ee33b0e67",
      "validator-wallet-creator": "0x960953f7c69cd2bc2322db9223a815c680ccc7ea",
      "stake-token": "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2",
      "deployed-at": 15411056
    }
}

Then an InboxReader is created using the NewInboxReader function in arbnode/inbox_reader.go, which, among other things, takes in input the L1 reader, the sequencer inbox address and the delayed inbox address.

Transaction data from the sequencer is fetched using the LookupBatchesInRange method on the SequencerInbox type, which returns a list of SequencerInboxBatch values.

Transaction data from the sequencer inbox is processed using the getSequencerData method on the SequencerInboxBatch type in arbnode/sequencer_inbox.go. The method is called in the Serialize method.

The SequencerInboxBatch type is defined as follows:

type SequencerInboxBatch struct {
	BlockHash              common.Hash
	ParentChainBlockNumber uint64
	SequenceNumber         uint64
	BeforeInboxAcc         common.Hash
	AfterInboxAcc          common.Hash
	AfterDelayedAcc        common.Hash
	AfterDelayedCount      uint64
	TimeBounds             bridgegen.IBridgeTimeBounds
	RawLog                 types.Log
	DataLocation           BatchDataLocation
	BridgeAddress          common.Address
	Serialized             []byte // nil if serialization isn't cached yet
}

— TODO —

If the segment type is BatchSegmentKindAdvanceTimestamp or BatchSegmentKindAdvanceL1BlockNumber, the timestamp or the referenced block number is increased accordingly. If the segment type is BatchSegmentKindL2Message or BatchSegmentKindL2MessageBrotli, a MessageWithMetadata is created:

msg = &arbostypes.MessageWithMetadata{
    Message: &arbostypes.L1IncomingMessage{
        Header: &arbostypes.L1IncomingMessageHeader{
            Kind:        arbostypes.L1MessageType_L2Message,
            Poster:      l1pricing.BatchPosterAddress,
            BlockNumber: blockNumber,
            Timestamp:   timestamp,
            RequestId:   nil,
            L1BaseFee:   big.NewInt(0),
        },
        L2msg: segment,
    },
    DelayedMessagesRead: r.delayedMessagesRead,
}

Messages are then passed to the TransactionStreamer by calling the AddMesageAndEndBatch method.

L1 ZK-EVM tracker

Last update: Feb 25, 2026

Blocks in blobs

def execution_payload_data_to_blobs(data: ExecutionPayloadData) -> List[Blob]:
    """
    Canonically encode the execution-payload data into an ordered list of blobs.

    Encoding steps:
      1. bal_bytes = data.blockAccessList
      2. transactions_bytes = RLP.encode(data.transactions)
      3. Create 8-byte header: [4 bytes BAL length][4 bytes tx length]
      4. payload_bytes = header + bal_bytes + transactions_bytes
      5. return bytes_to_blobs(payload_bytes)

    The first blob will contain (in order):
      - [4 bytes] BAL data length
      - [4 bytes] Transaction data length
      - [variable] BAL data (may span multiple blobs)
      - [variable] Transaction data (may span multiple blobs)

    This allows extracting just the BAL data without transactions.

    Note: blockAccessList is already RLP-encoded per EIP-7928. Transactions are RLP-encoded as a list.
    """
    bal_bytes = data.blockAccessList
    transactions_bytes = RLP.encode(data.transactions)

    # Create 8-byte header: [4 bytes BAL length][4 bytes tx length]
    bal_length = len(bal_bytes).to_bytes(4, 'little')
    txs_length = len(transactions_bytes).to_bytes(4, 'little')
    header = bal_length + txs_length

    # Combine header + data
    payload_bytes = header + bal_bytes + transactions_bytes

    return bytes_to_blobs(payload_bytes)

Stateless validation

@slotted_freezable
@dataclass
class ExecutionPayload:
    """
    Represent a new block to be processed by the execution layer.

    The consensus layer constructs this from a beacon block body and
    passes it to the execution engine for validation.

    Note: execution_request_hash is not a direct field in ExecutionPayload
    but it is indirectly committed to via `block_hash` since `request_hash`
    is in the EL-block header.
    """

    parent_hash: Hash32
    fee_recipient: Address
    state_root: Root
    receipts_root: Root
    logs_bloom: Bloom
    prev_randao: Bytes32
    block_number: Uint
    gas_limit: Uint
    gas_used: Uint
    timestamp: U256
    extra_data: Bytes
    base_fee_per_gas: Uint
    block_hash: Hash32
    transactions: Tuple[LegacyTransaction | Bytes, ...]
    withdrawals: Tuple[Withdrawal, ...]
    blob_gas_used: U64
    excess_blob_gas: U64
    block_access_list: Bytes
@slotted_freezable
@dataclass
class NewPayloadRequest:
    """
    Contains an execution payload along with versioned hashes, the
    parent beacon block root, and execution requests for the
    ``verify_and_notify_new_payload`` entry point.

    This corresponds to the consensus-layer `NewPayloadRequest`
    container used for Engine API calls.

    [Bellatrix `NewPayloadRequest`]:
    https://ethereum.github.io/consensus-specs/specs/bellatrix/beacon-chain/#newpayloadrequest
    [Electra modified `NewPayloadRequest`]:
    https://ethereum.github.io/consensus-specs/specs/electra/beacon-chain/#modified-newpayloadrequest
    """

    execution_payload: ExecutionPayload
    versioned_hashes: Tuple[VersionedHash, ...]
    parent_beacon_block_root: Root
    execution_requests: ExecutionRequests
@slotted_freezable
@dataclass
class ExecutionWitness:
    """
    Execution witness data for stateless validation.
    """

    state: Tuple[Bytes, ...]
    """
    Hashed trie-node preimages needed during execution and state-root
    recomputation.
    """

    codes: Tuple[Bytes, ...]
    """
    Contract-code preimages (created or accessed) needed during execution.
    """

    headers: Tuple[Bytes, ...]
    """
    RLP-encoded block headers used for pre-state and ``BLOCKHASH`` correctness
    proofs. This may trend toward empty EIP-7709.
    """
@slotted_freezable
@dataclass
class StatelessInput:
    """
    Input to stateless validation.
    """

    new_payload_request: NewPayloadRequest
    """
    Consensus-layer payload request to validate statelessly. See
    ``execution_engine.NewPayloadRequest`` for structure and links to
    consensus-specs.
    """

    witness: ExecutionWitness
    """
    Execution witness material required to re-execute the core
    state transition function statelessly.
    """

    chain_config: ChainConfig
    """
    Chain configuration values needed during stateless validation.
    """

    public_keys: Tuple[Bytes, ...]
    """
    Recovered transaction public keys, in transaction order.
    """
def verify_stateless_new_payload(
    stateless_input: StatelessInput,
) -> StatelessValidationResult:
    """
    Statelessly validate the execution payload.
    """
    # TODO: We can fill this in properly once the pre-state PR
    # TODO: and state change PRs are completed.
    # TODO: We would effectively call `verify_and_notify_new_payload`

    return StatelessValidationResult(
        new_payload_request_root=compute_new_payload_request_root(
            stateless_input
        ),
        successful_validation=True,
        chain_config=stateless_input.chain_config,
    )