The Native Rollups Book
What are native rollups?
A native rollup is a new type of EVM-based rollup that directly makes use of Ethereum's execution environment for its own state transitions, removing the need for complex and hard-to-maintain proof systems.
Problem statement
EVM-based rollups that target L1 equivalence must track Ethereum on two axes: governance and implementation. Forks require discretionary upgrades, creating windows of un-equivalence and dependence on exit mechanics. In parallel, custom proving stacks must replicate L1 semantics, introducing complexity and bug risk. The sections below detail these two risks and why common mitigations (long exit windows, multiple provers) treat symptoms rather than causes.
Governance risk
Today, EVM-based rollups need to make a trade-off between security and L1 equivalence. Every time Ethereum forks, EVM-based rollups need to go through bespoke governance processes to upgrade their contracts and maintain equivalence with L1 features. A rollup governance system cannot be forced to follow Ethereum's governance decisions, and is thus free to arbitrarily diverge from it. Because of this, the best that an EVM-based rollup striving for L1 equivalence can do is to provide a long exit window for its users, to protect them from its governance going rogue.
Exit windows present yet another trade-off:
- They can be short, reducing the un-equivalence time for users, but also reducing the cases in which the exit window is effective: any protocol that requires a long enough time delay (e.g. vesting contracts, staking contracts, timelocked governance) would not be protected by it.
- They can be long to protect more cases, at the cost of increased un-equivalence time. It's important to remember, though, that no finite (and reasonable) exit window length can protect all possible applications.
The only way to avoid governance risk today is to give up upgrades, remain immutable and accept that the rollup will increasingly diverge from L1 over time.
Bug risk
EVM-based rollups need to implement complex proof systems just to support what Ethereum already provides on L1. Such proof systems, even though they are getting faster and cheaper over time, are still not considered safe to use in production in a permissionless environment. Rollups today aim to reduce this problem by implementing multiple independent proof systems that need to agree before a state transition can be considered valid, which increases protocol costs and complexity.
The EXECUTE precompile
Native rollups solve these problems by replacing complex proof systems with a call to the EXECUTE precompile, which under the hood implements a recursive call to Ethereum's own execution environment. As a consequence, every time Ethereum forks, native rollups automatically adopt the new features without the need for dedicated governance processes. Moreover, the EXECUTE precompile is "bug-free" by construction, in the sense that any bug in the precompile is also a bug in Ethereum itself, which will always be forked and fixed by the Ethereum community.
Purpose of this book
This book is designed to serve as a comprehensive resource for understanding and contributing to our work on native rollups.
Goals of this book include:
- Provide in-depth explanations of the inner workings of the EXECUTE precompile.
- Provide technical guidance on how native rollups can be built around the precompile.
- Educate readers on the benefits of native execution and how the proposal compares to other scalability solutions.
- Foster a community of like-minded individuals by providing resources, tools and best practices for collaboration.
The EXECUTE precompile
Re-execution vs ZK enforcement
The original proposal for the EXECUTE precompile presented two possible enforcement mechanisms: re-execution and ZK proofs. While the latter requires the L1 ZK-EVM upgrade to take place, the former can potentially be implemented beforehand and set the stage for the ZK version, similarly to how proto-danksharding was first introduced without PeerDAS.
The re-execution variant could only support optimistic rollups that are EVM-equivalent and whose bisection protocol goes down to steps of one or a few L2 blocks. Today there are three stacks with working bisection protocols, namely the Orbit stack (Arbitrum), the OP stack (Optimism) and Cartesi. Cartesi is built to run a Linux VM, so it could not use the precompile. The Orbit stack supports Stylus, which makes it not fully EVM-equivalent; a Stylus-less version could be implemented, but even then it could not support Arbitrum One. The OP stack is mostly EVM-equivalent, but would still require heavy modifications to support native execution. It's therefore unclear whether implementing the re-execution version of the precompile is worth it, or whether it's better to wait for the more powerful ZK version.
While L1 ZK-EVM is not needed for the re-execution version, statelessness is, as we want L1 validators to be able to verify the precompile without having to hold all rollups' state. It's not clear whether the time interval between statelessness and L1 ZK-EVM will be long enough to justify the implementation of the re-execution variant.
(WIP) Specification
def execute(evm: Evm) -> None:
    data = evm.message.data
    ...
    charge_gas(...)  # TBD
    ...
    # Inputs
    chain_id = ... buffer_read(...)  # likely hard-coded in a contract
    number = ... buffer_read(...)
    pre_state = ... buffer_read(...)
    post_state = ... buffer_read(...)
    post_receipts = ... buffer_read(...)  # TODO: consider for L2->L1 msgs
    block_gas_limit = ... buffer_read(...)  # TBD: depends on ZK gas handling
    coinbase = ... buffer_read(...)
    prev_randao = ... buffer_read(...)
    excess_blob_gas = ... buffer_read(...)
    transactions = ... buffer_read(...)  # TBD: this should be a ref to blobs
    l1_anchor = ... buffer_read(...)  # TBD: arbitrary info that is passed from L1 to L2 storage

    # Disable blob-carrying transactions
    for tx in map(decode_transaction, transactions):
        if isinstance(tx, BlobTransaction):
            raise ExecuteError

    block_env = vm.BlockEnvironment(
        chain_id=chain_id,
        state=pre_state,
        block_gas_limit=block_gas_limit,
        block_hashes=...,  # TBD: depends on how it will look post-7709
        coinbase=coinbase,
        number=number,  # TBD: they probably need to be strictly sequential
        base_fee_per_gas=...,  # TBD
        time=...,  # TBD: depends if we want to use sequencing or proving time
        prev_randao=prev_randao,  # NOTE: assigning `evm.message.block_env.prev_randao` prevents ahead-of-time sequencing
        excess_blob_gas=excess_blob_gas,  # TODO: consider proposals where blob and calldata gas is merged for L2 pricing
        parent_beacon_block_root=...  # TBD
    )

    # Handle L1 anchoring
    process_unchecked_system_transaction(  # TODO: consider unchecked vs checked and gas accounting if the predeploy is custom
        block_env=block_env,
        target_address=L1_ANCHOR_ADDRESS,  # TBD: exact predeploy address + implementation. Also: does it even need to be a fixed address?
        data=l1_anchor  # TBD: exact format
    )

    # TODO: decide what to do with things that are not valid on a rollup, e.g. blobs
    block_output = apply_body(
        block_env=block_env,
        transactions=transactions,
        withdrawals=()  # TODO: consider using this for deposits
    )

    # NOTE: some things might look different with statelessness
    block_state_root = state_root(block_env.state)
    receipt_root = root(block_output.receipts_trie)

    # TODO: consider using requests_hash for withdrawals
    # TODO: consider adding a gas_used top-level param
    if block_state_root != post_state:
        raise ExecuteError  # TODO: check if this is the proper way to handle errs
    if receipt_root != post_receipts:
        raise ExecuteError

    evm.output = ...  # TBD: maybe block_gas_used?
(WIP) Usage example
The following example shows how an ahead-of-time sequenced rollup can use the EXECUTE precompile to settle blocks.
contract Rollup {
    struct L2Block {
        uint l1AnchorBlock;
        bytes32 blobHash;
        bytes32 prevRandao;
        uint64 excessBlobGas;
    }

    uint64 public constant chainId = 1234321;
    uint public gasLimit;
    // latest settled state
    bytes32 public state;
    // receipts of latest settled block
    bytes32 public receipts;
    // block number to be sequenced next
    uint public nextBlockNumberToSequence;
    // block number to be settled next
    uint public nextBlockNumberToSettle;
    // blob refs store (L2 block number, (L1 block number, blob hash))
    mapping(uint => L2Block) public blocks;

    // assumes that one blob is one block
    function sequence(uint blobIndex) public {
        blocks[nextBlockNumberToSequence] = L2Block({
            l1AnchorBlock: block.number,
            blobHash: blobhash(blobIndex),
            prevRandao: bytes32(block.prevrandao),
            excessBlobGas: ... // TBD: L1 only exposes block.blobbasefee, not the excess blob gas
        });
        nextBlockNumberToSequence++;
    }

    function settle(
        bytes32 _newState,
        bytes32 _receipts
    ) public {
        L2Block memory blk = blocks[nextBlockNumberToSettle];
        EXECUTE(
            chainId,
            nextBlockNumberToSettle,
            blk.l1AnchorBlock,
            state,
            _newState,
            _receipts,
            gasLimit,
            msg.sender,
            blk.prevRandao,
            blk.excessBlobGas, // TBD
            (blk.l1AnchorBlock, blk.blobHash) // TBD: unclear how to reference past blobs at this point
        );
        state = _newState;
        receipts = _receipts;
        nextBlockNumberToSettle++;
    }
}
Tech dependencies
Statelessness (EIP-6800)
L1 validators shouldn't store the state of all rollups, so verification of the EXECUTE precompile must be stateless. The statelessness upgrade is therefore required, with all its associated EIPs.
Some adjacent EIPs that are relevant in this context are:
- EIP-2935: Serve historical block hashes from state (live with Pectra).
- EIP-7709: Read BLOCKHASH from storage and update cost (SFI in Fusaka).
L1 ZK-EVM (EIP-?)
The ZK version of the EXECUTE precompile requires the L1 ZK-EVM upgrade to take place first, and that upgrade will influence how exactly the precompile is implemented:
- Offchain vs onchain proofs: influences whether the precompile needs to take a ZK proof (or multiple proofs) as input.
- Gas limit handling: influences whether the precompile needs to take a gas limit as an input or not. Some L1 ZK-EVM proposals suggest the complete removal of the gas limit, as long as the block proposer itself is also required to provide the ZK proof (see Prover Killers Killer: You Build it, You Prove it).
L1 Anchoring
Overview
To allow messaging from L1 to L2, a rollup needs to be able to obtain some information from the L1 chain, the most general such information being an L1 block hash. In practice, projects relay various types of information from L1 depending on their specific needs.
Current approaches
We first discuss how some existing rollups handle the L1 anchoring problem, to better inform the design of the EXECUTE precompile.
OP stack
[spec]
A special L1Block contract is predeployed on L2, which processes "L1 attributes deposited transactions" during derivation. The contract stores L1 information such as the latest L1 block number, hash, timestamp, and base fee. A deposited transaction is a custom transaction type that is derived from the L1, does not include a signature and does not consume L2 gas.
It's important to note that reception of L1 to L2 messages on the L2 side does not depend on this contract, but rather on "user-deposited transactions" that are derived from events emitted on L1, which again are implemented through the custom transaction type.
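For illustration, a minimal predeploy with this shape could look as follows. This is a simplified sketch, not the production L1Block interface, which carries more fields (sequence number, batcher hash, fee scalars); the depositor account is injected here via the constructor for brevity:
contract L1BlockSketch {
    // System account allowed to write L1 attributes (on the OP stack this is
    // a fixed, well-known depositor address).
    address public immutable depositor;

    uint64 public number;     // latest relayed L1 block number
    uint64 public timestamp;  // latest relayed L1 timestamp
    uint256 public basefee;   // latest relayed L1 base fee
    bytes32 public hash;      // latest relayed L1 block hash

    constructor(address _depositor) {
        depositor = _depositor;
    }

    // Called once per L2 block by the L1 attributes deposited transaction.
    function setL1BlockValues(
        uint64 _number,
        uint64 _timestamp,
        uint256 _basefee,
        bytes32 _hash
    ) external {
        require(msg.sender == depositor, "only depositor");
        number = _number;
        timestamp = _timestamp;
        basefee = _basefee;
        hash = _hash;
    }
}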
Linea
Linea, in their L2MessageService contract on L2, adds a function that allows a permissioned relayer to send information from L1 to L2:
function anchorL1L2MessageHashes(
bytes32[] calldata _messageHashes,
uint256 _startingMessageNumber,
uint256 _finalMessageNumber,
bytes32 _finalRollingHash
) external whenTypeNotPaused(PauseType.GENERAL) onlyRole(L1_L2_MESSAGE_SETTER_ROLE)
On L1, a wrapper around the STF checks that the relayed "rolling hash" is correct, otherwise proof verification fails. Since anchoring is done through regular transactions, the function is permissioned: otherwise any user could send a transaction with an invalid rolling hash, which would be accepted by the L2 but rejected during settlement.
Taiko
[docs] An anchorV3 function is implemented in the TaikoAnchor contract, which allows a GOLDEN_TOUCH_ADDRESS to relay an L1 state root to L2. The private key of the GOLDEN_TOUCH_ADDRESS is publicly known, but the node guarantees that the first transaction in a block is always an anchor transaction, and that any other anchor transactions present in the block revert.
function anchorV3(
uint64 _anchorBlockId,
bytes32 _anchorStateRoot,
uint32 _parentGasUsed,
LibSharedData.BaseFeeConfig calldata _baseFeeConfig,
bytes32[] calldata _signalSlots
)
external
nonZeroBytes32(_anchorStateRoot)
nonZeroValue(_anchorBlockId)
nonZeroValue(_baseFeeConfig.gasIssuancePerSecond)
nonZeroValue(_baseFeeConfig.adjustmentQuotient)
onlyGoldenTouch
nonReentrant
The validity of the _anchorStateRoot value is explicitly checked by Taiko's proof system.
Orbit stack
WIP
Proposed design
An L1_ANCHOR system contract is predeployed on L2 that receives an arbitrary bytes32 value from L1 to be saved in its storage. The contract is intended to be used for L1->L2 messaging without being tied to any specific format, as long as the message is encoded as a bytes32 value. Validation of this value is left to the rollup contract on L1. The exact implementation of the contract is TBD, but EIP-2935 can be used as a reference.
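A minimal sketch of what such a predeploy could look like, assuming an EIP-2935-style ring buffer keyed by L2 block number and writable only by the system address; BUFFER_SIZE and the write path are illustrative choices, not part of any spec:
contract L1Anchor {
    // EIP-2935-style system address assumed as the only allowed writer.
    address public constant SYSTEM_ADDRESS = 0xffffFFFfFFffffffffffffffFfFFFfffFFFfFFfE;
    // Illustrative serve window, mirroring EIP-2935.
    uint256 public constant BUFFER_SIZE = 8191;

    mapping(uint256 => bytes32) private anchors;

    // Called by the system transaction injected by the EXECUTE precompile.
    function set(bytes32 value) external {
        require(msg.sender == SYSTEM_ADDRESS, "only system");
        anchors[block.number % BUFFER_SIZE] = value;
    }

    // Read the anchor stored for a given L2 block number, if still in window.
    function get(uint256 blockNumber) external view returns (bytes32) {
        require(blockNumber < block.number, "future block");
        require(block.number - blockNumber <= BUFFER_SIZE, "too old");
        return anchors[blockNumber % BUFFER_SIZE];
    }
}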
L1 to L2 messaging
Current approaches
L1 to L2 messaging systems are built on top of the L1 anchoring mechanism. We first discuss how some existing rollups handle L1 to L2 messaging to better understand how similar mechanisms can be implemented on top of the L1 anchoring mechanism proposed here for native rollups.
OP stack
There are two ways to send messages from L1 to L2, either by using the low-level API of deposited transactions, or by using the high-level API of the "Cross Domain Messenger" contracts, which are built on top of the low-level API.
Deposited transactions are derived from TransactionDeposited events emitted by the OptimismPortal contract on L1. They are a new transaction type with prefix 0x7E added to the OP stack STF: they are fully derived from L1, cannot be sent to L2 directly, and do not contain signatures, since authentication is already performed on L1. On L2, the deposited transaction's tx.origin and msg.sender are set to the msg.sender of the L1 transaction that emitted the TransactionDeposited event if it is an EOA; if not, an aliased msg.sender is used to prevent conflicts with L2 contracts that might have the same address. Moreover, the function mints L2 gas tokens based on the value sent on L1.
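For reference, the L1 entry point has roughly the following shape (abridged; treat the exact layout of opaqueData as indicative rather than normative):
interface IOptimismPortal {
    event TransactionDeposited(
        address indexed from,   // aliased if the L1 sender is a contract
        address indexed to,
        uint256 indexed version,
        bytes opaqueData        // packed mint/value/gasLimit/isCreation/data
    );

    function depositTransaction(
        address to,
        uint256 value,
        uint64 gasLimit,
        bool isCreation,
        bytes memory data
    ) external payable;
}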
The Cross Domain Messengers are contracts built on top of this mechanism. The sendMessage function on L1 calls OptimismPortal.depositTransaction, so the L1 messenger will be the (aliased) msg.sender on the L2 side. The actual caller of the sendMessage function is passed as opaque bytes to be unpacked later. On L2, the corresponding Cross Domain Messenger contract receives a call to the relayMessage function, which checks that the msg.sender is the aliased L1 Cross Domain Messenger. A special xDomainMsgSender storage variable saves the actual L1 cross-domain caller, and the call is finally executed. The application on the other side can then read the xDomainMsgSender variable to learn who sent the message, while the msg.sender will be the Cross Domain Messenger contract on L2.
It's important to note that this messaging mechanism is completely disconnected from the onchain L1 anchoring mechanism that saves the L1 block information in the L1Block contract on L2, as it is fully handled by the derivation logic.
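For reference, the aliasing mentioned above adds a fixed offset to the L1 address modulo 2^160; this helper is shared in spirit by both the OP stack and Arbitrum:
library AddressAliasHelper {
    // L1 contract senders are shifted by a fixed offset (mod 2^160) so they
    // cannot collide with L2 contracts deployed at the same address.
    uint160 internal constant OFFSET = uint160(0x1111000000000000000000000000000000001111);

    function applyL1ToL2Alias(address l1Address) internal pure returns (address) {
        unchecked {
            return address(uint160(l1Address) + OFFSET);
        }
    }

    function undoL1ToL2Alias(address l2Address) internal pure returns (address) {
        unchecked {
            return address(uint160(l2Address) - OFFSET);
        }
    }
}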
Linea
The sendMessage function is called on the LineaRollup contract on L1, also identified as the "message service" contract. A numbered "rolling hash" is saved in a mapping together with the content of the message to be sent to L2. During Linea's anchoring process, this rolling hash is relayed to the L2 together with all the message hashes that make it up, which are then saved in the inboxL1L2MessageStatus mapping, as sketched below. The message is finally executed by calling the claimMessage function, which references the message status mapping.
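A minimal sketch of the rolling-hash bookkeeping on the L1 side, assuming a chained keccak construction (Linea's actual encoding and field set may differ):
contract RollingHashSketch {
    bytes32 public currentRollingHash;
    uint256 public nextMessageNumber = 1;
    // message number => rolling hash after that message
    mapping(uint256 => bytes32) public rollingHashes;

    // Fold each new message hash into the running accumulator so that L2
    // anchoring can be checked against a single value during settlement.
    function _addMessage(bytes32 messageHash) internal {
        currentRollingHash = keccak256(abi.encode(currentRollingHash, messageHash));
        rollingHashes[nextMessageNumber] = currentRollingHash;
        nextMessageNumber++;
    }
}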
Taiko
To send a message from L1 to L2, the sendSignal function is called on the SignalService contract on L1, which stores message hashes in its storage at slots computed from the message itself. On the L2 side, after the L1 block state root has been anchored, the proveSignalReceived function is called on the L2 SignalService contract with merkle proofs that unpack the anchored state root down to the message hashes saved in the storage of the L1 SignalService contract. A higher-level Bridge contract is also deployed, which performs the actual contract call given the information received from the L2 SignalService contract.
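The pattern relies on signals living at deterministic storage slots that can later be opened with a merkle proof against the anchored state root. A minimal sketch, with a slot derivation that is illustrative rather than Taiko's exact one (the real derivation includes more fields, e.g. a chain id):
contract SignalServiceSketch {
    // Derive a deterministic storage slot for (sender, signal).
    function getSignalSlot(address app, bytes32 signal) public pure returns (bytes32) {
        return keccak256(abi.encodePacked("SIGNAL", app, signal));
    }

    // Mark a signal as sent by writing a sentinel value at its slot; the
    // receiving chain later proves this slot's value with a storage proof.
    function sendSignal(bytes32 signal) external returns (bytes32 slot) {
        slot = getSignalSlot(msg.sender, signal);
        assembly {
            sstore(slot, 1)
        }
    }
}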
Proposed design
WIP
L2 to L1 messaging
WIP
Gas token deposits
Overview
Rollup users need a way to obtain the gas token to be able to send transactions on the L2. Existing solutions fall into two approaches: either an escrow contract holds preminted tokens that are unlocked through the L1 to L2 messaging channel, or a new transaction type that can mint the gas token is added to the STF. This page will also discuss two more approaches that are currently not used in any project.
Current approaches
OP stack
The custom "deposited transaction" type allows to mint the gas token based on TransactionDeposited
event fields. On L2, the gas token magically appears in the user's balance.
Linea
WIP
Taiko
WIP
Other approaches
Manual state manipulation
WIP
Beacon chain withdrawals
WIP
Proposed design
WIP
L2 fee market
Pricing
WIP
Fee collection
WIP
L1 vs L2 diff
Blob-carrying transactions
Since rollups are not (supposed to be) connected to a dedicated consensus layer that handles blobs, they cannot support blob-carrying transactions and related functionality. The EXECUTE precompile solves this by filtering out all type-3 transactions before calling the state transition function.
As a consequence, blocks will simply not contain any blob-carrying transactions, which allows keeping BLOBHASH and the point evaluation operations untouched, since they behave the same as in an L1 block with no blob-carrying transactions.
Since the EXECUTE precompile makes a recursive call to apply_body and not state_transition, header checks are skipped, and block_env values can either be passed as inputs or re-use the values from L1. Since no blob-carrying transactions are present, excess_blob_gas would default to zero, unless another value is passed from L1. It's important to note that L1 exposes block.blobbasefee and not excess_blob_gas, so some translation would be needed to produce the proper input for block_env, or some other change on L1 needs to be made.
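To make the translation concrete, the sketch below ports EIP-4844's fake_exponential, which maps excess blob gas to the blob base fee; recovering excess_blob_gas from block.blobbasefee would therefore require inverting this function, e.g. by a search offchain. Constants are the original EIP-4844 values and may change across forks:
library BlobMath {
    uint256 internal constant MIN_BASE_FEE_PER_BLOB_GAS = 1;
    uint256 internal constant BLOB_BASE_FEE_UPDATE_FRACTION = 3338477;

    // Port of EIP-4844's fake_exponential: approximates
    // factor * e^(numerator/denominator) using only integer math.
    function fakeExponential(uint256 factor, uint256 numerator, uint256 denominator)
        internal pure returns (uint256)
    {
        uint256 i = 1;
        uint256 output = 0;
        uint256 numeratorAccum = factor * denominator;
        while (numeratorAccum > 0) {
            output += numeratorAccum;
            numeratorAccum = (numeratorAccum * numerator) / (denominator * i);
            i += 1;
        }
        return output / denominator;
    }

    // Blob base fee as a function of excess blob gas: this is the forward
    // direction L1 computes; block.blobbasefee only exposes the result.
    function blobBaseFee(uint256 excessBlobGas) internal pure returns (uint256) {
        return fakeExponential(MIN_BASE_FEE_PER_BLOB_GAS, excessBlobGas, BLOB_BASE_FEE_UPDATE_FRACTION);
    }
}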
RANDAO
The block.prevrandao behaviour varies across existing rollups. Orbit stack chains return the constant 1. OP stack chains return the value from the latest L1 block synced on L2. Linea returns the constant 2. Scroll returns the constant 0. ZKsync returns the constant 2500000000000000.
The current proposal is to leave the field as an input to the EXECUTE precompile so that projects can decide by themselves how to handle it.
Customization
WIP
NR vs sharding
WIP
Stacks review
WIP
Orbit stack
Sequencing
The main function used to sequence blobs in the Orbit stack is the addSequencerL2BatchFromBlobsImpl function, whose interface is as follows:
function addSequencerL2BatchFromBlobsImpl(
uint256 sequenceNumber,
uint256 afterDelayedMessagesRead,
uint256 prevMessageCount,
uint256 newMessageCount
) internal {
Example calls:
- Block 22866981 (link): sequenceNumber: 934316, afterDelayedMessagesRead: 2032069, prevMessageCount: 332968910, newMessageCount: 332969371 (+461)
- Block 22866990 (+9) (link): sequenceNumber: 934317 (+1), afterDelayedMessagesRead: 2032073 (+4), prevMessageCount: 332969371 (+0), newMessageCount: 332969899 (+528)
- Block 22867001 (+11) (link): sequenceNumber: 934318 (+1), afterDelayedMessagesRead: 2032073 (+0), prevMessageCount: 332969899 (+0), newMessageCount: 332970398 (+499)
It's important to note that when a batch is submitted, a "batch spending report" is also submitted, with the purpose of reimbursing the batch poster on L2. This function will be analyzed later on.
The formBlobDataHash function is called to prepare the data that is then saved in storage. Its interface is as follows:
function formBlobDataHash(
uint256 afterDelayedMessagesRead
) internal view virtual returns (bytes32, IBridge.TimeBounds memory, uint256)
First, the function fetches the blob hashes of the current transaction using a Reader4844 Yul contract. Then it creates a "packed header" using the packHeader function, which is defined as follows:
function packHeader(
uint256 afterDelayedMessagesRead
) internal view returns (bytes memory, IBridge.TimeBounds memory) {
The function takes the rollup bridge's "time bounds" and computes the appropriate bounds given the maxTimeVariation values, the current timestamp and the current block number. These values are then returned together with the afterDelayedMessagesRead value.
A time bounds struct is defined as follows:
struct TimeBounds {
    uint64 minTimestamp;
    uint64 maxTimestamp;
    uint64 minBlockNumber;
    uint64 maxBlockNumber;
}
and the maxTimeVariation is a set of four values bounding how far in the past or in the future the reported timestamp and block number can be relative to the current ones. This is done to prevent reorgs from invalidating sequencer preconfirmations, while still establishing some bounds.
For Arbitrum One, these values are set to:
- delayBlocks: 7200 blocks (24 hours at 12s block time)
- futureBlocks: 64 blocks (12.8 minutes at 12s block time)
- delaySeconds: 86400 seconds (24 hours)
- futureSeconds: 768 seconds (12.8 minutes)
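For intuition, the bounds computation is roughly the following; a sketch of the packHeader logic, with the maxTimeVariation values as storage fields and the production code using equivalent saturating arithmetic:
contract TimeBoundsSketch {
    struct TimeBounds {
        uint64 minTimestamp;
        uint64 maxTimestamp;
        uint64 minBlockNumber;
        uint64 maxBlockNumber;
    }

    // maxTimeVariation values (e.g. the Arbitrum One settings above)
    uint64 public delayBlocks;
    uint64 public futureBlocks;
    uint64 public delaySeconds;
    uint64 public futureSeconds;

    function getTimeBounds() public view returns (TimeBounds memory bounds) {
        // Lower bounds saturate at zero; upper bounds extend into the future.
        if (block.timestamp > delaySeconds) {
            bounds.minTimestamp = uint64(block.timestamp) - delaySeconds;
        }
        bounds.maxTimestamp = uint64(block.timestamp) + futureSeconds;
        if (block.number > delayBlocks) {
            bounds.minBlockNumber = uint64(block.number) - delayBlocks;
        }
        bounds.maxBlockNumber = uint64(block.number) + futureBlocks;
    }
}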
The formBlobDataHash function then computes the blob cost by taking the current blob base fee, the (fixed) amount of gas used per blob and the number of blobs. Right after, the following value is returned:
return (
    keccak256(bytes.concat(header, DATA_BLOB_HEADER_FLAG, abi.encodePacked(dataHashes))),
    timeBounds,
    block.basefee > 0 ? blobCost / block.basefee : 0
);
Now that the dataHash is computed, the addSequencerL2BatchImpl function is called, which is defined as follows:
function addSequencerL2BatchImpl(
bytes32 dataHash,
uint256 afterDelayedMessagesRead,
uint256 calldataLengthPosted,
uint256 prevMessageCount,
uint256 newMessageCount
)
internal
returns (uint256 seqMessageIndex, bytes32 beforeAcc, bytes32 delayedAcc, bytes32 acc)
The function, after some checks, calls enqueueSequencerMessage on the Bridge contract, passing the dataHash, afterDelayedMessagesRead, prevMessageCount and newMessageCount values.
The enqueueSequencerMessage function is defined as follows:
function enqueueSequencerMessage(
bytes32 dataHash,
uint256 afterDelayedMessagesRead,
uint256 prevMessageCount,
uint256 newMessageCount
)
external
onlySequencerInbox
returns (uint256 seqMessageIndex, bytes32 beforeAcc, bytes32 delayedAcc, bytes32 acc)
The function, after some checks, fetches the previous "accumulated" hash and merges it with the new dataHash and the "delayed inbox" accumulated hash given the current afterDelayedMessagesRead. This new hash is then pushed into the sequencerInboxAccs array, which represents the canonical list of inputs to the rollup state transition function.
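Schematically, the accumulator update looks as follows (a sketch; the production Bridge code also tracks message counts and emits events):
contract SequencerAccSketch {
    bytes32[] public sequencerInboxAccs;
    bytes32[] public delayedInboxAccs;

    // Chain the new batch hash and the delayed-inbox accumulator onto the
    // previous sequencer-inbox accumulator.
    function accumulate(bytes32 dataHash, uint256 afterDelayedMessagesRead) internal {
        bytes32 beforeAcc = sequencerInboxAccs.length > 0
            ? sequencerInboxAccs[sequencerInboxAccs.length - 1]
            : bytes32(0);
        bytes32 delayedAcc = afterDelayedMessagesRead > 0
            ? delayedInboxAccs[afterDelayedMessagesRead - 1]
            : bytes32(0);
        sequencerInboxAccs.push(keccak256(abi.encodePacked(beforeAcc, dataHash, delayedAcc)));
    }
}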
Where are these inputs used?
- When creating a new assertion, to ensure that the caller is creating an assertion on the expected inputs;
- When "fast confirming" assertions (in the context of AnyTrust);
- When validating the inputs in the last step of a fraud proof, specifically inside the OneStepProverHostIo contract.
Batch spending report
TODO
Proving
State roots are proved using an optimistic proof system involving an interactive bisection protocol and a final onchain one-step execution. In particular, a bisection can conclude with a call to the confirmEdgeByOneStepProof function, which ultimately references the inputs that have been posted onchain.
The bisection protocol is divided into 3 "levels", depending on the size of the step: block level, big step level, and small step level. The interaction between these levels is non-trivial and also affects the protocol's economic guarantees.
A dedicated smart contract, the OneStepProofEntry, manages a set of sub-contracts depending on the type of step that needs to be executed onchain, and finally returns the post-execution state hash to the caller.
The proveOneStep function in OneStepProofEntry is defined as follows:
function proveOneStep(
ExecutionContext calldata execCtx,
uint256 machineStep,
bytes32 beforeHash,
bytes calldata proof
) external view override returns (bytes32 afterHash)
The ExecutionContext struct is defined as follows:
struct ExecutionContext {
    uint256 maxInboxMessagesRead;
    IBridge bridge;
    bytes32 initialWasmModuleRoot;
}
where maxInboxMessagesRead is set to the nextInboxPosition of the previous assertion, which can be seen as the "inbox target" set by the previous assertion for the new one. This value should be at least one more than the inbox position covered by the previous assertion, and is set to the current sequencer message count for the next assertion. If an assertion reaches the maximum number of blocks allowed but doesn't reach the nextInboxPosition, it is considered an "overflow" assertion, which has its own specific checks.
The OneStepProofEntry contract populates the machine value and frame stacks and registers given the proof. A machine hash is computed using these values and the wasmModuleRoot, which determines the program to execute. Instructions and the necessary merkle proofs are deserialized from the proof. Based on the opcode to be executed onchain, a sub-contract is selected to actually execute the step. The ones that reference the inbox inputs are those that call the OneStepProverHostIo contract. An Instruction is simply defined as:
struct Instruction {
    uint16 opcode;
    uint256 argumentData;
}
If the instruction is READ_INBOX_MESSAGE, the executeReadInboxMessage function is called, which references either the sequencer inbox or the delayed inbox, depending on whether the argument is INBOX_INDEX_SEQUENCER or INBOX_INDEX_DELAYED. The function computes the appropriate accumulated hash given its inputs, fetched from the sequencerInboxAccs or delayedInboxAccs arrays, and checks that it matches the expected one.
The executeReadPreImage function is instead used to execute a "read" out of either a keccak256 preimage or a blob hash preimage, using the 4844 point evaluation precompile. The contract ensures that the correct point is used for the evaluation.
L1 to L2 messaging
Different types of messages can be sent from L1 to L2, and each of them is identified by a "kind" value, as follows:
uint8 constant L2_MSG = 3;
uint8 constant L1MessageType_L2FundedByL1 = 7;
uint8 constant L1MessageType_submitRetryableTx = 9;
uint8 constant L1MessageType_ethDeposit = 12;
uint8 constant L1MessageType_batchPostingReport = 13;
uint8 constant L2MessageType_unsignedEOATx = 0;
uint8 constant L2MessageType_unsignedContractTx = 1;
uint8 constant ROLLUP_PROTOCOL_EVENT_TYPE = 8;
uint8 constant INITIALIZATION_MSG_TYPE = 11;
In ArbOS, other message types can be found, whose purpose is TBR (To Be Researched):
const (
	L1MessageType_L2Message             = 3
	L1MessageType_EndOfBlock            = 6
	L1MessageType_L2FundedByL1          = 7
	L1MessageType_RollupEvent           = 8
	L1MessageType_SubmitRetryable       = 9
	L1MessageType_BatchForGasEstimation = 10 // probably won't use this in practice
	L1MessageType_Initialize            = 11
	L1MessageType_EthDeposit            = 12
	L1MessageType_BatchPostingReport    = 13
	L1MessageType_Invalid               = 0xFF
)
Gas token deposit (ETH)
To deposit ETH on the L2 to be used as the gas token, the depositEth function on the Inbox contract (also called the "delayed inbox") is used, which is defined as follows:
function depositEth() public payable whenNotPaused onlyAllowed returns (uint256)
Here's an example transaction.
The onlyAllowed modifier checks an "allow list", if enabled. Control then passes to the _deliverMessage function, which is defined as follows:
function _deliverMessage(
uint8 _kind,
address _sender,
bytes memory _messageData,
uint256 amount
) internal returns (uint256)
The message kind used here is L1MessageType_ethDeposit. Ultimately, the enqueueDelayedMessage function is called on the Bridge contract, which pushes an accumulated hash into the delayedInboxAccs array.
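Schematically, the delayed inbox follows the same hash-chain pattern as the sequencer inbox, with the message metadata folded into the leaf. A sketch; the exact field set approximates what lives in Messages.sol:
contract DelayedInboxSketch {
    bytes32[] public delayedInboxAccs;

    // Fold the message metadata into a leaf hash, then chain it onto the
    // previous delayed-inbox accumulator.
    function enqueue(uint8 kind, address sender, bytes32 messageDataHash) internal {
        bytes32 leaf = keccak256(abi.encodePacked(
            kind, sender, block.number, block.timestamp,
            delayedInboxAccs.length, block.basefee, messageDataHash
        ));
        bytes32 prevAcc = delayedInboxAccs.length > 0
            ? delayedInboxAccs[delayedInboxAccs.length - 1]
            : bytes32(0);
        delayedInboxAccs.push(keccak256(abi.encodePacked(prevAcc, leaf)));
    }
}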
TODO
Derivation
The derivation logic for Arbitrum and Orbit chains is defined in the nitro node.
The L1 node connection is done through the L1Reader, created in the getL1Reader function in arbnode/node.go. All necessary addresses are fetched from the /cmd/chaininfo/arbitrum_chain_info.json file. For example, here's the configuration for Arbitrum One:
{
  "chain-name": "arb1",
  "parent-chain-id": 1,
  "parent-chain-is-arbitrum": false,
  "sequencer-url": "https://arb1-sequencer.arbitrum.io/rpc",
  "secondary-forwarding-target": "https://arb1-sequencer-fallback-1.arbitrum.io/rpc,https://arb1-sequencer-fallback-2.arbitrum.io/rpc,https://arb1-sequencer-fallback-3.arbitrum.io/rpc,https://arb1-sequencer-fallback-4.arbitrum.io/rpc,https://arb1-sequencer-fallback-5.arbitrum.io/rpc",
  "feed-url": "wss://arb1-feed.arbitrum.io/feed",
  "secondary-feed-url": "wss://arb1-delayed-feed.arbitrum.io/feed,wss://arb1-feed-fallback-1.arbitrum.io/feed,wss://arb1-feed-fallback-2.arbitrum.io/feed,wss://arb1-feed-fallback-3.arbitrum.io/feed,wss://arb1-feed-fallback-4.arbitrum.io/feed,wss://arb1-feed-fallback-5.arbitrum.io/feed",
  "has-genesis-state": true,
  "block-metadata-url": "https://arb1.arbitrum.io/rpc",
  "track-block-metadata-from": 327000000,
  "chain-config": {
    "chainId": 42161,
    "homesteadBlock": 0,
    "daoForkBlock": null,
    "daoForkSupport": true,
    "eip150Block": 0,
    "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "muirGlacierBlock": 0,
    "berlinBlock": 0,
    "londonBlock": 0,
    "clique": {
      "period": 0,
      "epoch": 0
    },
    "arbitrum": {
      "EnableArbOS": true,
      "AllowDebugPrecompiles": false,
      "DataAvailabilityCommittee": false,
      "InitialArbOSVersion": 6,
      "InitialChainOwner": "0xd345e41ae2cb00311956aa7109fc801ae8c81a52",
      "GenesisBlockNum": 0
    }
  },
  "rollup": {
    "bridge": "0x8315177ab297ba92a06054ce80a67ed4dbd7ed3a",
    "inbox": "0x4dbd4fc535ac27206064b68ffcf827b0a60bab3f",
    "rollup": "0x5ef0d09d1e6204141b4d37530808ed19f60fba35",
    "sequencer-inbox": "0x1c479675ad559dc151f6ec7ed3fbf8cee79582b6",
    "validator-utils": "0x9e40625f52829cf04bc4839f186d621ee33b0e67",
    "validator-wallet-creator": "0x960953f7c69cd2bc2322db9223a815c680ccc7ea",
    "stake-token": "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2",
    "deployed-at": 15411056
  }
}
Then an InboxReader is created using the NewInboxReader function in arbnode/inbox_reader.go, which, among other things, takes as input the L1 reader, the sequencer inbox address and the delayed inbox address.
Transaction data from the sequencer is fetched using the LookupBatchesInRange method on the SequencerInbox type, which returns a list of SequencerInboxBatch values.
Transaction data from the sequencer inbox is processed using the getSequencerData method on the SequencerInboxBatch type in arbnode/sequencer_inbox.go. The method is called from the Serialize method.
The SequencerInboxBatch type is defined as follows:
type SequencerInboxBatch struct {
	BlockHash              common.Hash
	ParentChainBlockNumber uint64
	SequenceNumber         uint64
	BeforeInboxAcc         common.Hash
	AfterInboxAcc          common.Hash
	AfterDelayedAcc        common.Hash
	AfterDelayedCount      uint64
	TimeBounds             bridgegen.IBridgeTimeBounds
	RawLog                 types.Log
	DataLocation           BatchDataLocation
	BridgeAddress          common.Address
	Serialized             []byte // nil if serialization isn't cached yet
}
--- TODO ---
If the segment type is BatchSegmentKindAdvanceTimestamp or BatchSegmentKindAdvanceL1BlockNumber, the timestamp or the referenced block number is increased accordingly. If the segment type is BatchSegmentKindL2Message or BatchSegmentKindL2MessageBrotli, a MessageWithMetadata is created:
msg = &arbostypes.MessageWithMetadata{
	Message: &arbostypes.L1IncomingMessage{
		Header: &arbostypes.L1IncomingMessageHeader{
			Kind:        arbostypes.L1MessageType_L2Message,
			Poster:      l1pricing.BatchPosterAddress,
			BlockNumber: blockNumber,
			Timestamp:   timestamp,
			RequestId:   nil,
			L1BaseFee:   big.NewInt(0),
		},
		L2msg: segment,
	},
	DelayedMessagesRead: r.delayedMessagesRead,
}
Messages are then passed to the TransactionStreamer by calling the AddMessagesAndEndBatch method.