Releases: ethereum-optimism/optimism
Release op-contracts/v1.5.0-rc.1 — Safe Extensions
Overview
This release adds 3 Safe extensions that are intended to help OP Stack chains reach Stage 1. You can read more about this in the governance post.
Contract Changes
Only the following 3 contracts are released with this version:
- `LivenessGuard`, a guard for Safes.
- `LivenessModule`, a module for Safes, intended to be paired with `LivenessGuard`.
- `DeputyGuardianModule`, a module for Safes.
These are all unrelated to the core OP Stack protocol contracts.
Full Contract Set
A chain using this contracts release must be using the following contracts at the specified semvers. Because the `op-contracts/v1.4.0` release is pending governance approval, the versions below are from the `op-contracts/v1.3.0` release.
The three new contracts:
- LivenessGuard: 1.0.0
- LivenessModule: 1.2.0
- DeputyGuardianModule: 1.1.0
And the contracts from the prior `op-contracts/v1.3.0` release:
- AddressManager: Latest (this has no version) (No change from prior version)
- L1CrossDomainMessenger: 2.3.0
- L1ERC721Bridge: 2.1.0
- L1StandardBridge: 2.1.0
- L2OutputOracle: 1.8.0
- OptimismMintableERC20Factory: 1.9.0
- OptimismPortal: 2.5.0
- SystemConfig: 1.12.0
- SuperchainConfig: 1.1.0 (No change from prior version)
- ProtocolVersions: 1.0.0 (No change from prior version)
Full Changelog
The full contracts diff between this release and the prior release can be found at the link below. Note that, because this is a monorepo, this will likely include many unrelated changes and will be a noisy diff. Because `op-contracts/v1.4.0` is also pending governance approval, this release is compared to `op-contracts/v1.3.0`.
Release op-contracts/v1.4.0-rc.4 - Fault Proofs
Overview
This release adds Fault Proofs to help OP Stack chains reach Stage 1. You can read more about this in the governance post.
Contract Changes
This includes the following new contracts:
- `FaultDisputeGame`, an implementation of the `IDisputeGame` interface for Fault Proofs.
- `PermissionedDisputeGame`, a `FaultDisputeGame` contract that is permissioned.
- `DisputeGameFactory`, a factory contract for creating `IDisputeGame` contracts.
- `AnchorStateRegistry`, stores the latest "anchor" state used by the `FaultDisputeGame` contract.
- `DelayedWETH`, an extension to `WETH9` that allows delayed withdrawals.
- `MIPS`, an onchain MIPS32 VM.
- `PreimageOracle`, a contract for storing permissioned pre-images.
Full Contract Set
A chain using this contracts release must be using the following contracts at the specified semvers.
The new contracts:
- FaultDisputeGame: 1.2.0
- PermissionedDisputeGame: 1.2.0
- DisputeGameFactory: 1.0.0
- AnchorStateRegistry: 1.0.0
- DelayedWETH: 1.0.0
- MIPS: 1.0.1
- PreimageOracle: 1.0.0
The updated contracts:
- OptimismPortal: 3.10.0
- SystemConfig: 2.2.0
And contracts unchanged from the prior `op-contracts/v1.3.0` release:
- AddressManager: Latest (This has no version)
- L1CrossDomainMessenger: 2.3.0
- L1ERC721Bridge: 2.1.0
- L1StandardBridge: 2.1.0
- OptimismMintableERC20Factory: 1.9.0
- SuperchainConfig: 1.1.0
- ProtocolVersions: 1.0.0
Note that the L2OutputOracle has been removed, and is no longer used for chains running this version of the contracts.
Full Changelog
The full contracts diff between this release and the prior release can be found at the link below. Note that, because this is a monorepo, this will likely include many unrelated changes and will be a noisy diff.
Release op-contracts/v1.4.0-rc.2 - Fault Proofs V1
Overview
This release candidate enables fault proofs in the withdrawal path of the bridge on L1. It also modifies the `SystemConfig` to remove the legacy `L2OutputOracle` contract in favor of the `DisputeGameFactory`.
Specification here.
The full set of L1 contracts included in this release is:
- AddressManager: Latest (this has no version) (No change from prior version)
- AnchorStateRegistry: 1.0.0 (New Contract)
- DelayedWETH: 1.0.0 (New Contract)
- DisputeGameFactory: 1.0.0 (New Contract)
- L1CrossDomainMessenger: 2.3.0 (No change from prior version)
- L1ERC721Bridge: 2.1.0 (No change from prior version)
- L1StandardBridge: 2.1.0 (No change from prior version)
- OptimismMintableERC20Factory: 1.9.0 (No change from prior version)
- OptimismPortal: 3.8.0 (Modified from prior version, with breaking changes)
- SystemConfig: 2.0.0 (Modified from prior version, with breaking changes)
- SuperchainConfig: 1.1.0 (No change from prior version)
- ProtocolVersions: 1.0.0 (No change from prior version)
The L2OutputOracle is no longer used for chains running this version of the L1 contracts.
Contracts Changed
L2OutputOracle
- The `L2OutputOracle` has been removed from the deployed protocol.
OptimismPortal
- The `OptimismPortal` has been modified to allow users to prove their withdrawals against outputs that were proposed as dispute games, created via a trusted `DisputeGameFactory` contract. spec.
SystemConfig
- The `SystemConfig` has been changed to remove the `L2_OUTPUT_ORACLE` storage slot as well as the getter for the contract. To replace it, a new getter for the `DisputeGameFactory` proxy has been added.
New Contracts
DisputeGameFactory
- The `DisputeGameFactory` is the new inbox for L2 output proposals on L1, creating dispute games.
- Output proposals are now permissionless by default.
- Challenging output proposals is now permissionless by default.
FaultDisputeGame
- The `FaultDisputeGame` facilitates trustless disputes over L2 output roots proposed on L1. spec.
PermissionedDisputeGame
- A child of the `FaultDisputeGame` contract that permissions proposing and challenging. Deployed as a safety mechanism to temporarily restore liveness in the event of the `FaultDisputeGame`'s failure.
MIPS
- The `MIPS` VM is a minimal kernel emulating the MIPS32 ISA with a subset of available Linux syscalls. This contract allows for executing single steps of a fault proof program at the base case of disputes in the `FaultDisputeGame`. spec.
PreimageOracle
- The `PreimageOracle` contract is responsible for serving verified data to the program running on top of the `MIPS` VM during single-step execution. When data enters the `PreimageOracle`, it is verified to be correctly formatted and honest. spec.
AnchorStateRegistry
- The `AnchorStateRegistry` contract is responsible for tracking the latest finalized root claims from various dispute game types.
DelayedWETH
- `DelayedWETH` is an extension of `WETH9` that delays unwrapping operations. Bonds placed in dispute games are held within this contract, and the owner may intervene in withdrawals to redistribute funds to submitters in case of dispute game resolution failure.
Full Changelog
op-node v1.7.5
Partial changelog (op-node)
- chore(op-service): reduce allocations by @hoank101 in #10331
- op-service/eth: Optimize ssz decoding by @sebastianst in #10362
New Contributors (all monorepo)
- @Ethnical made their first contribution in #10246
- @AaronChen0 made their first contribution in #10284
- @threewebcode made their first contribution in #10229
- @SanShi2023 made their first contribution in #10329
- @hoank101 made their first contribution in #10331
Full Changelog (all monorepo): v1.7.4...op-node/v1.7.5
🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.7.5
op-stack v1.7.4
⚠️ Strongly recommended maintenance release
🐞 op-node blob reorg bug fix (#10210)
If an L1 block got reorged out during blob retrieval, an op-node could get stuck in a loop retrieving a blob that would never exist, requiring a restart. This was fixed by internally signaling the right error types, forcing a derivation pipeline reset in such cases.
✨ op-batcher & op-proposer node sync start (#10116 #10193 #10262 #10273)
op-batcher and op-proposer can now wait for the sequencer to sync to the current L1 head before starting their work.
This fixes an issue where restarting op-batcher/op-proposer and op-node at the same time might cause duplicate batches to be resent from the last finalized L2 block, because a freshly restarted op-node resyncs from the finalized head, potentially signaling a too-early safe head in its sync status.
🏳️ This feature is off by default, so we recommend testing it using the new batcher and proposer flag `--wait-node-sync` (or its corresponding env vars).
Enabling this will cause op-batcher and op-proposer to wait at startup for the sequencer's verifier confirmation depth, typically 4 L1 blocks, or ~1 min.
🏳️ To speed up this process in case no recent batcher transactions have happened, there is another optional new batcher flag, `--check-recent-txs-depth`, that lets the batcher check for recent batcher transactions to determine a potentially earlier sync target. This feature is off by default (0) and should be set to the sequencer's verifier confirmation depth to enable it.
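The two flags can be combined. A minimal sketch, not a complete invocation: the flag names come from the notes above, while the values and the elided remainder of the command line are illustrative assumptions.

```shell
# Sketch only: enable node-sync wait and the recent-tx check on the batcher.
# The depth value (4) assumes a verifier confirmation depth of 4 L1 blocks.
op-batcher \
  --wait-node-sync=true \
  --check-recent-txs-depth=4
# op-proposer accepts the same --wait-node-sync flag (but not the tx-depth one).
```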
Partial changelog
op-node
- fix: set version during build process for op-node,batcher,proposer by @bitwiseguy in #10087
- fix: Fix the error judgment when obtaining the finalized/safe block o… by @anonymousGiga in #10127
- chore: Update dependency on `superchain` package by @geoknee in #10204
- op-service: Return ethereum.NotFound on 404 by @trianglesphere in #10210
- op-node: remove dependency on bindings by @tynes in #10213
- op-node: prevent spamming of reqs for blocks triggered by `checkForGapInUnsafeQueue` by @bitwiseguy in #10063
op-batcher & op-proposer
- op-service/dial: Add WaitRollupSync by @sebastianst in #10116
- op-proposer: remove dep on op-bindings by @tynes in #10218
- op-batcher: wait for node sync & check recent L1 txs at startup to avoid duplicate txs by @bitwiseguy in #10193
- op-proposer: Add option to wait for rollup sync during startup by @bitwiseguy in #10262
- op-batcher: Always use recent block from startup tx check by @sebastianst in #10273
New Contributors (all monorepo)
- @sellskin made their first contribution in #9963
- @bitwiseguy made their first contribution in #10087
- @0xyjk made their first contribution in #10101
- @anonymousGiga made their first contribution in #10127
- @iczc made their first contribution in #10131
- @brucexc made their first contribution in #9874
- @dome made their first contribution in #10165
- @testwill made their first contribution in #10203
- @dajuguan made their first contribution in #9986
Full Changelog: v1.7.3...v1.7.4
🚢 Docker Images:
op-contracts/v1.4.0-rc.1
This contracts release adds an optional DA Challenge contract for use with OP Plasma. If `usePlasma` is set to true in the deploy config, then the OP Plasma feature will be enabled.
The DA challenge contract is used to ensure that data posted as part of OP Plasma is made available. There are four deploy config parameters that must be set when using this feature: `daChallengeWindow`, `daResolveWindow`, `daBondSize`, and `daResolverRefundPercentage`.
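As a hypothetical illustration only, a deploy-config fragment setting these fields might look like the following. The field names come from the note above; the values are placeholders, not recommendations.

```shell
# Write an illustrative OP Plasma deploy-config fragment.
# All values below are assumptions for demonstration purposes.
cat > plasma-deploy-config.json <<'EOF'
{
  "usePlasma": true,
  "daChallengeWindow": 160,
  "daResolveWindow": 160,
  "daBondSize": 1000000,
  "daResolverRefundPercentage": 100
}
EOF
```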
Release op-node, op-batcher, op-proposer v1.7.3
⬆️ This is a recommended release for Optimism Mainnet, particularly for op-batcher operators.
This release contains general fixes & improvements to op-node, op-batcher, & op-proposer. It also updates the monorepo op-geth dependency to https://github.com/ethereum-optimism/op-geth/releases/tag/v1.101311.0
The most important change to be aware of is that the op-batcher is now significantly more performant in handling span batches that contain a large number of L2 blocks.
Partial Changelog
- Rename derive.CompressorFullErr to conventional ErrCompressorFull by @sebastianst in #9936
- chore(op-proposer): Update Proposer Description by @refcell in #9916
- op-node: fetch l1 block with retry by @jsvisa in #9869
- op-challenger: Unhide subcommands by @ajsutton in #9989
- Tests: Batching Benchmarks by @axelKingsley in #9927
- feat(op-service):Persist RethDB instance in the go fetcher struct. by @Nickqiaoo in #9904
- op-batcher: stateful span batches & blind compressor by @axelKingsley in #9954
- simplify bigMSB by @zhiqiangxu in #9998
- all: use the built-in slices library by @carehabit in #10005
- op-node: p2p ping test CI flake fix by @protolambda in #10010
- fix(op-node): handle async disconnects to avoid test flakiness by @felipe-op in #10019
- Update op-geth dependency to v1.101309.0-rc.2 by @roberto-bayardo in #9935
- txmgr: fix racy access to nonces slice in TestQueue_Send with mutex by @sebastianst in #10016
- CI: Less verbose output by @trianglesphere in #10059
- handle `Read` more correctly by @zhiqiangxu in #10034
- update geth dependency to version w/ v1.13.11 upstream commits by @roberto-bayardo in #10041
- op-batcher: Embed Zlib Compressor into Span Channel Out ; Compression Avoidance Strategy by @axelKingsley in #10002
Full Changelog: v1.7.2...v1.7.3
🚢 Docker Images:
op-node, op-batcher, op-proposer v1.7.2 - Batcher Improvements
⬆️ This is a strongly recommended release of op-batcher for all chain operators.
op-batcher changes
Multi-blob support in op-batcher
See the release notes for `v1.7.2-rc.3` for details on how to configure a multi-blob batcher.
Improved channel duration tracking
The batcher now tracks channel durations relative to the last L1 origin in a previous channel. The last channel's L1 origin is restored at startup and during reorgs.
This ensures that the desired channel duration survives restarts of the batcher, which is particularly important for low-throughput chains that use channel durations of a few hours.
There's a known quirk in the new tracking design that leads to a slightly lower effective channel duration (~1 min lower), related to how a channel timeout is determined relative to the current L1 head rather than the current channel's newest L1 origin. This will be improved in a future release.
Breaking compressor configuration change
The channel and compressor configuration was simplified by removing the `target-frame-size` flag. The only configuration parameters left to configure the channel size are:
- `max-l1-tx-size` - default of 120k for calldata; for blobs this is overridden to the max blob size
- `target-num-frames` - default of 1 for calldata; for multi-blob txs, set this to the desired number of blobs per blob tx (e.g. 6)
The default compressor is the shadow compressor, which is recommended in production.
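A minimal sketch of the resulting channel configuration for a 6-blob batcher, assuming flag spellings that mirror the `OP_BATCHER_*` env vars shown later in these notes; the rest of the command line is elided.

```shell
# Sketch only: channel sizing after the simplification described above.
# --max-l1-tx-size is left at its default (120k); with blobs it is
# overridden to the max blob size automatically.
op-batcher \
  --data-availability-type=blobs \
  --target-num-frames=6
```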
Overflow frames bug fix
The batcher now correctly estimates a channel's output size, fixing a rare but regularly occurring bug that produced overflow frames, which could lead, for example, to a 7th blob being sent in a second batcher transaction.
op-node changes
- Improved peering behavior
- Per-chain hardfork activation times via superchain-registry
Partial Changelog
- feat(op-node): clean peer state when disconnectPeer is called and log intercept blocks by @felipe-op in #9706
- make txmgr aware of the txpool.ErrAlreadyReserved condition by @roberto-bayardo in #9683
- op-node/rollup/derive: also mark `IsLast` as `true` when `closed && maxDataSize==readyBytes` by @zhiqiangxu in #9696
- op-node: Record genesis as being safe from L1 genesis by @ajsutton in #9684
- TXManager: add IsClosed to TxMgr and use check in BatchSubmitter by @axelKingsley in #9470
- Remove hardfork activation time overrides by @geoknee in #9642
- feat(op-node): gater unblock by @felipe-op in #9763
- op-node: Restore previous unsafe chain when invalid span batch by @pcw109550 in #8925
- export ChannelBuilder so we can use it in external analysis scripts by @roberto-bayardo in #9784
- op-node: Unhide the safedb.path option by @ajsutton in #9789
- More bootnodes by @trianglesphere in #9801
- simplify channel state publishing flow by separating tx sending from result processing by @roberto-bayardo in #9757
- op-batcher: Multi-blob Support by @sebastianst in #9779
- feat: add tx data version byte by @tchardin in #9845
- op-batcher: more accurate max channel duration tracking by @danyalprout in #9769
- remove an impossible condition in `NextBatch` by @zhiqiangxu in #9885
- feat(op-node): p2p rpc input validation by @felipe-op in #9897
- op-batcher: rework channel & compressor config, fix overhead bug by @sebastianst in #9887
- op-batcher: fix "handle receipt" log message to properly log id by @sebastianst in #9918
New Contributors
- @alecananian made their first contribution in #9805
- @friendwu made their first contribution in #9862
Full Changelog: v1.7.0...v1.7.2
🚢 Docker Images
op-batcher v1.7.2-rc.3 - Multi-Blob Batcher
🔴✨ Multi-Blob Batcher Pre-Release
The op-batcher in this release candidate has the capabilities to send multiple blobs per single blob transaction. This is accomplished by the use of multi-frame channels, see the specs for more technical details on channels and frames.
A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is:
- OP_BATCHER_BATCH_TYPE=1 # span batches, optional
- OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
- OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per tx
- OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_RESUBMISSION_TIMEOUT=240s # wait 4 min before bumping fees
This enables blob transactions and sets the target number of frames to 6, which translates to 6 blobs per transaction. The min. tip cap and base fee are also lifted to 2 gwei because it is uncertain how easy it will be to get 6-blob transactions included, and slightly higher priority fees should help. The resubmission timeout is increased to a few minutes to give more time for inclusion before bumping fees, because current txpool implementations require a doubling of fees for blob transaction replacements.
Multi-blob transactions are particularly interesting for medium to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time. You can use this calculator for your chain to determine what number of blobs are right for you, and what gas scalar configuration to use. Please also refer to our documentation on Blobs for chain operators.
🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.7.2-rc.3
A full `v1.7.2` release of the op-stack follows soon.
Release op-node v1.7.1
⬆️ This is a recommended release for node operators using Snap Sync on Optimism Mainnet & Sepolia. For other users, this is a minor release. Node operators should be on at least v1.7.0.
Changes
- This release contains a fix to snap sync to ensure that all blocks are inserted into the execution engine when snap sync completes. Previously, once snap sync completed, if blocks were received out of order, the op-node could end up with internally inconsistent state, and the unsafe head could stall for a period of time.
- This release also contains a safeDB feature, which tracks the L1 block from which each L2 block is derived.
Partial Changelog
- op-node: Add option to enable safe head history database by @ajsutton in #9575
- op-node: Add flag category and improve testing by @ajsutton in #9636
- op-node: fix finalize log by @will-2012 in #9643
- op-node: p2p pinging background service by @protolambda in #9620
- op-node: Cleanup unsafe payload handling by @trianglesphere in #9661
New Contributors
- @will-2012 made their first contribution in #9643
Full Changelog: op-node/v1.7.0...op-node/v1.7.1