
Releases: ethereum-optimism/optimism

Release op-contracts/v1.5.0-rc.1 — Safe Extensions

16 May 14:31
9047beb

Overview

This release adds 3 Safe extensions that are intended to help OP Stack chains reach Stage 1. You can read more about this in the governance post.

Contract Changes

Only the following 3 contracts are released in this version:

  • LivenessGuard, a guard for Safes.
  • LivenessModule, a module for Safes, intended to be paired with LivenessGuard.
  • DeputyGuardianModule, a module for Safes.

These are all unrelated to the core OP Stack protocol contracts.

Full Contract Set

A chain using this contracts release must be using the following contracts at the specified semvers. Because the op-contracts/v1.4.0 release is pending governance approval, the versions below are from the op-contracts/v1.3.0 release.

The three new contracts:

  • LivenessGuard: 1.0.0
  • LivenessModule: 1.2.0
  • DeputyGuardianModule: 1.1.0

And the prior release from op-contracts/v1.3.0:

  • AddressManager: Latest (this has no version) (No change from prior version)
  • L1CrossDomainMessenger: 2.3.0
  • L1ERC721Bridge: 2.1.0
  • L1StandardBridge: 2.1.0
  • L2OutputOracle: 1.8.0
  • OptimismMintableERC20Factory: 1.9.0
  • OptimismPortal: 2.5.0
  • SystemConfig: 1.12.0
  • SuperchainConfig: 1.1.0 (No change from prior version)
  • ProtocolVersions: 1.0.0 (No change from prior version)

Full Changelog

The full contracts diff between this release and the prior release can be found at the link below. Note that, because this is a monorepo, this will likely include many unrelated changes and will be a noisy diff. Because op-contracts/v1.4.0 is also pending governance approval, this release is compared to op-contracts/v1.3.0.

op-contracts/v1.3.0...op-contracts/v1.5.0-rc.1

Release op-contracts/v1.4.0-rc.4 - Fault Proofs

21 May 15:39
547ea72

Overview

This release adds Fault Proofs to help OP Stack chains reach Stage 1. You can read more about this in the governance post.

Contract Changes

This includes the following new contracts:

  • FaultDisputeGame, an implementation of the IDisputeGame interface for Fault Proofs.
  • PermissionedDisputeGame, a FaultDisputeGame contract that is permissioned.
  • DisputeGameFactory, a factory contract for creating IDisputeGame contracts.
  • AnchorStateRegistry, stores the latest "anchor" state used by the FaultDisputeGame contract.
  • DelayedWETH, an extension to WETH9 that allows delayed withdrawals.
  • MIPS, an onchain MIPS32 VM.
  • PreimageOracle, a contract for storing permissioned pre-images.

Full Contract Set

A chain using this contracts release must be using the following contracts at the specified semvers.

The new contracts:

  • FaultDisputeGame: 1.2.0
  • PermissionedDisputeGame: 1.2.0
  • DisputeGameFactory: 1.0.0
  • AnchorStateRegistry: 1.0.0
  • DelayedWETH: 1.0.0
  • MIPS: 1.0.1
  • PreimageOracle: 1.0.0

The updated contracts:

  • OptimismPortal: 3.10.0
  • SystemConfig: 2.2.0

And contracts unchanged from the prior op-contracts/v1.3.0 release:

  • AddressManager: Latest (This has no version)
  • L1CrossDomainMessenger: 2.3.0
  • L1ERC721Bridge: 2.1.0
  • L1StandardBridge: 2.1.0
  • OptimismMintableERC20Factory: 1.9.0
  • SuperchainConfig: 1.1.0
  • ProtocolVersions: 1.0.0

Note that the L2OutputOracle has been removed, and is no longer used for chains running this version of the contracts.

Full Changelog

The full contracts diff between this release and the prior release can be found at the link below. Note that, because this is a monorepo, this will likely include many unrelated changes and will be a noisy diff.

op-contracts/v1.3.0...op-contracts/v1.4.0-rc.4

Release op-contracts/v1.4.0-rc.2 - Fault Proofs V1

03 May 17:21
a6d4eed

Overview

This release candidate enables fault proofs in the withdrawal path of the bridge on L1. It also modifies the SystemConfig to remove the legacy L2OutputOracle contract in favor of the DisputeGameFactory.

Specification here.

The full set of L1 contracts included in this release is:

  • AddressManager: Latest (this has no version) (No change from prior version)
  • AnchorStateRegistry: 1.0.0 (New Contract)
  • DelayedWETH: 1.0.0 (New Contract)
  • DisputeGameFactory: 1.0.0 (New Contract)
  • L1CrossDomainMessenger: 2.3.0 (No change from prior version)
  • L1ERC721Bridge: 2.1.0 (No change from prior version)
  • L1StandardBridge: 2.1.0 (No change from prior version)
  • OptimismMintableERC20Factory: 1.9.0 (No change from prior version)
  • OptimismPortal: 3.8.0 (Modified from prior version, with breaking changes)
  • SystemConfig: 2.0.0 (Modified from prior version, with breaking changes)
  • SuperchainConfig: 1.1.0 (No change from prior version)
  • ProtocolVersions: 1.0.0 (No change from prior version)

The L2OutputOracle is no longer used for chains running this version of the L1 contracts.

Contracts Changed

  1. L2OutputOracle
    • The L2OutputOracle has been removed from the deployed protocol.
  2. OptimismPortal
    • The OptimismPortal has been modified to allow users to prove their withdrawals against outputs that were proposed as dispute games, created via a trusted DisputeGameFactory contract. spec.
  3. SystemConfig
    • The SystemConfig has been changed to remove the L2_OUTPUT_ORACLE storage slot as well as the getter for the contract. To replace it, a new getter for the DisputeGameFactory proxy has been added.

New Contracts

  1. DisputeGameFactory
    • The DisputeGameFactory is the new inbox for L2 output proposals on L1, creating dispute games.
    • Output proposals are now permissionless by default.
    • Challenging output proposals is now permissionless by default.
  2. FaultDisputeGame
    • The FaultDisputeGame facilitates trustless disputes over L2 output roots proposed on L1. spec.
  3. PermissionedDisputeGame
    • A child of the FaultDisputeGame contract that permissions proposing and challenging. Deployed as a safety mechanism to temporarily restore liveness in the event of the FaultDisputeGame's failure.
  4. MIPS
    • The MIPS VM is a minimal kernel emulating the MIPS32 ISA with a subset of available Linux syscalls. This contract allows for executing single steps of a fault proof program at the base case of disputes in the FaultDisputeGame. spec.
  5. PreimageOracle
    • The PreimageOracle contract is responsible for serving verified data to the program running on top of the MIPS VM during single-step execution. When data enters the PreimageOracle, it is verified to be correctly formatted and honest. spec.
  6. AnchorStateRegistry
    • The AnchorStateRegistry contract is responsible for tracking the latest finalized root claims from various dispute game types.
  7. DelayedWETH
    • DelayedWETH is an extension of WETH9 that delays unwrapping operations. Bonds that are placed in dispute games are held within this contract, and the owner may intervene in withdrawals to redistribute funds to submitters in case of dispute game resolution failure.

Full Changelog

op-contracts/v1.4.0-rc.1...op-contracts/v1.4.0-rc.2

op-node v1.7.5

02 May 18:22
e87e5ef

⚠️ This is a recommended maintenance release of op-node. It fixes a bug in the SSZ unmarshaling implementation of the p2p req-resp protocol (#10362).

Partial changelog (op-node)

New Contributors (all monorepo)

Full Changelog (all monorepo): v1.7.4...op-node/v1.7.5

🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.7.5

op-stack v1.7.4

26 Apr 18:09
24a8d3e

⚠️ Strongly recommended maintenance release

🐞 op-node blob reorg bug fix (#10210)

If an L1 block got reorg'd out during blob retrieval, an op-node might get stuck in a loop retrieving a blob that will never exist, requiring a restart. This got fixed by internally signaling the right error types, forcing a derivation pipeline reset in such cases.

✨ op-batcher & op-proposer node sync start (#10116 #10193 #10262 #10273)

op-batcher and op-proposer can now wait for the sequencer to sync to the current L1 head before starting their work.
This fixes an issue where restarting op-batcher/op-proposer and op-node at the same time could cause duplicate batches to be resent from the last finalized L2 block, because a freshly restarted op-node resyncs from the finalized head and may report a too-early safe head in its sync status.

🏳️ This feature is off by default, so we recommend testing it by using the new batcher and proposer flag --wait-node-sync (or its corresponding env vars).

Enabling this will cause op-batcher and op-proposer to wait at startup for the sequencer's verifier confirmation depth, typically 4 L1 blocks, or ~1 min.

🏳️ To speed up this process in case no recent batcher transactions have happened, there's another optional new batcher flag --check-recent-txs-depth that lets the batcher check for recent batcher transactions to determine a potentially earlier sync target. This feature is off by default (0) and should be set to the sequencer's verifier confirmation depth to enable it.
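For illustration only, here is a minimal sketch of both options in the env-var style used elsewhere in these notes. The flag names come from this release; the OP_BATCHER_*/OP_PROPOSER_* env-var spellings are assumed from the usual flag-to-env-var mapping, and the depth of 4 simply mirrors the typical verifier confirmation depth mentioned above:

      # op-batcher (env-var names assumed from the flags above)
      - OP_BATCHER_WAIT_NODE_SYNC=true       # wait for the sequencer to sync to the current L1 head at startup
      - OP_BATCHER_CHECK_RECENT_TXS_DEPTH=4  # look back over recent batcher txs for an earlier sync target; 0 (default) disables this
      # op-proposer (env-var name assumed)
      - OP_PROPOSER_WAIT_NODE_SYNC=true      # same wait-for-node-sync behavior for the proposer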

Partial changelog

op-node

op-batcher & op-proposer

New Contributors (all monorepo)

Full Changelog: v1.7.3...v1.7.4

🚢 Docker Images:

op-contracts/v1.4.0-rc.1

23 Apr 14:39
802c283
Pre-release

This contracts release adds an optional DA Challenge contract for use with OP Plasma. If usePlasma is set to true in the deploy config, then the OP Plasma feature will be enabled.

The DA challenge contract is used to ensure that data posted as part of OP Plasma is made available. There are four deploy config parameters that must be set when using this feature: daChallengeWindow, daResolveWindow, daBondSize, and daResolverRefundPercentage.
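For illustration only, a deploy-config fragment with these fields might look like the sketch below. The key names are the ones listed above; the values are arbitrary placeholders, not recommendations, and each chain should choose values appropriate for its own DA setup (see the OP Plasma specs for the meaning and units of each field):

      "usePlasma": true,
      "daChallengeWindow": 160,
      "daResolveWindow": 160,
      "daBondSize": 1000000,
      "daResolverRefundPercentage": 0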

Release op-node, op-batcher, op-proposer v1.7.3

11 Apr 23:35
a3cc8f2

⬆️ This is a recommended release for Optimism Mainnet, particularly for op-batcher operators.

This release contains general fixes & improvements to op-node, op-batcher, & op-proposer. It also updates the monorepo op-geth dependency to https://github.com/ethereum-optimism/op-geth/releases/tag/v1.101311.0

The most important change to be aware of is that the op-batcher is now significantly more performant in handling span batches that contain a large number of L2 blocks.

Partial Changelog

Full Changelog: v1.7.2...v1.7.3

🚢 Docker Images:

op-node, op-batcher, op-proposer v1.7.2 - Batcher Improvements

22 Mar 19:01
99a5338

⬆️ This is a strongly recommended release of op-batcher for all chain operators.

op-batcher changes

Multi-blob support in op-batcher

See release notes for v1.7.2-rc.3 for details on how to configure a multi-blob batcher.

Improved channel duration tracking

The batcher now tracks channel durations relative to the last L1 origin in a previous channel. The last channel's L1 origin is restored at startup and during reorgs.

This ensures that the desired channel duration survives restarts of the batcher, which is particularly important for low-throughput chains that use channel durations of a few hours.

There's a known quirk in the new tracking design that leads to a slightly lower effective channel duration (~1 min lower), related to how a channel's timeout is determined relative to the current L1 head rather than the current channel's newest L1 origin. This will be improved in a future release.

Breaking compressor configuration change

The channel and compressor configuration was simplified by removing the target-frame-size flag. The only configuration parameters left for controlling the channel size are:

  • max-l1-tx-size - default of 120k for calldata; for blobs this is overridden to the max blob size
  • target-num-frames - default of 1 for calldata; for multi-blob txs, set this to the desired number of blobs per blob tx (e.g. 6)

The default compressor is the shadow compressor, which is recommended in production.
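For illustration only, here is a sketch of these two settings expressed as env vars, assuming the usual OP_BATCHER_ mapping of the flag names above (OP_BATCHER_TARGET_NUM_FRAMES also appears in the multi-blob example for v1.7.2-rc.3 further below; OP_BATCHER_MAX_L1_TX_SIZE is an assumed spelling):

      # calldata batcher (defaults described above)
      - OP_BATCHER_DATA_AVAILABILITY_TYPE=calldata
      - OP_BATCHER_MAX_L1_TX_SIZE=120000   # env-var name assumed from the max-l1-tx-size flag
      - OP_BATCHER_TARGET_NUM_FRAMES=1
      # 6-blob batcher
      - OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
      - OP_BATCHER_TARGET_NUM_FRAMES=6     # max-l1-tx-size is overridden to the max blob size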

Overflow frames bug fix

The batcher now correctly estimates a channel's output size, fixing a rare but regularly occurring bug that produced overflow frames, leading, for example, to a 7th blob being sent in a second batcher transaction.

op-node changes

  • Improved peering behavior
  • Per-chain hardfork activation times via superchain-registry

Partial Changelog

New Contributors

Full Changelog: v1.7.0...v1.7.2

🚢 Docker Images

op-batcher v1.7.2-rc.3 - Multi-Blob Batcher

13 Mar 10:08
25985c1

🔴✨ Multi-Blob Batcher Pre-Release

The op-batcher in this release candidate can send multiple blobs per blob transaction. This is accomplished by the use of multi-frame channels; see the specs for more technical details on channels and frames.

A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is:

      - OP_BATCHER_BATCH_TYPE=1 # span batches, optional
      - OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
      - OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per tx
      - OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei, might need to tweak, depending on gas market
      - OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei, might need to tweak, depending on gas market
      - OP_BATCHER_RESUBMISSION_TIMEOUT=240s # wait 4 min before bumping fees

This enables blob transactions and sets the target number of frames to 6, which translates to 6 blobs per transaction. The min. tip cap and base fee are also lifted to 2 gwei because it is uncertain how easy it will be to get 6-blob transactions included, and slightly higher priority fees should help. The resubmission timeout is increased to a few minutes to give more time for inclusion before bumping the fees, because current txpool implementations require a doubling of fees for blob transaction replacements.

Multi-blob transactions are particularly interesting for medium to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time. You can use this calculator for your chain to determine what number of blobs are right for you, and what gas scalar configuration to use. Please also refer to our documentation on Blobs for chain operators.
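As a rough sizing aid, assuming the standard EIP-4844 blob size of 128 KiB (slightly less is usable after encoding overhead):

      ~128 KiB/blob × 6 blobs/tx ≈ 768 KiB of compressed batch data per batcher transaction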

🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.7.2-rc.3

A full v1.7.2 release of the op-stack follows soon.

Release op-node v1.7.1

06 Mar 17:57
c87a469

⬆️ This is a recommended release for node operators using Snap Sync on Optimism Mainnet & Sepolia. For other users, this is a minor release. Node operators should be on at least v1.7.0.

Changes

  • This release contains a fix to snap sync to ensure that all blocks are inserted into the execution engine when snap sync completes. Previously, once snap sync completed, if blocks were received out of order, the op-node could end up with internally inconsistent state and the unsafe head could stall for a period of time.
  • This release also contains a safeDB feature, which tracks the L1 block that each L2 block is derived from.

Partial Changelog

New Contributors

Full Changelog: op-node/v1.7.0...op-node/v1.7.1

🚢 Docker Images