all disk deletes hang while Crucible downstairs is unreachable #4331
davepacheco changed the title from "Disk delete hangs can cause future disk deletes to hang" to "all disk deletes hang while Crucible downstairs is unreachable" on Apr 17, 2024.
I updated the synopsis here to reflect that it's not the first hang that causes subsequent hangs -- rather, all disk deletes appear to hang as long as any Crucible downstairs is unreachable (from my read of the description).
jmpesp added a commit to jmpesp/omicron that referenced this issue on May 1, 2024:
When a disk is expunged, any region that was on that disk is assumed to be gone. A single disk expungement can put many Volumes into degraded states, as one of the three mirrors of a region set is now gone. Volumes that are degraded in this way remain degraded until a new region is swapped in and the Upstairs performs the necessary repair operation (either through a Live Repair or Reconciliation).

Nexus can only initiate these repairs - it does not participate in them, instead requesting that a Crucible Upstairs perform the repair. These repair operations can only be done by an Upstairs running as part of an activated Volume: either Nexus has to send this Volume to a Pantry and repair it there, or Nexus has to talk to a propolis that has that active Volume.

Further complicating things is that the Volumes in question can be activated and deactivated as a result of user action, namely starting and stopping Instances. This will interrupt any on-going repair. This is ok! Both operations support being interrupted, but as a result it's then Nexus' job to continually monitor these repair operations and initiate further operations if the current one is interrupted.

Nexus starts by creating region replacement requests, either manually or as a result of disk expungement. These region replacement requests go through the following states:

        Requested   <--
                      |
            |         |
            v         |
                      |
        Allocating  --
            |
            v
        Running     <--
                      |
            |         |
            v         |
                      |
        Driving     --
            |
            v
        ReplacementDone <--
                          |
            |             |
            v             |
                          |
        Completing      --
            |
            v
        Completed

A single saga invocation is not enough to continually make sure a Volume is being repaired, so region replacement is structured as a series of background tasks and saga invocations from those background tasks. Here's a high level summary:

- a `region replacement` background task:
  - looks for disks that have been expunged and inserts region replacement requests into CRDB with state `Requested`
  - looks for all region replacement requests in state `Requested` (picking up new requests and requests that failed to transition to `Running`), and invokes a `region replacement start` saga.

- the `region replacement start` saga:
  - transitions the request to state `Allocating`, blocking out other invocations of the same saga
  - allocates a new replacement region
  - alters the Volume Construction Request by swapping out the old region for the replacement one
  - transitions the request to state `Running`
  - any unwind will transition the request back to the `Requested` state.

- a `region replacement drive` background task:
  - looks for requests with state `Running`, and invokes the `region replacement drive` saga for those requests
  - looks for requests with state `ReplacementDone`, and invokes the `region replacement finish` saga for those requests

- the `region replacement drive` saga will:
  - transition a request to state `Driving`, again blocking out other invocations of the same saga
  - check if Nexus has taken an action to initiate a repair yet. if not, then one is needed. if it _has_ previously initiated a repair operation, the state of the system is examined: is that operation still running? has something changed? further action may be required depending on this observation.
  - if an action is required, Nexus will prepare an action that will initiate either Live Repair or Reconciliation based on the current observed state of the system.
  - that action is then executed. if there was an error, then the saga unwinds. if it was successful, it is recorded as a "repair step" in CRDB and will be checked the next time the saga runs.
  - if Nexus observed an Upstairs telling it that a repair was completed or not necessary, then the request is placed into the `ReplacementDone` state, otherwise it is placed back into the `Running` state. if the saga unwinds, it unwinds back to the `Running` state.

- finally, the `region replacement finish` saga will:
  - transition a request into `Completing`
  - delete the old region by deleting a transient Volume that refers to it (in the case where a sled or disk is actually physically gone, expunging that will trigger oxidecomputer#4331, which needs to be fixed!)
  - transition the request to the `Complete` state

More detailed documentation is provided in each of the region replacement saga's beginning docstrings.

Testing was done manually using the Canada region using the following test cases:

- a disk needing repair is attached to an instance for the duration of the repair
- a disk needing repair is attached to an instance that is migrated mid-repair
- a disk needing repair is attached to an instance that is stopped mid-repair
- a disk needing repair is attached to an instance that is stopped mid-repair, then started in the middle of the pantry's repair
- a detached disk needs repair
- a detached disk needs repair, and is then attached to an instance that is then started
- a sled is expunged, causing region replacement requests for all regions on it

Fixes oxidecomputer#3886
Fixes oxidecomputer#5191
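The request lifecycle above is, loosely, a state machine. Below is a minimal Rust sketch of the states and the forward path between them; the enum name, derives, and comments are illustrative only and are not the actual omicron schema or saga types.

```rust
/// Illustrative only: a sketch of the region replacement request states
/// described in the commit message above, not omicron's real definition.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RegionReplacementState {
    Requested,       // created by the `region replacement` background task
    Allocating,      // held by the `region replacement start` saga
    Running,         // a replacement region is allocated and swapped in
    Driving,         // held by the `region replacement drive` saga
    ReplacementDone, // an Upstairs reported the repair done or unnecessary
    Completing,      // held by the `region replacement finish` saga
    Completed,       // the old region has been deleted
}

fn main() {
    use RegionReplacementState::*;

    // Forward path; per the diagram, each saga unwind steps back one state
    // (Allocating -> Requested, Driving -> Running, Completing -> ReplacementDone).
    let forward = [
        Requested, Allocating, Running, Driving, ReplacementDone, Completing, Completed,
    ];
    println!("{forward:?}");
}
```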
jmpesp added a commit to jmpesp/omicron that referenced this issue on May 17, 2024:
If there's a call to an external service, saga execution cannot move forward until the result of that call is known, in the sense that Nexus received a result. If there are transient problems, Nexus must retry until a known result is returned. This is problematic when the destination service is gone - Nexus will retry indefinitely, halting the saga execution. Worse, in the case of sagas calling the volume delete subsaga, subsequent calls will also halt.

With the introduction of a physical disk policy, Nexus can know when to stop retrying a call - the destination service is gone, so the known result is an error.

This commit adds a `ProgenitorOperationRetry` object that takes an operation to retry plus a "gone" check, and checks on each retry iteration whether the destination is gone. If it is, then bail out; otherwise assume that any errors seen are transient.

Further work is required to deprecate the `retry_until_known_result` function, as retrying indefinitely is a bad pattern.

Fixes oxidecomputer#4331
Fixes oxidecomputer#5022
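The core idea - retry, but consult a "gone" check before each attempt - can be sketched roughly as below. This is a simplified, synchronous sketch only: the function and type names here (`retry_with_gone_check`, `RetryOutcome`) are hypothetical, and the real `ProgenitorOperationRetry` in omicron is async and has a different API.

```rust
use std::{cell::Cell, thread, time::Duration};

/// Outcome of a retried call: either the destination returned a real result,
/// or we learned the destination is permanently gone.
enum RetryOutcome<T> {
    Done(T),
    DestinationGone,
}

/// Keep retrying `operation`, but check `destination_is_gone` before each
/// attempt so a request to an expunged sled/disk does not spin forever.
fn retry_with_gone_check<T, E>(
    mut operation: impl FnMut() -> Result<T, E>,
    mut destination_is_gone: impl FnMut() -> bool,
) -> RetryOutcome<T> {
    loop {
        // If the destination (e.g. a Crucible agent on an expunged disk) is
        // known to be gone, stop: the "known result" is an error.
        if destination_is_gone() {
            return RetryOutcome::DestinationGone;
        }
        match operation() {
            Ok(value) => return RetryOutcome::Done(value),
            // Treat every error as transient in this sketch; real code would
            // distinguish transient from permanent failures.
            Err(_) => thread::sleep(Duration::from_millis(100)),
        }
    }
}

fn main() {
    let attempts = Cell::new(0u32);
    let outcome = retry_with_gone_check(
        // The operation always fails, as if the agent were on a removed sled.
        || -> Result<(), &'static str> {
            attempts.set(attempts.get() + 1);
            Err("connection refused")
        },
        // Pretend an inventory/policy check reports the destination expunged
        // after a few failed attempts.
        || attempts.get() >= 3,
    );
    match outcome {
        RetryOutcome::Done(()) => println!("operation succeeded"),
        RetryOutcome::DestinationGone => {
            println!("destination gone after {} attempts; bailing out", attempts.get())
        }
    }
}
```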
When the `disk_delete` saga runs, one of its responsibilities is to clean up regions that are freed as a result of a volume being deleted (`svd_delete_freed_crucible_regions`). This uses the following query to find regions that need to be cleaned up:

[query not captured here]

The results of this look something like (keys redacted):

[query output not captured here]

One of the related datasets points to a sled that had been physically removed from the rack: `fd00:1122:3344:11b::7`. The regions returned from the query will eventually reach `delete_crucible_region`, which will attempt to delete the region in a `retry_until_known_result` loop. Given that the sled is no longer reachable, this saga gets stuck in a running state indefinitely.

Given that the above query looks for regions that have been freed for any reason (as opposed to only regions freed by actions in the containing saga), all future `disk_delete` sagas will also find these regions and attempt to delete them. This essentially means that once a single disk delete hangs, all future disk delete operations will hang.

In the instance where we saw this, it was ultimately the result of data left over in CockroachDB from the attempt to clean up after removing sled 10.