
routing: shutdown chanrouter correctly. #8497

Open
wants to merge 3 commits into base: master

Conversation

@ziggie1984 (Collaborator) commented Feb 22, 2024

Fixes #8489
EDIT: Fixes #8721

In the issue linked above, the channel graph could not be synced correctly, so the ChanRouter failed during startup:

2024-02-20 11:18:39.217 [INF] CRTR: Syncing channel graph from height=830127 (hash=00000000000000000003a7ed3b7a5f5fd5571a658972e9db0af2a650f6ade198) to height=831246 (hash=00000000000000000002946973960d53538a7d93333ff7d4653a37a577ba4b58)

...

2024-02-20 11:19:11.325 [WRN] LNWL: Query(34) from peer 142.132.193.144:8333 failed, rescheduling: did not get response before timeout
2024-02-20 11:19:11.326 [DBG] BTCN: Sending getdata (witness block 000000000000000000028352e09a42f6d26d0514a3d483f7f1fb56b2c2954361) to 142.132.193.144:8333 (outbound)
2024-02-20 11:19:15.327 [WRN] LNWL: Query(34) from peer 142.132.193.144:8333 failed and reached maximum number of retries, not rescheduling: did not get response before timeout
2024-02-20 11:19:15.327 [DBG] LNWL: Canceled batch 34
2024-02-20 11:19:15.328 [INF] DISC: Authenticated gossiper shutting down
2024-02-20 11:19:15.328 [INF] DISC: Authenticated Gossiper is stopping

So query 34 failed, and therefore the startup of the ChanRouter failed as well.

We fail here and never call the Stop function of the channel router.
https://github.com/lightningnetwork/lnd/blob/master/routing/router.go#L628
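To illustrate the failing pattern with a stripped-down sketch (the subsystem type below is a stand-in, not the actual ChannelRouter code): Start returns an error partway through startup, and the quit channel that Stop would normally close stays open forever.

package main

import (
	"errors"
	"fmt"
)

// subsystem is a simplified stand-in for a component like the ChannelRouter:
// it owns a quit channel that other subsystems select on.
type subsystem struct {
	quit chan struct{}
}

// Start fails partway through startup (like the graph sync above) and
// returns without closing quit, so anyone blocked on quit waits forever.
func (s *subsystem) Start() error {
	s.quit = make(chan struct{})

	if err := syncGraph(); err != nil {
		// BUG: we return here without calling Stop/close(s.quit).
		return fmt.Errorf("unable to sync graph: %w", err)
	}

	return nil
}

// Stop closes quit so dependent subsystems can unblock.
func (s *subsystem) Stop() {
	close(s.quit)
}

// syncGraph stands in for the chain sync that failed in the logs above.
func syncGraph() error {
	return errors.New("did not get response before timeout")
}

func main() {
	s := &subsystem{}
	if err := s.Start(); err != nil {
		fmt.Println("start failed:", err)
		// Without s.Stop() here, s.quit stays open and any goroutine
		// selecting on it will block during shutdown.
	}
}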

However, when cleaning up all the other subsystems, we get stuck:

goroutine 1652 [select]:
github.com/lightningnetwork/lnd/routing.(*ChannelRouter).UpdateEdge(0xc0002190e0, 0xc00028fea0, {0x0, 0x0, 0x0})
        github.com/lightningnetwork/lnd/routing/router.go:2605 +0x155
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).updateChannel(0xc0004a2790, 0xc0004f0580, 0xc00028fea0)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:2182 +0x1f1
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).retransmitStaleAnns(0xc0004a2790, {0x0?, 0x100c004d10870?, 0x31f6c60?})
        github.com/lightningnetwork/lnd/discovery/gossiper.go:1643 +0x272
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).networkHandler(0xc0004a2790)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:1342 +0x19d
created by github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).start in goroutine 1
        github.com/lightningnetwork/lnd/discovery/gossiper.go:599 +0x145

We never close the quit channel of the channel router, so the Authenticated Gossiper cannot stop either. The cleanup process is stuck, holding up the shutdown of all subsystems and causing side effects because other subsystems are still running.
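The blocked call in the stack trace above follows the usual hand-off-or-quit select pattern. Roughly (a simplified sketch, not the real UpdateEdge signature):

package main

import (
	"errors"
	"fmt"
	"time"
)

// updateEdge sketches the pattern behind the blocked call in the stack
// trace: hand the update to the router's event loop, or bail out when the
// router quits. If the event loop never started (because Start failed) and
// quit is never closed, both cases block and the caller hangs forever.
func updateEdge(updates chan<- string, quit <-chan struct{}, edge string) error {
	select {
	case updates <- edge:
		return nil
	case <-quit:
		return errors.New("router has been shut down")
	}
}

func main() {
	updates := make(chan string) // nobody reads: the event loop never started
	quit := make(chan struct{})  // never closed: Start failed before Stop ran

	done := make(chan error, 1)
	go func() { done <- updateEdge(updates, quit, "edge-update") }()

	select {
	case err := <-done:
		fmt.Println("updateEdge returned:", err)
	case <-time.After(time.Second):
		fmt.Println("updateEdge is stuck, just like the gossiper goroutine above")
	}
}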

2024-02-20 11:19:15.328 [INF] DISC: Authenticated gossiper shutting down
2024-02-20 11:19:15.328 [INF] DISC: Authenticated Gossiper is stopping

Goroutine Dump:

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0039a30e0?)
        runtime/sema.go:62 +0x25
sync.(*WaitGroup).Wait(0xc0008ff7a0?)
        sync/waitgroup.go:116 +0x48
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).stop(0xc0004a2790)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:746 +0x115
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).Stop.func1()
        github.com/lightningnetwork/lnd/discovery/gossiper.go:732 +0x69
sync.(*Once).doSlow(0x3?, 0xc00030e6a0?)
        sync/once.go:74 +0xbf
sync.(*Once).Do(...)
        sync/once.go:65
github.com/lightningnetwork/lnd/discovery.(*AuthenticatedGossiper).Stop(0xc0039a32d8?)
        github.com/lightningnetwork/lnd/discovery/gossiper.go:730 +0x3c
github.com/lightningnetwork/lnd.cleaner.run({0xc001c1bc00, 0x1e209aa?, 0x4?})
        github.com/lightningnetwork/lnd/server.go:1858 +0x42
github.com/lightningnetwork/lnd.(*server).Start(0xcb01c?)
        github.com/lightningnetwork/lnd/server.go:2248 +0x8e
github.com/lightningnetwork/lnd.Main(0xc0001d0100, {{0x0?, 0x7f4703ac7c40?, 0x101c0000b2000?}}, 0xc000104f60, {0xc0000b3e60, 0xc000222180, 0xc0002221e0, 0xc000222240, {0x0}})
        github.com/lightningnetwork/lnd/lnd.go:684 +0x3be5
main.main()
        github.com/lightningnetwork/lnd/cmd/lnd/main.go:38 +0x1ee

So we need to think about how to prevent these situations, because I think we don't close the quit channel for almost any subsystem when its start fails.
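One way to prevent this, sketched against the simplified subsystem from the first example above (not the exact change in this PR), is to let Start tear itself down again whenever it returns an error:

// Start, revised from the earlier sketch: if any startup step fails, the
// deferred cleanup calls Stop, so the quit channel is closed and goroutines
// blocked on it are released during shutdown.
func (s *subsystem) Start() error {
	var startErr error
	defer func() {
		if startErr != nil {
			s.Stop()
		}
	}()

	s.quit = make(chan struct{})

	if err := syncGraph(); err != nil {
		startErr = fmt.Errorf("unable to sync graph: %w", err)
		return startErr
	}

	return nil
}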


@ziggie1984 (Collaborator, Author) commented:

But this is just part of the fix. The reason the other node is not able to sync the graph to the chain is that we need to retry the block fetch rather than fail immediately when we cannot get the block from the first peer. This is already tracked here:

btcsuite/btcwallet#904

@ziggie1984 (Collaborator, Author) commented:

cc @yyforyongyu @Roasbeef

@ziggie1984 (Collaborator, Author) commented:

Swapped the order in which we add the cleanup stop function to the garbage collector. Let's see if the tests pass.
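The idea of the order swap in a stripped-down sketch (the cleaner type here only mimics the helper in server.go, and startSubsystem/stopSubsystem are placeholders): register the Stop callback before calling Start, so a Start that fails halfway still gets its Stop executed when the cleanup runs.

package main

import "fmt"

// cleaner collects stop callbacks to run when startup fails, loosely
// mimicking the cleaner helper in server.go.
type cleaner []func() error

func (c cleaner) add(stop func() error) cleaner {
	return append(c, stop)
}

func (c cleaner) run() {
	for i := len(c) - 1; i >= 0; i-- {
		if err := c[i](); err != nil {
			fmt.Println("WRN: cleanup failed:", err)
		}
	}
}

// startSubsystem and stopSubsystem are placeholders for a real subsystem's
// Start/Stop methods.
func startSubsystem() error { return fmt.Errorf("start failed halfway") }
func stopSubsystem() error  { fmt.Println("stopping subsystem"); return nil }

func main() {
	cleanup := cleaner{}

	// New order: register Stop first, then Start. If Start fails, the
	// cleanup run still calls Stop and closes the quit channel.
	cleanup = cleanup.add(stopSubsystem)
	if err := startSubsystem(); err != nil {
		cleanup.run()
		return
	}

	// The old order (start first, add Stop afterwards) would skip Stop
	// entirely whenever Start returned an error.
}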

@@ -539,6 +539,13 @@ func (r *ChannelRouter) Start() error {
		return nil
	}

	defer func() {
Collaborator:

Do we still need this given the change of order in the following commit?

Collaborator (Author):

agree we can remove this now.

if err := s.createNewHiddenService(); err != nil {
	startErr = err
	return
}
cleanup = cleanup.add(s.torController.Stop)
}

Collaborator:

there's also the connMsgr cleanup below

@@ -1847,173 +1847,174 @@ func (s *server) Start() error {
	cleanup := cleaner{}

	s.start.Do(func() {
		cleanup = cleanup.add(s.customMessageServer.Stop)
Collaborator:

This change is interesting... I need to take a second look to see if this can be done safely, as there are some stop patterns that require a certain state to be reached before they can stop, such as:

lnd/peer/ping_manager.go

Lines 201 to 205 in acc5951

func (m *PingManager) Stop() error {
	if m.pingTicker == nil {
		return errors.New("PingManager cannot be stopped because it " +
			"isn't running")
	}

Collaborator (Author):

I think that's fine, because we just call the stop function, and if we cannot stop it we just print a warning and continue with the cleanup of the other subsystems. We cannot know at which state the startup might fail, so I think we should call the stop function as a precaution.

Make sure that each subsystem only starts and stops once. This makes
sure we don't close e.g. quit channels twice.

This commit does two things. It starts up the server in a way that
it can be interrupted and shut down gracefully.
Moreover, it makes sure that subsystems clean themselves up when
they fail to start. This makes sure that dependent subsystems can
shut down gracefully as well and the shutdown process is not stuck.
@ziggie1984 (Collaborator, Author) commented:
@yyforyongyu while adding the interruptibility to the startup of the server, I figured out that we need to make sure each stop call is atomic (only happens once); otherwise we first call it in the cleanup method, and when we return an error it is also called in the server.Stop() function. While testing I hit such a panic with the invoice registry, but also with the txpublisher.

But I think once the tests pass, the switch of the cleanup order should have no side effects, and it can prevent cases where subsystems depend on each other and therefore cannot shut down correctly if one of them does not close its quit channel.
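A minimal sketch of the once-only Start/Stop the commits describe (the type and field names here are illustrative, not the actual lnd code): guarding Stop with sync.Once means it can be reached both from the startup cleanup and from server.Stop() without closing the quit channel twice.

package main

import (
	"fmt"
	"sync"
)

// gated wraps a subsystem so Start and Stop each run their body at most
// once, no matter how many times they are called.
type gated struct {
	start sync.Once
	stop  sync.Once
	quit  chan struct{}
}

func (g *gated) Start() error {
	g.start.Do(func() {
		g.quit = make(chan struct{})
		// ... real startup work would go here ...
	})
	return nil
}

func (g *gated) Stop() error {
	g.stop.Do(func() {
		// Closing quit twice would panic; sync.Once guarantees this
		// body only runs on the first Stop call.
		close(g.quit)
	})
	return nil
}

func main() {
	g := &gated{}
	_ = g.Start()

	// Stop may be reached twice: once via the cleanup run after a failed
	// server start and once more via server.Stop(). With sync.Once this
	// is safe.
	_ = g.Stop()
	_ = g.Stop()
	fmt.Println("stopped twice without panicking")
}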

@lightninglabs-deploy commented:
@ziggie1984, remember to re-request review from reviewers when ready
