
glusterd: avoid starting the same brick twice #4088

Open
wants to merge 1 commit into devel

Conversation

xhernandez
Contributor

There was a race in the glusterd code that could cause two threads to start the same brick at the same time. One of the bricks will fail because it detects the other brick running. Depending on which brick fails, glusterd may report a start failure and mark the brick as stopped even though it's actually running.

The problem is caused by an attempt to connect to a brick that's being started by another thread. If the brick is not fully initialized, it refuses all connection attempts. When this happens, glusterd receives a disconnection notification, which forcibly marks the brick as stopped.

Now, if another attempt to start the same brick happens, glusterd will believe the brick is stopped and start it again. If this happens very soon after the first start attempt, the checks done to see whether the brick is already running will still fail, triggering the start of the brick process again. One of the two processes will fail to initialize and report an error. If glusterd processes the failed one second, the brick will be marked as stopped, even though the process is actually there and working.

Fixes: #4080
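The idea behind the fix, visible in the review diff further down, is to check glusterd_is_brick_started() in the disconnect handler before marking the brick as stopped. Below is a minimal, self-contained sketch of that logic; brickinfo_t, brick_is_started() and on_brick_disconnect() are simplified stand-ins for glusterd's real structures, not the actual code:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for glusterd's brick state. */
typedef enum {
    BRICK_STOPPED,
    BRICK_STARTING, /* process spawned but not yet fully initialized */
    BRICK_STARTED
} brick_status_t;

typedef struct {
    const char *hostname;
    const char *path;
    brick_status_t status;
    bool start_triggered;
} brickinfo_t;

/* Hypothetical equivalent of glusterd_is_brick_started(). */
static bool brick_is_started(const brickinfo_t *brick)
{
    return brick->status == BRICK_STARTED;
}

/* Simplified disconnect handler. Before the fix, a disconnection always
 * forced the status to stopped and cleared start_triggered, so a second
 * thread would believe the brick was down and spawn it again. */
static void on_brick_disconnect(brickinfo_t *brick)
{
    if (!brick_is_started(brick)) {
        /* The brick is still being started by another thread: the early
         * connection attempt was simply refused. Leave the state alone
         * and let the starting thread adjust it as needed. */
        printf("Brick %s:%s still starting; ignoring disconnect.\n",
               brick->hostname, brick->path);
        return;
    }

    brick->status = BRICK_STOPPED;
    brick->start_triggered = false;
    printf("Brick %s:%s has disconnected from glusterd.\n",
           brick->hostname, brick->path);
}

int main(void)
{
    brickinfo_t brick = {"host1", "/bricks/b1", BRICK_STARTING, true};

    on_brick_disconnect(&brick); /* refused during startup: state untouched */

    brick.status = BRICK_STARTED;
    on_brick_disconnect(&brick); /* a real disconnect is still handled */
    return 0;
}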

@xhernandez
Contributor Author

/run regression

@gluster-ant
Collaborator

CLANG-FORMAT FAILURE:
Before merging the patch, this diff needs to be considered for passing clang-format

index 91606dd40..a864a6cd2 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -6375,8 +6375,8 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
             }
 
             gf_msg(this->name, GF_LOG_INFO, 0, GD_MSG_BRICK_DISCONNECTED,
-                    "Brick %s:%s has disconnected from glusterd.",
-                    brickinfo->hostname, brickinfo->path);
+                   "Brick %s:%s has disconnected from glusterd.",
+                   brickinfo->hostname, brickinfo->path);
 
             ret = get_volinfo_from_brickid(brickid, &volinfo);
             if (ret) {
@@ -6385,8 +6385,7 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
                 goto out;
             }
             gf_event(EVENT_BRICK_DISCONNECTED, "peer=%s;volume=%s;brick=%s",
-                     brickinfo->hostname, volinfo->volname,
-                     brickinfo->path);
+                     brickinfo->hostname, volinfo->volname, brickinfo->path);
             /* In case of an abrupt shutdown of a brick PMAP_SIGNOUT
              * event is not received by glusterd which can lead to a
              * stale port entry in glusterd, so forcibly clean up
@@ -6405,7 +6404,8 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
                     gf_msg(this->name, GF_LOG_WARNING,
                            GD_MSG_PMAP_REGISTRY_REMOVE_FAIL, 0,
                            "Failed to remove pmap registry for port %d for "
-                           "brick %s", brickinfo->port, brickinfo->path);
+                           "brick %s",
+                           brickinfo->port, brickinfo->path);
                     ret = 0;
                 }
             }

@xhernandez
Contributor Author

/run regression

@gluster-ant
Collaborator

1 test(s) failed
./tests/bugs/rpc/bug-884452.t

0 test(s) generated core

5 test(s) needed retry
./tests/000-flaky/basic_afr_split-brain-favorite-child-policy.t
./tests/000-flaky/features_copy-file-range.t
./tests/000-flaky/glusterd-restart-shd-mux.t
./tests/bugs/replicate/bug-1655050-dir-sbrain-size-policy.t
./tests/bugs/rpc/bug-884452.t

1 flaky test(s) marked as success even though they failed
./tests/000-flaky/features_copy-file-range.t
https://build.gluster.org/job/gh_centos7-regression/3248/


ret = get_volinfo_from_brickid(brickid, &volinfo);
if (!glusterd_is_brick_started(brickinfo)) {
Contributor


Do we not need to also set start_triggered to false if the brick is not started?

Contributor Author


Do we not need to also set start_triggered to false if the brick is not started?

No. Setting start_triggered to false is precisely what causes glusterd to try to start the brick twice.

If the brick is still starting here, it means that someone else is managing it, so it's better not to touch anything and let the other thread adjust the state and flags as necessary.
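A minimal sketch of the latch behavior described here; brick_t and start_brick() are illustrative names, not glusterd's real code. Under a lock, only the first thread that finds start_triggered unset proceeds to spawn the brick; a concurrent caller backs off without touching the state:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    bool start_triggered;
    int start_count; /* how many times the brick process was spawned */
} brick_t;

static void *start_brick(void *arg)
{
    brick_t *brick = arg;

    pthread_mutex_lock(&brick->lock);
    if (brick->start_triggered) {
        /* Someone else is already managing this brick: do not touch the
         * state or flags, just let that thread finish. */
        pthread_mutex_unlock(&brick->lock);
        return NULL;
    }
    brick->start_triggered = true;
    pthread_mutex_unlock(&brick->lock);

    brick->start_count++; /* stands in for spawning the brick process */
    return NULL;
}

int main(void)
{
    brick_t brick = {PTHREAD_MUTEX_INITIALIZER, false, 0};
    pthread_t t1, t2;

    pthread_create(&t1, NULL, start_brick, &brick);
    pthread_create(&t2, NULL, start_brick, &brick);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Always prints 1: the latch prevents the double start. */
    printf("brick started %d time(s)\n", brick.start_count);
    return 0;
}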

Contributor


The flag was introduced by the patch (https://review.gluster.org/#/c/glusterfs/+/18577/), which was specific to the brick_mux environment, though it is applicable everywhere. The change would not be easy to validate. The purpose of this flag is to indicate that a brick start has been triggered, and it remains true until the brick has been disconnected, so if we don't reset it the brick would never be started again. We can hit the race scenario only in the brick_mux case, while continuously running brick stop/start in a loop.

Contributor Author


IMO the whole start/stop logic is unnecessarily complex. However, it's very hard to modify now. The main problem here is that any attempt to connect to the brick while it's still starting will fail, so the current code marks the brick as down while it's actually still starting and will most probably start successfully. So I think that marking it as stopped and clearing the start_triggered flag is incorrect (basically, this causes another start attempt from another thread to create a new process).

However, after looking again at the code, it seems that we can start bricks in an asynchronous mode (without actually waiting for the process to start up), and there's no callback in case of failure. This means that no one will check whether the process actually started in order to mark the brick as stopped on error. Even worse, just after starting a brick asynchronously, a connection attempt is made, which may easily fail under some conditions (I can hit this issue almost 100% of the time by running some tests on a zram disk).

How would you solve this issue? I guess making all brick starts synchronous is not an option, right?

Contributor


Yes, that would be a good idea. The challenge with starting a brick asynchronously is how to make sure the brick has started successfully before establishing a connection with glusterd.
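One possible shape for that readiness check, sketched with plain sockets; the helper name, port, and retry timings below are illustrative assumptions, not glusterd's RPC layer. Instead of treating the first refused connection as a disconnect, the connect is retried with a bounded backoff until the freshly spawned brick finishes initializing:

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect to host:port, retrying up to max_attempts with a linearly
 * growing delay. Returns the connected fd, or -1 if the brick never came up. */
static int connect_with_retry(const char *host, int port, int max_attempts)
{
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1)
        return -1;

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd; /* brick is accepting connections: it is up */
        close(fd);
        /* Refused while starting up: wait a bit longer each time. */
        usleep(100000 * attempt);
    }
    return -1;
}

int main(void)
{
    /* 49152 is just an example port from the range bricks typically use. */
    int fd = connect_with_retry("127.0.0.1", 49152, 5);
    if (fd < 0) {
        fprintf(stderr, "brick did not become ready\n");
        return 1;
    }
    printf("connected\n");
    close(fd);
    return 0;
}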

@xhernandez
Contributor Author

/run s390-regression


stale bot commented Dec 15, 2023

Thank you for your contributions.
We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale.
It will be closed in 2 weeks if no one responds with a comment here.

The stale bot added the wontfix (Managed by stale[bot]) label on Dec 15, 2023.
Labels
wontfix Managed by stale[bot]
Development

Successfully merging this pull request may close these issues.

glusterd may try to start bricks twice
3 participants