
grpclb: include fallback reason in error status of failing to fallback #8035

Merged

Conversation

voidzcy
Contributor

@voidzcy voidzcy commented Apr 1, 2021

Enhance the error information reflected by the RPC status when failing to fallback (that is, when no fallback addresses are provided by the resolver), by including the original cause of entering fallback. This covers the following cases:

  • balancer RPC timed out before receiving any backend addresses (includes a timeout message)
  • balancer RPC failed before receiving any backend addresses (uses the error that occurred in the balancer RPC)
  • all balancer-provided addresses failed while the balancer RPC had also failed, causing fallback (uses the error status from one of the balancer-provided backends)

Note that for cases where connections to the fallback addresses fail, the error from one of the fallback addresses is already used. See handleSubchannelState(...) -> maybeUseFallbackBackends() (a no-op, as it's already using fallback backends) -> maybeUpdatePicker() with backendList being non-empty.
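
Conceptually, the failure status combines the recorded fallback reason with the "no fallback addresses" message. A minimal sketch of that idea in Java, assuming a hypothetical fallbackReason field and helper method (the actual GrpclbState code is structured differently):

    import io.grpc.Status;

    final class FallbackStatusSketch {
      // Status recorded when fallback was triggered, e.g. the balancer RPC
      // timeout or the balancer RPC's failure status (hypothetical field).
      private Status fallbackReason =
          Status.UNAVAILABLE.withDescription("Timeout waiting for remote balancer");

      // Status used to fail client RPCs when fallback is attempted but the
      // resolver provided no fallback addresses.
      Status noFallbackAddressesStatus() {
        return Status.UNAVAILABLE
            .withDescription("Unable to fallback, no fallback addresses found\n"
                + fallbackReason.getDescription())
            .withCause(fallbackReason.getCause());
      }
    }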


Fixes #7997

@voidzcy voidzcy force-pushed the bugfix/improve_grpclb_fallback_error_propagation branch from 51142fc to e5e2e04 on April 1, 2021 17:33
@voidzcy voidzcy force-pushed the bugfix/improve_grpclb_fallback_error_propagation branch from e5e2e04 to 4558b74 on April 1, 2021 17:59
@voidzcy voidzcy requested a review from ejona86 April 1, 2021 19:30
grpclb/src/main/java/io/grpc/grpclb/GrpclbState.java (Outdated)
@@ -717,6 +743,7 @@ private void handleStreamClosed(Status error) {
cleanUp();
propagateError(error);
balancerWorking = false;
fallbackReason = error;
Member

This may not be UNAVAILABLE. We need to create a new Status.

Contributor Author

What about the propagateError(error) two lines above? I was thinking of deleting that line. That line fails RPCs for a short time window between the balancer RPC closing and fallback being attempted. Right after fallback is attempted, if fallback fails, RPCs switch to failing with fallbackReason (which is the same status as the balancer's failure, plus a "fail to fallback" message).

So I am wondering if we should remove the propagateError(error) line here and fail RPCs with a single status, after fallback has been attempted.

Member

propagateError() is called in two places. One of them isn't what it seems: InetAddress.getByAddress() only throws UnknownHostException "if IP address is of illegal length", so the error string "Host for server not found" is wrong.

propagateError() does two things: log and adjust the picker. For logging, we really want to log the original Status, so error here. But we can't use error directly for the picker, even if it is for a short period of time.

So I am wondering if we should remove the propagateError(error) line here and fail RPCs with a single status, after fallback has been attempted.

That's a functional change, as you would no longer cause failures if fallback succeeds. I don't think we'd choose the behavior based on what makes the implementation easiest; I think we want it to behave a certain way in this case. I thought grpclb was supposed to try fallback before failing RPCs, at least when starting up. I honestly don't know where to look up the expected behavior in this case.

Calling @markdroth to help inform us of when gRPC-LB should begin failing RPCs.

Member

I don't have enough context here to know which specific cases you're asking about.

In general, there are two types of grpclb fallback, fallback at startup and fallback after startup.

Fallback at startup is triggered in the following cases:

  • When the fallback timer fires before we have received the first response from the balancer.
  • When the balancer channel goes into TRANSIENT_FAILURE before reaching READY. (This short-circuits the fallback timer.)
  • When the balancer call finishes (regardless of status) without receiving the first response from the balancer. (This short-circuits the fallback timer.)

Fallback after startup occurs only after we receive an initial response from the balancer. It is triggered in the following cases:

  • When we get an explicit response from the balancer telling us to go into fallback.
  • When both of the following are true:
    • The balancer call has finished (regardless of status) and we have not yet received the first response on the subsequent call.
    • We cannot connect to any of the backends in the last response we received from the balancer.

None of these cases have anything to do with the status of individual data plane calls. However, there are two cases above where fallback is triggered by receiving status on the balancer call, but only when other conditions are also met.
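
Restated as a condensed sketch in Java (the boolean flags below are purely illustrative and do not map one-to-one onto fields in any implementation):

    final class FallbackTriggerSketch {
      // Fallback-at-startup inputs (hypothetical flags).
      boolean fallbackTimerFired;
      boolean balancerChannelInTransientFailureBeforeReady;
      boolean balancerCallFinishedBeforeFirstResponse;

      // Fallback-after-startup inputs (hypothetical flags).
      boolean receivedFirstBalancerResponse;
      boolean balancerExplicitlyRequestedFallback;
      boolean balancerCallFinishedAndNoResponseOnSubsequentCall;
      boolean cannotConnectToAnyBackendFromLastResponse;

      boolean shouldFallBackAtStartup() {
        return !receivedFirstBalancerResponse
            && (fallbackTimerFired
                || balancerChannelInTransientFailureBeforeReady
                || balancerCallFinishedBeforeFirstResponse);
      }

      boolean shouldFallBackAfterStartup() {
        return receivedFirstBalancerResponse
            && (balancerExplicitlyRequestedFallback
                || (balancerCallFinishedAndNoResponseOnSubsequentCall
                    && cannotConnectToAnyBackendFromLastResponse));
      }
    }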

Contributor Author

This still doesn't directly answer the question of whether we should fail RPCs before trying fallback. The specific case we are talking about is when the balancer RPC finishes (regardless of status) and none of the connections to the previously received backends has been READY. Do we fail RPCs immediately while trying to use the fallback addresses (which implies RPCs may start succeeding again if connections to the fallback backends succeed)? Or do we wait until fallback has been attempted?

Member

In the fallback-at-startup case, we should be in state CONNECTING until we either get connected or go into fallback mode, so we should not fail data plane RPCs until one of those two things happens.

In the fallback-after-startup case, the "get an explicit response from the balancer telling us to go into fallback" case should not depend on whether there are currently any READY connections to balancer-given backends, since it's intended to force clients to go to fallback regardless of whether they are currently connected to backends, and you should fix your implementation if it's not doing that. Given that, there are several cases here:

  • If we can't reach any of the balancer-provided backends before we go into fallback mode (e.g., if the backend connections fail before either the balancer connection fails or the balancer explicitly tells us to go into fallback), then we will fail some data plane RPCs.
  • If we are in contact with the balancer-provided backends and the balancer tells us to go into fallback mode, we should not fail any RPCs; we should keep using the balancer-provided backends while we get in contact with the fallback backends.
  • If we are in contact with the balancer-provided backends and the balancer call fails, and then we lose contact with the balancer-provided backends, it's a bit of a grey area. In principle, I suppose we should go into state CONNECTING here and queue data plane RPCs instead of failing them, but if we actually fail some RPCs instead, I think we can probably live with that.

@apolcyn may want to weigh in here as well.
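
The "queue data plane RPCs instead of failing them" distinction above maps, in gRPC Java, onto which kind of picker is installed. A simplified, illustrative sketch (not the grpclb picker itself):

    import io.grpc.LoadBalancer.PickResult;
    import io.grpc.LoadBalancer.PickSubchannelArgs;
    import io.grpc.LoadBalancer.SubchannelPicker;
    import io.grpc.Status;

    // While CONNECTING: returning "no result" buffers RPCs instead of failing them.
    final class QueueingPicker extends SubchannelPicker {
      @Override
      public PickResult pickSubchannel(PickSubchannelArgs args) {
        return PickResult.withNoResult();
      }
    }

    // In TRANSIENT_FAILURE: RPCs fail with whatever status the picker carries,
    // which is why the status chosen for it (e.g. the fallback reason) matters.
    final class FailingPicker extends SubchannelPicker {
      private final Status failureStatus;

      FailingPicker(Status failureStatus) {
        this.failureStatus = failureStatus;
      }

      @Override
      public PickResult pickSubchannel(PickSubchannelArgs args) {
        return PickResult.withError(failureStatus);
      }
    }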

Contributor

+1 to everything @markdroth just described.

Also note that go/grpclb-explicit-fallback describes the expected behavior of clients when receiving a fallback response from a balancer.

Member

That line fails RPCs for a short time window between the balancer RPC closing and fallback being attempted.

I just realized that sounded similar to b/138458426. I had found a path through the code that could cause that, but #6657 looked like it would fix it. Maybe there was a second path through the code? And apparently Go might still have this problem?

Contributor Author

Based on the description in b/138458426#comment4 ("the client enters transient failure because all subchannels are "connecting", and one has entered "transient failure", so the pending pick fails"), I'd suspect that was due to the issue described in #7959, which was fixed recently.

Contributor Author

In the fallback-after-startup case, the "get an explicit response from the balancer telling us to go into fallback" case should not depend on whether there are currently any READY connections to balancer-given backends, since it's intended to force clients to go to fallback regardless of whether they are currently connected to backends, and you should fix your implementation if it's not doing that.

Sorry, what I mentioned in #8035 (comment) was wrong. The balancer forcing clients to enter fallback is handled correctly: the client stops using balancer-provided backends immediately, even if there are READY connections.

Actually our implementation looks fine for handling the grey area:

  • If connections to all balancer-provided backends fail before the balancer RPC becomes broken: client RPCs fail with the status from one of the broken subchannels. After the balancer RPC fails, and before fallback is attempted, the status used to fail client RPCs changes to the one from the balancer RPC.
  • If the balancer RPC fails before connections to all balancer-provided backends become broken: client RPCs do not fail until the latter happens. After connections to all balancer-provided backends fail, and before fallback is attempted, the status used to fail client RPCs is the one from one of the broken subchannels.

…k reason being overwritten by timeout waiting for balancer.
…s when failing to fallback, attach the original fallback reason to it. This ensures all client RPCs fail with an UNAVAILABLE status code. Errors are still logged with their original status code.
@voidzcy
Contributor Author

voidzcy commented Apr 7, 2021

I updated this a bit:

RPC failures caused by reasons not directly related to the connections to the backends (that is, anything that happens before connections to backends are made, such as the balancer RPC breaking before any backends are received, failing to fallback, etc.) will always end up with an UNAVAILABLE status code, carrying the cause and description from the original (immediate-fail or fallback) reason. Some examples are:

  • balancer RPC times out before receiving any backends, then fallback is attempted but no fallback addresses are found. RPCs fail with {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n Timeout waiting for remote balancer", cause=null}
  • balancer RPC fails (say, permission denied) before receiving any backends, then fallback is attempted but no fallback addresses are found. RPCs fail with {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n <description from the status of the balancer RPC>", cause=<cause from the status of the balancer RPC>}

The status code used for failing RPCs within the window between the balancer RPC closing and fallback being attempted (that is, the failures caused by propagateError() described above) is also overridden to UNAVAILABLE. Logs still record the original status.
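
A rough sketch of that logging-versus-picker split, assuming the wrapping looks roughly like this (the names and the logger are illustrative, not the exact code in this change):

    import io.grpc.Status;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    final class PropagateErrorSketch {
      private static final Logger logger =
          Logger.getLogger(PropagateErrorSketch.class.getName());

      // Log the balancer RPC error with its original status code, but return the
      // UNAVAILABLE-coded status that the picker should use to fail client RPCs.
      static Status pickerStatusFor(Status error) {
        logger.log(Level.FINE, "Error from remote balancer: {0}", error);
        return Status.UNAVAILABLE
            .withDescription(error.getDescription())
            .withCause(error.getCause());
      }
    }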

PTAL.

@voidzcy voidzcy requested a review from ejona86 April 7, 2021 22:41
@voidzcy voidzcy merged commit b956f88 into grpc:master Apr 8, 2021
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 7, 2021