
feat(retry): Add configurable delay #6421

Merged
merged 3 commits into ReactiveX:master on Jul 28, 2021

Conversation

@benlesh (Member) commented May 24, 2021

  • Adds a delay configuration that allows the user to create simpler exponential backoff, simple retry delays, and/or other functionality.

Basically, this will allow a user to write an exponential backoff or simple delay more easily and intuitively than retryWhen allows. In fact, we might want to deprecate retryWhen in favor of this, and also add this pattern to repeat (if we decide it's a good idea).

Retry every 5 seconds, a maximum of 100 times:

source.pipe(
  retry({
    // max retries (`count` is an unfortunate existing name)
    count: 100,
    // retry 5 seconds after any error
    delay: 5000,
  })
)

Exponential backoff example:

source.pipe(
  retry({
    // max 100 retries
    count: 100,
    // backoff starting at 2 seconds, exponentially up to 1 minute.
    // retryCount starts at 1.
    delay: (_err, retryCount) => timer(Math.min(60000, 1000 * (2 ** retryCount)))
  })
)

Notes

Honestly, I'm okay even with adding configuration for the above backoff scenario, given it's such a common thing I've had to implement. Even if all we did was provide a helper function that was easy to tree shake like: retry({ delay: exponentialBackoff({ startDelay: 1000, maxDelay: 60000 }) }) or the like.
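
For concreteness, here is a minimal sketch of what such a helper could look like, assuming it returns a delay function compatible with this PR's retry configuration (the helper name, options, and doubling behavior are hypothetical, not part of this PR):

import { timer } from 'rxjs';
import type { Observable } from 'rxjs';

// Hypothetical helper: produces a `delay` function for retry({ delay }) that
// doubles the wait on every attempt, capped at `maxDelay` milliseconds.
function exponentialBackoff({ startDelay, maxDelay }: { startDelay: number; maxDelay: number }) {
  return (_error: unknown, retryCount: number): Observable<number> =>
    // retryCount starts at 1, so the first retry waits `startDelay` milliseconds.
    timer(Math.min(maxDelay, startDelay * 2 ** (retryCount - 1)));
}

// Usage, as suggested above:
// source.pipe(retry({ delay: exponentialBackoff({ startDelay: 1000, maxDelay: 60000 }) }));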

@benlesh added the feature (PRs and issues for features), AGENDA ITEM (Flagged for discussion at core team meetings), and 7.x (Issues and PRs for version 6.x) labels on May 24, 2021
@benlesh requested a review from cartant on May 24, 2021 at 18:17
* A more configurable means of retrying a source.
*
* If `delay` is provided as a `number`, after the source errors, the result will wait `delay` milliseconds,
* then retry the source. If `delay` is a function, th
Member commented on the doc-comment excerpt above:
Seems like this sentence is not finished.

@benlesh (Member Author) replied May 25, 2021:
This is so weird. I feel like this is the third or fourth time this has happened to me, but only with doc comments :\

@backbone87 (Contributor) commented:

Isn't retryWhen the more general operator, which could be improved by providing premade notifier factories?

// like `retry`
retryWhen(count(n));
retryWhen(count({ count: n, resetOnSuccess: true }));

// exponential backoff
retryWhen(exponentialBackoff({ startDelay: d, maxDelay: m }));

// custom, like current `retryWhen`
retryWhen(createMyCustomNotifier);
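
For illustration, a rough sketch of how one of those premade notifier factories might be implemented against retryWhen's existing signature (the count name and its exact semantics here are hypothetical, mirroring retry(n)):

import { of, throwError } from 'rxjs';
import { mergeMap } from 'rxjs/operators';
import type { Observable } from 'rxjs';

// Hypothetical notifier factory: allow up to `max` retries, then rethrow the
// last error. Each value emitted by the notifier triggers a resubscription.
function count(max: number) {
  return (errors: Observable<unknown>) =>
    errors.pipe(
      // `index` counts the errors seen so far, starting at 0.
      mergeMap((error, index) => (index >= max ? throwError(() => error) : of(index)))
    );
}

// Usage: source.pipe(retryWhen(count(100)));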

@cartant (Collaborator) commented May 25, 2021

> Isn't retryWhen the more general operator, which could be improved by providing premade notifier factories?

I somewhat agree with this, but AFAICT the following would not be possible with retryWhen as it has no mechanism for conveying 'success' to the notifier:

retryWhen(count({ count: n, resetOnSuccess: true }));

I'm more curious as to whether this PR hints at a general approach for reducing the number of core operators by making them more 'configurable' - as was done with share and the multicasting API.
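
To make the contrast concrete, the counter-reset behavior would look roughly like this with the configurable retry proposed here (a sketch; as discussed below, the option's name was still in flux):

import { retry } from 'rxjs/operators';
import type { Observable } from 'rxjs';

// `source` stands in for any observable that can emit values and later error.
declare const source: Observable<unknown>;

source.pipe(
  retry({
    count: 5,             // give up after 5 consecutive failed attempts
    delay: 1000,          // wait 1 second before each retry
    resetOnSuccess: true, // a next notification from the source resets the counter
  })
);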

@backbone87 (Contributor) commented May 25, 2021

> general approach for reducing the number of core operators

I think the current state of share & shareReplay is a good example: a "highly configurable" base operator with the most common configuration as its default, plus, where necessary, additional operators for other very common configurations implemented in terms of the "base" operator.
IMHO the benefits of this approach are less code duplication, fewer tests required, and, most importantly, no "slightly different behavior under equal configuration" for otherwise very similar operators.
From the idea of providing "option factories" (exponentialBackoff): should it then also be share(replay(n))?
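
For reference, the configurable share in v7 already lets shareReplay-like behavior be expressed through its connector option, which is roughly this idea in practice (a sketch; the reset behavior may not match shareReplay's defaults exactly):

import { ReplaySubject } from 'rxjs';
import { share } from 'rxjs/operators';
import type { Observable } from 'rxjs';

declare const source: Observable<unknown>;
const bufferSize = 1; // illustrative

// Roughly shareReplay(bufferSize), expressed through the configurable share.
const shared = source.pipe(
  share({
    connector: () => new ReplaySubject<unknown>(bufferSize),
    resetOnRefCountZero: false,
  })
);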

> the following would not be possible with retryWhen as it has no mechanism for conveying 'success' to the notifier

Funnily enough, that hint led me to find a bug in the custom retry backoff operator that I use, which caused various services to restart without "retrying enough" because the attempt counter was never reset. But it immediately raised the follow-up question: what is success? Or, when should the attempt counter actually be reset?

@benlesh (Member Author) commented Jun 2, 2021

Notes from core team: General approval.

@benlesh removed the AGENDA ITEM (Flagged for discussion at core team meetings) label on Jun 2, 2021
@cartant (Collaborator) left a review:

I think this is fine, but I'm not sure about the resetOnSuccess name:

  /**
   * Whether or not to reset the retry counter on success.
   * Defaults to false.
   */
  resetOnSuccess?: boolean;

What is success? The accompanying doc/comment doesn't make this clear. The other reset options in the API are, IMO, a little clearer: resetOnError, resetOnComplete, resetOnDisconnect, resetOnRefCountZero.

I mean, from resetOnSuccess a caller could infer that it could only mean that it'll reset when a next notification is received - because resetting on complete would be meaningless - but should the caller really need to even think about this?

Maybe resetOnNext?

@benlesh (Member Author) commented Jul 14, 2021

@cartant I've changed the one setting to resetOnFirstValue, since resetOnSuccess doesn't mean much and resetOnNext is ambiguous.

@benlesh (Member Author) commented Jul 14, 2021

@cartant Just realized that resetOnSuccess was pre-existing. Probably should change that back.

@benlesh merged commit 5f69795 into ReactiveX:master on Jul 28, 2021