From 434253de16d9efdf984ddb64c409706cda1d5f82 Mon Sep 17 00:00:00 2001
From: Vadym Matsishevskyi <25311427+vam-google@users.noreply.github.com>
Date: Thu, 10 Nov 2022 06:15:00 -0800
Subject: [PATCH] fix: Major refactoring of Polling, Retry and Timeout logic (#462)

* fix: Major refactoring and fix for Polling, Retry and Timeout logic

This is in response to https://freeman.vc/notes/aws-vs-gcp-reliability-is-wildly-different, which triggered an investigation of the whole Polling/Retry/Timeout behavior in Python GAPIC clients and revealed many fundamental flaws in its implementation.

To properly describe the refactoring this PR does, we need to stick to rigorous terminology, as vague definitions of retries, timeouts, polling and related concepts seem to be the main source of the present bugs and of the overall confusion among both users and maintainers of the library. Please check the documentation of the `google.api_core.retry.Retry` class and the `google.api_core.future.polling.PollingFuture.result()` method for the proper definitions and context.

Note that the overall semantics around Polling, Retry and Timeout remain somewhat confusing even after refactoring (although they are now more or less rigorously defined), but this is as clean as I could make it while still maintaining backward compatibility of the whole library.

A quick summary of the changes in this PR:

1) Properly define and fix the application of the Deadline and Timeout concepts. Please check the updated documentation for the `google.api_core.retry.Retry` class for the actual definitions. Originally, `deadline` was used to represent timeouts, conflating the two concepts. As a result, this PR replaces `deadline` arguments with `timeout` ones in as backward-compatible a manner as possible (i.e. backward compatible in all practical applications).

2) Properly define RPC Timeout, Retry Timeout and Polling Timeout and how a generic Timeout concept (aka Logical Timeout) is mapped to one of those depending on the context. Please check the `google.api_core.retry.Retry` class documentation for details.

3) Properly define and fix the application of the Retry and Polling concepts. Please check the updated documentation for `google.api_core.future.polling.PollingFuture.result()` for details.

4) Separate the `retry` and `polling` configurations for the polling future, as these are two different concepts (although both operate on the `Retry` class). Originally, both the retry and polling configurations were controlled by a single `retry` parameter, merging the configuration of how "RPC error" responses and "operation not completed" responses are supposed to be handled.

5) For the following config properties - `Retry` (including `Retry Timeout`), `Polling` (including `Polling Timeout`) and `RPC Timeout` - fix and properly define how each of them gets configured and which config takes precedence in case of a conflict (check the `PollingFuture.result()` method documentation for details). Each of those properties can be specified as follows: provided directly by the user for each call, specified at GAPIC generation time from config values in the `grpc_service_config.json` file (for Retry and RPC Timeout) and the `gapic.yaml` file (for Polling), or provided as hard-coded basic default values in the python-api-core library itself.

6) Fix the per-call polling config propagation logic (the polling/retry configs supplied to `PollingFuture.result()` used to be ignored for the actual call).
7) Deprecate the usage of the `deadline` terminology in the whole library and backward-compatibly replace it with `timeout`. This is essential, as what has been called "deadline" in this library was actually "timeout" as defined in the `google.api_core.retry.Retry` class documentation.

8) Deprecate `ExponentialTimeout`, `ConstantTimeout` and related logic, as those are outdated concepts and are not consistent with the other GAPIC languages. Replace them with `TimeToDeadlineTimeout` to be consistent with how the rest of the languages do it.

9) Deprecate `google.api_core.operations_v1.config`, as it is an outdated and self-inconsistent concept (all GAPIC clients provide configuration in code). The configs are now provided directly in code instead.

10) Switch the randomized delay calculation from treating `delay` as the expected value of `randomized_delay` to treating `delay` as the maximum value of `randomized_delay` (i.e. the new expected value of `randomized_delay` is `delay / 2`). See the `exponential_sleep_generator()` method implementation for details. This is needed to make the Python implementation of retry and polling exponential backoff consistent with the rest of the GAPIC languages. Also fix the uncontrolled growth of the `delay` value: since it is subject to exponential growth, the `delay` value was quickly reaching "infinity", and the whole thing was not failing only because Python is a very forgiving language that allows multiplying "infinity" by a number (`inf * number = inf`) instead of simply overflowing to a (most likely) negative number.

11) Fix URL construction in `OperationsRestTransport`. Without this fix the polling logic for the REST transport was completely broken (this does not affect the Compute client, as that one has a custom LRO implementation).

12) Last but not least: change the default values for the Polling logic to the following: `initial=1.0` (same as before), `maximum=20.0` (was `60`), `multiplier=1.5` (was `2.0`), `timeout=900` (was `120`, but due to the timeout resolution logic it was actually None, i.e. infinity). This, in conjunction with the changed calculation of the randomized delay (its expected value now being `delay / 2`), overall makes the polling logic much less aggressive in terms of increasing delays between polling iterations, making LROs return much earlier for users on average, while still keeping a healthy balance between the strain that polling puts on both client and server and the responsiveness of LROs for the user.

* The design doc summarising all the changes and the reasons for them is in progress.
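For illustration, here is a minimal usage sketch of the resulting retry/polling split. The `operation` object and the concrete predicate and timeout values are hypothetical and not part of this patch; only the `Retry`, `with_timeout()`, `DEFAULT_POLLING` and `result(retry=..., polling=...)` APIs shown are introduced or kept by this change.

    # Sketch only: `operation` is assumed to be a
    # google.api_core.operation.Operation returned by some client's
    # long-running method.
    from google.api_core import exceptions
    from google.api_core import retry as retries
    from google.api_core.future import polling as polling_future

    # Retry config for the polling RPC itself (e.g. GetOperation): defines
    # what to do if that RPC fails with a transient error; its `timeout`
    # acts as the Retry Timeout.
    get_operation_retry = retries.Retry(
        predicate=retries.if_exception_type(
            exceptions.ServiceUnavailable,
            exceptions.DeadlineExceeded,
        ),
        timeout=60.0,
    )

    # Polling config: how often and for how long to ask whether the
    # long-running operation is done; its `timeout` acts as the Polling
    # Timeout. Here the new library default is reused with a custom
    # Polling Timeout.
    lro_polling = polling_future.DEFAULT_POLLING.with_timeout(300.0)

    # A plain `timeout=X` argument would override lro_polling.timeout.
    response = operation.result(retry=get_operation_retry, polling=lro_polling)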
* fix ci failures (mainly sphinx errors) * remove unused code * fix typo * Pin pytest version to <7.2.0 * reformat code * address pr feedback * address PR feedback * address pr feedback * Update google/api_core/future/polling.py Co-authored-by: Victor Chudnovsky * Apply documentation suggestions from code review Co-authored-by: Victor Chudnovsky * Address PR feedback Co-authored-by: Victor Chudnovsky --- .github/workflows/lint.yml | 2 +- .github/workflows/mypy.yml | 2 +- google/api_core/extended_operation.py | 28 ++- google/api_core/future/async_future.py | 2 +- google/api_core/future/polling.py | 200 +++++++++++++++--- google/api_core/gapic_v1/config.py | 9 + google/api_core/gapic_v1/method.py | 69 ++---- google/api_core/grpc_helpers.py | 4 +- google/api_core/operation.py | 22 +- .../operations_v1/operations_async_client.py | 39 ++-- .../operations_v1/operations_client.py | 39 ++-- .../operations_v1/operations_client_config.py | 1 + .../api_core/operations_v1/transports/rest.py | 44 +++- google/api_core/retry.py | 166 +++++++++++---- google/api_core/retry_async.py | 58 +++-- google/api_core/timeout.py | 68 +++++- noxfile.py | 19 +- tests/asyncio/gapic/test_method_async.py | 35 --- .../test_operations_async_client.py | 2 +- tests/asyncio/test_grpc_helpers_async.py | 4 +- tests/asyncio/test_operation_async.py | 2 +- tests/asyncio/test_retry_async.py | 13 +- tests/unit/future/test_polling.py | 74 ++++--- tests/unit/gapic/test_method.py | 52 ----- .../operations_v1/test_operations_client.py | 7 +- .../test_operations_rest_client.py | 37 ++-- tests/unit/test_bidi.py | 2 +- tests/unit/test_client_info.py | 6 +- tests/unit/test_exceptions.py | 2 +- tests/unit/test_general_helpers.py | 13 ++ tests/unit/test_grpc_helpers.py | 28 +-- tests/unit/test_operation.py | 2 +- tests/unit/test_retry.py | 29 ++- tests/unit/test_timeout.py | 125 +++++++++-- 34 files changed, 789 insertions(+), 416 deletions(-) create mode 100644 tests/unit/test_general_helpers.py diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index eae860a2..d2aee5b7 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -12,7 +12,7 @@ jobs: - name: Setup Python uses: actions/setup-python@v4 with: - python-version: "3.7" + python-version: "3.10" - name: Install nox run: | python -m pip install --upgrade setuptools pip wheel diff --git a/.github/workflows/mypy.yml b/.github/workflows/mypy.yml index f08164f6..a505525d 100644 --- a/.github/workflows/mypy.yml +++ b/.github/workflows/mypy.yml @@ -12,7 +12,7 @@ jobs: - name: Setup Python uses: actions/setup-python@v4 with: - python-version: "3.7" + python-version: "3.10" - name: Install nox run: | python -m pip install --upgrade setuptools pip wheel diff --git a/google/api_core/extended_operation.py b/google/api_core/extended_operation.py index cabae107..092eba4e 100644 --- a/google/api_core/extended_operation.py +++ b/google/api_core/extended_operation.py @@ -50,10 +50,13 @@ class ExtendedOperation(polling.PollingFuture): refresh (Callable[[], type(extended_operation)]): A callable that returns the latest state of the operation. cancel (Callable[[], None]): A callable that tries to cancel the operation. - retry: Optional(google.api_core.retry.Retry): The retry configuration used - when polling. This can be used to control how often :meth:`done` - is polled. Regardless of the retry's ``deadline``, it will be - overridden by the ``timeout`` argument to :meth:`result`. + polling Optional(google.api_core.retry.Retry): The configuration used + for polling. 
This can be used to control how often :meth:`done` + is polled. If the ``timeout`` argument to :meth:`result` is + specified it will override the ``polling.timeout`` property. + retry Optional(google.api_core.retry.Retry): DEPRECATED use ``polling`` + instead. If specified it will override ``polling`` parameter to + maintain backward compatibility. Note: Most long-running API methods use google.api_core.operation.Operation This class is a wrapper for a subset of methods that use alternative @@ -68,9 +71,14 @@ class ExtendedOperation(polling.PollingFuture): """ def __init__( - self, extended_operation, refresh, cancel, retry=polling.DEFAULT_RETRY + self, + extended_operation, + refresh, + cancel, + polling=polling.DEFAULT_POLLING, + **kwargs, ): - super().__init__(retry=retry) + super().__init__(polling=polling, **kwargs) self._extended_operation = extended_operation self._refresh = refresh self._cancel = cancel @@ -114,7 +122,7 @@ def error_message(self): def __getattr__(self, name): return getattr(self._extended_operation, name) - def done(self, retry=polling.DEFAULT_RETRY): + def done(self, retry=None): self._refresh_and_update(retry) return self._extended_operation.done @@ -137,9 +145,11 @@ def cancelled(self): self._refresh_and_update() return self._extended_operation.done - def _refresh_and_update(self, retry=polling.DEFAULT_RETRY): + def _refresh_and_update(self, retry=None): if not self._extended_operation.done: - self._extended_operation = self._refresh(retry=retry) + self._extended_operation = ( + self._refresh(retry=retry) if retry else self._refresh() + ) self._handle_refreshed_operation() def _handle_refreshed_operation(self): diff --git a/google/api_core/future/async_future.py b/google/api_core/future/async_future.py index 88c183f9..325ee9cd 100644 --- a/google/api_core/future/async_future.py +++ b/google/api_core/future/async_future.py @@ -95,7 +95,7 @@ async def _blocking_poll(self, timeout=None): if self._future.done(): return - retry_ = self._retry.with_deadline(timeout) + retry_ = self._retry.with_timeout(timeout) try: await retry_(self._done_or_raise)() diff --git a/google/api_core/future/polling.py b/google/api_core/future/polling.py index 02e680f6..6e6aa5d4 100644 --- a/google/api_core/future/polling.py +++ b/google/api_core/future/polling.py @@ -18,7 +18,7 @@ import concurrent.futures from google.api_core import exceptions -from google.api_core import retry +from google.api_core import retry as retries from google.api_core.future import _helpers from google.api_core.future import base @@ -29,14 +29,37 @@ class _OperationNotComplete(Exception): pass -RETRY_PREDICATE = retry.if_exception_type( +# DEPRECATED as it conflates RPC retry and polling concepts into one. +# Use POLLING_PREDICATE instead to configure polling. +RETRY_PREDICATE = retries.if_exception_type( _OperationNotComplete, exceptions.TooManyRequests, exceptions.InternalServerError, exceptions.BadGateway, exceptions.ServiceUnavailable, ) -DEFAULT_RETRY = retry.Retry(predicate=RETRY_PREDICATE) + +# DEPRECATED: use DEFAULT_POLLING to configure LRO polling logic. Construct +# Retry object using its default values as a baseline for any custom retry logic +# (not to be confused with polling logic). +DEFAULT_RETRY = retries.Retry(predicate=RETRY_PREDICATE) + +# POLLING_PREDICATE is supposed to poll only on _OperationNotComplete. 
+# Any RPC-specific errors (like ServiceUnavailable) will be handled +# by retry logic (not to be confused with polling logic) which is triggered for +# every polling RPC independently of polling logic but within its context. +POLLING_PREDICATE = retries.if_exception_type( + _OperationNotComplete, +) + +# Default polling configuration +DEFAULT_POLLING = retries.Retry( + predicate=POLLING_PREDICATE, + initial=1.0, # seconds + maximum=20.0, # seconds + multiplier=1.5, + timeout=900, # seconds +) class PollingFuture(base.Future): @@ -45,21 +68,29 @@ class PollingFuture(base.Future): The :meth:`done` method should be implemented by subclasses. The polling behavior will repeatedly call ``done`` until it returns True. + The actuall polling logic is encapsulated in :meth:`result` method. See + documentation for that method for details on how polling works. + .. note:: Privacy here is intended to prevent the final class from overexposing, not to prevent subclasses from accessing methods. Args: - retry (google.api_core.retry.Retry): The retry configuration used - when polling. This can be used to control how often :meth:`done` - is polled. Regardless of the retry's ``deadline``, it will be - overridden by the ``timeout`` argument to :meth:`result`. + polling (google.api_core.retry.Retry): The configuration used for polling. + This parameter controls how often :meth:`done` is polled. If the + ``timeout`` argument is specified in :meth:`result` method it will + override the ``polling.timeout`` property. + retry (google.api_core.retry.Retry): DEPRECATED use ``polling`` instead. + If set, it will override ``polling`` paremeter for backward + compatibility. """ - def __init__(self, retry=DEFAULT_RETRY): + _DEFAULT_VALUE = object() + + def __init__(self, polling=DEFAULT_POLLING, **kwargs): super(PollingFuture, self).__init__() - self._retry = retry + self._polling = kwargs.get("retry", polling) self._result = None self._exception = None self._result_set = False @@ -69,11 +100,13 @@ def __init__(self, retry=DEFAULT_RETRY): self._done_callbacks = [] @abc.abstractmethod - def done(self, retry=DEFAULT_RETRY): + def done(self, retry=None): """Checks to see if the operation is complete. Args: - retry (google.api_core.retry.Retry): (Optional) How to retry the RPC. + retry (google.api_core.retry.Retry): (Optional) How to retry the + polling RPC (to not be confused with polling configuration. See + the documentation for :meth:`result` for details). Returns: bool: True if the operation is complete, False otherwise. @@ -81,45 +114,136 @@ def done(self, retry=DEFAULT_RETRY): # pylint: disable=redundant-returns-doc, missing-raises-doc raise NotImplementedError() - def _done_or_raise(self, retry=DEFAULT_RETRY): + def _done_or_raise(self, retry=None): """Check if the future is done and raise if it's not.""" - kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry} - - if not self.done(**kwargs): + if not self.done(retry=retry): raise _OperationNotComplete() def running(self): """True if the operation is currently running.""" return not self.done() - def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY): - """Poll and wait for the Future to be resolved. + def _blocking_poll(self, timeout=_DEFAULT_VALUE, retry=None, polling=None): + """Poll and wait for the Future to be resolved.""" - Args: - timeout (int): - How long (in seconds) to wait for the operation to complete. - If None, wait indefinitely. 
- """ if self._result_set: return - retry_ = self._retry.with_deadline(timeout) + polling = polling or self._polling + if timeout is not PollingFuture._DEFAULT_VALUE: + polling = polling.with_timeout(timeout) try: - kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry} - retry_(self._done_or_raise)(**kwargs) + polling(self._done_or_raise)(retry=retry) except exceptions.RetryError: raise concurrent.futures.TimeoutError( - "Operation did not complete within the designated " "timeout." + f"Operation did not complete within the designated timeout of " + f"{polling.timeout} seconds." ) - def result(self, timeout=None, retry=DEFAULT_RETRY): - """Get the result of the operation, blocking if necessary. + def result(self, timeout=_DEFAULT_VALUE, retry=None, polling=None): + """Get the result of the operation. + + This method will poll for operation status periodically, blocking if + necessary. If you just want to make sure that this method does not block + for more than X seconds and you do not care about the nitty-gritty of + how this method operates, just call it with ``result(timeout=X)``. The + other parameters are for advanced use only. + + Every call to this method is controlled by the following three + parameters, each of which has a specific, distinct role, even though all three + may look very similar: ``timeout``, ``retry`` and ``polling``. In most + cases users do not need to specify any custom values for any of these + parameters and may simply rely on default ones instead. + + If you choose to specify custom parameters, please make sure you've + read the documentation below carefully. + + First, please check :class:`google.api_core.retry.Retry` + class documentation for the proper definition of timeout and deadline + terms and for the definition the three different types of timeouts. + This class operates in terms of Retry Timeout and Polling Timeout. It + does not let customizing RPC timeout and the user is expected to rely on + default behavior for it. + + The roles of each argument of this method are as follows: + + ``timeout`` (int): (Optional) The Polling Timeout as defined in + :class:`google.api_core.retry.Retry`. If the operation does not complete + within this timeout an exception will be thrown. This parameter affects + neither Retry Timeout nor RPC Timeout. + + ``retry`` (google.api_core.retry.Retry): (Optional) How to retry the + polling RPC. The ``retry.timeout`` property of this parameter is the + Retry Timeout as defined in :class:`google.api_core.retry.Retry`. + This parameter defines ONLY how the polling RPC call is retried + (i.e. what to do if the RPC we used for polling returned an error). It + does NOT define how the polling is done (i.e. how frequently and for + how long to call the polling RPC); use the ``polling`` parameter for that. + If a polling RPC throws and error and retrying it fails, the whole + future fails with the corresponding exception. If you want to tune which + server response error codes are not fatal for operation polling, use this + parameter to control that (``retry.predicate`` in particular). + + ``polling`` (google.api_core.retry.Retry): (Optional) How often and + for how long to call the polling RPC periodically (i.e. what to do if + a polling rpc returned successfully but its returned result indicates + that the long running operation is not completed yet, so we need to + check it again at some point in future). 
This parameter does NOT define + how to retry each individual polling RPC in case of an error; use the + ``retry`` parameter for that. The ``polling.timeout`` of this parameter + is Polling Timeout as defined in as defined in + :class:`google.api_core.retry.Retry`. + + For each of the arguments, there are also default values in place, which + will be used if a user does not specify their own. The default values + for the three parameters are not to be confused with the default values + for the corresponding arguments in this method (those serve as "not set" + markers for the resolution logic). + + If ``timeout`` is provided (i.e.``timeout is not _DEFAULT VALUE``; note + the ``None`` value means "infinite timeout"), it will be used to control + the actual Polling Timeout. Otherwise, the ``polling.timeout`` value + will be used instead (see below for how the ``polling`` config itself + gets resolved). In other words, this parameter effectively overrides + the ``polling.timeout`` value if specified. This is so to preserve + backward compatibility. + + If ``retry`` is provided (i.e. ``retry is not None``) it will be used to + control retry behavior for the polling RPC and the ``retry.timeout`` + will determine the Retry Timeout. If not provided, the + polling RPC will be called with whichever default retry config was + specified for the polling RPC at the moment of the construction of the + polling RPC's client. For example, if the polling RPC is + ``operations_client.get_operation()``, the ``retry`` parameter will be + controlling its retry behavior (not polling behavior) and, if not + specified, that specific method (``operations_client.get_operation()``) + will be retried according to the default retry config provided during + creation of ``operations_client`` client instead. This argument exists + mainly for backward compatibility; users are very unlikely to ever need + to set this parameter explicitly. + + If ``polling`` is provided (i.e. ``polling is not None``), it will be used + to controll the overall polling behavior and ``polling.timeout`` will + controll Polling Timeout unless it is overridden by ``timeout`` parameter + as described above. If not provided, the``polling`` parameter specified + during construction of this future (the ``polling`` argument in the + constructor) will be used instead. Note: since the ``timeout`` argument may + override ``polling.timeout`` value, this parameter should be viewed as + coupled with the ``timeout`` parameter as described above. Args: - timeout (int): - How long (in seconds) to wait for the operation to complete. - If None, wait indefinitely. + timeout (int): (Optional) How long (in seconds) to wait for the + operation to complete. If None, wait indefinitely. + retry (google.api_core.retry.Retry): (Optional) How to retry the + polling RPC. This defines ONLY how the polling RPC call is + retried (i.e. what to do if the RPC we used for polling returned + an error). It does NOT define how the polling is done (i.e. how + frequently and for how long to call the polling RPC). + polling (google.api_core.retry.Retry): (Optional) How often and + for how long to call polling RPC periodically. This parameter + does NOT define how to retry each individual polling RPC call + (use the ``retry`` parameter for that). Returns: google.protobuf.Message: The Operation's result. 
@@ -128,8 +252,8 @@ def result(self, timeout=None, retry=DEFAULT_RETRY): google.api_core.GoogleAPICallError: If the operation errors or if the timeout is reached before the operation completes. """ - kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry} - self._blocking_poll(timeout=timeout, **kwargs) + + self._blocking_poll(timeout=timeout, retry=retry, polling=polling) if self._exception is not None: # pylint: disable=raising-bad-type @@ -138,12 +262,18 @@ def result(self, timeout=None, retry=DEFAULT_RETRY): return self._result - def exception(self, timeout=None): + def exception(self, timeout=_DEFAULT_VALUE): """Get the exception from the operation, blocking if necessary. + See the documentation for the :meth:`result` method for details on how + this method operates, as both ``result`` and this method rely on the + exact same polling logic. The only difference is that this method does + not accept ``retry`` and ``polling`` arguments but relies on the default ones + instead. + Args: timeout (int): How long to wait for the operation to complete. - If None, wait indefinitely. + If None, wait indefinitely. Returns: Optional[google.api_core.GoogleAPICallError]: The operation's diff --git a/google/api_core/gapic_v1/config.py b/google/api_core/gapic_v1/config.py index 9c722871..36b50d9f 100644 --- a/google/api_core/gapic_v1/config.py +++ b/google/api_core/gapic_v1/config.py @@ -33,6 +33,9 @@ def _exception_class_for_grpc_status_name(name): """Returns the Google API exception class for a gRPC error code name. + DEPRECATED: use ``exceptions.exception_class_for_grpc_status`` method + directly instead. + Args: name (str): The name of the gRPC status code, for example, ``UNAVAILABLE``. @@ -47,6 +50,8 @@ def _exception_class_for_grpc_status_name(name): def _retry_from_retry_config(retry_params, retry_codes, retry_impl=retry.Retry): """Creates a Retry object given a gapic retry configuration. + DEPRECATED: instantiate retry and timeout classes directly instead. + Args: retry_params (dict): The retry parameter values, for example:: @@ -81,6 +86,8 @@ def _retry_from_retry_config(retry_params, retry_codes, retry_impl=retry.Retry): def _timeout_from_retry_config(retry_params): """Creates a ExponentialTimeout object given a gapic retry configuration. + DEPRECATED: instantiate retry and timeout classes directly instead. + Args: retry_params (dict): The retry parameter values, for example:: @@ -113,6 +120,8 @@ def parse_method_configs(interface_config, retry_impl=retry.Retry): """Creates default retry and timeout objects for each method in a gapic interface config. + DEPRECATED: instantiate retry and timeout classes directly instead. + Args: interface_config (Mapping): The interface config section of the full gapic library config. For example, If the full configuration has diff --git a/google/api_core/gapic_v1/method.py b/google/api_core/gapic_v1/method.py index 73c8d4bc..0c1624a3 100644 --- a/google/api_core/gapic_v1/method.py +++ b/google/api_core/gapic_v1/method.py @@ -22,8 +22,8 @@ import functools from google.api_core import grpc_helpers -from google.api_core import timeout from google.api_core.gapic_v1 import client_info +from google.api_core.timeout import TimeToDeadlineTimeout USE_DEFAULT_METADATA = object() @@ -52,55 +52,14 @@ def _apply_decorators(func, decorators): ``decorators`` may contain items that are ``None`` or ``False`` which will be ignored. 
""" - decorators = filter(_is_not_none_or_false, reversed(decorators)) + filtered_decorators = filter(_is_not_none_or_false, reversed(decorators)) - for decorator in decorators: + for decorator in filtered_decorators: func = decorator(func) return func -def _determine_timeout(default_timeout, specified_timeout, retry): - """Determines how timeout should be applied to a wrapped method. - - Args: - default_timeout (Optional[Timeout]): The default timeout specified - at method creation time. - specified_timeout (Optional[Timeout]): The timeout specified at - invocation time. If :attr:`DEFAULT`, this will be set to - the ``default_timeout``. - retry (Optional[Retry]): The retry specified at invocation time. - - Returns: - Optional[Timeout]: The timeout to apply to the method or ``None``. - """ - # If timeout is specified as a number instead of a Timeout instance, - # convert it to a ConstantTimeout. - if isinstance(specified_timeout, (int, float)): - specified_timeout = timeout.ConstantTimeout(specified_timeout) - if isinstance(default_timeout, (int, float)): - default_timeout = timeout.ConstantTimeout(default_timeout) - - if specified_timeout is DEFAULT: - specified_timeout = default_timeout - - if specified_timeout is default_timeout: - # If timeout is the default and the default timeout is exponential and - # a non-default retry is specified, make sure the timeout's deadline - # matches the retry's. This handles the case where the user leaves - # the timeout default but specifies a lower deadline via the retry. - if ( - retry - and retry is not DEFAULT - and isinstance(default_timeout, timeout.ExponentialTimeout) - ): - return default_timeout.with_deadline(retry._deadline) - else: - return default_timeout - - return specified_timeout - - class _GapicCallable(object): """Callable that applies retry, timeout, and metadata logic. @@ -108,9 +67,11 @@ class _GapicCallable(object): target (Callable): The low-level RPC method. retry (google.api_core.retry.Retry): The default retry for the callable. If ``None``, this callable will not retry by default - timeout (google.api_core.timeout.Timeout): The default timeout - for the callable. If ``None``, this callable will not specify - a timeout argument to the low-level RPC method by default. + timeout (google.api_core.timeout.Timeout): The default timeout for the + callable (i.e. duration of time within which an RPC must terminate + after its start, not to be confused with deadline). If ``None``, + this callable will not specify a timeout argument to the low-level + RPC method. metadata (Sequence[Tuple[str, str]]): Additional metadata that is provided to the RPC method on every invocation. This is merged with any metadata specified during invocation. If ``None``, no @@ -125,18 +86,16 @@ def __init__(self, target, retry, timeout, metadata=None): def __call__(self, *args, timeout=DEFAULT, retry=DEFAULT, **kwargs): """Invoke the low-level RPC with retry, timeout, and metadata.""" - timeout = _determine_timeout( - self._timeout, - timeout, - # Use only the invocation-specified retry only for this, as we only - # want to adjust the timeout deadline if the *user* specified - # a different retry. - retry, - ) if retry is DEFAULT: retry = self._retry + if timeout is DEFAULT: + timeout = self._timeout + + if isinstance(timeout, (int, float)): + timeout = TimeToDeadlineTimeout(timeout=timeout) + # Apply all applicable decorators. 
wrapped_func = _apply_decorators(self._target, [retry, timeout]) diff --git a/google/api_core/grpc_helpers.py b/google/api_core/grpc_helpers.py index bf04ae4c..102dc0a0 100644 --- a/google/api_core/grpc_helpers.py +++ b/google/api_core/grpc_helpers.py @@ -30,7 +30,7 @@ PROTOBUF_VERSION = google.protobuf.__version__ # The grpcio-gcp package only has support for protobuf < 4 -if PROTOBUF_VERSION[0:2] == "3.": +if PROTOBUF_VERSION[0:2] == "3.": # pragma: NO COVER try: import grpc_gcp @@ -318,7 +318,7 @@ def create_channel( default_host=default_host, ) - if HAS_GRPC_GCP: + if HAS_GRPC_GCP: # pragma: NO COVER return grpc_gcp.secure_channel(target, composite_credentials, **kwargs) return grpc.secure_channel(target, composite_credentials, **kwargs) diff --git a/google/api_core/operation.py b/google/api_core/operation.py index b17f753b..90cbdc99 100644 --- a/google/api_core/operation.py +++ b/google/api_core/operation.py @@ -61,10 +61,13 @@ class Operation(polling.PollingFuture): result. metadata_type (func:`type`): The protobuf type for the operation's metadata. - retry (google.api_core.retry.Retry): The retry configuration used - when polling. This can be used to control how often :meth:`done` - is polled. Regardless of the retry's ``deadline``, it will be - overridden by the ``timeout`` argument to :meth:`result`. + polling (google.api_core.retry.Retry): The configuration used for polling. + This parameter controls how often :meth:`done` is polled. If the + ``timeout`` argument is specified in the :meth:`result` method, it will + override the ``polling.timeout`` property. + retry (google.api_core.retry.Retry): DEPRECATED: use ``polling`` instead. + If specified it will override ``polling`` parameter to maintain + backward compatibility. """ def __init__( @@ -74,9 +77,10 @@ def __init__( cancel, result_type, metadata_type=None, - retry=polling.DEFAULT_RETRY, + polling=polling.DEFAULT_POLLING, + **kwargs ): - super(Operation, self).__init__(retry=retry) + super(Operation, self).__init__(polling=polling, **kwargs) self._operation = operation self._refresh = refresh self._cancel = cancel @@ -146,7 +150,7 @@ def _set_result_from_operation(self): ) self.set_exception(exception) - def _refresh_and_update(self, retry=polling.DEFAULT_RETRY): + def _refresh_and_update(self, retry=None): """Refresh the operation and update the result if needed. Args: @@ -155,10 +159,10 @@ def _refresh_and_update(self, retry=polling.DEFAULT_RETRY): # If the currently cached operation is done, no need to make another # RPC as it will not change once done. if not self._operation.done: - self._operation = self._refresh(retry=retry) + self._operation = self._refresh(retry=retry) if retry else self._refresh() self._set_result_from_operation() - def done(self, retry=polling.DEFAULT_RETRY): + def done(self, retry=None): """Checks to see if the operation is complete. 
Args: diff --git a/google/api_core/operations_v1/operations_async_client.py b/google/api_core/operations_v1/operations_async_client.py index 5a5e5562..81c4513c 100644 --- a/google/api_core/operations_v1/operations_async_client.py +++ b/google/api_core/operations_v1/operations_async_client.py @@ -24,8 +24,10 @@ import functools +from google.api_core import exceptions as core_exceptions from google.api_core import gapic_v1, page_iterator_async -from google.api_core.operations_v1 import operations_client_config +from google.api_core import retry as retries +from google.api_core import timeout as timeouts from google.longrunning import operations_pb2 @@ -41,39 +43,44 @@ class OperationsAsyncClient: the default configuration is used. """ - def __init__(self, channel, client_config=operations_client_config.config): + def __init__(self, channel, client_config=None): # Create the gRPC client stub with gRPC AsyncIO channel. self.operations_stub = operations_pb2.OperationsStub(channel) - # Create all wrapped methods using the interface configuration. - # The interface config contains all of the default settings for retry - # and timeout for each RPC method. - interfaces = client_config["interfaces"] - interface_config = interfaces["google.longrunning.Operations"] - method_configs = gapic_v1.config_async.parse_method_configs(interface_config) + default_retry = retries.Retry( + initial=0.1, # seconds + maximum=60.0, # seconds + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + timeout=600.0, # seconds + ) + default_timeout = timeouts.TimeToDeadlineTimeout(timeout=600.0) self._get_operation = gapic_v1.method_async.wrap_method( self.operations_stub.GetOperation, - default_retry=method_configs["GetOperation"].retry, - default_timeout=method_configs["GetOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._list_operations = gapic_v1.method_async.wrap_method( self.operations_stub.ListOperations, - default_retry=method_configs["ListOperations"].retry, - default_timeout=method_configs["ListOperations"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._cancel_operation = gapic_v1.method_async.wrap_method( self.operations_stub.CancelOperation, - default_retry=method_configs["CancelOperation"].retry, - default_timeout=method_configs["CancelOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._delete_operation = gapic_v1.method_async.wrap_method( self.operations_stub.DeleteOperation, - default_retry=method_configs["DeleteOperation"].retry, - default_timeout=method_configs["DeleteOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) async def get_operation( diff --git a/google/api_core/operations_v1/operations_client.py b/google/api_core/operations_v1/operations_client.py index e48eac01..3ddd3c47 100644 --- a/google/api_core/operations_v1/operations_client.py +++ b/google/api_core/operations_v1/operations_client.py @@ -37,9 +37,11 @@ import functools +from google.api_core import exceptions as core_exceptions from google.api_core import gapic_v1 from google.api_core import page_iterator -from google.api_core.operations_v1 import operations_client_config +from google.api_core import retry as retries +from google.api_core import timeout as timeouts from google.longrunning import operations_pb2 @@ -54,39 +56,44 @@ class OperationsClient(object): the default configuration is used. 
""" - def __init__(self, channel, client_config=operations_client_config.config): + def __init__(self, channel, client_config=None): # Create the gRPC client stub. self.operations_stub = operations_pb2.OperationsStub(channel) - # Create all wrapped methods using the interface configuration. - # The interface config contains all of the default settings for retry - # and timeout for each RPC method. - interfaces = client_config["interfaces"] - interface_config = interfaces["google.longrunning.Operations"] - method_configs = gapic_v1.config.parse_method_configs(interface_config) + default_retry = retries.Retry( + initial=0.1, # seconds + maximum=60.0, # seconds + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + timeout=600.0, # seconds + ) + default_timeout = timeouts.TimeToDeadlineTimeout(timeout=600.0) self._get_operation = gapic_v1.method.wrap_method( self.operations_stub.GetOperation, - default_retry=method_configs["GetOperation"].retry, - default_timeout=method_configs["GetOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._list_operations = gapic_v1.method.wrap_method( self.operations_stub.ListOperations, - default_retry=method_configs["ListOperations"].retry, - default_timeout=method_configs["ListOperations"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._cancel_operation = gapic_v1.method.wrap_method( self.operations_stub.CancelOperation, - default_retry=method_configs["CancelOperation"].retry, - default_timeout=method_configs["CancelOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) self._delete_operation = gapic_v1.method.wrap_method( self.operations_stub.DeleteOperation, - default_retry=method_configs["DeleteOperation"].retry, - default_timeout=method_configs["DeleteOperation"].timeout, + default_retry=default_retry, + default_timeout=default_timeout, ) # Service calls diff --git a/google/api_core/operations_v1/operations_client_config.py b/google/api_core/operations_v1/operations_client_config.py index 6cf95753..70cfd70a 100644 --- a/google/api_core/operations_v1/operations_client_config.py +++ b/google/api_core/operations_v1/operations_client_config.py @@ -14,6 +14,7 @@ """gapic configuration for the googe.longrunning.operations client.""" +# DEPRECATED: retry and timeout classes are instantiated directly config = { "interfaces": { "google.longrunning.Operations": { diff --git a/google/api_core/operations_v1/transports/rest.py b/google/api_core/operations_v1/transports/rest.py index 27ed7661..bb6cd99c 100644 --- a/google/api_core/operations_v1/transports/rest.py +++ b/google/api_core/operations_v1/transports/rest.py @@ -14,6 +14,7 @@ # limitations under the License. # +import re from typing import Callable, Dict, Optional, Sequence, Tuple, Union from requests import __version__ as requests_version @@ -73,6 +74,7 @@ def __init__( always_use_jwt_access: Optional[bool] = False, url_scheme: str = "https", http_options: Optional[Dict] = None, + path_prefix: str = "v1", ) -> None: """Instantiate the transport. @@ -108,12 +110,24 @@ def __init__( http_options: a dictionary of http_options for transcoding, to override the defaults from operatons.proto. Each method has an entry with the corresponding http rules as value. + path_prefix: path prefix (usually represents API version). Set to + "v1" by default. """ # Run the base constructor # TODO(yon-mg): resolve other ctor params i.e. 
scopes, quota, etc. # TODO: When custom host (api_endpoint) is set, `scopes` must *also* be set on the # credentials object + maybe_url_match = re.match("^(?Phttp(?:s)?://)?(?P.*)$", host) + if maybe_url_match is None: + raise ValueError( + f"Unexpected hostname structure: {host}" + ) # pragma: NO COVER + + url_match_items = maybe_url_match.groupdict() + + host = f"{url_scheme}://{host}" if not url_match_items["scheme"] else host + super().__init__( host=host, credentials=credentials, @@ -127,6 +141,7 @@ def __init__( self._session.configure_mtls_channel(client_cert_source_for_mtls) self._prep_wrapped_messages(client_info) self._http_options = http_options or {} + self._path_prefix = path_prefix def _list_operations( self, @@ -157,7 +172,10 @@ def _list_operations( """ http_options = [ - {"method": "get", "uri": "/v1/{name=operations}"}, + { + "method": "get", + "uri": "/{}/{{name=**}}/operations".format(self._path_prefix), + }, ] if "google.longrunning.Operations.ListOperations" in self._http_options: http_options = self._http_options[ @@ -188,7 +206,7 @@ def _list_operations( headers = dict(metadata) headers["Content-Type"] = "application/json" response = getattr(self._session, method)( - "https://{host}{uri}".format(host=self._host, uri=uri), + "{host}{uri}".format(host=self._host, uri=uri), timeout=timeout, headers=headers, params=rest_helpers.flatten_query_params(query_params), @@ -234,7 +252,10 @@ def _get_operation( """ http_options = [ - {"method": "get", "uri": "/v1/{name=operations/**}"}, + { + "method": "get", + "uri": "/{}/{{name=**/operations/*}}".format(self._path_prefix), + }, ] if "google.longrunning.Operations.GetOperation" in self._http_options: http_options = self._http_options[ @@ -265,7 +286,7 @@ def _get_operation( headers = dict(metadata) headers["Content-Type"] = "application/json" response = getattr(self._session, method)( - "https://{host}{uri}".format(host=self._host, uri=uri), + "{host}{uri}".format(host=self._host, uri=uri), timeout=timeout, headers=headers, params=rest_helpers.flatten_query_params(query_params), @@ -304,7 +325,10 @@ def _delete_operation( """ http_options = [ - {"method": "delete", "uri": "/v1/{name=operations/**}"}, + { + "method": "delete", + "uri": "/{}/{{name=**/operations/*}}".format(self._path_prefix), + }, ] if "google.longrunning.Operations.DeleteOperation" in self._http_options: http_options = self._http_options[ @@ -335,7 +359,7 @@ def _delete_operation( headers = dict(metadata) headers["Content-Type"] = "application/json" response = getattr(self._session, method)( - "https://{host}{uri}".format(host=self._host, uri=uri), + "{host}{uri}".format(host=self._host, uri=uri), timeout=timeout, headers=headers, params=rest_helpers.flatten_query_params(query_params), @@ -371,7 +395,11 @@ def _cancel_operation( """ http_options = [ - {"method": "post", "uri": "/v1/{name=operations/**}:cancel", "body": "*"}, + { + "method": "post", + "uri": "/{}/{{name=**/operations/*}}:cancel".format(self._path_prefix), + "body": "*", + }, ] if "google.longrunning.Operations.CancelOperation" in self._http_options: http_options = self._http_options[ @@ -411,7 +439,7 @@ def _cancel_operation( headers = dict(metadata) headers["Content-Type"] = "application/json" response = getattr(self._session, method)( - "https://{host}{uri}".format(host=self._host, uri=uri), + "{host}{uri}".format(host=self._host, uri=uri), timeout=timeout, headers=headers, params=rest_helpers.flatten_query_params(query_params), diff --git a/google/api_core/retry.py b/google/api_core/retry.py 
index ce496937..f9207a12 100644 --- a/google/api_core/retry.py +++ b/google/api_core/retry.py @@ -139,15 +139,15 @@ def exponential_sleep_generator(initial, maximum, multiplier=_DEFAULT_DELAY_MULT Yields: float: successive sleep intervals. """ - delay = initial + delay = min(initial, maximum) while True: - # Introduce jitter by yielding a delay that is uniformly distributed - # to average out to the delay time. - yield min(random.uniform(0.0, delay * 2.0), maximum) - delay = delay * multiplier + yield random.uniform(0.0, delay) + delay = min(delay * multiplier, maximum) -def retry_target(target, predicate, sleep_generator, deadline, on_error=None): +def retry_target( + target, predicate, sleep_generator, timeout=None, on_error=None, **kwargs +): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the @@ -161,12 +161,12 @@ def retry_target(target, predicate, sleep_generator, deadline, on_error=None): It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. - deadline (float): How long to keep retrying the target. The last sleep - period is shortened as necessary, so that the last retry runs at - ``deadline`` (and not considerably beyond it). + timeout (float): How long to keep retrying the target. on_error (Callable[Exception]): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. + deadline (float): DEPRECATED: use ``timeout`` instead. For backward + compatibility, if specified it will override ``timeout`` parameter. Returns: Any: the return value of the target function. @@ -176,12 +176,13 @@ def retry_target(target, predicate, sleep_generator, deadline, on_error=None): ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. """ - if deadline is not None: - deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta( - seconds=deadline - ) + + timeout = kwargs.get("deadline", timeout) + + if timeout is not None: + deadline = datetime_helpers.utcnow() + datetime.timedelta(seconds=timeout) else: - deadline_datetime = None + deadline = None last_exc = None @@ -198,19 +199,17 @@ def retry_target(target, predicate, sleep_generator, deadline, on_error=None): if on_error is not None: on_error(exc) - now = datetime_helpers.utcnow() - - if deadline_datetime is not None: - if deadline_datetime <= now: + if deadline is not None: + next_attempt_time = datetime_helpers.utcnow() + datetime.timedelta( + seconds=sleep + ) + if deadline < next_attempt_time: raise exceptions.RetryError( "Deadline of {:.1f}s exceeded while calling target function".format( - deadline + timeout ), last_exc, ) from last_exc - else: - time_to_deadline = (deadline_datetime - now).total_seconds() - sleep = min(time_to_deadline, sleep) _LOGGER.debug( "Retrying due to {}, sleeping {:.1f}s ...".format(last_exc, sleep) @@ -223,12 +222,77 @@ def retry_target(target, predicate, sleep_generator, deadline, on_error=None): class Retry(object): """Exponential retry decorator. - This class is a decorator used to add exponential back-off retry behavior - to an RPC call. + This class is a decorator used to add retry or polling behavior to an RPC + call. Although the default behavior is to retry transient API errors, a different predicate can be provided to retry other exceptions. 
+ There two important concepts that retry/polling behavior may operate on, + Deadline and Timeout, which need to be properly defined for the correct + usage of this class and the rest of the library. + + Deadline: a fixed point in time by which a certain operation must + terminate. For example, if a certain operation has a deadline + "2022-10-18T23:30:52.123Z" it must terminate (successfully or with an + error) by that time, regardless of when it was started or whether it + was started at all. + + Timeout: the maximum duration of time after which a certain operation + must terminate (successfully or with an error). The countdown begins right + after an operation was started. For example, if an operation was started at + 09:24:00 with timeout of 75 seconds, it must terminate no later than + 09:25:15. + + Unfortunately, in the past this class (and the api-core library as a whole) has not been + properly distinguishing the concepts of "timeout" and "deadline", and the + ``deadline`` parameter has meant ``timeout``. That is why + ``deadline`` has been deprecated and ``timeout`` should be used instead. If the + ``deadline`` parameter is set, it will override the ``timeout`` parameter. In other words, + ``retry.deadline`` should be treated as just a deprecated alias for + ``retry.timeout``. + + Said another way, it is safe to assume that this class and the rest of this + library operate in terms of timeouts (not deadlines) unless explicitly + noted the usage of deadline semantics. + + It is also important to + understand the three most common applications of the Timeout concept in the + context of this library. + + Usually the generic Timeout term may stand for one of the following actual + timeouts: RPC Timeout, Retry Timeout, or Polling Timeout. + + RPC Timeout: a value supplied by the client to the server so + that the server side knows the maximum amount of time it is expected to + spend handling that specifc RPC. For example, in the case of gRPC transport, + RPC Timeout is represented by setting "grpc-timeout" header in the HTTP2 + request. The `timeout` property of this class normally never represents the + RPC Timeout as it is handled separately by the ``google.api_core.timeout`` + module of this library. + + Retry Timeout: this is the most common meaning of the ``timeout`` property + of this class, and defines how long a certain RPC may be retried in case + the server returns an error. + + Polling Timeout: defines how long the + client side is allowed to call the polling RPC repeatedly to check a status of a + long-running operation. Each polling RPC is + expected to succeed (its errors are supposed to be handled by the retry + logic). The decision as to whether a new polling attempt needs to be made is based + not on the RPC status code but on the status of the returned + status of an operation. In other words: we will poll a long-running operation until the operation is done or the polling timeout expires. Each poll will inform us of the status of the operation. The poll consists of an RPC to the server that may itself be retried as per the poll-specific retry settings in case of errors. The operation-level retry settings do NOT apply to polling-RPC retries. + + With the actual timeout types being defined above, the client libraries + often refer to just Timeout without clarifying which type specifically + that is. In that case the actual timeout type (sometimes also refered to as + Logical Timeout) can be determined from the context. If it is a unary rpc + call (i.e. 
a regular one) Timeout usually stands for the RPC Timeout (if + provided directly as a standalone value) or Retry Timeout (if provided as + ``retry.timeout`` property of the unary RPC's retry config). For + ``Operation`` or ``PollingFuture`` in general Timeout stands for + Polling Timeout. + Args: predicate (Callable[Exception]): A callable that should return ``True`` if the given exception is retryable. @@ -236,9 +300,9 @@ class Retry(object): must be greater than 0. maximum (float): The maximum amount of time to delay in seconds. multiplier (float): The multiplier applied to the delay. - deadline (float): How long to keep retrying in seconds. The last sleep - period is shortened as necessary, so that the last retry runs at - ``deadline`` (and not considerably beyond it). + timeout (float): How long to keep retrying, in seconds. + deadline (float): DEPRECATED: use `timeout` instead. For backward + compatibility, if specified it will override the ``timeout`` parameter. """ def __init__( @@ -247,14 +311,16 @@ def __init__( initial=_DEFAULT_INITIAL_DELAY, maximum=_DEFAULT_MAXIMUM_DELAY, multiplier=_DEFAULT_DELAY_MULTIPLIER, - deadline=_DEFAULT_DEADLINE, + timeout=_DEFAULT_DEADLINE, on_error=None, + **kwargs ): self._predicate = predicate self._initial = initial self._multiplier = multiplier self._maximum = maximum - self._deadline = deadline + self._timeout = kwargs.get("deadline", timeout) + self._deadline = self._timeout self._on_error = on_error def __call__(self, func, on_error=None): @@ -284,7 +350,7 @@ def retry_wrapped_func(*args, **kwargs): target, self._predicate, sleep_generator, - self._deadline, + self._timeout, on_error=on_error, ) @@ -292,23 +358,45 @@ def retry_wrapped_func(*args, **kwargs): @property def deadline(self): - return self._deadline + """ + DEPRECATED: use ``timeout`` instead. Refer to the ``Retry`` class + documentation for details. + """ + return self._timeout + + @property + def timeout(self): + return self._timeout def with_deadline(self, deadline): - """Return a copy of this retry with the given deadline. + """Return a copy of this retry with the given timeout. + + DEPRECATED: use :meth:`with_timeout` instead. Refer to the ``Retry`` class + documentation for details. + + Args: + deadline (float): How long to keep retrying in seconds. + + Returns: + Retry: A new retry instance with the given timeout. + """ + return self.with_timeout(timeout=deadline) + + def with_timeout(self, timeout): + """Return a copy of this retry with the given timeout. Args: - deadline (float): How long to keep retrying. + timeout (float): How long to keep retrying, in seconds. Returns: - Retry: A new retry instance with the given deadline. + Retry: A new retry instance with the given timeout. 
""" return Retry( predicate=self._predicate, initial=self._initial, maximum=self._maximum, multiplier=self._multiplier, - deadline=deadline, + timeout=timeout, on_error=self._on_error, ) @@ -327,7 +415,7 @@ def with_predicate(self, predicate): initial=self._initial, maximum=self._maximum, multiplier=self._multiplier, - deadline=self._deadline, + timeout=self._timeout, on_error=self._on_error, ) @@ -348,19 +436,19 @@ def with_delay(self, initial=None, maximum=None, multiplier=None): initial=initial if initial is not None else self._initial, maximum=maximum if maximum is not None else self._maximum, multiplier=multiplier if multiplier is not None else self._multiplier, - deadline=self._deadline, + timeout=self._timeout, on_error=self._on_error, ) def __str__(self): return ( "".format( + "multiplier={:.1f}, timeout={}, on_error={}>".format( self._predicate, self._initial, self._maximum, self._multiplier, - self._deadline, + self._timeout, # timeout can be None, thus no {:.1f} self._on_error, ) ) diff --git a/google/api_core/retry_async.py b/google/api_core/retry_async.py index 68a25597..81698838 100644 --- a/google/api_core/retry_async.py +++ b/google/api_core/retry_async.py @@ -68,9 +68,12 @@ async def check_if_exists(): _DEFAULT_MAXIMUM_DELAY = 60.0 # seconds _DEFAULT_DELAY_MULTIPLIER = 2.0 _DEFAULT_DEADLINE = 60.0 * 2.0 # seconds +_DEFAULT_TIMEOUT = 60.0 * 2.0 # seconds -async def retry_target(target, predicate, sleep_generator, deadline, on_error=None): +async def retry_target( + target, predicate, sleep_generator, timeout=None, on_error=None, **kwargs +): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the @@ -84,12 +87,12 @@ async def retry_target(target, predicate, sleep_generator, deadline, on_error=No It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. - deadline (float): How long to keep retrying the target. The last sleep - period is shortened as necessary, so that the last retry runs at - ``deadline`` (and not considerably beyond it). + timeout (float): How long to keep retrying the target, in seconds. on_error (Callable[Exception]): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. + deadline (float): DEPRECATED use ``timeout`` instead. For backward + compatibility, if set it will override the ``timeout`` parameter. Returns: Any: the return value of the target function. @@ -99,9 +102,12 @@ async def retry_target(target, predicate, sleep_generator, deadline, on_error=No ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. """ + + timeout = kwargs.get("deadline", timeout) + deadline_dt = ( - (datetime_helpers.utcnow() + datetime.timedelta(seconds=deadline)) - if deadline + (datetime_helpers.utcnow() + datetime.timedelta(seconds=timeout)) + if timeout else None ) @@ -132,8 +138,8 @@ async def retry_target(target, predicate, sleep_generator, deadline, on_error=No # Chains the raising RetryError with the root cause error, # which helps observability and debugability. raise exceptions.RetryError( - "Deadline of {:.1f}s exceeded while calling target function".format( - deadline + "Timeout of {:.1f}s exceeded while calling target function".format( + timeout ), last_exc, ) from last_exc @@ -165,12 +171,12 @@ class AsyncRetry: must be greater than 0. 
maximum (float): The maximum amout of time to delay in seconds. multiplier (float): The multiplier applied to the delay. - deadline (float): How long to keep retrying in seconds. The last sleep - period is shortened as necessary, so that the last retry runs at - ``deadline`` (and not considerably beyond it). + timeout (float): How long to keep retrying in seconds. on_error (Callable[Exception]): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. + deadline (float): DEPRECATED use ``timeout`` instead. If set it will + override ``timeout`` parameter. """ def __init__( @@ -179,14 +185,16 @@ def __init__( initial=_DEFAULT_INITIAL_DELAY, maximum=_DEFAULT_MAXIMUM_DELAY, multiplier=_DEFAULT_DELAY_MULTIPLIER, - deadline=_DEFAULT_DEADLINE, + timeout=_DEFAULT_TIMEOUT, on_error=None, + **kwargs ): self._predicate = predicate self._initial = initial self._multiplier = multiplier self._maximum = maximum - self._deadline = deadline + self._timeout = kwargs.get("deadline", timeout) + self._deadline = self._timeout self._on_error = on_error def __call__(self, func, on_error=None): @@ -216,7 +224,7 @@ async def retry_wrapped_func(*args, **kwargs): target, self._predicate, sleep_generator, - self._deadline, + self._timeout, on_error=on_error, ) @@ -228,7 +236,7 @@ def _replace( initial=None, maximum=None, multiplier=None, - deadline=None, + timeout=None, on_error=None, ): return AsyncRetry( @@ -236,12 +244,13 @@ def _replace( initial=initial or self._initial, maximum=maximum or self._maximum, multiplier=multiplier or self._multiplier, - deadline=deadline or self._deadline, + timeout=timeout or self._timeout, on_error=on_error or self._on_error, ) def with_deadline(self, deadline): """Return a copy of this retry with the given deadline. + DEPRECATED: use :meth:`with_timeout` instead. Args: deadline (float): How long to keep retrying. @@ -249,7 +258,18 @@ def with_deadline(self, deadline): Returns: AsyncRetry: A new retry instance with the given deadline. """ - return self._replace(deadline=deadline) + return self._replace(timeout=deadline) + + def with_timeout(self, timeout): + """Return a copy of this retry with the given timeout. + + Args: + timeout (float): How long to keep retrying, in seconds. + + Returns: + AsyncRetry: A new retry instance with the given timeout. + """ + return self._replace(timeout=timeout) def with_predicate(self, predicate): """Return a copy of this retry with the given predicate. @@ -280,12 +300,12 @@ def with_delay(self, initial=None, maximum=None, multiplier=None): def __str__(self): return ( "".format( + "multiplier={:.1f}, timeout={:.1f}, on_error={}>".format( self._predicate, self._initial, self._maximum, self._multiplier, - self._deadline, + self._timeout, self._on_error, ) ) diff --git a/google/api_core/timeout.py b/google/api_core/timeout.py index 73232180..3546d540 100644 --- a/google/api_core/timeout.py +++ b/google/api_core/timeout.py @@ -14,8 +14,9 @@ """Decorators for applying timeout arguments to functions. -These decorators are used to wrap API methods to apply either a constant -or exponential timeout argument. +These decorators are used to wrap API methods to apply either a +Deadline-dependent (recommended), constant (DEPRECATED) or exponential +(DEPRECATED) timeout argument. 
For example, imagine an API method that can take a while to return results,
such as one that might block until a resource is ready:
@@ -66,9 +67,69 @@ def is_thing_ready(timeout=None):
 _DEFAULT_DEADLINE = None
+class TimeToDeadlineTimeout(object):
+    """A decorator that decreases the timeout set for an RPC based on how much
+    time is left until its deadline. The deadline is calculated as
+    ``now + initial_timeout`` when this decorator is first called for an RPC.
+
+    In other words, this decorator implements deadline semantics in terms of a
+    sequence of decreasing timeouts t0 > t1 > t2 ... tn >= 0.
+
+    Args:
+        timeout (Optional[float]): the timeout (in seconds) to apply to the
+            wrapped function. If `None`, the target function is expected to
+            never time out.
+    """
+
+    def __init__(self, timeout=None, clock=datetime_helpers.utcnow):
+        self._timeout = timeout
+        self._clock = clock
+
+    def __call__(self, func):
+        """Apply the timeout decorator.
+
+        Args:
+            func (Callable): The function to apply the timeout argument to.
+                This function must accept a timeout keyword argument.
+
+        Returns:
+            Callable: The wrapped function.
+        """
+
+        first_attempt_timestamp = self._clock().timestamp()
+
+        @functools.wraps(func)
+        def func_with_timeout(*args, **kwargs):
+            """Wrapped function that adds timeout."""
+
+            remaining_timeout = self._timeout
+            if remaining_timeout is not None:
+                # All calculations are in seconds
+                now_timestamp = self._clock().timestamp()
+
+                # To avoid usage of nonlocal but still have round timeout
+                # numbers for the first attempt (in most cases the only
+                # attempt made for an RPC).
+                if now_timestamp - first_attempt_timestamp < 0.001:
+                    now_timestamp = first_attempt_timestamp
+
+                time_since_first_attempt = now_timestamp - first_attempt_timestamp
+                # Avoid setting negative timeout
+                kwargs["timeout"] = max(0, self._timeout - time_since_first_attempt)
+
+            return func(*args, **kwargs)
+
+        return func_with_timeout
+
+    def __str__(self):
+        return "<TimeToDeadlineTimeout timeout={:.1f}>".format(self._timeout)
+
+
 class ConstantTimeout(object):
     """A decorator that adds a constant timeout argument.
+    DEPRECATED: use ``TimeToDeadlineTimeout`` instead.
+
     This is effectively equivalent to
     ``functools.partial(func, timeout=timeout)``.
@@ -140,6 +201,9 @@ def _exponential_timeout_generator(initial, maximum, multiplier, deadline):
 class ExponentialTimeout(object):
     """A decorator that adds an exponentially increasing timeout argument.
+    DEPRECATED: the concept of incrementing timeout exponentially has been
+    deprecated. Use ``TimeToDeadlineTimeout`` instead.
+
     This is useful if a function is called multiple times. Each time the
     function is called this decorator will calculate a new timeout parameter
     based on the number of times the function has been called.
diff --git a/noxfile.py b/noxfile.py
index 2d8f1e02..21f0f7b2 100644
--- a/noxfile.py
+++ b/noxfile.py
@@ -26,7 +26,7 @@
 # Black and flake8 clash on the syntax for ignoring flake8's F401 in this file.
 BLACK_EXCLUDES = ["--exclude", "^/google/api_core/operations_v1/__init__.py"]
-DEFAULT_PYTHON_VERSION = "3.7"
+DEFAULT_PYTHON_VERSION = "3.10"
 CURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()
 # 'docfx' is excluded since it only needs to run in 'docs-presubmit'
@@ -95,7 +95,16 @@ def default(session, install_grpc=True):
     )
     # Install all test dependencies, then install this package in-place.
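Back to the ``TimeToDeadlineTimeout`` decorator added above: a small sketch with a deterministic clock makes the decreasing per-attempt timeout visible (the fake clock and the ``echo`` target are illustrative only):

import datetime

from google.api_core import timeout as timeouts

start = datetime.datetime(2022, 1, 1)
ticks = iter([start, start, start + datetime.timedelta(seconds=3)])

def echo(timeout=None):
    return timeout

wrapped = timeouts.TimeToDeadlineTimeout(10.0, clock=lambda: next(ticks))(echo)

print(wrapped())  # 10.0 -- the first attempt gets the full budget
print(wrapped())  # 7.0  -- three seconds later only the remaining budget is passed down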
- session.install("dataclasses", "mock", "pytest", "pytest-cov", "pytest-xdist") + session.install( + "dataclasses", + "mock", + # Revert to just "pytest" once + # https://github.com/pytest-dev/pytest/issues/10451 is fixed + "pytest<7.2.0", + "pytest-cov", + "pytest-xdist", + ) + if install_grpc: session.install("-e", ".[grpc]", "-c", constraints_path) else: @@ -192,7 +201,7 @@ def mypy(session): session.run("mypy", "google", "tests") -@nox.session(python="3.8") +@nox.session(python=DEFAULT_PYTHON_VERSION) def cover(session): """Run the final coverage report. @@ -204,12 +213,12 @@ def cover(session): session.run("coverage", "erase") -@nox.session(python="3.8") +@nox.session(python=DEFAULT_PYTHON_VERSION) def docs(session): """Build the docs for this library.""" session.install("-e", ".[grpc]") - session.install("sphinx==4.0.1", "alabaster", "recommonmark") + session.install("sphinx==4.2.0", "alabaster", "recommonmark") shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True) session.run( diff --git a/tests/asyncio/gapic/test_method_async.py b/tests/asyncio/gapic/test_method_async.py index 11847da7..02d883f6 100644 --- a/tests/asyncio/gapic/test_method_async.py +++ b/tests/asyncio/gapic/test_method_async.py @@ -198,41 +198,6 @@ async def test_wrap_method_with_overriding_retry_and_timeout(unused_sleep): method.assert_called_with(timeout=22, metadata=mock.ANY) -@mock.patch("asyncio.sleep") -@mock.patch( - "google.api_core.datetime_helpers.utcnow", - side_effect=_utcnow_monotonic(), - autospec=True, -) -@pytest.mark.asyncio -async def test_wrap_method_with_overriding_retry_deadline(utcnow, unused_sleep): - fake_call = grpc_helpers_async.FakeUnaryUnaryCall(42) - method = mock.Mock( - spec=aio.UnaryUnaryMultiCallable, - side_effect=([exceptions.InternalServerError(None)] * 4) + [fake_call], - ) - - default_retry = retry_async.AsyncRetry() - default_timeout = timeout.ExponentialTimeout(deadline=60) - wrapped_method = gapic_v1.method_async.wrap_method( - method, default_retry, default_timeout - ) - - # Overriding only the retry's deadline should also override the timeout's - # deadline. - result = await wrapped_method(retry=default_retry.with_deadline(30)) - - assert result == 42 - timeout_args = [call[1]["timeout"] for call in method.call_args_list] - assert timeout_args == [5.0, 10.0, 20.0, 26.0, 25.0] - assert utcnow.call_count == ( - 1 - + 1 # Compute wait_for timeout in retry_async - + 5 # First to set the deadline. 
- + 5 # One for each min(timeout, maximum, (DEADLINE - NOW).seconds) - ) - - @pytest.mark.asyncio async def test_wrap_method_with_overriding_timeout_as_a_number(): fake_call = grpc_helpers_async.FakeUnaryUnaryCall(42) diff --git a/tests/asyncio/operations_v1/test_operations_async_client.py b/tests/asyncio/operations_v1/test_operations_async_client.py index 47c3b4b4..34236da7 100644 --- a/tests/asyncio/operations_v1/test_operations_async_client.py +++ b/tests/asyncio/operations_v1/test_operations_async_client.py @@ -17,7 +17,7 @@ try: from grpc import aio -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import grpc_helpers_async diff --git a/tests/asyncio/test_grpc_helpers_async.py b/tests/asyncio/test_grpc_helpers_async.py index 2d0a1bcd..2f8f3460 100644 --- a/tests/asyncio/test_grpc_helpers_async.py +++ b/tests/asyncio/test_grpc_helpers_async.py @@ -18,11 +18,11 @@ try: import grpc from grpc import aio -except ImportError: +except ImportError: # pragma: NO COVER grpc = aio = None -if grpc is None: +if grpc is None: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) diff --git a/tests/asyncio/test_operation_async.py b/tests/asyncio/test_operation_async.py index 26ad7cef..127ba634 100644 --- a/tests/asyncio/test_operation_async.py +++ b/tests/asyncio/test_operation_async.py @@ -18,7 +18,7 @@ try: import grpc # noqa: F401 -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import exceptions diff --git a/tests/asyncio/test_retry_async.py b/tests/asyncio/test_retry_async.py index 873caaf1..14807eb5 100644 --- a/tests/asyncio/test_retry_async.py +++ b/tests/asyncio/test_retry_async.py @@ -116,7 +116,7 @@ async def test_retry_target_deadline_exceeded(utcnow, sleep): await retry_async.retry_target(target, predicate, range(10), deadline=10) assert exc_info.value.cause == exception - assert exc_info.match("Deadline of 10.0s exceeded") + assert exc_info.match("Timeout of 10.0s exceeded") assert exc_info.match("last exception: meep") assert target.call_count == 2 @@ -253,7 +253,7 @@ def if_exception_type(exc): assert re.match( ( r", " - r"initial=1.0, maximum=60.0, multiplier=2.0, deadline=120.0, " + r"initial=1.0, maximum=60.0, multiplier=2.0, timeout=120.0, " r"on_error=None>" ), str(retry_), @@ -276,8 +276,7 @@ async def test___call___and_execute_success(self, sleep): target.assert_called_once_with("meep") sleep.assert_not_called() - # Make uniform return half of its maximum, which is the calculated sleep time. - @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("asyncio.sleep", autospec=True) @pytest.mark.asyncio async def test___call___and_execute_retry(self, sleep, uniform): @@ -302,8 +301,7 @@ async def test___call___and_execute_retry(self, sleep, uniform): sleep.assert_called_once_with(retry_._initial) assert on_error.call_count == 1 - # Make uniform return half of its maximum, which is the calculated sleep time. 
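The comment being removed above no longer describes the behavior: the randomized delay is now drawn from ``[0, delay]`` (so its expected value is ``delay / 2``), and the tests therefore pin ``random.uniform`` to its upper bound instead of its midpoint. A rough sketch of the new backoff semantics, written as an approximation rather than the library's exact implementation:

import random

def sketch_sleep_generator(initial, maximum, multiplier=2.0):
    # Each yielded sleep is uniform in [0, delay]; delay itself is clamped
    # to `maximum`, so it can no longer grow without bound.
    delay = min(initial, maximum)
    while True:
        yield random.uniform(0.0, delay)
        delay = min(delay * multiplier, maximum)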
- @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("asyncio.sleep", autospec=True) @pytest.mark.asyncio async def test___call___and_execute_retry_hitting_deadline(self, sleep, uniform): @@ -376,8 +374,7 @@ async def test___init___without_retry_executed(self, sleep): sleep.assert_not_called() _some_function.assert_not_called() - # Make uniform return half of its maximum, which is the calculated sleep time. - @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("asyncio.sleep", autospec=True) @pytest.mark.asyncio async def test___init___when_retry_is_executed(self, sleep, uniform): diff --git a/tests/unit/future/test_polling.py b/tests/unit/future/test_polling.py index 2381d036..f5d9b4f1 100644 --- a/tests/unit/future/test_polling.py +++ b/tests/unit/future/test_polling.py @@ -24,7 +24,7 @@ class PollingFutureImpl(polling.PollingFuture): - def done(self): + def done(self, retry=None): return False def cancel(self): @@ -33,9 +33,6 @@ def cancel(self): def cancelled(self): return False - def running(self): - return True - def test_polling_future_constructor(): future = PollingFutureImpl() @@ -84,20 +81,23 @@ def test_invoke_callback_exception(): class PollingFutureImplWithPoll(PollingFutureImpl): - def __init__(self): + def __init__(self, max_poll_count=1): super(PollingFutureImplWithPoll, self).__init__() self.poll_count = 0 self.event = threading.Event() + self.max_poll_count = max_poll_count - def done(self, retry=polling.DEFAULT_RETRY): + def done(self, retry=None): self.poll_count += 1 + if self.max_poll_count > self.poll_count: + return False self.event.wait() self.set_result(42) return True -def test_result_with_polling(): - future = PollingFutureImplWithPoll() +def test_result_with_one_polling(): + future = PollingFutureImplWithPoll(max_poll_count=1) future.event.set() result = future.result() @@ -109,8 +109,34 @@ def test_result_with_polling(): assert future.poll_count == 1 +def test_result_with_two_pollings(): + future = PollingFutureImplWithPoll(max_poll_count=2) + + future.event.set() + result = future.result() + + assert result == 42 + assert future.poll_count == 2 + # Repeated calls should not cause additional polling + assert future.result() == result + assert future.poll_count == 2 + + +def test_result_with_two_pollings_custom_retry(): + future = PollingFutureImplWithPoll(max_poll_count=2) + + future.event.set() + result = future.result() + + assert result == 42 + assert future.poll_count == 2 + # Repeated calls should not cause additional polling + assert future.result() == result + assert future.poll_count == 2 + + class PollingFutureImplTimeout(PollingFutureImplWithPoll): - def done(self, retry=polling.DEFAULT_RETRY): + def done(self, retry=None): time.sleep(1) return False @@ -132,11 +158,11 @@ def __init__(self, errors): super(PollingFutureImplTransient, self).__init__() self._errors = errors - def done(self, retry=polling.DEFAULT_RETRY): + def done(self, retry=None): + self.poll_count += 1 if self._errors: error, self._errors = self._errors[0], self._errors[1:] raise error("testing") - self.poll_count += 1 self.set_result(42) return True @@ -144,17 +170,17 @@ def done(self, retry=polling.DEFAULT_RETRY): def test_result_transient_error(): future = PollingFutureImplTransient( ( - exceptions.TooManyRequests, - exceptions.InternalServerError, - 
exceptions.BadGateway, + polling._OperationNotComplete, + polling._OperationNotComplete, + polling._OperationNotComplete, ) ) result = future.result() assert result == 42 - assert future.poll_count == 1 + assert future.poll_count == 4 # Repeated calls should not cause additional polling assert future.result() == result - assert future.poll_count == 1 + assert future.poll_count == 4 def test_callback_background_thread(): @@ -197,23 +223,23 @@ def test_double_callback_background_thread(): class PollingFutureImplWithoutRetry(PollingFutureImpl): - def done(self): + def done(self, retry=None): return True - def result(self): + def result(self, timeout=None, retry=None, polling=None): return super(PollingFutureImplWithoutRetry, self).result() - def _blocking_poll(self, timeout): + def _blocking_poll(self, timeout=None, retry=None, polling=None): return super(PollingFutureImplWithoutRetry, self)._blocking_poll( timeout=timeout ) class PollingFutureImplWith_done_or_raise(PollingFutureImpl): - def done(self): + def done(self, retry=None): return True - def _done_or_raise(self): + def _done_or_raise(self, retry=None): return super(PollingFutureImplWith_done_or_raise, self)._done_or_raise() @@ -223,12 +249,12 @@ def test_polling_future_without_retry(): ) future = PollingFutureImplWithoutRetry() assert future.done() - assert future.running() + assert not future.running() assert future.result() is None with mock.patch.object(future, "done") as done_mock: future._done_or_raise() - done_mock.assert_called_once_with() + done_mock.assert_called_once_with(retry=None) with mock.patch.object(future, "done") as done_mock: future._done_or_raise(retry=custom_retry) @@ -238,5 +264,5 @@ def test_polling_future_without_retry(): def test_polling_future_with__done_or_raise(): future = PollingFutureImplWith_done_or_raise() assert future.done() - assert future.running() + assert not future.running() assert future.result() is None diff --git a/tests/unit/gapic/test_method.py b/tests/unit/gapic/test_method.py index 9778d23a..b1035413 100644 --- a/tests/unit/gapic/test_method.py +++ b/tests/unit/gapic/test_method.py @@ -39,27 +39,6 @@ def _utcnow_monotonic(): curr_value += delta -def test__determine_timeout(): - # Check _determine_timeout always returns a Timeout object. 
- timeout_type_timeout = timeout.ConstantTimeout(600.0) - returned_timeout = google.api_core.gapic_v1.method._determine_timeout( - 600.0, 600.0, None - ) - assert isinstance(returned_timeout, timeout.ConstantTimeout) - returned_timeout = google.api_core.gapic_v1.method._determine_timeout( - 600.0, timeout_type_timeout, None - ) - assert isinstance(returned_timeout, timeout.ConstantTimeout) - returned_timeout = google.api_core.gapic_v1.method._determine_timeout( - timeout_type_timeout, 600.0, None - ) - assert isinstance(returned_timeout, timeout.ConstantTimeout) - returned_timeout = google.api_core.gapic_v1.method._determine_timeout( - timeout_type_timeout, timeout_type_timeout, None - ) - assert isinstance(returned_timeout, timeout.ConstantTimeout) - - def test_wrap_method_basic(): method = mock.Mock(spec=["__call__"], return_value=42) @@ -199,37 +178,6 @@ def test_wrap_method_with_overriding_retry_and_timeout(unusued_sleep): method.assert_called_with(timeout=22, metadata=mock.ANY) -@mock.patch("time.sleep") -@mock.patch( - "google.api_core.datetime_helpers.utcnow", - side_effect=_utcnow_monotonic(), - autospec=True, -) -def test_wrap_method_with_overriding_retry_deadline(utcnow, unused_sleep): - method = mock.Mock( - spec=["__call__"], - side_effect=([exceptions.InternalServerError(None)] * 4) + [42], - ) - default_retry = retry.Retry() - default_timeout = timeout.ExponentialTimeout(deadline=60) - wrapped_method = google.api_core.gapic_v1.method.wrap_method( - method, default_retry, default_timeout - ) - - # Overriding only the retry's deadline should also override the timeout's - # deadline. - result = wrapped_method(retry=default_retry.with_deadline(30)) - - assert result == 42 - timeout_args = [call[1]["timeout"] for call in method.call_args_list] - assert timeout_args == [5.0, 10.0, 20.0, 26.0, 25.0] - assert utcnow.call_count == ( - 1 - + 5 # First to set the deadline. 
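The test being removed here relied on the retry deadline implicitly rewriting the RPC timeout; that coupling is intentionally gone. A sketch of the per-call overrides that remain supported, modeled on the surviving tests rather than the removed one:

from unittest import mock

from google.api_core import gapic_v1, retry, timeout as timeouts

method = mock.Mock(spec=["__call__"], return_value=42)
wrapped = gapic_v1.method.wrap_method(method, retry.Retry(), timeouts.ConstantTimeout(300))

# Retry policy and RPC timeout are now overridden independently per call.
assert wrapped(retry=retry.Retry().with_timeout(30), timeout=22) == 42
method.assert_called_with(timeout=22, metadata=mock.ANY)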
- + 5 # One for each min(timeout, maximum, (DEADLINE - NOW).seconds) - ) - - def test_wrap_method_with_overriding_timeout_as_a_number(): method = mock.Mock(spec=["__call__"], return_value=42) default_retry = retry.Retry() diff --git a/tests/unit/operations_v1/test_operations_client.py b/tests/unit/operations_v1/test_operations_client.py index 187f0be3..fb4b14f1 100644 --- a/tests/unit/operations_v1/test_operations_client.py +++ b/tests/unit/operations_v1/test_operations_client.py @@ -16,12 +16,13 @@ try: import grpc # noqa: F401 -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import grpc_helpers from google.api_core import operations_v1 from google.api_core import page_iterator +from google.api_core.operations_v1 import operations_client_config from google.longrunning import operations_pb2 from google.protobuf import empty_pb2 @@ -96,3 +97,7 @@ def test_cancel_operation(): ].metadata assert len(channel.CancelOperation.requests) == 1 assert channel.CancelOperation.requests[0].name == "name" + + +def test_operations_client_config(): + assert operations_client_config.config["interfaces"] diff --git a/tests/unit/operations_v1/test_operations_rest_client.py b/tests/unit/operations_v1/test_operations_rest_client.py index 625539e2..149c463c 100644 --- a/tests/unit/operations_v1/test_operations_rest_client.py +++ b/tests/unit/operations_v1/test_operations_rest_client.py @@ -20,7 +20,7 @@ try: import grpc # noqa: F401 -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from requests import Response # noqa I201 from requests.sessions import Session @@ -121,7 +121,7 @@ def test_operations_client_from_service_account_info(client_class): assert client.transport._credentials == creds assert isinstance(client, client_class) - assert client.transport._host == "longrunning.googleapis.com:443" + assert client.transport._host == "https://longrunning.googleapis.com" @pytest.mark.parametrize( @@ -160,7 +160,7 @@ def test_operations_client_from_service_account_file(client_class): assert client.transport._credentials == creds assert isinstance(client, client_class) - assert client.transport._host == "longrunning.googleapis.com:443" + assert client.transport._host == "https://longrunning.googleapis.com" def test_operations_client_get_transport_class(): @@ -465,10 +465,7 @@ def test_list_operations_rest( actual_args = req.call_args assert actual_args.args[0] == "GET" - assert ( - actual_args.args[1] - == "https://longrunning.googleapis.com:443/v3/operations" - ) + assert actual_args.args[1] == "https://longrunning.googleapis.com/v3/operations" assert actual_args.kwargs["params"] == [ ("filter", "my_filter"), ("pageSize", 10), @@ -574,7 +571,7 @@ def test_get_operation_rest( assert actual_args.args[0] == "GET" assert ( actual_args.args[1] - == "https://longrunning.googleapis.com:443/v3/operations/sample1" + == "https://longrunning.googleapis.com/v3/operations/sample1" ) # Establish that the response is the type that we expect. 
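These assertions change because of the URL-construction fix in ``OperationsRestTransport``: the transport host now carries the scheme and no longer has ``:443`` appended. A tiny sketch of the resulting request URL (the formatting below is illustrative; only the URLs themselves come from the tests):

host = "https://longrunning.googleapis.com"   # previously "longrunning.googleapis.com:443"
name = "operations/sample1"

url = "{}/v3/{}".format(host, name)
assert url == "https://longrunning.googleapis.com/v3/operations/sample1"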
@@ -591,13 +588,11 @@ def test_get_operation_rest_failure(): response_value.status_code = 400 mock_request = mock.MagicMock() mock_request.method = "GET" - mock_request.url = ( - "https://longrunning.googleapis.com:443/v1/operations/sample1" - ) + mock_request.url = "https://longrunning.googleapis.com/v1/operations/sample1" response_value.request = mock_request req.return_value = response_value with pytest.raises(core_exceptions.GoogleAPIError): - client.get_operation("operations/sample1") + client.get_operation("sammple0/operations/sample1") def test_delete_operation_rest( @@ -619,7 +614,7 @@ def test_delete_operation_rest( assert actual_args.args[0] == "DELETE" assert ( actual_args.args[1] - == "https://longrunning.googleapis.com:443/v3/operations/sample1" + == "https://longrunning.googleapis.com/v3/operations/sample1" ) @@ -631,13 +626,11 @@ def test_delete_operation_rest_failure(): response_value.status_code = 400 mock_request = mock.MagicMock() mock_request.method = "DELETE" - mock_request.url = ( - "https://longrunning.googleapis.com:443/v1/operations/sample1" - ) + mock_request.url = "https://longrunning.googleapis.com/v1/operations/sample1" response_value.request = mock_request req.return_value = response_value with pytest.raises(core_exceptions.GoogleAPIError): - client.delete_operation(name="operations/sample1") + client.delete_operation(name="sample0/operations/sample1") def test_cancel_operation_rest(transport: str = "rest"): @@ -657,7 +650,7 @@ def test_cancel_operation_rest(transport: str = "rest"): assert actual_args.args[0] == "POST" assert ( actual_args.args[1] - == "https://longrunning.googleapis.com:443/v3/operations/sample1:cancel" + == "https://longrunning.googleapis.com/v3/operations/sample1:cancel" ) @@ -670,12 +663,12 @@ def test_cancel_operation_rest_failure(): mock_request = mock.MagicMock() mock_request.method = "POST" mock_request.url = ( - "https://longrunning.googleapis.com:443/v1/operations/sample1:cancel" + "https://longrunning.googleapis.com/v1/operations/sample1:cancel" ) response_value.request = mock_request req.return_value = response_value with pytest.raises(core_exceptions.GoogleAPIError): - client.cancel_operation(name="operations/sample1") + client.cancel_operation(name="sample0/operations/sample1") def test_credentials_transport_error(): @@ -825,7 +818,7 @@ def test_operations_host_no_port(): api_endpoint="longrunning.googleapis.com" ), ) - assert client.transport._host == "longrunning.googleapis.com:443" + assert client.transport._host == "https://longrunning.googleapis.com" def test_operations_host_with_port(): @@ -835,7 +828,7 @@ def test_operations_host_with_port(): api_endpoint="longrunning.googleapis.com:8000" ), ) - assert client.transport._host == "longrunning.googleapis.com:8000" + assert client.transport._host == "https://longrunning.googleapis.com:8000" def test_common_billing_account_path(): diff --git a/tests/unit/test_bidi.py b/tests/unit/test_bidi.py index 7fb16209..f5e2b72b 100644 --- a/tests/unit/test_bidi.py +++ b/tests/unit/test_bidi.py @@ -22,7 +22,7 @@ try: import grpc -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import bidi diff --git a/tests/unit/test_client_info.py b/tests/unit/test_client_info.py index f5eebfbe..3361fef6 100644 --- a/tests/unit/test_client_info.py +++ b/tests/unit/test_client_info.py @@ -15,7 +15,7 @@ try: import grpc -except ImportError: +except ImportError: # pragma: NO COVER grpc = None from google.api_core import 
client_info @@ -26,9 +26,9 @@ def test_constructor_defaults(): assert info.python_version is not None - if grpc is not None: + if grpc is not None: # pragma: NO COVER assert info.grpc_version is not None - else: + else: # pragma: NO COVER assert info.grpc_version is None assert info.api_core_version is not None diff --git a/tests/unit/test_exceptions.py b/tests/unit/test_exceptions.py index 4169ad44..eb0b12d1 100644 --- a/tests/unit/test_exceptions.py +++ b/tests/unit/test_exceptions.py @@ -22,7 +22,7 @@ try: import grpc from grpc_status import rpc_status -except ImportError: +except ImportError: # pragma: NO COVER grpc = rpc_status = None from google.api_core import exceptions diff --git a/tests/unit/test_general_helpers.py b/tests/unit/test_general_helpers.py new file mode 100644 index 00000000..82d6d4b6 --- /dev/null +++ b/tests/unit/test_general_helpers.py @@ -0,0 +1,13 @@ +# Copyright 2022, Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/unit/test_grpc_helpers.py b/tests/unit/test_grpc_helpers.py index 8b9fd9f1..9fe938d6 100644 --- a/tests/unit/test_grpc_helpers.py +++ b/tests/unit/test_grpc_helpers.py @@ -17,7 +17,7 @@ try: import grpc -except ImportError: +except ImportError: # pragma: NO COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import exceptions @@ -365,7 +365,7 @@ def test_create_channel_implicit(grpc_secure_channel, default, composite_creds_c default.assert_called_once_with(scopes=None, default_scopes=None) - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -400,7 +400,7 @@ def test_create_channel_implicit_with_default_host( mock.sentinel.credentials, mock.sentinel.Request, default_host=default_host ) - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -427,7 +427,7 @@ def test_create_channel_implicit_with_ssl_creds( composite_creds_call.assert_called_once_with(ssl_creds, mock.ANY) composite_creds = composite_creds_call.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -452,7 +452,7 @@ def test_create_channel_implicit_with_scopes( default.assert_called_once_with(scopes=["one", "two"], default_scopes=None) - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -477,7 +477,7 @@ def test_create_channel_implicit_with_default_scopes( 
default.assert_called_once_with(scopes=None, default_scopes=["three", "four"]) - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -509,7 +509,7 @@ def test_create_channel_explicit(grpc_secure_channel, auth_creds, composite_cred assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -533,7 +533,7 @@ def test_create_channel_explicit_scoped(grpc_secure_channel, composite_creds_cal assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -561,7 +561,7 @@ def test_create_channel_explicit_default_scopes( assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -587,7 +587,7 @@ def test_create_channel_explicit_with_quota_project( assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -616,7 +616,7 @@ def test_create_channel_with_credentials_file( assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -648,7 +648,7 @@ def test_create_channel_with_credentials_file_and_scopes( assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -680,7 +680,7 @@ def test_create_channel_with_credentials_file_and_default_scopes( assert channel is grpc_secure_channel.return_value - if grpc_helpers.HAS_GRPC_GCP: + if grpc_helpers.HAS_GRPC_GCP: # pragma: NO COVER grpc_secure_channel.assert_called_once_with(target, composite_creds, None) else: grpc_secure_channel.assert_called_once_with(target, composite_creds) @@ -690,7 +690,7 @@ def test_create_channel_with_credentials_file_and_default_scopes( not grpc_helpers.HAS_GRPC_GCP, reason="grpc_gcp module not available" ) @mock.patch("grpc_gcp.secure_channel") -def test_create_channel_with_grpc_gcp(grpc_gcp_secure_channel): +def test_create_channel_with_grpc_gcp(grpc_gcp_secure_channel): # pragma: NO COVER target = "example.com:443" scopes = ["test_scope"] diff --git a/tests/unit/test_operation.py b/tests/unit/test_operation.py index 22e23bc3..f029866c 100644 --- a/tests/unit/test_operation.py +++ b/tests/unit/test_operation.py @@ -18,7 +18,7 @@ try: import grpc # noqa: F401 -except ImportError: +except ImportError: # pragma: NO 
COVER pytest.skip("No GRPC", allow_module_level=True) from google.api_core import exceptions diff --git a/tests/unit/test_retry.py b/tests/unit/test_retry.py index 74c5d77c..ec27056d 100644 --- a/tests/unit/test_retry.py +++ b/tests/unit/test_retry.py @@ -52,7 +52,7 @@ def test_if_transient_error(): # Make uniform return half of its maximum, which will be the calculated # sleep time. -@mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) +@mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) def test_exponential_sleep_generator_base_2(uniform): gen = retry.exponential_sleep_generator(1, 60, multiplier=2) @@ -172,6 +172,7 @@ def test_constructor_defaults(self): assert retry_._deadline == 120 assert retry_._on_error is None assert retry_.deadline == 120 + assert retry_.timeout == 120 def test_constructor_options(self): _some_function = mock.Mock() @@ -315,7 +316,7 @@ def if_exception_type(exc): assert re.match( ( r", " - r"initial=1.0, maximum=60.0, multiplier=2.0, deadline=120.0, " + r"initial=1.0, maximum=60.0, multiplier=2.0, timeout=120.0, " r"on_error=None>" ), str(retry_), @@ -337,8 +338,7 @@ def test___call___and_execute_success(self, sleep): target.assert_called_once_with("meep") sleep.assert_not_called() - # Make uniform return half of its maximum, which is the calculated sleep time. - @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("time.sleep", autospec=True) def test___call___and_execute_retry(self, sleep, uniform): @@ -360,8 +360,7 @@ def test___call___and_execute_retry(self, sleep, uniform): sleep.assert_called_once_with(retry_._initial) assert on_error.call_count == 1 - # Make uniform return half of its maximum, which is the calculated sleep time. - @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("time.sleep", autospec=True) def test___call___and_execute_retry_hitting_deadline(self, sleep, uniform): @@ -371,7 +370,7 @@ def test___call___and_execute_retry_hitting_deadline(self, sleep, uniform): initial=1.0, maximum=1024.0, multiplier=2.0, - deadline=9.9, + deadline=30.9, ) utcnow = datetime.datetime.utcnow() @@ -406,8 +405,17 @@ def increase_time(sleep_delay): last_wait = sleep.call_args.args[0] total_wait = sum(call_args.args[0] for call_args in sleep.call_args_list) - assert last_wait == 2.9 # and not 8.0, because the last delay was shortened - assert total_wait == 9.9 # the same as the deadline + assert last_wait == 8.0 + # Next attempt would be scheduled in 16 secs, 15 + 16 = 31 > 30.9, thus + # we do not even wait for it to be scheduled (30.9 is configured timeout). + # This changes the previous logic of shortening the last attempt to fit + # in the deadline. The previous logic was removed to make Python retry + # logic consistent with the other languages and to not disrupt the + # randomized retry delays distribution by artificially increasing a + # probability of scheduling two (instead of one) last attempts with very + # short delay between them, while the second retry having very low chance + # of succeeding anyways. 
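The arithmetic behind the new expectations, spelled out (``random.uniform`` is pinned to its upper bound by the mock above):

sleeps = [1.0, 2.0, 4.0, 8.0]      # initial=1.0, multiplier=2.0
assert sum(sleeps) == 15.0         # total_wait
assert sum(sleeps) + 16.0 > 30.9   # the fifth sleep would overrun the 30.9s timeout,
                                   # so it is never scheduled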
+ assert total_wait == 15.0 @mock.patch("time.sleep", autospec=True) def test___init___without_retry_executed(self, sleep): @@ -432,8 +440,7 @@ def test___init___without_retry_executed(self, sleep): sleep.assert_not_called() _some_function.assert_not_called() - # Make uniform return half of its maximum, which is the calculated sleep time. - @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n / 2.0) + @mock.patch("random.uniform", autospec=True, side_effect=lambda m, n: n) @mock.patch("time.sleep", autospec=True) def test___init___when_retry_is_executed(self, sleep, uniform): _some_function = mock.Mock() diff --git a/tests/unit/test_timeout.py b/tests/unit/test_timeout.py index 30d624e2..a83a2ecb 100644 --- a/tests/unit/test_timeout.py +++ b/tests/unit/test_timeout.py @@ -17,11 +17,11 @@ import mock -from google.api_core import timeout +from google.api_core import timeout as timeouts def test__exponential_timeout_generator_base_2(): - gen = timeout._exponential_timeout_generator(1.0, 60.0, 2.0, deadline=None) + gen = timeouts._exponential_timeout_generator(1.0, 60.0, 2.0, deadline=None) result = list(itertools.islice(gen, 8)) assert result == [1, 2, 4, 8, 16, 32, 60, 60] @@ -34,7 +34,7 @@ def test__exponential_timeout_generator_base_deadline(utcnow): datetime.datetime.min + datetime.timedelta(seconds=n) for n in range(15) ] - gen = timeout._exponential_timeout_generator(1.0, 60.0, 2.0, deadline=30.0) + gen = timeouts._exponential_timeout_generator(1.0, 60.0, 2.0, deadline=30.0) result = list(itertools.islice(gen, 14)) # Should grow until the cumulative time is > 30s, then start decreasing as @@ -42,22 +42,105 @@ def test__exponential_timeout_generator_base_deadline(utcnow): assert result == [1, 2, 4, 8, 16, 24, 23, 22, 21, 20, 19, 18, 17, 16] +class TestTimeToDeadlineTimeout(object): + def test_constructor(self): + timeout_ = timeouts.TimeToDeadlineTimeout() + assert timeout_._timeout is None + + def test_constructor_args(self): + timeout_ = timeouts.TimeToDeadlineTimeout(42.0) + assert timeout_._timeout == 42.0 + + def test___str__(self): + timeout_ = timeouts.TimeToDeadlineTimeout(1) + assert str(timeout_) == "" + + def test_apply(self): + target = mock.Mock(spec=["__call__", "__name__"], __name__="target") + + datetime.datetime.utcnow() + datetime.timedelta(seconds=1) + + now = datetime.datetime.utcnow() + + times = [ + now, + now + datetime.timedelta(seconds=0.0009), + now + datetime.timedelta(seconds=1), + now + datetime.timedelta(seconds=39), + now + datetime.timedelta(seconds=42), + now + datetime.timedelta(seconds=43), + ] + + def _clock(): + return times.pop(0) + + timeout_ = timeouts.TimeToDeadlineTimeout(42.0, _clock) + wrapped = timeout_(target) + + wrapped() + target.assert_called_with(timeout=42.0) + wrapped() + target.assert_called_with(timeout=41.0) + wrapped() + target.assert_called_with(timeout=3.0) + wrapped() + target.assert_called_with(timeout=0.0) + wrapped() + target.assert_called_with(timeout=0.0) + + def test_apply_no_timeout(self): + target = mock.Mock(spec=["__call__", "__name__"], __name__="target") + + datetime.datetime.utcnow() + datetime.timedelta(seconds=1) + + now = datetime.datetime.utcnow() + + times = [ + now, + now + datetime.timedelta(seconds=0.0009), + now + datetime.timedelta(seconds=1), + now + datetime.timedelta(seconds=2), + ] + + def _clock(): + return times.pop(0) + + timeout_ = timeouts.TimeToDeadlineTimeout(clock=_clock) + wrapped = timeout_(target) + + wrapped() + target.assert_called_with() + wrapped() + 
target.assert_called_with() + + def test_apply_passthrough(self): + target = mock.Mock(spec=["__call__", "__name__"], __name__="target") + timeout_ = timeouts.TimeToDeadlineTimeout(42.0) + wrapped = timeout_(target) + + wrapped(1, 2, meep="moop") + + target.assert_called_once_with(1, 2, meep="moop", timeout=42.0) + + class TestConstantTimeout(object): def test_constructor(self): - timeout_ = timeout.ConstantTimeout() + timeout_ = timeouts.ConstantTimeout() assert timeout_._timeout is None def test_constructor_args(self): - timeout_ = timeout.ConstantTimeout(42.0) + timeout_ = timeouts.ConstantTimeout(42.0) assert timeout_._timeout == 42.0 def test___str__(self): - timeout_ = timeout.ConstantTimeout(1) + timeout_ = timeouts.ConstantTimeout(1) assert str(timeout_) == "" def test_apply(self): target = mock.Mock(spec=["__call__", "__name__"], __name__="target") - timeout_ = timeout.ConstantTimeout(42.0) + timeout_ = timeouts.ConstantTimeout(42.0) wrapped = timeout_(target) wrapped() @@ -66,7 +149,7 @@ def test_apply(self): def test_apply_passthrough(self): target = mock.Mock(spec=["__call__", "__name__"], __name__="target") - timeout_ = timeout.ConstantTimeout(42.0) + timeout_ = timeouts.ConstantTimeout(42.0) wrapped = timeout_(target) wrapped(1, 2, meep="moop") @@ -76,30 +159,30 @@ def test_apply_passthrough(self): class TestExponentialTimeout(object): def test_constructor(self): - timeout_ = timeout.ExponentialTimeout() - assert timeout_._initial == timeout._DEFAULT_INITIAL_TIMEOUT - assert timeout_._maximum == timeout._DEFAULT_MAXIMUM_TIMEOUT - assert timeout_._multiplier == timeout._DEFAULT_TIMEOUT_MULTIPLIER - assert timeout_._deadline == timeout._DEFAULT_DEADLINE + timeout_ = timeouts.ExponentialTimeout() + assert timeout_._initial == timeouts._DEFAULT_INITIAL_TIMEOUT + assert timeout_._maximum == timeouts._DEFAULT_MAXIMUM_TIMEOUT + assert timeout_._multiplier == timeouts._DEFAULT_TIMEOUT_MULTIPLIER + assert timeout_._deadline == timeouts._DEFAULT_DEADLINE def test_constructor_args(self): - timeout_ = timeout.ExponentialTimeout(1, 2, 3, 4) + timeout_ = timeouts.ExponentialTimeout(1, 2, 3, 4) assert timeout_._initial == 1 assert timeout_._maximum == 2 assert timeout_._multiplier == 3 assert timeout_._deadline == 4 def test_with_timeout(self): - original_timeout = timeout.ExponentialTimeout() + original_timeout = timeouts.ExponentialTimeout() timeout_ = original_timeout.with_deadline(42) assert original_timeout is not timeout_ - assert timeout_._initial == timeout._DEFAULT_INITIAL_TIMEOUT - assert timeout_._maximum == timeout._DEFAULT_MAXIMUM_TIMEOUT - assert timeout_._multiplier == timeout._DEFAULT_TIMEOUT_MULTIPLIER + assert timeout_._initial == timeouts._DEFAULT_INITIAL_TIMEOUT + assert timeout_._maximum == timeouts._DEFAULT_MAXIMUM_TIMEOUT + assert timeout_._multiplier == timeouts._DEFAULT_TIMEOUT_MULTIPLIER assert timeout_._deadline == 42 def test___str__(self): - timeout_ = timeout.ExponentialTimeout(1, 2, 3, 4) + timeout_ = timeouts.ExponentialTimeout(1, 2, 3, 4) assert str(timeout_) == ( "" @@ -107,7 +190,7 @@ def test___str__(self): def test_apply(self): target = mock.Mock(spec=["__call__", "__name__"], __name__="target") - timeout_ = timeout.ExponentialTimeout(1, 10, 2) + timeout_ = timeouts.ExponentialTimeout(1, 10, 2) wrapped = timeout_(target) wrapped() @@ -121,7 +204,7 @@ def test_apply(self): def test_apply_passthrough(self): target = mock.Mock(spec=["__call__", "__name__"], __name__="target") - timeout_ = timeout.ExponentialTimeout(42.0, 100, 2) + timeout_ = 
timeouts.ExponentialTimeout(42.0, 100, 2) wrapped = timeout_(target) wrapped(1, 2, meep="moop")
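Finally, a sketch of the polling call surface these tests exercise end to end. The wrapper function and its argument names are illustrative; only the ``result(timeout=..., retry=..., polling=...)`` parameters come from this patch:

def wait_for_lro(operation, custom_retry=None, custom_polling=None):
    # `operation` stands for any long-running-operation future returned by a
    # generated client; obtaining it is outside the scope of this patch.
    #   timeout -> overall budget for the wait (Polling Timeout)
    #   retry   -> how transient RPC errors raised while polling are retried
    #   polling -> how "operation not complete yet" responses are re-polled
    return operation.result(
        timeout=900,
        retry=custom_retry,
        polling=custom_polling,
    )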