
Batching of precache requests to prevent net::ERR_INSUFFICIENT_RESOURCES in Chrome #2528

Closed · jshearer opened this issue Jun 3, 2020 · 7 comments · Fixed by #2562

jshearer commented Jun 3, 2020

Library Affected:
This is likely related only to workbox-precaching.

Browser & Platform:
Google Chrome

Issue or Feature Request Description:
I originally wrote this as a comment on #570, but since that issue is closed, I thought I would post it as a new issue to get more visibility.

It seems that Workbox uses Promise.all when making precache requests rather than explicitly batching or rate-limiting them, and Chrome can't handle this under certain circumstances.

Specifically, I'm getting sporadic net::ERR_INSUFFICIENT_RESOURCES failures.
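
Roughly, the failing pattern looks like this (an illustrative sketch, not Workbox's actual source; `MANIFEST_URLS` stands in for the precache manifest):

```js
// Illustrative sketch of the problematic pattern (not Workbox's real code):
// every precache request is started at once via Promise.all. With a large
// manifest, Chrome can fail some of these with net::ERR_INSUFFICIENT_RESOURCES.
// MANIFEST_URLS is a placeholder for the manifest's URL list.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('precache-v1').then((cache) =>
      Promise.all(MANIFEST_URLS.map((url) => cache.add(url)))
    )
  );
});
```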

[Screenshot: precache requests failing with net::ERR_INSUFFICIENT_RESOURCES]

After looking up this error, it seems that it represents some sort of resource exhaustion within Chrome. A few other people have come across it in the Chromium bug tracker, and the answer seems to be simply to make fewer concurrent requests.

I noticed that @nachoab came up against this and solved it by batching requests in chunks of 20, effectively limiting concurrency to at most 20 in-flight requests... but that fix is against the old sw-precache repo and not directly applicable here.

The easiest and most flexible solution, as I see it, would be for precacheAndRoute (or just precache) to return a promise that resolves when all of the specified routes have been precached. That way I could do the rate-limiting in my service worker, instead of adding the burden of rate-limiting onto Workbox.
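
For illustration, the chunks-of-20 workaround mentioned above boils down to something like this (a standalone sketch; `precacheInChunks` and `MANIFEST_URLS` are made-up names):

```js
// Hypothetical helper: process the manifest in sequential batches so that
// at most `chunkSize` requests are in flight at any time.
async function precacheInChunks(cacheName, urls, chunkSize = 20) {
  const cache = await caches.open(cacheName);
  for (let i = 0; i < urls.length; i += chunkSize) {
    const batch = urls.slice(i, i + chunkSize);
    // Wait for the whole batch to finish before starting the next one.
    await Promise.all(batch.map((url) => cache.add(url)));
  }
}

self.addEventListener('install', (event) => {
  event.waitUntil(precacheInChunks('precache-v1', MANIFEST_URLS));
});
```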

Thoughts?

jeffposnick (Contributor) commented:

I think this is within the scope of things that we can change in v6, as part of the work that @philipwalton has been doing to rewrite some of the precaching logic.

I'm not sure whether a hardcoded upper bound on concurrent requests or, alternatively, logic that explicitly checks for a net::ERR_INSUFFICIENT_RESOURCES exception and uses it to trigger a backoff would be the better approach.

Note that if you do run into this scenario, the service worker's install will fail, but the entries that were retrieved from the network before the failure will be stored in the cache and won't have to be re-fetch()ed the next time installation is attempted. So the expectation is that the install would eventually succeed, whether on the next attempt or after several additional attempts.
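
That resume-on-retry behavior amounts to skipping entries that are already in the cache, roughly like this (a simplified sketch; real Workbox precaching keys entries by revisioned URL, which this glosses over):

```js
// Sketch of resume-on-retry: entries cached during a failed install
// attempt are skipped on the next attempt instead of being re-fetched.
async function addIfMissing(cache, url) {
  if (!(await cache.match(url))) {
    await cache.add(url); // only fetch entries not stored by a prior attempt
  }
}
```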

jshearer (Author) commented Jun 3, 2020

> So the expectation is that the install would eventually succeed, whether on the next attempt or after several additional attempts.

This was the "actual" bug I was trying to diagnose -- on cache clear/reload, the service worker would fail to install, but after refreshing the page 3-5 times, it would eventually work.

I suspect that explicitly detecting net::ERR_INSUFFICIENT_RESOURCES will be difficult: my understanding from one of the Chromium bug tracker issues I linked is that this is considered a security-relevant side channel, and this kind of error is intentionally kept opaque from JavaScript.

A hardcoded upper bound would be fantastic and probably solve the problem, though certainly a configurable bound would be better.

jshearer (Author) commented Jun 29, 2020

@jeffposnick I see there is a v6 milestone -- should this be added to it? This is still an issue 😕

jeffposnick added this to the v6 milestone Jun 29, 2020

jeffposnick (Contributor) commented:

After going through this a bit, I'm thinking that the cleanest approach would be to just cache one entry at a time, and not start the next until the previous completes.

It might end up taking a bit longer to finish the full list for a long precache manifest, but we have heard from developers who are unhappy with precaching using up bandwidth that the main web app might need, so I'd rather go all the way to the opposite end of things with a maximum concurrency of 1.

(Cf. #1855 and https://developers.google.com/web/fundamentals/primers/service-workers/registration#reasons_to_register_early)
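
In other words, the proposal fully serializes the loop. A sketch of the idea (not the eventual v6 implementation):

```js
// Sketch of fully serialized precaching: at most one request in flight.
async function precacheSequentially(cacheName, urls) {
  const cache = await caches.open(cacheName);
  for (const url of urls) {
    // Each entry must finish downloading and caching before the next starts.
    await cache.add(url);
  }
}
```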

jshearer (Author) commented Jul 1, 2020

Fantastic!

micahjon commented May 7, 2021

I appreciate the simplicity of downloading assets one by one and the goal of reducing the impact on other network requests, but it would be nice if this were configurable.

For instance, imagine you're pre-caching some data (e.g. an API response) that isn't available at the edge in your user's region. Retrieving that data from the other side of the world now holds up pre-caching of all your other assets at the edge, when they could easily be fetched in parallel with negligible performance impact if you allowed two requests at a time.

Latency aside, this also negates some HTTP/2 benefits, even in the simple case of all assets being at the edge already.

In my case, I have a lot of smallish assets and one large XML file, which holds everything else up. Not the end of the world, but it would be nice if I could expand the pipeline to 2-3 requests at a time.
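
A configurable version could keep a small pool of workers draining a shared queue; a sketch (the `concurrency` parameter is hypothetical, not an existing Workbox option):

```js
// Sketch of configurable precache concurrency: a few workers drain a shared
// queue, so one slow response (e.g. a large XML file) doesn't block the rest.
async function precacheWithConcurrency(cacheName, urls, concurrency = 3) {
  const cache = await caches.open(cacheName);
  const queue = [...urls];
  const worker = async () => {
    while (queue.length > 0) {
      // The length check and shift happen synchronously, so workers
      // on the single-threaded event loop can't grab the same URL.
      const url = queue.shift();
      await cache.add(url);
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
}
```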

mmso pushed a commit to ProtonMail/WebClients that referenced this issue Jun 6, 2023
It turns out that there's a bug in Chromium (see GoogleChrome/workbox#2528 for multiple references to it) that causes an exhaustion of browser resources when many simultaneous HTTP/2 requests are launched (not an issue for HTTP/1.1, where simultaneous requests are capped at 6), making requests over the quota fail with a net::ERR_INSUFFICIENT_RESOURCES error. Only Chromium-based browsers have this bug. The maximum number of simultaneous requests allowed seems to be variable; there's probably not a hard cap on the count, but rather a cap on the memory the browser can use. In our tests we saw the error at around ~1000 requests.

CALWEB-4446
taozhou-glean commented:

+1 for making this configurable. One entry at a time can lead to very long queue times when there is a lot to fetch during service worker updates: #3294
