
angular.io app gets stuck in “loading document” state, likely due to client-server version skew #28114

Closed · 1 of 3 tasks
IgorMinar opened this issue Jan 13, 2019 · 104 comments

Labels:
  • area: service-worker (Issues related to the @angular/service-worker package)
  • freq2: medium
  • P3 (An issue that is relevant to core functions, but does not impede progress. Important, but not urgent)
  • state: needs more investigation
  • type: bug/fix
@IgorMinar
Contributor

IgorMinar commented Jan 13, 2019

On several occasions in the past I’ve observed that if I have a long-running angular.io app open in a tab and, after some time, I come back to this tab and try to use it for another navigation within the app, the app responds with the doc-viewer progress bar indicating that the document is being loaded, but it never actually loads the doc and remains in this state forever. Reloading the page resolves the issue.

When I was looking through the Google Analytics (GA) data today, I noticed that there was a spike in errors just after the holidays:

[Screenshot: Google Analytics report]

Notice how the spike starts right around Jan 7, when most people got back from the holidays, and things return to normal relatively quickly over the next few days.

If we look into the error causes, we see that the main root cause is failing to download certain JS chunks, mainly the “toc-module” chunk (there are some others if you go into the report and look at the long tail of errors, but they are less common).

This makes me believe that the problem I observed in the past and this error spike after the holidays are related.

Here is my interpretation of the events:

  • many people were on angular.io before the holidays and visited only the home page but not any API/guide doc (the toc-module case); others visited other pages as well, and their case is captured in the long tail of errors other than “toc-module”
  • these people left their app open, then came back around Jan 7 and tried to use the app for a new navigation
  • in the meantime we deployed many new versions of angular.io, changing the fingerprints of chunks
  • when the new navigation occurred in the long-running app, the app (webpack module loader) tried to download the chunk for the missing code (e.g. toc-module)
  • the server, however, no longer hosted that chunk because we had redeployed angular.io with new versions since then, so it returned a 404
  • it’s uncommon that we fail to load code in this way, so the app just panicked and got stuck in the “loading document” state
  • the user had to reload the page to recover, and since reloading helped, people just shrugged and moved on without reporting the problem...

There are at least three problems we need to fix:

  • the module-loading errors should not cause the app to freak out; we should either display a snack bar telling the user that there was an error and that they should try to reload, or consider reloading the page automatically (but ensure we don’t end up in an infinite loop of reloads if a reload doesn’t cause the app to recover); see the sketch after this list
  • look into why “toc-module” fails so frequently and consider prefetching and caching it along with the main chunk
  • we should somehow deal with the client-server version skew (have the service worker cache all chunks lazily, host old chunk versions, etc.)
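To make the first bullet concrete, here is a minimal sketch of the guarded auto-reload idea (the helper name and storage key are hypothetical; this is not existing angular.io code):

    // Recover from a failed lazy-chunk load by reloading at most once per tab.
    // A sessionStorage flag prevents an infinite loop of reloads if the reload
    // does not actually fix the problem.
    const RELOAD_FLAG = 'aio-chunk-reload-attempted';

    export function recoverFromChunkLoadError(error: Error): void {
      if (!/Loading chunk [\w-]+ failed/.test(error.message)) {
        throw error;  // Not a chunk-load failure; let it propagate as usual.
      }
      if (sessionStorage.getItem(RELOAD_FLAG)) {
        // We already reloaded once and the chunk is still missing: stop and ask
        // the user to act (a snack bar in the real app; alert() as a stand-in).
        sessionStorage.removeItem(RELOAD_FLAG);
        alert('Part of the app failed to load. Please reload the page.');
        return;
      }
      sessionStorage.setItem(RELOAD_FLAG, '1');
      location.reload();
    }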

UPDATE(2021-09-20):
See #28114 (comment) for an explanation of the problem.

@fredsa

fredsa commented Jan 13, 2019

Having the server keep old chunks for previously deployed versions could be part of the solution for angular.io, but this can't be what Angular relies on by default, as this would greatly complicate deployment of apps to hosting services that expect to be given all static content at each deployment.

Catching the error and providing a default message to end users that suggests reloading sounds pretty good. I'd add to that a hook, similar to the global handler, so devs can replace this message with their own handler code.
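For illustration, a rough sketch of what such a hook could look like, built on Angular's ErrorHandler (the class and its onChunkLoadError property are hypothetical, not an existing API):

    import { ErrorHandler, Injectable } from '@angular/core';

    @Injectable()
    export class ChunkLoadErrorHandler implements ErrorHandler {
      // Apps could replace this default with their own recovery UI.
      onChunkLoadError: () => void = () => {
        if (confirm('A newer version of this app may be available. Reload now?')) {
          location.reload();
        }
      };

      handleError(error: unknown): void {
        const message = error instanceof Error ? error.message : String(error);
        if (/Loading chunk [\w-]+ failed/.test(message)) {
          this.onChunkLoadError();
          return;
        }
        console.error(error);  // Fall back to default behavior for other errors.
      }
    }

It would be registered the usual way, e.g. providers: [{ provide: ErrorHandler, useClass: ChunkLoadErrorHandler }].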

@petebacondarwin
Member

That is an interesting analysis, @IgorMinar - I agree that we should prompt the user with a better error. Also, once we land PR #28037, which retains the current scroll position on reload, it would be much safer to do automatic reloads.

@manekinekko
Contributor

Interesting. Just an FYI, we have noticed a similar behavior with https://xlayers.app. This happens after each deployment.

@Brandinga

Just my observations:
In our production environment, this error also happens if old chunks are not deleted.
The error is mostly sent from the current version (I added a version number to each error log)
--> but I would expect it to be reported from old versions
--> I think the problem also comes from flaky internet connections
(I have not deleted old chunks since July 2018; preloadingStrategy: PreloadAllModules; no service worker configured)

Thanks for sharing this!

@gkalpak
Member

gkalpak commented Jan 14, 2019

  • the module-loading errors should not cause the app to freak out; we should either display a snack bar telling the user that there was an error and that they should try to reload, or consider reloading the page automatically (but ensure we don’t end up in an infinite loop of reloads if a reload doesn’t cause the app to recover)

I am not a big fan of auto-reloading, but +100 for doing a better job at notifying the user (right now the app gets stuck on a blank screen, which is 👎).

  • look into why “toc-module” fails so frequently and consider prefetching and caching it along with the main chunk

I suspect that the reason this one fails much more often than others is that basically all docs (i.e. non-marketing pages) require the <aio-toc> element (while other chunks/custom elements are only required some of the time). Furthermore, on small screens (e.g. mobile) the <aio-toc> chunk, when needed, is requested earlier than others via AppComponent's template directly, so it is the first request to fail and cause the error.

As I've written elsewhere (see below), I still don't understand how this could be happening (although I may have some far-fetched theories), since the requested file is part of the eagerly cached app-shell asset group (see our ngsw.json). This is the same asset group that is theoretically serving the outdated index.html. (So, if the outdated index.html referencing the outdated chunks is still in the cache, how come the chunks themselves aren't??? 😕)

  • we should somehow deal with the client-server version skew (have the service worker cache all chunks lazily, host old chunk versions, etc.)

I have looked into these errors before and haven't been able to figure out why they are happening (or even how they are possible), because the SW should have the necessary files in its cache 😒

Copying my comments from Slack (not public):

While it worked fine on my mobile, I got an error on my laptop:

Failed to load resource: the server responded with a status of 404 ()
main.c6614337f14ffce1caf6.js:1 ERROR Error: [DocViewer] Error preparing document 'api/core/IterableDifferFactory': Error: Loading chunk 12 failed.
(error: https://next.angular.io/toc-toc-module-ngfactory.d41e300812a73df91d75.js)
    at HTMLScriptElement.a (runtime.99ff37d281cef743f1d5.js:1)
    at HTMLScriptElement.D (zone.js.pre-build-optimizer.js:1188)
    at e.invokeTask (zone.js.pre-build-optimizer.js:421)
    at Object.onInvokeTask (main.c6614337f14ffce1caf6.js:1)
    at e.invokeTask (zone.js.pre-build-optimizer.js:420)
    at t.runTask (zone.js.pre-build-optimizer.js:188)
    at t.invokeTask [as invoke] (zone.js.pre-build-optimizer.js:496)
    at y (zone.js.pre-build-optimizer.js:1540)
    at HTMLScriptElement._ (zone.js.pre-build-optimizer.js:1566)
    at HTMLScriptElement.a (runtime.99ff37d281cef743f1d5.js:1)
    at HTMLScriptElement.D (zone.js.pre-build-optimizer.js:1188)
    at e.invokeTask (zone.js.pre-build-optimizer.js:421)
    at Object.onInvokeTask (main.c6614337f14ffce1caf6.js:1)
    at e.invokeTask (zone.js.pre-build-optimizer.js:420)
    at t.runTask (zone.js.pre-build-optimizer.js:188)
    at t.invokeTask [as invoke] (zone.js.pre-build-optimizer.js:496)
    at y (zone.js.pre-build-optimizer.js:1540)
    at HTMLScriptElement._ (zone.js.pre-build-optimizer.js:1566)
    at e.selector (main.c6614337f14ffce1caf6.js:1)
    at e.error (main.c6614337f14ffce1caf6.js:1)
    at e._error (main.c6614337f14ffce1caf6.js:1)
    at e.error (main.c6614337f14ffce1caf6.js:1)
    at e._error (main.c6614337f14ffce1caf6.js:1)
    at e.error (main.c6614337f14ffce1caf6.js:1)
    at e._error (main.c6614337f14ffce1caf6.js:1)
    at e.error (main.c6614337f14ffce1caf6.js:1)
    at e.notifyError (main.c6614337f14ffce1caf6.js:1)
    at e._error (main.c6614337f14ffce1caf6.js:1)
_r @ main.c6614337f14ffce1caf6.js:1
default~code-code-example-module-ngfactory~code-code-tabs-module-ngfactory.e903a21862713bcb7820.js:1 Failed to load resource: the server responded with a status of 404 ()

Opening the URL in a new tab worked fine.

AFAICT, some lazy-loaded files (i.e. custom elements) are mapped to versions (hashes) that are no longer deployed. In addition to that, the SW does not have these in the cache and tries to fetch them from the server (and fails).

Theoretically, the hashes (which are included in runtime.<hash>.js) should always be cached by the SW with the corresponding lazy-loaded JS files (since all are part of the eagerly fetched app-shell asset group).

So, if the SW serves an index.html, which contains a runtime.<hash>.js file that points to specific lazy-loaded JS files, it should also be able to serve those lazy-loaded files from the same cache.

I couldn't figure out under what circumstances this won't be true.

The only possible cause I can think of right now is an outdated index.html and runtime.<hash>.js being served by the browser from the browser cache (not the SW cache), while the browser cache does not have the lazy-loaded JS files.

I'll keep an eye out for these errors and try to gather more info next time, but, even if that were the case, it does not explain why it happened for some URLs on CI but not others.
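One way to gather that info when it happens again is to query the Cache Storage API directly from the DevTools console and check whether the chunk that 404'd is present in any SW cache. A rough diagnostic sketch (the 'toc-toc-module' filter is just an example):

    // List every URL currently stored in any Cache Storage cache (including the
    // Angular SW's ngsw:-prefixed caches) and look for the missing chunk.
    (async () => {
      const cachedUrls: string[] = [];
      for (const cacheName of await caches.keys()) {
        const cache = await caches.open(cacheName);
        for (const request of await cache.keys()) {
          cachedUrls.push(request.url);
        }
      }
      console.log(cachedUrls.filter(url => url.includes('toc-toc-module')));
    })();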

Another possible explanation could be a bad CDN serving some outdated files but not others (but that sounds a little far-fetched) :thinking_face:

@petebacondarwin petebacondarwin added this to SELECTED FOR DEVELOPMENT in docs-infra Jan 14, 2019
@petebacondarwin petebacondarwin moved this from SELECTED FOR DEVELOPMENT to BACKLOG in docs-infra Jan 14, 2019
@juristr
Contributor

juristr commented Jan 14, 2019

Just had the same issue opening the docs


@Toxicable

Also had this issue and made a report here: #28243 (I didn't find this issue at the time)

@gkalpak
Member

gkalpak commented Feb 8, 2019

Still not able to reproduce this reliably, but here is some more info:

This doesn't only happen when you have a tab open for a while. Yesterday, @benlesh ran into it without having a tab open, by navigating to a docs page via a Google search results link 😞

Has anyone seen this in any browser other than Chrome?

@wKoza
Contributor

wKoza commented Feb 8, 2019

Yesterday, I also had this error with an external link to the "next api" section (I seldom visit that part). I was using Chrome.

@fredsa

fredsa commented Feb 8, 2019

I only use Chrome and get this from time to time.

@IgorMinar
Contributor Author

This just happened to me again:

main.d6ed4a5a014b6cbb8d5f.js:1 ERROR Error: [DocViewer] Error preparing document 'cli/build': Error: Loading chunk 20 failed.
(error: https://angular.io/toc-toc-module-ngfactory.d0ce87256afe85686f1e.js)
    at HTMLScriptElement.a (runtime.60fab82dedf87ba383c8.js:1)
    at HTMLScriptElement.D (polyfills.a2efc1c1a62312ff1f80.js:1)
    at e.invokeTask (polyfills.a2efc1c1a62312ff1f80.js:1)
    at Object.onInvokeTask (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.invokeTask (polyfills.a2efc1c1a62312ff1f80.js:1)
    at t.runTask (polyfills.a2efc1c1a62312ff1f80.js:1)
    at t.invokeTask [as invoke] (polyfills.a2efc1c1a62312ff1f80.js:1)
    at y (polyfills.a2efc1c1a62312ff1f80.js:1)
    at HTMLScriptElement._ (polyfills.a2efc1c1a62312ff1f80.js:1)
    at e.selector (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e._error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e._error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e._error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.error (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e.notifyError (main.d6ed4a5a014b6cbb8d5f.js:1)
    at e._error (main.d6ed4a5a014b6cbb8d5f.js:1)

[Screenshot taken 2019-02-12]

@manekinekko
Contributor

manekinekko commented Feb 13, 2019

FYI, we had the same issue with two Angular apps (https://xlayers.app and https://ngx.tools). Both apps are deployed to the Firebase CDN, and the problem came from the default Firebase cache-control config. I had to temporarily disable the cache (not recommended, of course):

"headers": [
      {
        "source": "/**",
        "headers": [
          {
            "key": "Cache-Control",
            "value": "no-cache, no-store, must-revalidate"
          }
        ]
      }
    ]

I have tried other caching strategies but none worked.
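For reference, the commonly recommended alternative to disabling caching entirely is to split the policy: hashed, immutable build artifacts get a long cache lifetime, while index.html and ngsw.json are always revalidated. A minimal sketch of that split, assuming a generic Express static server rather than the Firebase hosting used by the apps above:

    import express from 'express';

    const app = express();

    app.use(express.static('dist', {
      setHeaders: (res, path) => {
        if (/\.[0-9a-f]{16,20}\.(js|css)$/.test(path)) {
          // Hashed files (e.g. main.<hash>.js) never change: cache aggressively.
          res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
        } else {
          // index.html, ngsw.json, etc. must be revalidated on every request.
          res.setHeader('Cache-Control', 'no-cache');
        }
      },
    }));

    app.listen(8080);

The same split can be expressed with per-source rules in the firebase.json "headers" section.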

@wKoza
Contributor

wKoza commented Feb 13, 2019

@manekinekko, you don't use webworker, do you?

@manekinekko
Contributor

Nope, SW.

@gkalpak
Member

gkalpak commented Feb 13, 2019

Interesting insight, @manekinekko.
@wKoza: I don't think webworker has anything to do with the problem. You probably meant serviceworker 😁 (AFAICT, ngx.tools does use a ServiceWorker and xlayers.app doesn't.)

@manekinekko
Contributor

manekinekko commented Feb 13, 2019

@gkalpak xlayers.app used to have a SW but I had to disable it until I find a better solution for this caching issue.

@wKoza
Contributor

wKoza commented Feb 13, 2019

Yes, @gkalpak. It's hard this morning ;)

@gkalpak
Member

gkalpak commented Feb 13, 2019

So, did the issue happen with the SW or also without?

gkalpak added a commit to gkalpak/angular that referenced this issue Sep 20, 2021
Previously, when a version was found to be broken, any clients assigned
to that version were unassigned (and either assigned to the latest
version or to none if the latest version was the broken one). A version
could be considered broken for several reasons, but most often it is a
response for a hashed asset that either does not exist or contains
different content than the SW expects. See
angular#28114 (comment)
for more details.

However, assigning a client to a different version (or the network) in
the middle of a session turned out to be riskier than keeping it on
the same version. For angular.io, for example, it has led to angular#28114.

This commit avoids making things worse when identifying a broken version
by keeping existing clients on their assigned version (but ensuring that
no new clients are assigned to the broken version).

NOTE:
Reloading the page generates a new client ID, so it is like a new client
for the SW, even if the tab and URL are the same.
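To make the described behavior change concrete, here is a rough, self-contained sketch (hypothetical types and names; not the actual @angular/service-worker source):

    // Before the fix: markBroken() reassigned every affected client to the
    // latest version (or the network), which could break long-running tabs.
    // After the fix: existing assignments are left alone; only new clients are
    // steered away from the broken version.
    interface AppVersion { hash: string; broken: boolean; }

    class VersionManager {
      private assignments = new Map<string /* clientId */, AppVersion>();

      constructor(private latest: AppVersion) {}

      markBroken(version: AppVersion): void {
        version.broken = true;
        // Deliberately do NOT touch this.assignments here.
      }

      assign(clientId: string): AppVersion | null {
        const existing = this.assignments.get(clientId);
        if (existing) {
          return existing;  // Existing clients stay put, even on a broken version.
        }
        if (this.latest.broken) {
          return null;      // New clients fall through to the network instead.
        }
        this.assignments.set(clientId, this.latest);
        return this.latest;
      }
    }

Since reloading a page creates a new client ID (see the NOTE above), a reload effectively goes through assign() as a new client and picks up a healthy version.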
@benlesh
Contributor

benlesh commented Sep 24, 2021

@gkalpak when you have a solution, can you please assist us in resolving this issue for the RxJS docs? Since they're a fork of the Angular docs, they have the same issue. cc @niklas-wortmann

@gkalpak
Member

gkalpak commented Sep 24, 2021

@benlesh, @niklas-wortmann: Sure, happy to help. The fix would be just updating to a version of @angular/service-worker that includes #43518 (once it is merged). No other change should be needed 🤞

alxhub pushed a commit that referenced this issue Sep 24, 2021
…43518)

Previously, when a version was found to be broken, any clients assigned
to that version were unassigned (and either assigned to the latest
version or to none if the latest version was the broken one). A version
could be considered broken for several reasons, but most often it is a
response for a hashed asset that either does not exist or contains
different content than the SW expects. See
#28114 (comment)
for more details.

However, assigning a client to a different version (or the network) in
the middle of a session turned out to be riskier than keeping it on
the same version. For angular.io, for example, it has led to #28114.

This commit avoids making things worse when identifying a broken version
by keeping existing clients on their assigned version (but ensuring that
no new clients are assigned to the broken version).

NOTE:
Reloading the page generates a new client ID, so it is like a new client
for the SW, even if the tab and URL are the same.

PR Close #43518
@gkalpak
Member

gkalpak commented Oct 2, 2021

@benlesh, @niklas-wortmann: FYI, the fix has been released in v12.2.8. So, you can update the docs app to that version to fix the issue.

gkalpak added a commit to gkalpak/angular that referenced this issue Oct 2, 2021
This commit updates angular.io to the latest prerelease version of the
Angular framework (v13.0.0-next.10). Among other benefits, this version
also includes the ServiceWorker fix from angular#43518, which fixes angular#28114.

NOTE:
This commit also makes the necessary changes to more closely align
angular.io with new apps created with the latest Angular CLI and remove
redundant files/config now that CLI has dropped support for differential
loading.

Fixes angular#28114
gkalpak added a commit to gkalpak/angular that referenced this issue Oct 2, 2021
This commit updates angular.io to a recent prerelease version of the
Angular framework (v13.0.0-next.9). Among other benefits, this version
also includes the ServiceWorker fix from angular#43518, which fixes angular#28114.

NOTE 1:
This commit also makes the necessary changes to more closely align
angular.io with new apps created with the latest Angular CLI and remove
redundant files/config now that CLI has dropped support for differential
loading.

NOTE 2:
We do not update to the latest prerelease version (v13.0.0-next.10) due
to an incompatibility of `@angular-eslint` with the new ESM format of
`@angular/compiler` ([example failure][1]).

Fixes angular#28114

[1]: https://circleci.com/gh/angular/angular/1062087
gkalpak added a commit to gkalpak/angular that referenced this issue Oct 2, 2021
This commit updates angular.io to the latest stable version of the
Angular framework (v12.2.8). Among other benefits, this version also
includes the ServiceWorker fix from angular#43518, which fixes angular#28114.

NOTE:
This commit also makes the necessary changes to more closely align
angular.io with new apps created with the latest stable Angular CLI.

Fixes angular#28114
dylhunn pushed a commit that referenced this issue Oct 5, 2021
This commit updates angular.io to the latest stable version of the
Angular framework (v12.2.8). Among other benefits, this version also
includes the ServiceWorker fix from #43518, which fixes #28114.

NOTE:
This commit also makes the necessary changes to more closely align
angular.io with new apps created with the latest stable Angular CLI.

Fixes #28114

PR Close #43687
@gkalpak
Member

gkalpak commented Oct 27, 2021

Since both #43518 (which fixed the issue for @angular/service-worker) and #43687 (which updates angular.io to a version that includes the fix) have been merged, I am going to close this issue as "fixed" 🤞

Note that people might still see the issue one more time (i.e. the first time they visit angular.io since the fix was deployed), but the error rate should be dropping (and the data from Google Analytics does confirm that). We'll keep an eye on this to make sure that the error rate continues to drop.

@gkalpak gkalpak closed this as completed Oct 27, 2021
@petebacondarwin
Member

🎉 thank you @gkalpak for your determination and brilliance in finding a resolution to this really really annoying issue.

@angular-automatic-lock-bot

This issue has been automatically locked due to inactivity.
Please file a new issue if you are encountering a similar or related problem.

Read more about our automatic conversation locking policy.

This action has been performed automatically by a bot.

@angular-automatic-lock-bot angular-automatic-lock-bot bot locked and limited conversation to collaborators Dec 9, 2021