
React 18 - waterfall suspensions re-trigger the Suspense boundary #22626

Open
arackaf opened this issue Oct 26, 2021 · 16 comments
Labels
React 18 Bug reports, questions, and general feedback about React 18 Type: Discussion

Comments

@arackaf

arackaf commented Oct 26, 2021

I'm seeing some baffling behavior with React 18 in concurrent mode. The minimal repro is below. The tl;dr is this:

I have a single Suspense boundary at the root of the application. The fallback turns the screen pink (so you know it's there).

Problem 1

I load some data with a Suspense hook. When the data come back and render, subsequent suspensions happen, since I have a SuspenseImg component that suspends while each image preloads.

Expected Behavior

The fallback should show until the data come in, and all of the subsequent image preloads are done.

Actual Behavior

The fallback shows, then the page renders briefly, without data, and then re-suspends while the images preload.
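For context, the mechanism a component like SuspenseImg relies on can be sketched in plain JS. This is a hedged sketch, not React's API: `imageCache`, `preloadImage`, and the injected `loader` (standing in for `new Image()` + `onload` in a browser) are all illustrative names.

```javascript
// Minimal sketch of the throw-a-promise pattern behind SuspenseImg.
// A resource throws its pending promise; the nearest <Suspense>
// boundary catches it and shows the fallback until it resolves.
const imageCache = new Map();

function preloadImage(src, loader) {
  // `loader` is a stand-in for browser image preloading.
  let entry = imageCache.get(src);
  if (!entry) {
    entry = { status: "pending" };
    entry.promise = loader(src).then(() => {
      entry.status = "done";
    });
    imageCache.set(src, entry);
  }
  if (entry.status === "pending") {
    throw entry.promise; // React catches this and suspends
  }
  return src; // safe to render <img src={src} /> now
}
```

Each image that isn't cached yet triggers another suspension, which is exactly the "waterfall" behavior described above.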


Problem 2

There's a button which loads more data, using startTransition. The data load re-triggers the same suspension, and the new data cause the same suspensions when rendered, via the same SuspenseImg (I force it to suspend even though I'm loading the same 5 images over and over).

Expected Behavior

The loading boolean from useTransition should show until the data are returned and all of the images have preloaded.

Actual Behavior

The loading boolean shows while the data are loading, and then the main fallback shows while the images preload.


I'll assume this isn't a bug, and that I instead have misunderstood how this works. I had thought Suspense was supposed to make this stuff fire-and-forget, with React keeping the appropriate fallback / loading boolean set until the entire state has been set, and everything rendered without anything suspending. But it seems I've misunderstood something?


https://codesandbox.io/s/suspense-debug-pz5mz?file=/src/useQuery.js

@arackaf arackaf added React 18 Bug reports, questions, and general feedback about React 18 Type: Discussion labels Oct 26, 2021
@arackaf
Author

arackaf commented Oct 29, 2021

Update: the incomparable, peerless, magnificent Rich Harris actually figured out what was "wrong."

It seems setting state inside of useEffect effectively broke / interrupted / completed the initial suspension, and when the image preloads happened, things re-suspended. So on initial load, the pink screen showed twice, and on incremental data loading, the pink screen flashed after the loading boolean went back to false.

This sandbox shows the updated code, but of course I had to remove the infinite list (or whatever it's called). I'm just showing new data as it comes in, discarding the old. I'm guessing the correct way to implement what I had originally is with some clever use of useMemo, to piece together the growing list from new data?

I hope this behavior isn't final, and that state changes from within useEffect will eventually combine with the initial suspension, otherwise there will be a lot of surprised and disappointed users out there.

https://codesandbox.io/s/suspense-debug-2-swqgd?file=/src/DataList.js

@sebmarkbage
Collaborator

Yes this behavior is final. In fact delaying the effects is a huge part of this feature. Please reserve judgement about how confusing this will be until you have read the docs for the feature that will be available when it’s actually released.

@arackaf
Author

arackaf commented Oct 29, 2021

Thanks for the reply @sebmarkbage - will there be any new APIs for setting state after some piece of state changes? I.e., I'm honestly stumped on how to implement an infinite list that grows when new data come in (without setting state in an effect, or writing to a ref in render) - i.e. a suspense-friendly version of the first sandbox.

@sebmarkbage
Collaborator

Hint: You don’t store data in state. You store what pages are currently visible and fetch the data in a loop. That way you are also able to refresh them automatically.
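Sebastian's hint can be sketched as follows. This is a hypothetical illustration, not a real API: `readPage`, `pageCache`, and `fetchPage` are made-up names, and the commented-out component code assumes standard React hooks.

```javascript
// Only the page keys live in component state; the data lives in a
// module-level cache and is read (or suspended on) during render.
const pageCache = new Map();

function readPage(key, fetchPage) {
  let entry = pageCache.get(key);
  if (!entry) {
    entry = { status: "pending" };
    entry.promise = fetchPage(key).then((data) => {
      entry.status = "done";
      entry.data = data;
    });
    pageCache.set(key, entry);
  }
  if (entry.status === "pending") throw entry.promise; // suspend
  return entry.data;
}

// In a component, something like:
//   const [pageKeys, setPageKeys] = useState([0]);
//   const items = pageKeys.flatMap((key) => readPage(key, fetchPage));
//   // "Load more" button:
//   startTransition(() => setPageKeys((keys) => [...keys, nextKey]));
```

Because the data is keyed by page in the cache, re-rendering (or refreshing a page) just re-reads; nothing needs to be accumulated in state.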

@arackaf
Author

arackaf commented Oct 29, 2021

@sebmarkbage ah well that's less a hint than the entire solution :)

@arackaf
Author

arackaf commented Oct 29, 2021

@sebmarkbage ohhhh I think I see now why (iirc) Relay has an imperative API, i.e. something like

relayAPI.read()

vs a hook like useQuery();

Here, as you said, you'd loop over your pages, and call .read() (which wouldn't work for a hook).

Lastly, am I correct in assuming you could put this behind a useMemo call? I.e., is it valid to suspend / throw a promise inside of the useMemo callback? I'd assume so but I'm hesitant to assume anything here :)

@gaearon
Collaborator

gaearon commented Oct 29, 2021

You might want to have a look at this: reactwg/react-18#25. Having a backing cache (not state/memo) is pretty central to the design of this feature. The data that you suspend on is meant to be stored there, not in the state. That’s also why effects aren’t relevant to the picture — the refetching happens due to reads and explicit refetch calls (which clear the cache). Relay doesn’t work like this yet, so it approximates this with its own cache, but the goal is to make this a primitive that library developers can use and rely on.

@arackaf
Author

arackaf commented Oct 29, 2021

@gaearon thanks Dan. I did my best to understand that post. It sounds like there are two caches? The underlying cache coming from your data provider, i.e. your GraphQL client or similar, and then another, higher-level cache on top of that, which you all are in the early stages of providing?

I'm having a bit of trouble seeing how that would fit in, here, to this use case, ie an infinite list, where the user may have clicked "show more" 3 times?

Sebastian clarified quite a bit in mentioning that you'd only store the current "lastKey" values, which are used for each of those queries.

So on render, I could easily just call dataProvider.read() N times, get my cached data, and put the results together. But those array concatenations are certainly not free. It might be premature optimization, but I had wondered if I could put that stuff into a useMemo call. I had assumed the read() calls in the useMemo callback were allowed to suspend, but now it seems like the proper solution is to use a secondary cache, which sits atop my GraphQL cache?

Am I understanding this correctly at all?

@gaearon
Collaborator

gaearon commented Oct 29, 2021

But those array concatenations are certainly not free

Hmm. This sounds like a very minor concern unless we’re talking about thousands of items on every render. Whenever you map() JSX you also create arrays. Is there some particular perf issue you are worried about here? I think I’m missing something.

@gaearon
Collaborator

gaearon commented Oct 29, 2021

The underlying cache coming from your data provider, ie your GraphQL client, or similar. And then another, higher level cache on top of that, which you all are in the early stages of providing?

Yeah.

I'm having a bit of trouble seeing how that would fit in

The key part i was referring to is just that you wouldn’t rely on anything component-level (state/memo) for “stability”. When we suspend before mount we throw away the tree. Storing data (and Promises) in a cache is what makes your code resilient to that. Because it’s ok to recreate the tree later and that won’t cause a refetch or a cache miss.
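The resilience point can be shown with a tiny sketch: if the promise lives in a module-level cache keyed by ID, throwing away and recreating the tree reuses the same promise rather than refetching. `readUser`, `userCache`, and `fetchUser` are illustrative names, not real APIs.

```javascript
// Why a cache (not state/memo) survives a discarded tree: the second
// "render attempt" gets the identical Promise, so no refetch occurs.
const userCache = new Map();
let fetchCount = 0;

function readUser(id, fetchUser) {
  if (!userCache.has(id)) {
    fetchCount += 1;
    userCache.set(id, fetchUser(id)); // store the Promise itself
  }
  return userCache.get(id);
}
```

Component state or useMemo would not give this guarantee, because React discards the whole tree when it suspends before mount.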

@gaearon
Collaborator

gaearon commented Oct 29, 2021

but now it seems like the proper solution is to use a secondary cache, which sits atop my GraphQL cache?

I think this depends on what you’re trying to cache. The “stitch together a few pages” use case doesn’t seem like it requires any sort of caching to me. If you’re concerned about concatenation itself you can read all pages that you need first and then useMemo the expression that concats them — but again — I’d be very surprised to see a bottleneck or a perf issue here for any realistic pagination use case. In general, I think that yes — we would provide an extra Cache level on top of yours to model consistency guarantees and invalidation within the tree. But that’s not ready yet and I don’t think that’s what you’re asking about.

@arackaf
Author

arackaf commented Oct 29, 2021

@gaearon thanks Dan I really appreciate the thorough explanation. I suppose you're right about the array allocations, and in any event, it's good to see useMemo would still be a suitable escape hatch. Reading the cached data for N pages is all but constant, and then concat'ing them all up should be trivial, as you said.

I've been doing some C++ at work lately and sometimes I forget to switch off the paranoia about allocations and such.

I still don't quite see how or where that secondary cache is supposed to live, but I'll look forward to some docs when it's done.

@arackaf
Author

arackaf commented Oct 29, 2021

@gaearon oh wait. I might understand where that secondary cache fits in. So with this use case, where we’re pulling more and more data from Dynamo, each time by passing a new lastPageKey or whatever. I’d store that array of keys in state, like Seb said, and read each cached page on render (or suspend if not in cache)

But I could also store that array of keys in this new cache you’re working on, this way if the user clicks to a new page, and then hits the back button, I could load the same data they were just viewing, right? Is that roughly how it’s supposed to fit in?

@gaearon
Collaborator

gaearon commented Oct 30, 2021

No, for what you're describing (holding the "position" key), regular state is appropriate. (And if you want to preserve it for Back button, you might encode the last position in the URL or into pushState "state" object or something similar.) We will have some possible built-in ways to "restore" regular React state in the future, but that's a bit further off.

The purpose of the built-in React Cache is a bit different. I think it's easier to understand if we ignore the GraphQL use case and start with a classical fetch example like this sample sandbox from reactwg/react-18#25. Note that it calls fetch but it's a special version from our react-fetch, which is a thin wrapper integrating native fetch with React Cache. This is why it works to just call it in render — the result is automatically cached, and won't be refetched unless the user presses the "Refresh" button next to the dropdown. There is no state in that example to hold the items — they're being held in the built-in React Cache because you're using react-fetch. For example, re-rendering this component wouldn't cause a refetch. Having another component "read" from the same URL would also reuse the same data. Suspending would not cause it to disappear. And so on. That's what React Cache provides by default. There would also be a <Cache> component which lets you specify that you want different parts of your tree to have separate caches. Then this gives you more granular refetching — you can say "refetch everything in this part of the screen", and it would refetch react-fetch (or any other Cache-aware data sources) up to the closest <Cache> in the tree.

Now, this might seem a bit unnecessary for a normalized GraphQL use case where you already have your own cache. With your own cache, data won't just "disappear" anyway and you already have a place to hold it. E.g. that's how Relay works today. However, we still think React Cache would be a useful thing to integrate with there. It provides an opportunity for a cleanup mechanism (being explored in #22510) and a built-in ability to do caching based on the UI granularity ("refetch just the feed part of the screen, but not the sidebar"). There are likely other higher-level features we'd be able to offer on top of that. But it's still in early exploration, so I think our recommendations for how normalized GraphQL clients would ideally work are still being worked out. Keep an eye on reactwg/react-18#25, as there's going to be more work in that area after we get the more basic blocking building blocks out.

@gaearon
Collaborator

gaearon commented Oct 30, 2021

I should note though that we are discussing an experimental set of features that are not stable and there is no expectation right now that any data library author would start integrating them other than for fun or experimentation.

It sounds from some of the social media threads like this is something you're trying to make work for a production application. I'd like to reiterate none of this is "stable React" and I'd recommend using whatever data fetching strategies you used before until this stuff is marked as ready and recommended. It may be fun to explore, but it's not "stable React".

The features in this thread are grouped as "After React 18.0" in #13206.
And even React 18 itself is currently in Alpha, and not out as stable yet.

@arackaf
Author

arackaf commented Oct 30, 2021

@gaearon Thanks again - that helped a LOT.

I’ll only add that I’m definitely not looking to integrate this generic caching mechanism into anything. I do have my own GraphQL client which is more or less compatible with Suspense. It just turns out I was using it wrong. Per this thread above.

I basically need to add some hook-free read() type method, and implement my infinite list in the way we’ve been discussing. You brought up the cache thing, so I just wanted to clarify what it was for, and make sure I wasn’t missing anything.

btw what you describe above seems really, really cool. Looking forward to seeing a Suspense-ready fetch wrapper, as well as a generic cache.
