
A faster diff algorithm #10703

Open
thomas-jeepe opened this issue Sep 13, 2017 · 24 comments

Comments

@thomas-jeepe

This is an invitation to discussion...

So, React is pretty freaking awesome and I have used it quite a bit. One area where React is unfortunately not as strong is performance, which gave rise to Inferno and Preact. This is generally a non-issue on desktop, but on mobile it can become a bottleneck.

I know many members of the team have been working on improving bundle size (I believe through Rollup support, from a talk I heard), asynchronous scheduling, etc. I am also aware that @trueadm (the creator of Inferno) joined the React team and is working on improving it.

The point I want to bring up is the library petit-dom. It uses a particular diff algorithm (links that explain it are provided in the README) and it seems to score incredibly well on vdom performance tests. In fact, it is only beaten by two technologies, vanillajs and surplusjs, per the benchmark snapshot.

petit-dom beats Inferno, Preact, Mithril, Vue, Angular, etc. Of course, it is not a full js framework, but the point I am trying to make is that it is far faster, and a major difference from the other frameworks seems to be its diff algorithm.

I realize this would mean a rewrite of a good portion of react-dom, which is why it is simply a discussion :D.

If this is unfeasible, or simply going after the wrong target/bottleneck, let me know as it is after all a discussion.

@gaearon
Collaborator

gaearon commented Sep 14, 2017

One area where React is unfortunately not as strong is performance, which gave rise to Inferno and Preact. This is generally a non-issue on desktop, but on mobile it can become a bottleneck. [...] The point I want to bring up is the library petit-dom.

I think your post might be conflating two very different things here: initialization time, and update (“virtual DOM” time).

Regarding initialization time, yes, React ships more code, and in our internal testing there are minor improvements that we can get with those slimmer libraries. (YMMV as it depends on your target devices, bandwidth, and the size of your application itself.)

We expect that we can get better at this in the future as browsers become more compatible with each other, and there is less need to polyfill features like events. We support many large apps so we are a bit conservative in removing polyfills, but we are gradually doing this. Still, these differences are in initialization time and bundle size, not in anything related to the diffing.

Regarding the update time, I have not seen research that would demonstrate a considerable difference between using those libraries and React in real applications. Benchmarks similar to the one you linked to are notoriously easy to fool. Quoting the author of one of the best benchmarks of its kind:

All this is just a benchmark games, and nobody should take seriously any numbers in this benchmarks.

There are many cases where “optimizations” that appear good on such benchmarks have a negative effect on real apps since they have a different structure. I’m sure @trueadm can share his experience with this effect.

From our research, the vast majority of the time spent by applications built with React-like libraries lies in the application code. We wish optimizing the diffing algorithm was more fruitful for anything other than winning benchmarks, but that does not match our experience. The library code was a small fraction of the executed code in real apps, at least in the scenarios we were looking at. Most of the time was spent in the components: rendering and lifecycles. Abstraction has a cost. This is why we are spending our efforts in two areas:

  • Reducing the cost of component abstraction with experiments in compilation techniques.
  • Better prioritization and interleaving of IO with rendering to improve perceived performance.

All of this is work in progress, but we’ll share if we find something that works well.

That said, if you can summarize your findings and what’s novel about your approach, we’d love to hear it too.

@markozxuu

markozxuu commented Sep 14, 2017

@gaearon Are you planning to improve React's performance in the future? I would like to help make React a more effective tool. I admire the great work you all have done with React :)

@gaearon
Collaborator

gaearon commented Sep 14, 2017

We are always working on that. :-)
But again, most of the perf issues we’re seeing are in the application code, not the library code.

@localvoid

@thomas-jeepe

First of all, this benchmark is really bad at measuring vdom diffing overhead. It has simple test cases that can be easily implemented in vanilla js and all benchmark implementations are optimized to make sure that they perform diffing as little as possible.

One thing unfortunately where react is not as strong is in performance, which gave roots to Inferno and Preact.

Last time I checked, Preact had slower vdom diffing.

The point I want to bring up is the library petit-dom. It uses a particular diff algorithm (links that explain it are provided in the README) and it seems to score incredibly well on vdom performance tests.

When I worked on my first vdom implementation I also tried different LCS algorithms, but there is no need to find the LCS because all keys are unique among their siblings, so there are better ways to solve this problem.

The main reason it is faster than alternative vdom implementations is its different feature set. Here are some React features that petit-dom doesn't have and which will have a noticeable impact on vdom diffing:

  • Immutable Virtual DOM Nodes
  • Implicit keys
  • Key Namespaces (children nodes in deeply nested arrays won't have key collisions)
  • Component lifecycle

@thomas-jeepe
Author

Alright, this got some attention.

I fully understand the difference between start up time and updating time.

The diffing algorithm has no influence whatsoever on startup time; startup time should be addressed by minimizing code size and/or startup computation, via extensive tree shaking or other means.

@localvoid The reason I mentioned Preact was startup time, and yes, diffing against the dom rather than a vdom has its disadvantages and seems to be slower in Preact's case.

Having a larger feature set is definitely a large influence, however it doesn't mean React can't steal optimizations from other libraries :)

Although I am curious: what do Immutable Virtual DOM Nodes entail? Do you just do an Object.freeze on the vdom nodes? That seems inefficient, so I doubt it.

@gaearon Ok, so let me explain the original intent of the post more.

For my purposes, I want to build a web application on mobile with 60 fps animations, using a declarative library like react-motion to handle the animations that can't be done with simple css. This means I need very fast patching.

The concern I have with React is not scheduling as handled by Fiber; it is: can I render at 60fps when doing complex animations? That means as little overhead as possible. This becomes a scheduling issue when more than one thing is occurring at once, which is not the regular situation.

The regular situation is: the user presses button X and a nice animation should occur without stuttering. That calls for smaller patch overhead and is not solved by something like Fiber.

Out of this concern, I looked around and saw Inferno, preact, bobril (not vdom), surplus (also not a vdom), and petit-dom (which I am not the author of and it is not my novel approach) which uses a certain diff algorithm described in a paper, and scores among the best in vdom libraries.

Using that information, I considered that if the overhead of patching the dom is significantly lowered by taking certain approaches, why shouldn't react js take them?

Now on to your points: obviously benchmarks are easy to fool and don't always represent real-world cases. Surplus skips the diffing step by using a MobX-style state and compiles down into pure updates to the dom. Many other libraries aren't meant for real-world use or measure the wrong statistics.
Updating often and fast, however, is what I am trying to optimize, and that seems a much more relevant benchmark than a todo list, although it still might be far off.

Anyways, the viewpoint I had in the post was not the regular use case, but rather areas where diffing/patching is more important, such as animation. For 90% of patching it makes no difference what vdom library you use.

Of course, technically if I really, really wanted the best performance I could go with gsap, but it isn't as declarative or integrated as react-motion is.

As a closing point, the reason I brought up petit-dom (not my library) and the O(ND) diff algorithm is that, if there is some sort of bottleneck in the diffing algorithm, perhaps that algorithm could increase the speed, or something else could. If the bottleneck is with components, then maybe @trueadm can improve it with what he knows from Inferno. If the bottleneck is in something else, maybe that can be improved. But in the end we all just want a faster React :D

@localvoid

Although I am curious: what do Immutable Virtual DOM Nodes entail? Do you just do an Object.freeze on the vdom nodes? That seems inefficient, so I doubt it.

It means that you can reuse or hoist Virtual DOM nodes. petit-dom mutates vdom nodes and stores a reference to the DOM node on the vdom node.
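
To illustrate the difference (a minimal sketch of the general idea, not petit-dom's or React's actual internals; h is a hypothetical vnode factory):

// Minimal sketch, assuming a generic h(type, props, ...children) vnode factory.
// With immutable vnodes, a static subtree can be hoisted and reused on every render:
const staticHeader = h("h1", { class: "title" }, "Dashboard");

function App(state) {
  // The same vnode object is returned each time, so a renderer can skip
  // diffing it entirely via a reference-equality check.
  return h("div", null, staticHeader, h("span", null, String(state.count)));
}

// A library that mutates vnodes (e.g. writing vnode.dom = matchedDomNode during
// diffing) cannot safely share `staticHeader` across renders or positions, because
// that bookkeeping would be overwritten, so it must allocate fresh vnodes instead.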

@gaearon
Collaborator

gaearon commented Sep 14, 2017

The concern I have with React is not scheduling as handled by Fiber; it is: can I render at 60fps when doing complex animations? That means as little overhead as possible. This becomes a scheduling issue when more than one thing is occurring at once, which is not the regular situation.

Interestingly this does become a scheduling issue in our experience, but perhaps just in larger apps. For example, at Facebook, there’s a lot of AJAX callbacks firing here and there, and that can interfere with the video player (which should be 60fps). So even if we optimize the hell out of the diffing we do bump into the problem of scheduling sooner or later because any particular animation is a part of a larger app that is worked on by many different teams.

Your use case is still interesting to us because it’s a stress test for React’s update model. The best way to make something faster is to just not do it. Yes, maybe diffing could be 20% faster, but if we want to make your use case work well, it might be that the best way is to escape React’s usual model of comparing leaf elements, and do the minimal work possible (i.e. only even compare things that we know to be dynamic).

This is something Inferno explored with templates. Separate static and dynamic. Yet it doesn’t really make a big difference in very dynamic applications in our experience. Why? Component abstraction. There’s always components in the middle so even if you optimize the leaf node comparisons, you still gotta go “into” components and render them. Optimizations to host nodes only work well when we have contiguous blocks of them, but components get in the way. This is why we’re looking at compilation techniques like #7323 first. If we can make component inlining work, we can end up with larger contiguous blocks of host elements, which we can then avoid diffing by using bytecode (similar to Glimmer).

I hope that makes sense! I’m sorry if I came across as dismissing your idea. I don’t mean that making diffing faster is unnecessary—sure, there are use cases where marginal improvements are important, but with our apps it seems like any marginal improvements are beat by systematic issues coming from the very nature of flexibility React allows. So we’re trying to address those systematic issues instead. Basically, trying to have our cake (expressive JS-based components) and eat it (efficient in corner cases) too. We’ll see how it goes 😉

@yelouafi

yelouafi commented Sep 14, 2017

Hi @thomas-jeepe. I'm the author of petit-dom.

I want to add that petit-dom's perf in this benchmark is not due to the O(ND) algorithm (it even slows things down in this case).

The petit-dom diff process can be split into 3 main phases:

  1. pre-processing optimizations: detect common prefix/suffix, simple edits, find a contiguous LCS in linear time ...

  2. run the O(ND) algorithm

  3. If (2) does not succeed after some number of iterations, switch to a hashmap-based approach.

In the benchmark the O(ND) phase is only hit once: when 'create X rows' is called successively, we need to replace all the old rows with new ones. It means the 2 sequences are totally different, which triggers the worst case of O(ND) ('D' is at its max value). The process detects that O(ND) can't find an edit path below a certain threshold (currently fixed at 50), so it switches to (3).

So in the only case where we hit the algorithm, we don't really use it. Even worse, the process spends some time trying to compute an edit path below 50 before switching to another approach.

So petit-dom's perf on the benchmark isn't really proof of the O(ND) approach. I think it's more due to (1) and some common VM-oriented optimizations (monomorphism, avoiding prototypes).
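
For readers who haven't seen phase (1), here is a rough sketch of the common prefix/suffix skipping (my own illustration, not petit-dom's actual code; isSameNode is a hypothetical key/type comparison):

// Rough sketch of common prefix/suffix skipping (illustration only).
// isSameNode is a hypothetical check comparing vnode key and type.
function diffChildren(oldChildren, newChildren, patch) {
  let oldStart = 0, newStart = 0;
  let oldEnd = oldChildren.length - 1, newEnd = newChildren.length - 1;

  // Skip the common prefix: nodes that line up from the start only need an in-place diff.
  while (oldStart <= oldEnd && newStart <= newEnd &&
         isSameNode(oldChildren[oldStart], newChildren[newStart])) {
    patch(oldChildren[oldStart++], newChildren[newStart++]);
  }
  // Skip the common suffix symmetrically.
  while (oldStart <= oldEnd && newStart <= newEnd &&
         isSameNode(oldChildren[oldEnd], newChildren[newEnd])) {
    patch(oldChildren[oldEnd--], newChildren[newEnd--]);
  }
  // Only the remaining middle window needs the expensive part
  // (O(ND), a keyed hashmap, LIS, etc.).
  return { oldStart, oldEnd, newStart, newEnd };
}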

I picked O(ND) based on an observation: all virtual dom implementations I've seen start by building a hashmap of the old sequence to optimize vnode lookup when comparing the old and new sequences.

So if we have, say, 1000 vnodes and we have only changed a few of them, say 10, we'll build a hashmap of 1000 entries before we start comparing. The O(ND) algorithm doesn't use hashmaps, and its cost in terms of time/space is proportional to how many differences there are between the 2 sequences (the D parameter). This makes it suitable for diffing (potentially large) sequences with few differences.
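
For contrast, here is a minimal sketch of that up-front key index most keyed diffs build (again my own illustration, not any particular library's code); note that its cost is proportional to the old sequence length even when almost nothing changed:

// Minimal sketch of the up-front key index most keyed diffs build (illustration only).
// Cost is proportional to the old sequence length, even if only a handful of nodes changed.
function buildKeyIndex(oldChildren) {
  const keyToIndex = new Map();
  for (let i = 0; i < oldChildren.length; i++) {
    keyToIndex.set(oldChildren[i].key, i);
  }
  return keyToIndex; // later used to match each new child to its old position
}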

Now having said that, we need to examine the pertinence of our assumptions:

  1. How often do we encounter sequences with 1000 elements in a vdom application? In my experience I typically encounter 'deep trees' instead of 'large sequences', i.e. trees with not so many children (generally below 100) but deeply nested.

  2. Is building large hashmaps as costly as we may think? Does it have a real impact? My observation is that when measuring things, VM engines can really surprise you (I now have some doubts about this base assumption).

  3. The running time of O(ND) is 'not predictable'. It depends on an 'output parameter' (D, which is the difference between the 2 input sequences), and so the perf can vary up to O(N^2). In petit-dom this is resolved by stopping O(ND) after a threshold of iterations, but this illustrates the tradeoffs of this approach.

TL/DR There is still a lack of evidence that the O(ND) algorithm makes a noticeable impact on the perf. of virtual DOM diffing.

Note that it doesn't mean the approach is bad but only that there is still no evidence to support it. And it actually has its own tradeoffs. OTOH I still think pre-processing optimizations of (1) are relevant enough that React should (fully) take advantage of them (see #10382)

Implicit keys

@localvoid I haven't checked but I think this is well handled in petit-dom. The diff is 'stable', meaning unkeyed vnodes are preserved as long as they keep appearing in the same order.

@gaearon
Collaborator

gaearon commented Sep 14, 2017

How often do we encounter sequences with 1000 elements in a vdom application? In my experience I typically encounter 'deep trees' instead of 'large sequences', i.e. trees with not so many children (generally below 100) but deeply nested.

That’s our observation as well. If you have a wide list you can usually virtualize it. So it’s not as much of a problem in practice unless you need 1000 items on the screen at the same time. It’s a corner case but it happens (e.g. stock trading). Not common for our apps though.

Whereas if you have a deep tree, there’s no easy escape. And it’s death by a thousand cuts that you can’t just localize and solve in one place. It often even spans across sub-applications and teams. That’s why, as I mentioned above, compiler optimizations to flatten the tree are the area we’re very much interested in.

@thomas-jeepe
Author

@gaearon Interesting, so I suppose until compilation optimizations come around, the best way to speed up diffing in performance-critical animation paths (not the regular use case) is to reduce component overhead and have a shallower tree.

I also remember Inferno attempted the approach of diffing only dynamic elements via blueprints, though I am unsure why they changed it. Surplus also takes a similar approach but cheats: it uses a MobX-like state and a compiler:

// before compilation
let text = S.data("foo")
let x = <span> {text()} </span>

The span generates a real dom node, and setting text directly maps to updating the dom node without diffing. This doesn't seem to work with a more functional render-from-the-top approach though.
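
Roughly, the idea behind the compiled output is something like the following (my own loose sketch, not Surplus's actual output; autorun is a stand-in for the library's reactive-computation primitive):

// Loose sketch of the compiled idea (not Surplus's actual output).
// `autorun` stands in for the library's reactive computation primitive:
// it re-runs its body whenever a signal it read changes.
let text = S.data("foo")

let x = document.createElement("span")
autorun(() => {
  // Only this one node is touched when the signal changes – no tree diffing.
  x.textContent = text()
})

text("bar") // updates the span's text directly, no vdom involved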

There are definitely ways to go about hyper-optimizing this area, and manually opting in to blueprints or using a compiler seem to be two ways to do it.

Also, regarding the scheduling issue, I'm cheating and moving all processing, all websocket requests, all indexeddb requests to a web worker, meaning no dom thread blocking.

@yelouafi Thanks for coming to the thread.

It looks like I misunderstood what exactly petit-dom was targeting. When I was looking through the article describing the algorithm, I saw the pre-processing optimizations and was seriously impressed, as they meant a much faster update time than usual with minimal computation. And it seems like applying those optimizations before engaging in a regular diff could lead to a (perhaps only marginal) increase in performance.

The issue you pointed out is definitely a point where the pre-processing optimizations could come into play as a benefit, removing the need to move too many items around.

However, if the O(ND) diff algorithm doesn't actually deliver an increase in speed, or that is not proven, then it shouldn't be applied, or it should be tried out more. The pre-processing optimizations, on the other hand, should be applied if they make a sufficient difference.

Optimizing the rest of React would be completely separate from the pre-processing optimizations and would most likely be what @gaearon is referencing, or perhaps use methods from Inferno that @trueadm could help with, or take optimizations from other libraries.

Or you could just rewrite in Rust (jk)

@localvoid

@yelouafi

I haven't checked but I think this is well handled in petit-dom. The diff is 'stable', meaning unkeyed vnodes are preserved as long as they keep appearing in the same order.

[null, <div>] => [<div>, null]

When null is part of the LCS, you'll probably get correct results with two dom ops (remove div, insert new div); when <div> is, you'll reuse a node from a different position, which can lead to losing internal state or something even worse.

@trueadm
Contributor

trueadm commented Sep 14, 2017

Just to add to this discussion as I was mentioned a bunch of times. I've been spending a lot of time looking into making React applications faster in the real world at Facebook. I did a talk about this earlier in the year: https://www.youtube.com/watch?v=djOc1EK07Tk.

A while back, when I started at Facebook, I hacked together a "ReactDOM lite", which was a very incomplete version of React that was about 10kb in size and applied some of the optimizations that Inferno had (where possible), and tried them on real-world applications. I saw no real performance wins other than startup time, and that was purely down to the bytesize of 10kb vs ~40kb. I plugged my "ReactDOM lite" into some of the popular benchmarks around at the time and the numbers looked amazing.

Realistically speaking, the reason applications today are slow is not really the frameworks, but the application code. Benchmarks tend to be a couple of components repeated hundreds of times with the same actions applied to them. If you try to benchmark applications with hundreds of thousands of lines of code, containing lots of async network requests, images loading, audio/video streaming/playing etc, you'll see no real win from optimizing React's diff algorithm – it might even make it slower.

There are plenty of things we're working on to make React applications faster and it will likely involve application (not framework) compilation. Further in the future, we might explore compiling React applications to bytecode and use a static/dynamic aware bytecode diffing engine rather than a virtual DOM diffing engine – who knows. :)

@thomas-jeepe
Author

@trueadm Thanks for the response :D

So, I fully agree with you on the application side: for 99% of applications React can handle them without breaking a sweat and React is nowhere near the bottleneck.

If I want to, for example, animate at 60 fps on mobile (using something like react-motion), my reasoning was that there should be little to no overhead on each patch, which brought me to the pre-processing optimizations, Inferno, etc.

If pre-processing optimizations and diffing aren't the right avenue of improvement for that situation, then it definitely makes sense to target compile-time optimizations rather than React's vdom implementation itself.

Anyways, I learned a lot from this thread. Thanks a ton, and hopefully React will get even faster on the heavy diffing/animation side, because it would be awesome to be extremely close in performance to gsap while having awesome declarative animations :D

@monfera

monfera commented Sep 14, 2017

The issue with posing the speed question as a faster diff (as in the issue title) is that it's still a diff, i.e. your application generates snapshots of state in sequence, from which the delta is reverse engineered. The delta info may be available at the application level but gets lost by the time the new desired view is compared to the existing one. So cycles are spent recovering info that was lost (sure, shouldComponentUpdate helps, but then it's no longer diffing). There's also the overhead of restating a lot of things that might not have changed, a common situation in data visualization, which can have huge scene graphs with masses of scatter points etc. Immutability can help, but large glyph arrays then get mapped to cons cells or similar, to allow efficient update of a few points while leaving the large quantity of unchanging points out of actual diffing.

@leeoniya

leeoniya commented Sep 26, 2017

Regarding the update time, I have not seen research that would demonstrate a considerable difference between using those libraries and React in real applications. Benchmarks similar to the one you linked to are notoriously easy to fool. [...] From our research, the vast majority of the time spent by applications built with React-like libraries lies in the application code.

Let's address the elephant in the room.

That post was written in Nov 2016 and @localvoid, @trueadm and @yelouafi are all aware that the keyed and unkeyed impls have been split since Jan 2017, yet no one points this out here. In addition, new libs/impls are now passed through a MutationObserver sanity check upon submission to ensure they're properly categorized and perform the correct DOM mutations - not just have the correct visual output. The amount of "gaming" that's possible today is vastly smaller than it had been in the past. The current results table is in much better shape.

I agree with the overall conclusion that rendering benchmarks are meaningless IFF your perf bottleneck is elsewhere - which, sadly, is the typical case. Though your battery life will always notice good algo improvements.

For example, m.facebook.com currently does almost 300ms of scripting during page load and generates just ~1,000 total dom nodes. This is on a fast desktop in Chrome 63. That is a lot of scripting for such a small dom size. It's almost certainly not React's diffing inefficiencies that are the main offender here but rather the overall architecture; swapping in Preact or making things async isn't going to fix that to any significant degree.

All that being said, it's good to see that Fiber is moving up in perf and down in mem, too :)

@gaearon
Collaborator

gaearon commented Oct 4, 2017

@leeoniya This is great to hear! I was actually not aware of these improvements so thanks for letting us know.

@sghall

sghall commented Nov 4, 2017

Maybe because I am interested in animation, I tended to focus on what fiber would make possible for those use cases. Seems it has shifted over time and probably for good reason, but it would be interesting to hear more about it.

Everyone likely knows this post by @acdlite that talked at a high level about what Fiber was all about. From the section on scheduling...

The key points are:

In a UI, it's not necessary for every update to be applied immediately; in fact, doing so can be wasteful, causing frames to drop and degrading the user experience.

Different types of updates have different priorities — an animation update needs to complete more quickly than, say, an update from a data store.

A push-based approach requires the app (you, the programmer) to decide how to schedule work. A pull-based approach allows the framework (React) to be smart and make those decisions for you.

React doesn't currently take advantage of scheduling in a significant way; an update results in the entire subtree being re-rendered immediately. Overhauling React's core algorithm to take advantage of scheduling is the driving idea behind Fiber.

Emphasis added by me on the last line. This talks explicitly about animations and says scheduling is the "driving idea behind fiber."

If you watch this intro-to-Fiber video, I have to say you'd walk away thinking that there would be some way to get 60fps animations in React 16. I think that's fair. If the actual engineering of it ran into problems... totally understandable. But these sources had a ton of exposure, so it may be good to put something out there to clarify.

My take is that it was found to be impractical when you actually tried to implement it (?) I didn't watch the repo that closely, but I saw a few commits here and there with the different priority levels in there and figured this was still being worked on and you would make some kind of official announcement at some point.

I see @gaearon's answer above to @thomas-jeepe regarding animations, where he says...

Your use case is still interesting to us because it’s a stress test for React’s update model. The best way to make something faster is to just not do it. Yes, maybe diffing could be 20% faster, but if we want to make your use case work well, it might be that the best way is to escape React’s usual model of comparing leaf elements, and do the minimal work possible (i.e. only even compare things that we know to be dynamic).

This is something Inferno explored with templates. Separate static and dynamic. Yet it doesn’t really make a big difference in very dynamic applications in our experience. Why? Component abstraction. There’s always components in the middle so even if you optimize the leaf node comparisons, you still gotta go “into” components and render them. Optimizations to host nodes only work well when we have contiguous blocks of them, but components get in the way. This is why we’re looking at compilation techniques like #7323 first. If we can make component inlining work, we can end up with larger contiguous blocks of host elements, which we can then avoid diffing by using bytecode (similar to Glimmer).

This idea of just having leaf nodes that are "escaped" perked up my ears and sounded like it could be awesome for animated charts and graphs. I'm not sure I follow why it didn't work in Inferno. It would be interesting to hear more on this.

Is there a reliable way to do 60fps JS animations in react 16? I think the answer is no.

@monfera

monfera commented Nov 4, 2017

@sghall I agree with your conclusion

Is there a reliable way to do 60fps JS animations in react 16? I think the answer is no.

though I'd rephrase it as

  1. generally there is no reliable way to do 60fps JS animation (as it depends on browser DOM speed, scenegraph size etc etc)
  2. React hasn't been optimized for animation, and it inherently introduces overhead that's fine for general UX use but eats significantly into the rAF cycle time budget if you have a nontrivial scenegraph, i.e. React can't give you the maximal frame rate, given a specific "scenegraph time series", i.e. an animation

Time lost can't be regained - as my prev. note mentioned, the fundamental issue with generally dynamic scenes is that the React way first loses key information (that of what changed and how), then this lost information must be recovered (via diffing) to avoid updating all nodes. It takes time.

In contrast, here if you open Layout tweaks and tweak rowPitch or rOfSparkStrip then

  1. there's firsthand info of what changed (of course, whatever you tweaked, but it generalizes to animations where you have specific time series for node positions etc.);
  2. this information is not lost, in that only the minimal work is being done to modify the dependent nodes - tweak the radius, and no DOM element or virtual anything is touched or compared other than doing a setAttribute on the target nodes.

The result is that the whole thing just keeps working even though it undergoes changes. This example happens to be a weird D3 + FRP inspired experiment, but it works with any reactive streams. I simply have no idea how React or similar could efficiently make up for the time lost due to losing the info of what changed in the first place, info which exists before React even enters the picture. The React way is fine for general UX and a lot of other stuff, but if you want close-to-the-metal speed, have an overall large, dynamic scenegraph, and/or have a deep scenegraph with dynamic leafs, then rendering resources will be significantly underutilized, as a good fraction of cycles go toward recovering info. Of course React can just heuristically not update all things if it doesn't fit in the time budget, prioritize, etc., but such strategies are also workable with leaner rendering methods.
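
As a concrete (made-up) illustration of that direct-update style: when the source of a change is known up front, the update can be a single attribute write, with nothing diffed or compared:

// Made-up illustration of the direct-update style described above: the source
// of the change is known, so the update is a plain attribute write.
const sparks = Array.from(document.querySelectorAll("circle.spark"));

function setSparkRadius(r) {
  // No vdom, no diff – touch only the attribute that depends on the tweaked value.
  for (const el of sparks) {
    el.setAttribute("r", String(r));
  }
}

// e.g. wired to a slider or an animation timeline:
setSparkRadius(3.5);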

I think that the often-mentioned "but the DOM is slow anyway and is typically the bottleneck" or "a 10-20% speed difference doesn't matter in UX" are currently correct. But browser makers, e.g. the Mozilla team right now, are working on new ways of compositing and rendering, using well-known techniques from video game rendering, and there are other render targets such as WebGL, React Native, OpenGL, Canvas etc. that can be vastly faster than the current (!) DOM. So what's a little overhead now can turn into big overhead, due to Amdahl's law, if other code paths improve.

@sebinsua

sebinsua commented Mar 13, 2018

Your use case is still interesting to us because it’s a stress test for React’s update model. The best way to make something faster is to just not do it. Yes, maybe diffing could be 20% faster, but if we want to make your use case work well, it might be that the best way is to escape React’s usual model of comparing leaf elements, and do the minimal work possible (i.e. only even compare things that we know to be dynamic).

This is something Inferno explored with templates. Separate static and dynamic. Yet it doesn’t really make a big difference in very dynamic applications in our experience. Why? Component abstraction. There’s always components in the middle so even if you optimize the leaf node comparisons, you still gotta go “into” components and render them. Optimizations to host nodes only work well when we have contiguous blocks of them, but components get in the way. This is why we’re looking at compilation techniques like #7323 first. If we can make component inlining work, we can end up with larger contiguous blocks of host elements, which we can then avoid diffing by using bytecode (similar to Glimmer).

This idea of just having leaf nodes that are "escaped" perked up my ears and sounded like it could be awesome for animated charts and graphs. I'm not sure I follow why it didn't work in Inferno. It would be interesting to hear more on this.

Bumping this.

Can anybody fill us in on why it didn't work for Inferno and whether anybody has tried this with React?

@leeoniya

leeoniya commented Mar 13, 2018

fwiw, some perf experiments have been spotted over at [1]. the results look promising. somewhat reminiscent of https://github.com/google/incremental-dom but with opcodes (presumably for a vm) rather than function calls.

cc @trueadm ;)

[1] https://github.com/trueadm/js-framework-benchmark/blob/master/react-compiled/src/Main.jsx

@trueadm
Contributor

trueadm commented Mar 14, 2018

@leeoniya Ah, you saw the research work I was doing a while back. It's true, I've been looking into the performance characteristics when we compile to a lightweight bytecode representation model via an ahead-of-time tool like Prepack.

The big thing here is that we're looking into compiling your "application code" into optimized bytecode – essentially compiling away React's paradigms and all the abstraction of user-land components (including virtual DOM, classes, etc). Making micro-optimizations to React itself gave us no real-world wins. I expect the wins from compiling large complex apps (that may contain bottleneck implementations) would be a game-changer if possible.

We still don't know if this approach will work, as compiling and optimizing idiomatic JS is far more complicated than doing this to a template language. We'll see though!

@Jack-Works
Contributor

Jack-Works commented Nov 12, 2019

We expect that we can get better at this in the future as browsers become more compatible with each other, and there is less need to polyfill features like events. We support many large apps so we are a bit conservative in removing polyfills, but we are gradually doing this. Still, these differences are in initialization time and bundle size, not in anything related to the diffing.

So is there any consideration of letting developers opt out of the polyfills when they can make sure their application will only run on modern browsers?

Another thing to mention,

In microsoft/TypeScript#34547 and https://github.com/reactjs/rfcs/blob/createlement-rfc/text/0000-create-element-changes.md it is mentioned that createElement is going to change. I wonder if we can preserve static JSX information in the emitted code.

function App() {
    return <div>
        {dynamicPart}
        <p>Static Part <a>deep</a></p>
    </div>
}

will be compiled to

"use strict";
function App() {
    return React.createElement("div", null,
        dynamicPart,
        React.createElement("p", null,
            "Static Part ",
            React.createElement("a", null, "deep")));
}

But if we could use the static information in the JSX (currently lost in the emitted code), it could instead be emitted as:

"use strict";
function App() {
    const _a = React.useRef(() => React.createElement("p", null,
            "Static Part ",
            React.createElement("a", null, "deep")))

    return React.createElement("div", null, dynamicPart, _a.current);
}

React can check the reference of _a.current (that will always be the same) and skip the diff on the whole JSX tree.

@Jessidhia
Contributor

What you propose is already done by https://babeljs.io/docs/en/babel-plugin-transform-react-constant-elements but there are corner cases where it breaks, like in babel/babel#6751
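
For reference, that transform roughly hoists constant elements out of render like this (a simplified illustration, not the plugin's exact output):

// Simplified illustration of constant-element hoisting (not the plugin's exact output).
// Before:
function Greeting() {
  return <div className="greeting">Hello</div>;
}

// After (roughly): the constant element is hoisted out of render, so its reference
// stays the same between renders and React can bail out of re-rendering that subtree.
var _ref = <div className="greeting">Hello</div>;
function Greeting() {
  return _ref;
}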

@yisar
Contributor

yisar commented Dec 29, 2020

I've read everyone's point of view. First of all, I think asynchronous rendering can also have a better diff implementation.

Other optimizations, such as compilation and scheduling, do not conflict with the diff algorithm. They can exist at the same time.

I think the most important part is the preprocessing, that is, the handling of common prefixes and suffixes.

I implemented this algorithm in Fre. Fre and React are similar in that rendering is asynchronous, which is a challenge, because we can't give up generating the linked list while still ensuring the order.

https://github.com/yisar/fre/blob/next/src/reconciler.ts#L92

In general, I think better scheduling and better algorithms can coexist.
