
SSR -> ludicrous speed #158

Open
leeoniya opened this issue Jul 7, 2017 · 10 comments

@leeoniya (Member) commented Jul 7, 2017

sure, ssr is already plenty fast [1].

but we can shim domvm's virtual-dom api to barf out strings from the replaced factories rather than building up a vtree. should be significantly faster.

[1] https://github.com/leeoniya/domvm/tree/3.x-dev/demos/bench/ssr
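
a rough sketch of what such string-emitting replacement factories could look like (the names and signatures here are illustrative only, not domvm's actual api; escaping and void elements are simplified):

```js
// sketch only: factories that emit HTML strings directly instead of building vnodes

// text factory: escapes raw text
const tx = (s) =>
  String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

// element factory: children are assumed to already be serialized strings,
// i.e. the output of el() or tx(); void elements are ignored for brevity
function el(tag, attrs, children) {
  let out = '<' + tag;

  for (const k in attrs || {})
    out += ` ${k}="${String(attrs[k]).replace(/"/g, '&quot;')}"`;

  return out + '>' + (children || []).join('') + `</${tag}>`;
}

// the template code keeps the same shape, but the "vtree" is just a string
const html = el('ul', { class: 'list' }, [
  el('li', null, [tx('item <1>')]),
  el('li', null, [tx('item <2>')]),
]);
// -> <ul class="list"><li>item &lt;1&gt;</li><li>item &lt;2&gt;</li></ul>
```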

@lawrence-dol (Collaborator)

Though building trees allows for post-processing to occur before barfing out strings?

@leeoniya (Member, Author) commented Jul 7, 2017

yeah, there's use in keeping both around. Choice Is Good ™

@leeoniya (Member, Author)

initial results [1] are not terribly encouraging :/

i think that supporting the full flexibility of domvm's templates as-is won't be able to go significantly faster. there needs to be a more imperative, chainable, and explicit vtree construction api to really see a worthwhile improvement. eg: https://github.com/ivijs/ssr-benchmark/blob/master/src/vidom/ui.ts

i could be wrong, and maybe there's some specific low-hanging fruit that could help, but it'll need more detailed tracing.

[1] https://github.com/leeoniya/domvm/tree/gh-158
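
purely for illustration, a made-up example of what a more imperative, chainable construction api might look like (this is neither domvm's api nor the one in the linked benchmark):

```js
// hypothetical chainable builder: every call site states exactly what it is passing,
// so the renderer never has to sniff argument types, flatten nested arrays, or
// normalize the many shapes a flexible template signature allows (escaping omitted)
class N {
  constructor(tag) { this.tag = tag; this.attrs = ''; this.kids = ''; }
  attr(k, v)  { this.attrs += ` ${k}="${v}"`; return this; }
  text(s)     { this.kids += String(s); return this; }
  child(node) { this.kids += node.html(); return this; }
  html()      { return `<${this.tag}${this.attrs}>${this.kids}</${this.tag}>`; }
}

const n = (tag) => new N(tag);

const out = n('ul').attr('class', 'list')
                   .child(n('li').text('a'))
                   .child(n('li').text('b'))
                   .html();
// -> <ul class="list"><li>a</li><li>b</li></ul>
```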

@lawrence-dol (Collaborator)

I think this is likely a dead end. Time to close?

@leeoniya (Member, Author)

this approach is probably a dead end, though it's been a long time and JS engines are completely different today.

but i wanted to see whether ryansolid/dom-expressions#27 bears any fruit (and whether it does something significantly different from what domvm already does); the increase there looks huge.

@lawrence-dol (Collaborator)

Interesting discussion. My interest is purely academic at this point -- we don't use SSR, nor do we have plans to, and if we did it would be in Java anyway. But I agree that performance is of utmost concern on the server, since the work is centralized and no longer amortized among the clients. People always say, "but the server is HUGE", while failing to recognize that scaling also has to account for the server doing that work on behalf of many clients.

I do have in mind a vacation project someday to use Deno to write a fully async application server just to see what it can do... so maybe one day SSR with DOMVM will crop up again. But even then I still wouldn't consider it viable for a production system without utilizing worker threads so as to leverage the available cores.

@leeoniya (Member, Author)

for anything that has a lot of load on it, you'd want to compile to template strings instead of building up an entire vdom structure anyhow. there are better solutions for this like Marko.
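
as a hand-written approximation of what "compiled to template strings" output looks like (compilers like Marko emit code of roughly this shape automatically; the component below is made up):

```js
// static markup is baked into string literals; only the dynamic holes are
// evaluated at render time, with no vnode allocation at all
const esc = (s) =>
  String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

function renderRow(item) {
  return `<tr class="row"><td>${esc(item.id)}</td><td>${esc(item.label)}</td></tr>`;
}

function renderTable(items) {
  return `<table><tbody>${items.map(renderRow).join('')}</tbody></table>`;
}

// renderTable([{ id: 1, label: 'a < b' }])
// -> <table><tbody><tr class="row"><td>1</td><td>a &lt; b</td></tr></tbody></table>
```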

> But even then I still wouldn't consider it viable for a production system without utilizing worker threads so as to leverage the available cores.

however, "a lot of load" really needs to be qualified here. several years ago, on a commodity laptop, on a single thread, domvm could render 1,200 complex pages/sec [1]. 99% of web apps out there will never see anything close to this; their DB will choke long before they hit that kind of throughput, and most pages are less complex than the one in this bench.

you wouldn't do this for the front page of NYTimes, Walmart or Ebay, but that's not the typical case, and you have a lot more hardware at that point anyhow.

[1] https://github.com/domvm/domvm/tree/master/demos/bench/ssr#readme

@leeoniya (Member, Author)

my T480s laptop (8th-gen i5) currently gets ~1650 ops/sec on a single thread in linux.

@lawrence-dol (Collaborator)

All quite true, but (a) that's not all the server is doing, and (b) servers tend toward lower clocks and higher core counts. It still matters, or at least it's a meaningful consideration as to whether pulling the workload of 10,000 clients onto the server actually makes sense. While it's true that you "have a lot more hardware at that point", most of that hardware is represented by additional cores. It matters not if you have a 64-core Threadripper if you are using only a single core -- in that case you more or less have the performance of a midrange laptop. Which is why JavaScript will never truly be viable on servers unless and until it can easily and efficiently leverage all the cores. Again, this is largely academic to me.

@leeoniya (Member, Author) commented May 26, 2021

> Which is why JavaScript will never truly be viable on servers unless and until it can easily and efficiently leverage all the cores.

node has had cluster mode for a long time: https://nodejs.org/api/cluster.html#cluster_how_it_works, which is also seamlessly supported by the de-facto node process manager, PM2: https://www.npmjs.com/package/pm2
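
a minimal cluster sketch along those lines (renderPage() here is just a stand-in for whatever SSR entry point the app actually has):

```js
// fork one worker per core; the primary load-balances incoming connections
const cluster = require('cluster');
const http = require('http');
const os = require('os');

// stand-in for the app's real SSR render call
const renderPage = (url) => `<!doctype html><p>rendered ${url}</p>`;

if (cluster.isPrimary) {               // cluster.isMaster on older node versions
  for (let i = 0; i < os.cpus().length; i++)
    cluster.fork();                    // each fork re-runs this file as a worker

  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, respawning`);
    cluster.fork();
  });
} else {
  // all workers listen on the same port; the primary distributes connections
  http.createServer((req, res) => {
    res.setHeader('content-type', 'text/html');
    res.end(renderPage(req.url));
  }).listen(3000);
}
```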

there's a pretty old article comparing various strategies for distributing load across multiple node processes: https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272

^^ beware of terrible bar charts that don't start at 0

also, worker threads are an option: https://soshace.com/advanced-node-js-a-hands-on-guide-to-event-loop-child-process-and-worker-threads-in-node-js/
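
a tiny worker_threads sketch for comparison (the worker code is inlined via eval only to keep it self-contained; renderPage() is again a stand-in):

```js
const { Worker } = require('worker_threads');

// worker body: receives props, renders, posts the HTML back
const worker = new Worker(
  `
  const { parentPort } = require('worker_threads');
  const renderPage = (props) => '<!doctype html><h1>' + props.title + '</h1>';
  parentPort.on('message', ({ id, props }) => {
    parentPort.postMessage({ id, html: renderPage(props) });
  });
  `,
  { eval: true }
);

let nextId = 0;
const pending = new Map();

worker.on('message', ({ id, html }) => {
  pending.get(id)(html);
  pending.delete(id);
});

// resolves with the rendered HTML; props are structured-cloned into the worker
function renderInWorker(props) {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);
    worker.postMessage({ id, props });
  });
}

renderInWorker({ title: 'hello' }).then((html) => {
  console.log(html);
  worker.terminate();
});
```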
