feat: Lazy load routers #4129
Comments
Cool. I'm optimistic we could support this. As a workaround, anyone who runs into this problem should be able to achieve the same perf gains using the SOA approach: https://github.com/trpc/trpc/tree/main/examples/soa (or the PR that @juliusmarminge put up for you on calcom/cal.com#8041). Here's an outline of how this could work internally, starting from roughly how it currently works:
For this to work, our routers would need to be a bit smarter: they'd have to resolve lazy references on demand and cache whatever has already been resolved. There are also some potentially "problematic" code paths to consider.
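The "resolve on demand and cache" idea above can be sketched in isolation. This is an illustrative sketch only, not tRPC's actual internals: the `lazy` helper and the simulated router module below are invented names, and the counter stands in for an expensive `import()`.

```typescript
// Illustrative sketch: a lazy router reference is a thunk whose result is
// resolved once and memoized, so repeated procedure calls never re-run the
// (potentially expensive) module loader.
type Loader<T> = () => Promise<T>;

function lazy<T>(load: Loader<T>): Loader<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= load());
}

// Simulated "heavy router module" with a load counter; real code would do
// something like `import('./routers/user')` inside the loader.
let loadCount = 0;
const getUserRouter = lazy(async () => {
  loadCount += 1;
  return { procedures: ["user.byId", "user.list"] };
});

async function demo(): Promise<number> {
  await getUserRouter();
  await getUserRouter();
  return loadCount; // loader ran exactly once despite two resolutions
}
```

Caching the promise (rather than the resolved value) also means concurrent first calls share a single in-flight load instead of racing.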
We had some discussion (no decisions yet) internally on this, so I'm just dumping some of my thoughts here for wider visibility. tl;dr: I'm sceptical this is the right solution, but that doesn't undermine the fact that the problem exists and needs a solution. So firstly, a few assertions from other bits of information I've read on this issue; please correct me if I'm wrong:
So I'm sceptical lazy loading is the true solution for a few reasons:
The solution to these problems, in my view, would be to break the tRPC API up into multiple skinnier APIs, then merge the definitions for the client and use a special links setup (or an API Gateway equivalent) to route calls appropriately. @juliusmarminge has already POC'd this, and it doesn't look that hard for us to provide some small abstractions in the form of special links and a way to merge/prefix AppRouter definitions. This would scale much better to truly massive APIs which are challenging the size limits of a lambda, or the complexity limits of TypeScript, while also reducing bundle size and allowing you to chop up the API along dependency boundaries: one API has prisma, another API doesn't, and so on.

I don't think this is a perfect idea. Inevitably, dependency boundaries will exist throughout an API's tree rather than falling neatly along top-level routers, and deep-merging routers would probably be off the table.

In summary: there's really nothing too offensive about adding lazy loading, and maybe my reasoning above is wrong. It's also probably only a handful of lines of code to implement, so it wouldn't be a huge maintenance headache in the grand scheme. The biggest challenge for me here is the lack of data on the problem (is it cold-start time? is it actually request-start time thanks to a heavy createContext? is the code bundling strategy hurting the project? what gains would this actually offer vs other solutions?), given that I'm looking in from the outside, plus the lack of a prototype to compare data against.
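The "special links setup" routing described above can be sketched in isolation. Everything below is invented for illustration (the backend URLs and the prefix map are not real, and this is not tRPC's API): the core of the idea is just choosing a backend per call based on the procedure path's top-level segment.

```typescript
// Invented example: pick a backend for each call by its top-level path
// segment, roughly what a special-links setup or API gateway would do.
const backends: Record<string, string> = {
  billing: "https://billing.example.com/trpc",
  i18n: "https://i18n.example.com/trpc",
};
const fallbackBackend = "https://core.example.com/trpc";

function resolveBackend(procedurePath: string): string {
  const prefix = procedurePath.split(".")[0];
  return backends[prefix] ?? fallbackBackend;
}
```

In a real setup this condition would live inside the client's links chain (or an HTTP gateway), with the merged AppRouter definitions giving the client one unified, typed API surface while each backend deploys and cold-starts independently.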
Just a heads up: we actually went through and lazy loaded ALL tRPC procedures, and it helped reduce cold start boots from 16 seconds to 2-3 seconds. So this feature would've helped a lot here.
You separated into multiple lambdas too - do you have numbers on each change separately?
We'll be releasing a case study with more in-depth numbers soon.
I'm actually unsure whether this would've moved the needle the same way the refactor to the SOA approach did. The culprit is likely not tRPC itself but rather each router's dependencies stacking up.
Similar thoughts here: the long-term solution to an API getting too large for serverless is to have multiple functions, not to lazy load bits of one. Lazy loading would just lengthen the runway, so to speak. So if we add more first-class support for solving these problems, it will probably be by providing more primitives for merging several tRPC APIs into one client; that way we encourage and support the best-practice solution. Happy to be proven wrong, of course, but I've been following this piece via Julius and gained the understanding that there was essentially a subset of routers with a heavy dependency (e.g. i18n), and carving that off made a big difference.
From a DX perspective, the lazy loading syntax in the initial post is fantastically easy to understand and use.
Agreed, lazy loading would be an awesome feature to have.
+1! Huge projects will benefit a lot!
@Nick-Lucas this is exactly what I'm looking for, and I would be willing to help implement it if needed. Is there a separate ticket tracking this feature?
Describe the feature you'd like to request
Basically next/dynamic, but for the backend. Recently our tRPC router has been growing significantly for many reasons, mostly due to heavy third-party SDKs. The problem is that even when we're not calling a specific procedure, we still load ALL of the router's dependencies when calling any procedure.
This impacts performance and cold boots significantly.
Describe the solution you'd like to see
Splitting big tRPC routers into lazily loadable chunks could help by loading only what is needed when a procedure is called. I would imagine some pseudocode like this:
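The original snippet isn't preserved here; below is a hedged sketch of what the proposed syntax might look like. The `lazy` helper is hypothetical (it is the feature being requested, not an existing tRPC export), and the router file paths are made up.

```ts
// Hypothetical syntax sketch (pseudocode): each sub-router's module would
// only be imported the first time one of its procedures is actually called.
import { initTRPC } from '@trpc/server';

const t = initTRPC.create();

export const appRouter = t.router({
  user: lazy(() => import('./routers/user')),       // heavy deps stay unloaded
  billing: lazy(() => import('./routers/billing')), // until billing.* is hit
});

export type AppRouter = typeof appRouter;
```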
Describe alternate solutions
Other alternatives involve moving all the procedure code into a separate file and using imports for that, as done here.
Also, lazy loading only the third-party libraries (like Stripe, Google, etc.) seems to help. But it would be a much nicer DX to simply lazy load full routers, with native support for it.
Additional information
No response
TRP-19