Roadmap: 2023-24 #1830

Open · 5 of 10 tasks
gbj opened this issue Oct 2, 2023 · 11 comments

gbj commented Oct 2, 2023

(This supersedes the roadmaps in #1147 and #501.)

Big Picture

Depending on how you count, Leptos is somewhere near its 1-year anniversary! I began serious work on it in July 2022, and the 0.0.1 release was October 7. It's incredible to see what's happened since then, with 183+ contributors and 60+ releases. Of course there are still some bugs and rough edges, but after one year, this community-driven, volunteer-built framework has achieved feature parity with the biggest names in the frontend ecosystem. Thank you all, for everything you've done.

Releasing 0.5.0 was a huge effort for me, and the result of a ton of work by a lot of contributors. The process of rethinking and building it began in early April with #802, so it's taken ~6 months—half the public lifetime of Leptos—to land this successfully. And with the exception of some perennials ("make the reactive system Send"), the work laid out in #1147 is essentially done.

So: What's next?

Smaller Things

Polishing 0.5

This section probably goes without saying: we had a lot of beta testing for this release but I also continued making changes throughout that process (lol), so there will be some bugs and rough edges to smooth out in 0.5.1, 0.5.2, etc.

There have also already been PRs adding nice new features!

And some cool ideas unlocked by ongoing development:

  • autotracking async memos (like create_resource, but without explicit dependencies; I made a working demo in a few minutes; see the sketch below)
  • splitting apart the "data serialization from server" and "integrating async stuff" functions of resources, to allow serializing synchronous server data easily
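
For a flavor of the first idea, here's a hypothetical sketch. Everything in it is illustrative: async_memo and fetch_user don't exist; the point is that reading a signal inside the future would register the dependency automatically, with no explicit source argument.

let (id, set_id) = create_signal(0);

// hypothetical API: like create_resource, but with no explicit source list;
// reading `id` inside the future tracks it as a dependency automatically
let user = async_memo(move || async move { fetch_user(id.get()).await });

// compare the current API, which separates the tracked source from the fetcher
let user = create_resource(move || id.get(), |id| async move { fetch_user(id).await });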

Building the Ecosystem (Leptoberfest!)

Of course it will take a little time for our ecosystem libraries to settle down and adapt to 0.5. Many of them have already done this work—kudos! All of these libraries are maintained by volunteers, so please be patient and gracious to them.

The larger the ecosystem of libraries and apps grows, the harder it becomes to make semver-breaking changes, so expect the pace of change to be more sedate this year than it was last year.

The core is pretty stable at this point but there's lots that we can do to make Leptos better by contributing to these ecosystem libraries. To that end, I'm announcing Leptoberfest, a light-hearted community hackathon during the month of October (broadly defined) focusing on supporting our community ecosystem.

Good places to start:

If you're a library maintainer and I missed you — apologies, and ping me and I'll add you.

If you make a contribution to close an issue (bug or feature request!) this fall, please fill out this form to let us know. I'll be publishing a list of all our Leptoberfest contributors at the end of the month (so, some time in early November/when I get to it!). @benwis and I also have a supply of Leptos stickers we can send out as rewards. If you have ideas for other fun rewards, let me know. (We don't have much cash, but we love you!)

On a personal note: As a maintainer, I have found that the feature requests never end. I welcome them, but they can become a distraction from deeper work! I will be slowing down a little in terms of how much time I can dedicate to adding incremental new features to the library. If you see an issue marked feature request, and it's a feature you'd like to see, please consider making a PR!

Bigger Things

Apart from bugs/support/etc., most of my Leptos work this year is going to be exploring two areas for the future. I am a little burnt out after the last year and everything it's entailed. I have also found that the best antidote to burnout, and the resentment that comes with it, is to do the deep, exploratory work that I really find exciting.

Continuing Islands Exploration

The experimental-islands feature included in 0.5 reflects work at the cutting edge of what frontend web frameworks are exploring right now. As it stands, our islands approach is very similar to Astro (before its recent View Transitions support): it allows you to build a traditional server-rendered, multi-page app and pretty seamlessly integrate islands of interactivity.

Incremental Feature Improvements

There are some small improvements that will be easy to add. For example, we can do something very much like Astro's View Transitions approach:

  • add client-side routing for islands apps by fetching subsequent navigations from the server and replacing the HTML document with the new one
  • add animated transitions between the old and new document using the View Transitions API
  • support explicit persistent islands, i.e., islands that you can mark with unique IDs (something like persist:searchbar on the component in the view), which can be copied over from the old to the new document without losing their current state (see the sketch below)
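
For the last item, the markup might look something like this (hypothetical syntax; neither the persist marker nor this behavior is implemented yet):

view! {
    // this island would be copied from the old document into the new one on
    // navigation, keeping its current state (hypothetical `persist` marker)
    <SearchBar persist:searchbar/>
}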

All of these can be done as opt-in incremental improvements to the current paradigm with no breaking changes.

Things I'm Not Sold On

There are other, larger/architectural improvements that can improve performance significantly, and remove the need for manually marking persistent islands. Specifically, the fact that we use a nested router means that, when navigating, we can actually go to the server and ask only for a certain chunk of the page, and swap that HTML out without needing to fetch or replace the rest of the page.

If this sounds like something you could do with HTMX or Hotwire or whatever, then yes, this is like the router automatically knowing how to do something you could set up manually with HTMX.

However, once you start thinking it through (which I have!), you start to realize this raises some real issues. For example, we currently support passing context through the route tree, including through an Outlet. But if a nested route can be refetched independently of the whole page, the parent route won't run. This is great from a performance perspective, but it means you can no longer pass context through the router on the server. And it turns out that basically the entire router is built on context...

This is essentially why React Server Components need to come with a whole cache layer: if you can't provide data via context on the server, you end up making the same API requests at multiple levels of the app, which means you really want to provide a request-local cache, etc., etc.

Essentially we have the opportunity to marginally improve performance at the expense of some pretty big breaking changes to the whole mental model. I'm just not sure it's worth pushing that far in this direction. Needless to say I'll continue watching Solid's progress pretty closely.

I don't anticipate making any of these breaking changes without a lot more input, discussion, thought, and research.

The Most Exciting Thing: Rendering Exploration

Leptos has taken two distinct approaches to rendering.

0.0

The 0.0 renderer was built on pure HTML templating. The view macro then worked like the template macro does now, or like SolidJS, Vue Vapor (forthcoming), or Svelte 5 (forthcoming) work, compiling your view to three things:

  • an HTML <template> element that is created once, then cloned whenever you need to create the view
  • a series of DOM node traversal instructions (.firstChild and .nextSibling) that walked over that cloned tree
  • a series of create_effect calls that set up the reactive system to update those nodes directly
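
For concreteness, here's a minimal sketch of that client-side strategy in web_sys terms (illustrative only, with the appropriate web_sys features enabled; the real macro output also generated the effect wiring):

use web_sys::{HtmlTemplateElement, Node};

fn instantiate(template: &HtmlTemplateElement) -> Node {
    // clone the already-parsed HTML instead of creating elements one by one
    let root: Node = template.content().clone_node_with_deep(true).unwrap();

    // the DOM traversal instructions emitted by the macro
    let p = root.first_child().unwrap();
    let text = p.first_child().unwrap();

    // create_effect calls generated by the macro would then update `text`
    // directly, e.g. text.set_text_content(Some(&count.get().to_string()))
    let _ = text;
    root
}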

In ssr mode, the view macro literally compiled your view to a String.

Pros

  • Super fast element creation
  • Smaller WASM binaries (storing your view as static HTML is smaller than as WASM instructions to build DOM elements)
  • Minimal extra junk in the DOM (hydration IDs, comment markers, etc.)
  • Extremely fast/lightweight server-side rendering

Cons

  • Lots of bugs and odd edge cases that were not handled well. ("Have you tried wrapping it in a <div>?")
  • Relied on view macro, couldn't use a builder syntax or ordinary Rust code
  • Harder to work with dynamic views, fragments, etc.; components returned Element (one HTML element) or Vec<Element> (several of them!) without the nice flexibility of -> impl IntoView.
  • Very limited rust-analyzer/syntax highlighting support in the view macro.

Apart from these issues, which probably could've been fixed incrementally, I've come to think there was a fundamental limitation with this approach. Not only did it mean writing much of the framework's rendering logic in a proc macro, which is really hard to debug; it also meant doing all of that without access to type information, just by transforming a stream of tokens into other tokens.

For example, we had no way of knowing the type of a block included from outside the view:

let b = "Some text.";
view! {
    <p>"before" {b} "and after"</p>
}

The view macro has no idea whether b is a string, or an element, or a component invocation, or (). This caused many of the early bugs and meant that we needed to add additional comment markers and runtime mechanisms to prevent issues.

0.1-0.5

Our current renderer is largely the outstanding work of @jquesada2016 and me collaborating to address those issues. It replaced the old view macro with a more dynamic renderer. There's now a View type that's an enum of different possible views, type erasure by default with -> impl IntoView, much better support for fragments, etc. Most of our growth and success over the last six months was unlocked by this rewrite. The view macro expands to a runtime builder syntax for elements.
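
For instance, these two are equivalent in 0.5 (html::p, attr, and child are the real 0.5 builder API):

use leptos::*;

fn greeting() -> impl IntoView {
    view! { <p id="greeting">"Hello"</p> }
}

// roughly what the macro expands to: a runtime builder chain
fn greeting_builder() -> impl IntoView {
    html::p().attr("id", "greeting").child("Hello")
}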

Pros

  • Much better rust-analyzer and syntax highlighting support in the view
  • Pure Rust builder syntax available
  • Increased runtime flexibility
  • Much more robust/fewer weird panics (believe it or not)

Cons

This approach basically works! If I never changed it, things would be fine. It is kind of hard to maintain, there are some weird edge cases that I've sunk hours into to no avail, it's kind of chunky, the HTML is a little gross; but it's all fine.

So let's rewrite it.

The Future of Rendering

I have been extremely inspired by the work being done on Xilem, and by Raph Levien's talks on it (though, as you'll see below, not by every part of it).

For context: blog post, really good talk that lays out four basic paradigms for UI. Seriously, reading/watching these will make you better at what you do. See also the Xilem repo and the Xilem-inspired Concoct.

The Xilem architecture, in my reading, tries to address two pieces of the UI question. How do we build up a view tree to render stuff (natively or in the DOM)? And how do we drive changes to that view tree? It proposes a statically-typed view tree with changes driven by React-like components, in which event callbacks take a mutable reference to the state of that component.
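
A minimal sketch of that last idea (the types here are stand-ins I'm defining for illustration, not Xilem's actual API):

// stand-in for a Xilem-style button view, defined only to make this compile
struct Button<S> {
    label: String,
    on_click: Box<dyn Fn(&mut S)>,
}

fn button<S>(label: String, on_click: impl Fn(&mut S) + 'static) -> Button<S> {
    Button { label, on_click: Box::new(on_click) }
}

struct Counter {
    count: i32,
}

// the framework calls this after each event to rebuild the view tree, then
// diffs it against the previous tree: component-grained, React-like updates
fn counter_view(state: &Counter) -> Button<Counter> {
    button(format!("Count: {}", state.count), |state: &mut Counter| {
        state.count += 1;
    })
}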

The idea of a statically-typed view tree similar to SwiftUI is just so perfectly suited to Rust and the way its trait composition works that it blew my mind. Xilem is built on a trait with build and rebuild functions, and an associated State type that retains state. Here's an example:

impl<'a> Render for &'a str {
    // `Text` is a DOM text node (web_sys::Text); `State` retains it plus the
    // previously rendered string, for diffing on rebuild
    type State = (Text, &'a str);

    fn build(self) -> Self::State {
        let node = document().create_text_node(self);
        (node, self)
    }

    fn rebuild(self, state: &mut Self::State) {
        let (node, prev) = state;
        if &self != prev {
            node.set_data(self);
            *prev = self;
        }
    }
}

impl<A, B, C> Render for (A, B, C)
where
    A: Render,
    B: Render,
    C: Render,
{
    type State = (A::State, B::State, C::State);
    fn build(self) -> Self::State {
        let (a, b, c) = self;
        (a.build(), b.build(), c.build())
    }
    fn rebuild(self, state: &mut Self::State) {
        let (a, b, c) = self;
        let (view_a, view_b, view_c) = state;
        a.rebuild(view_a);
        b.rebuild(view_b);
        c.rebuild(view_c)
    }
}

If you don't get it, it's okay. You just haven't spent as much time mucking around in the renderer as I have. Building this all up through trait composition, and through the composition of smaller parts, is freaking amazing.
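
As a tiny usage example of the two impls above:

// a tuple of views is itself a view, so nesting composes statically
let mut state = ("Hello, ", "world", "!").build();

// rebuilding diffs against the retained state: only the one text node whose
// string actually changed gets touched
("Hello, ", "Leptos", "!").rebuild(&mut state);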

But the use of component-grained reactivity is just fundamentally uninteresting to me.

I played around a little with a Xilem-like renderer and came away with mixed results: I was able to create a really tiny framework with an Elm state architecture and very small WASM binary sizes. But when I tried to build something based on Leptos reactivity and the same model, I ended up with something very similar to Leptos in binary size and performance. Not worth a rewrite.

Then it hit me. The fundamental problem with 0.0 was that, when we were walking the DOM, we were generating that DOM walk in a proc macro: we didn't know anything about the types of the view. But if we use trait composition, we do. Using traits means that we can write the whole renderer in plain Rust, with almost nothing in the view macro except a very simple expansion. And we can drive the hydration walk by doing it at a time when we actually understand what types we're dealing with, only advancing the cursor when we hit actual text or an actual element!

impl<'a> RenderHtml for &'a str {
    // (associated items and the generics over the renderer elided for brevity)

    fn to_html(&self, buf: &mut String, position: &PositionState) {
        // add a comment node to separate from previous sibling, if any
        if matches!(position.get(), Position::NextChild | Position::LastChild) {
            buf.push_str("<!>")
        }
        buf.push_str(self);
    }

    fn hydrate<const FROM_SERVER: bool>(
        self,
        cursor: &Cursor,
        position: &PositionState,
    ) -> Self::State {
        // only advance the cursor when we hit actual text or an element
        if position.get() == Position::FirstChild {
            cursor.child();
        } else {
            cursor.sibling();
        }

        let node = cursor.current();

        /* some other stuff that's not important! */

        (node, self)
    }
}

Again, if it's not clear why this is amazing, no worries. Take my word for it: it is really, really good.

It's also possible to make the view rendering library generic over the rendering layer, unlocking universal rendering or custom renderers (#1743) and allowing for the creation of a TestingDom implementation that allows you to run native cargo test tests of components without a headless browser, a future leptos-native, or whatever. And the rendering library can actually be totally detached from the reactivity layer: the renderer doesn't care what calls the rebuild() function, so the Leptos-specific functionality is very much separable from the rest.
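
A minimal sketch of what "generic over the rendering layer" could mean (names are illustrative, not the actual tachys API):

// each backend supplies its own node type and primitive operations
pub trait Renderer {
    type TextNode;

    fn create_text_node(data: &str) -> Self::TextNode;
    fn set_text(node: &mut Self::TextNode, data: &str);
}

// a test backend: plain Rust values instead of web_sys, so component tests
// can run under plain `cargo test` with no browser at all
pub struct TestingDom;

impl Renderer for TestingDom {
    type TextNode = String;

    fn create_text_node(data: &str) -> String {
        data.to_string()
    }

    fn set_text(node: &mut String, data: &str) {
        *node = data.to_string();
    }
}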

Pros

  • Much more "pure Rust" code with normal rules of mutability, ownership, etc., instead of interior mutability and Rc everywhere.
  • Still supports a native Rust syntax
  • Smaller WASM binary sizes (about a 20% reduction in the examples I've built so far)
  • Smaller/cleaner HTML files (many, many fewer hydration IDs and comments)
  • Much faster SSR (my current benchmarks are about 4-5x faster than current Leptos)
  • Faster hydration due to DOM walk
  • More maintainable than current leptos_dom
  • More testable than current leptos_dom
  • Removes a bunch of the view macro's logic and the server optimizations
  • Significantly "smooths out" the worst case of performance: i.e., "re-rendering" a whole chunk of the view inside a dynamic move || block goes from being "this is bad, recreates the whole DOM" to "not ideal but kind of fine, efficiently diffs with fewer allocations than a VDOM"

Cons

  • ???

My goal is for this to be a drop-in replacement for most use cases: i.e., if you're just using the view macro to build components, this new renderer should not require you to rewrite your code. If you're doing fancier stuff with custom IntoView implementations and so on, there will of course be some changes.

As you can tell, I'm very excited about work in this direction. It has been giving me a sense of energy and excitement about the project and its future. None of it is even close to feature parity with current Leptos, let alone ready for production. But I have built enough of it that I'm convinced that it's the next step.

I really believe in building open source in the open, so I'm making all this exploration public at https://github.com/gbj/tachys. Feel free to follow along and check things out.

@gbj gbj pinned this issue Oct 2, 2023
@gbj gbj added the future label Oct 2, 2023

birkskyum commented Oct 3, 2023

Regarding opening up for a custom (native) renderer on top of leptos_reactive: I noticed that's what e.g. https://github.com/lapce/floem/tree/main/reactive is doing, and it would be cool to see the core of leptos flexible enough for a project like that to build on top of leptos with a custom renderer, rather than forking it and replacing all the DOM-specific code, which appears to be the current situation.


gbj commented Oct 3, 2023

> Regarding opening up for a custom (native) renderer on top of leptos_reactive: I noticed that's what e.g. https://github.com/lapce/floem/tree/main/reactive is doing, and it would be cool to see the core of leptos flexible enough for a project like that to build on top of leptos with a custom renderer, rather than forking it and replacing all the DOM-specific code, which appears to be the current situation.

Yep I've seen it. They'd previously been using Leptos for reactivity but decided to go their own way for 0.5 because they were using Scope in a pretty important way, IIRC.

But yeah, if you think of the framework stack as being split into 1) the change detection/reactive system, 2) the rendering library, and 3) the actual renderer (DOM or native UI toolkit), then Leptos and Floem were sharing 1 with distinct 2 and 3. My goal here is actually to make the three completely modular, so that you can build Leptos web (Leptos reactivity/Tachys/DOM) or Leptos native (Leptos reactivity/Tachys/some native toolkit) or your own framework (X/Tachys/Y), with the shared "rendering library" layer quite loosely coupled.

ChristopherBiscardi commented

Roadmap looks amazing (as is all the work up to this point!) and I'm especially happy you're finding time to work in the spaces that give you energy.


gbj commented Dec 15, 2023

Just to provide an update on the roadmap toward 0.6:

This work has basically reached what I'd describe as the 80/20 point: I have reimplemented about 80% of Leptos 0.5, and the remaining 20% of the work will take about 80% of the time 😄

In many ways, the outcome looks basically identical to what you'd expect from a Leptos app right now. For example, this afternoon I just finished implementing the new form of the #[server] macro (huge shout out to @benwis for work on the server fn rewrite), and you can find an Axum example here.

There's nothing super surprising:

    // dropping create_ prefixes in several places in line with Rust idiom
    // compare `signal(0)` to `channel(0)` for this pattern
    let (count, set_count) = signal(0);

    // all looks the same
    view! {
        <button
            on:click=move |_| spawn_local(async move {
                let new_count = server_fn_1(count.get()).await.expect("server fn failed");
                set_count.set(new_count);
            })
        >
            "JSON " {move || count.get()}
        </button>
    }

I've seen pretty significant wins in terms of binary size, memory use, and performance, without having to break much in terms of APIs. For example, with comparable build settings the hackernews example drops from 584 KB of WASM to 200 KB or so.

The performance is also "smoothed out" quite a bit in the case of less-than-ideal user code. For example, in the current version

move || view! { <p>{count.get()}</p> }

will create a new <p> and text node every time count changes, which is very expensive. In 0.6, due to the glories of trait composition, this is actually identical to the preferred

view! { <p>{move || count.get()}</p> }

And in fact this composes fairly well. So, for example, if you have multiple signals changing at once, then the "coarser-grained" approach actually performs slightly better than the finer-grained one:

// this used to be really bad, but now it's quite good!
move || view! { <p>{count1.get()} {count2.get()}</p> }

There are also some nice treats and wins, like typed attributes: you can't accidentally type clas="bg-red-500 font-bold tw-am-i-doing-this-right" and spend thirty minutes trying to fix Tailwind, because clas won't compile (you meant to type class).

What's Left

Most of this work has been happening in other repos (tachys and server_fns) to avoid having to deal with massive merge conflicts and so on.

On one level, the technical steps from here are fairly straightforward:

  1. open a new branch in the leptos repo
  2. copy and paste the tachys equivalents into the leptos directories
  3. port docs and doctests over from the 0.5 to the 0.6 versions

Other pieces require a bit more work:

  4. smoothing out rough edges and APIs, and building out the Suspense/Await/Transition/ErrorBoundary components for the new version, islands, etc.
  5. finishing some work on the new router w.r.t. nested routing, and building out the Route components etc.

And some are just polish:

  6. dogfooding: using 0.6 in the Leptos website and blog, and making sure the current examples all work or have clear migration paths
  7. final polish and getting ready for release

Open Questions in API Design

There are also a few open questions. I've been trying to lean more into some native Rust idioms—not for the sake of being more idiomatic, but because there are nice performance, simplicity, and maintainability benefits. For example, I've been reworking both async and error handling a bit.

For async, we'll probably still provide Suspense/Transition/Await components, but you can also use async blocks directly. (Resources now implement IntoFuture; they still parallelize data loading and enable streaming data from the server.)

let value = Resource::new(/* something */);
view! {
  <p>{
    async {
      value.await.unwrap_or_default()
    }
    // Suspense trait implemented on Futures provides .suspend()
    .suspend()
  }</p>
}

Likewise, I'm playing with a similar pattern for error handling:

move || {
    view! {
        <pre>
            "f32: " {value.get().parse::<f32>()} "\n"
            "u32: " {value.get().parse::<u32>()}
        </pre>
    }
    .catch(|err| {
        view! {
            <pre style="border: 1px solid red; color: red">
                "error"
                //{err.to_string()}
            </pre>
        }
    })
}

Again, this can have a nice component wrapper but doesn't require it.

These are built on the idea that async and try (which doesn't exist yet, but you'll see what I mean) have fundamentally the same semantics: if I hit an unresolved Future, or if I hit an Err, fall back. Implementing these does mean a compromise with fine-grained reactivity at some level though -- you'll notice that the .catch() example is not fine-grained. In my current implementation using a move || value.get().parse::<f32>() would actually always count as "Ok". (The diffing required in the coarser-grained version is statically typed and so cheap as to be irrelevant in many cases -- in this case, for example, it's faster than the fine-grained version -- but that's not necessarily true in every case, for example if it needs to rerun a component.)

I should probably write something up in a bit more detail about some of those decisions as we get closer. In any case, it's definitely possible to implement the same old <ErrorBoundary/> approach if that's preferred.

So: Yeah, 0.5 is in pretty good shape right now, and I think things are relatively stable on that level. I'm trying to take into account as many of the less-tractable 0.5 issues as possible in designing for the future, and the new release is making good progress. I wouldn't want to put a timeline on it, but I'm very excited for the future ahead.


benwis commented Dec 15, 2023

I'll jump in here and talk a little bit about the server fn rewrite, which I am very excited about.

Current Server Fns

Currently, server fns are based around Serde's Serialize and Deserialize traits, and feature a fixed number of encoding types (Url, Json, Cbor) built into the macro. Their inputs and outputs have been required to implement Serialize/Deserialize. This has a few limitations, which I'll talk about below.

New Server Fns

These will be based around each framework's Request and Response types (http::Request and http::Response for most things; HttpRequest/HttpResponse for Actix) and consist of four traits that are implemented for an Encoding and a data type: FromReq/IntoReq and FromRes/IntoRes. Here are the benefits of this approach.

Looks pretty similar to the old one, right?

#[server(endpoint = "/my_server_fn", input = GetUrl)]
pub async fn my_server_fn(value: i32) -> Result<i32, ServerFnError> {
    println!("on server");
    Ok(value * 2)
}
  1. Streaming/Multipart Server Functions
    Because we are now handling a Request and a Response directly, it becomes possible to define server fns around those types. Things like file upload and streaming have historically been impossible because of the restrictions around Body and the need for serde's Serialize/Deserialize. No more!

  2. Directly Use Axum Extractors
    We've gotten pretty close to allowing people to use Axum extractors with our handy extract() function, but now that we're opening up the entire process, we should be able to call them directly on the Request (see the sketch after the codec example below). This opens up a lot of possibilities and leverages the entire Axum ecosystem.

  3. User-Definable Custom Encodings
    Adding additional encodings to server functions (like Cbor) used to require modifying the leptos server macro, which isn't easy to do. Now anyone can implement a codec for their desired format simply by defining a type, implementing the four traits on it, and passing it to the macro. Below you can find the ones I wrote for the URL-encoded GET input and the JSON output. I'm excited to see what people come up with!

/// Pass arguments as a URL-encoded query string of a `GET` request.
pub struct GetUrl;

/// Pass arguments as the URL-encoded body of a `POST` request.
pub struct PostUrl;

impl Encoding for GetUrl {
    const CONTENT_TYPE: &'static str = "application/x-www-form-urlencoded";
}

impl<T, Request> IntoReq<Request, GetUrl> for T
where
    Request: ClientReq,
    T: Serialize + Send,
{
    fn into_req(self, path: &str) -> Result<Request, ServerFnError> {
        let data =
            serde_qs::to_string(&self).map_err(|e| ServerFnError::Serialization(e.to_string()))?;
        // send the serialized arguments as the query string of a GET request
        Request::try_new_get(path, GetUrl::CONTENT_TYPE, data)
    }
}

impl<T, Request> FromReq<Request, GetUrl> for T
where
    Request: Req + Send + 'static,
    T: DeserializeOwned,
{
    async fn from_req(req: Request) -> Result<Self, ServerFnError> {
        let string_data = req.as_query().unwrap_or_default();
        let args = serde_qs::from_str::<Self>(string_data)
            .map_err(|e| ServerFnError::Args(e.to_string()))?;
        Ok(args)
    }
}

impl<T, Response> IntoRes<Response, Json> for T
where
    Response: Res,
    T: Serialize + Send,
{
    async fn into_res(self) -> Result<Response, ServerFnError> {
        let data = serde_json::to_string(&self)
            .map_err(|e| ServerFnError::Serialization(e.to_string()))?;
        Response::try_from_string(Json::CONTENT_TYPE, data)
    }
}
impl<T, Response> FromRes<Response, Json> for T
where
    Response: ClientRes + Send,
    T: DeserializeOwned + Send,
{
    async fn from_res(res: Response) -> Result<Self, ServerFnError> {
        let data = res.try_into_string().await?;
        serde_json::from_str(&data).map_err(|e| ServerFnError::Deserialization(e.to_string()))
    }
}
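
Coming back to point 2: here's a hypothetical sketch of what calling an extractor directly could look like (the extract() helper and exact signatures here are illustrative, not the final API):

#[server(endpoint = "/which_host", input = GetUrl)]
pub async fn which_host() -> Result<String, ServerFnError> {
    use axum::extract::Host;

    // because the server fn now works against a real Request type, an Axum
    // extractor can run on it directly (illustrative `extract` helper)
    let Host(host): Host = leptos_axum::extract().await?;
    Ok(host)
}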

  4. Per-Server-Fn Axum Middleware
    Dioxus has had a feature for a while where you can annotate a server fn with an Axum middleware, and it will automatically be applied only to that fn. (You may or may not know that Dioxus shares our server fn implementation.) We've been working with the very helpful @ealmloff to bring that over to Leptos.
#[server(endpoint = "/my_server_fn", input = GetUrl)]
// Add a timeout middleware to the server function that will return an error if the function takes longer than 1 second to execute
#[middleware(tower_http::timeout::TimeoutLayer::new(std::time::Duration::from_secs(1)))]
pub async fn timeout() -> Result<(), ServerFnError> {
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    Ok(())
}
  5. Return Your Own Error Types!
    Previously, if you wanted to return your own error type, you had two choices: either convert the error to a String and stick it in ServerFnError, or return a Result<Result<T, E>, ServerFnError>. I'm happy to say that now, with the power of autoderef specialization, default generics, and a handy macro, you have a third option: Result<T, ServerFnError<MyAppError>>, where MyAppError is actually your error type.
// lightly cleaned-up sketch: the derive and the `Ok`-wrapping are added here
// so the example compiles; `?` works because server_fn provides a blanket
// conversion from your error type into ServerFnError<MyAppError>
#[derive(Debug, Clone, thiserror::Error)]
pub enum MyAppError {
    #[error("An error occurred")]
    Errored,
}

fn add(val1: i32, val2: i32) -> Result<i32, MyAppError> {
    Ok(val1 + val2)
}

#[server(endpoint = "/my_server_fn", input = GetUrl)]
pub async fn my_server_fn(val1: i32, val2: i32) -> Result<i32, ServerFnError<MyAppError>> {
    Ok(add(val1, val2)?)
}


Tehnix commented Feb 20, 2024

@gbj Since you’re tackling the split-up and refactoring of the new rendering system (which sounds amazing btw), I was wondering if you had thought about how/if functionality like HMR/HSR (Hot Module/State Reloading) would be an integrated part of the reactivity/rendering system, or if it’s something that can be built on top?

From reading how Perseus achieved HSR for their framework (based on Sycamore), it sounded like having it deeply embedded into the reactivity machinery (so it can seamlessly serialize/deserialize things, if that's the way one goes) could be beneficial. They described their approach here: https://framesurge.sh/perseus/en-US/docs/0.4.x/state/freezing-thawing/.

I’m mostly interested to hear if you had any thoughts around where you think such functionality would best live in the leptos stack/ecosystem :) Thought it would be relevant to bring up in case it influenced any design choices while we are looking to replace things anyways.


gbj commented Feb 22, 2024

> Since you’re tackling the split-up and refactoring of the new rendering system (which sounds amazing btw), I was wondering if you had thought about how/if functionality like HMR/HSR (Hot Module/State Reloading) would be an integrated part of the reactivity/rendering system, or if it’s something that can be built on top?

> From reading how Perseus achieved HSR for their framework (based on Sycamore), it sounded like having it deeply embedded into the reactivity machinery (so it can seamlessly serialize/deserialize things, if that’s the way one goes) could be beneficial.

@Tehnix This might be worth discussing at more length in a separate issue as well.

Reading through the Perseus docs on this, my takeaways are the following:

  1. looks like primarily a DX/development-mode feature, so you can recompile + reload the page without losing state
  2. in Perseus's context, it's tied to the notion of a single blob of reactive state per page, with named fields (they even emphasize "not using rogue Signals that aren't part of your page state")
  3. as a result for them it lives at the metaframework level, and is one of the benefits from the tradeoff between route-level/page-level state and reactivity -- in exchange for giving up granular state, you get the benefit of hot reloading without losing page state

The big benefit of HMR/state preservation in a JS world comes from the 0ms compile times of JS, which means you can update a page and immediately load the new data. Not so with Rust, so this is mostly "when my app reloads 10 seconds later after recompiling, it restores the same page state."

I have tended toward a more primitive-oriented approach (i.e., building page state up through composing signals and components) rather than the page-level state approach of Perseus, which I think is similar to the NextJS pages directory approach. So this wouldn't work quite as well... i.e., we could save the state of which signals were created in which order, but we don't have an equivalent struct with named fields to serialize/deserialize, so it would likely glitch much more often. (e.g., switching the order of two create_signal calls would break it)

It would certainly be possible to implement at a fairly low level. I'm not sure whether there are real benefits.

If you want to discuss further I'd say open a new issue and we can link to these comments as a starting point.


Tehnix commented Feb 28, 2024

> If you want to discuss further I'd say open a new issue and we can link to these comments as a starting point.

Good idea, I've opened #2379 for this :)


sebadob commented Apr 29, 2024

Not sure if this would be the right place for this, but I can't open an issue in the leptos 0.7 preview playground.

I saw that the nonce feature is not yet implemented, which is of course not an issue.
But that got me thinking about a better way to improve security via CSP, which might make sense to consider while it's still early for v0.7.

Usually, nonces are not the best way of hardening a CSP in terms of performance. It would be way less work in the backend if we could calculate sha256 hashes of inline scripts at compile time. Then we could get rid of:

  1. generating a crypto-safe random string with each single request
  2. inserting the nonce into the HTML in all those places

To achieve this, just as an idea: would it be possible to have only a single inline <script> for the initial hydration inside the HTML, instead of multiple ones? This could maybe load all the additional necessary data (I don't know if that's technically feasible). If this single inline script always looked the same (no changing IDs and such), its hash could be calculated at compile time, and the security boost via CSP would cost almost nothing at all. It could then look like:

Content-Security-Policy: script-src 'self' 'strict-dynamic' 'wasm-unsafe-eval' 'sha256-base64EncodedHash'

... and it would be exactly the same for every request, which would make scaling a lot better and more efficient.

  • the 'sha256-base64EncodedHash' would be the hash of the initial inline hydration script
  • the 'wasm-unsafe-eval' is just needed to make wasm work
  • the 'strict-dynamic' would automatically allow all resources that are loaded via the inline hydration script (no need for additional hashing)
  • all other resources would fall under the self rule
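
For reference, the hashing half of this is cheap; here's a minimal sketch assuming the sha2 and base64 crates (when and where this runs at build time is the open design question):

use base64::{engine::general_purpose::STANDARD, Engine as _};
use sha2::{Digest, Sha256};

/// Build the CSP header value for a known, fixed inline script. If the
/// script's bytes never change, this could run once at build time.
fn csp_for_inline_script(script: &str) -> String {
    let hash = Sha256::digest(script.as_bytes());
    format!(
        "script-src 'self' 'strict-dynamic' 'wasm-unsafe-eval' 'sha256-{}'",
        STANDARD.encode(hash.as_slice())
    )
}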


gbj commented Apr 29, 2024

@sebadob Thanks, I'll enable issues on that repo too as it does make sense as a place for this sort of discussion.

Basically: The reason we use multiple inline scripts is to enable the streaming of resource values and Suspense fragments as part of the initial HTML response. I don't think it is possible to hash these at compile time, because by definition they are only known at runtime, if for example the resource is loading information from the DB.

(With some additional tooling we could support hashing for the actual hydration script, but not these additional scripts that are part of the streaming response.)


sebadob commented Apr 29, 2024

@gbj Thanks!

Yes, that makes sense. I'll think a bit more about how this could be solved more efficiently.
