
Solved by W3C EXI? #80

Open
ghost opened this issue Feb 8, 2021 · 3 comments


ghost commented Feb 8, 2021

A W3C WG was working on EXI for XML: https://www.w3.org/XML/EXI
Their primer document: https://www.w3.org/TR/exi-primer

According to their publications, Canonical EXI became a W3C Recommendation in 2018.

The WG's charter ended in 2018, but had they succeeded, they would have continued past XML and attempted to create binary serializations of CSS and JS; see: https://www.w3.org/XML/EXI/outreach/2016/11/css.html

EXIficient may already have supported the JS serialization, which makes me believe that they did in fact create said binary serialization formats.

Wouldn't their goals and path provide a solution to the problems that this ES proposal attempts to solve?

Unfortunately, there seems to have been little to no web adoption of this technology. Perhaps an alternative solution would be to push for browser support and encourage tooling to make use of it?


HKalbasi commented Feb 8, 2021

A huge part of this proposal is about parsing JS on demand rather than at startup, which EXI has nothing to do with. But I think it could be used for the binary serialization part.


ghost commented Feb 8, 2021

From this proposal's readme:

Performance of applications on the web platform is becoming increasingly bottlenecked by startup (load) time. Larger amounts of JS code are transferred over the wire by more sophisticated web properties. While caching helps, these properties regularly release new code, and cold load times are very important.

The positioning of this quote helps emphasize that quick parsing at startup is, in fact, a major goal here; therefore I would presume that anything smaller, more organized, and pre-parsed would help. EXI for JavaScript was intended to solve this issue.

The real problem might be having three separate formats: plain text, this ES Binary AST, and W3C EXI's serialization format.
If we can make use of an already existing, standardized format (I'm still not sure whether it was ever developed) that was intended for use on the web, then all that needs to be done is to put it into use.

The major win I see with EXI is that it lets developers send whole web applications in the format, whereas this proposal targets ECMAScript alone. Personally, as an XHTML web developer, I would be in a position to support the use of a binary serialization of XML on the web, though others may not be. Still, I believe that, as the proposal claims, most developers are already used to relying on tooling to generate web pages and code. Many companies and other groups use CSS and HTML generators or preprocessors, as they do for JavaScript, for the same reasons: to ease and simplify web development, to improve developer experience, and to shorten development times.

Regardless, the formats do seem to have subtly different goals, as the groups working on them appear to come from different perspectives. So if the authors of this proposal believe EXI does not solve the issues caused by sending massive, unparsed, textual JavaScript, they should ideally elaborate on why it doesn't; in that case it might make more sense to create something entirely new.


kannanvijayan commented Feb 8, 2021

I initially proposed the binary-ast project and worked with @syg to scope it and define the semantics, so let me provide some of the motivating reasoning behind the project.

The goal was to turn JavaScript parsing into a no-op where possible, and into something as close to a "load the abstract syntax tree into memory" operation as possible where the code needed to be analyzed. The context we were working in was that JavaScript source and parsing semantics demanded that certain errors be raised at load time, which requires a minimal "syntax parse" of the JS source regardless of whether the code is ever executed. This was leading to slow load times on webpages, which were increasingly loading large amounts of JS (up to megabytes for a single page, uncompressed).

By converting the encoded format into one that directly models the AST, encoding it as a length-prefixed binary, and making some small adjustments to the semantic requirements behind load-time error reporting, we hoped to make it possible for browsers to literally ignore parts of the JS source they did not need to execute - i.e. to turn that into an order-zero operation.

This would turn JS load-time parsing ("syntax parsing" in JS engine parlance) from something that was at best linear in the size of the full source into something that depends more on the size of the executed source, which is typically a small fraction of the total code loaded by the browser.
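The skipping idea described above can be sketched with a toy length-prefixed encoding. To be clear, this is purely illustrative and not the actual Binary AST wire format; the tag values and layout here are invented for the example:

```javascript
// Toy encoding (hypothetical, not the real Binary AST format):
// each node is [1-byte tag][4-byte little-endian length][payload].
// Because the byte length comes first, a decoder can skip an entire
// subtree (e.g. an uncalled function body) in O(1) without parsing it.

function encodeNode(tag, payloadBytes) {
  const buf = new Uint8Array(5 + payloadBytes.length);
  buf[0] = tag; // node kind
  new DataView(buf.buffer).setUint32(1, payloadBytes.length, true); // length prefix
  buf.set(payloadBytes, 5); // payload
  return buf;
}

function skipNode(buf, offset) {
  // Return the offset just past the node at `offset`, without decoding it.
  const len = new DataView(buf.buffer, buf.byteOffset).getUint32(offset + 1, true);
  return offset + 5 + len;
}

// A "function body" node whose contents are never inspected unless called.
const body = encodeNode(0x01, new TextEncoder().encode("return 42;"));
console.log(skipNode(body, 0) === body.length); // true
```

The key contrast with textual JS is that a text parser must scan every byte of a function body just to find its closing brace (and to raise mandated early errors), whereas here the length prefix makes "find the end of this subtree" a constant-time jump.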
