feat: Liferay Dev Server #635

Merged
merged 29 commits into liferay:master on Oct 8, 2021

Conversation

jbalsas
Contributor

@jbalsas jbalsas commented Aug 31, 2021

As the title indicates, this is a quick and naive attempt at building a Liferay Dev Server that I hope can serve as inspiration for the team to pick this up and build a real one!!

Screen.Recording.2021-08-31.at.10.29.40.mov

How to test it

  • Clone the repo
  • Run yarn install && yarn build from the dev-server package
  • Run dev server with
node --experimental-specifier-resolution=node ${PATH_TO_REPO}/projects/npm-tools/packages/dev-server/bin/liferay-dev-server.js ${PATH_TO_LIFERAY_PORTAL_SOURCE} -v

Features

  • Command line
  • Sets up a server at http://localhost:3000
  • Proxies all unknown requests to a given Liferay Portal instance (http://localhost:8080 by default)
  • Attempts to map requests of the form /o/moduleName/fileName or /o/js/resolved-module/moduleName/fileName to their output on disk (see the sketch after this list)
  • Watches for changes and triggers yarn build on the specific projects
  • Sets up a live-reload mechanism to reload automatically when the build finishes
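
For illustration, here's a minimal sketch of the proxy-plus-mapping idea (the `http-proxy` dependency and the prebuilt `moduleMap` are placeholders for this sketch, not necessarily what the package actually does):

```js
// Sketch: serve mapped module files from disk, proxy everything else to Portal.
const fs = require('fs');
const http = require('http');
const path = require('path');

const httpProxy = require('http-proxy'); // assumed dependency for this sketch

// moduleName -> absolute output directory, built by globbing liferay-portal
// (population omitted here).
const moduleMap = new Map();

const proxy = httpProxy.createProxyServer({target: 'http://localhost:8080'});

function resolveFromDisk(url) {
	// Matches /o/<moduleName>/<file> and /o/js/resolved-module/<moduleName>/<file>
	const match = url.match(/^\/o\/(?:js\/resolved-module\/)?([^/]+)\/(.+)$/);

	if (!match) {
		return null;
	}

	const [, moduleName, file] = match;
	const outputDir = moduleMap.get(moduleName);

	return outputDir ? path.join(outputDir, file) : null;
}

http.createServer((request, response) => {
	const filePath = resolveFromDisk(request.url);

	if (filePath && fs.existsSync(filePath)) {
		fs.createReadStream(filePath).pipe(response);
	}
	else {
		proxy.web(request, response);
	}
}).listen(3000);
```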

To-Do

I only tinkered with this for about a day yesterday and then cleaned it up so it was easy to dissect. Expect a ton of rough edges and unaccounted-for scenarios like:

  • Handle SCSS
  • Check other possible load patterns and verify modules are mapped correctly to their build output
  • Apparently it only works with Chrome Devtools open... figure out why? 😂
  • Error Handling
  • Connection Handling
  • Improve Build performance
  • Probably others...

About Performance

Had we had this in place, we would surely have noticed that our build step is not as fast as we'd like it to be. We put a lot of the blame on Gradle and deployment, but the fact is that building some modules takes almost a minute. This should be our cue to start looking at that.

One thing I had in mind was to trigger some type of incremental build by putting just the files that changed inside a module into a virtual folder, rather than forcing us to recompile the whole thing. I'll leave that for another experiment :)

Anyways, this is here for your consideration and enjoyment... and hopefully so someone can build this for realz!! 👊

PS: @nhpatt, I know I said I wouldn't do it, but I did

notsorry

@jbalsas jbalsas changed the title from [POC] Naive Liferay Dev Server to feat: [POC] Naive Liferay Dev Server Aug 31, 2021
@bryceosterhaus
Member

Sorry, I thought I responded to this but I must have just done so in my head. Glad you opened this, though, and started to move the conversation. I think my main concern is rolling our own implementation of this. I think we should aim to utilize existing technologies that already have done so much work in this space, such as webpack. We may need to write some helpers specifically for deploying resources to the right place, but I think we will have better performance if we use something that already exists. I assume you have already looked into that a bit, but any idea what some of the main hurdles of that would be?

I remember back with Loop and AC, we rolled our own sort of dev server so that we didn't have to deploy every time, but we had made a very specific tool for our use case. We later switched to using webpack and that seemed to clean it up quite a bit. However, our use case didn't leverage the AMD loader, so we would need to make sure we are building in a similar manner to how we build when we actually deploy the module.

@jbalsas
Contributor Author

jbalsas commented Sep 8, 2021

Hey @bryceosterhaus!

I think we should aim to utilize existing technologies that already have done so much work in this space, such as webpack

I wholeheartedly agree.

In this case, however, with 571 lines (just 278 of code) this feels pretty lean and close to something that can provide real immediate value to our developers.

We may need to write some helpers specifically for deploying resources to the right place, but I think we will have better performance if we use something that already exists. I assume you have already looked into that a bit, but any idea what some of the main hurdles of that would be?

Using webpack directly was always my go-to approach. Sadly, 5+ years later, we still find ourselves without anything at all for what is really a simple proxy server with some custom resolution logic.

When I thought about this again, I got some bundler v3 vibes... and we all know where that took us :(

I'd be happy to explore the webpack approach in more detail, but it is my expectation that we'd end up jumping through hoops, trying to bend webpack to work with our setup, or worse, having to rewrite our whole build process so we could benefit from its features. This last part is not bad in itself. We could choose to change our approach so we could reap some of the benefits. My main concern is that we would only be delivering yet another promise to our developers.

I tried to start migrating modules to webpack some years ago with frontend-js-web but that eventually didn't pay off and DXP has certainly moved in a different direction.

Long term, I think our bet should be a combination of module scripts and import maps. I haven't verified this, but I think we could reduce our build pipeline and generate different import maps for prod/dev to reduce the need for deployments.
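
As a purely hypothetical illustration of the prod/dev import map idea (module names and URLs below are invented):

```js
// Hypothetical: generate an import map whose entries point at the dev server
// in development and at the regular Portal origin in production.
const EXAMPLE_MODULES = ['frontend-js-web', 'frontend-js-react-web'];

function buildImportMap(mode) {
	const base =
		mode === 'dev' ? 'http://localhost:3000/o' : 'http://localhost:8080/o';

	return {
		imports: Object.fromEntries(
			EXAMPLE_MODULES.map((name) => [name, `${base}/${name}/index.js`])
		),
	};
}

// The result would be inlined by the page in a <script type="importmap"> tag.
console.log(JSON.stringify(buildImportMap(process.env.NODE_ENV), null, '\t'));
```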

Long story short. I think we should look at how we can deliver value to our developers right away. And then, start planning what we want Frontend Developer Experience within Liferay to look like. Doing the latter first is a risk I don't think we should take.

@jbalsas
Contributor Author

jbalsas commented Sep 8, 2021

[...] but I think we will have better performance if we use something that already exists [...]

Just as a side note, this performs pretty well and does the absolute minimum. The main thing that doesn't perform is our build process itself.

I played around with incremental builds, and an execution of npm-scripts build has around 5-6 seconds of overhead. Even for a configuration with 0 files to process, npm-scripts build still takes 6 seconds to complete. That's the main bottleneck right now, but it is, in any case:

  • Under our control... we should be able to improve it!
  • Already using custom tooling (babel, tsc, webpack...)

I've seen webpack builds taking quite a bit of time too, so I don't think we're that far off.

@bryceosterhaus
Member

Long story short. I think we should look at how we can deliver value to our developers right away. And then, start planning what we want Frontend Developer Experience within Liferay to look like. Doing the latter first is a risk I don't think we should take.

I like this idea of getting something to devs even if it isn't our perfect ideal. We can always iterate, and so long as we expose it via some simple CLI, we can always change its internals to speed it up.

Just as a side note, this performs pretty well and does the absolute minimum. The main thing that doesn't perform is our build process itself.

Oh good to know. I haven't gotten a chance to test this yet but you're right that it'll probably help us identify some bottlenecks and speed things up. I wasn't fully aware of the difficulties we had with testing this in webpack previously but it sounds like we have enough experience with it to know that it isn't going to be a quick win.

I'll test this out a bit later this week or next and can provide some more feedback. If we really want to get this out quickly and gather feedback, we could potentially try out some sort of "experimental" track, similarly to how FB has this organization for projects they experiment with.

@matuzalemsteles
Member

matuzalemsteles commented Sep 8, 2021

One of the features that significantly improves the Webpack build process is that it knows what needs to be compiled and uses a lot of caching to skip steps. It also has a lot of parallelism if you configure it, and the same goes for babel: you can configure a local cache. You can cut a lot of build time just by setting this up, even for production builds. But even if you have the best configuration and use the latest version a production build will still take a few seconds, but for the development environment, you can build in a few ms if you combine chunks.
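
For reference, the kind of configuration being described looks roughly like this (a generic webpack 5 sketch, not our actual setup):

```js
// Generic example of the caching/parallelism options mentioned above.
module.exports = {
	mode: 'development',

	// Persistent filesystem cache (webpack 5) so unchanged modules are skipped.
	cache: {
		type: 'filesystem',
	},

	module: {
		rules: [
			{
				test: /\.js$/,
				exclude: /node_modules/,
				use: [
					// `thread-loader` runs the following loaders in a worker pool.
					'thread-loader',
					{
						loader: 'babel-loader',
						options: {
							// Babel's own on-disk cache.
							cacheDirectory: true,
						},
					},
				],
			},
		],
	},

	// For production, the minimizer's `parallel` option is another knob.
};
```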

I think if we want to reduce this bottleneck there are many strategies we could attack: multithreading is an option but it increases code complexity, and incremental builds would probably be the fastest strategy. But even if we implement everything we will still have bottlenecks, unless we adopt swc, which will have much higher gains; there is a community movement starting to adopt it in some big projects, like Next.js for example. Unfortunately, even though we've improved a lot, Node.js/JS has limitations.

Also, as we adopt more TypeScript in our codebase the build time tends to increase; tsc is very slow due to its type checks during the build. There is also the problem that we have a phased pipeline that runs different CLIs, and this adds a significant bottleneck even if there are no files to process: the time to start a CLI using Node.js can take 500~1000 ms or much more, and in a conventional build we run tsc, babel and the bundler.

@jbalsas
Contributor Author

jbalsas commented Sep 9, 2021

But even if you have the best configuration and use the latest version a production build will still take a few seconds [...]

Funny thing is, our build process still takes ~5 seconds even if there aren't any files to process (on a quick test) :)

But even if we implement everything we will still have bottlenecks, unless we adopt swc, which will have much higher gains [...]

Moving to swc or esbuild (or other Rust-based tooling) should definitely be on our radar for the future.

I'll test this out a bit later this week or next and can provide some more feedback. If we really want to get this out quickly and gather feedback, we could potentially try out some sort of "experimental" track, similarly to how FB has this organization for projects they experiment with.

We can even make it simpler and just place this directly in liferay-portal so it can be invoked from ant frontend-dev-server or something like that. That way we can simply have some of our devs test it directly (I've heard @markocikos and @carloslancha complain lately about how hard it is to work with frontend-taglib-clay, and maybe we can use that feedback to get something that helps them and others).

[...] it sounds like we have enough experience with it to know that it isn't going to be a quick win [...]

I wouldn't say I have enough experience with it, so feel free to explore the webpack option. I'd be more than happy to ditch this if we find a better way. I definitely don't think (but don't know for sure) that it's going to be a quick win, but I may be wrong ☺️

@matuzalemsteles
Member

Funny thing is, our build process still takes ~5 seconds even if there aren't any files to process (on a quick test) :)

🙈 Yeah, I think this is very much related to running separate processes for each phase of the build. We would be able to shorten it by invoking the Node API of each package instead of running the babel and tsc CLIs (and maybe the bundler, if it has one); we could shave about 1~2 seconds off that time, or more.
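
As an illustration of invoking the Node API instead of the CLI, @babel/core can be called directly (a rough sketch; the preset and paths here are placeholders):

```js
// Instead of spawning `npx babel src --out-dir build`, call @babel/core
// directly and skip the CLI startup cost.
const babel = require('@babel/core');
const fs = require('fs/promises');
const path = require('path');

async function transpileFile(srcFile, outDir) {
	const result = await babel.transformFileAsync(srcFile, {
		presets: ['@babel/preset-env'], // example preset only
	});

	await fs.mkdir(outDir, {recursive: true});
	await fs.writeFile(path.join(outDir, path.basename(srcFile)), result.code);
}

// Usage: transpileFile('src/index.js', 'build/node/packageRunBuild/resources');
```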

Moving to swc or esbuild (or other Rust-based tooling) should definitely be on our radar for the future.

Yes, not that we need to migrate everything now, but we can keep an eye on them; they are evolving very fast in terms of supporting more features.

@bryceosterhaus
Member

We can even make it simpler and just place this directly in liferay-portal

Oh I like this. It would allow us to quickly iterate and see how it goes too. We could simply put it in there but not announce it for teams other than our own. We can test it out within our team first and then go from there.

I would opt for making it invokable via node, and possibly adding it to our package.json at the modules level: yarn run liferay-dev.

@bryceosterhaus
Member

I looked into speeding up our build script briefly. I think we could benefit from calling the Node APIs directly instead of spawning the CLI tasks. Without any js or css files, I was noticing a base time of ~2.25 seconds to run.

So testing the dev-server today I ran into a few issues, none of them major:

  • Unable to sign in on port 3000. Had to go to 8080 first and sign in and then back to 3000
  • Liferay lang keys didn't seem to work consistently but this probably makes sense because the file isn't actually deployed
  • Feels a little slow, or it seems like I need to also manually reload sometimes. Again, not a major issue because it's only a few seconds.
  • The reload alerts can be a little annoying, a few times I get 4-5 stacked.

One simple idea to improve build time is to add some cli flags for liferay-npm-scripts build that dev-server could use to speed it up. Like --sass-only or --js-only.
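
Something along these lines inside the build entry point, for example (hypothetical; `buildJavaScript` and `buildSass` stand in for the existing steps):

```js
// Hypothetical handling of the suggested --js-only / --sass-only flags.
// The two build functions are placeholders for the existing build steps.
async function buildJavaScript() {/* existing JS build step */}
async function buildSass() {/* existing Sass build step */}

const args = process.argv.slice(2);

const jsOnly = args.includes('--js-only');
const sassOnly = args.includes('--sass-only');

(async () => {
	if (!sassOnly) {
		await buildJavaScript();
	}

	if (!jsOnly) {
		await buildSass();
	}
})();
```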

Overall though, for only ~400 lines of code, this works surprisingly well and is a big improvement over deploying.

For next steps, I think we just go with it and label it as something more experimental, and we can gather data on its usefulness. I think we publish this as-is under @liferay/dev-server at 1.0.0-experimental and we can have people use it via npx. Steps to use would be: navigate to DXP and run liferay-dev-server.

Anyone opposed to just going ahead with this? Eventually I think we end up putting this under @liferay/cli and you can run it with liferay dev

@bryceosterhaus
Member

I pushed some commits so that you can run the dev-server bin without having to pass the path to liferay-portal. You just need to run it from within liferay-portal (it uses the cwd) instead of passing the path.

@bryceosterhaus
Member

Also just pushed up support for granular builds; this way we only compile one or the other, and not both JS and CSS, every time.

@izaera
Member

izaera commented Oct 7, 2021

Regarding speeding up builds, one of the biggest issues is probably bundler v2, which takes quite a lot of time to run when parsing the npm dependencies (as opposed to the project's source code). Fortunately, as deps are immutable, once it processes them and puts them in the build directory, it doesn't process them any more, unless you do a clean, and that alleviates the problem a lot, because transpiling the project sources alone is usually quite fast.

In any case, we have bundler v3, which is basically a webpack build with a plugin to handle bundler-imports. That is the only specific thing our platform has that regular webapps don't. Note that bundler v3 is NOT the same as the webpack federation thing we did that ended up not working (because we couldn't find a way to implement bundler-imports correctly, in fact).

We are currently using bundler 3 to build frontend-js-recharts and it reduced build time from ~25 secs to ~5 secs, so a big improvement there. Also, as it uses webpack under the hood, we can most probably leverage webpack's caching mechanism and even webpack-dev-server (though I don't think that would work well, because this PR sets up a dev server for the whole portal, and what I'm talking about would set up a dev server for just one module, so not the same...).

My doubt at this moment is whether to continue evolving bundler 3 to fill the gap until browser modules are fully working, or to skip that step completely and go from bundler 2 to browser imports. On the one hand, it would be worth beginning to use bundler 3 because:

  1. we may eventually supersede bundler 2 (given that both are interoperable)
  2. we may evolve this webpack based build to leverage browser modules at some time in the future (related reads: https://v8.dev/features/modules#performance (a recommendation to keep bundling even when using browser modules) and
    https://github.com/tipalti/webpack-import-map-plugin (a webpack plugin to generate import maps))
  3. It's really a standard webpack build plus a post-process to wrap three files inside the AMD thingie.

On the other hand, the thing that scares me a bit is that the model differs from that of bundler v2: bundler 2 made every npm module public by default (i.e., you can import it from any other place), while bundler 3 does the opposite and hides everything inside the OSGi bundle containing it unless you make it public. This makes it necessary to develop some tools to aid in determining what needs to be made public (for instance, anything imported from a JSP should be made public). However, this could be the case for browser modules too (if we end up bundling partial dependency graphs for performance reasons), so it could be good to start investigating it.

To recap: I think we can proceed with the dev server already (we should have proceeded with it long ago, but never did it :-( ), but until we get rid of bundler 2, any change to the dependencies, imports, etc. will probably take a lot of time. I guess having a part is better than having nothing, but we should probably decide what to do with the bundler in parallel, because it's been quite a long time without touching it and bundler 2 is clearly no longer the best tool for its task...

Depending on what we decide to do with the bundler + AMD loader stuff, this devserver may change in one way or another, I guess. Or maybe not 🤷 What do you think?

@izaera
Member

izaera commented Oct 7, 2021

There's also one gap we would need to cover, but I'm not sure of the implications 🤔 It's all server-related stuff, i.e., any change to any OSGi thing or any state known by the server, for example:

  1. npm dependency graphs (even inside the project's source code, I think). I mean, if you have a module a.js and you add a new import of b.js to it, I think you need to notify the server's NPMRegistry somehow, so that the AMD loader knows that when you request a.js it needs to fetch b.js too.
  2. Portlet configuration
  3. Portlet localization
  4. Portlet properties (like header CSS)
  5. ...

The devtools team did something to deal with some of these, but it was still incomplete and required true OSGi redeployments for some things to work, so I'm afraid we are far from a complete reload solution in that sense :-(.

@izaera
Member

izaera commented Oct 7, 2021

if we end up bundling partial dependency graphs for performance reasons

There's the possibility of a hybrid approach too:

  1. when developing, we don't bundle any npm module at all and leave it all to the browser, as in the browser modules PoC. That would effectively remove the need for any transpilation (other than JSX and other exotic features) and the need to notify the server about dep graph changes (because there's no need to use any AMD loader).
  2. when deploying we do any partial bundling needed as discussed here

@izaera izaera mentioned this pull request Oct 7, 2021
@jbalsas
Contributor Author

jbalsas commented Oct 7, 2021

So testing the dev-server today I ran into a few issues, none of them major

I'm sure there are even more 😂

  • Unable to sign in on port 3000. Had to go to 8080 first and sign in and then back to 3000

Yeah, this is something I need to investigate... for some reason the sign-in redirect doesn't work well. I thought it might have to do with the backend so I need to debug it a little bit.

  • Liferay lang keys didn't seem to work consistently but this probably makes sense because the file isn't actually deployed

Yeah, I wouldn't expect them to work at all, though it should be pretty straightforward to implement (see the sketch after this list), since:

  • We have access to the local filesystem, thus to the Language.properties files map
  • We are proxying the files, so we can do the rewrite on the fly
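
A very rough sketch of that on-the-fly rewrite (the `Liferay.Language.get` substitution pattern and the properties parsing here are simplified assumptions):

```js
// Rough idea: substitute Liferay.Language.get('key') calls with values read
// from the module's Language.properties as the file is proxied/served.
const fs = require('fs');

function loadLanguageKeys(propertiesFile) {
	const langKeys = new Map();

	for (const line of fs.readFileSync(propertiesFile, 'utf8').split('\n')) {
		const separator = line.indexOf('=');

		if (separator > 0 && !line.startsWith('#')) {
			langKeys.set(
				line.slice(0, separator).trim(),
				line.slice(separator + 1).trim()
			);
		}
	}

	return langKeys;
}

function rewriteLanguageKeys(source, langKeys) {
	return source.replace(
		/Liferay\.Language\.get\(\s*'([^']+)'\s*\)/g,
		(match, key) => JSON.stringify(langKeys.get(key) ?? key)
	);
}
```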
  • Feels a little slow, or it seems like I need to also manually reload sometimes. Again, not a major issue because it's only a few seconds.

Yeah, I think there's some kind of interaction with the loader or something... that's why I mentioned in the To-Do that apparently it only works with DevTools open... did you by any chance experience that?

  • The reload alerts can be a little annoying, a few times I get 4-5 stacked.

Hahaha... yeah... might want to get some UX experts to improve that workflow, feel free to tweak it any way you want!! 🤗

One simple idea to improve build time is to add some cli flags for liferay-npm-scripts build that dev-server could use to speed it up. Like --sass-only or --js-only.

Nice one! That definitely helps!

What I was trying (but didn't keep around for some reason) was:

  • Backup config files (.npmscripts.config.js...)
  • Make special configuration so that src, includes... only targets the modified file
  • Trigger build (should only affect that 1 file)
  • Restore original config files

If I remember correctly I kinda made it work for JS, but the CSS build script wouldn't allow properly configuring the globs to point to a single file. We can revisit that approach if we want to push this further. In any case, I think the bottleneck was more on bundler v2 still processing node_modules than on the processing of the source files, so maybe it's a moot point.
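
For the record, the backup/override/restore flow could look roughly like this (the config shape written here is simplified and not the real `.npmscripts.config.js` schema):

```js
// Sketch of the backup -> override -> build -> restore idea for a single file.
const fs = require('fs/promises');
const path = require('path');
const {execSync} = require('child_process');

async function buildSingleFile(moduleDir, changedFile) {
	const configFile = path.join(moduleDir, '.npmscripts.config.js');
	const backupFile = `${configFile}.backup`;

	await fs.copyFile(configFile, backupFile);

	try {
		// Simplified: point the build at just the changed file.
		await fs.writeFile(
			configFile,
			`module.exports = ${JSON.stringify({
				build: {input: [changedFile]},
			})};`
		);

		execSync('yarn build', {cwd: moduleDir, stdio: 'inherit'});
	}
	finally {
		await fs.rename(backupFile, configFile);
	}
}
```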

Overall though, for only ~400 lines of code, this works surprisingly well and is a big improvement over deploying.

Yeah... ~400 + the lines of the dependencies + the millions of lines in Node.js... but that's more or less like everyone else 😂

For next steps, I think we just go with it and label it as something more experimental, and we can gather data on its usefulness. I think we publish this as-is under @liferay/dev-server at 1.0.0-experimental and we can have people use it via npx. Steps to use would be: navigate to DXP and run liferay-dev-server.

Anyone opposed to just going ahead with this? Eventually I think we end up putting this under @liferay/cli and you can run it with liferay dev

I thought you wanted to move it into DXP and make a quieter rollout. I'm fine either way; this is for you all to decide what to do with, considering you're going to be the main maintainers. You can even dump it and start from scratch if that works better for you!


I think @izaera's comments would require further exploration in different threads (especially about which direction to take bundler and portal frontend).

The main point I think is:

[...]To recap: I think we can proceed with the dev server already (we should have proceeded with it long ago, but never did it :-( )[...]

So I'd suggest getting something out there that covers some part of the frontend development process, and then slowly seeing what else needs to be added while avoiding unnecessary features. It's not that often that a frontend is changing configurations or dependencies, so it could be okay to go through a full reload in that case; we just need to make the expectations clear.

@bryceosterhaus
Member

I thought you wanted to move it into DXP and make a quieter rollout.

I did initially, but as I started to do it I realized it would actually be slower to merge and update because of CI. And I also don't know how Brian would feel about it being in DXP. I was testing this all locally with just a yarn link to the project bin, and it worked fine for updating and making changes locally.

Additionally, being able to simply tell people to run npx liferay-dev-server anywhere in the DXP source is pretty easy.

I think @izaera's comments would require further exploration in different threads (especially about which direction to take bundler and portal frontend).

Agreed, I think we can explore this in parallel; let's just get something out for people to use right now, regardless of how buggy it might be initially.

We use a very naive approach for now and just map the name of the module
from its `package.json` to the `build/node/packageRunBuild/resources` folder
in the WD.

At runtime, we know requests for those resources should look more or less
like `/o/${moduleName}/${path}`, and those can be retrieved (when built)
from `build/node/packageRunBuild/resources/${path}`.

This does not account for `.scss` files which are compiled to `.css` and
placed in `.sass-cache`, so we'll need to further refine this to account
for it.

Additionally, globbing `liferay-portal` is a very expensive operation. We
could try to defer it somehow, but I'm not sure we can have a functioning
dev-server without it. Thought about doing a kind of reverse-lookup as
requests started coming in and then slowly fill the map on demand, but not
sure how much of a win that could be if at all, so paying the price upfront
for now.
…isk when available

Opens a basic server at 127.0.0.1:3000 that acts as a proxy to the local
server when the requests are not part of our mapped module files.

For now, we only handle JS for:
- /o/moduleName/fileName
- /o/js/resolved-module/moduleName/fileName

AUI modules can't be found in their expected `build/node/packageRunBuild`
location since they're only added via Gradle. We'd need to map them and
other similar files in a different way.
…ed files

Naively attempts to run `yarn build` from the root of the modules that
have changed.

Pretty naive implementation as well, but might be useful for inspiration (a stripped-down sketch follows the list):
- Adds a middleware to our proxy to inject arbitrary JS into the page
- Creates an `/events` endpoint to register Server Side Events
- Adds a `setupLiveSession` IIFE that registers itself to the `/events`
EventSource and reloads the page on a "reload" command
- On changes, notify live sessions that a reload will occur and issue the
reload command when the build completes
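
A stripped-down sketch of both sides of that mechanism (simplified, not the exact code in the package):

```js
// Server side: keep open /events responses and broadcast "reload".
const liveSessions = new Set();

function eventsMiddleware(request, response) {
	response.writeHead(200, {
		'Cache-Control': 'no-cache',
		'Connection': 'keep-alive',
		'Content-Type': 'text/event-stream',
	});

	liveSessions.add(response);
	request.on('close', () => liveSessions.delete(response));
}

function notifyReload() {
	for (const session of liveSessions) {
		session.write('data: reload\n\n');
	}
}

// Client side (injected into proxied pages as a string):
const setupLiveSession = `
(function() {
	new EventSource('/events').onmessage = function(event) {
		if (event.data === 'reload') {
			window.location.reload();
		}
	};
})();
`;
```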
Before, `\d.\d.\d` would only match single digits in each position, thus
something like `3.1.98` would fail to match as a valid version.
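
In other words (illustrative only; the actual pattern used by the fix may differ):

```js
// `\d.\d.\d` matches exactly one digit per position (and `.` matches any
// character), so multi-digit versions fail; `\d+\.\d+\.\d+` handles them.
/^\d.\d.\d$/.test('3.1.98'); // false
/^\d+\.\d+\.\d+$/.test('3.1.98'); // true
```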
By default, our compiled CSS goes into `.sass-cache` instead of directly
with the rest of compiled sources.

Adding a simple fix here to account for this, but long-term, it might be
a good idea to have a better resolution algorithm where we can provide
different solvers for different matches.

This ensures that we don't spend time running unnecessary steps like
minification.

jbalsas and others added 19 commits October 7, 2021 15:57
This fixes some paths like /c/portal/login that would fail to redirect properly.
Login is still somewhat broken, which needs further inspection.

…faster startup

Pretty much happy-path implementation of a cache strategy for module
mapping resolution. This should support caching multiple mappings
(e.g. for different portal versions) based on the binary path
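
A happy-path version of that cache might look like this (the cache file location and shape are illustrative):

```js
// Illustrative cache of moduleName -> output directory maps, keyed by the
// portal path so multiple portal checkouts can coexist.
const fs = require('fs');
const os = require('os');
const path = require('path');

const CACHE_FILE = path.join(os.homedir(), '.liferay-dev-server-cache.json');

function readCache() {
	try {
		return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
	}
	catch (error) {
		return {};
	}
}

function getCachedModuleMap(portalPath) {
	return readCache()[portalPath];
}

function writeModuleMap(portalPath, moduleMap) {
	const cache = readCache();

	cache[portalPath] = moduleMap;

	fs.writeFileSync(CACHE_FILE, JSON.stringify(cache, null, '\t'));
}
```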
@bryceosterhaus
Member

Added a handful of commits cleaning up the code to be more in line with existing projects.

I also moved it to js-toolkit since it is more js than npm related

@bryceosterhaus bryceosterhaus changed the title from feat: [POC] Naive Liferay Dev Server to feat: Liferay Dev Server Oct 7, 2021
@bryceosterhaus bryceosterhaus marked this pull request as ready for review October 7, 2021 23:20
@bryceosterhaus
Member

I need to rebuild DXP, but I am going to test this out tomorrow, and if all seems good and no one raises issues, I'll publish a v1 tomorrow and send it to the FI channel for people to try out.

@bryceosterhaus bryceosterhaus merged commit d446259 into liferay:master Oct 8, 2021

## Caveats

Probably a lot, which is why this is currently experimental. 😄
Contributor Author


youdontsay

@@ -5,7 +5,7 @@
* SPDX-License-Identifier: LGPL-3.0-or-later
*/

-import server from '../lib/server';
+const {default: server} = require('../lib/server');
Contributor Author


Ha, good catch!

I forgot I was testing this with NODE_OPTIONS='--experimental-specifier-resolution=node' just for fun

hallelluja
