
Better support for --js-flags and other CLI arguments in a packaged app #22705

Open · patrickhulce opened this issue Mar 16, 2020 · 31 comments

@patrickhulce

Preflight Checklist

  • I have read the Contributing Guidelines for this project.
  • I agree to follow the Code of Conduct that this project adheres to.
  • I have searched the issue tracker for a feature request that matches the one I want to file, without success.

Problem Description

Packaged apps need a lower-touch method of leveraging --js-flags and other electron CLI options that can only be passed in as arguments currently.

Specific use case
--max-old-space-size to increase V8 heap size of the main process in production.

Additional Context
Our company builds a photo processing application that moves as much analysis out of the renderer process as possible (IPC to the main process for all non-renderer work and heavy use of node child processes). We can avoid OOMs in the renderer with app.commandLine.appendSwitch('js-flags', '--max-old-space-size=XXXX'); but that does us no good for the main process that's already running. This creates a perverse incentive to disable sandboxing and force more logic into the renderer process. Similarly, v8.setFlagsFromString (and electron's built-in v8Flags package.json property) does no good, because --max-old-space-size is a no-op once the isolate has already been created.
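(For reference, a minimal sketch of the renderer-only workaround described above; the 4096 value is just an example.)

// main.js — this switch is forwarded to the processes Electron spawns later
// (renderers, GPU process, etc.), which is how renderer OOMs can be avoided.
// The main process's own V8 isolate was created before any app code ran,
// so this call cannot raise the main-process heap limit.
const { app } = require('electron')
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096')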

Importance

This is a very common userland request that is frequently misunderstood and dismissed by userland maintainers as technically infeasible. Furthermore, the common answer in electron core to requests for increasing the memory limit is "use --js-flags", with no mention of how to make that actually work in production.

Additionally, lowering the barrier to experimenting with new flags in packaged apps could reduce the burden of transitioning features like sandbox.

Proposed Solution

Electron reads additional flags on startup from a file with a known name (electron.opts perhaps? naming very much open to suggestion), processed before the V8 isolate is created and before the other initialization that makes setting flags from within the main process "too little too late".
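Purely as an illustration of the idea (neither this filename nor this format exists in Electron today; both are open questions in the proposal), such a file shipped next to the executable might contain:

# electron.opts (hypothetical): read by Electron before the main-process
# V8 isolate is created, one extra CLI argument per line.
--js-flags=--max-old-space-size=4096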

While at first blush this seemed potentially insecure to me, if the user can already pass these exact flags to the application, a malicious user with filesystem access could already modify the shortcut/plist to the same effect.

Relevant Change Locations

Not entirely sure, maybe here?

v8::Isolate* JavascriptEnvironment::Initialize(uv_loop_t* event_loop) {
  auto* cmd = base::CommandLine::ForCurrentProcess();

  // --js-flags.
  std::string js_flags = cmd->GetSwitchValueASCII(switches::kJavaScriptFlags);
  if (!js_flags.empty())
    v8::V8::SetFlagsFromString(js_flags.c_str(), js_flags.size());

or here?

int NodeMain(int argc, char* argv[]) {
  base::CommandLine::Init(argc, argv);

  int exit_code = 1;
  {
    // Feed gin::PerIsolateData with a task runner.
    argv = uv_setup_args(argc, argv);
    uv_loop_t* loop = uv_default_loop();
    scoped_refptr<UvTaskRunner> uv_task_runner(new UvTaskRunner(loop));
    base::ThreadTaskRunnerHandle handle(uv_task_runner);

    // Initialize feature list.
    auto feature_list = std::make_unique<base::FeatureList>();
    feature_list->InitializeFromCommandLine("", "");
    base::FeatureList::SetInstance(std::move(feature_list));

#if defined(_WIN64)
    crash_reporter::CrashReporterWin::SetUnhandledExceptionFilter();
#endif

    // Explicitly register electron's builtin modules.
    NodeBindings::RegisterBuiltinModules();

    // Parse and set Node.js cli flags.
    SetNodeCliFlags();

    int exec_argc;
    const char** exec_argv;
    node::Init(&argc, const_cast<const char**>(argv), &exec_argc, &exec_argv);

    gin::V8Initializer::LoadV8Snapshot(
        gin::V8Initializer::V8SnapshotFileType::kWithAdditionalContext);

    // V8 requires a task scheduler apparently
    base::ThreadPoolInstance::CreateAndStartWithDefaultParams("Electron");

    // Initialize gin::IsolateHolder.
    JavascriptEnvironment gin_env(loop);
    v8::Isolate* isolate = gin_env.isolate();

Alternatives Considered

  • Custom compilations of electron with the flags hardcoded (a large amount of work and maintenance, and only accessible to a small subset of the electron population from what I've seen so far)
  • A wrapper around the executable to force the use of particular arguments (difficult cross-platform, undocumented behavior on certain platforms, buggy on macOS)

These are the only two current solutions I'm aware of. If there's a way to achieve what I'm looking for that I missed, I'd love to be wrong here :)

Resourcing

I'm willing to work on this if there's consensus on a solution and someone willing to point me in the right direction. I used to work on Chromium and am familiar enough with the electron architecture to get this far.

Related Issues

@nornagon (Member)

It's probably not a great idea to do a lot of heavy work in the main process, as it's involved in coordinating a lot of things and all JS in the main process currently runs on the UI thread.

You mention use of child processes—would moving this work to a child process with the heap size increased be possible for your use case? Consider also the recently-merged MessagePort functionality, which supports direct IPC between renderers.

@patrickhulce (Author)

patrickhulce commented Mar 18, 2020

Thanks very much for the response @nornagon!

It's probably not a great idea to do a lot of heavy work in the main process, as it's involved in coordinating a lot of things and all JS in the main process currently runs on the UI thread.

It's definitely not great, but in our case the extra serialization cost of another child-process hop is high, and we've built a custom scheduler that yields frequently back to renderers to keep the UI responsive. (In addition to the engineering challenges with moving this into child processes, below.)

would moving this work to a child process with the heap size increased be possible for your use case?

Possible, yes, but just a lot of extra engineering. In our experience, working with node_modules has been much, much smoother in the electron-node land compared to custom node child processes (asar path inconsistencies, native module recompilation, etc). If there's something we're missing here that makes headless electron child processes that function like node with custom memory limits much easier, I'm all ears! :)

EDIT: To add on to whether it's possible: while I believe it to be possible, I'm still concerned that the serialization and transient coordination in the main process, with many long-lived and concurrent jobs, could reach the 2GB limit and still lead to OOMs, so even in this scenario I'd still like to be able to increase the heap size there.

@nornagon (Member)

I've seen several apps use hidden BrowserWindows, as opposed to node child processes. Using a hidden BrowserWindow would allow you to directly communicate between the worker and the renderer using a MessagePort. That would eliminate the double-serialization.
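A minimal sketch of that pattern, assuming the MessageChannelMain and webContents.postMessage APIs that shipped with the MessagePort work (file names and the 'port' channel name are illustrative):

// main.js — create a channel and hand one end to the UI window and the other
// to a hidden worker window, so the two renderers can exchange data directly
// instead of round-tripping through the main process.
const { app, BrowserWindow, MessageChannelMain } = require('electron')

app.whenReady().then(() => {
  const uiWin = new BrowserWindow()
  const workerWin = new BrowserWindow({ show: false, webPreferences: { nodeIntegration: true } })
  uiWin.loadFile('index.html')
  workerWin.loadFile('worker.html')

  const { port1, port2 } = new MessageChannelMain()
  uiWin.webContents.once('did-finish-load', () => {
    uiWin.webContents.postMessage('port', null, [port1])
  })
  workerWin.webContents.once('did-finish-load', () => {
    workerWin.webContents.postMessage('port', null, [port2])
  })
})

// In each renderer, pick up the transferred port and talk to the other side:
//   ipcRenderer.on('port', (event) => {
//     const [port] = event.ports
//     port.onmessage = (e) => { /* data from the other renderer */ }
//     port.postMessage('hello')
//   })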

@patrickhulce (Author)

Using a hidden BrowserWindow would allow you to directly communicate between the worker and the renderer using a MessagePort. That would eliminate the double-serialization.

Ah, yes! That's a decent option with MessagePort now. It would require nodeIntegration and disabled sandbox in the BrowserWindow but that's basically the security model we'd be getting in a node child process anyhow. Am I right in assuming that there is not a significant performance difference between utilizing node fs APIs and working with buffers in a BrowserWindow vs. the main process?

Up until now we've used a single BrowserWindow just for UI, so I'm a little fuzzy on the thread model here for multiple windows. It was surprising to us when we first discovered that the work in the main process blocks the main UI thread of the renderer process. If we're launching another "renderer" process as a worker, would its main thread be tied together with the other renderer and the main process?

If all that's true and we've established a workaround for my specific situation, just curious for your thoughts on a mechanism for passing CLI arguments in packaged apps. Many of the other linked issues have different use cases that probably wouldn't be solved by this approach that works for me.

@patrickhulce (Author)

Thanks very much for your guidance here @nornagon! It's been immensely helpful already :)

@nornagon (Member)

Ah, yes! That's a decent option with MessagePort now. It would require nodeIntegration and disabled sandbox in the BrowserWindow but that's basically the security model we'd be getting in a node child process anyhow. Am I right in assuming that there is not a significant performance difference between utilizing node fs APIs and working with buffers in a BrowserWindow vs. the main process?

I'm not aware of any performance differences after process startup. It's unclear to me whether it's faster to spin up a renderer process or a node child process. But once the process is started, they should have similar performance characteristics.

I've heard some rumblings about suggestions for "blessing" the idea of using a hidden BrowserWindow as a worker process by creating a new kind of child process that's somewhere between a node child process and a renderer. TBD whether that will ever become a thing, but if it does, then I would expect MessagePort to be usable in such worker processes.

Up until now we've used a single BrowserWindow just for UI, so I'm a little fuzzy on the thread model here for multiple windows. It was surprising to us when we first discovered that the work in the main process blocks the main UI thread of the renderer process. If we're launching another "renderer" process as a worker, would its main thread be tied together with the other renderer and the main process?

The main process only has one UI thread. This is true regardless of how many windows are open.

If all that's true and we've established a workaround for my specific situation, just curious for your thoughts on a mechanism for passing CLI arguments in packaged apps. Many of the other linked issues have different use cases that probably wouldn't be solved by this approach that works for me.

In general I think that it's preferable to support as much as we can via JS, rather than by static configuration. In production scenarios, it's often the case that flags should be applied only in certain situations, for instance, depending on the specific OS version. Many flags are possible to set this way; JS flags in the main process is one of the few exceptions.

It might be worth exploring a way to enable this for --js-flags specifically. I can think of a few other JS flags which app authors might want to enable in the main process (e.g. enabling experimental V8 features).

@Mike-Dax (Contributor)

@patrickhulce We spawn a hidden BrowserWindow to handle our 'non-ui' tasks.

VSCode uses this method as well.

https://github.com/microsoft/vscode/blob/0687d65b30a752cef32490bca7ca85f4b41015b2/src/vs/code/electron-main/sharedProcess.ts#L40

We use a simple binary protocol to communicate between the UI and the hidden BrowserWindow to minimise serialisation / deserialisation costs.

Using named pipes we were able to communicate directly; however, we found problems with Windows sometimes not connecting properly, so we switched to using the ipcRenderer.sendTo method.
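A rough sketch of that setup (the file name and channel names are illustrative; ipcRenderer.sendTo addresses another renderer by its webContents id):

// main.js — spawn a hidden BrowserWindow as a worker and expose its
// webContents id so the UI renderer can address it directly.
const { app, BrowserWindow, ipcMain } = require('electron')

app.whenReady().then(() => {
  const worker = new BrowserWindow({ show: false, webPreferences: { nodeIntegration: true } })
  worker.loadFile('worker.html')
  ipcMain.handle('get-worker-id', () => worker.webContents.id)
})

// In the UI renderer — send work straight to the worker renderer, bypassing main:
//   const workerId = await ipcRenderer.invoke('get-worker-id')
//   ipcRenderer.sendTo(workerId, 'job', payload)
//   ipcRenderer.on('job-result', (event, result) => { /* ... */ })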

@patrickhulce (Author)

I've heard some rumblings about suggestions for "blessing" the idea of using a hidden BrowserWindow as a worker process by creating a new kind of child process that's somewhere between a node child process and a renderer.

I think that's a great idea 👍 When first starting out in Electron I had a hard time figuring out how to actually get work totally off the UI thread, and bundling a node binary along with Electron led to a lot of issues later with notarization, publishing, etc.

Thanks for the extra validation @Mike-Dax ! We'll probably go that route as well then.

@flotwig (Contributor)

flotwig commented Mar 20, 2020

For another use case, our app has to detect if it was launched with NODE_OPTIONS=--max-http-header-size=XXXXX --http-parser=legacy, and if not, use cp.spawn to respawn itself with the right NODE_OPTIONS, because there's no way to hard-code these options to always be set. Being able to embed these options in the packaged app would be swell.

I guess these aren't technically Electron CLI arguments, but it's something to think about that is in the spirit of this issue.
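A rough sketch of that respawn dance, not the app's actual code (the header size below is a labeled placeholder):

// main entry — if the required NODE_OPTIONS aren't present, respawn ourselves with them.
const { spawn } = require('child_process')

// 65536 is an arbitrary placeholder value; a real app would pick its own size.
const requiredNodeOptions = '--max-http-header-size=65536 --http-parser=legacy'

if (!(process.env.NODE_OPTIONS || '').includes('--max-http-header-size')) {
  const child = spawn(process.execPath, process.argv.slice(1), {
    env: { ...process.env, NODE_OPTIONS: requiredNodeOptions },
    stdio: 'inherit',
  })
  child.on('exit', (code) => process.exit(code === null ? 1 : code))
} else {
  // ...continue normal startup
}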

@superdave

Frankly, I'd like this to be a little easier to do in order to reduce max heap size; Electron chat apps like Slack and Teams are infamous for using up a lot more RAM than they need because the default V8 heap size is somewhere around 1.5GB and the old space isn't collected until it's full, so those apps just balloon up until they fill it (needlessly).

I can run Slack and Teams happily with --max-old-space-size=256 (which doesn't percolate down to their helper apps, which also appear to be Electron and which then balloon up to 1.5 GB), but passing those flags to the app requires executing from the terminal, which most Slack users aren't going to do (for the record, on the Mac: open /Applications/Slack.app --args --js-flags=--max-old-space-size=256). It would be nice to let apps expose this as an option for users to configure, but short of the apps packaging themselves to run in a shell script, which I don't see happening, I don't think there are a lot of existing paths there.

@vladimiry

Would also love for this feature request to get accepted and implemented. I need to store some data in memory in the main process, and from time to time this data gets written to the file system. I prefer not to keep the data in the renderer process due to the extra serialization and increased code complexity. Normally the data doesn't hit the default 2GB limit, but sometimes users have to pass a --js-flags="--max-old-space-size=N"-like argument manually, which is not a very good user experience.

@vladimiry

@patrickhulce, I use a hacky workaround for now, embedding the CLI args into the desktop shortcuts (primary change + minor fix), but I haven't yet found how to apply it to the macOS/DMG package too.

@mix3d

mix3d commented Sep 24, 2020

Another request for being able to bundle command-line args into built electron apps. I have a WebGL-based 3D medical image renderer that is struggling to allocate enough memory for some large image sets. We were hoping we'd have the control in Electron to do so, but...

@fridgerator

BrowserWindow has an option additionalArguments under webPreferences (it takes an array of strings). It seems setting it like webPreferences: { additionalArguments: ['--js-flags=--max-old-space-size=8192'] } adds the correct arguments to the spawned renderer processes.

@vladimiry

@fridgerator, the issue is about applying the js-flags to the main process. Handling the renderer processes is a trivial task: simply use app.commandLine.appendSwitch("js-flags", ...).

@fridgerator

@vladimiry You're right, I was mistaken.

@gormonn

gormonn commented Oct 27, 2020

@nornagon
What about --ignore-gpu-blacklist? See this issue (the flag doesn't work with appendSwitch): #15339
For example, in my case, this makes it impossible to use hardware acceleration to play Full HD video on certain configurations in Electron.

This seems to be supported in Electron 10...

@pedro-surf

pedro-surf commented Jan 2, 2021

Any news in Electron v11 on how to add more memory capacity?

Our app crashes after feeding it a large dataset (~2.1GB of PDF files).

@mix3d

mix3d commented Jan 2, 2021

Any news in Electron v11 on how to add more memory capacity?

Our app crashes after feeding it a large dataset (~2.1GB of PDF files).

You might be running into the same problem I am: being unable to allocate a single array object at basically that size. I think it's a hard limit in Chromium.

@aminya

aminya commented Jan 11, 2021

How can I enable experimental WebAssembly features in Node or Chrome?

@archfz

archfz commented Feb 6, 2021

I am trying to pass an argument when starting the AppImage on Linux from the terminal, down into the Electron main process, but it's not showing up in process.argv. I suppose this ticket is requesting similar support.

@archfz

archfz commented Feb 6, 2021

Ignore my comment. It is working as expected; I had to explicitly pass process.argv to the library that parses it (command-line-args).

@evbo

evbo commented Jul 27, 2021

Another use-case in support of the proposed enhancement:

So I package an electron AppImage and send it to my customer's mysterious CentOS 7 server, and when they run it, it hangs with:

Fatal process out of memory: Failed to reserve memory for new V8 Isolate

Everything I find on Google says something about increasing the address space (but no docs on how?) or the virtual memory, which I think can in part be done by increasing the volatile memory via this line in the electron-main.js entry to my app:

app.commandLine.appendSwitch('js-flags', '--max-old-space-size=8192');

But that seemingly has no effect, and then you read this:

NODE_OPTIONS are explicitly disallowed in packaged apps, except for the following:
--max-http-header-size
--http-parser

So am I completely blocked from deploying a packaged electron app to environments that have insufficient virtual memory allocated to the main process? Seems like a great use case to enable support for js-flags if so.

@vladimiry

So I try to package an electron AppImage

For the AppImage case it's actually easily solvable: simply adjust your build pipeline to modify the AppRun sh script located inside the AppImage container.

@evbo

evbo commented Jul 27, 2021

@vladimiry wow, that sounds awesome, thank you. Can you please explain just a tad bit more? I am using electron-builder, so the actual pipeline is abstracted away: I simply set some high-level config in my package.json and it does the rest. How could I intercept it, and where would your suggested changes go?

@vladimiry

vladimiry commented Jul 27, 2021

@evbo here is the script that triggers a regular electron-builder-based build and then modifies the AppImage file by adding the --js-flags="--max-old-space-size=6144" CLI argument (the AppImage file gets repacked).

@nornagon (Member)

A few notes on this:

  1. There are decent workarounds to pass command-line arguments that need to be parsed before JS is run on Windows and Linux (editing the shortcut and using a wrapper script, respectively), but there is no good solution on macOS.
  2. I considered a field in package.json as a possibility for where these flags could go, but on further consideration, we'd have to potentially parse the asar file to find it, which is too complicated to do from C++ to be worth doing.
  3. Info.plist seems like a plausible place to store this sort of data on macOS.
  4. However, the majority of command-line flags are settable from JavaScript during the first tick.

Aside from all this though, changing the maximum V8 heap size to a constant is probably not what you want to do.

  • Max V8 heap size is currently dependent on the amount of system memory. The calculation is a bit complicated, but roughly it comes out to min(system_memory / 2, 2GB) (or 4GB on systems with 16GB or more of memory). Overriding this with a constant independent of the machine is likely to come with negative repercussions when the app is launched on a machine with a lower amount of memory.
  • V8 pointer compression puts a 4GB maximum on the heap size. So on machines with >16GB, there is no way to increase the heap size from the default.
  • We might consider a fuse to set the max heap size in the main process to 4GB unconditionally, even on machines with 16GB or less of memory. This would still be reduced to 1/2 of available physical memory on machines with less than 8GB of memory. Increasing it beyond 4GB is not supported by V8, due to pointer compression.

There are a few other V8 options which an app might plausibly want to set in the main process, but most of them are quite marginal, and by far the most common request we've seen is to change the max heap size.
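For reference, the effective limit on a given machine can be inspected from the main process with Node's built-in v8 module (a minimal sketch):

// Prints the V8 heap ceiling the main process actually received, so the
// size-dependent default described above can be verified per machine.
const v8 = require('v8')
const limitMB = v8.getHeapStatistics().heap_size_limit / (1024 * 1024)
console.log(`main-process heap limit: ~${Math.round(limitMB)} MB`)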

@heylying

Hi. I have a question. app.commandLine.appendSwitch('js-flags', '--max-old-space-size=xxx') works in v1.7.16, but doesn't work in v10.4.7 after upgrade. No code changes. What's wrong?

@heylying

Hi. I have a question. app.commandLine.appendSwitch('js-flags', '--max-old-space-size=xxx') works in v1.7.16, but doesn't work in v10.4.7 after upgrade. No code changes. What's wrong?

After searching around for a decent solution, I think pointer compression causes it. Now we can either compile Electron ourselves without pointer compression, ask the team to provide such a build, or downgrade (< v9.0.0).

@marcelblum

@heylying if the cause truly is pointer compression, that was added in V8 8.0, which was added to Electron with Electron 8.0.0. That would mean the latest version you could downgrade to would be Electron 7.3.3. But this line of thinking does not quite jibe with the more severe Electron memory limitations being introduced with Electron 12 and then worsened in Electron 14, as per my findings detailed in #31330.

@heylying

heylying commented Oct 19, 2021

Electron disabled pointer compression in v8 and enabled it in v9.
Yes, I think our problems are not the same.
