Same cache dir between multiple projects errors with parallel usage #15

Closed
bluetech opened this issue Apr 19, 2017 · 10 comments
@bluetech
Contributor

My build system creates 3 bundles with rollup. If I build each bundle serially, it succeeds. However, if I build them concurrently (e.g. with make -j4, gulp, ...), it fails like this:

[19:54:40] Using gulpfile client/gulpfile.js
[19:54:40] Starting 'scripts.foo'...
[19:54:42] Starting 'scripts.bar'...
[19:54:44] Starting 'scripts.baz'...
Error: ENOENT: no such file or directory, open 'client/.rpt2_cache/c96394021b516827cdd67f325bc8e31a9c85a68e/code/cache_/c25c849c2122f988238ef5fb3159bccdcf3167ca'
    at error (client/node_modules/rollup/dist/rollup.js:170:12)
    at client/node_modules/rollup/dist/rollup.js:8926:6
    at process._tickCallback (internal/process/next_tick.js:109:7)
    at Module.runMain (module.js:607:11)
    at run (bootstrap_node.js:423:7)
    at startup (bootstrap_node.js:147:9)
    at bootstrap_node.js:538:3
Error: ENOENT: no such file or directory, open 'client/.rpt2_cache/c96394021b516827cdd67f325bc8e31a9c85a68e/code/cache_/fe9b01c7ea2aa0c98e1433b824f39491fdead469'
    at error (client/node_modules/rollup/dist/rollup.js:170:12)
    at client/node_modules/rollup/dist/rollup.js:8926:6
    at process._tickCallback (internal/process/next_tick.js:109:7)
    at Module.runMain (module.js:607:11)
    at run (bootstrap_node.js:423:7)
    at startup (bootstrap_node.js:147:9)
    at bootstrap_node.js:538:3
[19:54:49] Finished 'scripts.bar' after 6.27 s
[19:54:49] Finished 'scripts.foo' after 8.97 s
[19:54:49] Finished 'scripts.baz' after 5.65 s
[19:54:49] Starting 'scripts'...
[19:54:49] Finished 'scripts' after 67 μs

I can try to provide a reproducer, if needed.
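For reference, a gulpfile along these lines (task names, entry points, and output paths are entirely hypothetical; API names follow rollup 0.x and gulp 3 of the time) shows the shape of such a setup: three independent bundles built concurrently against the shared default cache directory.

// gulpfile.js -- hypothetical reproducer, not the reporter's actual setup
const gulp = require("gulp");
const rollup = require("rollup");
const typescript = require("rollup-plugin-typescript2"); // .default may be needed on some versions

const apps = ["foo", "bar", "baz"];

for (const app of apps) {
  gulp.task(`scripts.${app}`, () =>
    rollup
      .rollup({ entry: `src/${app}/index.ts`, plugins: [typescript()] })
      .then((bundle) => bundle.write({ format: "iife", dest: `dist/${app}.js` }))
  );
}

// gulp runs the three dependency tasks concurrently, which is when the
// shared default .rpt2_cache directory becomes a problem.
gulp.task("scripts", ["scripts.foo", "scripts.bar", "scripts.baz"]);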

These are the relevant package versions:

$ npm ls --depth=0
├── resolve@1.3.2
├── rollup@0.41.6
├── rollup-plugin-typescript2@0.4.0
├── tslib@1.6.0
└── typescript@2.2.2

Thanks.

@ezolenko
Owner

ezolenko commented Apr 19, 2017

A workaround would be to give each process its own cache, for example with process.pid:

typescript({ cacheRoot: `.cache_${process.pid}` })

This will create a new cache for every run, so it's not a long-term workaround unless you also add a step to clean those up.
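Such a cleanup step could be a small script like the following sketch (it assumes the .cache_<pid> naming from the workaround above; fs.rmSync needs Node 14.14+, older Node would need a recursive-delete helper such as rimraf):

// clean-caches.js -- remove per-pid cache directories left over from earlier runs (sketch)
const fs = require("fs");
const path = require("path");

const root = process.cwd();
for (const name of fs.readdirSync(root)) {
  // Matches directories created as `.cache_<pid>` by the workaround above.
  if (/^\.cache_\d+$/.test(name)) {
    fs.rmSync(path.join(root, name), { recursive: true, force: true });
  }
}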

I'll look into using target names in the cache path later.

@ezolenko ezolenko self-assigned this Apr 19, 2017
@ezolenko ezolenko added this to the 0.4.1 milestone Apr 19, 2017
@ezolenko
Owner

BTW, if you manage to make it work with separate caches, check if your parallel execution makes a difference -- seems like rollup does one transpiling and then makes bundles for different targets from its own memory cache.

So depending on which plugins you are using (transpile-heavy or bundle-heavy work), you might not gain much speed-up, because you would force rollup to retranspile for each target.
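For context, "different targets" here means reusing one rollup build for several output formats, roughly like this (modern option names shown; rollup 0.x used entry/dest instead of input/file):

// build.js -- one transpile pass feeding two output targets (sketch)
const rollup = require("rollup");
const typescript = require("rollup-plugin-typescript2");

rollup
  .rollup({ input: "src/index.ts", plugins: [typescript()] }) // transpiles once
  .then((bundle) =>
    Promise.all([
      bundle.write({ format: "cjs", file: "dist/index.cjs.js" }), // both writes reuse the
      bundle.write({ format: "es", file: "dist/index.es.js" }),   // in-memory build result
    ])
  );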

@bluetech
Contributor Author

Thanks for your comments!

seems like rollup does one transpiling and then makes bundles for different targets from its own memory cache.

I am not sure what you mean by different targets - as far as I can tell, rollup can only create one bundle at a time (here is an issue: rollup/rollup#863). So if I want multiple bundles, I must run it multiple times?

@bluetech
Contributor Author

BTW, there is something strange in RollingCache.write:

	public write(name: string, data: DataType): void
	{
		if (this.rolled)
			return;

		if (data === undefined)
			return;

		if (this.rolled)
			fs.writeJsonSync(`${this.oldCacheRoot}/${name}`, data);
		else
			fs.writeJsonSync(`${this.newCacheRoot}/${name}`, data);
	}

The second if (this.rolled) check is dead code: if this.rolled were true, the first check would already have returned. So there may be a bug here?
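For reference, because the early return fires first, the method as written is behaviorally equivalent to this simplification (illustrative only; whether the oldCacheRoot branch was ever meant to be reachable is exactly the open question):

	public write(name: string, data: DataType): void
	{
		if (this.rolled || data === undefined)
			return;

		// The oldCacheRoot branch in the original can never execute,
		// because this.rolled has already caused an early return.
		fs.writeJsonSync(`${this.newCacheRoot}/${name}`, data);
	}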

@ezolenko
Owner

Yep, looks like leftovers from the watch fix, thanks!

I assumed you were building different targets in parallel from the same code somehow... (for example this plugin builds 2 bundles in cjs and es formats).

Are you saying you have multiple whole rollup configs and they conflict when being built at the same time? Could you post how you have it set up in gulp? You might be able to simply hardcode different cache roots, and then they shouldn't conflict.

@bluetech
Contributor Author

bluetech commented Apr 20, 2017

Yes, that's what I meant - entirely different bundles, not multiple targets (I had never noticed this functionality in rollup at all - sorry for the confusion).

Following your suggestion, I used the following:

cacheRoot: process.cwd() + `/.rpt2_cache/${app.name}`,

(where app.name is the name I give to the bundle being built), and now the build finishes without conflicts.
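Put together, a per-bundle build function looks roughly like this (bundle names, entry points, and output paths are hypothetical; rollup 0.x option names again):

// bundles.js -- each app gets its own cache directory (sketch)
const rollup = require("rollup");
const typescript = require("rollup-plugin-typescript2");

function buildApp(app) {
  return rollup
    .rollup({
      entry: `src/${app.name}/index.ts`,
      plugins: [
        typescript({
          // One cache directory per bundle, so concurrent builds no longer share state.
          cacheRoot: process.cwd() + `/.rpt2_cache/${app.name}`,
        }),
      ],
    })
    .then((bundle) => bundle.write({ format: "iife", dest: `dist/${app.name}.js` }));
}

// e.g. Promise.all([{ name: "foo" }, { name: "bar" }, { name: "baz" }].map(buildApp));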

So I think the issue is clear - a single cache directory cannot be used concurrently - and the solution is to use different cache directories. So we can close this issue, unless you want to make it work transparently in some fashion.

Thanks!

@ezolenko
Owner

Cool, I'll add a hash of the rollup config itself to the cache path; that should handle a number of possible conflicts.
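One way such a hash could be derived (purely illustrative, not necessarily how the plugin implements it):

// Deriving a per-config cache directory (illustrative sketch only)
const crypto = require("crypto");

function cacheDirFor(rollupOptions, cacheRoot) {
  // Note: JSON.stringify drops functions such as plugin instances, so a real
  // implementation would need a more careful serialization of the config.
  const hash = crypto
    .createHash("sha1")
    .update(JSON.stringify(rollupOptions))
    .digest("hex");
  return `${cacheRoot}/${hash}`;
}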

ezolenko added a commit that referenced this issue Apr 20, 2017
Partial solution for #15 and a number of other potential conflicts.
@ezolenko
Owner

ezolenko commented Apr 21, 2017

OK, 0.4.1 should behave better in this case out of the box (if the contents of your rollup configs actually differ).

@lobsterkatie

I am running into this on v0.31.2. My use case is applying the same rollup config to a number of different files via an environment variable, as can be seen in the PR linked just above. Interestingly, I only see these error messages if I pass the --silent option to rollup; otherwise, I'd have no idea anything was wrong.

lobsterkatie added a commit to getsentry/sentry-javascript that referenced this issue Mar 15, 2022
Given that most of our testing now runs in parallel in CI, the biggest bottleneck has become the build step. Within that step, the single slowest thing we do is build the integration bundles, simply because of the number of bundles which need to be created. (This is especially true now that we're creating three versions of each bundle rather than two[1].)

To speed things up a bit, this parallelizes the building of those bundles. Though it ends up not being as dramatic a time savings as one might hope (because the typescript plugin's caching mechanism doesn't play nicely with concurrent builds[2]), it nonetheless drops the total build time from roughly two minutes down to 70-80 seconds in my local testing.

[1] #4699
[2] ezolenko/rollup-plugin-typescript2#15
@agilgur5 agilgur5 changed the title Cache prevents parallel usage Same cache dir between multiple projects errors with parallel usage May 26, 2022
@agilgur5 agilgur5 added kind: bug Something isn't working properly solution: workaround available There is a workaround available for this issue kind: optimization Performance, space, size, etc improvement labels May 26, 2022
@agilgur5
Collaborator

agilgur5 commented May 26, 2022

@lobsterkatie after this issue was closed, the hash for caches now includes the entire Rollup config. If you're changing the inputs of the config, it should work out-of-the-box (though you may run into something similar to #228 depending on how you set it/variable references).
#243 also added graceful handling of the race condition when doing parallel builds with the same hashes.

Otherwise, as mentioned above, you can override cacheRoot as a workaround.
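For the environment-variable use case described above, that override can be keyed on the variable itself; the variable name and paths in this sketch are hypothetical:

// rollup.config.js -- per-variant cache directory keyed on an env var (sketch)
import typescript from "rollup-plugin-typescript2";

const variant = process.env.BUNDLE_VARIANT || "default";

export default {
  input: `src/${variant}.ts`,
  output: { file: `build/bundles/${variant}.js`, format: "cjs" },
  plugins: [
    typescript({
      // Each variant writes to its own cache, so parallel invocations of the
      // same config no longer collide.
      cacheRoot: `.rpt2_cache/${variant}`,
    }),
  ],
};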

If you're experiencing an unexpected issue still, please file a new issue and fill out the issue template.
That is typically easier for maintainers to track than comments on old, closed issues (this one is ~5 years old and one of the first issues in this repo), and enough may have changed since then that your report could have a totally different root cause. You can always reference the old issue if you believe they are related.

Locking as the original issue was resolved.

Repository owner locked as resolved and limited conversation to collaborators May 26, 2022