
Split function deployments by region AND memory #4253

Merged: 6 commits from dl-cf3-deploy-sets into master on Mar 7, 2022

Conversation

taeold
Contributor

@taeold taeold commented Mar 4, 2022

Today, function deployments are made in batches of functions grouped by region. Combined with source tokens to reuse built containers across deployments, we get a meaningful decrease in deploy time and in Cloud Build resource usage.

Unfortunately, this setup has a peculiar bug: Google Cloud Functions bakes the flag that sizes the heap appropriately for a function's memory configuration (--max_old_space_size) into the container itself. That means that if you batch two functions with differing memory configurations (e.g. 256MB vs 4GB), one function is guaranteed to end up with a wrongly configured --max_old_space_size value (follow #3716 (comment) for how we discovered this issue).
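To illustrate the failure mode, here is a minimal sketch. The sizing rule below is a stand-in for whatever GCF's Node.js buildpack actually computes; the point is only that the flag is a function of memory, so a shared container can be correct for at most one memory configuration:

```typescript
// Illustrative only: the real flag value is computed inside GCF's Node.js
// buildpack, not by firebase-tools. Assume the heap limit tracks the
// function's configured memory (hypothetical sizing rule).
function maxOldSpaceSizeFor(availableMemoryMb: number): string {
  return `--max_old_space_size=${availableMemoryMb}`;
}

// Two functions batched into one build share one container image, and
// therefore one baked-in flag value -- wrong for at least one of them:
const batched = [256, 4096].map(maxOldSpaceSizeFor);
// batched[0] !== batched[1], yet the shared container bakes in only one.
```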

This PR proposes batching functions by region AND availableMemoryMb to fix this bug. We do this by refactoring calculateRegionalChanges to generate a collection of Changeset (previously known as RegionalChanges) that subdivides functions in a region by their memory. In fact, we generalize the approach by allowing arbitrary subdivision via a keyFn (e.g. keyFn: (endpoint) => endpoint.availableMemoryMb), as I anticipate that we will revisit this section of the code once I start working on "codebases".
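A minimal sketch of the generalized grouping described above. The names (Endpoint, Changeset, keyFn) follow the PR description, but the shapes are simplified stand-ins; the real implementation lives in src/deploy/functions/release/planner.ts:

```typescript
// Simplified stand-ins for the planner's types.
interface Endpoint {
  id: string;
  region: string;
  availableMemoryMb?: number;
}

type Changeset = Endpoint[];

// Subdivide endpoints into changesets by an arbitrary key function.
function groupByKey(
  endpoints: Endpoint[],
  keyFn: (e: Endpoint) => string
): Record<string, Changeset> {
  const sets: Record<string, Changeset> = {};
  for (const e of endpoints) {
    const key = keyFn(e);
    (sets[key] = sets[key] || []).push(e);
  }
  return sets;
}

// Key by region AND memory so every batch shares one memory configuration:
const keyFn = (e: Endpoint) =>
  `${e.region}-${e.availableMemoryMb ?? "default"}`;
```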

Fixes #3716

Contributor

@joehan joehan left a comment


Some nits but LGTM overall

CHANGELOG.md (outdated; review thread resolved)
src/deploy/functions/release/planner.ts (review thread resolved)
```
const toDelete = utils.groupBy(
  Object.keys(have)
    .filter((id) => !want[id])
    .filter((id) => options.deleteAll || isFirebaseManaged(have[id].labels || {}))
```
Contributor

This doesn't seem like new behavior, but it looks like options.deleteAll deletes every function on the project, not just Firebase managed functions. Is that intentional?

Contributor Author

I think this option is only used in functions:delete, where you can delete an arbitrary function as long as you specify its name correctly. I think that might be useful, though we might want to revisit that API.
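The behavior under discussion can be sketched as a predicate. This is a hedged illustration, not the actual firebase-tools code, and the label name checked below is a hypothetical stand-in:

```typescript
interface HaveEndpoint {
  labels?: Record<string, string>;
}

// Hypothetical label check; the real predicate lives in firebase-tools.
function isFirebaseManaged(labels: Record<string, string>): boolean {
  return "deployment-tool" in labels;
}

// With deleteAll set, the isFirebaseManaged guard is bypassed, so any
// function absent from `want` is deleted regardless of who created it.
function shouldDelete(
  id: string,
  want: Record<string, unknown>,
  have: Record<string, HaveEndpoint>,
  deleteAll: boolean
): boolean {
  if (want[id]) return false; // still wanted: never delete
  return deleteAll || isFirebaseManaged(have[id].labels || {});
}
```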

src/deploy/functions/release/planner.ts (outdated; review thread resolved)
@taeold taeold enabled auto-merge (squash) March 7, 2022 19:37
@taeold taeold merged commit 94b3156 into master Mar 7, 2022
@odeniso

odeniso commented Mar 10, 2022

Hello! Do you have an idea of when this change will be released? Otherwise, do you know which version introduced the bug that this PR addresses?

I am hoping this fixes an issue we are facing, where Cloud Functions get deployed with runtime options different from those specified via runWith.

@taeold
Contributor Author

taeold commented Mar 10, 2022

@odeniso Unfortunately, this bug existed for quite some time. I'll try to get the team to push out a release containing this change soon.

@gustavopch

I also need this to get released as soon as possible. I have a function that parses a Photoshop file, and it's frequently crashing with "JavaScript heap out of memory". Firebase Support pointed me to this workaround: https://cloud.google.com/functions/docs/troubleshooting#javascript_heap_out_of_memory, but apparently it doesn't work with firebase-tools, and I'll need to manually update NODE_OPTIONS through the Cloud Console GUI after every deployment if I want the function to keep working.

@gustavopch

This was released in https://github.com/firebase/firebase-tools/releases/tag/v10.3.0 🥳

@bkendall bkendall deleted the dl-cf3-deploy-sets branch March 18, 2022 23:19
Member

@inlined inlined left a comment


I love how clean this is. Per an offline conversation, though, I'm worried about the impact this could have on deletes. We're talking about some obscure edge cases, but if I rename a function and change its memory allocation in the same change, we can now delete the old function before creating the new one.

Should we fall back to just deleting after all creates succeed again? Should I sweat these edge cases less?

```
want.endpoints[region] || {},
have.endpoints[region] || {},
(e) => `${e.region}-${e.availableMemoryMb || "default"}`,
```
Member

I don't think we actually do heap tuning for all memory sizes. It might be worth talking to the Cloud folks to see if we can merge deployments of functions under 2GB.
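If that turned out to be true, the suggestion could be expressed as a coarser key function that buckets all sub-2GB functions together. The threshold and bucketing below are assumptions taken from this comment, not confirmed buildpack behavior:

```typescript
// Hypothetical coarser bucketing: if heap tuning only happens at larger
// memory sizes, functions below the threshold could share a container.
function memoryBucket(availableMemoryMb: number | undefined): string {
  const mb = availableMemoryMb ?? 256; // assumed default
  return mb < 2048 ? "small" : String(mb);
}

const coarseKeyFn = (e: { region: string; availableMemoryMb?: number }) =>
  `${e.region}-${memoryBucket(e.availableMemoryMb)}`;
```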

Contributor Author

Looked into it and discovered that the environment variable is set per memory configuration:

https://github.com/GoogleCloudPlatform/buildpacks/blob/663180f2cf1fbfdee4a889f6c9d17ee2eb9091bf/cmd/nodejs/functions_framework/main.go#L166

Seems like they took the simplest path forward.


Successfully merging this pull request may close these issues.

Setting memory to more than 1 GiB still only allow 1 GiB to be used
5 participants