Split function deployments by region AND memory #4253
Conversation
Some nits but LGTM overall
```js
const toDelete = utils.groupBy(
  Object.keys(have)
    .filter((id) => !want[id])
    .filter((id) => options.deleteAll || isFirebaseManaged(have[id].labels || {}))
```
This doesn't seem like new behavior, but it looks like `options.deleteAll` deletes every function on the project, not just Firebase-managed functions. Is that intentional?
I think this option is only used in `functions:delete`, where you can delete an arbitrary function as long as you specify its name correctly. I think that might be useful, though we might want to revisit that API.
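To make the discussion above concrete, here is a minimal sketch of the delete-selection logic from the diff. The `isFirebaseManaged` helper and the `deployment-tool` label value are illustrative assumptions, not the exact firebase-tools implementation:

```javascript
// Hypothetical stand-in for firebase-tools' label check: a function is
// considered Firebase-managed if it carries a deployment-tool label.
const isFirebaseManaged = (labels) => labels["deployment-tool"] === "cli-firebase";

// A function is selected for deletion only if it no longer appears in `want`,
// AND either deleteAll was requested or it is Firebase-managed.
const selectForDelete = (want, have, options) =>
  Object.keys(have)
    .filter((id) => !want[id])
    .filter((id) => options.deleteAll || isFirebaseManaged(have[id].labels || {}));

// Example: fn2 is unmanaged, so it survives unless deleteAll is set.
const have = {
  fn1: { labels: { "deployment-tool": "cli-firebase" } },
  fn2: { labels: {} },
};
console.log(selectForDelete({}, have, { deleteAll: false })); // ["fn1"]
console.log(selectForDelete({}, have, { deleteAll: true }));  // ["fn1", "fn2"]
```

This illustrates the reviewer's point: with `deleteAll` set, the managed-label filter is bypassed entirely.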
Hello! Do you have an idea of when this change will be released? Otherwise, do you know which version introduced the bug that this PR addresses? I am hoping this fixes an issue we are facing, where cloud functions get deployed with runtime options that are different than those specified via
@odeniso Unfortunately, this bug has existed for quite some time. I'll try to get the team to push out a release containing this change soon.
I also need this to get released as soon as possible. I have a function that parses a Photoshop file, and it's crashing with "JavaScript heap out of memory" frequently. Firebase Support pointed me to this workaround https://cloud.google.com/functions/docs/troubleshooting#javascript_heap_out_of_memory, but apparently it doesn't work with
This was released in https://github.com/firebase/firebase-tools/releases/tag/v10.3.0 🥳
I love how clean this is, though, per offline conversation, I'm worried about the impact this could have on deletes. We're talking about some obscure edge cases, but if I rename a function and change its memory allocation in the same change, we can now delete the old function before creating the new one.
Should we fall back to just deleting after all creates succeed again? Should I sweat these edge cases less?
```js
want.endpoints[region] || {},
have.endpoints[region] || {},
(e) => `${e.region}-${e.availableMemoryMb || "default"}`,
```
I don't think we actually do heap tuning for all memory sizes. It might be worth talking to the Cloud folks to see if we can merge deployments of functions under 2 GB.
Looked into it and discovered that the environment variable is set per memory configuration. Seems like they took the simplest path forward.
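For illustration, this sketch shows how a per-memory heap flag could be derived. The headroom ratio and the helper name are assumptions; the thread does not show the exact values GCF bakes into each container:

```javascript
// Illustrative only: derive a --max_old_space_size value from a function's
// configured memory. The 75% headroom ratio is an assumption, not GCF's rule.
const heapFlagFor = (availableMemoryMb) => {
  const oldSpaceMb = Math.floor(availableMemoryMb * 0.75);
  return `--max_old_space_size=${oldSpaceMb}`;
};

console.log(heapFlagFor(256));  // "--max_old_space_size=192"
console.log(heapFlagFor(4096)); // "--max_old_space_size=3072"
```

Because this flag is fixed per container, two functions with different memory settings cannot share a container image without one of them getting the wrong heap limit, which is the bug this PR works around.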
Today, function deployments are made in batches of functions grouped by region. Combined with source tokens to reuse built containers for deployment, we get a meaningful decrease in deploy time and in Cloud Build resource usage.

Unfortunately, this setup has a peculiar bug: Google Cloud Functions bakes the flag that expands the heap size appropriately for a function's memory configuration (`--max_old_space_size`) into the container itself. That means that if you batch two functions with differing memory configurations (e.g. 256 MB vs 4 GB), it's guaranteed that one function will have a wrongly configured `--max_old_space_size` flag value (follow #3716 (comment) for how we discovered this issue).

This PR proposes batching functions by region AND `availableMemoryMb` to fix this bug. We do this by refactoring `calculateRegionalChanges` to generate a collection of `Changeset` (previously known as `RegionalChanges`) that subdivides functions in a region by their memory. In fact, we generalize the approach by allowing arbitrary subdivision by `keyFn` (e.g. `keyFn: (endpoint) => endpoint.availableMemoryMb`), as I anticipate that we will revisit this section of the code once I start working on "codebases".

Fixes #3716
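The keyFn-based grouping described above can be sketched as follows. The `groupBy` helper here is a generic illustration, not the actual `utils.groupBy` from firebase-tools:

```javascript
// Generic group-by: buckets items under the string key produced by keyFn.
const groupBy = (items, keyFn) =>
  items.reduce((acc, item) => {
    const key = keyFn(item);
    (acc[key] = acc[key] || []).push(item);
    return acc;
  }, {});

// Sample endpoints (hypothetical data for illustration).
const endpoints = [
  { id: "a", region: "us-central1", availableMemoryMb: 256 },
  { id: "b", region: "us-central1", availableMemoryMb: 4096 },
  { id: "c", region: "europe-west1", availableMemoryMb: 256 },
];

// Batch by region AND memory, so every batch shares one heap configuration.
const changesets = groupBy(
  endpoints,
  (e) => `${e.region}-${e.availableMemoryMb || "default"}`
);
console.log(Object.keys(changesets));
// ["us-central1-256", "us-central1-4096", "europe-west1-256"]
```

With this key, the two `us-central1` functions land in separate changesets despite sharing a region, which is exactly the fix: containers are only shared within a batch, and every batch now has a single memory configuration.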