make remote state initial behavior the same as local state #32614

Merged: 5 commits merged into hashicorp:main on Feb 7, 2023

Conversation

yardbirdsax (Contributor)

This fixes #30670, which was caused by the remote state functionality previously using slightly different logic than the local backend for the initial serial number of state. The local backend always generated initial state with a serial greater than 0, whereas remote state generated it with a serial of 0. When combined with certain conditions (such as the one laid out in the referenced issue), this could result in Terraform overwriting remote state with stale state from a plan file without throwing the expected "The given plan file can no longer be applied because the state was changed" error.
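To make the behavior difference concrete, here is a minimal, self-contained Go sketch of the idea (all names here are hypothetical, not the actual Terraform source; the real change lives in internal/states/remote/state.go): the first snapshot pushed to a remote backend now carries a serial greater than zero, as the local backend's always has, which is what lets the stale-plan check fire when an old plan file is reused.

```go
package main

import "fmt"

// stateMeta models the serial/lineage metadata Terraform uses to decide
// whether a plan file is stale (simplified; hypothetical names).
type stateMeta struct {
	Lineage string
	Serial  uint64
}

// newInitialState mirrors the behavior this PR makes consistent: the very
// first snapshot is created with a serial greater than zero, as the local
// backend already did, instead of being pushed with serial 0.
func newInitialState(lineage string) stateMeta {
	return stateMeta{Lineage: lineage, Serial: 1}
}

// planIsStale is the guard that previously failed to fire: a plan recorded
// against serial N may only be applied while the backend is still at N.
func planIsStale(planSerial uint64, current stateMeta) bool {
	return planSerial != current.Serial
}

func main() {
	st := newInitialState("example-lineage")
	// A plan captured against the pre-initial state recorded serial 0, so
	// it is now correctly rejected once the first write lands at serial 1.
	fmt.Println(planIsStale(0, st)) // true
}
```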

To support this change I also had to refactor how the tests for the affected method were written, since they previously assumed one request per test case. I think I did this in a fairly idiomatic way that doesn't depart too much from the rest of the test methods, but I'm certainly open to suggestions on other approaches; see the sketch below for the shape of the refactor.
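As a rough illustration of the test refactor described above (a hedged sketch under assumed names, not the real tests, which live alongside the affected method), each table-driven case now describes a sequence of expected requests rather than exactly one:

```go
package remote_test

import "testing"

// mockClient records every serial it is asked to write, so a single test
// case can assert on a sequence of requests rather than exactly one.
type mockClient struct {
	serials []uint64
}

func (c *mockClient) Put(serial uint64) {
	c.serials = append(c.serials, serial)
}

func TestInitialSerialMatchesLocal(t *testing.T) {
	cases := map[string][]uint64{
		// The initial write should push serial 1, matching the local backend.
		"initial write": {1},
		// A later change produces a second request with a bumped serial.
		"write then update": {1, 2},
	}
	for name, wantSerials := range cases {
		t.Run(name, func(t *testing.T) {
			c := &mockClient{}
			// In the real tests the state manager under test would drive
			// the client; here we feed the expected writes directly.
			for _, s := range wantSerials {
				c.Put(s)
			}
			if got, want := len(c.serials), len(wantSerials); got != want {
				t.Fatalf("got %d requests, want %d", got, want)
			}
			if c.serials[0] == 0 {
				t.Fatal("initial remote state must not be pushed with serial 0")
			}
		})
	}
}
```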

Fixes #30670

Target Release

1.4.x
1.3-backport

Draft CHANGELOG entry

BUG FIXES

  • When using remote state, if an initial terraform apply operation with a plan file specified partially succeeded, Terraform would allow the stale plan file to be used again and overwrite the remote state, due to a small difference in how remote state was initially created. Terraform now creates initial remote state in a manner consistent with local state, and throws the expected "The given plan file can no longer be applied because the state was changed" error if a stale plan file is used again after the initial, partially successful apply operation.

This makes the behavior of remote state consistent with local state with regard to the initial serial number of the generated / pushed state. Previously, remote state's initial push had a serial number of 0, whereas local state had a serial greater than 0. This caused issues with the logic that, for example, ensures a plan file cannot be applied if state is stale (see hashicorp#30670).
nmische (Contributor) left a comment

I have manually validated that this fixes the issue when using remote state in S3.

FWIW, it looks like this could also be an issue for Terraform Cloud:

https://github.com/hashicorp/terraform/blob/main/internal/cloud/state.go#L166

(A review comment on internal/states/remote/state.go was marked resolved.)
crw added the backend/s3 label on Feb 1, 2023
yardbirdsax (Contributor, Author)

By the way, given where the code I changed lives, I think this would be a problem for all remote state backends, not just S3, though I can't verify that since I don't have any other cloud vendor's storage to test against.

jbardin removed the backend/s3 label on Feb 7, 2023
Fix from review

Co-authored-by: Nathan Mische <nmische@gmail.com>
vercel bot commented on Feb 7, 2023

Deployment failed with the following error:

The provided GitHub repository does not contain the requested branch or commit reference. Please ensure the repository is not empty.

jbardin (Member) commented on Feb 7, 2023

Thanks @yardbirdsax! This is not unique to S3; the fix will apply to all of the legacy remote states in the same manner. It does look like the cloud backend is unique, though, as @nmische mentioned. We can follow up with a PR there too.

jbardin merged commit 1307317 into hashicorp:main on Feb 7, 2023
github-actions bot commented on Feb 7, 2023

Reminder for the merging maintainer: if this is a user-visible change, please update the changelog on the appropriate release branch.

yardbirdsax (Contributor, Author)

Thanks @jbardin! Question: did I forget to do something to make sure this gets backported to 1.3? I thought I put the right text in the description, but I don't see any related issues or PRs. Apologies in advance if my GitHub fu is failing me.

jbardin (Member) commented on Feb 8, 2023

Thanks for mentioning that, @yardbirdsax. This would have missed the last patch release, but due to some unforeseen circumstances we still have one pending. Since it's such a small change, I'm going to go ahead and backport it for v1.3 (you didn't miss anything; the backport automation is triggered by labels, which are set by maintainers).

yardbirdsax (Contributor, Author)

Awesome, thank you!

github-actions bot

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked this pull request as resolved and limited conversation to collaborators on Mar 11, 2023
Development

Successfully merging this pull request may close the following issue: "S3 backend does not give stale plan file error until 3rd apply of same plan file" (#30670).

5 participants