diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index ca460d642a48..6114a69135f4 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -2,6 +2,14 @@
 
 This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository linked from the [Terraform Registry index](https://registry.terraform.io/browse/providers). Instructions for developing each provider are usually in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
 
+---
+
+**Note:** Due to current low staffing on the Terraform Core team at HashiCorp, **we are not routinely reviewing and merging community-submitted pull requests**. We do hope to begin processing them again soon, once we're back up to full staffing, but for the moment we need to ask for patience. Thanks!
+
+**Additional note:** The intent of the prior note was to provide clarity for the community around what to expect for a small part of the work related to Terraform. This does not affect other PR reviews, such as those for Terraform providers. We expect that the relevant team will be appropriately staffed within the coming weeks, which should allow us to get back to normal community PR review practices. For the broader context and information on HashiCorp’s continued commitment to and investment in Terraform, see [this blog post](https://www.hashicorp.com/blog/terraform-community-contributions).
+
+---
+
 **All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).**
 
 This document provides guidance on Terraform contribution recommended practices. It covers what we're looking for in order to help set some expectations and help you get the most out of participation in this project.
diff --git a/.github/ISSUE_TEMPLATE/documentation_issue.yml b/.github/ISSUE_TEMPLATE/documentation_issue.yml
deleted file mode 100644
index 321a3b7abf43..000000000000
--- a/.github/ISSUE_TEMPLATE/documentation_issue.yml
+++ /dev/null
@@ -1,73 +0,0 @@
-name: Documentation Issue
-description: Report an issue or suggest a change in the documentation.
-labels: ["documentation", "new"]
-body:
-  - type: markdown
-    attributes:
-      value: |
-        # Thank you for opening a documentation change request.
-
-        Please only use the [hashicorp/terraform](https://github.com/hashicorp/terraform) `Documentation` issue type to report problems with the documentation on [https://www.terraform.io/docs](). Only technical writers (not engineers) monitor this issue type. Report Terraform bugs or feature requests with the `Bug report` or `Feature Request` issue types instead to get engineering attention.
-
-        For general usage questions, please see: https://www.terraform.io/community.html.
-
-  - type: textarea
-    id: tf-version
-    attributes:
-      label: Terraform Version
-      description: Run `terraform version` to show the version, and paste the result below. If you're not using the latest version, please check to see if something related to your request has already been implemented in a later version.
-      render: shell
-      placeholder: ...output of `terraform version`...
- value: - validations: - required: true - - - type: textarea - id: tf-affected-pages - attributes: - label: Affected Pages - description: | - Link to the pages relevant to your documentation change request. - placeholder: - value: - validations: - required: false - - - type: textarea - id: tf-problem - attributes: - label: What is the docs issue? - description: What problems or suggestions do you have about the documentation? - placeholder: - value: - validations: - required: true - - - type: textarea - id: tf-proposal - attributes: - label: Proposal - description: What documentation changes would fix this issue and where would you expect to find them? Are one or more page headings unclear? Do one or more pages need additional context, examples, or warnings? Do we need a new page or section dedicated to a specific topic? Your ideas help us understand what you and other users need from our documentation and how we can improve the content. - placeholder: - value: - validations: - required: false - - - type: textarea - id: tf-references - attributes: - label: References - description: | - Are there any other open or closed GitHub issues related to the problem or solution you described? If so, list them below. For example: - ``` - - #6017 - ``` - placeholder: - value: - validations: - required: false - - - type: markdown - attributes: - value: | - **Note:** If the submit button is disabled and you have filled out all required fields, please check that you did not forget a **Title** for the issue. diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md deleted file mode 100644 index dfa6b8241853..000000000000 --- a/.github/pull_request_template.md +++ /dev/null @@ -1,56 +0,0 @@ - - - - -Fixes # - -## Target Release - - - -1.4.x - -## Draft CHANGELOG entry - - - -### NEW FEATURES | UPGRADE NOTES | ENHANCEMENTS | BUG FIXES | EXPERIMENTS - - - -- diff --git a/.github/workflows/equivalence-test.yml b/.github/workflows/equivalence-test.yml deleted file mode 100644 index 07048d9ce3b6..000000000000 --- a/.github/workflows/equivalence-test.yml +++ /dev/null @@ -1,27 +0,0 @@ -name: Terraform Equivalence Tests - -# This action will execute the suite of Terraform equivalence tests after a -# tag has been pushed for a new version. -# -# For now, it is just a skeleton action that will be populated shortly. - -on: - workflow_dispatch: - inputs: - version: - required: true - description: "the Terraform version to equivalence test, eg. v1.3.1" - run-id: - required: true - description: "the run identifier of a successful `Build Terraform CLI Packages` action that contains artifacts for the target version" - -permissions: - contents: read - -jobs: - skeleton-job: - name: "Temporary job to be released with real work" - runs-on: ubuntu-latest - - steps: - - run: echo "Hello, world!" diff --git a/CHANGELOG.md b/CHANGELOG.md index bcc70b2f8b33..da1450460d48 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,28 +1,139 @@ -## 1.4.0 (Unreleased) +## 1.3.5 (Unreleased) BUG FIXES: -* The module installer will now record in its manifest a correct module source URL after normalization when the URL given as input contains both a query string portion and a subdirectory portion. Terraform itself doesn't currently make use of this information and so this is just a cosmetic fix to make the recorded metadata more correct. 
([#31636](https://github.com/hashicorp/terraform/issues/31636))
+* Fix Terraform creating objects that should not exist in variables that specify default attributes in optional objects. [GH-32178]
+* Fix several Terraform crashes caused by HCL creating objects that should not exist in variables that specify default attributes in optional objects within collections. [GH-32178]
+* Fix inconsistent behavior between empty and null collections. [GH-32178]
+* Prevent file uploads from creating unneeded temporary files when the payload size is known. [GH-32206]
+* Nested attributes marked sensitive by schema no longer reveal sub-attributes in the plan diff. [GH-32004]
+* Nested attributes now more consistently display when they become unknown or null values in the plan diff. [GH-32004]
+* Sensitive values are now always displayed as `(sensitive value)` instead of sometimes as `(sensitive)`. [GH-32004]
+
+## 1.3.4 (November 02, 2022)
+
+BUG FIXES:
+
+* Fix invalid refresh-only plan caused by data sources being deferred to apply ([#32111](https://github.com/hashicorp/terraform/issues/32111))
+* Optimize the handling of condition checks during apply to prevent performance regressions with large numbers of instances ([#32123](https://github.com/hashicorp/terraform/issues/32123))
+* Output preconditions should not be evaluated during destroy ([#32051](https://github.com/hashicorp/terraform/issues/32051))
+* Fix crash from `console` when outputs contain preconditions ([#32051](https://github.com/hashicorp/terraform/issues/32051))
+* Destroy with no state would still attempt to evaluate some values ([#32051](https://github.com/hashicorp/terraform/issues/32051))
+* Prevent unnecessary evaluation and planning of resources during the pre-destroy refresh ([#32051](https://github.com/hashicorp/terraform/issues/32051))
+* AzureRM Backend: support for generic OIDC authentication via the `oidc_token` and `oidc_token_file_path` properties ([#31966](https://github.com/hashicorp/terraform/issues/31966))
+* Input and Module Variables: Convert variable types before attempting to apply default values. ([#32027](https://github.com/hashicorp/terraform/issues/32027))
+* When installing remote module packages delivered in tar format, Terraform now limits the tar header block size to 1MiB to avoid unbounded memory usage for maliciously-crafted module packages. ([#32135](https://github.com/hashicorp/terraform/issues/32135))
+* Terraform will now reject excessively-complex regular expression patterns passed to the `regex`, `regexall`, and `replace` functions, to avoid unbounded memory usage for maliciously-crafted patterns. This change should not affect any reasonable patterns intended for practical use. ([#32135](https://github.com/hashicorp/terraform/issues/32135))
+* Terraform on Windows now rejects invalid environment variables whose values contain the NUL character when propagating environment variables to a child process such as a provider plugin. Previously Terraform would incorrectly treat that character as a separator between two separate environment variables. ([#32135](https://github.com/hashicorp/terraform/issues/32135))
+
+## 1.3.3 (October 19, 2022)
+
+BUG FIXES:
+
+* Fix error when removing a resource from configuration which, according to the provider, has already been deleted. ([#31850](https://github.com/hashicorp/terraform/issues/31850))
+* Fix error when setting empty collections into variables with collections of nested objects with default values. ([#32033](https://github.com/hashicorp/terraform/issues/32033))
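+
+  As a brief illustration of the class of configuration this fix concerns (the variable name and values here are invented for the example, not taken from the linked issue), it addresses assigning an empty collection to a variable such as:
+
+  ```terraform
+  variable "rules" {
+    type = list(object({
+      name = string
+      port = optional(number, 443)
+    }))
+  }
+  ```
+
+  Previously, setting this variable to `[]` could fail with an error rather than producing an empty list.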
+
+## 1.3.2 (October 06, 2022)
+
+BUG FIXES:
+
+* Fixed a crash caused by Terraform incorrectly re-registering output value preconditions during the apply phase (rather than just reusing the already-planned checks from the plan phase). ([#31890](https://github.com/hashicorp/terraform/issues/31890))
+* Prevent errors when the provider reports that a deposed instance no longer exists ([#31902](https://github.com/hashicorp/terraform/issues/31902))
+* Using `ignore_changes = all` could cause persistent diffs with legacy providers ([#31914](https://github.com/hashicorp/terraform/issues/31914))
+* Fix cycles when resource dependencies cross over between independent provider configurations ([#31917](https://github.com/hashicorp/terraform/issues/31917))
+* Improve handling of missing resource instances during `import` ([#31878](https://github.com/hashicorp/terraform/issues/31878))
+
+## 1.3.1 (September 28, 2022)
+
+NOTE:
+* On `darwin/amd64` and `darwin/arm64` architectures, `terraform` binaries are now built with CGO enabled. This should not have any user-facing impact, except in cases where the pure Go DNS resolver causes problems on recent versions of macOS: using CGO may mitigate these issues. Please see the upstream bug https://github.com/golang/go/issues/52839 for more details.
+
+BUG FIXES:
+
+* Fixed a crash when using objects with optional attributes and default values in collections, most visible with nested modules. ([#31847](https://github.com/hashicorp/terraform/issues/31847))
+* Prevent cycles in some situations where a provider depends on resources in the configuration which are participating in planned changes. ([#31857](https://github.com/hashicorp/terraform/issues/31857))
+* Fixed an error when attempting to destroy a configuration where resources do not exist in the state. ([#31858](https://github.com/hashicorp/terraform/issues/31858))
+* Data sources which cannot be read will no longer prevent the state from being serialized. ([#31871](https://github.com/hashicorp/terraform/issues/31871))
+* Fixed a crash which occurred when a resource with a precondition and/or a postcondition appeared inside a module with two or more instances. ([#31860](https://github.com/hashicorp/terraform/issues/31860))
+
+## 1.3.0 (September 21, 2022)
+
+NEW FEATURES:
+
+* **Optional attributes for object type constraints:** When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn't set it. For example:
+
+  ```terraform
+  variable "with_optional_attribute" {
+    type = object({
+      a = string                # a required attribute
+      b = optional(string)      # an optional attribute
+      c = optional(number, 127) # an optional attribute with a default value
+    })
+  }
+  ```
+
+  Assigning `{ a = "foo" }` to this variable will result in the value `{ a = "foo", b = null, c = 127 }`.
+
+* Added functions: `startswith` and `endswith` allow you to check whether a given string has a specified prefix or suffix. ([#31220](https://github.com/hashicorp/terraform/issues/31220))
+
+UPGRADE NOTES:
+
+* `terraform show -json`: Output changes now include more detail about the unknown-ness of the planned value. Previously, a planned output would be marked as either fully known or partially unknown, with the `after_unknown` field having value `false` or `true` respectively.
Now outputs correctly expose the full structure of unknownness for complex values, allowing consumers of the JSON output format to determine which values in a collection are known only after apply.
+* `terraform import`: The `-allow-missing-config` option has been removed; at least an empty configuration block must now exist in order to import a resource.
+* Consumers of the JSON output format that rely on the `after_unknown` field being only `false` or `true` should be updated to support [the change representation](https://www.terraform.io/internals/json-format#change-representation) described in the documentation, as was already used for resource changes. ([#31235](https://github.com/hashicorp/terraform/issues/31235))
+* AzureRM Backend: This release concludes [the deprecation cycle started in Terraform v1.1](https://www.terraform.io/language/upgrade-guides/1-1#preparation-for-removing-azure-ad-graph-support-in-the-azurerm-backend) for the `azurerm` backend's support of "ADAL" authentication. This backend now supports only "MSAL" (Microsoft Graph) authentication.
+
+  This follows from [Microsoft's own deprecation of Azure AD Graph](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-faq), and so you must follow the migration instructions presented in that Azure documentation to adopt Microsoft Graph and then change your backend configuration to use MSAL authentication before upgrading to Terraform v1.3.
+* When making requests to HTTPS servers, Terraform will now reject invalid handshakes that have duplicate extensions, as required by RFC 5246 section 7.4.1.4 and RFC 8446 section 4.2. This may cause new errors when interacting with existing buggy or misconfigured TLS servers, but should not affect correct servers.
+
+  This only applies to requests made directly by Terraform CLI, such as provider installation and remote state storage. Terraform providers are separate programs which decide their own policy for handling of TLS handshakes.
+* The following backends, which were deprecated in v1.2.3, have now been removed: `artifactory`, `etcd`, `etcdv3`, `manta`, `swift`. The legacy backend name `azure` has also been removed, because the current Azure backend is named `azurerm`. ([#31711](https://github.com/hashicorp/terraform/issues/31711))
 
 ENHANCEMENTS:
 
-* `terraform init` will now ignore entries in the optional global provider cache directory unless they match a checksum already tracked in the current configuration's dependency lock file. This therefore avoids the long-standing problem that when installing a new provider for the first time from the cache we can't determine the full set of checksums to include in the lock file. Once the lock file has been updated to include a checksum covering the item in the global cache, Terraform will then use the cache entry for subsequent installation of the same provider package. ([#32129](https://github.com/hashicorp/terraform/issues/32129))
-* The "Failed to install provider" error message now includes the reason a provider could not be installed. ([#31898](https://github.com/hashicorp/terraform/issues/31898))
-* backend/gcs: Add `kms_encryption_key` argument, to allow encryption of state files using Cloud KMS keys. ([#24967](https://github.com/hashicorp/terraform/issues/24967))
-* backend/gcs: Add `storage_custom_endpoint` argument, to allow communication with the backend via a Private Service Connect endpoint.
([#28856](https://github.com/hashicorp/terraform/issues/28856)) -* backend/gcs: Update documentation for usage of `gcs` with `terraform_remote_state` ([#32065](https://github.com/hashicorp/terraform/issues/32065)) +* config: Optional attributes for object type constraints, as described under new features above. ([#31154](https://github.com/hashicorp/terraform/issues/31154)) +* config: New built-in function `timecmp` allows determining the ordering relationship between two timestamps while taking potentially-different UTC offsets into account. ([#31687](https://github.com/hashicorp/terraform/pull/31687)) +* config: When reporting an error message related to a function call, Terraform will now include contextual information about the signature of the function that was being called, as an aid to understanding why the call might have failed. ([#31299](https://github.com/hashicorp/terraform/issues/31299)) +* config: When reporting an error or warning message that isn't caused by values being unknown or marked as sensitive, Terraform will no longer mention any values having those characteristics in the contextual information presented alongside the error. Terraform will still return this information for the small subset of error messages that are specifically about unknown values or sensitive values being invalid in certain contexts. ([#31299](https://github.com/hashicorp/terraform/issues/31299)) +* config: `moved` blocks can now describe resources moving to and from modules in separate module packages. ([#31556](https://github.com/hashicorp/terraform/issues/31556)) +* `terraform fmt` now accepts multiple target paths, allowing formatting of several individual files at once. ([#28191](https://github.com/hashicorp/terraform/issues/28191)) +* `terraform init`: provider installation errors now mention which host Terraform was downloading from ([#31524](https://github.com/hashicorp/terraform/issues/31524)) +* CLI: Terraform will report more explicitly when it is proposing to delete an object due to it having moved to a resource instance that is not currently declared in the configuration. ([#31695](https://github.com/hashicorp/terraform/issues/31695)) +* CLI: When showing the progress of a remote operation running in Terraform Cloud, Terraform CLI will include information about pre-plan run tasks ([#31617](https://github.com/hashicorp/terraform/issues/31617)) +* The AzureRM Backend now only supports MSAL (and Microsoft Graph) and no longer makes use of ADAL (and Azure Active Directory Graph) for authentication ([#31070](https://github.com/hashicorp/terraform/issues/31070)) +* The COS backend now supports global acceleration. ([#31425](https://github.com/hashicorp/terraform/issues/31425)) +* provider plugin protocol: The Terraform CLI now calls `PlanResourceChange` for compatible providers when destroying resource instances. ([#31179](https://github.com/hashicorp/terraform/issues/31179)) +* As an implementation detail of the Terraform Cloud integration, Terraform CLI will now capture and upload [the JSON integration format for state](https://www.terraform.io/internals/json-format#state-representation) along with any newly-recorded state snapshots, which then in turn allows Terraform Cloud to provide that information to API-based external integrations. 
([#31698](https://github.com/hashicorp/terraform/issues/31698)) + +BUG FIXES: + +* config: Terraform was not previously evaluating preconditions and postconditions during the apply phase for resource instances that didn't have any changes pending, which was incorrect because the outcome of a condition can potentially be affected by changes to _other_ objects in the configuration. Terraform will now always check the conditions for every resource instance included in a plan during the apply phase, even for resource instances that have "no-op" changes. This means that some failures that would previously have been detected only by a subsequent run will now be detected during the same run that caused them, thereby giving the feedback at the appropriate time. ([#31491](https://github.com/hashicorp/terraform/issues/31491)) +* `terraform show -json`: Fixed missing markers for unknown values in the encoding of partially unknown tuples and sets. ([#31236](https://github.com/hashicorp/terraform/issues/31236)) +* `terraform output` CLI help documentation is now more consistent with web-based documentation. ([#29354](https://github.com/hashicorp/terraform/issues/29354)) +* `terraform init`: Error messages now handle the situation where the underlying HTTP client library does not indicate a hostname for a failed request. ([#31542](https://github.com/hashicorp/terraform/issues/31542)) +* `terraform init`: Don't panic if a child module contains a resource with a syntactically-invalid resource type name. ([#31573](https://github.com/hashicorp/terraform/issues/31573)) +* CLI: The representation of destroying already-`null` output values in a destroy plan will no longer report them as being deleted, which avoids reporting the deletion of an output value that was already absent. ([#31471](https://github.com/hashicorp/terraform/issues/31471)) +* `terraform import`: Better handling of resources or modules that use `for_each`, and situations where data resources are needed to complete the operation. ([#31283](https://github.com/hashicorp/terraform/issues/31283)) EXPERIMENTS: -* Since its introduction the `yamlencode` function's documentation carried a warning that it was experimental. This predated our more formalized idea of language experiments and so wasn't guarded by an explicit opt-in, but the intention was to allow for small adjustments to its behavior if we learned it was producing invalid YAML in some cases, due to the relative complexity of the YAML specification. +* This release concludes the `module_variable_optional_attrs` experiment, which started in Terraform v0.14.0. The final design of the optional attributes feature is similar to the experimental form in the previous releases, but with two major differences: + * The `optional` function-like modifier for declaring an optional attribute now accepts an optional second argument for specifying a default value to use when the attribute isn't set by the caller. If not specified, the default value is a null value of the appropriate type as before. + * The built-in `defaults` function, previously used to meet the use-case of replacing null values with default values, will not graduate to stable and has been removed. Use the second argument of `optional` inline in your type constraint to declare default values instead. 
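+
+    For example, where a module under the experiment previously declared `optional(string)` and then applied a fallback via `defaults`, the fallback can now be declared inline. A minimal sketch (the variable and its values are invented for illustration, not taken from the experiment's documentation):
+
+    ```terraform
+    variable "storage" {
+      type = object({
+        bucket = string
+        region = optional(string, "us-east-1")
+      })
+    }
+    ```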
+
+  If you have any experimental modules that were participating in this experiment, you will need to remove the experiment opt-in and adopt the new syntax for declaring default values in order to migrate your existing module to the stabilized version of this feature. If you are writing a shared module for others to use, we recommend declaring that your module requires Terraform v1.3.0 or later to give specific feedback when using the new feature on older Terraform versions, in place of the previous declaration to use the experimental form of this feature:
-
-  From Terraform v1.4 onwards, `yamlencode` is no longer documented as experimental and is now subject to the Terraform v1.x Compatibility Promises. There are no changes to its previous behavior in v1.3 and so no special action is required when upgrading.
+
+  ```hcl
+  terraform {
+    required_version = ">= 1.3.0"
+  }
+  ```
 
 ## Previous Releases
 
 For information on prior major and minor releases, see their changelogs:
 
-* [v1.3](https://github.com/hashicorp/terraform/blob/v1.3/CHANGELOG.md)
 * [v1.2](https://github.com/hashicorp/terraform/blob/v1.2/CHANGELOG.md)
 * [v1.1](https://github.com/hashicorp/terraform/blob/v1.1/CHANGELOG.md)
 * [v1.0](https://github.com/hashicorp/terraform/blob/v1.0/CHANGELOG.md)
diff --git a/CODEOWNERS b/CODEOWNERS
index b02fd5141c4d..bb6272e61944 100644
--- a/CODEOWNERS
+++ b/CODEOWNERS
@@ -8,7 +8,7 @@
 /internal/backend/remote-state/cos @likexian
 /internal/backend/remote-state/etcdv2 Unmaintained
 /internal/backend/remote-state/etcdv3 Unmaintained
-/internal/backend/remote-state/gcs @hashicorp/terraform-google @hashicorp/terraform-ecosystem-strategic
+/internal/backend/remote-state/gcs @hashicorp/terraform-google
 /internal/backend/remote-state/http @hashicorp/terraform-core
 /internal/backend/remote-state/manta Unmaintained
 /internal/backend/remote-state/oss @xiaozhu36
diff --git a/LICENSE b/LICENSE
index 1409d6ab92fc..c33dcc7c928c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,5 +1,3 @@
-Copyright (c) 2014 HashiCorp, Inc.
-
 Mozilla Public License, version 2.0
 
 1. Definitions
diff --git a/docs/plugin-protocol/tfplugin5.3.proto b/docs/plugin-protocol/tfplugin5.3.proto
index 0f98f04b7546..5fa53f23392d 100644
--- a/docs/plugin-protocol/tfplugin5.3.proto
+++ b/docs/plugin-protocol/tfplugin5.3.proto
@@ -183,15 +183,6 @@ message PrepareProviderConfig {
 }
 
 message UpgradeResourceState {
-    // Request is the message that is sent to the provider during the
-    // UpgradeResourceState RPC.
-    //
-    // This message intentionally does not include configuration data as any
-    // configuration-based or configuration-conditional changes should occur
-    // during the PlanResourceChange RPC. Additionally, the configuration is
-    // not guaranteed to exist (in the case of resource destruction), be wholly
-    // known, nor match the given prior state, which could lead to unexpected
-    // provider behaviors for practitioners.
     message Request {
         string type_name = 1;
@@ -249,14 +240,6 @@ message Configure {
 }
 
 message ReadResource {
-    // Request is the message that is sent to the provider during the
-    // ReadResource RPC.
-    //
-    // This message intentionally does not include configuration data as any
-    // configuration-based or configuration-conditional changes should occur
-    // during the PlanResourceChange RPC. Additionally, the configuration is
-    // not guaranteed to be wholly known nor match the given prior state, which
-    // could lead to unexpected provider behaviors for practitioners.
message Request { string type_name = 1; DynamicValue current_state = 2; diff --git a/docs/plugin-protocol/tfplugin6.3.proto b/docs/plugin-protocol/tfplugin6.3.proto index e3fa9d10b157..b87effe43442 100644 --- a/docs/plugin-protocol/tfplugin6.3.proto +++ b/docs/plugin-protocol/tfplugin6.3.proto @@ -201,15 +201,6 @@ message ValidateProviderConfig { } message UpgradeResourceState { - // Request is the message that is sent to the provider during the - // UpgradeResourceState RPC. - // - // This message intentionally does not include configuration data as any - // configuration-based or configuration-conditional changes should occur - // during the PlanResourceChange RPC. Additionally, the configuration is - // not guaranteed to exist (in the case of resource destruction), be wholly - // known, nor match the given prior state, which could lead to unexpected - // provider behaviors for practitioners. message Request { string type_name = 1; @@ -267,14 +258,6 @@ message ConfigureProvider { } message ReadResource { - // Request is the message that is sent to the provider during the - // ReadResource RPC. - // - // This message intentionally does not include configuration data as any - // configuration-based or configuration-conditional changes should occur - // during the PlanResourceChange RPC. Additionally, the configuration is - // not guaranteed to be wholly known nor match the given prior state, which - // could lead to unexpected provider behaviors for practitioners. message Request { string type_name = 1; DynamicValue current_state = 2; diff --git a/go.mod b/go.mod index 0dd496cdb969..00d7eba78804 100644 --- a/go.mod +++ b/go.mod @@ -1,7 +1,6 @@ module github.com/hashicorp/terraform require ( - cloud.google.com/go v0.81.0 cloud.google.com/go/storage v1.10.0 github.com/Azure/azure-sdk-for-go v59.2.0+incompatible github.com/Azure/go-autorest/autorest v0.11.24 @@ -39,7 +38,7 @@ require ( github.com/hashicorp/go-multierror v1.1.1 github.com/hashicorp/go-plugin v1.4.3 github.com/hashicorp/go-retryablehttp v0.7.1 - github.com/hashicorp/go-tfe v1.10.0 + github.com/hashicorp/go-tfe v1.9.0 github.com/hashicorp/go-uuid v1.0.3 github.com/hashicorp/go-version v1.6.0 github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f @@ -87,7 +86,6 @@ require ( golang.org/x/text v0.3.7 golang.org/x/tools v0.1.11 google.golang.org/api v0.44.0-impersonate-preview - google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c google.golang.org/grpc v1.47.0 google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 google.golang.org/protobuf v1.27.1 @@ -99,6 +97,7 @@ require ( ) require ( + cloud.google.com/go v0.81.0 // indirect github.com/Azure/go-autorest v14.2.0+incompatible // indirect github.com/Azure/go-autorest/autorest/adal v0.9.18 // indirect github.com/Azure/go-autorest/autorest/azure/cli v0.4.4 // indirect @@ -175,6 +174,7 @@ require ( golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect golang.org/x/time v0.0.0-20220722155302-e5dcc9cfc0b9 // indirect google.golang.org/appengine v1.6.7 // indirect + google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c // indirect gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/ini.v1 v1.66.2 // indirect diff --git a/go.sum b/go.sum index 6dde32328e44..56f93a1abf41 100644 --- a/go.sum +++ b/go.sum @@ -368,8 +368,8 @@ github.com/hashicorp/go-slug v0.10.0/go.mod h1:Ib+IWBYfEfJGI1ZyXMGNbu2BU+aa3Dzu4 github.com/hashicorp/go-sockaddr v1.0.0 h1:GeH6tui99pF4NJgfnhp+L6+FfobzVW3Ah46sLo0ICXs= 
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= -github.com/hashicorp/go-tfe v1.10.0 h1:mkEge/DSca8VQeBSAQbjEy8fWFHbrJA76M7dny5XlYc= -github.com/hashicorp/go-tfe v1.10.0/go.mod h1:uSWi2sPw7tLrqNIiASid9j3SprbbkPSJ/2s3X0mMemg= +github.com/hashicorp/go-tfe v1.9.0 h1:jkmyo7WKNA7gZDegG5imndoC4sojWXhqMufO+KcHqrU= +github.com/hashicorp/go-tfe v1.9.0/go.mod h1:uSWi2sPw7tLrqNIiASid9j3SprbbkPSJ/2s3X0mMemg= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= diff --git a/internal/addrs/module_source.go b/internal/addrs/module_source.go index 82000dbc4181..905b77e2f5c1 100644 --- a/internal/addrs/module_source.go +++ b/internal/addrs/module_source.go @@ -316,17 +316,10 @@ func parseModuleSourceRemote(raw string) (ModuleSourceRemote, error) { func (s ModuleSourceRemote) moduleSource() {} func (s ModuleSourceRemote) String() string { - base := s.Package.String() - if s.Subdir != "" { - // Address contains query string - if strings.Contains(base, "?") { - parts := strings.SplitN(base, "?", 2) - return parts[0] + "//" + s.Subdir + "?" + parts[1] - } - return base + "//" + s.Subdir + return s.Package.String() + "//" + s.Subdir } - return base + return s.Package.String() } func (s ModuleSourceRemote) ForDisplay() string { diff --git a/internal/addrs/module_source_test.go b/internal/addrs/module_source_test.go index 2e38673f3e4a..d6b5626ec682 100644 --- a/internal/addrs/module_source_test.go +++ b/internal/addrs/module_source_test.go @@ -154,13 +154,6 @@ func TestParseModuleSource(t *testing.T) { Subdir: "bleep/bloop", }, }, - "git over HTTPS, URL-style, subdir, query parameters": { - input: "git::https://example.com/code/baz.git//bleep/bloop?otherthing=blah", - want: ModuleSourceRemote{ - Package: ModulePackage("git::https://example.com/code/baz.git?otherthing=blah"), - Subdir: "bleep/bloop", - }, - }, "git over SSH, URL-style": { input: "git::ssh://git@example.com/code/baz.git", want: ModuleSourceRemote{ @@ -408,56 +401,6 @@ func TestModuleSourceRemoteFromRegistry(t *testing.T) { }) } -func TestParseModuleSourceRemote(t *testing.T) { - - tests := map[string]struct { - input string - wantString string - wantForDisplay string - wantErr string - }{ - "git over HTTPS, URL-style, query parameters": { - // Query parameters should be correctly appended after the Package - input: `git::https://example.com/code/baz.git?otherthing=blah`, - wantString: `git::https://example.com/code/baz.git?otherthing=blah`, - wantForDisplay: `git::https://example.com/code/baz.git?otherthing=blah`, - }, - "git over HTTPS, URL-style, subdir, query parameters": { - // Query parameters should be correctly appended after the Package and Subdir - input: `git::https://example.com/code/baz.git//bleep/bloop?otherthing=blah`, - wantString: `git::https://example.com/code/baz.git//bleep/bloop?otherthing=blah`, - wantForDisplay: `git::https://example.com/code/baz.git//bleep/bloop?otherthing=blah`, - }, - } - - for name, test := range tests { - t.Run(name, func(t *testing.T) { - remote, err := parseModuleSourceRemote(test.input) - - if test.wantErr != "" { - switch { - case err == nil: - t.Errorf("unexpected success\nwant error: %s", test.wantErr) - case err.Error() != test.wantErr: - 
t.Errorf("wrong error messages\ngot: %s\nwant: %s", err.Error(), test.wantErr) - } - return - } - - if err != nil { - t.Fatalf("unexpected error: %s", err.Error()) - } - - if got, want := remote.String(), test.wantString; got != want { - t.Errorf("wrong String() result\ngot: %s\nwant: %s", got, want) - } - if got, want := remote.ForDisplay(), test.wantForDisplay; got != want { - t.Errorf("wrong ForDisplay() result\ngot: %s\nwant: %s", got, want) - } - }) - } -} - func TestParseModuleSourceRegistry(t *testing.T) { // We test parseModuleSourceRegistry alone here, in addition to testing // it indirectly as part of TestParseModuleSource, because general diff --git a/internal/backend/remote-state/gcs/backend.go b/internal/backend/remote-state/gcs/backend.go index 8bd622b457f5..0478a95ab119 100644 --- a/internal/backend/remote-state/gcs/backend.go +++ b/internal/backend/remote-state/gcs/backend.go @@ -31,7 +31,6 @@ type Backend struct { prefix string encryptionKey []byte - kmsKeyName string } func New() backend.Backend { @@ -84,32 +83,10 @@ func New() backend.Backend { }, "encryption_key": { - Type: schema.TypeString, - Optional: true, - DefaultFunc: schema.MultiEnvDefaultFunc([]string{ - "GOOGLE_ENCRYPTION_KEY", - }, nil), - Description: "A 32 byte base64 encoded 'customer supplied encryption key' used when reading and writing state files in the bucket.", - ConflictsWith: []string{"kms_encryption_key"}, - }, - - "kms_encryption_key": { - Type: schema.TypeString, - Optional: true, - DefaultFunc: schema.MultiEnvDefaultFunc([]string{ - "GOOGLE_KMS_ENCRYPTION_KEY", - }, nil), - Description: "A Cloud KMS key ('customer managed encryption key') used when reading and writing state files in the bucket. Format should be 'projects/{{project}}/locations/{{location}}/keyRings/{{keyRing}}/cryptoKeys/{{name}}'.", - ConflictsWith: []string{"encryption_key"}, - }, - - "storage_custom_endpoint": { - Type: schema.TypeString, - Optional: true, - DefaultFunc: schema.MultiEnvDefaultFunc([]string{ - "GOOGLE_BACKEND_STORAGE_CUSTOM_ENDPOINT", - "GOOGLE_STORAGE_CUSTOM_ENDPOINT", - }, nil), + Type: schema.TypeString, + Optional: true, + Description: "A 32 byte base64 encoded 'customer supplied encryption key' used to encrypt all state.", + Default: "", }, }, } @@ -204,12 +181,6 @@ func (b *Backend) configure(ctx context.Context) error { } opts = append(opts, option.WithUserAgent(httpclient.UserAgentString())) - - // Custom endpoint for storage API - if storageEndpoint, ok := data.GetOk("storage_custom_endpoint"); ok { - endpoint := option.WithEndpoint(storageEndpoint.(string)) - opts = append(opts, endpoint) - } client, err := storage.NewClient(b.storageContext, opts...) 
if err != nil { return fmt.Errorf("storage.NewClient() failed: %v", err) @@ -217,8 +188,11 @@ func (b *Backend) configure(ctx context.Context) error { b.storageClient = client - // Customer-supplied encryption key := data.Get("encryption_key").(string) + if key == "" { + key = os.Getenv("GOOGLE_ENCRYPTION_KEY") + } + if key != "" { kc, err := backend.ReadPathOrContents(key) if err != nil { @@ -238,11 +212,5 @@ func (b *Backend) configure(ctx context.Context) error { b.encryptionKey = k } - // Customer-managed encryption - kmsName := data.Get("kms_encryption_key").(string) - if kmsName != "" { - b.kmsKeyName = kmsName - } - return nil } diff --git a/internal/backend/remote-state/gcs/backend_state.go b/internal/backend/remote-state/gcs/backend_state.go index d2d5a2f6b2a2..21b71834735c 100644 --- a/internal/backend/remote-state/gcs/backend_state.go +++ b/internal/backend/remote-state/gcs/backend_state.go @@ -81,7 +81,6 @@ func (b *Backend) client(name string) (*remoteClient, error) { stateFilePath: b.stateFile(name), lockFilePath: b.lockFile(name), encryptionKey: b.encryptionKey, - kmsKeyName: b.kmsKeyName, }, nil } diff --git a/internal/backend/remote-state/gcs/backend_test.go b/internal/backend/remote-state/gcs/backend_test.go index 9e8ca077c319..bbdd5c61a689 100644 --- a/internal/backend/remote-state/gcs/backend_test.go +++ b/internal/backend/remote-state/gcs/backend_test.go @@ -1,8 +1,6 @@ package gcs import ( - "context" - "encoding/json" "fmt" "log" "os" @@ -10,34 +8,18 @@ import ( "testing" "time" - kms "cloud.google.com/go/kms/apiv1" "cloud.google.com/go/storage" "github.com/hashicorp/terraform/internal/backend" - "github.com/hashicorp/terraform/internal/httpclient" "github.com/hashicorp/terraform/internal/states/remote" - "google.golang.org/api/option" - kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1" ) const ( noPrefix = "" noEncryptionKey = "" - noKmsKeyName = "" ) // See https://cloud.google.com/storage/docs/using-encryption-keys#generating_your_own_encryption_key -const encryptionKey = "yRyCOikXi1ZDNE0xN3yiFsJjg7LGimoLrGFcLZgQoVk=" - -// KMS key ring name and key name are hardcoded here and re-used because key rings (and keys) cannot be deleted -// Test code asserts their presence and creates them if they're absent. They're not deleted at the end of tests. 
-// See: https://cloud.google.com/kms/docs/faq#cannot_delete -const ( - keyRingName = "tf-gcs-backend-acc-tests" - keyName = "tf-test-key-1" - kmsRole = "roles/cloudkms.cryptoKeyEncrypterDecrypter" // GCS service account needs this binding on the created key -) - -var keyRingLocation = os.Getenv("GOOGLE_REGION") +var encryptionKey = "yRyCOikXi1ZDNE0xN3yiFsJjg7LGimoLrGFcLZgQoVk=" func TestStateFile(t *testing.T) { t.Parallel() @@ -72,7 +54,7 @@ func TestRemoteClient(t *testing.T) { t.Parallel() bucket := bucketName(t) - be := setupBackend(t, bucket, noPrefix, noEncryptionKey, noKmsKeyName) + be := setupBackend(t, bucket, noPrefix, noEncryptionKey) defer teardownBackend(t, be, noPrefix) ss, err := be.StateMgr(backend.DefaultStateName) @@ -91,7 +73,7 @@ func TestRemoteClientWithEncryption(t *testing.T) { t.Parallel() bucket := bucketName(t) - be := setupBackend(t, bucket, noPrefix, encryptionKey, noKmsKeyName) + be := setupBackend(t, bucket, noPrefix, encryptionKey) defer teardownBackend(t, be, noPrefix) ss, err := be.StateMgr(backend.DefaultStateName) @@ -111,7 +93,7 @@ func TestRemoteLocks(t *testing.T) { t.Parallel() bucket := bucketName(t) - be := setupBackend(t, bucket, noPrefix, noEncryptionKey, noKmsKeyName) + be := setupBackend(t, bucket, noPrefix, noEncryptionKey) defer teardownBackend(t, be, noPrefix) remoteClient := func() (remote.Client, error) { @@ -145,10 +127,10 @@ func TestBackend(t *testing.T) { bucket := bucketName(t) - be0 := setupBackend(t, bucket, noPrefix, noEncryptionKey, noKmsKeyName) + be0 := setupBackend(t, bucket, noPrefix, noEncryptionKey) defer teardownBackend(t, be0, noPrefix) - be1 := setupBackend(t, bucket, noPrefix, noEncryptionKey, noKmsKeyName) + be1 := setupBackend(t, bucket, noPrefix, noEncryptionKey) backend.TestBackendStates(t, be0) backend.TestBackendStateLocks(t, be0, be1) @@ -161,55 +143,30 @@ func TestBackendWithPrefix(t *testing.T) { prefix := "test/prefix" bucket := bucketName(t) - be0 := setupBackend(t, bucket, prefix, noEncryptionKey, noKmsKeyName) + be0 := setupBackend(t, bucket, prefix, noEncryptionKey) defer teardownBackend(t, be0, prefix) - be1 := setupBackend(t, bucket, prefix+"/", noEncryptionKey, noKmsKeyName) - - backend.TestBackendStates(t, be0) - backend.TestBackendStateLocks(t, be0, be1) -} -func TestBackendWithCustomerSuppliedEncryption(t *testing.T) { - t.Parallel() - - bucket := bucketName(t) - - be0 := setupBackend(t, bucket, noPrefix, encryptionKey, noKmsKeyName) - defer teardownBackend(t, be0, noPrefix) - - be1 := setupBackend(t, bucket, noPrefix, encryptionKey, noKmsKeyName) + be1 := setupBackend(t, bucket, prefix+"/", noEncryptionKey) backend.TestBackendStates(t, be0) backend.TestBackendStateLocks(t, be0, be1) } - -func TestBackendWithCustomerManagedKMSEncryption(t *testing.T) { +func TestBackendWithEncryption(t *testing.T) { t.Parallel() - projectID := os.Getenv("GOOGLE_PROJECT") bucket := bucketName(t) - // Taken from global variables in test file - kmsDetails := map[string]string{ - "project": projectID, - "location": keyRingLocation, - "ringName": keyRingName, - "keyName": keyName, - } - - kmsName := setupKmsKey(t, kmsDetails) - - be0 := setupBackend(t, bucket, noPrefix, noEncryptionKey, kmsName) + be0 := setupBackend(t, bucket, noPrefix, encryptionKey) defer teardownBackend(t, be0, noPrefix) - be1 := setupBackend(t, bucket, noPrefix, noEncryptionKey, kmsName) + be1 := setupBackend(t, bucket, noPrefix, encryptionKey) backend.TestBackendStates(t, be0) backend.TestBackendStateLocks(t, be0, be1) } // setupBackend returns a new 
GCS backend. -func setupBackend(t *testing.T, bucket, prefix, key, kmsName string) backend.Backend { +func setupBackend(t *testing.T, bucket, prefix, key string) backend.Backend { t.Helper() projectID := os.Getenv("GOOGLE_PROJECT") @@ -220,16 +177,9 @@ func setupBackend(t *testing.T, bucket, prefix, key, kmsName string) backend.Bac } config := map[string]interface{}{ - "bucket": bucket, - "prefix": prefix, - } - // Only add encryption keys to config if non-zero value set - // If not set here, default values are supplied in `TestBackendConfig` by `PrepareConfig` function call - if len(key) > 0 { - config["encryption_key"] = key - } - if len(kmsName) > 0 { - config["kms_encryption_key"] = kmsName + "bucket": bucket, + "prefix": prefix, + "encryption_key": key, } b := backend.TestBackendConfig(t, New(), backend.TestWrapConfig(config)) @@ -255,120 +205,6 @@ func setupBackend(t *testing.T, bucket, prefix, key, kmsName string) backend.Bac return b } -// setupKmsKey asserts that a KMS key chain and key exist and necessary IAM bindings are in place -// If the key ring or key do not exist they are created and permissions are given to the GCS Service account -func setupKmsKey(t *testing.T, keyDetails map[string]string) string { - t.Helper() - - projectID := os.Getenv("GOOGLE_PROJECT") - if projectID == "" || os.Getenv("TF_ACC") == "" { - t.Skip("This test creates a KMS key ring and key in Cloud KMS. " + - "Since this may incur costs, it will only run if " + - "the TF_ACC and GOOGLE_PROJECT environment variables are set.") - } - - // KMS Client - ctx := context.Background() - opts, err := testGetClientOptions(t) - if err != nil { - e := fmt.Errorf("testGetClientOptions() failed: %s", err) - t.Fatal(e) - } - c, err := kms.NewKeyManagementClient(ctx, opts...) - if err != nil { - e := fmt.Errorf("kms.NewKeyManagementClient() failed: %v", err) - t.Fatal(e) - } - defer c.Close() - - // Get KMS key ring, create if doesn't exist - reqGetKeyRing := &kmspb.GetKeyRingRequest{ - Name: fmt.Sprintf("projects/%s/locations/%s/keyRings/%s", keyDetails["project"], keyDetails["location"], keyDetails["ringName"]), - } - var keyRing *kmspb.KeyRing - keyRing, err = c.GetKeyRing(ctx, reqGetKeyRing) - if err != nil { - if !strings.Contains(err.Error(), "NotFound") { - // Handle unexpected error that isn't related to the key ring not being made yet - t.Fatal(err) - } - // Create key ring that doesn't exist - t.Logf("Cloud KMS key ring `%s` not found: creating key ring", - fmt.Sprintf("projects/%s/locations/%s/keyRings/%s", keyDetails["project"], keyDetails["location"], keyDetails["ringName"]), - ) - reqCreateKeyRing := &kmspb.CreateKeyRingRequest{ - Parent: fmt.Sprintf("projects/%s/locations/%s", keyDetails["project"], keyDetails["location"]), - KeyRingId: keyDetails["ringName"], - } - keyRing, err = c.CreateKeyRing(ctx, reqCreateKeyRing) - if err != nil { - t.Fatal(err) - } - t.Logf("Cloud KMS key ring `%s` created successfully", keyRing.Name) - } - - // Get KMS key, create if doesn't exist (and give GCS service account permission to use) - reqGetKey := &kmspb.GetCryptoKeyRequest{ - Name: fmt.Sprintf("%s/cryptoKeys/%s", keyRing.Name, keyDetails["keyName"]), - } - var key *kmspb.CryptoKey - key, err = c.GetCryptoKey(ctx, reqGetKey) - if err != nil { - if !strings.Contains(err.Error(), "NotFound") { - // Handle unexpected error that isn't related to the key not being made yet - t.Fatal(err) - } - // Create key that doesn't exist - t.Logf("Cloud KMS key `%s` not found: creating key", - fmt.Sprintf("%s/cryptoKeys/%s", 
keyRing.Name, keyDetails["keyName"]), - ) - reqCreateKey := &kmspb.CreateCryptoKeyRequest{ - Parent: keyRing.Name, - CryptoKeyId: keyDetails["keyName"], - CryptoKey: &kmspb.CryptoKey{ - Purpose: kmspb.CryptoKey_ENCRYPT_DECRYPT, - }, - } - key, err = c.CreateCryptoKey(ctx, reqCreateKey) - if err != nil { - t.Fatal(err) - } - t.Logf("Cloud KMS key `%s` created successfully", key.Name) - } - - // Get GCS Service account email, check has necessary permission on key - // Note: we cannot reuse the backend's storage client (like in the setupBackend function) - // because the KMS key needs to exist before the backend buckets are made in the test. - sc, err := storage.NewClient(ctx, opts...) //reuse opts from KMS client - if err != nil { - e := fmt.Errorf("storage.NewClient() failed: %v", err) - t.Fatal(e) - } - defer sc.Close() - gcsServiceAccount, err := sc.ServiceAccount(ctx, keyDetails["project"]) - if err != nil { - t.Fatal(err) - } - - // Assert Cloud Storage service account has permission to use this key. - member := fmt.Sprintf("serviceAccount:%s", gcsServiceAccount) - iamHandle := c.ResourceIAM(key.Name) - policy, err := iamHandle.Policy(ctx) - if err != nil { - t.Fatal(err) - } - if ok := policy.HasRole(member, kmsRole); !ok { - // Add the missing permissions - t.Logf("Granting GCS service account %s %s role on key %s", gcsServiceAccount, kmsRole, key.Name) - policy.Add(member, kmsRole) - err = iamHandle.SetPolicy(ctx, policy) - if err != nil { - t.Fatal(err) - } - } - return key.Name -} - // teardownBackend deletes all states from be except the default state. func teardownBackend(t *testing.T, be backend.Backend, prefix string) { t.Helper() @@ -406,36 +242,3 @@ func bucketName(t *testing.T) string { return strings.ToLower(name) } - -// getClientOptions returns the []option.ClientOption needed to configure Google API clients -// that are required in acceptance tests but are not part of the gcs backend itself -func testGetClientOptions(t *testing.T) ([]option.ClientOption, error) { - t.Helper() - - var creds string - if v := os.Getenv("GOOGLE_BACKEND_CREDENTIALS"); v != "" { - creds = v - } else { - creds = os.Getenv("GOOGLE_CREDENTIALS") - } - if creds == "" { - t.Skip("This test required credentials to be supplied via" + - "the GOOGLE_CREDENTIALS or GOOGLE_BACKEND_CREDENTIALS environment variables.") - } - - var opts []option.ClientOption - var credOptions []option.ClientOption - - contents, err := backend.ReadPathOrContents(creds) - if err != nil { - return nil, fmt.Errorf("error loading credentials: %s", err) - } - if !json.Valid([]byte(contents)) { - return nil, fmt.Errorf("the string provided in credentials is neither valid json nor a valid file path") - } - credOptions = append(credOptions, option.WithCredentialsJSON([]byte(contents))) - opts = append(opts, credOptions...) 
- opts = append(opts, option.WithUserAgent(httpclient.UserAgentString())) - - return opts, nil -} diff --git a/internal/backend/remote-state/gcs/client.go b/internal/backend/remote-state/gcs/client.go index b91eaf350755..58402fbde08b 100644 --- a/internal/backend/remote-state/gcs/client.go +++ b/internal/backend/remote-state/gcs/client.go @@ -23,7 +23,6 @@ type remoteClient struct { stateFilePath string lockFilePath string encryptionKey []byte - kmsKeyName string } func (c *remoteClient) Get() (payload *remote.Payload, err error) { @@ -58,9 +57,6 @@ func (c *remoteClient) Get() (payload *remote.Payload, err error) { func (c *remoteClient) Put(data []byte) error { err := func() error { stateFileWriter := c.stateFile().NewWriter(c.storageContext) - if len(c.kmsKeyName) > 0 { - stateFileWriter.KMSKeyName = c.kmsKeyName - } if _, err := stateFileWriter.Write(data); err != nil { return err } diff --git a/internal/backend/remote-state/s3/backend.go b/internal/backend/remote-state/s3/backend.go index c56e390a7f18..98aa1c561ef3 100644 --- a/internal/backend/remote-state/s3/backend.go +++ b/internal/backend/remote-state/s3/backend.go @@ -37,11 +37,6 @@ func New() backend.Backend { if strings.HasPrefix(v.(string), "/") { return nil, []error{errors.New("key must not start with '/'")} } - // s3 will recognize objects with a trailing slash as a directory - // so they should not be valid keys - if strings.HasSuffix(v.(string), "/") { - return nil, []error{errors.New("key must not end with '/'")} - } return nil, nil }, }, diff --git a/internal/backend/remote-state/s3/backend_test.go b/internal/backend/remote-state/s3/backend_test.go index 230e4b89c8d6..1fd49c461ab5 100644 --- a/internal/backend/remote-state/s3/backend_test.go +++ b/internal/backend/remote-state/s3/backend_test.go @@ -326,19 +326,6 @@ func TestBackendConfig_invalidKey(t *testing.T) { if !diags.HasErrors() { t.Fatal("expected config validation error") } - - cfg = hcl2shim.HCL2ValueFromConfigValue(map[string]interface{}{ - "region": "us-west-1", - "bucket": "tf-test", - "key": "trailing-slash/", - "encrypt": true, - "dynamodb_table": "dynamoTable", - }) - - _, diags = New().PrepareConfig(cfg) - if !diags.HasErrors() { - t.Fatal("expected config validation error") - } } func TestBackendConfig_invalidSSECustomerKeyLength(t *testing.T) { diff --git a/internal/cloud/backend_apply.go b/internal/cloud/backend_apply.go index 078045540ecd..ff22dd5a9091 100644 --- a/internal/cloud/backend_apply.go +++ b/internal/cloud/backend_apply.go @@ -133,19 +133,6 @@ func (b *Cloud) opApply(stopCtx, cancelCtx context.Context, op *backend.Operatio } } - // Retrieve the run to get task stages. - // Task Stages are calculated upfront so we only need to call this once for the run. - taskStages, err := b.runTaskStages(stopCtx, b.client, r.ID) - if err != nil { - return r, err - } - - if stage, ok := taskStages[tfe.PreApply]; ok { - if err := b.waitTaskStage(stopCtx, cancelCtx, op, r, stage.ID, "Pre-apply Tasks"); err != nil { - return r, err - } - } - r, err = b.waitForRun(stopCtx, cancelCtx, op, "apply", r, w) if err != nil { return r, err diff --git a/internal/cloud/backend_plan.go b/internal/cloud/backend_plan.go index 0678d9360835..2688d65c1288 100644 --- a/internal/cloud/backend_plan.go +++ b/internal/cloud/backend_plan.go @@ -293,13 +293,22 @@ in order to capture the filesystem context the remote workspace expects: // Retrieve the run to get task stages. // Task Stages are calculated upfront so we only need to call this once for the run. 
- taskStages, err := b.runTaskStages(stopCtx, b.client, r.ID) - if err != nil { - return r, err + taskStages := make([]*tfe.TaskStage, 0) + result, err := b.client.Runs.ReadWithOptions(stopCtx, r.ID, &tfe.RunReadOptions{ + Include: []tfe.RunIncludeOpt{tfe.RunTaskStages}, + }) + if err == nil { + taskStages = result.TaskStages + } else { + // This error would be expected for older versions of TFE that do not allow + // fetching task_stages. + if !strings.HasSuffix(err.Error(), "Invalid include parameter") { + return r, generalError("Failed to retrieve run", err) + } } - if stage, ok := taskStages[tfe.PrePlan]; ok { - if err := b.waitTaskStage(stopCtx, cancelCtx, op, r, stage.ID, "Pre-plan Tasks"); err != nil { + if stageID := getTaskStageIDByName(taskStages, tfe.PrePlan); stageID != nil { + if err := b.waitTaskStage(stopCtx, cancelCtx, op, r, *stageID, "Pre-plan Tasks"); err != nil { return r, err } } @@ -348,8 +357,8 @@ in order to capture the filesystem context the remote workspace expects: // status of the run will be "errored", but there is still policy // information which should be shown. - if stage, ok := taskStages[tfe.PostPlan]; ok { - if err := b.waitTaskStage(stopCtx, cancelCtx, op, r, stage.ID, "Post-plan Tasks"); err != nil { + if stageID := getTaskStageIDByName(taskStages, tfe.PostPlan); stageID != nil { + if err := b.waitTaskStage(stopCtx, cancelCtx, op, r, *stageID, "Post-plan Tasks"); err != nil { return r, err } } @@ -373,6 +382,19 @@ in order to capture the filesystem context the remote workspace expects: return r, nil } +func getTaskStageIDByName(stages []*tfe.TaskStage, stageName tfe.Stage) *string { + if len(stages) == 0 { + return nil + } + + for _, stage := range stages { + if stage.Stage == stageName { + return &stage.ID + } + } + return nil +} + const planDefaultHeader = ` [reset][yellow]Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C will stop streaming the logs, but will not stop the plan running remotely.[reset] diff --git a/internal/cloud/backend_taskStages.go b/internal/cloud/backend_taskStages.go deleted file mode 100644 index d2ae881b2754..000000000000 --- a/internal/cloud/backend_taskStages.go +++ /dev/null @@ -1,32 +0,0 @@ -package cloud - -import ( - "context" - "strings" - - tfe "github.com/hashicorp/go-tfe" -) - -type taskStages map[tfe.Stage]*tfe.TaskStage - -func (b *Cloud) runTaskStages(ctx context.Context, client *tfe.Client, runId string) (taskStages, error) { - taskStages := make(taskStages, 0) - result, err := client.Runs.ReadWithOptions(ctx, runId, &tfe.RunReadOptions{ - Include: []tfe.RunIncludeOpt{tfe.RunTaskStages}, - }) - if err == nil { - for _, t := range result.TaskStages { - if t != nil { - taskStages[t.Stage] = t - } - } - } else { - // This error would be expected for older versions of TFE that do not allow - // fetching task_stages. 
- if !strings.HasSuffix(err.Error(), "Invalid include parameter") { - return taskStages, generalError("Failed to retrieve run", err) - } - } - - return taskStages, nil -} diff --git a/internal/cloud/backend_taskStages_test.go b/internal/cloud/backend_taskStages_test.go deleted file mode 100644 index e52f6a5e701e..000000000000 --- a/internal/cloud/backend_taskStages_test.go +++ /dev/null @@ -1,207 +0,0 @@ -package cloud - -import ( - "context" - "errors" - "testing" - - "github.com/golang/mock/gomock" - "github.com/hashicorp/go-tfe" - tfemocks "github.com/hashicorp/go-tfe/mocks" -) - -func MockAllTaskStages(t *testing.T, client *tfe.Client) (RunID string) { - ctrl := gomock.NewController(t) - - RunID = "run-all_task_stages" - - mockRunsAPI := tfemocks.NewMockRuns(ctrl) - - goodRun := tfe.Run{ - TaskStages: []*tfe.TaskStage{ - { - Stage: tfe.PrePlan, - }, - { - Stage: tfe.PostPlan, - }, - { - Stage: tfe.PreApply, - }, - }, - } - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), RunID, gomock.Any()). - Return(&goodRun, nil). - AnyTimes() - - // Mock a bad Read response - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), gomock.Any(), gomock.Any()). - Return(nil, tfe.ErrInvalidOrg). - AnyTimes() - - // Wire up the mock interfaces - client.Runs = mockRunsAPI - return -} - -func MockPrePlanTaskStage(t *testing.T, client *tfe.Client) (RunID string) { - ctrl := gomock.NewController(t) - - RunID = "run-pre_plan_task_stage" - - mockRunsAPI := tfemocks.NewMockRuns(ctrl) - - goodRun := tfe.Run{ - TaskStages: []*tfe.TaskStage{ - { - Stage: tfe.PrePlan, - }, - }, - } - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), RunID, gomock.Any()). - Return(&goodRun, nil). - AnyTimes() - - // Mock a bad Read response - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), gomock.Any(), gomock.Any()). - Return(nil, tfe.ErrInvalidOrg). - AnyTimes() - - // Wire up the mock interfaces - client.Runs = mockRunsAPI - return -} - -func MockTaskStageUnsupported(t *testing.T, client *tfe.Client) (RunID string) { - ctrl := gomock.NewController(t) - - RunID = "run-unsupported_task_stage" - - mockRunsAPI := tfemocks.NewMockRuns(ctrl) - - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), RunID, gomock.Any()). - Return(nil, errors.New("Invalid include parameter")). - AnyTimes() - - mockRunsAPI. - EXPECT(). - ReadWithOptions(gomock.Any(), gomock.Any(), gomock.Any()). - Return(nil, tfe.ErrInvalidOrg). 
- AnyTimes() - - client.Runs = mockRunsAPI - return -} - -func TestTaskStagesWithAllStages(t *testing.T) { - b, bCleanup := testBackendWithName(t) - defer bCleanup() - - config := &tfe.Config{ - Token: "not-a-token", - } - client, _ := tfe.NewClient(config) - runID := MockAllTaskStages(t, client) - - ctx := context.TODO() - taskStages, err := b.runTaskStages(ctx, client, runID) - - if err != nil { - t.Fatalf("Expected to not error but received %s", err) - } - - for _, stageName := range []tfe.Stage{ - tfe.PrePlan, - tfe.PostPlan, - tfe.PreApply, - } { - if stage, ok := taskStages[stageName]; ok { - if stage.Stage != stageName { - t.Errorf("Expected task stage indexed by %s to find a Task Stage with the same index, but receieved %s", stageName, stage.Stage) - } - } else { - t.Errorf("Expected task stage indexed by %s to exist, but it did not", stageName) - } - } -} - -func TestTaskStagesWithOneStage(t *testing.T) { - b, bCleanup := testBackendWithName(t) - defer bCleanup() - - config := &tfe.Config{ - Token: "not-a-token", - } - client, _ := tfe.NewClient(config) - runID := MockPrePlanTaskStage(t, client) - - ctx := context.TODO() - taskStages, err := b.runTaskStages(ctx, client, runID) - - if err != nil { - t.Fatalf("Expected to not error but received %s", err) - } - - if _, ok := taskStages[tfe.PrePlan]; !ok { - t.Errorf("Expected task stage indexed by %s to exist, but it did not", tfe.PrePlan) - } - - for _, stageName := range []tfe.Stage{ - tfe.PostPlan, - tfe.PreApply, - } { - if _, ok := taskStages[stageName]; ok { - t.Errorf("Expected task stage indexed by %s to not exist, but it did", stageName) - } - } -} - -func TestTaskStagesWithOldTFC(t *testing.T) { - b, bCleanup := testBackendWithName(t) - defer bCleanup() - - config := &tfe.Config{ - Token: "not-a-token", - } - client, _ := tfe.NewClient(config) - runID := MockTaskStageUnsupported(t, client) - - ctx := context.TODO() - taskStages, err := b.runTaskStages(ctx, client, runID) - - if err != nil { - t.Fatalf("Expected to not error but received %s", err) - } - - if len(taskStages) != 0 { - t.Errorf("Expected task stage to be empty, but found %d stages", len(taskStages)) - } -} - -func TestTaskStagesWithErrors(t *testing.T) { - b, bCleanup := testBackendWithName(t) - defer bCleanup() - - config := &tfe.Config{ - Token: "not-a-token", - } - client, _ := tfe.NewClient(config) - MockTaskStageUnsupported(t, client) - - ctx := context.TODO() - _, err := b.runTaskStages(ctx, client, "this run ID will not exist is invalid anyway") - - if err == nil { - t.Error("Expected to error but did not") - } -} diff --git a/internal/cloud/state.go b/internal/cloud/state.go index 6f1a433b0d2b..04d9f7773439 100644 --- a/internal/cloud/state.go +++ b/internal/cloud/state.go @@ -92,13 +92,19 @@ func (s *State) WriteStateForMigration(f *statefile.File, force bool) error { } } + // The remote backend needs to pass the `force` flag through to its client. + // For backends that support such operations, inform the client + // that a force push has been requested + if force { + s.EnableForcePush() + } + // We create a deep copy of the state here, because the caller also has // a reference to the given object and can potentially go on to mutate // it after we return, but we want the snapshot at this point in time. 
s.state = f.State.DeepCopy() s.lineage = f.Lineage s.serial = f.Serial - s.forcePush = force return nil } @@ -130,7 +136,6 @@ func (s *State) WriteState(state *states.State) error { // a reference to the given object and can potentially go on to mutate // it after we return, but we want the snapshot at this point in time. s.state = state.DeepCopy() - s.forcePush = false return nil } @@ -409,6 +414,12 @@ func (s *State) Delete() error { return nil } +// EnableForcePush to allow the remote client to overwrite state +// by implementing remote.ClientForcePusher +func (s *State) EnableForcePush() { + s.forcePush = true +} + // GetRootOutputValues fetches output values from Terraform Cloud func (s *State) GetRootOutputValues() (map[string]*states.OutputValue, error) { ctx := context.Background() diff --git a/internal/command/command_test.go b/internal/command/command_test.go index 7998a554224a..fa498b438270 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -138,6 +138,9 @@ func metaOverridesForProvider(p providers.Interface) *testingOverrides { Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): providers.FactoryFixed(p), addrs.NewProvider(addrs.DefaultProviderRegistryHost, "hashicorp2", "test"): providers.FactoryFixed(p), + addrs.NewLegacyProvider("null"): providers.FactoryFixed(p), + addrs.NewLegacyProvider("azurerm"): providers.FactoryFixed(p), + addrs.NewProvider(addrs.DefaultProviderRegistryHost, "acmecorp", "aws"): providers.FactoryFixed(p), }, } } diff --git a/internal/command/e2etest/testdata/plugin-cache/.terraform.lock.hcl b/internal/command/e2etest/testdata/plugin-cache/.terraform.lock.hcl deleted file mode 100644 index a96e3e4f05bb..000000000000 --- a/internal/command/e2etest/testdata/plugin-cache/.terraform.lock.hcl +++ /dev/null @@ -1,14 +0,0 @@ -# The global cache is only an eligible installation source if there's already -# a lock entry for the given provider and it contains at least one checksum -# that matches the cache entry. -# -# This lock file therefore matches the "not a real provider" fake executable -# under the "cache" directory, rather than the real provider from upstream, -# so that Terraform CLI will consider the cache entry as valid. 
- -provider "registry.terraform.io/hashicorp/template" { - version = "2.1.0" - hashes = [ - "h1:e7YvVlRZlaZJ8ED5KnH0dAg0kPL0nAU7eEoCAZ/sOos=", - ] -} diff --git a/internal/command/views/apply_test.go b/internal/command/views/apply_test.go index d8bc71c80aab..b16242ed6302 100644 --- a/internal/command/views/apply_test.go +++ b/internal/command/views/apply_test.go @@ -246,6 +246,7 @@ func TestApplyJSON_outputs(t *testing.T) { }, "password": map[string]interface{}{ "sensitive": true, + "value": "horse-battery", "type": "string", }, }, diff --git a/internal/command/views/json/output.go b/internal/command/views/json/output.go index c9648c56260b..05070984afd6 100644 --- a/internal/command/views/json/output.go +++ b/internal/command/views/json/output.go @@ -42,15 +42,10 @@ func OutputsFromMap(outputValues map[string]*states.OutputValue) (Outputs, tfdia return nil, diags } - var redactedValue json.RawMessage - if !ov.Sensitive { - redactedValue = json.RawMessage(value) - } - outputs[name] = Output{ Sensitive: ov.Sensitive, Type: json.RawMessage(valueType), - Value: redactedValue, + Value: json.RawMessage(value), } } diff --git a/internal/command/views/json/output_test.go b/internal/command/views/json/output_test.go index 0fa15e22d6dd..e3e9495b8cf9 100644 --- a/internal/command/views/json/output_test.go +++ b/internal/command/views/json/output_test.go @@ -52,10 +52,12 @@ func TestOutputsFromMap(t *testing.T) { "beep": { Sensitive: true, Type: json.RawMessage(`"string"`), + Value: json.RawMessage(`"horse-battery"`), }, "blorp": { Sensitive: true, Type: json.RawMessage(`["object",{"a":["object",{"b":["object",{"c":"string"}]}]}]`), + Value: json.RawMessage(`{"a":{"b":{"c":"oh, hi"}}}`), }, "honk": { Sensitive: false, diff --git a/internal/command/views/json_view.go b/internal/command/views/json_view.go index a1493bc4def6..f92036d5c0d7 100644 --- a/internal/command/views/json_view.go +++ b/internal/command/views/json_view.go @@ -13,7 +13,7 @@ import ( // This version describes the schema of JSON UI messages. This version must be // updated after making any changes to this view, the jsonHook, or any of the // command/views/json package. 
-const JSON_UI_VERSION = "1.1" +const JSON_UI_VERSION = "1.0" func NewJSONView(view *View) *JSONView { log := hclog.New(&hclog.LoggerOptions{ diff --git a/internal/command/views/refresh_test.go b/internal/command/views/refresh_test.go index d68348e5fca4..75dbcd6c4ddc 100644 --- a/internal/command/views/refresh_test.go +++ b/internal/command/views/refresh_test.go @@ -98,6 +98,7 @@ func TestRefreshJSON_outputs(t *testing.T) { }, "password": map[string]interface{}{ "sensitive": true, + "value": "horse-battery", "type": "string", }, }, diff --git a/internal/communicator/ssh/communicator.go b/internal/communicator/ssh/communicator.go index c6af68839e0d..609dc1fbaf0e 100644 --- a/internal/communicator/ssh/communicator.go +++ b/internal/communicator/ssh/communicator.go @@ -418,7 +418,7 @@ func (c *Communicator) Upload(path string, input io.Reader) error { switch src := input.(type) { case *os.File: fi, err := src.Stat() - if err != nil { + if err == nil { size = fi.Size() } case *bytes.Buffer: @@ -641,7 +641,13 @@ func checkSCPStatus(r *bufio.Reader) error { return nil } +var testUploadSizeHook func(size int64) + func scpUploadFile(dst string, src io.Reader, w io.Writer, r *bufio.Reader, size int64) error { + if testUploadSizeHook != nil { + testUploadSizeHook(size) + } + if size == 0 { // Create a temporary file where we can copy the contents of the src // so that we can determine the length, since SCP is length-prefixed. diff --git a/internal/communicator/ssh/communicator_test.go b/internal/communicator/ssh/communicator_test.go index 8d7db9996708..b829e5b9afb3 100644 --- a/internal/communicator/ssh/communicator_test.go +++ b/internal/communicator/ssh/communicator_test.go @@ -577,10 +577,28 @@ func TestAccUploadFile(t *testing.T) { } tmpDir := t.TempDir() + source, err := os.CreateTemp(tmpDir, "tempfile.in") + if err != nil { + t.Fatal(err) + } + + content := "this is the file content" + if _, err := source.WriteString(content); err != nil { + t.Fatal(err) + } + source.Seek(0, io.SeekStart) - content := []byte("this is the file content") - source := bytes.NewReader(content) tmpFile := filepath.Join(tmpDir, "tempFile.out") + + testUploadSizeHook = func(size int64) { + if size != int64(len(content)) { + t.Errorf("expected %d bytes, got %d\n", len(content), size) + } + } + defer func() { + testUploadSizeHook = nil + }() + err = c.Upload(tmpFile, source) if err != nil { t.Fatalf("error uploading file: %s", err) @@ -591,7 +609,7 @@ func TestAccUploadFile(t *testing.T) { t.Fatal(err) } - if !bytes.Equal(data, content) { + if string(data) != content { t.Fatalf("bad: %s", data) } } diff --git a/internal/dag/dag.go b/internal/dag/dag.go index 362c847f3d9f..f5268e76f0c3 100644 --- a/internal/dag/dag.go +++ b/internal/dag/dag.go @@ -179,18 +179,16 @@ type vertexAtDepth struct { Depth int } -// TopologicalOrder returns a topological sort of the given graph, with source -// vertices ordered before the targets of their edges. The nodes are not sorted, -// and any valid order may be returned. This function will panic if it -// encounters a cycle. +// TopologicalOrder returns a topological sort of the given graph. The nodes +// are not sorted, and any valid order may be returned. This function will +// panic if it encounters a cycle. func (g *AcyclicGraph) TopologicalOrder() []Vertex { return g.topoOrder(upOrder) } -// ReverseTopologicalOrder returns a topological sort of the given graph, with -// target vertices ordered before the sources of their edges. 
The nodes are not
-// sorted, and any valid order may be returned. This function will panic if it
-// encounters a cycle.
+// ReverseTopologicalOrder returns a topological sort of the given graph,
+// following each edge in reverse. The nodes are not sorted, and any valid
+// order may be returned. This function will panic if it encounters a cycle.
 func (g *AcyclicGraph) ReverseTopologicalOrder() []Vertex {
 	return g.topoOrder(downOrder)
 }
diff --git a/internal/dag/graph.go b/internal/dag/graph.go
index ab1ae3756657..b609558d417c 100644
--- a/internal/dag/graph.go
+++ b/internal/dag/graph.go
@@ -166,29 +166,28 @@ func (g *Graph) RemoveEdge(edge Edge) {
 	}
 }
 
-// UpEdges returns the vertices that are *sources* of edges that target the
-// destination Vertex v.
+// UpEdges returns the vertices connected to the outward edges from the source
+// Vertex v.
 func (g *Graph) UpEdges(v Vertex) Set {
 	return g.upEdgesNoCopy(v).Copy()
 }
 
-// DownEdges returns the vertices that are *targets* of edges that originate
-// from the source Vertex v.
+// DownEdges returns the vertices connected from the inward edges to Vertex v.
 func (g *Graph) DownEdges(v Vertex) Set {
 	return g.downEdgesNoCopy(v).Copy()
 }
 
-// downEdgesNoCopy returns the vertices targeted by edges from the source Vertex
-// v as a Set. This Set is the same as used internally by the Graph to prevent a
-// copy, and must not be modified by the caller.
+// downEdgesNoCopy returns the outward edges from the source Vertex v as a Set.
+// This Set is the same as used internally by the Graph to prevent a copy, and
+// must not be modified by the caller.
 func (g *Graph) downEdgesNoCopy(v Vertex) Set {
 	g.init()
 	return g.downEdges[hashcode(v)]
 }
 
-// upEdgesNoCopy returns the vertices that are sources of edges targeting the
-// destination Vertex v as a Set. This Set is the same as used internally by the
-// Graph to prevent a copy, and must not be modified by the caller.
+// upEdgesNoCopy returns the inward edges to the destination Vertex v as a Set.
+// This Set is the same as used internally by the Graph to prevent a copy, and
+// must not be modified by the caller.
 func (g *Graph) upEdgesNoCopy(v Vertex) Set {
 	g.init()
 	return g.upEdges[hashcode(v)]
diff --git a/internal/providercache/installer.go b/internal/providercache/installer.go
index ffdbd04c2540..0302c77b2315 100644
--- a/internal/providercache/installer.go
+++ b/internal/providercache/installer.go
@@ -347,134 +347,98 @@ NeedProvider:
 		// Step 3a: If our global cache already has this version available then
 		// we'll just link it in.
 		if cached := i.globalCacheDir.ProviderVersion(provider, version); cached != nil {
-			// An existing cache entry is only an acceptable choice
-			// if there is already a lock file entry for this provider
-			// and the cache entry matches its checksums.
-			//
-			// If there was no lock file entry at all then we need to
-			// install the package for real so that we can lock as complete
-			// as possible a set of checksums for all of this provider's
-			// packages.
-			//
-			// If there was a lock file entry but the cache doesn't match
-			// it then we assume that the lock file checksums were only
-			// partially populated (e.g. from a local mirror where we can
-			// only see one package to checksum it) and so we'll fetch
-			// from upstream to see if the origin can give us a package
-			// that _does_ match. This might still not work out, but if
-			// it does then it allows us to avoid returning a checksum
-			// mismatch error.
- acceptablePackage := false - if len(preferredHashes) != 0 { - var err error - acceptablePackage, err = cached.MatchesAnyHash(preferredHashes) - if err != nil { - // If we can't calculate the checksum for the cached - // package then we'll just treat it as a checksum failure. - acceptablePackage = false - } + if cb := evts.LinkFromCacheBegin; cb != nil { + cb(provider, version, i.globalCacheDir.baseDir) } - - // TODO: Should we emit an event through the events object - // for "there was an entry in the cache but we ignored it - // because the checksum didn't match"? We can't use - // LinkFromCacheFailure in that case because this isn't a - // failure. For now we'll just be quiet about it. - - if acceptablePackage { - if cb := evts.LinkFromCacheBegin; cb != nil { - cb(provider, version, i.globalCacheDir.baseDir) - } - if _, err := cached.ExecutableFile(); err != nil { - err := fmt.Errorf("provider binary not found: %s", err) - errs[provider] = err - if cb := evts.LinkFromCacheFailure; cb != nil { - cb(provider, version, err) - } - continue + if _, err := cached.ExecutableFile(); err != nil { + err := fmt.Errorf("provider binary not found: %s", err) + errs[provider] = err + if cb := evts.LinkFromCacheFailure; cb != nil { + cb(provider, version, err) } + continue + } - err := i.targetDir.LinkFromOtherCache(cached, preferredHashes) - if err != nil { - errs[provider] = err - if cb := evts.LinkFromCacheFailure; cb != nil { - cb(provider, version, err) - } - continue + err := i.targetDir.LinkFromOtherCache(cached, preferredHashes) + if err != nil { + errs[provider] = err + if cb := evts.LinkFromCacheFailure; cb != nil { + cb(provider, version, err) } - // We'll fetch what we just linked to make sure it actually - // did show up there. - new := i.targetDir.ProviderVersion(provider, version) - if new == nil { - err := fmt.Errorf("after linking %s from provider cache at %s it is still not detected in the target directory; this is a bug in Terraform", provider, i.globalCacheDir.baseDir) - errs[provider] = err - if cb := evts.LinkFromCacheFailure; cb != nil { - cb(provider, version, err) - } - continue + continue + } + // We'll fetch what we just linked to make sure it actually + // did show up there. + new := i.targetDir.ProviderVersion(provider, version) + if new == nil { + err := fmt.Errorf("after linking %s from provider cache at %s it is still not detected in the target directory; this is a bug in Terraform", provider, i.globalCacheDir.baseDir) + errs[provider] = err + if cb := evts.LinkFromCacheFailure; cb != nil { + cb(provider, version, err) } + continue + } - // The LinkFromOtherCache call above should've verified that - // the package matches one of the hashes previously recorded, - // if any. We'll now augment those hashes with one freshly - // calculated from the package we just linked, which allows - // the lock file to gradually transition to recording newer hash - // schemes when they become available. - var priorHashes []getproviders.Hash - if lock != nil && lock.Version() == version { - // If the version we're installing is identical to the - // one we previously locked then we'll keep all of the - // hashes we saved previously and add to it. Otherwise - // we'll be starting fresh, because each version has its - // own set of packages and thus its own hashes. - priorHashes = append(priorHashes, preferredHashes...) 
- - // NOTE: The behavior here is unfortunate when a particular - // provider version was already cached on the first time - // the current configuration requested it, because that - // means we don't currently get the opportunity to fetch - // and verify the checksums for the new package from - // upstream. That's currently unavoidable because upstream - // checksums are in the "ziphash" format and so we can't - // verify them against our cache directory's unpacked - // packages: we'd need to go fetch the package from the - // origin and compare against it, which would defeat the - // purpose of the global cache. - // - // If we fetch from upstream on the first encounter with - // a particular provider then we'll end up in the other - // codepath below where we're able to also include the - // checksums from the origin registry. - } - newHash, err := cached.Hash() - if err != nil { - err := fmt.Errorf("after linking %s from provider cache at %s, failed to compute a checksum for it: %s", provider, i.globalCacheDir.baseDir, err) - errs[provider] = err - if cb := evts.LinkFromCacheFailure; cb != nil { - cb(provider, version, err) - } - continue - } - // The hashes slice gets deduplicated in the lock file - // implementation, so we don't worry about potentially - // creating a duplicate here. - var newHashes []getproviders.Hash - newHashes = append(newHashes, priorHashes...) - newHashes = append(newHashes, newHash) - locks.SetProvider(provider, version, reqs[provider], newHashes) - if cb := evts.ProvidersLockUpdated; cb != nil { - // We want to ensure that newHash and priorHashes are - // sorted. newHash is a single value, so it's definitely - // sorted. priorHashes are pulled from the lock file, so - // are also already sorted. - cb(provider, version, []getproviders.Hash{newHash}, nil, priorHashes) + // The LinkFromOtherCache call above should've verified that + // the package matches one of the hashes previously recorded, + // if any. We'll now augment those hashes with one freshly + // calculated from the package we just linked, which allows + // the lock file to gradually transition to recording newer hash + // schemes when they become available. + var priorHashes []getproviders.Hash + if lock != nil && lock.Version() == version { + // If the version we're installing is identical to the + // one we previously locked then we'll keep all of the + // hashes we saved previously and add to it. Otherwise + // we'll be starting fresh, because each version has its + // own set of packages and thus its own hashes. + priorHashes = append(priorHashes, preferredHashes...) + + // NOTE: The behavior here is unfortunate when a particular + // provider version was already cached on the first time + // the current configuration requested it, because that + // means we don't currently get the opportunity to fetch + // and verify the checksums for the new package from + // upstream. That's currently unavoidable because upstream + // checksums are in the "ziphash" format and so we can't + // verify them against our cache directory's unpacked + // packages: we'd need to go fetch the package from the + // origin and compare against it, which would defeat the + // purpose of the global cache. + // + // If we fetch from upstream on the first encounter with + // a particular provider then we'll end up in the other + // codepath below where we're able to also include the + // checksums from the origin registry. 
+			}
+			newHash, err := cached.Hash()
+			if err != nil {
+				err := fmt.Errorf("after linking %s from provider cache at %s, failed to compute a checksum for it: %s", provider, i.globalCacheDir.baseDir, err)
+				errs[provider] = err
+				if cb := evts.LinkFromCacheFailure; cb != nil {
+					cb(provider, version, err)
 				}
+				continue
+			}
+			// The hashes slice gets deduplicated in the lock file
+			// implementation, so we don't worry about potentially
+			// creating a duplicate here.
+			var newHashes []getproviders.Hash
+			newHashes = append(newHashes, priorHashes...)
+			newHashes = append(newHashes, newHash)
+			locks.SetProvider(provider, version, reqs[provider], newHashes)
+			if cb := evts.ProvidersLockUpdated; cb != nil {
+				// We want to ensure that newHash and priorHashes are
+				// sorted. newHash is a single value, so it's definitely
+				// sorted. priorHashes are pulled from the lock file, so
+				// are also already sorted.
+				cb(provider, version, []getproviders.Hash{newHash}, nil, priorHashes)
+			}
 
-			if cb := evts.LinkFromCacheSuccess; cb != nil {
-				cb(provider, version, new.PackageDir)
-			}
-			continue // Don't need to do full install, then.
+			if cb := evts.LinkFromCacheSuccess; cb != nil {
+				cb(provider, version, new.PackageDir)
 			}
+			continue // Don't need to do full install, then.
 		}
 	}
@@ -527,7 +491,7 @@ NeedProvider:
 		}
 		new := installTo.ProviderVersion(provider, version)
 		if new == nil {
-			err := fmt.Errorf("after installing %s it is still not detected in %s; this is a bug in Terraform", provider, installTo.BasePath())
+			err := fmt.Errorf("after installing %s it is still not detected in the target directory; this is a bug in Terraform", provider)
 			errs[provider] = err
 			if cb := evts.FetchPackageFailure; cb != nil {
 				cb(provider, version, err)
@@ -557,28 +521,6 @@ NeedProvider:
 			}
 			continue
 		}
-
-		// We should now also find the package in the linkTo dir, which
-		// gives us the final value of "new" where the path points in to
-		// the true target directory, rather than possibly the global
-		// cache directory.
-		new = linkTo.ProviderVersion(provider, version)
-		if new == nil {
-			err := fmt.Errorf("after installing %s it is still not detected in %s; this is a bug in Terraform", provider, linkTo.BasePath())
-			errs[provider] = err
-			if cb := evts.FetchPackageFailure; cb != nil {
-				cb(provider, version, err)
-			}
-			continue
-		}
-		if _, err := new.ExecutableFile(); err != nil {
-			err := fmt.Errorf("provider binary not found: %s", err)
-			errs[provider] = err
-			if cb := evts.FetchPackageFailure; cb != nil {
-				cb(provider, version, err)
-			}
-			continue
-		}
 	}
 
 	authResults[provider] = authResult
diff --git a/internal/providercache/installer_test.go b/internal/providercache/installer_test.go
index cdbca6adc718..bb71f1db2493 100644
--- a/internal/providercache/installer_test.go
+++ b/internal/providercache/installer_test.go
@@ -327,7 +327,10 @@ func TestEnsureProviderVersions(t *testing.T) {
 					AuthResult string
 				}{
 					"2.1.0",
-					filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"),
+					// NOTE: With global cache enabled, the fetch
+					// goes into the global cache dir and
+					// we then link to it from the local cache dir.
+ filepath.Join(inst.globalCacheDir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"), "unauthenticated", }, }, @@ -335,7 +338,7 @@ func TestEnsureProviderVersions(t *testing.T) { } }, }, - "successful initial install of one provider through a warm global cache but without a lock file entry": { + "successful initial install of one provider through a warm global cache": { Source: getproviders.NewMockSource( []getproviders.PackageMeta{ { @@ -413,12 +416,6 @@ func TestEnsureProviderVersions(t *testing.T) { beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"), }, }, - { - Event: "ProvidersFetched", - Args: map[addrs.Provider]*getproviders.PackageAuthenticationResult{ - beepProvider: nil, - }, - }, }, beepProvider: { { @@ -434,162 +431,6 @@ func TestEnsureProviderVersions(t *testing.T) { Provider: beepProvider, Args: "2.1.0", }, - // Existing cache entry is ineligible for linking because - // we have no lock file checksums to compare it to. - // Instead, we install from upstream and lock with - // whatever checksums we learn in that process. - { - Event: "FetchPackageMeta", - Provider: beepProvider, - Args: "2.1.0", - }, - { - Event: "FetchPackageBegin", - Provider: beepProvider, - Args: struct { - Version string - Location getproviders.PackageLocation - }{ - "2.1.0", - beepProviderDir, - }, - }, - { - Event: "ProvidersLockUpdated", - Provider: beepProvider, - Args: struct { - Version string - Local []getproviders.Hash - Signed []getproviders.Hash - Prior []getproviders.Hash - }{ - "2.1.0", - []getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}, - nil, - nil, - }, - }, - { - Event: "FetchPackageSuccess", - Provider: beepProvider, - Args: struct { - Version string - LocalDir string - AuthResult string - }{ - "2.1.0", - filepath.Join(dir.BasePath(), "/example.com/foo/beep/2.1.0/bleep_bloop"), - "unauthenticated", - }, - }, - }, - } - }, - }, - "successful initial install of one provider through a warm global cache and correct locked checksum": { - Source: getproviders.NewMockSource( - []getproviders.PackageMeta{ - { - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.0.0"), - TargetPlatform: fakePlatform, - Location: beepProviderDir, - }, - { - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - TargetPlatform: fakePlatform, - Location: beepProviderDir, - }, - }, - nil, - ), - LockFile: ` - # The existing cache entry is valid only if it matches a - # checksum already recorded in the lock file. 
- provider "example.com/foo/beep" { - version = "2.1.0" - constraints = ">= 1.0.0" - hashes = [ - "h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=", - ] - } - `, - Prepare: func(t *testing.T, inst *Installer, dir *Dir) { - globalCacheDirPath := tmpDir(t) - globalCacheDir := NewDirWithPlatform(globalCacheDirPath, fakePlatform) - _, err := globalCacheDir.InstallPackage( - context.Background(), - getproviders.PackageMeta{ - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - TargetPlatform: fakePlatform, - Location: beepProviderDir, - }, - nil, - ) - if err != nil { - t.Fatalf("failed to populate global cache: %s", err) - } - inst.SetGlobalCacheDir(globalCacheDir) - }, - Mode: InstallNewProvidersOnly, - Reqs: getproviders.Requirements{ - beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"), - }, - Check: func(t *testing.T, dir *Dir, locks *depsfile.Locks) { - if allCached := dir.AllAvailablePackages(); len(allCached) != 1 { - t.Errorf("wrong number of cache directory entries; want only one\n%s", spew.Sdump(allCached)) - } - if allLocked := locks.AllProviders(); len(allLocked) != 1 { - t.Errorf("wrong number of provider lock entries; want only one\n%s", spew.Sdump(allLocked)) - } - - gotLock := locks.Provider(beepProvider) - wantLock := depsfile.NewProviderLock( - beepProvider, - getproviders.MustParseVersion("2.1.0"), - getproviders.MustParseVersionConstraints(">= 2.0.0"), - []getproviders.Hash{beepProviderHash}, - ) - if diff := cmp.Diff(wantLock, gotLock, depsfile.ProviderLockComparer); diff != "" { - t.Errorf("wrong lock entry\n%s", diff) - } - - gotEntry := dir.ProviderLatestVersion(beepProvider) - wantEntry := &CachedProvider{ - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - PackageDir: filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"), - } - if diff := cmp.Diff(wantEntry, gotEntry); diff != "" { - t.Errorf("wrong cache entry\n%s", diff) - } - }, - WantEvents: func(inst *Installer, dir *Dir) map[addrs.Provider][]*testInstallerEventLogItem { - return map[addrs.Provider][]*testInstallerEventLogItem{ - noProvider: { - { - Event: "PendingProviders", - Args: map[addrs.Provider]getproviders.VersionConstraints{ - beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"), - }, - }, - }, - beepProvider: { - { - Event: "QueryPackagesBegin", - Provider: beepProvider, - Args: struct { - Constraints string - Locked bool - }{">= 2.0.0", true}, - }, - { - Event: "QueryPackagesSuccess", - Provider: beepProvider, - Args: "2.1.0", - }, { Event: "LinkFromCacheBegin", Provider: beepProvider, @@ -613,7 +454,7 @@ func TestEnsureProviderVersions(t *testing.T) { "2.1.0", []getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}, nil, - []getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}, + nil, }, }, { @@ -631,184 +472,6 @@ func TestEnsureProviderVersions(t *testing.T) { } }, }, - "successful initial install of one provider through a warm global cache with an incompatible checksum": { - Source: getproviders.NewMockSource( - []getproviders.PackageMeta{ - { - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.0.0"), - TargetPlatform: fakePlatform, - Location: beepProviderDir, - }, - { - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - TargetPlatform: fakePlatform, - Location: beepProviderDir, - }, - }, - nil, - ), - LockFile: ` - # This is approximating the awkward situation where the lock - # file was populated by someone who installed 
from a location - # other than the origin registry annd so the set of checksums - # is incomplete. In this case we can't prove that our cache - # entry is valid and so we silently ignore the cache entry - # and try to install from upstream anyway, in the hope that - # this will give us an opportunity to access the origin - # registry and get a checksum that works for the current - # platform. - provider "example.com/foo/beep" { - version = "2.1.0" - constraints = ">= 1.0.0" - hashes = [ - # NOTE: This is the correct checksum for the - # beepProviderDir package, but we're going to - # intentionally install from a different directory - # below so that the entry in the cache will not - # match this checksum. - "h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=", - ] - } - `, - Prepare: func(t *testing.T, inst *Installer, dir *Dir) { - // This is another "beep provider" package directory that - // has a different checksum than the one in beepProviderDir. - // We're mimicking the situation where the lock file was - // originally built from beepProviderDir but the local system - // is running on a different platform and so its existing - // cache entry doesn't match the checksum. - beepProviderOtherPlatformDir := getproviders.PackageLocalDir("testdata/beep-provider-other-platform") - - globalCacheDirPath := tmpDir(t) - globalCacheDir := NewDirWithPlatform(globalCacheDirPath, fakePlatform) - _, err := globalCacheDir.InstallPackage( - context.Background(), - getproviders.PackageMeta{ - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - TargetPlatform: fakePlatform, - Location: beepProviderOtherPlatformDir, - }, - nil, - ) - if err != nil { - t.Fatalf("failed to populate global cache: %s", err) - } - inst.SetGlobalCacheDir(globalCacheDir) - }, - Mode: InstallNewProvidersOnly, - Reqs: getproviders.Requirements{ - beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"), - }, - Check: func(t *testing.T, dir *Dir, locks *depsfile.Locks) { - if allCached := dir.AllAvailablePackages(); len(allCached) != 1 { - t.Errorf("wrong number of cache directory entries; want only one\n%s", spew.Sdump(allCached)) - } - if allLocked := locks.AllProviders(); len(allLocked) != 1 { - t.Errorf("wrong number of provider lock entries; want only one\n%s", spew.Sdump(allLocked)) - } - - gotLock := locks.Provider(beepProvider) - wantLock := depsfile.NewProviderLock( - beepProvider, - getproviders.MustParseVersion("2.1.0"), - getproviders.MustParseVersionConstraints(">= 2.0.0"), - []getproviders.Hash{beepProviderHash}, - ) - if diff := cmp.Diff(wantLock, gotLock, depsfile.ProviderLockComparer); diff != "" { - t.Errorf("wrong lock entry\n%s", diff) - } - - gotEntry := dir.ProviderLatestVersion(beepProvider) - wantEntry := &CachedProvider{ - Provider: beepProvider, - Version: getproviders.MustParseVersion("2.1.0"), - PackageDir: filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"), - } - if diff := cmp.Diff(wantEntry, gotEntry); diff != "" { - t.Errorf("wrong cache entry\n%s", diff) - } - }, - WantEvents: func(inst *Installer, dir *Dir) map[addrs.Provider][]*testInstallerEventLogItem { - return map[addrs.Provider][]*testInstallerEventLogItem{ - noProvider: { - { - Event: "PendingProviders", - Args: map[addrs.Provider]getproviders.VersionConstraints{ - beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"), - }, - }, - { - Event: "ProvidersFetched", - Args: map[addrs.Provider]*getproviders.PackageAuthenticationResult{ - beepProvider: nil, - }, - }, - }, - 
beepProvider: { - { - Event: "QueryPackagesBegin", - Provider: beepProvider, - Args: struct { - Constraints string - Locked bool - }{">= 2.0.0", true}, - }, - { - Event: "QueryPackagesSuccess", - Provider: beepProvider, - Args: "2.1.0", - }, - { - Event: "FetchPackageMeta", - Provider: beepProvider, - Args: "2.1.0", - }, - { - Event: "FetchPackageBegin", - Provider: beepProvider, - Args: struct { - Version string - Location getproviders.PackageLocation - }{ - "2.1.0", - beepProviderDir, - }, - }, - { - Event: "ProvidersLockUpdated", - Provider: beepProvider, - Args: struct { - Version string - Local []getproviders.Hash - Signed []getproviders.Hash - Prior []getproviders.Hash - }{ - "2.1.0", - []getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}, - nil, - []getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}, - }, - }, - { - Event: "FetchPackageSuccess", - Provider: beepProvider, - Args: struct { - Version string - LocalDir string - AuthResult string - }{ - "2.1.0", - filepath.Join(dir.BasePath(), "/example.com/foo/beep/2.1.0/bleep_bloop"), - "unauthenticated", - }, - }, - }, - } - }, - }, "successful reinstall of one previously-locked provider": { Source: getproviders.NewMockSource( []getproviders.PackageMeta{ @@ -1939,7 +1602,7 @@ func TestEnsureProviderVersions(t *testing.T) { inst := NewInstaller(outputDir, source) if test.Prepare != nil { test.Prepare(t, inst, outputDir) - } /* boop */ + } locks, lockDiags := depsfile.LoadLocksFromBytes([]byte(test.LockFile), "test.lock.hcl") if lockDiags.HasErrors() { diff --git a/internal/providercache/package_install.go b/internal/providercache/package_install.go index 89ef862cec1f..655a441d8104 100644 --- a/internal/providercache/package_install.go +++ b/internal/providercache/package_install.go @@ -55,7 +55,7 @@ func installFromHTTPURL(ctx context.Context, meta getproviders.PackageMeta, targ f, err := ioutil.TempFile("", "terraform-provider") if err != nil { - return nil, fmt.Errorf("failed to open temporary file to download from %s: %w", url, err) + return nil, fmt.Errorf("failed to open temporary file to download from %s", url) } defer f.Close() defer os.Remove(f.Name()) @@ -125,14 +125,6 @@ func installFromLocalArchive(ctx context.Context, meta getproviders.PackageMeta, filename := meta.Location.String() - // NOTE: We're not checking whether there's already a directory at - // targetDir with some files in it. Packages are supposed to be immutable - // and therefore we'll just be overwriting all of the existing files with - // their same contents unless something unusual is happening. If something - // unusual _is_ happening then this will produce something that doesn't - // match the allowed hashes and so our caller should catch that after - // we return if so. - err := unzip.Decompress(targetDir, filename, true, 0000) if err != nil { return authResult, err diff --git a/internal/providercache/testdata/beep-provider-other-platform/terraform-provider-beep b/internal/providercache/testdata/beep-provider-other-platform/terraform-provider-beep deleted file mode 100644 index 18929cd34bf0..000000000000 --- a/internal/providercache/testdata/beep-provider-other-platform/terraform-provider-beep +++ /dev/null @@ -1,7 +0,0 @@ -This is not a real provider executable. It's just here to give the installer -something to copy in some of our installer test cases. 
- -This must be different than the file of the same name in the sibling directory -"beep-provider", because we're using this to stand in for a valid package -that was built for a different platform than the one whose checksum is recorded -in the lock file. diff --git a/internal/terraform/eval_variable.go b/internal/terraform/eval_variable.go index c489e4f3bea6..c355204b1046 100644 --- a/internal/terraform/eval_variable.go +++ b/internal/terraform/eval_variable.go @@ -132,8 +132,8 @@ func prepareFinalInputVariableValue(addr addrs.AbsInputVariableInstance, raw *In return cty.UnknownVal(cfg.Type), diags } - // Apply defaults from the variable's type constraint to the converted value, - // unless the converted value is null. We do not apply defaults to top-level + // Apply defaults from the variable's type constraint to the given value, + // unless the given value is null. We do not apply defaults to top-level // null values, as doing so could prevent assigning null to a nullable // variable. if cfg.TypeDefaults != nil && !val.IsNull() { diff --git a/internal/terraform/graph.go b/internal/terraform/graph.go index 38d0fad6ed5c..cc99942093c6 100644 --- a/internal/terraform/graph.go +++ b/internal/terraform/graph.go @@ -1,7 +1,6 @@ package terraform import ( - "fmt" "log" "strings" @@ -89,28 +88,6 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { return } if g != nil { - // The subgraph should always be valid, per our normal acyclic - // graph validation rules. - if err := g.Validate(); err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Graph node has invalid dynamic subgraph", - fmt.Sprintf("The internal logic for %q generated an invalid dynamic subgraph: %s.\n\nThis is a bug in Terraform. Please report it!", dag.VertexName(v), err), - )) - return - } - // If we passed validation then there is exactly one root node. - // That root node should always be "rootNode", the singleton - // root node value. - if n, err := g.Root(); err != nil || n != dag.Vertex(rootNode) { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Graph node has invalid dynamic subgraph", - fmt.Sprintf("The internal logic for %q generated an invalid dynamic subgraph: the root node is %T, which is not a suitable root node type.\n\nThis is a bug in Terraform. 
Please report it!", dag.VertexName(v), n), - )) - return - } - // Walk the subgraph log.Printf("[TRACE] vertex %q: entering dynamic subgraph", dag.VertexName(v)) subDiags := g.walk(walker) diff --git a/internal/terraform/node_local.go b/internal/terraform/node_local.go index f194b9cc82a2..79b47576822c 100644 --- a/internal/terraform/node_local.go +++ b/internal/terraform/node_local.go @@ -73,7 +73,6 @@ func (n *nodeExpandLocal) DynamicExpand(ctx EvalContext) (*Graph, error) { log.Printf("[TRACE] Expanding local: adding %s as %T", o.Addr.String(), o) g.Add(o) } - addRootNodeToGraph(&g) return &g, nil } diff --git a/internal/terraform/node_module_variable.go b/internal/terraform/node_module_variable.go index 6d5ae2af89cb..c5e2294eaadb 100644 --- a/internal/terraform/node_module_variable.go +++ b/internal/terraform/node_module_variable.go @@ -50,7 +50,6 @@ func (n *nodeExpandModuleVariable) DynamicExpand(ctx EvalContext) (*Graph, error } g.Add(o) } - addRootNodeToGraph(&g) return &g, nil } diff --git a/internal/terraform/node_output.go b/internal/terraform/node_output.go index 1079a59df36f..74b074123d0e 100644 --- a/internal/terraform/node_output.go +++ b/internal/terraform/node_output.go @@ -122,7 +122,6 @@ func (n *nodeExpandOutput) DynamicExpand(ctx EvalContext) (*Graph, error) { log.Printf("[TRACE] Expanding output: adding %s as %T", absAddr.String(), node) g.Add(node) } - addRootNodeToGraph(&g) if checkableAddrs != nil { checkState := ctx.Checks() diff --git a/internal/terraform/node_resource_apply.go b/internal/terraform/node_resource_apply.go index 6f7b46af6e9a..3928bea0fdc5 100644 --- a/internal/terraform/node_resource_apply.go +++ b/internal/terraform/node_resource_apply.go @@ -49,7 +49,6 @@ func (n *nodeExpandApplyableResource) DynamicExpand(ctx EvalContext) (*Graph, er Addr: n.Addr.Resource.Absolute(module), }) } - addRootNodeToGraph(&g) return &g, nil } diff --git a/internal/terraform/node_resource_import.go b/internal/terraform/node_resource_import.go index 93b14fdf18a6..ecf39a07e033 100644 --- a/internal/terraform/node_resource_import.go +++ b/internal/terraform/node_resource_import.go @@ -176,7 +176,11 @@ func (n *graphNodeImportState) DynamicExpand(ctx EvalContext) (*Graph, error) { }) } - addRootNodeToGraph(g) + // Root transform for a single root + t := &RootTransformer{} + if err := t.Transform(g); err != nil { + return nil, err + } // Done! 
 	return g, diags.Err()
diff --git a/internal/terraform/node_resource_plan.go b/internal/terraform/node_resource_plan.go
index 3c6e94d8bf9d..18c97c7cb9a9 100644
--- a/internal/terraform/node_resource_plan.go
+++ b/internal/terraform/node_resource_plan.go
@@ -171,8 +171,6 @@ func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, er
 		checkState.ReportCheckableObjects(n.NodeAbstractResource.Addr, instAddrs)
 	}
 
-	addRootNodeToGraph(&g)
-
 	return &g, diags.ErrWithWarnings()
 }
diff --git a/internal/terraform/transform_root.go b/internal/terraform/transform_root.go
index e06ef5b414cf..b22ef11fc0f7 100644
--- a/internal/terraform/transform_root.go
+++ b/internal/terraform/transform_root.go
@@ -10,48 +10,41 @@ const rootNodeName = "root"
 type RootTransformer struct{}
 
 func (t *RootTransformer) Transform(g *Graph) error {
-	addRootNodeToGraph(g)
-	return nil
-}
+	// If we already have a good root, we're done
+	if _, err := g.Root(); err == nil {
+		return nil
+	}
 
-// addRootNodeToGraph modifies the given graph in-place so that it has a root
-// node if it didn't already have one and so that any other node which doesn't
-// already depend on something will depend on that root node.
-//
-// After this function returns, the graph will have only one node that doesn't
-// depend on any other nodes.
-func addRootNodeToGraph(g *Graph) {
-	// We always add the root node. This is a singleton so if it's already
-	// in the graph this will do nothing and just retain the existing root node.
+	// We intentionally add a graphNodeRoot value -- rather than a pointer to
+	// one -- so that all root nodes will coalesce together if two graphs
+	// are merged. Each distinct node value can only be in a graph once,
+	// so adding another graphNodeRoot value to the same graph later will
+	// be a no-op and all of the edges from root nodes will coalesce together
+	// under Graph.Subsume.
 	//
-	// Note that rootNode is intentionally added by value and not by pointer
-	// so that all root nodes will be equal to one another and therefore
-	// coalesce when two valid graphs get merged together into a single graph.
-	g.Add(rootNode)
+	// It's important to retain this coalescing guarantee under future
+	// maintenance.
+	var root graphNodeRoot
+	g.Add(root)
 
-	// Everything that doesn't already depend on at least one other node will
-	// depend on the root node, except the root node itself.
+	// We initially make the root node depend on every node except itself.
+	// If the caller subsequently runs transitive reduction on the graph then
+	// it's typical for some of these edges to then be removed.
 	for _, v := range g.Vertices() {
-		if v == dag.Vertex(rootNode) {
+		if v == root {
 			continue
 		}
 
 		if g.UpEdges(v).Len() == 0 {
-			g.Connect(dag.BasicEdge(rootNode, v))
+			g.Connect(dag.BasicEdge(root, v))
 		}
 	}
+
+	return nil
 }
 
 type graphNodeRoot struct{}
 
-// rootNode is the singleton value representing all root graph nodes.
-//
-// The root node for all graphs should be this value directly, and in particular
-// _not_ a pointer to this value. Using the value directly here means that
-// multiple root nodes will always coalesce together when subsuming one graph
-// into another.
-var rootNode graphNodeRoot - func (n graphNodeRoot) Name() string { return rootNodeName } diff --git a/internal/terraform/transform_root_test.go b/internal/terraform/transform_root_test.go index 61f24a5f764a..4a426b5e7cc2 100644 --- a/internal/terraform/transform_root_test.go +++ b/internal/terraform/transform_root_test.go @@ -8,78 +8,50 @@ import ( ) func TestRootTransformer(t *testing.T) { - t.Run("many nodes", func(t *testing.T) { - mod := testModule(t, "transform-root-basic") + mod := testModule(t, "transform-root-basic") - g := Graph{Path: addrs.RootModuleInstance} - { - tf := &ConfigTransformer{Config: mod} - if err := tf.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } - - { - transform := &MissingProviderTransformer{} - if err := transform.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } - - { - transform := &ProviderTransformer{} - if err := transform.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } - } - - { - transform := &RootTransformer{} - if err := transform.Transform(&g); err != nil { - t.Fatalf("err: %s", err) - } + g := Graph{Path: addrs.RootModuleInstance} + { + tf := &ConfigTransformer{Config: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) } + } - actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testTransformRootBasicStr) - if actual != expected { - t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) + { + transform := &MissingProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) } + } - root, err := g.Root() - if err != nil { + { + transform := &ProviderTransformer{} + if err := transform.Transform(&g); err != nil { t.Fatalf("err: %s", err) } - if _, ok := root.(graphNodeRoot); !ok { - t.Fatalf("bad: %#v", root) - } - }) + } - t.Run("only one initial node", func(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - g.Add("foo") - addRootNodeToGraph(&g) - got := strings.TrimSpace(g.String()) - want := strings.TrimSpace(` -foo -root - foo -`) - if got != want { - t.Errorf("wrong final graph\ngot:\n%s\nwant:\n%s", got, want) + { + transform := &RootTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) } - }) + } - t.Run("graph initially empty", func(t *testing.T) { - g := Graph{Path: addrs.RootModuleInstance} - addRootNodeToGraph(&g) - got := strings.TrimSpace(g.String()) - want := `root` - if got != want { - t.Errorf("wrong final graph\ngot:\n%s\nwant:\n%s", got, want) - } - }) + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testTransformRootBasicStr) + if actual != expected { + t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) + } + root, err := g.Root() + if err != nil { + t.Fatalf("err: %s", err) + } + if _, ok := root.(graphNodeRoot); !ok { + t.Fatalf("bad: %#v", root) + } } const testTransformRootBasicStr = ` diff --git a/internal/tfplugin5/tfplugin5.pb.go b/internal/tfplugin5/tfplugin5.pb.go index 92bf7a0a0c35..85ab54eab688 100644 --- a/internal/tfplugin5/tfplugin5.pb.go +++ b/internal/tfplugin5/tfplugin5.pb.go @@ -1800,15 +1800,6 @@ func (x *PrepareProviderConfig_Response) GetDiagnostics() []*Diagnostic { return nil } -// Request is the message that is sent to the provider during the -// UpgradeResourceState RPC. -// -// This message intentionally does not include configuration data as any -// configuration-based or configuration-conditional changes should occur -// during the PlanResourceChange RPC. 
Additionally, the configuration is -// not guaranteed to exist (in the case of resource destruction), be wholly -// known, nor match the given prior state, which could lead to unexpected -// provider behaviors for practitioners. type UpgradeResourceState_Request struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -2245,14 +2236,6 @@ func (x *Configure_Response) GetDiagnostics() []*Diagnostic { return nil } -// Request is the message that is sent to the provider during the -// ReadResource RPC. -// -// This message intentionally does not include configuration data as any -// configuration-based or configuration-conditional changes should occur -// during the PlanResourceChange RPC. Additionally, the configuration is -// not guaranteed to be wholly known nor match the given prior state, which -// could lead to unexpected provider behaviors for practitioners. type ReadResource_Request struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache diff --git a/internal/tfplugin6/tfplugin6.pb.go b/internal/tfplugin6/tfplugin6.pb.go index 14322551fc6d..6e101cd21bcf 100644 --- a/internal/tfplugin6/tfplugin6.pb.go +++ b/internal/tfplugin6/tfplugin6.pb.go @@ -1819,15 +1819,6 @@ func (x *ValidateProviderConfig_Response) GetDiagnostics() []*Diagnostic { return nil } -// Request is the message that is sent to the provider during the -// UpgradeResourceState RPC. -// -// This message intentionally does not include configuration data as any -// configuration-based or configuration-conditional changes should occur -// during the PlanResourceChange RPC. Additionally, the configuration is -// not guaranteed to exist (in the case of resource destruction), be wholly -// known, nor match the given prior state, which could lead to unexpected -// provider behaviors for practitioners. type UpgradeResourceState_Request struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -2264,14 +2255,6 @@ func (x *ConfigureProvider_Response) GetDiagnostics() []*Diagnostic { return nil } -// Request is the message that is sent to the provider during the -// ReadResource RPC. -// -// This message intentionally does not include configuration data as any -// configuration-based or configuration-conditional changes should occur -// during the PlanResourceChange RPC. Additionally, the configuration is -// not guaranteed to be wholly known nor match the given prior state, which -// could lead to unexpected provider behaviors for practitioners. type ReadResource_Request struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache diff --git a/version/version.go b/version/version.go index 2e90b831e2bb..a81a78e28eec 100644 --- a/version/version.go +++ b/version/version.go @@ -11,7 +11,7 @@ import ( ) // The main version number that is being run at the moment. -var Version = "1.4.0" +var Version = "1.3.5" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. 
Otherwise, this is a pre-release
diff --git a/website/data/language-nav-data.json b/website/data/language-nav-data.json
index 35f869ea928c..2f068352db60 100644
--- a/website/data/language-nav-data.json
+++ b/website/data/language-nav-data.json
@@ -1035,7 +1035,7 @@
       ]
     },
     {
-      "title": "Upgrading to Terraform v1.4",
+      "title": "Upgrading to Terraform v1.3",
      "path": "upgrade-guides"
     },
     {
diff --git a/website/docs/cli/commands/state/replace-provider.mdx b/website/docs/cli/commands/state/replace-provider.mdx
index 9f8ce089d707..63fc4a75ef53 100644
--- a/website/docs/cli/commands/state/replace-provider.mdx
+++ b/website/docs/cli/commands/state/replace-provider.mdx
@@ -38,7 +38,7 @@ also accepts the option
 [`-ignore-remote-version`](/cli/cloud/command-line-arguments#ignore-remote-version).
 
 For configurations using
 [the `local` state](/language/settings/backends/local) only,
 `terraform state replace-provider` also accepts the legacy options
 [`-state`, `-state-out`, and `-backup`](/language/settings/backends/local#command-line-arguments).
diff --git a/website/docs/cli/import/index.mdx b/website/docs/cli/import/index.mdx
index ab972d956f65..fded3d0791af 100644
--- a/website/docs/cli/import/index.mdx
+++ b/website/docs/cli/import/index.mdx
@@ -9,33 +9,14 @@ description: >-
 
 > **Hands-on:** Try the [Import Terraform Configuration](https://learn.hashicorp.com/tutorials/terraform/state-import?in=terraform/state&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial.
 
-<<<<<<< HEAD
-<<<<<<< HEAD
-Terraform can import existing infrastructure resources. This functionality lets you bring existing resources under Terraform management.
-
-~> Warning: Terraform expects that each remote object is bound to only one resource address. You should import each remote object to only one Terraform resource address. If you import the same object multiple times, Terraform may exhibit unwanted behavior. Refer to [State](/language/state) for more details.
-=======
 Terraform can import existing infrastructure resources. This functionality
 allows you to take resources you created by some other means and bring them
 under Terraform management.
-=======
-Terraform can import existing infrastructure resources. This functionality allows you take resources you created by some other means and bring them under Terraform
-management.
-
-This method lets you slowly transition infrastructure to Terraform, or
-to be able to be confident that you can use Terraform in the future if it
-potentially doesn't support every feature you need today.
-
-~> Warning: Terraform expects that each remote object it is managing will be
-bound to only one resource address, which is normally guaranteed by Terraform
-itself having created all objects. If you import existing objects into Terraform, be careful to import each remote object to only one Terraform resource address. If you import the same object multiple times, Terraform may exhibit unwanted behavior. Refer to [State](/language/state) for more details.
->>>>>>> parent of 84edd84471 (more content updates for flow)
 
 This is a great way to slowly transition infrastructure to Terraform, or
 to be confident that you can use Terraform in the future even if it
 doesn't support every feature you need today.
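
As a concrete sketch of the two-step workflow this page describes (the provider, resource name, and instance ID below are illustrative placeholders, not values taken from this changeset), you first write a `resource` block for the object and then point `terraform import` at its address:

```
# Hypothetical example: adopt an existing EC2 instance into state.
resource "aws_instance" "example" {
  # Keep the body minimal at first; after importing, run
  #   terraform state show aws_instance.example
  # and copy the attributes you want to manage into this block.
}
```

Then, on the command line (again illustrative): `terraform import aws_instance.example i-1234567890abcdef0`.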
-<<<<<<< HEAD ~> Warning: Terraform expects that each remote object it is managing will be bound to only one resource address, which is normally guaranteed by Terraform itself having created all objects. If you import existing objects into Terraform, @@ -43,23 +24,9 @@ be careful to import each remote object to only one Terraform resource address. If you import the same object multiple times, Terraform may exhibit unwanted behavior. For more information on this assumption, see [the State section](/language/state). ->>>>>>> parent of 0a7e221a49 (Remove future-facing statements) ## Currently State Only -<<<<<<< HEAD -Terraform import can only import resources into the [state](/language/state). Importing does not generate configuration. -======= -Terraform import can only import resources into the [state](/language/state). It does not generate configuration. ->>>>>>> parent of 84edd84471 (more content updates for flow) - -Because of this, prior to running `terraform import` you must manually write a `resource` configuration block for the resource that describes where Terraform should map the imported object. - -## Terraform Cloud - -<<<<<<< HEAD -When you use Terraform on the command line with Terraform Cloud, many commands like `apply` run inside your Terraform Cloud environment. However, the `import` command runs locally, so it does not have access to information from Terraform Cloud. To successfully perform an import, you may need to set local variables equivalent to any remote workspace variables in Terraform Cloud. -======= The current implementation of Terraform import can only import resources into the [state](/language/state). It does not generate configuration. A future version of Terraform will also generate configuration. @@ -74,7 +41,3 @@ importing existing resources. ## Terraform Cloud When you use Terraform on the command line with Terraform Cloud, many commands (e.g., `apply`) run inside your Terraform Cloud environment. However, the `import` command runs locally, so it will not have access to information from Terraform Cloud. To successfully perform an import, you may need to set local variables equivalent to any remote workspace variables in Terraform Cloud. ->>>>>>> parent of 0a7e221a49 (Remove future-facing statements) -======= -When you use Terraform on the command line with Terraform Cloud, many commands (e.g., `apply`) run inside your Terraform Cloud environment. However, the `import` command runs locally, so it does not have access to information from Terraform Cloud. To successfully perform an import, you may need to set local variables equivalent to any remote workspace variables in Terraform Cloud. ->>>>>>> parent of 84edd84471 (more content updates for flow) diff --git a/website/docs/language/expressions/type-constraints.mdx b/website/docs/language/expressions/type-constraints.mdx index b4e7f849bd96..7d761bc3b5db 100644 --- a/website/docs/language/expressions/type-constraints.mdx +++ b/website/docs/language/expressions/type-constraints.mdx @@ -96,9 +96,9 @@ The three kinds of collection type in the Terraform language are: for single line maps. A newline between key/value pairs is sufficient in multi-line maps. - Note: Although colons are valid delimiters between keys and values, - `terraform fmt` currently ignores them (whereas `terraform fmt` - attempts to vertically align equals signs). 
+  Note: although colons are valid delimiters between keys and values,
+  they are currently ignored by `terraform fmt` (whereas `terraform fmt`
+  will attempt to vertically align equals signs).

 * `set(...)`: a collection of unique values that do not have any
   secondary identifiers or ordering.
diff --git a/website/docs/language/functions/yamlencode.mdx b/website/docs/language/functions/yamlencode.mdx
index af2dba3bf9cb..7ceddb746cad 100644
--- a/website/docs/language/functions/yamlencode.mdx
+++ b/website/docs/language/functions/yamlencode.mdx
@@ -8,6 +8,23 @@
 `yamlencode` encodes a given value to a string using
 [YAML 1.2](https://yaml.org/spec/1.2/spec.html) block syntax.

+~> **Warning:** This function is currently **experimental** and its exact
+result format may change in future versions of Terraform, based on feedback.
+Do not use `yamlencode` to construct a value for any resource argument where
+changes to the result would be disruptive. To get a consistent string
+representation of a value use [`jsonencode`](/language/functions/jsonencode) instead; its
+results are also valid YAML because YAML is a JSON superset.
+
+
 This function maps
 [Terraform language values](/language/expressions/types)
 to YAML tags in the following way:
@@ -32,15 +49,6 @@
 types, passing the `yamlencode` result to `yamldecode` will not produce an
 identical value, but the Terraform language automatic type conversion
 rules mean that this is rarely a problem in practice.

-YAML is a superset of JSON, and so where possible we recommend generating
-JSON using [`jsonencode`](/language/functions/jsonencode) instead, even if
-a remote system supports YAML. JSON syntax is equivalent to flow-style YAML
-and Terraform can present detailed structural change information for JSON
-values in plans, whereas Terraform will treat block-style YAML just as a normal
-multi-line string. However, generating YAML may improve readability if the
-resulting value will be directly read or modified in the remote system by
-humans.
-
 ## Examples

 ```
diff --git a/website/docs/language/settings/backends/gcs.mdx b/website/docs/language/settings/backends/gcs.mdx
index 4e0ece875eee..a97063d193a2 100644
--- a/website/docs/language/settings/backends/gcs.mdx
+++ b/website/docs/language/settings/backends/gcs.mdx
@@ -1,5 +1,5 @@
 ---
-page_title: 'Backend Type: gcs'
+page_title: "Backend Type: gcs"
 description: >-
   Terraform can store the state remotely, making it easier to version and work
   with in a team.
@@ -38,16 +38,12 @@ data "terraform_remote_state" "foo" {
   }
 }

-# Terraform >= 0.12
-resource "local_file" "foo" {
-  content  = data.terraform_remote_state.foo.outputs.greeting
-  filename = "${path.module}/outputs.txt"
-}
+resource "template_file" "bar" {
+  template = "${greeting}"

-# Terraform <= 0.11
-resource "local_file" "foo" {
-  content  = "${data.terraform_remote_state.foo.greeting}"
-  filename = "${path.module}/outputs.txt"
+  vars {
+    greeting = "${data.terraform_remote_state.foo.greeting}"
+  }
 }
 ```
@@ -77,40 +73,20 @@
 the path of the service account key. Terraform will use that key for authentication.

 Terraform can impersonate a Google Service Account as described [here](https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials). A valid credential must be provided as mentioned in the earlier section and that identity must have the `roles/iam.serviceAccountTokenCreator` role on the service account you are impersonating.
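For illustration, a minimal sketch of such a backend configuration; the bucket name, prefix, and service account address below are hypothetical placeholders:

```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state" # hypothetical bucket name
    prefix = "prod"

    # Hypothetical service account; the credentials in use must hold
    # roles/iam.serviceAccountTokenCreator on this account.
    impersonate_service_account = "terraform-state@my-project.iam.gserviceaccount.com"
  }
}
```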
-## Encryption
-
-!> **Warning:** Take care of your encryption keys because state data encrypted with a lost or deleted key is not recoverable. If you use customer-supplied encryption keys, you must securely manage your keys and ensure you do not lose them. You must not delete customer-managed encryption keys in Cloud KMS used to encrypt state. However, if you accidentally delete a key, there is a time window where [you can recover it](https://cloud.google.com/kms/docs/destroy-restore#restore).
-
-### Customer-supplied encryption keys
-
-To get started, follow this guide: [Use customer-supplied encryption keys](https://cloud.google.com/storage/docs/encryption/using-customer-supplied-keys)
-
-If you want to remove customer-supplied keys from your backend configuration or change to a different customer-supplied key, Terraform cannot perform a state migration automatically, and manual intervention is necessary instead. This intervention is necessary because Google does not store customer-supplied encryption keys; any requests sent to the Cloud Storage API must supply them instead (see [Customer-supplied Encryption Keys](https://cloud.google.com/storage/docs/encryption/customer-supplied-keys)). At the time of state migration, the backend configuration loses the old key's details and Terraform cannot use the key during the migration process.
-
-~> **Important:** To migrate your state away from using customer-supplied encryption keys or change the key used by your backend, you need to perform a [rewrite (gsutil CLI)](https://cloud.google.com/storage/docs/gsutil/commands/rewrite) or [cp (gcloud CLI)](https://cloud.google.com/sdk/gcloud/reference/storage/cp#--decryption-keys) operation to remove use of the old customer-supplied encryption key on your state file. Once you remove the encryption, you can successfully run `terraform init -migrate-state` with your new backend configuration.
-
-### Customer-managed encryption keys (Cloud KMS)
-
-To get started, follow this guide: [Use customer-managed encryption keys](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys)
-
-If you want to remove customer-managed keys from your backend configuration or change to a different customer-managed key, Terraform _can_ manage a state migration without manual intervention, because GCP stores customer-managed encryption keys and they remain accessible during the state migration process. However, these changes do not fully take effect until the first write operation occurs on the state file after migration. In that first write, the file is decrypted with the old key and then written with the new encryption method. This is equivalent to the [rewrite](https://cloud.google.com/storage/docs/gsutil/commands/rewrite) operation described in the customer-supplied encryption keys section. Because of the importance of this first write, you should not delete old KMS keys until all state files encrypted with them have been updated.
-
-Customer-managed keys do not need to be sent in requests to read files from GCS buckets because decryption occurs automatically within GCS. This means that if you use the `terraform_remote_state` [data source](/language/state/remote-state-data) to access KMS-encrypted state, you do not need to specify the KMS key in the data source's `config` object.
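As a sketch of that last point (the bucket name and prefix are hypothetical placeholders), a `terraform_remote_state` data source reading KMS-encrypted state needs no key material:

```hcl
data "terraform_remote_state" "network" {
  backend = "gcs"

  config = {
    bucket = "my-terraform-state" # hypothetical; no kms_encryption_key entry is needed
    prefix = "network"
  }
}
```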
-
-~> **Important:** To use customer-managed encryption keys, you need to create a key and give your project's GCS service agent permission to use it with the Cloud KMS CryptoKey Encrypter/Decrypter predefined role.
-
 ## Configuration Variables

-!> **Warning:** We recommend using environment variables to supply credentials and other sensitive data. If you use `-backend-config` or hardcode these values directly in your configuration, Terraform includes these values in both the `.terraform` subdirectory and in plan files. Refer to [Credentials and Sensitive Data](/language/settings/backends/configuration#credentials-and-sensitive-data) for details.
+!> **Warning:** We recommend using environment variables to supply credentials and other sensitive data. If you use `-backend-config` or hardcode these values directly in your configuration, Terraform will include these values in both the `.terraform` subdirectory and in plan files. Refer to [Credentials and Sensitive Data](/language/settings/backends/configuration#credentials-and-sensitive-data) for details.

 The following configuration options are supported:

-- `bucket` - (Required) The name of the GCS bucket. This name must be
-  globally unique. For more information, see [Bucket Naming
   Guidelines](https://cloud.google.com/storage/docs/bucketnaming.html#requirements).
 - `credentials` / `GOOGLE_BACKEND_CREDENTIALS` / `GOOGLE_CREDENTIALS` - (Optional) Local path to Google Cloud Platform account credentials in JSON
-  format. If unset, the path uses [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials). The provided credentials must have the Storage Object Admin role on the bucket.
+  format. If unset, [Google Application Default
+  Credentials](https://developers.google.com/identity/protocols/application-default-credentials)
+  are used. The provided credentials must have Storage Object Admin role on the bucket.
   **Warning**: if using the Google Cloud Platform provider as well, it will also pick up the `GOOGLE_CREDENTIALS` environment variable.
 - `impersonate_service_account` - (Optional) The service account to impersonate for accessing the State Bucket.
@@ -127,12 +103,6 @@
 - `prefix` - (Optional) GCS prefix inside the bucket. Named states for workspaces are stored in an object called `<prefix>/<name>.tfstate`.
 - `encryption_key` / `GOOGLE_ENCRYPTION_KEY` - (Optional) A 32 byte base64
-  encoded 'customer-supplied encryption key' used when reading and writing state files in the bucket. For
-  more information see [Customer-supplied Encryption
-  Keys](https://cloud.google.com/storage/docs/encryption/customer-supplied-keys).
-- `kms_encryption_key` / `GOOGLE_KMS_ENCRYPTION_KEY` - (Optional) A Cloud KMS key ('customer-managed encryption key')
-  used when reading and writing state files in the bucket.
-  Format should be `projects/{{project}}/locations/{{location}}/keyRings/{{keyRing}}/cryptoKeys/{{name}}`.
-  For more information, including IAM requirements, see [Customer-managed Encryption
-  Keys](https://cloud.google.com/storage/docs/encryption/customer-managed-keys).
-- `storage_custom_endpoint` / `GOOGLE_BACKEND_STORAGE_CUSTOM_ENDPOINT` / `GOOGLE_STORAGE_CUSTOM_ENDPOINT` - (Optional) A URL containing three parts: the protocol, the DNS name pointing to a Private Service Connect endpoint, and the path for the Cloud Storage API (`/storage/v1/b`, [see here](https://cloud.google.com/storage/docs/json_api/v1/buckets/get#http-request)). You can either use [a DNS name automatically made by the Service Directory](https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#configure-p-dns) or a [custom DNS name](https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#configure-dns-default) made by you. For example, if you create an endpoint called `xyz` and want to use the automatically-created DNS name, you should set the field value as `https://storage-xyz.p.googleapis.com/storage/v1/b`. For help creating a Private Service Connect endpoint using Terraform, [see this guide](https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#terraform_1).
+  encoded 'customer supplied encryption key' used to encrypt all state. For
+  more information see [Customer Supplied Encryption
+  Keys](https://cloud.google.com/storage/docs/encryption#customer-supplied).
diff --git a/website/docs/language/upgrade-guides/index.mdx b/website/docs/language/upgrade-guides/index.mdx
index 73670f510a35..9ffb4db3c96c 100644
--- a/website/docs/language/upgrade-guides/index.mdx
+++ b/website/docs/language/upgrade-guides/index.mdx
@@ -1,13 +1,124 @@
 ---
-page_title: Upgrading to Terraform v1.4
-description: Upgrading to Terraform v1.4
+page_title: Upgrading to Terraform v1.3
+description: Upgrading to Terraform v1.3
 ---

-# Upgrading to Terraform v1.4
+# Upgrading to Terraform v1.3

--> Do you need the upgrade guide for an earlier version of Terraform? Use the
-version selector in the navigation bar to select the version you are intending
-to upgrade to.
+-> **Note:** Use the version selector to view the upgrade guides for older Terraform versions.

-Terraform v1.4 is still under development and so its upgrade guide is not yet
-finalized. This should be updated before the final v1.4.0 release.
+Terraform v1.3 is a minor release in the stable Terraform v1.0 series.
+
+Terraform v1.3 continues to honor [the Terraform v1.0 Compatibility Promises](https://www.terraform.io/language/v1-compatibility-promises), but there are some behavior changes outside of those promises that may affect a small number of users. Specifically, the following updates may require additional upgrade steps:
+
+* [Removal of Deprecated State Storage Backends](#removal-of-deprecated-state-storage-backends)
+* [Concluding the Optional Attributes Experiment](#concluding-the-optional-attributes-experiment)
+* [AzureRM Backend Requires Microsoft Graph](#azurerm-backend-requires-microsoft-graph)
+* [Other Small Changes](#other-small-changes)
+
+If you encounter any problems during upgrading which are not covered by this guide, or if the migration instructions don't work for you, please start a topic in [the Terraform community forum](https://discuss.hashicorp.com/c/terraform-core/27) to discuss it.
+
+## Removal of Deprecated State Storage Backends
+
+Terraform currently requires that all supported state storage backends be maintained in the Terraform codebase and compiled into Terraform CLI. Terraform therefore contains a mixture of backends maintained by the Terraform CLI team, backends maintained by other teams at HashiCorp, and backends maintained by third-party contributors.
+
+There are a number of backends that we have so far preserved on a best-effort basis despite them not having any active maintainers. Due to the overhead of continuing to support them, we deprecated the following unmaintained backends in Terraform v1.2.3:
+* `artifactory`
+* `etcd`
+* `etcdv3`
+* `manta`
+* `swift`
+
+All of these deprecated state storage backends are now removed in Terraform v1.3. If you are using any of these, you will need to migrate to another state storage backend using Terraform v1.2 before you upgrade to Terraform v1.3.
+
+The following sections describe some specific migration considerations for each removed backend.
+
+### Migrating from the `artifactory` backend
+
+From JFrog Artifactory 7.38.4 or later, Artifactory supports the state storage protocol used by Terraform's `remote` backend, using a special repository type called a [Terraform Backend Repository](https://www.jfrog.com/confluence/display/JFROG/Terraform+Backend+Repository).
+
+The `remote` backend was available in Terraform v1.2 and remains available in Terraform v1.3. If you are using the `artifactory` backend, we recommend migrating to the `remote` backend, using the configuration instructions provided by JFrog, before upgrading to Terraform v1.3.
+
+### Migrating from the `etcd` and `etcdv3` backends
+
+The two generations of state storage backend for [etcd](https://etcd.io/) have been removed and have no direct replacement.
+
+If you are [using etcd in conjunction with Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/), you might choose to migrate to [the `kubernetes` state storage backend](https://www.terraform.io/language/settings/backends/kubernetes), which stores Terraform state snapshots under a Kubernetes secret.
+
+### Migrating from the `manta` backend
+
+The `manta` backend was written for an object storage system developed by Joyent. However, the backend targeted the original implementation of that system, which shut down in November 2019.
+
+This backend has therefore been unmaintained for several years and is now removed without replacement.
+
+### Migrating from the `swift` backend
+
+The `swift` backend supported OpenStack's object storage system, Swift. This backend has not had an active maintainer for some time and has not kept up with new features and changes to Swift itself, and so it is now removed.
+
+OpenStack Swift contains an implementation of the Amazon S3 API. Although [Terraform's `s3` backend](https://www.terraform.io/language/settings/backends/s3) officially supports only Amazon's implementation of that API, we have heard from users that they have had success using that backend to store Terraform state snapshots in Swift.
+
+If you intend to migrate to the `s3` backend, you should complete that migration with Terraform v1.2 before you upgrade to Terraform v1.3.
+
+## Concluding the Optional Attributes Experiment
+
+Terraform v0.14.0 introduced a new _experimental_ language feature for declaring object type constraints with optional attributes in your module's input variables. Thanks to feedback from those who tried the experiment, a refinement of that functionality is now stabilized in Terraform v1.3.
+
+For general information on this new feature, see [Optional Object Type Attributes](/language/expressions/type-constraints#optional-object-type-attributes).
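As a brief illustration, a minimal sketch of the stabilized syntax; the variable name and attributes are hypothetical placeholders:

```hcl
variable "service" {
  type = object({
    name     = string                    # required attribute
    port     = optional(number)          # may be omitted; callers see null
    protocol = optional(string, "https") # omitted values take the default
  })
}
```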
+
+If you have any experimental modules that were using the feature in its previous form, you can now adapt those modules for production use with the final form of the feature by making the following changes:
+
+1. Remove the `experiments = [module_variable_optional_attrs]` experiment opt-in from your module, and replace it with a Terraform version constraint inside the same `terraform` block:
+
+   ```hcl
+   terraform {
+     required_version = ">= 1.3.0"
+   }
+   ```
+
+   This version constraint makes it explicit that your module is using language features added in Terraform v1.3.0, which earlier versions of Terraform can use to give better feedback about the module not being supported there.
+2. If you were using the experimental `defaults` function, you will need to replace your use of it with the new syntax for declaring defaults as part of your main type constraint.
+
+   For example, you can declare a default value for an optional string attribute using a second argument to the `optional` syntax, inline in your type constraint expression:
+
+   ```hcl
+   type = object({
+     example = optional(string, "default value")
+   })
+   ```
+
+Because the experiment is concluded, the experimental implementation of this feature is no longer available, and Terraform v1.3.0 and later will not accept any module that contains the explicit experiment opt-in.
+
+As with all new language features, you should take care to upgrade Terraform for all configurations which use a shared module before you use optional attributes in that shared module. Any module which must remain compatible with older versions of Terraform must not declare any optional attributes. Once all users of a module are using Terraform v1.3.0 or later, you can safely begin using optional attribute declarations.
+
+## AzureRM Backend Requires Microsoft Graph
+
+In response to [Microsoft's deprecation of Azure AD Graph](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-faq), Terraform v1.1 marked the beginning of a deprecation cycle for support of Azure AD Graph in Terraform's `azurerm` backend.
+
+That deprecation cycle has now concluded with the total removal of Azure AD Graph support in Terraform v1.3. The AzureRM backend now supports only [Microsoft Graph](https://docs.microsoft.com/en-us/graph/overview).
+
+If you previously set `use_microsoft_graph = true` in your backend configuration to explicitly opt in to using the Microsoft Graph client instead of Azure AD Graph, you now need to remove that argument from your backend configuration.
+
+If you remove this setting in an already-initialized Terraform working directory, Terraform will detect it as a configuration change and prompt you to decide whether to migrate state to a new location. Because removing that setting does not change the physical location of the state snapshots, you should _not_ tell Terraform to migrate the state to a new location and should instead use the `-reconfigure` option to `terraform init`:
+
+```
+terraform init -reconfigure
+```
+
+If you did not previously set the `use_microsoft_graph` argument then you do not need to make any changes. Microsoft Graph is now used by default and is the only available implementation.
+
+## Other Small Changes
+
+There are some other changes in Terraform v1.3 that we don't expect to have a great impact but may affect a small number of users:
+* `terraform import` no longer supports the option `-allow-missing-config`.
+  This option was originally added as a backward-compatibility helper when Terraform first began making use of the configuration during import, but the behavior of the import command was significantly limited by the requirement to be able to work without configuration, and so configuration is now required.
+
+  In most cases it is sufficient to write just an empty `resource` block whose resource type and name match the address given on the `terraform import` command line (see the sketch at the end of this guide). This will cause Terraform to associate the import operation with the default provider configuration for the provider that the resource belongs to.
+* `terraform show -json` previously simplified the "unknown" status for all output values to be a single boolean value, even though an output value of a collection or structural type can potentially be only partially unknown.
+
+  The JSON output now accurately describes partially-unknown output values in the same way as it describes partially-unknown values in resource attributes. Any consumer of the plan JSON format which was relying on output values always being either known or entirely unknown must be changed to support more complex situations in the `after_unknown` property of [the JSON Change Representation](https://www.terraform.io/internals/json-format#change-representation).
+* When making requests to HTTPS servers, Terraform now rejects invalid TLS handshakes that have duplicate extensions, as required by RFC 5246 section 7.4.1.4 and RFC 8446 section 4.2. This may cause new errors when interacting with existing buggy or misconfigured TLS servers, but it should not affect correct servers.
+
+  If you see new HTTPS, TLS, or SSL-related error messages after upgrading to Terraform v1.3, that may mean that the server that Terraform tried to access has an incorrect implementation of the relevant protocols and needs an upgrade to a correct version for continued use with Terraform.
+
+  Similar problems can also arise on networks that use HTTPS-intercepting [middleboxes](https://en.wikipedia.org/wiki/Middlebox), such as deep packet inspection firewalls. In that case, the protocol implementation of the middlebox must also be correct in order for Terraform to successfully access HTTPS servers through it.
+
+  This only applies to requests made directly by Terraform CLI, such as provider installation and remote state storage. Terraform providers are separate programs which decide their own policy for handling of TLS handshakes.
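As a closing sketch of the empty-block workflow mentioned under the `-allow-missing-config` note above; the resource type, name, and ID are hypothetical placeholders:

```hcl
# A stub written before running `terraform import`; Terraform records the
# imported object at this address, and you then fill in the arguments by hand.
resource "aws_instance" "example" {
}
```

```
terraform import aws_instance.example i-1234567890abcdef0
```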