
Check Provider Schema Attribute Configurability During Parsing/Validation #30669

Closed
bflad opened this issue Mar 14, 2022 · 8 comments
Labels: config, enhancement, providers/protocol (Potentially affecting the Providers Protocol and SDKs)

Comments

@bflad (Member) commented Mar 14, 2022

Current Terraform Version

v1.1.7

Use-cases

Providers are generally required to notify Terraform CLI about the "configurability" of a managed resource, data source, or provider schema attribute via the Required, Optional, and Computed fields in the response to the GetProviderSchema RPC. This information is already used in various Terraform CLI logic, for example, to determine whether providers are allowed to respond with their own value for the attribute.

Interestingly enough though, providers developed without terraform-plugin-sdk need to verify themselves whether a Computed-only attribute received a configured value (from the Terraform configuration files) during validate, plan, or apply. Without this provider-side validation, an error diagnostic such as the one below is thrown:

        Error: Provider produced invalid plan
        
        Provider "registry.terraform.io/hashicorp/framework" planned an invalid value
        for framework_user.foo.date_joined: planned value cty.NullVal(cty.String)
        does not match config value cty.StringVal("oops").
        
        This is a bug in the provider, which should be reported in the provider's own
        issue tracker.
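For illustration, a configuration along these lines would trigger that diagnostic, assuming date_joined is a Computed-only attribute in the provider's schema (the resource and attribute names are taken from the error message above; the surrounding configuration is hypothetical):

```hcl
resource "framework_user" "foo" {
  # date_joined is Computed-only in the provider schema; configuring it
  # here is what produces the "Provider produced invalid plan" error.
  date_joined = "oops"
}
```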

It is likely that this type of configuration validation was overlooked because terraform-plugin-sdk already implemented it previously; now that there are more options for provider development, we should investigate whether Terraform CLI should handle validation like this itself.

Attempted Solutions

For terraform-plugin-framework, adding additional internal logic to perform this same validation.

For terraform-plugin-go, finding a place to document this validation requirement.

For providers developed outside these, we would need to document it somewhere, such as in the protocol documentation.

Proposal

It appears configuration errors such as this could be caught and handled automatically, before involving the provider, during the configuration parsing and/or validation logic of the validate, plan, and apply operations. Terraform CLI should have enough schema information to know whether a configuration value is allowed to be non-null.

After parsing a provider, managed resource, or data source configuration during the validate graph walk:

  • Walk the cty.Value for the configuration
  • Check each attribute's value; if it is cty.NullVal, continue.
  • Otherwise, check the schema for Required or Optional; if neither is set, return an error diagnostic.

If this validation passes, Terraform should continue with the Validate*Config RPC call to the provider.
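A minimal sketch of that walk in Go, using simplified stand-in types rather than the real cty and Terraform schema packages (Attribute, validateConfig, and the map-based config model here are all hypothetical stand-ins):

```go
package main

import "fmt"

// Attribute models the configurability fields a provider reports in its
// GetProviderSchema response (a simplified stand-in, not the real schema type).
type Attribute struct {
	Name     string
	Required bool
	Optional bool
	Computed bool
}

// validateConfig applies the proposed rule: a non-null configuration value is
// only allowed when the schema marks the attribute Required or Optional.
// A nil map value stands in for cty.NullVal.
func validateConfig(schema []Attribute, config map[string]any) []error {
	var diags []error
	for _, attr := range schema {
		v, ok := config[attr.Name]
		if !ok || v == nil {
			continue // null value: nothing to check
		}
		if !attr.Required && !attr.Optional {
			diags = append(diags, fmt.Errorf(
				"cannot configure a value for %q: its value will be decided automatically",
				attr.Name))
		}
	}
	return diags
}

func main() {
	schema := []Attribute{
		{Name: "email", Required: true},
		{Name: "date_joined", Computed: true}, // Computed-only
	}
	config := map[string]any{
		"email":       "foo@example.com",
		"date_joined": "oops",
	}
	for _, diag := range validateConfig(schema, config) {
		fmt.Println(diag)
	}
}
```

A real implementation would recurse into nested blocks and object attributes while walking the cty.Value, but the core rule is the same: a non-null value is only acceptable where the schema declares Required or Optional.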

References

For reference, terraform-plugin-sdk returned the following error diagnostic during the ValidateResourceConfig/ValidateDataSourceConfig/PrepareProviderConfig RPCs if it determined the attribute was Computed-only:

        Error: Value for unconfigurable attribute
        
          with corner_user.foo,
          on terraform_plugin_test.tf line 6, in resource "corner_user" "foo":
           6:   date_joined = "oops"
        
        Can't configure a value for "date_joined": its value will be decided
        automatically based on the result of applying this configuration.
@bflad (Member, Author) commented Mar 14, 2022

Another validation implemented by terraform-plugin-sdk around this logic was checking if a Required attribute was configured. For resource and data source configurations, this should be safe to unilaterally validate as well, if it is not already. We would need to be careful with provider configurations though as PrepareProviderConfig allowed for providers to declare Required with a DefaultFunc (where really it should've been marked Optional, but allowed for backwards compatibility).

@apparentlymart (Member)

Thanks for recording this, @bflad.

The "requiredness" is currently handled by mapping the Terraform schema idea of required onto the Required field of hcldec.AttrSpec, which is ultimately what's decoding these things internally. That's also what handles the type conversion, where needed.

One way we could try to address this is to make Terraform generate a hcldec.ValidateSpec with a rule like what @bflad described above. I believe we did have some use of ValidateSpec in earlier versions of Terraform but we don't seem to have any today, so I wonder if we stopped doing that for a particular reason that we should consider before reintroducing it.

If that won't work then we can of course add some additional logic into the main validation codepath in Terraform Core, but I mention the above just because it seems to be the place where we're currently handling concerns like this, and so it would be nice to handle everything in one place if possible.
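As a rough illustration of the shape this could take, the sketch below mirrors how an hcldec.ValidateSpec-style wrapper might attach the rule described above to an attribute spec. All types here (Spec, Value, Diagnostic, AttrSpec, ValidateSpec) are simplified stand-ins, not the real hcl/hcldec/cty types:

```go
package main

import "fmt"

// Simplified stand-ins for hcl/cty types (not the real API).
type Value struct {
	Null bool
	Str  string
}

type Diagnostic struct{ Summary string }

// Spec decodes one piece of configuration, loosely like hcldec.Spec.
type Spec interface {
	Decode(raw map[string]Value) (Value, []Diagnostic)
}

// AttrSpec decodes a single named attribute, loosely like hcldec.AttrSpec.
type AttrSpec struct{ Name string }

func (s AttrSpec) Decode(raw map[string]Value) (Value, []Diagnostic) {
	if v, ok := raw[s.Name]; ok {
		return v, nil
	}
	return Value{Null: true}, nil
}

// ValidateSpec wraps another Spec and runs an extra rule on its decoded
// result, loosely like hcldec.ValidateSpec.
type ValidateSpec struct {
	Wrapped Spec
	Func    func(Value) []Diagnostic
}

func (s ValidateSpec) Decode(raw map[string]Value) (Value, []Diagnostic) {
	v, diags := s.Wrapped.Decode(raw)
	return v, append(diags, s.Func(v)...)
}

func main() {
	// Reject any non-null configuration value for a computed-only attribute.
	spec := ValidateSpec{
		Wrapped: AttrSpec{Name: "date_joined"},
		Func: func(v Value) []Diagnostic {
			if !v.Null {
				return []Diagnostic{{Summary: `cannot configure a value for "date_joined"`}}
			}
			return nil
		},
	}
	_, diags := spec.Decode(map[string]Value{"date_joined": {Str: "oops"}})
	fmt.Println(len(diags), "diagnostic(s)")
}
```

In the real API the validation callback would receive a cty.Value and return hcl.Diagnostics; the wrapping pattern, rather than the exact types, is what matters here.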

@apparentlymart (Member)

I can't find any released older version of Terraform which uses hcldec.ValidateSpec, so I have to assume that whatever I'm thinking of didn't make it into a stable release. Maybe there are some old clues lurking in the changes from the v0.12.0 development period, but probably easier to just try out hcldec.ValidateSpec anew here and see if it does anything surprising/unwanted.

@detro commented Jun 9, 2022

Leaving a comment here for tracking.

I have just encountered this issue in my work to port the TLS Provider to the framework: as part of this, there is a field that, after deprecation (during which it was Computed + Optional), is now going to be set to Computed-only.

We had tests covering this during the deprecation phase, but of course with the Computed-only schema they now hit the same error as above.

@bflad (Member, Author) commented Jul 22, 2022

terraform-plugin-framework v0.9.0 and later include error diagnostics "Invalid Configuration for Read-Only Attribute" and "Missing Configuration for Required Attribute" during the Validate*Config RPCs to account for this.

Another related part of this is that we also have a similar issue with Terraform 0.12 through 0.15.1 not returning diagnostics for Block MaxItems/MinItems, which we will also handle on the provider side, also during the Validate*Config RPCs.

These cases do raise an interesting question, though, that might be worth pondering. Given that HCL's validation is mostly syntax- and type-based in nature, except for a limited number of cases such as block count, while provider-based validation can handle everything surrounding the semantics of schema-based data values, is it actually worth drawing a delineation at all? I could see a few pros and cons to saying that providers are wholly responsible for validation after Terraform's configuration handling ensures that the type/value structure itself is valid. More concretely, I am referring to not necessarily sending Required/MaxItems/MinItems across the protocol.

Some potential benefits:

  • Practitioners would receive a "bundle" of syntax-related errors (as best as possible via HCL partial decoding) from Terraform and a "bundle" of semantics-related errors from the provider (generally via SDK handling). This feels like a better practitioner experience, aside from some edge cases where HCL may catch a few of the more "glaring" semantic rules currently supported; I'm guessing it's very unlikely that these rules would be expanded further in HCL or Terraform, since the provider can already do all of this.
  • The protocol documentation, for the benefit of other potential protocol implementations, can explicitly state that the Required field does not participate in actual validation, that Computed by itself does not participate in actual validation, and (regardless of the outcome of this discussion) that MaxItems/MinItems only participate after Terraform 0.15.2.
  • In the framework, we can stop exposing MaxItems/MinItems directly to provider developers (these exist only for block definitions), instead asking provider developers to implement validators similar to those for attributes and nested attributes. The consistency might be nice and would reduce confusion about why this functionality is only supported for blocks.
  • Making this sort of decision does not affect Terraform's compatibility promises as this is provider-supplied data. Providers can already skip over these implementation details.

Some potential tradeoffs:

  • Dropping this sort of information from the protocol makes it more difficult for tooling to use something like terraform providers schema -json to extract these details from a provider. We have noodled in the past on eschewing the protocol "barrier" (e.g. needing to update the protocol and have core understand provider implementations in depth) to implement richer extraction of provider information. For example, being able to fully generate resource documentation with separated validation descriptors, import examples, and more.
  • There may be other potential use cases for this data in Terraform Core itself that I am personally not aware of from my quick thinking on this subject.

I'm curious if you have any particular thoughts here though! If HCL/Terraform does have further plans for value-based validation, then that might render a lot of this thinking moot.

@jbardin (Member) commented Aug 4, 2022

I agree that leaving the bulk of the validation to the provider/SDK is a good idea here. For example, the Min/Max values are not really useful in core because, with Terraform's dynamic blocks, we cannot always statically determine the number of blocks.

I took a look at actually using a ValidateSpec to handle this; it turned out not to be all that useful in this case without significant refactoring. The ValidateSpec only operates on a cty.Value, so in order to get good diagnostics you really need to wrap every nested spec in its own validation. This has to be done conditionally as well, since the proposed validation can only be used on the raw config, where computed attributes cannot exist.

Walking the schema and decoded config and manually checking for non-null computed values would probably be a lot less invasive (and a similar walk is probably already done somewhere that could also serve to validate this).

@apparentlymart (Member)

I think our current situation essentially unfolded from us not having a clear model during early v0.12 development of which parts of the work were being handled by Terraform Core and which parts were being handled by the SDK, because at that point everything was tightly coupled together in a single codebase.

I'm totally on board with the SDKs declining to return this extra metadata that Terraform Core doesn't need to do its work, of which MinItems and MaxItems are an obvious place to start, given that they are clearly not needed for Terraform Core to determine the static type information for a resource type (or similar). I'm skeptical that those rules should be enforced even on the provider side of the protocol, aside from a rule that there should be at least one block of a particular type, because as @jbardin noted, the dynamic block functionality can prevent us from knowing the correct number of blocks to report during the planning step anyway, and SDKv2's rather brittle handling of blocks prevented us from signalling that situation clearly via the protocol.

Terraform Core does rely on some of the other information to implement the safety checks for the resource instance change lifecycle rules, and so e.g. returning misleading information about what is required vs. optional or what is computed vs. not may cause those safety checks to raise false positives.


terraform providers schema -json is likewise there primarily to give callers the information they'll need to properly interpret the output of other commands like terraform show -json, and so there's no requirement to emit anything there which is not needed to successfully decode the other formats. It's not intended for broader uses like documentation generation, though of course it can be used as a last resort in the absence of richer information and we've ended up cramming a few things in there to do that in the absence of a better answer so far.

If an SDK wants to emit additional schema detail that Terraform Core and consumers of its output don't strictly need then I think it would be better to design a protocol for the provider plugin to offer that to a plugin client directly, and then this SDK-specific protocol can be tailored to return whatever level of detail is appropriate for a particular SDK and its tightly-coupled supporting tools like documentation generators. That protocol and its associated client tool could then evolve totally independently of the protocol defined by Terraform Core.

I think this should already be possible today by registering an additional gRPC service alongside the ones that implement the tfplugin5 and tfplugin6 services, and then providing a small client library which wraps the additional service using go-plugin for schema tools to use. A gRPC server can simultaneously export a variety of independent services as long as they all have unique service names.

@bflad (Member, Author) commented May 14, 2024

I don't think there is anything actionable here (attribute validation will continue to remain on the provider side of the protocol), so I am closing this to reduce issue noise.

@bflad closed this as not planned on May 14, 2024.