schema: Propagate "sensitive" into the core schema in all cases #504

Closed
wants to merge 1 commit

Conversation

apparentlymart
Member

Previously we would translate the SDK's idea of "sensitive" to Terraform Core's idea of "sensitive" only in the case where a particular field we were translating into a core schema attribute was directly marked as sensitive.

However, that fails to cover situations where the mapping from SDK schema to core schema is inexact: nested resources that are marked as Computed, and any situation where a provider forces a nested resource field to be represented as an attribute when there's a sensitive field inside it.

This change addresses those two situations in an effort to ensure that every Sensitive: true in the provider's schema is respected somehow, even if the handling of it is more conservative than what the provider schema requested.

The effect of this change should be mostly cosmetic, though changing a nested resource to be a block will cause Terraform Core to use the different safety rules that apply to blocks when verifying that a provider is behaving correctly. However, those safety rules are already in effect for any nested resource that is both Optional: true and Computed: true, so we're already exercising that codepath in many existing resource types without any problems.

The two cosmetic changes this implies are:

  • When rendering a plan containing a nested block that has Optional: false, Computed: true, where that field or any of its nested fields are also Sensitive: true, the plan output will render it as a sequence of nested blocks rather than as a single list/set attribute. That then allows the plan renderer to apply the (sensitive) presentation to the individual nested fields that are sensitive.
  • For a schema field that uses the "attributes as blocks" hack, if any of its sub-fields are marked as Sensitive: true, then the top-most field with attributes-as-blocks mode activated will be treated as entirely sensitive, even if there are other nested fields inside that are not marked as sensitive in the schema, and so the entire substructure will be masked out as (sensitive) in the plan.

This is a potential short-term fix for #201, implementing both of the ideas I previously posted in a comment on that issue. Future changes to the provider protocol and core schema model might allow mapping the SDK model more directly, or future changes to the SDK model might make it more closely match the core schema model, but until one of those things happens it's better to err on the side of potentially marking more than necessary as sensitive rather than less than necessary.

With that said, I've not tested this against any of the real-world situations discussed in #201 because I don't have suitable credentials to run the acceptance tests for those providers, so I'd like some help to try this out and see whether there are any practical implications of it that I've not considered.
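
To make the first situation concrete, here is a minimal sketch (not taken from this PR or any real provider; the field names are illustrative, loosely modeled on kube_config) of the kind of SDK schema involved: a Computed-only nested resource whose element schema contains Sensitive fields, which previously lost those flags when flattened into a single core attribute:

```go
package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			// A Computed-only nested resource is flattened into a single
			// list-of-objects attribute in the core schema; the per-field
			// Sensitive flags below were previously dropped in that case.
			"kube_config": {
				Type:     schema.TypeList,
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"host": {
							Type:     schema.TypeString,
							Computed: true,
						},
						"password": {
							Type:      schema.TypeString,
							Computed:  true,
							Sensitive: true,
						},
					},
				},
			},
		},
	}
}
```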

Previously we would translate the SDK's idea of "sensitive" to Terraform
Core's idea of "sensitive" only in the case where a particular field
we were translating into a core schema attribute was directly marked
as sensitive.

However, that fails to cover situations where the mapping from SDK schema
to core schema is inexact: nested resources that are marked as Computed
and any situation where a provider forces a nested resource field to be
represented as an attribute when there's a sensitive field inside it.

This change addresses those two situations in an effort to ensure that
every Sensitive: true in the provider's schema is respected somehow, even
if the handling of it is more conservative than what the provider schema
requested.

The effect of this change should be _mostly_ cosmetic, though changing a
nested resource to be a block will cause Terraform Core to use the
different safety rules that apply to blocks when verifying that a provider
is behaving correctly. Providers using this SDK are currently opted out of
the enforcement of those rules though, so any changes in the result of
those checks should only affect warnings in Terraform's debug log, not
end-user-visible behavior.

Future changes to the provider protocol and core schema model might allow
mapping the SDK model more directly, or future changes to the SDK model
might make it more closely match the core schema model, but until one of
those things happen it's better to err on the side of potentially marking
more than necessary as sensitive rather than less than necessary.
@apparentlymart apparentlymart self-assigned this Jul 16, 2020
@paddycarver paddycarver added this to the v2.x (next minor) milestone Jul 29, 2020
@paddycarver paddycarver added the bug Something isn't working label Jul 29, 2020
@paddycarver paddycarver requested a review from a team July 29, 2020 13:36
@kmoe kmoe self-assigned this Aug 5, 2020
Member

@kmoe kmoe left a comment

This makes sense to me, and I've tested it against the situation described in hashicorp/terraform-provider-azurerm#4105 with a similar config.

terraform plan with SDK v2.0.0:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_kubernetes_cluster.aks_with_aad_parameters[0] will be created
  + resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
      + dns_prefix              = "foodnsprefix"
      + fqdn                    = (known after apply)
      + id                      = (known after apply)
      + kube_admin_config       = (known after apply)
      + kube_admin_config_raw   = (sensitive value)
      + kube_config             = (known after apply)
      + kube_config_raw         = (sensitive value)
      + kubelet_identity        = (known after apply)
      + kubernetes_version      = (known after apply)
      + location                = "westus"
      + name                    = "foocluster"
      + node_resource_group     = (known after apply)
      + private_cluster_enabled = (known after apply)
      + private_fqdn            = (known after apply)
      + private_link_enabled    = (known after apply)
      + resource_group_name     = "foorgname"
      + sku_tier                = "Free"

      + addon_profile {
          + aci_connector_linux {
              + enabled     = (known after apply)
              + subnet_name = (known after apply)
            }

          + azure_policy {
              + enabled = (known after apply)
            }

          + http_application_routing {
              + enabled                            = (known after apply)
              + http_application_routing_zone_name = (known after apply)
            }

          + kube_dashboard {
              + enabled = (known after apply)
            }

          + oms_agent {
              + enabled                    = (known after apply)
              + log_analytics_workspace_id = (known after apply)
              + oms_agent_identity         = (known after apply)
            }
        }

      + auto_scaler_profile {
          + balance_similar_node_groups      = (known after apply)
          + max_graceful_termination_sec     = (known after apply)
          + scale_down_delay_after_add       = (known after apply)
          + scale_down_delay_after_delete    = (known after apply)
          + scale_down_delay_after_failure   = (known after apply)
          + scale_down_unneeded              = (known after apply)
          + scale_down_unready               = (known after apply)
          + scale_down_utilization_threshold = (known after apply)
          + scan_interval                    = (known after apply)
        }

      + default_node_pool {
          + max_pods             = (known after apply)
          + name                 = "fooopool"
          + node_count           = 1
          + orchestrator_version = (known after apply)
          + os_disk_size_gb      = (known after apply)
          + type                 = "VirtualMachineScaleSets"
          + vm_size              = "Standard_D2_v2"
        }

      + network_profile {
          + dns_service_ip     = (known after apply)
          + docker_bridge_cidr = (known after apply)
          + load_balancer_sku  = (known after apply)
          + network_plugin     = (known after apply)
          + network_policy     = (known after apply)
          + outbound_type      = (known after apply)
          + pod_cidr           = (known after apply)
          + service_cidr       = (known after apply)

          + load_balancer_profile {
              + effective_outbound_ips    = (known after apply)
              + idle_timeout_in_minutes   = (known after apply)
              + managed_outbound_ip_count = (known after apply)
              + outbound_ip_address_ids   = (known after apply)
              + outbound_ip_prefix_ids    = (known after apply)
              + outbound_ports_allocated  = (known after apply)
            }
        }

      + role_based_access_control {
          + enabled = true

          + azure_active_directory {
              + tenant_id = (known after apply)
            }
        }

      + windows_profile {
          + admin_password = (sensitive value)
          + admin_username = (known after apply)
        }
    }

  # azurerm_resource_group.test will be created
  + resource "azurerm_resource_group" "test" {
      + id       = (known after apply)
      + location = "westus"
      + name     = "foorgname"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

terraform plan with this PR's change to the SDK:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_kubernetes_cluster.aks_with_aad_parameters[0] will be created
  + resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
      + dns_prefix              = "foodnsprefix"
      + fqdn                    = (known after apply)
      + id                      = (known after apply)
      + kube_admin_config_raw   = (sensitive value)
      + kube_config_raw         = (sensitive value)
      + kubelet_identity        = (known after apply)
      + kubernetes_version      = (known after apply)
      + location                = "westus"
      + name                    = "foocluster"
      + node_resource_group     = (known after apply)
      + private_cluster_enabled = (known after apply)
      + private_fqdn            = (known after apply)
      + private_link_enabled    = (known after apply)
      + resource_group_name     = "foorgname"
      + sku_tier                = "Free"

      + addon_profile {
          + aci_connector_linux {
              + enabled     = (known after apply)
              + subnet_name = (known after apply)
            }

          + azure_policy {
              + enabled = (known after apply)
            }

          + http_application_routing {
              + enabled                            = (known after apply)
              + http_application_routing_zone_name = (known after apply)
            }

          + kube_dashboard {
              + enabled = (known after apply)
            }

          + oms_agent {
              + enabled                    = (known after apply)
              + log_analytics_workspace_id = (known after apply)
              + oms_agent_identity         = (known after apply)
            }
        }

      + auto_scaler_profile {
          + balance_similar_node_groups      = (known after apply)
          + max_graceful_termination_sec     = (known after apply)
          + scale_down_delay_after_add       = (known after apply)
          + scale_down_delay_after_delete    = (known after apply)
          + scale_down_delay_after_failure   = (known after apply)
          + scale_down_unneeded              = (known after apply)
          + scale_down_unready               = (known after apply)
          + scale_down_utilization_threshold = (known after apply)
          + scan_interval                    = (known after apply)
        }

      + default_node_pool {
          + max_pods             = (known after apply)
          + name                 = "fooopool"
          + node_count           = 1
          + orchestrator_version = (known after apply)
          + os_disk_size_gb      = (known after apply)
          + type                 = "VirtualMachineScaleSets"
          + vm_size              = "Standard_D2_v2"
        }

      + kube_admin_config {
          + client_certificate     = (sensitive value)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (sensitive value)
          + host                   = (known after apply)
          + password               = (sensitive value)
          + username               = (known after apply)
        }

      + kube_config {
          + client_certificate     = (sensitive value)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (sensitive value)
          + host                   = (known after apply)
          + password               = (sensitive value)
          + username               = (known after apply)
        }

      + network_profile {
          + dns_service_ip     = (known after apply)
          + docker_bridge_cidr = (known after apply)
          + load_balancer_sku  = (known after apply)
          + network_plugin     = (known after apply)
          + network_policy     = (known after apply)
          + outbound_type      = (known after apply)
          + pod_cidr           = (known after apply)
          + service_cidr       = (known after apply)

          + load_balancer_profile {
              + effective_outbound_ips    = (known after apply)
              + idle_timeout_in_minutes   = (known after apply)
              + managed_outbound_ip_count = (known after apply)
              + outbound_ip_address_ids   = (known after apply)
              + outbound_ip_prefix_ids    = (known after apply)
              + outbound_ports_allocated  = (known after apply)
            }
        }

      + role_based_access_control {
          + enabled = true

          + azure_active_directory {
              + tenant_id = (known after apply)
            }
        }

      + windows_profile {
          + admin_password = (sensitive value)
          + admin_username = (known after apply)
        }
    }

  # azurerm_resource_group.test will be created
  + resource "azurerm_resource_group" "test" {
      + id       = (known after apply)
      + location = "westus"
      + name     = "foorgname"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

As of this PR, kube_config and kube_admin_config are now represented as nested blocks, with the appropriate fields marked as sensitive.

@katbyte, does the new plan output look appropriate to you as a solution to #201?

@paultyng
Contributor

This feels like overkill potentially? People could already do this manually if they desired by flagging sensitive at a higher level.

This is going to render some plans less useful which I imagine is undesirable for users. Could this be an opt-in behavior somehow?
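
For reference, the manual workaround of flagging sensitive at a higher level amounts to setting Sensitive: true on the containing field itself, which masks the whole nested value in plan output. A minimal sketch, with illustrative field names (not from any real provider):

```go
package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// kubeConfigSchema marks the containing field as sensitive at the top
// level, so the entire substructure is rendered as (sensitive value),
// including nested fields that are not themselves sensitive.
func kubeConfigSchema() *schema.Schema {
	return &schema.Schema{
		Type:      schema.TypeList,
		Computed:  true,
		Sensitive: true, // applied to the container, not only to nested fields
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"host":     {Type: schema.TypeString, Computed: true},
				"password": {Type: schema.TypeString, Computed: true, Sensitive: true},
			},
		},
	}
}
```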

@apparentlymart
Member Author

My intent here was to make sure that Sensitive: true annotations in existing provider codebases are always respected, at the expense of over-generalizing in some cases to mark other things as sensitive, with the goal of addressing the immediate problem (sensitive things are not obscured) to buy time to design a more granular solution.

In other words, I'm proposing this only as a compromise to reduce overall harm in the short term, not as the long-term solution to the granularity mismatches between the SDK schema and Terraform Core schema.

@maonat

maonat commented Oct 16, 2020

Hello, any news on this? It looks like the PR is waiting to be merged. It would be awesome to have this feature, because it would let me replace the tokenization in my release pipeline.

@paultyng
Contributor

Since you can already work around this manually and there is work already progressing in 0.14 on Sensitive, I'm going to close this PR.

You could always implement this in your provider, if you so chose, by doing some post-processing on your schema prior to returning it.
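
A rough sketch of that post-processing idea, assuming a helper that walks the resource schema map before the provider returns it (the helper name and approach are illustrative, not an SDK API):

```go
package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// propagateSensitive marks any field as Sensitive when its nested
// *schema.Resource element contains a Sensitive field, recursing so
// that sensitivity bubbles up through deeper nesting.
func propagateSensitive(m map[string]*schema.Schema) {
	for _, s := range m {
		nested, ok := s.Elem.(*schema.Resource)
		if !ok {
			continue
		}
		propagateSensitive(nested.Schema)
		for _, ns := range nested.Schema {
			if ns.Sensitive {
				s.Sensitive = true
				break
			}
		}
	}
}
```

A provider could call something like this on each resource's Schema map before returning it from the provider definition, at the cost of the same over-broad masking discussed above.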

@paultyng paultyng closed this Oct 17, 2020
@ghost

ghost commented Nov 16, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked as resolved and limited conversation to collaborators Nov 16, 2020