Sharing Outpost #363

Open
sebt3 opened this issue Jun 12, 2023 · 4 comments
Comments

sebt3 commented Jun 12, 2023

Outposts cost resources (since they create deployments).
Authentik clearly allows sharing an outpost between many providers/applications.

Yet I don't see a single way to do this using this provider.

The official kubernetes provider allows modifying existing objects using:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/labels
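
For reference, that resource lets Terraform patch labels onto an object it does not otherwise manage, roughly like this (the object names here are illustrative):

resource "kubernetes_labels" "example" {
  api_version = "v1"
  kind        = "ConfigMap"
  metadata {
    # object that already exists and is managed elsewhere
    name      = "my-config"
    namespace = "default"
  }
  labels = {
    "owner" = "my-team"
  }
}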

Something related to this would help a lot.

This is a bit related to #341 or #310.
But the option of creating a new outpost for every application willing to use authentik is clearly not viable.

BeryJu (Member) commented Jun 12, 2023

You can reference multiple providers via the protocol_providers attribute, which will assign all the providers to the outpost.
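
Something like this, as a minimal sketch (the two proxy providers are assumed to be defined elsewhere in the same configuration):

resource "authentik_outpost" "shared" {
  name = "shared-outpost"
  # all listed providers are served by this single outpost
  protocol_providers = [
    authentik_provider_proxy.app_one.id,
    authentik_provider_proxy.app_two.id,
  ]
}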

sebt3 (Author) commented Jun 13, 2023

Sure, I can put as many providers as I want in the outpost when I create it, but I cannot add more afterward.
Workflow:

  • 1 terraform layer containing authentik
  • 1 terraform layer per application connected to it.

Currently, as a workaround, I'm using the restapi provider in tandem with the http provider:

locals {
  request_headers = {
    "Content-Type" = "application/json"
    Authorization  = "Bearer ${local.authentik-token}"
  }
  # providers currently attached to the "ldap" outpost, and the outpost's primary key
  ldap-outpost-providers = jsondecode(data.http.get_ldap_outpost.response_body).results[0].providers
  ldap-outpost-pk        = jsondecode(data.http.get_ldap_outpost.response_body).results[0].pk
}

# look up the existing "ldap" outpost through the authentik API
data "http" "get_ldap_outpost" {
  url             = "http://authentik.${var.authentik-ns}.svc/api/v3/outposts/instances/?name__iexact=ldap"
  method          = "GET"
  request_headers = local.request_headers
  lifecycle {
    postcondition {
      condition     = contains([200], self.status_code)
      error_message = "Status code invalid"
    }
  }
}

# PATCH on every operation so only the submitted fields are touched
provider "restapi" {
  uri                  = "http://authentik.${var.authentik-ns}.svc/api/v3/"
  headers              = local.request_headers
  create_method        = "PATCH"
  update_method        = "PATCH"
  destroy_method       = "PATCH"
  write_returns_object = true
  id_attribute         = "name"
}

# append this layer's provider to the outpost's provider list, but only if it is not already there
resource "restapi_object" "ldap_outpost_binding" {
  path = "/outposts/instances/${local.ldap-outpost-pk}/"
  data = jsonencode({
    name      = "ldap"
    providers = (
      contains(local.ldap-outpost-providers, authentik_provider_ldap.gitea_provider_ldap.id)
      ? local.ldap-outpost-providers
      : concat(local.ldap-outpost-providers, [authentik_provider_ldap.gitea_provider_ldap.id])
    )
  })
}

Note the conditional concatenation of the arrays.
I still have to do the same in the authentik layer so that an update of that layer won't reset the outpost configuration.

sebt3 (Author) commented Jun 14, 2023

Current implementation for the authentik layer (using only the http provider and the authentik one):

locals {
  request_headers = {
    "Content-Type" = "application/json"
    Authorization  = "Bearer ${local.authentik-token}"
  }
  ldap-outpost-json = jsondecode(data.http.get_ldap_outpost.response_body).results
  # keep whatever providers are already on the outpost and append ours if it is missing;
  # if the outpost does not exist yet, start the list with just our provider
  ldap-outpost-providers = (
    length(local.ldap-outpost-json) > 0
    ? (
      contains(local.ldap-outpost-json[0].providers, authentik_provider_ldap.provider_ldap[0].id)
      ? local.ldap-outpost-json[0].providers
      : concat(local.ldap-outpost-json[0].providers, [authentik_provider_ldap.provider_ldap[0].id])
    )
    : [authentik_provider_ldap.provider_ldap[0].id]
  )
}

data "http" "get_ldap_outpost" {
  depends_on      = [kustomization_resource.post, authentik_provider_ldap.provider_ldap]
  url             = "http://authentik.${var.namespace}.svc/api/v3/outposts/instances/?name__iexact=ldap"
  method          = "GET"
  request_headers = local.request_headers
  lifecycle {
    postcondition {
      condition     = contains([200], self.status_code)
      error_message = "Status code invalid"
    }
  }
}

resource "authentik_service_connection_kubernetes" "local" {
  count = var.outposts.ldap ? 1 : 0
  name  = "local"
  local = true
}

# dummy provider: only there so the outpost always has at least one provider attached
resource "authentik_provider_ldap" "provider_ldap" {
  count     = var.outposts.ldap ? 1 : 0
  name      = "authentik-ldap-provider"
  base_dn   = "dc=${var.namespace},dc=namespace"
  bind_flow = authentik_flow.ldap-authentication-flow.uuid
}

resource "authentik_outpost" "outpost-ldap" {
  count              = var.outposts.ldap ? 1 : 0
  name               = "ldap"
  type               = "ldap"
  service_connection = authentik_service_connection_kubernetes.local[count.index].id
  config = jsonencode({
    "log_level": "info",
    "authentik_host": "http://authentik",
    "docker_map_ports": true,
    "kubernetes_replicas": 1,
    "kubernetes_namespace": var.namespace,
    "authentik_host_browser": "",
    "object_naming_template": "ak-outpost-%(name)s",
    "authentik_host_insecure": false,
    "kubernetes_service_type": "ClusterIP",
    "kubernetes_image_pull_secrets": [],
    "kubernetes_disabled_components": [],
    "kubernetes_ingress_annotations": {},
    "kubernetes_ingress_secret_name": "authentik-outpost-tls"
  })
  # merged list computed above: existing providers plus this layer's dummy provider
  protocol_providers = local.ldap-outpost-providers
}

Note that the resource authentik_provider_ldap.provider_ldap is a forced dummy that will never actually be used.

sebt3 (Author) commented Jun 14, 2023

So in my opinion, if that workflow were to be supported, the requirements would be:

  • a key on the authentik_outpost resource to say that the provider list should at least contain the given providers, without removing the others
  • an "add-provider-to-outpost" resource to attach additional providers later on (see the sketch below)

That's just my opinion, nothing more, and, as seen above, I have a working implementation, so :P
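
Purely as an illustration of the second point, such a binding resource could look something like this (the resource name and attributes are invented here; nothing like it exists in the provider today):

# hypothetical resource, invented for this sketch: attaches one provider to an
# existing outpost without managing the outpost's full provider list
resource "authentik_outpost_provider_binding" "gitea" {
  outpost_name = "ldap"                                          # hypothetical attribute
  provider     = authentik_provider_ldap.gitea_provider_ldap.id  # hypothetical attribute
}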
