How to deploy EKS Clusters with the Terraform EKS Blueprints and GitHub Actions Workflows


EKS Clusters using Karpenter on Fargate deployed with the Terraform EKS Blueprints and GitHub Actions Workflows

Warning You are responsible for the cost of the AWS services used while running this sample deployment. There is no additional cost for using this sample. For full details, see the pricing pages for each AWS service you will be using in this sample. Prices are subject to change. This sample code should only be used for demonstration purposes and should not be used in a production environment.

This example provides the following capabilities:

  • Deploy EKS clusters with GitHub Action Workflows
  • Execute test suites on ephemeral test clusters
  • Leverage Karpenter for Autoscaling

The following resources will be deployed by this example (a quick verification sketch follows the list).

  • EKS Cluster control plane with one Fargate profile for the following namespaces:
    • kube-system
    • karpenter
    • argocd
  • ArgoCD add-on deployed through a Helm chart
  • Karpenter add-on deployed through a Helm chart
  • One Karpenter default NodePool
  • AWS Load Balancer Controller add-on deployed through a Helm chart
  • External DNS add-on deployed through a Helm chart
  • The game-2048 application deployed through a Helm chart from this repository, to demonstrate how Karpenter scales nodes based on workload requirements and how to configure the Ingress so that an application can be accessed over the internet.
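
Once a cluster is up, a quick way to confirm the pieces above is to list the Fargate profile and the add-on pods. This is only a sketch; <cluster-name> and <region> are placeholders, and the NodePool check assumes a Karpenter version that uses the NodePool CRD:

    # List the Fargate profile covering the kube-system, karpenter, and argocd namespaces
    aws eks list-fargate-profiles --cluster-name <cluster-name> --region <region>
    # Confirm that the ArgoCD and Karpenter add-on pods are running
    kubectl get pods -n argocd
    kubectl get pods -n karpenter
    # Confirm that the default Karpenter NodePool exists
    kubectl get nodepools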

How to Deploy Manually

Manual Deployment Prerequisites

Ensure that the tools used throughout this guide (at minimum Terraform, the AWS CLI, and kubectl) are installed on your Linux, macOS, or Windows machine before you start working with this module and running Terraform plan and apply.

The following resources need to be available in the AWS accounts where you want to deploy EKS clusters (a CLI sketch for confirming them follows this list):

  • EKS Cluster Administrators IAM Role
  • VPC with private and public subnets
  • Route 53 hosted zone
  • Wildcard certificate issued in ACM
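
Some of these prerequisites can be confirmed from the CLI before you run Terraform. This is only a sketch; the VPC name, domain name, and region below are placeholders for your own values:

    # Check that the VPC exists
    aws ec2 describe-vpcs --filters "Name=tag:Name,Values=<vpc-name>" --region <region>
    # Check that the Route 53 hosted zone exists
    aws route53 list-hosted-zones-by-name --dns-name <example.com>
    # Check that a wildcard certificate has been issued in ACM
    aws acm list-certificates --certificate-statuses ISSUED --region <region>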

You also need to provide a Terraform Cloud organization and workspace, and a GitHub Personal Access Token (PAT) to access the application Helm chart in this repository.

Manual Deployment Steps

  1. Clone the repo using the command below

    git clone https://github.com/aws-samples/eks-blueprints-actions-workflow.git
  2. Run Terraform init to initialize a working directory with configuration files

    export TF_CLOUD_ORGANIZATION='<my-org>'
    export TF_WORKSPACE='<my-workspace>'
    export TF_TOKEN_app_terraform_io='<my-token>'
    
    terraform init
  3. Create the tfvars file in the envs folder with the values for your EKS cluster.

    Use envs/dev/terraform.tfvars as a reference. Replace all values contained in the demo example with the required cluster configuration; a hedged example is sketched below.
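
    A hedged example of what the file might look like. Every value below is a placeholder to replace with your own; the variable names are taken from the Inputs section of this README, and the demo file envs/dev/terraform.tfvars in this repository remains the authoritative reference. The workloads_pat variable is not stored in the file; it is passed on the command line in the next step.

    # envs/dev/terraform.tfvars -- all values are placeholders
    region          = "<aws-region>"
    environment     = "dev"
    tenant_name     = "<tenant-name>"
    cluster_suffix  = "<cluster-suffix>"
    cluster_version = "<eks-control-plane-version>"
    # Add-on and controller versions
    core_dns_version                     = "<coredns-addon-version>"
    kube_proxy_version                   = "<kube-proxy-addon-version>"
    vpc_cni_version                      = "<vpc-cni-addon-version>"
    argocd_version                       = "<argocd-chart-version>"
    karpenter_version                    = "<karpenter-chart-version>"
    aws_load_balancer_controller_version = "<alb-controller-chart-version>"
    external_dns_version                 = "<external-dns-chart-version>"
    # Pre-existing networking and DNS
    vpc_name                 = "<vpc-name>"
    route53_hosted_zone_id   = "<hosted-zone-id>"
    route53_hosted_zone_name = "<example.com>"
    # Repository hosting the application Helm chart
    workloads_org      = "<github-org>"
    workloads_repo_url = "https://github.com/<github-org>/<repo>.git"

    The optional inputs (access_entries, enable_endpoint_public_access, and tags) can be added here as well; their defaults are listed in the Inputs section.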

  4. Run Terraform plan to review the resources that will be created by this execution

    # Personal Access Token (PAT) required to access the application helm chart repo
    export WORKLOADS_PAT="<github_token>"
    
    terraform plan \
      -var-file="./envs/dev/terraform.tfvars" \
      -var="workloads_pat=${WORKLOADS_PAT}" \
      -out=tfplan
  5. Finally, run Terraform apply. A sketch for printing the kubeconfig command from the Terraform outputs follows below.

    terraform apply tfplan
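
After the apply completes, the configure_kubectl output listed in the Outputs section prints the command for updating your kubeconfig. A minimal way to retrieve it:

    # Print the kubeconfig update command exposed by this module's outputs
    terraform output configure_kubectl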

How to Deploy with a GitHub Actions Workflow

GitHub Actions Workflow Prerequisites

The following resources need to be available in the AWS accounts where you want to deploy EKS clusters:

  • EKS Cluster Administrators IAM Role
  • VPC with private and public subnets
  • Route 53 hosted zone
  • Wildcard certificate issued in ACM
  • GitHub Actions IAM OIDC Identity Provider
  • GitHub Actions Terraform Execution IAM Role

Follow the instructions for Local Execution mode configuration. You will need to provide a Terraform Cloud organization with one workspace per environment and a GitHub Personal Access Token (PAT) to access the application Helm chart in this repository.

Ensure that all the required Actions Secrets are present in the Secrets - Actions settings before creating a workflow to deploy an EKS cluster.
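
If you use the GitHub CLI, the Actions secrets can be reviewed and populated from the command line. This is only a sketch: the secret names shown are assumptions modelled on the values used in the manual steps, so check the workflow files in this repository for the exact names they expect.

    # List the Actions secrets currently configured on the repo
    gh secret list
    # Set a secret (repeat for each secret the workflows expect; names here are illustrative)
    gh secret set TF_API_TOKEN --body '<my-terraform-cloud-token>'
    gh secret set WORKLOADS_PAT --body '<github_token>'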

Workflow Deployment Steps

  1. Clone the repo using the command below

    git clone https://github.com/aws-samples/eks-blueprints-actions-workflow.git
  2. Create a new repo in your own GitHub organization from the cloned repo.

  3. Create a branch.

  4. Create or update the tfvars file in the envs folder with the values for your EKS cluster.

    Use envs/dev/terraform.tfvars as a reference. Replace all values contained in the demo example with the required cluster configuration.

  5. Commit your changes and publish your branch.

  6. Create a Pull Request. This will trigger the workflow and add a comment with the expected plan outcome to the PR. The Terraform Apply step will not be executed at this stage. A GitHub CLI sketch for creating the PR and following the runs is included after this list.

  7. A workflow that triggers the deployment of an ephemeral cluster in the PR-TEST environment will be waiting for an approval. Add a required reviewer to approve workflow runs so you can decide when to deploy the ephemeral test cluster.

  8. Ask someone to review the PR and make the appropriate changes if necessary.

  9. Once the PR is approved and the code is merged to the main branch, the workflow will be triggered automatically and start the deploy job. The Terraform Apply step will only be executed if changes are required.

  10. When the PR is closed, a workflow is triggered to destroy the ephemeral test cluster in the PR-TEST environment. Approve the workflow run to destroy the EKS cluster.
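
If you prefer the command line, the same PR-driven flow can be exercised with the GitHub CLI. A minimal sketch, assuming the gh CLI is installed and authenticated against your repo:

    # Publish your branch and open the pull request that triggers the plan workflow
    git push --set-upstream origin <my-branch>
    gh pr create --title "Add my EKS cluster" --body "New cluster configuration"
    # Follow the workflow runs triggered by the pull request
    gh run list --branch <my-branch>
    gh run watch <run-id>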

Validation Steps

  1. Configure kubectl.

    The EKS cluster name can be obtained from the Terraform output or from the AWS Console. The following command updates the ~/.kube/config file on the machine where you run kubectl commands, so that you can interact with your EKS cluster.

    aws eks --region <region> update-kubeconfig --name <cluster-name>
  2. Access the ArgoCD UI by running the following command:

    kubectl port-forward service/argocd-server -n argocd 8080:443

    Then, open your browser and navigate to https://localhost:8080/. The username is admin.

    You can retrieve the admin user password by running the following command:

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
  3. Click on the REFRESH APPS button and select all applications to deploy them.

  4. After the game-2048 deployment is complete, you should see multiple Fargate nodes and five nodes provisioned by Karpenter up and running.

    kubectl get nodes
  5. Check that the demo app workload is deployed on nodes provisioned by Karpenter.

    You can run this command to view the Karpenter Controller logs while the nodes are provisioned.

    kubectl logs --selector app.kubernetes.io/name=karpenter -n karpenter
  6. After a couple of minutes, you should see new nodes being added by Karpenter to accommodate the game-2048 application EC2 instance family, capacity type, availability zones placement, and pod anti-affinity requirements.

    kubectl get node \
      --selector=karpenter.sh/initialized=true \
      -L karpenter.sh/provisioner-name \
      -L topology.kubernetes.io/zone \
      -L karpenter.sh/capacity-type \
      -L karpenter.k8s.aws/instance-family
  7. Test by listing the game-2048 pods. You should see that all the pods are running on different nodes because of the pod anti-affinity rule.

    kubectl -n game-2048 get pods -o wide
  8. Test that the sample application is now available.

    kubectl -n game-2048 get ingress/ingress-2048

    Open your browser and access the application via the ALB address https://game-2048-<ClusterName>.<Domain>/ (a sketch for checking the DNS record created by External DNS follows this list).

    Warning You might need to wait a few minutes, and then refresh your browser.
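
Because the External DNS add-on manages the record behind the Ingress hostname, you can also confirm the DNS side directly. A minimal sketch; the hosted zone ID is a placeholder, and the namespace of the external-dns pod depends on how the add-on is configured:

    # Check that a record was created for the game-2048 hostname
    aws route53 list-resource-record-sets \
      --hosted-zone-id <hosted-zone-id> \
      --query "ResourceRecordSets[?contains(Name, 'game-2048')]"
    # Locate the external-dns pod if you need to inspect its logs
    kubectl get pods -A | grep external-dns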

Cleanup Steps

To clean up your environment, delete the sample workload and then destroy the rest of the Terraform resources.

  1. Run Terraform init to initialize a working directory with configuration files

    export TF_CLOUD_ORGANIZATION='<my-org>'
    export TF_WORKSPACE='<my-workspace>'
    export TF_TOKEN_app_terraform_io='<my-token>'
    
    terraform init
  2. Delete the Demo Game 2048 Application.

    kubectl -n argocd delete application game-2048
  3. Wait about 5 minutes to allow Karpenter to remove the empty nodes (a check is sketched after this list).

  4. Execute the Destroy GitHub Actions workflow to destroy the clusters in all environments, or run Terraform destroy manually.

    # Personal Access Token (PAT) required to access the application helm chart repo
    export WORKLOADS_PAT="<github_token>"
    
    terraform destroy \
      -var-file="./envs/dev/terraform.tfvars" \
      -var="workloads_pat=${WORKLOADS_PAT}"

Requirements

Name Version
terraform >= 1.0.0
aws >=5.0.0
helm >=2.0.0
kubectl >= 2.0.0
kubernetes >=2.0.0
random >=3.0.0

Providers

Name Version
aws >=5.0.0
helm >=2.0.0

Modules

Name Source Version
eks github.com/terraform-aws-modules/terraform-aws-eks.git 70866e6fb26aa46a876f16567a043a9aaee4ed34
eks_blueprints_addons github.com/aws-ia/terraform-aws-eks-blueprints-addons.git 257677adeed1be54326637cf919cf24df6ad7c06
external_dns_irsa_role github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-role-for-service-accounts-eks f0a3a1cf8ba2f43f7919ba29593be3e9cadd363c
karpenter github.com/terraform-aws-modules/terraform-aws-eks.git//modules/karpenter 70866e6fb26aa46a876f16567a043a9aaee4ed34
load_balancer_controller_irsa_role github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-role-for-service-accounts-eks f0a3a1cf8ba2f43f7919ba29593be3e9cadd363c

Resources

Name Type
aws_ec2_tag.private_subnet_cluster_alb_tag resource
aws_ec2_tag.private_subnet_cluster_karpenter_tag resource
aws_ec2_tag.public_subnet_cluster_alb_tag resource
aws_ec2_tag.vpc_tag resource
helm_release.argocd_applications resource
aws_caller_identity.current data source
aws_subnet.eks_private_subnets data source
aws_subnet.eks_public_subnets data source
aws_subnets.eks_selected_private_subnets data source
aws_subnets.eks_selected_public_subnets data source
aws_vpc.eks data source

Inputs

Name Description Type Default Required
argocd_version Argo CD version string n/a yes
aws_load_balancer_controller_version Version of the AWS Load Balancer Controller Helm Chart string n/a yes
cluster_suffix The EKS Cluster suffix string n/a yes
cluster_version EKS Control Plane version to be provisioned string n/a yes
core_dns_version Version of the CoreDNS addon string n/a yes
environment The EKS cluster environment string n/a yes
external_dns_version External DNS version string n/a yes
karpenter_version Version of the Karpenter Helm Chart string n/a yes
kube_proxy_version Version of the kube-proxy addon string n/a yes
region The AWS region where to deploy the EKS Cluster string n/a yes
route53_hosted_zone_id Route 53 Hosted Zone ID to be used by the external-dns addon string n/a yes
route53_hosted_zone_name Route 53 Hosted Zone Domain Name to be used by the Demo Game 2048 Application Ingress string n/a yes
tenant_name The EKS Cluster tenant name string n/a yes
vpc_cni_version Version of the VPC CNI addon string n/a yes
vpc_name The name of the VPC where to deploy the EKS Cluster Worker Nodes string n/a yes
workloads_org The Workloads GitHub Organization string n/a yes
workloads_pat The Workloads GitHub Personal Access Token string n/a yes
workloads_repo_url The Workloads GitHub Repository URL string n/a yes
access_entries EKS Access Entries map(any) {} no
enable_endpoint_public_access Indicates whether or not the Amazon EKS public API server endpoint is enabled bool false no
tags Additional tags (e.g. map('BusinessUnit','XYZ')) map(string) {} no

Outputs

Name Description
configure_kubectl Configure kubectl. Make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
