
Kubernetes infrastructure

This repository uses infrastructure as code to provision a Kubernetes cluster.

Options for running Kubernetes

Some options for provisioning a Kubernetes cluster:

  1. Set up everything manually, the hard way - a thorough but slow learning experience
  2. Use a managed solution such as Amazon Elastic Kubernetes Service (Amazon EKS), a fully managed Kubernetes control plane
  3. Use Kops
  4. Use Terraform

Comparing Kops with EKS:

  • Kops has a lower cost of entry - compared with EKS, it is easier and quicker to create a cluster
  • EKS has a lower maintenance cost - AWS manages the control plane, so compared with Kops, less work is needed to keep the master nodes up to date

This repository will use Terraform and Kops to provision a Kubernetes cluster in AWS.

Prerequisites

Before provisioning, complete these prerequisites:

  • Sign up for a free tier AWS account
  • Install terraform
  • Install kops
  • Install kubectl
  • Install AWS CLI
  • Use an existing domain or register a free domain from, for example, freenom.com
  • Create a hosted zone in Route 53 in AWS
    • Update NameServer values on the domain service provider
    • Validate NameServer records:
      (Windows) nslookup -type=soa <domain name>
      
      (Unix) dig ns <domain name>
      
      This should return one or more name server records that Route 53 assigned to the hosted zone.
  • Create an IAM user for kops
  • Decide which region to deploy in: eu-west-1
  • Confirm which availability zones are available:
    aws ec2 describe-availability-zones --region eu-west-1
    
  • Create an SSH key for logging into the cluster (the cluster creation command below expects the public key at ~/.ssh/kube.pub):
    ssh-keygen -f ~/.ssh/kube
    
  • Run aws configure and then export the keys:
    export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
    export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
    
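Per the kops documentation, the kops IAM user needs EC2, Route 53, S3, IAM, and VPC permissions. As a sketch (not part of this repository; resource and user names are assumptions), the user could also be created with Terraform:

```hcl
# IAM group and user for kops, with the managed policies
# listed in the kops documentation
resource "aws_iam_group" "kops" {
  name = "kops"
}

resource "aws_iam_group_policy_attachment" "kops" {
  for_each = toset([
    "AmazonEC2FullAccess",
    "AmazonRoute53FullAccess",
    "AmazonS3FullAccess",
    "IAMFullAccess",
    "AmazonVPCFullAccess",
  ])
  group      = aws_iam_group.kops.name
  policy_arn = "arn:aws:iam::aws:policy/${each.value}"
}

resource "aws_iam_user" "kops" {
  name = "kops"
}

resource "aws_iam_user_group_membership" "kops" {
  user   = aws_iam_user.kops.name
  groups = [aws_iam_group.kops.name]
}

# Access key for the kops user - exported via aws configure above
resource "aws_iam_access_key" "kops" {
  user = aws_iam_user.kops.name
}
```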

The rest of the infrastructure will be provisioned and managed with Terraform and Kops.

Terraform backend

To manage shared storage for state files, use Terraform’s built-in support for remote backends. Set up a remote backend instead of a local one to store the state file.
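A minimal sketch of what the backend block in the cluster configuration might look like (the bucket and key names here are assumptions, not taken from this repository):

```hcl
terraform {
  backend "s3" {
    # Assumed name - replace with the bucket created by the remote_state script
    bucket = "terraform-remote-state-example"
    key    = "cluster/terraform.tfstate"
    region = "eu-west-1"
  }
}
```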

A separate Terraform script will be used to create the needed S3 bucket.

The script only needs to be run once to create the infrastructure for remote backend storage:

cd remote_state
terraform init
terraform apply
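The remote_state script itself is not reproduced here; a minimal sketch of such a script (names and settings are assumptions) could be:

```hcl
provider "aws" {
  region = "eu-west-1"
}

# S3 bucket holding Terraform remote state
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-remote-state-example" # assumed name

  # Versioning makes it possible to recover earlier state files
  versioning {
    enabled = true
  }

  # Encrypt state at rest
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```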

Kops can now be used to create the Kubernetes resources.

Cluster

First run Terraform from the cluster folder to define the Terraform backend and create the Kops state S3 bucket:

cd cluster
terraform init
terraform apply

Choose the smallest possible node sizes to stay within the AWS free tier. Create the cluster Terraform files with Kops:

kops create cluster \
    --cloud=aws \
    --name="mytempsite.tk" \
    --dns-zone="mytempsite.tk" \
    --api-loadbalancer-type=public \
    --state=s3://kops-state-kjg \
    --kubernetes-version="1.18.0" \
    --master-zones="eu-west-1a" \
    --zones="eu-west-1a" \
    --master-size="t2.medium" \
    --node-size="t2.micro" \
    --master-volume-size="8" \
    --node-volume-size="8" \
    --master-count="1" \
    --node-count="2" \
    --ssh-public-key="~/.ssh/kube.pub" \
    --out=./terraform-out \
    --target=terraform

If cluster creation runs successfully, the output will end with:

...

kops has set your kubectl context to mytempsite.tk

Terraform output has been placed into ./terraform-out
Run these commands to apply the configuration:
   cd ./terraform-out
   terraform plan
   terraform apply

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.mytempsite.tk
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.

Run Terraform from ./terraform-out and wait until the nodes have started and DNS changes have propagated - this can take anything from a few minutes to several hours. Then validate the cluster as described above.

Create the Tiller service account and cluster role binding:

$ kubectl create -f tiller-rbac-config.yaml
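The tiller-rbac-config.yaml file is not reproduced here; the standard Helm v2 RBAC configuration (a tiller service account in kube-system bound to the cluster-admin role, per the Helm documentation) looks like this sketch:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```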

Install Tiller into the cluster:

$ helm init --service-account tiller
$ kubectl get pod -n kube-system | grep tiller
tiller-deploy-55f5dfddc9-zqx88                                        1/1     Running   0          13s

Verify Tiller account, role, and binding:

kubectl --namespace kube-system get deploy tiller-deploy -o yaml

Store the generated Terraform files in terraform-out in Git.

Store the kubeconfig as a KUBECONFIG secret in the Helm registry repository.

The cluster is now ready for deploying Helm charts via the Helm registry repository.

Destroy

Tear down the cluster as follows:

$ terraform destroy
$ kops delete cluster --yes \
  --name=mytempsite.tk \
  --state=s3://kops-state-kjg

Future work

For production purposes, evaluate these options:

  • Consider using Pulumi instead of Terraform
  • Consider doing provisioning entirely via a GitHub workflow; note that the official setup-terraform action relies on Terraform Cloud
  • Consider switching to Terraform Cloud
  • Consider extending the remote state Terraform script to also include creating Kops IAM user and R53 Hosted Zone
