kloud-3s - k3s in the cloud

kloud-3s is a set of terraform modules that deploys a secure, functional k3s (kubernetes) cluster on a number of cloud providers.
The following are currently tested: DigitalOcean, HetznerCloud, Vultr, Linode, UpCloud, ScaleWay, OVH, Google, Azure, AWS and AlibabaCloud.

This project is inspired by and borrows from the awesome work done by Hobby-Kube.

Why kloud-3s?

kloud-3s follows the hobby-kube guidelines for setting up a secure kubernetes cluster. As that guide is comprehensive, the information will not be repeated here.
kloud-3s aims to add the following:

  1. Use a lightweight kubernetes distribution, i.e. k3s.
  2. Allow clean scale-up and scale-down of nodes.
  3. Improve OS support; kloud-3s also works on Windows via git-bash.
  4. Bootstrap the installation with only a minimal set of required variables.
  5. Resolve the cluster and pod networking problems that rank as the most common issues for k3s installations.
  6. Support multiple kubernetes CNIs and preserve source IPs out of the box. In addition to the embedded k3s flannel (which does not preserve source IPs), the following have been tested (a CNI selection example is shown below this list):

     | CNI     | Installation Docs                                                    |
     |---------|----------------------------------------------------------------------|
     | Flannel | https://coreos.com/flannel/docs/latest/kubernetes.html               |
     | Cilium  | https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/ |
     | Calico  | https://docs.projectcalico.org/getting-started/kubernetes/quickstart |
     | Weave   | https://www.weave.works/docs/net/latest/kubernetes/kube-addon/       |

  7. Add testing by deploying some popular cloud-native projects, namely traefik, cert-manager, metallb and whoami.

A successful deployment will complete without any error from these deployments.
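
For example, the CNI is picked through the cni variable in terraform.tfvars; a minimal sketch, using the values listed in the quick reference further below (the value shown is illustrative):

    # terraform.tfvars -- CNI selection; valid values: default, flannel, cilium, calico, weave
    cni = "cilium"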

Features

  1. kloud-3s is opinionated and does not support every available use-case. The objective is to provide a consistent deployment experience across the supported cloud providers.

  2. For maximum portability kloud-3s does not use an ssh-agent for its terraform modules. It does not require existing ssh keys, but users may define paths to their existing keys.

  3. Enforce encrypted communication between nodes and use vpc/private networks where the cloud provider supports them.

  4. Although not required, kloud-3s is best used with a domain you own.

  5. kloud-3s creates an A record for the domain you provide and a wildcard CNAME record, both pointing to your master node.

  6. kloud-3s ensures functionality of the following across all its supported cloud providers. The support matrix will be updated as other versions are tested.

     | Software | Version      |
     |----------|--------------|
     | Ubuntu   | 20.04 LTS    |
     | K3S      | v1.21.1+k3s1 |
  7. kloud-3s tests a successful deployment by deploying traefik and cert-manager and sending requests to the following endpoints (a verification sketch follows below):

     | Test                         | Response Code | Certificate Issuer |
     |------------------------------|---------------|--------------------|
     | curl -Lkv test.your.domain   | 200           | None               |
     | curl -Lkv whoami.your.domain | 200           | Fake LE            |
     | curl -Lkv dash.your.domain   | 200           | LetsEncrypt        |
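
You can repeat these checks yourself after a deployment; a minimal sketch, assuming your domain variable was set to your.domain:

    # certificates issued by cert-manager for the test deployments
    kubectl get certificates -A
    # -k skips TLS verification, which the Fake LE issuer would otherwise fail
    curl -Lkv whoami.your.domain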

Dependencies

kloud-3s requires the following to be installed on your system:

  • terraform
  • wireguard
  • jq
  • kubectl
  • git-bash if on Windows
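
On a Debian/Ubuntu workstation the packaged dependencies can be installed roughly as follows (a sketch only; package names vary by OS, and terraform and kubectl are distributed as standalone binaries):

    # wireguard and jq from the distribution repositories
    sudo apt-get update && sudo apt-get install -y wireguard jq
    # terraform and kubectl: download the binaries from their official
    # release pages and place them somewhere on your PATH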

Deployment

Quick Install

  1. Clone the repo

    $ git clone https://github.com/jawabuu/kloud-3s.git
  2. Switch to the desired cloud provider directory under deploy. For example, to deploy kloud-3s on digitalocean:

    $ cd kloud-3s/deploy/digitalocean
  3. Copy tfvars.example to terraform.tfvars

    deploy/digitalocean$ cp tfvars.example terraform.tfvars
  4. Using your favourite editor, update the values marked as required in terraform.tfvars:

    deploy/digitalocean$ nano terraform.tfvars
    
    # DNS Settings
    create_zone = "true"
    domain      = "kloud-3s.my.domain"
    # We are using digitalocean for dns
    # Resource Settings
    digitalocean_token    = <required>
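
     Any of the common variables from the quick reference further below can also be overridden in the same file; a hedged example with illustrative values:

    # optional overrides in terraform.tfvars (values are illustrative)
    node_count  = 3                # number of nodes in the cluster
    cni         = "cilium"         # default, flannel, cilium, calico, weave
    k3s_version = "v1.21.1+k3s1"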
  5. Run terraform init to initialize modules

    deploy/digitalocean$ terraform init
  6. Run terraform plan to view changes terraform will make

    deploy/digitalocean$ terraform plan
  7. Run terraform apply to create your resources

    deploy/digitalocean$ terraform apply --auto-approve
  8. Set KUBECONFIG by running $(terraform output kubeconfig)

    deploy/digitalocean$ $(terraform output kubeconfig)
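
     You can also print the output first to see the command it expands to, then confirm kubectl is pointed at the new cluster:

    # show the command produced by the kubeconfig output, without running it
    deploy/digitalocean$ terraform output kubeconfig
    # after evaluating it as in step 8, confirm the active context
    deploy/digitalocean$ kubectl config current-context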
  9. Check resources with kubectl get po -A -o wide

    deploy/digitalocean$ kubectl get po -A -o wide
    NAMESPACE        NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
    kube-system      cilium-operator-77d99f8578-hqhtx           1/1     Running   0          60s   10.0.1.2      kube2   <none>           <none>
    kube-system      metrics-server-6d684c7b5-nrphr             1/1     Running   0          20s   10.42.1.68    kube2   <none>           <none>
    kube-system      cilium-ggjxz                               1/1     Running   0          60s   10.0.1.2      kube2   <none>           <none>
    kube-system      coredns-6c6bb68b64-54dgw                   1/1     Running   0          33s   10.42.0.3     kube1   <none>           <none>
    kube-system      cilium-9t6f7                               1/1     Running   0          51s   10.0.1.1      kube1   <none>           <none>
    cert-manager     cert-manager-9b8969d86-4ppxb               1/1     Running   0          30s   10.42.0.8     kube1   <none>           <none>
    whoami           whoami-5c8d94f78-qg2pc                     1/1     Running   0          20s   10.42.1.217   kube2   <none>           <none>
    whoami           whoami-5c8d94f78-8bgtc                     1/1     Running   0          16s   10.42.0.231   kube1   <none>           <none>
    default          traefik-76695c9b69-t25j2                   1/1     Running   0          27s   10.42.0.94    kube1   <none>           <none>
    metallb-system   speaker-rwc46                              1/1     Running   0          8s    10.0.1.2      kube2   <none>           <none>
    metallb-system   controller-65c5689b94-vdcpn                1/1     Running   0          8s    10.42.1.90    kube2   <none>           <none>
    metallb-system   speaker-zmpnx                              1/1     Running   0          12h   10.0.1.1      kube1   <none>           <none>
    default          net-8c845cc87-vml5w                        1/1     Running   0          7s    10.42.1.52    kube2   <none>           <none>
    cert-manager     cert-manager-webhook-8c5db9fb6-b59tj       1/1     Running   0          28s   10.42.0.250   kube1   <none>           <none>
    kube-system      local-path-provisioner-58fb86bdfd-k7cfw    1/1     Running   4          34s   10.42.1.13    kube2   <none>           <none>
    cert-manager     cert-manager-cainjector-8545fdf87c-jddnl   1/1     Running   6          27s   10.42.0.219   kube1   <none>           <none>
    
  10. SSH to master easily with $(terraform output ssh-master)

    deploy/digitalocean$ $(terraform output ssh-master)
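
When you no longer need the cluster, the standard terraform workflow tears down everything that was created (not specific to kloud-3s):

    deploy/digitalocean$ terraform destroy --auto-approve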

Advanced Install

For any given provider under the deploy directory, only the terraform.tfvars and main.tf files need to be modified. Refer to the variables.tf file, which documents the various variables that you can override in terraform.tfvars.
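
For instance, to use ssh keys you already have instead of letting kloud-3s create them, override the key paths in terraform.tfvars (paths shown are illustrative; the variable names come from the quick reference below):

    ssh_key_path    = "/home/user/.ssh/id_rsa"
    ssh_pubkey_path = "/home/user/.ssh/id_rsa.pub"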

Quick Reference

Quick reference tables, summarized for readability.

Common variables for deployment

| Variable                | Default                  | Description                                                 |
|-------------------------|--------------------------|-------------------------------------------------------------|
| node_count              | 3                        | Number of nodes in cluster                                  |
| create_zone             | false                    | Create a domain zone if it does not exist                   |
| domain                  | none                     | Domain for the deployment                                   |
| k3s_version             | latest                   | This is set to v1.21.1+k3s1                                 |
| cni                     | weave                    | Choice of CNI among default, flannel, cilium, calico, weave |
| overlay_cidr            | 10.42.0.0/16             | Pod CIDR for k3s                                            |
| service_cidr            | 10.43.0.0/16             | Service CIDR for k3s                                        |
| vpc_cidr                | 10.115.0.0/24            | VPC CIDR for supported providers                            |
| vpn_iprange             | 10.0.1.0/24              | VPN CIDR for wireguard                                      |
| ha_cluster              | false                    | Create a high-availability cluster                          |
| ha_nodes                | 3                        | Number of controller nodes in HA cluster                    |
| enable_floatingip       | false                    | Use a floating IP                                           |
| install_app.floating_ip | false                    | Install floating-ip controller                              |
| create_certs            | false                    | Create letsencrypt certs                                    |
| ssh_key_path            | ./../../.ssh/tf-kube     | Filepath for ssh private key                                |
| ssh_pubkey_path         | ./../../.ssh/tf-kube.pub | Filepath for ssh public key                                 |
| ssh_keys_dir            | ./../../.ssh             | Directory to store created ssh keys                         |
| kubeconfig_path         | ./../../.kubeconfig      | Directory to store kubeconfig file                          |
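
As an example, the HA settings above combine as follows in terraform.tfvars (a sketch only; note that multi-master support is still listed under the todos below):

    ha_cluster = true   # create a high-availability cluster
    ha_nodes   = 3      # number of controller nodes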

Provider variables

| Provider     | Authentication                                                      | Machine Size      | Machine OS         | Machine Region       |
|--------------|---------------------------------------------------------------------|-------------------|--------------------|----------------------|
| DigitalOcean | digitalocean_token                                                  | digitalocean_size | digitalocean_image | digitalocean_region  |
| HetznerCloud | hcloud_token                                                        | hcloud_type       | hcloud_image       | hcloud_location      |
| Vultr        | vultr_api_key                                                       | vultr_plan        | vultr_os           | vultr_region         |
| Linode       | linode_token                                                        | linode_type       | linode_image       | linode_region        |
| UpCloud      | upcloud_username, upcloud_password                                  | upcloud_plan      | upcloud_image      | upcloud_zone         |
| ScaleWay     | scaleway_organization_id, scaleway_access_key, scaleway_secret_key | scaleway_type     | scaleway_image     | scaleway_zone        |
| OVH          | tenant_name, user_name, password                                    | ovh_type          | ovh_image          | region               |
| Google       | creds_file                                                          | size              | image              | region, region_zone  |
| Azure        | client_id, client_secret, tenant_id, subscription_id                | size              | -                  | region               |
| AWS          | aws_access_key, aws_secret_key                                      | size              | image              | region               |
| AlibabaCloud | alicloud_access_key, alicloud_secret_key                            | size              | -                  | scaleway_zone        |
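
Credentials can also be supplied through the environment instead of terraform.tfvars, using Terraform's standard TF_VAR_ convention; shown here for the digitalocean token, other providers follow the same pattern with their variable names from the table:

    # keeps the secret out of terraform.tfvars
    deploy/digitalocean$ export TF_VAR_digitalocean_token="<your-token>"
    deploy/digitalocean$ terraform plan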

Todos

  • Support multi-master k3s HA clusters
  • Add module to optionally bootstrap basic logging, monitoring and ingress services in the vein of kube-prod by bitnami.
  • Security hardening for production workloads
  • Support more cloud providers
  • Support more DNS providers
  • Add Cloud Controller Manager module
  • Implement K3S Auto upgrades

Notes

Only Terraform v0.13 is currently supported.

License

MIT
