AWS Elastic Kubernetes Service Kubernetes 1.28 Update #237

Open
wants to merge 14 commits into master from pulumi_2023
Conversation

@4141done (Contributor) commented on Nov 14, 2023

Updates for EKS 1.28

This change brings MARA up to date for execution in Amazon EKS running Kubernetes 1.28.
The additional changes in this PR are:

  • Changes to Python versions (3.11 is required; 3.11.6 is suggested and tested) to support building our combination of dependencies
  • The aws CLI tool must now be installed yourself; it is no longer installed as a project dependency

The Docker-based runs have not been updated and will be addressed in a future PR.

Run Instructions

Prerequisites

  • An AWS account (be aware that this stand-up is large and requires t2.large instances, with the accompanying costs)
  • A (free) Pulumi account
  • Python 3.11 installed (3.11.6 is tested). Consider using a tool like pyenv or asdf
  • The aws cli tool installed and configured
  • (optional) The kubectl tool installed
  • (optional) The k9s tool installed

All of the other suggested tooling can be managed with asdf if you want.
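
For example, a minimal pyenv-based setup (assuming pyenv is already installed and on your PATH, and that the AWS CLI is installed separately as noted above) might look like:

# Install and pin Python 3.11.6 for this checkout
pyenv install 3.11.6
pyenv local 3.11.6        # writes a .python-version file in the repo root
python3 --version         # should report Python 3.11.6
aws --version             # confirm the separately installed AWS CLI is on your PATH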

NOTE FOR containerd USERS ON ARM64 ARCHITECTURE: this will not work with containerd for the moment due to how it handles manifests. If you are using Docker for Mac on an M1/M2 system, you will need to disable the containerd image store before continuing.
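
If you are not sure whether the containerd image store is enabled, one rough check (the exact docker info output varies by Docker version) is:

docker info 2>/dev/null | grep -i 'io.containerd.snapshotter'
# any output here suggests the containerd image store is enabled; turn it off in Docker Desktop settings before continuing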

Steps

  1. git clone --recurse-submodules https://github.com/nginxinc/kic-reference-architectures to check out this repo and the demo app.
  2. git checkout pulumi_2023 to get this branch
  3. ./bin/setup_venv.sh to set up Python and install all the dependencies
  4. In ./config/pulumi, create a file named Pulumi.your_stack_name.yaml with contents like this:
config:
  aws:profile: 369313531325_Users
  aws:region: us-west-2
  eks:desired_capacity: 3
  eks:instance_type: t2.large
  eks:k8s_version: '1.28'
  eks:max_size: 12
  eks:min_size: 3
  kubernetes:infra_type: AWS
  vpc:azs:
  - us-west-2a
  - us-west-2b
  - us-west-2c
  - us-west-2d
  kic:image_name: nginx/nginx-ingress:3.3.2
  kic:image_origin: registry
  kic-helm:chart_name: nginx-ingress
  kic-helm:chart_version: 1.0.2
  kic-helm:helm_timeout: 900
  # Chart version for the helm chart for the logagent
  logagent:chart_version: 7.17.3
  # Elasticsearch takes a LOOONG time to deploy
  logstore:helm_timeout: 900
  logstore:chart_version: 19.13.9
  logstore:chart_name: elasticsearch
  logstore:helm_repo_name: bitnami
  logstore:helm_repo_url: https://charts.bitnami.com/bitnami
  # Cert Manager Configuration
  certmgr:chart_name: cert-manager
  # Chart name for the helm chart for certmanager
  certmgr:chart_version: v1.12.6
  # Chart version for the helm chart for certmanager
  certmgr:certmgr_helm_repo_name: cert-manager
  # Name of the repo to pull the certmanager chart from
  certmgr:certmgr_helm_repo_url: https://charts.jetstack.io
  # URL of the chart repo to pull certmanager from
  certmgr:helm_timeout: 300

  # Prometheus Configuration
  prometheus:chart_name: kube-prometheus-stack
  # Chart name for the helm chart for prometheus
  prometheus:chart_version: 54.0.1
  # Chart version for the statsd chart (uses the same repo as the prom chart)
  prometheus:statsd_chart_version: 0.10.1

  # Prometheus Configuration for Bank of Sirius Application
  # https://artifacthub.io/packages/helm/prometheus-community/prometheus-postgres-exporter
  sirius:chart_version: 5.2.0
  sirius:helm_repo_name: prometheus-community
  5. Execute the standup script. The DOCKER_DEFAULT_PLATFORM environment variable isn't strictly necessary if you are not on a linux/arm64 machine, but it doesn't hurt to leave it there.
DOCKER_DEFAULT_PLATFORM=linux/amd64 pulumi/python/runner -d -p aws -s your_stack_name up

Exploring

Once the standup script is complete, you should see something that looks like this:

application_url     : "https://longstring-119454905.us-west-2.elb.amazonaws.com"

You can use that URL to visit the Bank of Sirius UI.
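
If you want to confirm the endpoint responds before opening a browser, a quick check against your own application_url (the hostname below is just the example from the output above; -k skips certificate verification in case the demo certificate is self-signed) is:

curl -kIs https://longstring-119454905.us-west-2.elb.amazonaws.com | head -n 1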

To view the other installed tooling, first make sure kubectl is configured with access to your cluster, like this:

aws eks update-kubeconfig --region YOUR_REGION --name CLUSTER_NAME --profile PROFILE_NAME

The CLUSTER_NAME can be obtained either from the AWS console or from this project's output in the "EKS" section at the beginning. It will look like this:

Outputs:
    cluster_name: "aws-eks-yourstackname-eksCluster-cf1f5e2"

Once that is done, run:

./bin/test-forward.sh

This will create tunnels to the appropriate endpoints in the deployed MARA instance.
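
If you would rather forward a single service by hand instead of using the script, the general shape of the command is (namespace and service names below are placeholders, not the exact names MARA uses):

kubectl port-forward --namespace <namespace> service/<service-name> <local-port>:<service-port>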

It is also recommended to install k9s (listed in the prerequisites above) to explore the Kubernetes cluster. If you set up your kubeconfig in the earlier step, it should just work.

@@ -91,7 +93,7 @@ if ! command -v python3 >/dev/null; then

     mkdir -p "${PYENV_ROOT}"
     git_clone_log="$(mktemp -t pyenv_git_clone-XXXXXXX.log)"
-    if git clone --depth 1 --branch v2.0.3 https://github.com/pyenv/pyenv.git "${PYENV_ROOT}" 2>"${git_clone_log}"; then
+    if git clone --depth 1 --branch v2.3.31 https://github.com/pyenv/pyenv.git "${PYENV_ROOT}" 2>"${git_clone_log}"; then

Annotation:
Older versions of pyenv don't know about the newer Python version we are using.
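
As a quick check, a pyenv new enough for this change will list the 3.11 series (assuming pyenv is on your PATH):

pyenv --version
pyenv install --list | grep ' 3\.11\.'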

 srv = Service.get(resource_name="nginx-ingress",
-                  id=Output.concat("nginx-ingress", "/", pstatus.name, "-nginx-ingress"),
+                  id=Output.concat("nginx-ingress", "/", pstatus.name, "-nginx-ingress-controller"),

Annotation:
Necessary to align with the new name of the controller in the new helm chart.
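
Once the cluster is up, the actual Service name created by the chart (and the -nginx-ingress-controller suffix) can be confirmed with, for example:

kubectl get services --namespace nginx-ingress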

@@ -47,7 +47,8 @@ def _docker_pull(self, image_name: str) -> str:
         :param image_name: full container image name in the format of repository:tag
         :return full image name with server name (e.g. docker.io/library/debian:buster-slim)
         """
-        cmd = f'docker pull --quiet "{image_name}"'
+
+        cmd = f'docker pull --platform linux/amd64 --quiet "{image_name}"'

Annotation:
Necessary to make sure we pull the image variant that is compatible with the deployment environment, not the computer running the standup.
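
After a pull, the platform of the local image can be double-checked (image name taken from the example config above):

docker image inspect --format '{{.Os}}/{{.Architecture}}' nginx/nginx-ingress:3.3.2
# expected output: linux/amd64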

@@ -7,5 +7,5 @@
     version_config=True,
     packages=['kic_util'],
     install_requires=[
-        'pyyaml>=5.3.1,<6.0', 'passlib>=1.7.4,<2.0.0', 'GitPython>=3.1.18,<3.2.0'
+        'pyyaml', 'passlib>=1.7.4,<2.0.0', 'GitPython>=3.1.18,<3.2.0'

Annotation:
Relaxing this requirement is OK and allows us to update the various Pulumi packages, which have moved to version >= 6.
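
To see which PyYAML version actually gets resolved after the relaxed pin (with the project's virtual environment activated):

python3 -c "import yaml; print(yaml.__version__)"
pip show pyyaml | grep -i '^version'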

@@ -4,29 +4,25 @@ verify_ssl = true
 name = "pypi"

 [packages]
-awscli = "~=1.25.35"

Annotation:
The AWS CLI has some pretty strict pins on its dependencies, PyYAML specifically. The advice is now to install the v2 AWS CLI tool separately.
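
For reference, the standard AWS CLI v2 install on Linux x86_64 looks roughly like this (see the AWS documentation for macOS, Windows, or other architectures):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version   # should report aws-cli/2.x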

@4141done changed the title from [WIP] Pulumi 2023 to AWS Elastic Kubernetes Service Kubernetes 1.28 Update on Nov 15, 2023
@4141done marked this pull request as ready for review on November 15, 2023 08:35