
helm mode: add workflow for clustermesh via upgrade #1515

Merged: 1 commit merged into main on May 2, 2023

Conversation

@asauber (Member) commented on Apr 18, 2023

Add a GitHub Actions workflow that exercises Clustermesh via the Helm-mode implementation of cilium upgrade.

Test procedure used during local development

We can use the new upgrade command to enable Cluster Mesh on a set of two Cilium clusters with the following procedure.

Create a kind cluster with the following config

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c1
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.50.0.0/16"

kind create cluster --config c1.kindconfig.yaml

Test that the kind cluster came up with k get pods -A (k here is an alias for kubectl)

Install Cilium using the CLI, making sure to set a Cluster ID and Cluster Name

export CILIUM_CLI_MODE=helm
cilium install --help
# note the suggestion to use --helm-set flags
cilium install --helm-set cluster.id=1 --helm-set cluster.name=cluster1

Create a second cluster using kind with the following config

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c2
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.11.0.0/16"
  serviceSubnet: "10.51.0.0/16"

kind create cluster --config c2.kindconfig.yaml
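The two kind configs above differ only in cluster name and subnets, so they can be generated from a single template. A minimal sketch (a hypothetical helper, not part of the PR; file names match those used above):

```shell
# Generate c1.kindconfig.yaml and c2.kindconfig.yaml from one template.
# Names and subnets are taken from the configs shown above.
for i in 1 2; do
  cat > "c${i}.kindconfig.yaml" <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c${i}
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.1$((i - 1)).0.0/16"
  serviceSubnet: "10.5$((i - 1)).0.0/16"
EOF
done
# Show the per-cluster differences
grep -H 'Subnet' c1.kindconfig.yaml c2.kindconfig.yaml
```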

Check that your current kubectl context is kind-c2

Run k get pods -A and look for kube-apiserver-c2-control-plane

Install Cilium using the CLI, making sure to set a Cluster ID and Cluster Name

export CILIUM_CLI_MODE=helm
cilium install --help
# note the suggestion to use --helm-set flags
cilium install --helm-set cluster.id=2 --helm-set cluster.name=cluster2

Prep the environment for the clustermesh enable

export CLUSTER1=kind-c1 CLUSTER2=kind-c2

Enable clustermesh on Cluster 1 using the new Helm upgrade command

export CILIUM_CLI_MODE=helm
cilium upgrade --help
# note the suggestion to use --helm-set flags
cilium upgrade --context $CLUSTER1 \
    --helm-set clustermesh.useAPIServer=true \
    --helm-set clustermesh.apiserver.service.type=NodePort
# observe that the clustermesh apiserver is running
k get pods -A

Extract the CA cert from Cluster 1 and install it into Cluster 2, then restart Cilium on Cluster 2

kubectl --context $CLUSTER2 delete secret -n kube-system cilium-ca && \
kubectl --context $CLUSTER1 get secrets -n kube-system cilium-ca -oyaml \
    | kubectl --context $CLUSTER2 apply -f -
# you should see the following output
# secret "cilium-ca" deleted
# secret/cilium-ca created
# proceed to restart Cilium on kind-c2
kubectl --context $CLUSTER2 delete pod -l app.kubernetes.io/part-of=cilium -A

Enable clustermesh on Cluster 2 using the new Helm upgrade command

export CILIUM_CLI_MODE=helm
cilium upgrade --help
# note the suggestion to use --helm-set flags
cilium upgrade --context $CLUSTER2 \
    --helm-set clustermesh.useAPIServer=true \
    --helm-set clustermesh.apiserver.service.type=NodePort
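For reference, the two --helm-set flags used in both upgrade commands correspond to this fragment of Helm values (a sketch; the keys are taken directly from the flags above):

```yaml
clustermesh:
  useAPIServer: true
  apiserver:
    service:
      type: NodePort
```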

Create the client-cert secrets expected by the cilium clustermesh connect command by copying the remote-cert secrets under the expected name (this quirk is to be fixed with a helm-mode PR for that command)

k get secrets --context $CLUSTER1 -n kube-system clustermesh-apiserver-remote-cert \
    -oyaml \
    | sed 's/name: .*/name: clustermesh-apiserver-client-cert/' \
    | k apply --context $CLUSTER1 -f -

k get secrets --context $CLUSTER2 -n kube-system clustermesh-apiserver-remote-cert \
    -oyaml \
    | sed 's/name: .*/name: clustermesh-apiserver-client-cert/' \
    | k apply --context $CLUSTER2 -f -
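The sed rename above can be sanity-checked locally without a cluster. A minimal sketch on a stand-in secret manifest (the metadata fields are illustrative, not pulled from a real cluster):

```shell
# Stand-in for the secret yaml that kubectl would emit.
cat > /tmp/remote-cert.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: clustermesh-apiserver-remote-cert
  namespace: kube-system
type: kubernetes.io/tls
EOF
# The same sed expression used in the pipelines above. Note that it only
# rewrites lines matching "name: " exactly; "namespace: " is left alone.
sed 's/name: .*/name: clustermesh-apiserver-client-cert/' /tmp/remote-cert.yaml
```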

Connect the two clusters using ClusterMesh

cilium clustermesh connect --context $CLUSTER1 --destination-context $CLUSTER2

Run the Multi-Cluster connectivity tests.

cilium connectivity test --context $CLUSTER1 --multi-cluster $CLUSTER2

Signed-off-by: Andrew Sauber <andrew.sauber@isovalent.com>
@asauber changed the title from "Add Actions Workflow for Clustermesh via Helm" to "helm mode: add workflow for clustermesh via upgrade" on Apr 24, 2023
@michi-covalent (Contributor) commented:

@brlbil ping!

@michi-covalent (Contributor) commented:

is the button green? yes.

@michi-covalent michi-covalent merged commit e497b07 into main May 2, 2023
13 checks passed
@michi-covalent michi-covalent deleted the pr/asauber/helm-upgrade-workflow branch May 2, 2023 14:20