Commit 5492dbc

Authored Sep 1, 2022
docs: Update terraform example and troubleshoot guide for new v.0.16… (#2387)
Co-authored-by: Matthias Gamsjager <matthias.gamsjager@cloudnation.nl>
1 parent af3a019 commit 5492dbc

4 files changed, +32 -6 lines changed

website/content/en/preview/getting-started/getting-started-with-terraform/_index.md

+4 -3

@@ -160,9 +160,10 @@ module "eks" {
 # Not required nor used - avoid tagging two security groups with same tag as well
 create_security_group = false
 
-min_size = 1
-max_size = 1
-desired_size = 1
+# Ensure enough capacity to run 2 Karpenter pods
+min_size = 2
+max_size = 3
+desired_size = 2
 
 iam_role_additional_policies = [
 # Required by Karpenter
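
For context, here is a minimal sketch of how the updated sizing sits inside the guide's `module "eks"` block. This is an illustration only: the node group key (`initial`) and the instance type are assumptions, not taken from this commit, and the module's other required arguments are omitted.

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster_name, cluster_version, VPC/subnet settings omitted ...

  eks_managed_node_groups = {
    initial = {                       # node group key is illustrative
      instance_types = ["m5.large"]   # instance type is illustrative

      # Not required nor used - avoid tagging two security groups with same tag as well
      create_security_group = false

      # Ensure enough capacity to run 2 Karpenter pods
      min_size     = 2
      max_size     = 3
      desired_size = 2
    }
  }
}
```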

website/content/en/preview/troubleshooting.md

+12

@@ -6,6 +6,18 @@ description: >
 Troubleshoot Karpenter problems
 ---
 
+## Unable to schedule pod due to insufficient node group instances
+
+v0.16 changed the default replicas from 1 to 2.
+
+Karpenter won't launch capacity to run itself (there is a log message related to the `karpenter.sh/provisioner-name DoesNotExist requirement`),
+so it can't provision capacity for the second Karpenter pod.
+
+To solve this you can either reduce the replicas back from 2 to 1, or ensure there is enough capacity that isn't being managed by Karpenter
+(these are instances with the name `karpenter.sh/provisioner-name/<PROVIDER_NAME>`) to run both pods.
+
+To do so on AWS, increase the `minimum` and `desired` parameters on the node group autoscaling group to launch at least 2 instances.
+
 ## Node not created
 In some circumstances, Karpenter controller can fail to start up a node.
 For example, providing the wrong block storage device name in a custom launch template can result in a failure to start the node and an error similar to:
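
If you would rather take the first remedy above and drop Karpenter back to a single replica, and you install Karpenter through Terraform's `helm_release` resource as the getting-started guide does, a hedged sketch could look like the following. It assumes the Karpenter chart exposes a top-level `replicas` value; the chart repository, version, and the other values from the guide's install step are omitted.

```hcl
resource "helm_release" "karpenter" {
  name             = "karpenter"
  namespace        = "karpenter"
  create_namespace = true

  # Chart repository/version: use whatever the guide's install step specifies.
  chart = "karpenter"

  # Assumption: the chart's top-level `replicas` value controls the number of
  # Karpenter controller pods (default 2 as of v0.16).
  set {
    name  = "replicas"
    value = "1"
  }

  # ... clusterName, clusterEndpoint, instance profile, and service account
  # settings from the guide are omitted here ...
}
```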

website/content/en/v0.16.0/getting-started/getting-started-with-terraform/_index.md

+4 -3

@@ -160,9 +160,10 @@ module "eks" {
 # Not required nor used - avoid tagging two security groups with same tag as well
 create_security_group = false
 
-min_size = 1
-max_size = 1
-desired_size = 1
+# Ensure enough capacity to run 2 Karpenter pods
+min_size = 2
+max_size = 3
+desired_size = 2
 
 iam_role_additional_policies = [
 # Required by Karpenter
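
The same sizing change applies if the node group is declared directly instead of through the EKS module. As a hedged sketch on a plain `aws_eks_node_group` resource (the resource names and referenced attributes are illustrative, not from this commit):

```hcl
resource "aws_eks_node_group" "karpenter" {
  cluster_name    = aws_eks_cluster.this.name  # illustrative reference
  node_group_name = "karpenter"
  node_role_arn   = aws_iam_role.node.arn      # illustrative reference
  subnet_ids      = module.vpc.private_subnets # illustrative reference

  # Ensure enough capacity to run 2 Karpenter pods
  scaling_config {
    min_size     = 2
    max_size     = 3
    desired_size = 2
  }
}
```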

website/content/en/v0.16.0/troubleshooting.md

+12

@@ -6,6 +6,18 @@ description: >
 Troubleshoot Karpenter problems
 ---
 
+## Unable to schedule pod due to insufficient node group instances
+
+v0.16 changed the default replicas from 1 to 2.
+
+Karpenter won't launch capacity to run itself (there is a log message related to the `karpenter.sh/provisioner-name DoesNotExist requirement`),
+so it can't provision capacity for the second Karpenter pod.
+
+To solve this you can either reduce the replicas back from 2 to 1, or ensure there is enough capacity that isn't being managed by Karpenter
+(these are instances with the name `karpenter.sh/provisioner-name/<PROVIDER_NAME>`) to run both pods.
+
+To do so on AWS, increase the `minimum` and `desired` parameters on the node group autoscaling group to launch at least 2 instances.
+
 ## Node not created
 In some circumstances, Karpenter controller can fail to start up a node.
 For example, providing the wrong block storage device name in a custom launch template can result in a failure to start the node and an error similar to:
