website/content/en/preview/troubleshooting.md

description: >
  Troubleshoot Karpenter problems
---
## Unable to schedule pod due to insufficient node group instances
v0.16 changed the default number of Karpenter controller replicas from 1 to 2.
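A quick way to confirm you are hitting this (assuming the default install, i.e. a deployment named `karpenter` in the `karpenter` namespace; adjust both to match your cluster) is to compare the configured and ready replica counts:

```shell
# Print "<configured> <ready>" for the Karpenter controller deployment.
# Deployment name and namespace are assumptions from the default Helm install.
kubectl get deployment karpenter -n karpenter \
  -o jsonpath='{.spec.replicas} {.status.readyReplicas}{"\n"}'
```

If this prints `2 1`, the second replica is stuck pending because there is no node it is allowed to schedule onto.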
Karpenter won't launch capacity to run itself (its pods carry a `karpenter.sh/provisioner-name DoesNotExist` requirement, which shows up in the logs), so it can't provision a node for the second Karpenter pod.
To solve this, either reduce the replicas back from 2 to 1, or ensure there is enough capacity that isn't being managed by Karpenter (these are instances with the name `karpenter.sh/provisioner-name/<PROVIDER_NAME>`) to run both pods.
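For the first option, a minimal sketch of scaling the controller back to a single replica (again assuming the default deployment name and namespace):

```shell
# Scale the Karpenter controller deployment back to one replica.
# For Helm-managed installs, setting the chart's replicas value instead
# makes the change survive the next `helm upgrade`.
kubectl scale deployment karpenter -n karpenter --replicas=1
```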
To do so on AWS, increase the `minimum` and `desired` parameters on the node group's Auto Scaling group so that it launches at least 2 instances.
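As a sketch with the AWS CLI (the Auto Scaling group name below is a placeholder for your node group's ASG):

```shell
# Raise the node group's floor so at least two instances exist
# outside Karpenter's management, leaving room for both Karpenter pods.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <NODE_GROUP_ASG_NAME> \
  --min-size 2 \
  --desired-capacity 2
```

On EKS, `eksctl scale nodegroup` achieves the same effect at the node-group level.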
## Node not created
In some circumstances, Karpenter controller can fail to start up a node.
For example, providing the wrong block storage device name in a custom launch template can result in a failure to start the node and an error similar to:
website/content/en/v0.16.0/troubleshooting.md