Remove docs related to in-tree GPU support
In-tree GPU support was completely removed in release 1.11.
This PR removes the related docs from the release-1.11 branch.

xref: kubernetes/kubernetes#61498
tengqm committed May 3, 2018
1 parent 8cc303d commit cb264f8
Showing 3 changed files with 0 additions and 69 deletions.
@@ -206,12 +206,10 @@ $ kubectl describe nodes e2e-test-minion-group-4lw4
Name: e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             2
 memory:                          7679792Ki
 pods:                            110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             1800m
 memory:                          7474992Ki
 pods:                            110
2 changes: 0 additions & 2 deletions docs/tasks/administer-cluster/extended-resource-node.md
@@ -82,7 +82,6 @@ The output shows that the Node has a capacity of 4 dongles:

```
"capacity": {
  "alpha.kubernetes.io/nvidia-gpu": "0",
  "cpu": "2",
  "memory": "2049008Ki",
  "example.com/dongle": "4",
@@ -98,7 +97,6 @@ Once again, the output shows the dongle resource:

```yaml
Capacity:
  alpha.kubernetes.io/nvidia-gpu: 0
  cpu: 2
  memory: 2049008Ki
  example.com/dongle: 4
65 changes: 0 additions & 65 deletions docs/tasks/manage-gpus/scheduling-gpus.md
@@ -143,68 +143,3 @@ spec:

This ensures that the pod is scheduled to a node that has the GPU type
you specified.

## v1.6 and v1.7
To enable GPU support in 1.6 and 1.7, the **alpha** feature gate
`Accelerators` has to be set to true across the system:
`--feature-gates="Accelerators=true"`. GPU support also requires using
the Docker Engine as the container runtime.

Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers.
Kubelet will not detect NVIDIA GPUs otherwise.

When you start Kubernetes components after all the above conditions are true,
Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
resource.

You can consume these GPUs from your containers by requesting
`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
However, there are some limitations in how you specify the resource requirements
when using GPUs:
- GPUs are only supposed to be specified in the `limits` section, which means:
  * You can specify GPU `limits` without specifying `requests` because
    Kubernetes will use the limit as the request value by default.
  * You can specify GPU in both `limits` and `requests` but these two values
    must be equal.
  * You cannot specify GPU `requests` without specifying `limits`.
- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
- Each container can request one or more GPUs. It is not possible to request a
fraction of a GPU.
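
The rules above can be illustrated with a minimal `resources` stanza (a
sketch; the container name is hypothetical, and the image is the one used
in the full example later in this page):

```yaml
containers:
  - name: gpu-example            # hypothetical container name
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        # Requesting 2 whole GPUs; Kubernetes uses this limit as the
        # implicit request, so no matching `requests` entry is needed.
        alpha.kubernetes.io/nvidia-gpu: 2
```

Adding a `requests` entry with a different value (say, 1) would be
rejected, since GPU requests and limits must be equal.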

When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
mount the host directories containing the NVIDIA libraries (libcuda.so,
libnvidia.so, etc.) into the container.

Here's an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
      volumeMounts:
        - name: "nvidia-libraries"
          mountPath: "/usr/local/nvidia/lib64"
  volumes:
    - name: "nvidia-libraries"
      hostPath:
        path: "/usr/lib/nvidia-375"
```

The `Accelerators` feature gate and the `alpha.kubernetes.io/nvidia-gpu`
resource work on 1.8 and 1.9 as well. They were deprecated in 1.10 and
removed in 1.11.

## Future
- Support for hardware accelerators in Kubernetes is still in alpha.
- Better APIs will be introduced to provision and consume accelerators in a scalable manner.
- Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.
