Merge pull request #7539 from yuxiang-zhang/update-layout
Announce eksctl Support Status Update on eksctl.io
yuxiang-zhang committed Feb 7, 2024
2 parents 02c2066 + 3c2bb26 commit 4e3d0e4
Showing 7 changed files with 89 additions and 144 deletions.
14 changes: 8 additions & 6 deletions userdocs/mkdocs.yml
@@ -42,7 +42,6 @@ theme:
features:
- header.autohide
- navigation.instant
- navigation.sections
- navigation.top
- navigation.tabs
- navigation.tabs.sticky
@@ -135,10 +134,13 @@ nav:
- Getting Started:
- Introduction: getting-started.md
- Installation: installation.md
- Config File Schema: usage/schema.md
- Dry Run: usage/dry-run.md
- FAQ: usage/faq.md
- Announcements:
- announcements/managed-nodegroups-announcement.md
- announcements/nodegroup-override-announcement.md
- Usage:
- User Guide:
- Clusters:
- usage/creating-and-managing-clusters.md
- usage/access-entries.md
@@ -170,6 +172,8 @@ nav:
- usage/container-runtime.md
- usage/windows-worker-nodes.md
- usage/nodegroup-additional-volume-mappings.md
- usage/eksctl-karpenter.md
- usage/eksctl-anywhere.md
- GitOps:
- usage/gitops-v2.md
- Security:
@@ -189,12 +193,10 @@
- usage/iam-identity-mappings.md
- usage/iamserviceaccounts.md
- usage/pod-identity-associations.md
- usage/dry-run.md
- usage/schema.md
- usage/eksctl-anywhere.md
- usage/eksctl-karpenter.md
- usage/dry-run.md
- usage/troubleshooting.md
- FAQ: usage/faq.md
- Examples: "https://github.com/eksctl-io/eksctl/tree/main/examples"
- Example Configs: "https://github.com/eksctl-io/eksctl/tree/main/examples"
- Community: community.md
- Adopters: adopters.md
52 changes: 2 additions & 50 deletions userdocs/src/community.md
@@ -10,14 +10,14 @@ For more information, please head to our [Community][community] and [Contributin
[community]: https://github.com/eksctl-io/eksctl/blob/main/COMMUNITY.md
[contributing]: https://github.com/eksctl-io/eksctl/blob/main/CONTRIBUTING.md

## Get in touch :simple-wechat:
## Get in touch :material-chat-processing-outline:

[Create an issue](https://github.com/eksctl-io/eksctl/issues/new), or log in to [Eksctl Slack (#eksctl)][slackchan] ([signup][slackjoin]).

[slackjoin]: https://slack.k8s.io/
[slackchan]: https://slack.k8s.io/messages/eksctl/

## Release Cadence :material-clipboard-check-multiple-outline:
## Release Cadence :simple-starship:

Minor releases of `eksctl` are loosely scheduled weekly, on Fridays. Patch
releases will be made available as needed.
@@ -28,51 +28,3 @@ each minor release. RC builds are intended only for testing purposes.
## Eksctl Roadmap :octicons-project-roadmap-16:

The EKS section of the AWS Containers Roadmap contains the overall roadmap for EKS. All the upcoming features for `eksctl` built in partnership with AWS can be found [here](https://github.com/aws/containers-roadmap/projects/1?card_filter_query=label%3Aeks).

## 2021 Roadmap

The following are the features/epics we will focus on and hope to ship this year.
We will take their completion as a marker for graduation to v1.
General maintenance of `eksctl` is still implied alongside this work,
but all subsequent features which are suggested during the year will be weighed
in relation to the core targets.

Progress on the roadmap can be tracked [here](https://github.com/eksctl-io/eksctl/projects/2).

### Technical Debt

Not a feature, but a vital pre-requisite to making actual feature work straightforward.

Key aims within this goal include, but are not limited to:

- [Refactoring/simplifying the Provider](https://github.com/eksctl-io/eksctl/issues/2931)
- [Expose core `eksctl` workflows through a library/SDK](https://github.com/eksctl-io/eksctl/issues/813)
- Greater integration test coverage and resilience
- Greater unit test coverage (this will either be dependent on, or help drive out,
better internal interface boundaries)

### Declarative configuration and cluster reconciliation

This has been on the TODO list for quite a while, and we are very excited to bring
it into scope for 2021.

Current interaction with `eksctl` is imperative; we hope to add support for declarative
configuration and cluster reconciliation via a new `eksctl apply -f config.yaml`
command. This model will additionally allow users to manage a cluster via a git repo.

A [WIP proposal](https://github.com/eksctl-io/eksctl/blob/main/docs/proposal-007-apply.md)
is already under consideration. To participate in the development of this feature,
please refer to the [tracking issue](https://github.com/eksctl-io/eksctl/issues/2774)
and our [proposal contributing guide](https://github.com/eksctl-io/eksctl/blob/main/CONTRIBUTING.md#proposals).

### Flux v2 integration (GitOps Toolkit)

In 2019 `eksctl` gave users a way to easily create a GitOps-ready ([Flux v1](https://docs.fluxcd.io/en/1.21.1/))
cluster and to declare a set of pre-installed applications (Quickstart profiles) which could be managed via a git repo.

Since then, the practice of GitOps has matured, and `eksctl`'s support for
GitOps has changed to keep up with current standards. From version 0.76.0, Flux v1 support was removed after an 11-month
deprecation period. In its place, support for [Flux v2](https://fluxcd.io/) is available via
`eksctl enable flux`.
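
As a quick orientation, a hedged sketch of invoking it follows; the `--config-file` flag and the `gitops.flux` config section named in the comment are assumptions based on the Flux v2 integration docs, not confirmed by this page:

```sh
# Sketch only: assumes Flux settings live under a `gitops.flux`
# section of the cluster config file passed via --config-file.
eksctl enable flux --config-file cluster.yaml
```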


106 changes: 52 additions & 54 deletions userdocs/src/getting-started.md
@@ -2,17 +2,16 @@

!!! tip "New for 2023"
`eksctl` now supports configuring cluster access management via [AWS EKS Access Entries](/usage/access-entries).

`eksctl` now supports configuring fine-grained permissions for apps running on EKS via [EKS Pod Identity Associations](/usage/pod-identity-associations).


`eksctl` now supports [updating the subnets and security groups](/usage/cluster-subnets-security-groups) associated with the EKS control plane.

`eksctl` now supports creating fully private clusters on [AWS Outposts](/usage/outposts).

`eksctl` now supports new ISO regions `us-iso-east-1` and `us-isob-east-1`.

`eksctl` now supports new regions - Calgary (`ca-west-1`), Zurich (`eu-central-2`), Spain (`eu-south-2`), Hyderabad (`ap-south-2`), Melbourne (`ap-southeast-4`) and Tel Aviv (`il-central-1`).
`eksctl` now supports new regions - Calgary (`ca-west-1`), Tel Aviv (`il-central-1`), Melbourne (`ap-southeast-4`), Hyderabad (`ap-south-2`), Spain (`eu-south-2`) and Zurich (`eu-central-2`).

`eksctl` is a simple CLI tool for creating and managing clusters on EKS - Amazon's managed Kubernetes service for EC2.
It is written in Go, uses CloudFormation, was created by [Weaveworks](https://www.weave.works/), and welcomes
@@ -32,38 +31,37 @@ contributions from the community.
- `us-west-2` region
- a dedicated VPC (check your quotas)

Example output:

```sh
$ eksctl create cluster
[ℹ] using region us-west-2
[ℹ] setting availability zones to [us-west-2a us-west-2c us-west-2b]
[ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-98b3b83a" will use "ami-05ecac759c81e0b0c" [AmazonLinux2/1.11]
[ℹ] creating EKS cluster "floral-unicorn-1540567338" in "us-west-2" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=floral-unicorn-1540567338'
[ℹ] 2 sequential tasks: { create cluster control plane "floral-unicorn-1540567338", create nodegroup "ng-98b3b83a" }
[ℹ] building cluster stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] deploying stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] building nodegroup stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] deploying stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[✔] all EKS cluster resource for "floral-unicorn-1540567338" had been created
[✔] saved kubeconfig as "~/.kube/config"
[ℹ] adding role "arn:aws:iam::376248598259:role/eksctl-ridiculous-sculpture-15547-NodeInstanceRole-1F3IHNVD03Z74" to auth ConfigMap
[ℹ] nodegroup "ng-98b3b83a" has 1 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is not ready
[ℹ] waiting for at least 2 node(s) to become ready in "ng-98b3b83a"
[ℹ] nodegroup "ng-98b3b83a" has 2 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is ready
[ℹ] node "ip-192-168-8-135.us-west-2.compute.internal" is ready
[ℹ] kubectl command should work with "~/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "floral-unicorn-1540567338" in "us-west-2" region is ready
```
???- info "Example output"
```sh
$ eksctl create cluster
[ℹ] using region us-west-2
[ℹ] setting availability zones to [us-west-2a us-west-2c us-west-2b]
[ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-98b3b83a" will use "ami-05ecac759c81e0b0c" [AmazonLinux2/1.11]
[ℹ] creating EKS cluster "floral-unicorn-1540567338" in "us-west-2" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=floral-unicorn-1540567338'
[ℹ] 2 sequential tasks: { create cluster control plane "floral-unicorn-1540567338", create nodegroup "ng-98b3b83a" }
[ℹ] building cluster stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] deploying stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] building nodegroup stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] deploying stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[✔] all EKS cluster resource for "floral-unicorn-1540567338" had been created
[✔] saved kubeconfig as "~/.kube/config"
[ℹ] adding role "arn:aws:iam::376248598259:role/eksctl-ridiculous-sculpture-15547-NodeInstanceRole-1F3IHNVD03Z74" to auth ConfigMap
[ℹ] nodegroup "ng-98b3b83a" has 1 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is not ready
[ℹ] waiting for at least 2 node(s) to become ready in "ng-98b3b83a"
[ℹ] nodegroup "ng-98b3b83a" has 2 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is ready
[ℹ] node "ip-192-168-8-135.us-west-2.compute.internal" is ready
[ℹ] kubectl command should work with "~/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "floral-unicorn-1540567338" in "us-west-2" region is ready
```

Customize your cluster by using a config file. Just run

@@ -101,7 +99,15 @@ To learn more about how to create clusters and other features continue reading t

[ekskubectl]: https://docs.aws.amazon.com/eks/latest/userguide/configure-kubectl.html

### Basic cluster creation
## Listing clusters

To list the details about a cluster or all of the clusters, use:

```sh
eksctl get cluster [--name=<name>] [--region=<region>]
```
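
For instance, to look up the cluster created above (the name and region are illustrative values taken from the example output):

```sh
eksctl get cluster --name=floral-unicorn-1540567338 --region=us-west-2
```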

## Basic cluster creation

To create a basic cluster, but with a different name, run:
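
The exact command is collapsed in this view; a minimal sketch using flags that appear elsewhere on this page (the cluster name is a placeholder):

```sh
eksctl create cluster --name=cluster-1 --nodes=4
```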

@@ -116,15 +122,7 @@ With `eksctl` you can deploy any of the supported versions by passing `--version
```sh
eksctl create cluster --version=1.24
```

### Listing clusters
To list the details about a cluster or all of the clusters, use:
```sh
eksctl get cluster [--name=<name>][--region=<region>]
```
#### Config-based creation
### Config-based creation

You can also create a cluster passing all configuration information in a file
using `--config-file`:
@@ -140,7 +138,7 @@ nodegroups until later:
```sh
eksctl create cluster --config-file=<path> --without-nodegroup
```
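
For illustration, a minimal config-file sketch. The `ClusterConfig` fields shown follow the schema documented on the Config File Schema page; all names and values are placeholders:

```sh
# Write a minimal cluster config, then create the cluster from it.
# (Field names follow the ClusterConfig schema; values are placeholders.)
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: basic-cluster
  region: us-west-2

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
EOF

eksctl create cluster --config-file=cluster.yaml
```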

#### Cluster credentials
### Cluster credentials

To write cluster credentials to a file other than default, run:
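
The command itself is collapsed here; a sketch assuming `eksctl create cluster` accepts a `--kubeconfig` path, as `eksctl utils write-kubeconfig` does below (names and paths are placeholders):

```sh
eksctl create cluster --name=cluster-2 --nodes=4 --kubeconfig=./kubeconfig.cluster-2.yaml
```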

@@ -163,10 +161,10 @@ eksctl create cluster --name=cluster-3 --nodes=4 --auto-kubeconfig
To obtain cluster credentials at any point in time, run:

```sh
eksctl utils write-kubeconfig --cluster=<name> [--kubeconfig=<path>][--set-kubeconfig-context=<bool>]
eksctl utils write-kubeconfig --cluster=<name> [--kubeconfig=<path>] [--set-kubeconfig-context=<bool>]
```
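
For example, to refresh credentials for `cluster-3` into a dedicated file and make it the active context (the paths are illustrative):

```sh
eksctl utils write-kubeconfig --cluster=cluster-3 --kubeconfig=./kubeconfig.cluster-3.yaml --set-kubeconfig-context=true
```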

#### Caching Credentials
### Caching Credentials

`eksctl` supports caching credentials. This is useful when using MFA and not wanting to continuously enter the MFA
token on each `eksctl` command run.
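
The enabling step is collapsed in this view; a sketch assuming caching is switched on via an environment variable (the variable name is an assumption, so verify it against the `eksctl` docs for your version):

```sh
export EKSCTL_ENABLE_CREDENTIAL_CACHE=1   # assumed variable name
eksctl get cluster                        # later calls reuse the cached credentials
```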
@@ -184,7 +182,7 @@ It's also possible to configure the location of this cache file using `EKSCTL_CR
be the **full path** to a file in which to store the cached credentials. These are credentials, so make sure access
to this file is restricted to the current user and that the file lives in a secure location.

### Autoscaling
## Autoscaling

To use a 3-5 node Auto Scaling Group, run:
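
The command is collapsed here; a sketch using the `--nodes-min`/`--nodes-max` flags visible in the example log above (the cluster name is a placeholder):

```sh
eksctl create cluster --name=cluster-5 --nodes-min=3 --nodes-max=5
```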

@@ -196,7 +194,7 @@ You will still need to install and configure Auto Scaling. See the "Enable Auto
note that depending on your workloads you might need to use a separate nodegroup for each AZ. See [Zone-aware
Auto Scaling](/usage/autoscaling/) for more info.

### SSH access
## SSH access

In order to allow SSH access to nodes, `eksctl` imports `~/.ssh/id_rsa.pub` by default. To use a different SSH public key, e.g. `my_eks_node_id.pub`, run:
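
The command is collapsed in this view; a sketch assuming `--ssh-access`/`--ssh-public-key` flags (the flag names are assumptions):

```sh
eksctl create cluster --ssh-access --ssh-public-key=my_eks_node_id.pub
```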

@@ -218,15 +216,15 @@ eksctl create cluster --enable-ssm

If you are creating managed nodes with a custom launch template, the `--enable-ssm` flag is disallowed.

### Tagging
## Tagging

To add custom tags for all resources, use `--tags`.

```sh
eksctl create cluster --tags environment=staging --region=us-east-1
```

### Volume size
## Volume size

To configure the node root volume, use `--node-volume-size` (and optionally `--node-volume-type`), e.g.:

@@ -237,7 +235,7 @@ eksctl create cluster --node-volume-size=50 --node-volume-type=io1
???+ note
The default volume size is 80G.

### Deletion
## Deletion

To delete a cluster, run:
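
The command is collapsed here; a sketch mirroring the `eksctl get cluster` signature shown earlier (the flags are assumptions):

```sh
eksctl delete cluster --name=<name> [--region=<region>]
```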

3 changes: 2 additions & 1 deletion userdocs/src/stylesheets/extra.css
@@ -20,8 +20,9 @@ img[src$="#wwinline"] {
--md-default-fg-color: #c8cad5;
}

a.md-button.announce {
.announce {
padding: 0.3em 0.3em;
background-color: var(--md-primary-bg-color);
}

.md-header__button.md-logo img, .md-header__button.md-logo svg {
5 changes: 2 additions & 3 deletions userdocs/src/usage/schema.md
@@ -1,7 +1,6 @@
# Config file schema
# Config File Schema

Use `eksctl utils schema` to get the raw JSON schema.
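
For example, to save the schema to a file or skim its top-level definitions (the `definitions` key is an assumption about the JSON Schema layout, and `jq` is required):

```sh
eksctl utils schema > clusterconfig-schema.json
eksctl utils schema | jq -r '.definitions | keys[]'
```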
<script type="module" src="../schema.js"></script>

<table id="config"></table>

