Merge remote-tracking branch 'upstream/main' into list-workflow
Signed-off-by: Jiacheng Xu <xjcmaxwellcjx@gmail.com>
jiachengxu committed Apr 29, 2024
2 parents 1001279 + 2bea4ba commit 66bcc6e
Showing 29 changed files with 90 additions and 346 deletions.
3 changes: 2 additions & 1 deletion cmd/argo/commands/cp.go
@@ -17,6 +17,7 @@ import (
"github.com/argoproj/argo-workflows/v3/pkg/apiclient"
workflowpkg "github.com/argoproj/argo-workflows/v3/pkg/apiclient/workflow"
"github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
wfutil "github.com/argoproj/argo-workflows/v3/workflow/util"
)

func NewCpCommand() *cobra.Command {
@@ -82,7 +83,7 @@ func NewCpCommand() *cobra.Command {
if nodeInfo == nil {
return fmt.Errorf("could not get node status for node ID %s", artifact.NodeID)
}
customPath = strings.Replace(customPath, "{templateName}", nodeInfo.TemplateName, 1)
customPath = strings.Replace(customPath, "{templateName}", wfutil.GetTemplateFromNode(*nodeInfo), 1)
customPath = strings.Replace(customPath, "{namespace}", namespace, 1)
customPath = strings.Replace(customPath, "{workflowName}", workflowName, 1)
customPath = strings.Replace(customPath, "{nodeId}", artifact.NodeID, 1)
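The substitution logic in this hunk can be sketched in isolation. The `resolvePath` helper and the sample values below are illustrative, not part of the Argo codebase, and the template-name lookup is simplified (the real change resolves it via `wfutil.GetTemplateFromNode`):

```go
package main

import (
	"fmt"
	"strings"
)

// resolvePath substitutes each supported placeholder in a custom
// artifact path exactly once, mirroring the strings.Replace calls in cp.go.
// This helper and its inputs are illustrative.
func resolvePath(customPath, templateName, namespace, workflowName, nodeID string) string {
	customPath = strings.Replace(customPath, "{templateName}", templateName, 1)
	customPath = strings.Replace(customPath, "{namespace}", namespace, 1)
	customPath = strings.Replace(customPath, "{workflowName}", workflowName, 1)
	customPath = strings.Replace(customPath, "{nodeId}", nodeID, 1)
	return customPath
}

func main() {
	// Example: a path template using three of the placeholders.
	fmt.Println(resolvePath("{namespace}/{workflowName}/{nodeId}",
		"whalesay", "argo", "hello-world-x5rz2", "hello-world-x5rz2-123456789"))
}
```

Because each `strings.Replace` uses a count of 1, only the first occurrence of each placeholder is substituted.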
3 changes: 2 additions & 1 deletion docs/empty-dir.md
@@ -1,6 +1,7 @@
# Empty Dir

While by default, the Docker and PNS [workflow executors](workflow-executors.md) can get output artifacts/parameters from the base layer (e.g. `/tmp`), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a [security context](workflow-pod-security-context.md).
Not all [workflow executors](workflow-executors.md) can get output artifacts/parameters from the base layer (e.g. `/tmp`).
It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a [security context](workflow-pod-security-context.md).

You can work around this constraint by mounting volumes onto your pod. The easiest way to do this is to use an `emptyDir` volume.

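A minimal sketch of this work-around, assuming a hypothetical Workflow (the image, paths, and names are illustrative; `emptyDir`, `volumeMounts`, and the output parameter fields are standard Kubernetes/Argo fields):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: empty-dir-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo hello > /mnt/out/hello.txt"]
        volumeMounts:
          - name: out            # mount the scratch volume where outputs are written
            mountPath: /mnt/out
      volumes:
        - name: out
          emptyDir: {}           # pod-lifetime scratch space, writable even under a restrictive security context
      outputs:
        parameters:
          - name: message
            valueFrom:
              path: /mnt/out/hello.txt
```

Writing outputs to the mounted volume instead of the base layer (e.g. `/tmp`) keeps them readable by the executor regardless of the pod's security context.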
6 changes: 0 additions & 6 deletions docs/faq.md
@@ -26,9 +26,3 @@ Is there an RBAC error?
You're probably getting a permission denied error because your RBAC is not configured.

[Learn more about workflow RBAC](workflow-rbac.md) and [even more details](https://blog.argoproj.io/demystifying-argo-workflowss-kubernetes-rbac-7a1406d446fc)

## There is an error about `/var/run/docker.sock`

Try using a different container runtime executor.

[Learn more about executors](workflow-executors.md)
41 changes: 22 additions & 19 deletions docs/quick-start.md
@@ -1,28 +1,31 @@
# Quick Start

To see how Argo Workflows work, you can install it and run examples of simple workflows.
To try out Argo Workflows, you can install it and run example workflows.

Before you start you need a Kubernetes cluster and `kubectl` set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options:
Alternatively, if you don't want to set up a Kubernetes cluster, try the [Killercoda course](training.md#hands-on).

## Prerequisites

Before installing Argo, you need a Kubernetes cluster and `kubectl` configured to access it.
For quick testing, you can use a local cluster with:

* [minikube](https://minikube.sigs.k8s.io/docs/)
* [kind](https://kind.sigs.k8s.io/)
* [k3s](https://k3s.io/) or [k3d](https://k3d.io/)
* [Docker Desktop](https://www.docker.com/products/docker-desktop/)

Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the [Killercoda course](training.md#hands-on).

!!! Warning "Development vs. Production"
These instructions are intended to help you get started quickly. They are not suitable for production. For production installs, please refer to [the installation documentation](installation.md).
These instructions are intended to help you get started quickly. They are not suitable for production.
For production installs, please refer to [the installation documentation](installation.md).

## Install Argo Workflows

To install Argo Workflows, navigate to the [releases page](https://github.com/argoproj/argo-workflows/releases/latest) and find the release you wish to use (the latest full release is preferred).

To install Argo Workflows, go to the [releases page](https://github.com/argoproj/argo-workflows/releases/latest).
Scroll down to the `Controller and Server` section and execute the `kubectl` commands.

Below is an example of the install commands, ensure that you update the command to install the correct version number:
Below is an example of the install commands (substitute `<<ARGO_WORKFLOWS_VERSION>>` with the version number):

```bash
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<<ARGO_WORKFLOWS_VERSION>>/quick-start-minimal.yaml
```
@@ -39,29 +42,29 @@ You can more easily interact with Argo Workflows with the [Argo CLI](walk-throug
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml
```
The `--watch` flag used above will allow you to observe the workflow as it runs and the status of whether it succeeds.
When the workflow completes, the watch on the workflow will stop.
The `--watch` flag watches the workflow as it runs and reports whether it succeeds or not.
When the workflow completes, the watch stops.
You can list all the Workflows you have submitted by running the command below:
You can list all submitted Workflows by running the command below:
```bash
argo list -n argo
```
You will notice the Workflow name has a `hello-world-` prefix followed by random characters. These characters are used
to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again,
the next Workflow run would have a different name.
The Workflow name has a `hello-world-` prefix followed by random characters.
These characters give Workflows unique names to help identify specific runs of a Workflow.
If you submit this Workflow again, the next run will have different characters.
Using the `argo get` command, you can always review the details of a Workflow run. The output for the command below will
be the same as the information shown when you submitted the Workflow:
You can review the details of a Workflow run using the `argo get` command.
The output for the command below will be the same as the information shown when you submitted the Workflow:
```bash
argo get -n argo @latest
```
The `@latest` argument to the CLI is a shortcut to view the latest Workflow run that was executed.
The `@latest` argument is a shortcut to view the latest Workflow run.
You can also observe the logs of the Workflow run by running the following:
You can observe the logs of the Workflow run with the following command:
```bash
argo logs -n argo @latest
5 changes: 0 additions & 5 deletions docs/sidecar-injection.md
@@ -15,17 +15,12 @@ See [#1282](https://github.com/argoproj/argo-workflows/issues/1282).

Key:

* Unsupported - this executor is no longer supported
* Any - we can kill any image
* KubectlExec - we kill images by running `kubectl exec`

| Executor | Sidecar | Injected Sidecar |
|---|---|---|
| `docker` | Any | Unsupported |
| `emissary` | Any | KubectlExec |
| `k8sapi` | Shell | KubectlExec |
| `kubelet` | Shell | KubectlExec |
| `pns` | Any | Any |

## How We Kill Sidecars Using `kubectl exec`

41 changes: 0 additions & 41 deletions docs/workflow-controller-configmap.yaml
@@ -159,47 +159,6 @@ data:
# name: my-s3-credentials
# key: secretKey
# Specifies the container runtime interface to use (default: emissary)
# must be one of: docker, kubelet, k8sapi, pns, emissary
# It has lower precedence than both `--container-runtime-executor` and `containerRuntimeExecutors`.
# (removed in v3.4)
containerRuntimeExecutor: emissary

# Specifies the executor to use.
#
# You can use this to:
# * Tailor your executor based on your preference for security or performance.
# * Test out an executor without committing yourself to use it for every workflow.
#
# To find out which executor was actually used, see the `wait` container logs.
#
# The list is in order of precedence; the first matching executor is used.
# This has precedence over `containerRuntimeExecutor`.
# (removed in v3.4)
containerRuntimeExecutors: |
- name: emissary
selector:
matchLabels:
workflows.argoproj.io/container-runtime-executor: emissary
- name: pns
selector:
matchLabels:
workflows.argoproj.io/container-runtime-executor: pns
# Specifies the location of docker.sock on the host for docker executor (default: /var/run/docker.sock)
# (available v2.4-v3.3)
dockerSockPath: /var/someplace/else/docker.sock

# kubelet port when using kubelet executor (default: 10250) (the kubelet executor is deprecated; use emissary instead)
# (removed in v3.4)
kubeletPort: "10250"

# disable the TLS verification of the kubelet executor (default: false)
# (removed in v3.4)
kubeletInsecure: "false"

# The command/args for each image, needed when the command is not specified and the emissary executor is used.
# https://argo-workflows.readthedocs.io/en/latest/workflow-executors/#emissary-emissary
images: |
81 changes: 1 addition & 80 deletions docs/workflow-executors.md
@@ -2,13 +2,12 @@

A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc.

The executor to be used in your workflows can be changed in [the config map](./workflow-controller-configmap.yaml) under the `containerRuntimeExecutor` key (removed in v3.4).

## Emissary (emissary)

> v3.1 and after
Default in >= v3.3.
Only option in >= v3.4.

This is the most fully featured executor.

@@ -45,81 +44,3 @@ Emissary will create a cache entry, using image with version as key and command
### Exit Code 64

The emissary will exit with code 64 if it fails. This may indicate a bug in the emissary.

## Docker (docker)

⚠️Deprecated. Removed in v3.4.

Default in <= v3.2.

* Least secure:
* It requires `privileged` access, with the host's `docker.sock` mounted into the pod, which is often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP).
* It can escape the privileges of the pod's service account
* It cannot [`runAsNonRoot`](workflow-pod-security-context.md).
* Equal most scalable:
* It communicates directly with the local Docker daemon.
* Artifacts:
* Output artifacts can be located on the base layer (e.g. `/tmp`).
* Configuration:
* No additional configuration needed.

**Note**: when using docker as the workflow executor, messages printed to both `stdout` and `stderr` are captured in the [Argo variable](./variables.md#scripttemplate) `.outputs.result`.

## Kubelet (kubelet)

⚠️Deprecated. Removed in v3.4.

* Secure
* No `privileged` access
* Cannot escape the privileges of the pod's service account
* [`runAsNonRoot`](workflow-pod-security-context.md) - TBD, see [#4186](https://github.com/argoproj/argo-workflows/issues/4186)
* Scalable:
* Operations performed against the local Kubelet
* Artifacts:
* Output artifacts must be saved on volumes (e.g. [empty-dir](empty-dir.md)) and not the base image layer (e.g. `/tmp`)
* Step/Task result:
* Warnings that normally go to stderr will get captured in a step or a DAG task's `outputs.result`. May require changes if your pipeline is conditioned on `steps/tasks.name.outputs.result`
* Configuration:
* Additional Kubelet configuration may be needed

## Kubernetes API (`k8sapi`)

⚠️Deprecated. Removed in v3.4.

* Reliability:
* Works on GKE Autopilot
* Most secure:
* No `privileged` access
* Cannot escape the privileges of the pod's service account
* Can [`runAsNonRoot`](workflow-pod-security-context.md)
* Least scalable:
* Log retrieval and container operations performed against the remote Kubernetes API
* Artifacts:
* Output artifacts must be saved on volumes (e.g. [empty-dir](empty-dir.md)) and not the base image layer (e.g. `/tmp`)
* Step/Task result:
* Warnings that normally go to stderr will get captured in a step or a DAG task's `outputs.result`. May require changes if your pipeline is conditioned on `steps/tasks.name.outputs.result`
* Configuration:
* No additional configuration needed.

## Process Namespace Sharing (`pns`)

⚠️Deprecated. Removed in v3.4.

* More secure:
* No `privileged` access
* cannot escape the privileges of the pod's service account
* Can [`runAsNonRoot`](workflow-pod-security-context.md), if you use volumes (e.g. [empty-dir](empty-dir.md)) for your output artifacts
* Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.
* Scalable:
* Most operations use local `procfs`.
* Log retrieval uses the remote Kubernetes API
* Artifacts:
* Output artifacts can be located on the base layer (e.g. `/tmp`)
* Cannot capture artifacts from a base layer which has a volume mounted under it
* Cannot capture artifacts from base layer if the container is short-lived.
* Configuration:
* No additional configuration needed.
* Process will no longer run with PID 1
* [Doesn't work for Windows containers](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-pod).

[Learn more](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/)
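Process namespace sharing is a standard Kubernetes feature enabled per pod. A minimal pod spec illustrating it (names and images are illustrative; `shareProcessNamespace` is the real Kubernetes field):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo          # illustrative name
spec:
  shareProcessNamespace: true    # all containers in the pod see each other's processes
  containers:
    - name: main
      image: alpine:3.19
      command: [sleep, "3600"]
    - name: sidecar
      image: alpine:3.19
      command: [sleep, "3600"]   # can observe and signal processes in `main` via /proc
```

With this set, the first process in each container no longer runs as PID 1, and any container can read the others' `/proc` entries, which is what gives the `pns` executor its access and also its security caveats.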
6 changes: 3 additions & 3 deletions docs/workflow-pod-security-context.md
@@ -1,10 +1,10 @@
# Workflow Pod Security Context

By default, all workflow pods run as root. The Docker executor even requires `privileged: true`.
By default, all workflow pods run as root.

For other [workflow executors](workflow-executors.md), you can run your workflow pods more securely by configuring the [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your workflow pod.
You can run your workflow pods more securely by configuring the [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your workflow pod.

This is likely to be necessary if you have a [pod security policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/). You probably can't use the Docker executor if you have a pod security policy.
This is likely to be necessary if you have a [pod security policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/).

```yaml
apiVersion: argoproj.io/v1alpha1
22 changes: 0 additions & 22 deletions docs/workflow-rbac.md
@@ -26,25 +26,3 @@ rules:
- create
- patch
```

For <= v3.3, use:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: executor
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- patch
```

Warning: For many organizations, it may not be acceptable to give a workflow the `pod patch` permission, see [#3961](https://github.com/argoproj/argo-workflows/issues/3961)

If you are not using the emissary, you'll need additional permissions.
See [executor](https://github.com/argoproj/argo-workflows/tree/main/manifests/quick-start/base/executor) for suitable permissions.
3 changes: 1 addition & 2 deletions hack/manifests/crdgen.sh
@@ -11,13 +11,12 @@ add_header() {
controller-gen crd:maxDescLen=0 paths=./pkg/apis/... output:dir=manifests/base/crds/full

find manifests/base/crds/full -name 'argoproj.io*.yaml' | while read -r file; do
echo "Patching ${file}"
# remove junk fields
go run ./hack/manifests cleancrd "$file"
add_header "$file"
# create minimal
minimal="manifests/base/crds/minimal/$(basename "$file")"
echo "Creating ${minimal}"
echo "Creating minimal CRD file: ${minimal}"
cp "$file" "$minimal"
go run ./hack/manifests removecrdvalidation "$minimal"
done
18 changes: 0 additions & 18 deletions manifests/quick-start/base/executor/docker/executor-role.yaml

This file was deleted.

37 changes: 0 additions & 37 deletions manifests/quick-start/base/executor/k8sapi/executor-role.yaml

This file was deleted.
