
fix: delete pod when base container completes #39694

Open · wants to merge 5 commits into main
Conversation

jonathan-ostrander

Closes #39693

Special logic was added to KubernetesPodOperator's lifecycle to handle the case where an istio proxy sidecar keeps running and prevents the pod from completing, but this logic should have been applied more generally to handle pods that run multiple containers. The new behavior considers the pod completed once the container matching the base container name has completed.

The istio-specific logic has been removed because it is no longer used.
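The completion check described above can be sketched roughly as follows. This is a minimal illustration, not the actual provider code; the helper operates on plain dicts shaped like a pod's `status.containerStatuses` field, and the default container name `"base"` is an assumption:

```python
# Hypothetical sketch: decide whether a pod should be treated as complete
# by looking only at the base container's state, ignoring sidecars that
# may still be running.

def base_container_is_done(container_statuses, base_container_name="base"):
    """Return True if the container named `base_container_name` has terminated.

    `container_statuses` is a list of dicts mirroring the
    `status.containerStatuses` field of a pod, e.g.
    [{"name": "base", "state": {"terminated": {"exitCode": 0}}}].
    """
    for status in container_statuses:
        if status["name"] == base_container_name:
            # A container is done once its state carries a "terminated" entry.
            return "terminated" in status.get("state", {})
    # Base container not found yet (e.g. pod still initializing).
    return False
```

With this check, a pod with `base` terminated and a `sidecar` still running counts as complete, which is the behavior change this PR proposes.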


^ Add meaningful description above
Read the Pull Request Guidelines for more information.
In case of fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in a newsfragment file, named {pr_number}.significant.rst or {issue_number}.significant.rst, in newsfragments.


boring-cyborg bot commented May 17, 2024

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
Here are some useful points:

  • Pay attention to the quality of your code (ruff, mypy and type annotations). Our pre-commits will help you with that.
  • In case of a new feature add useful documentation (in docstrings or in docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using Breeze environment for testing locally. It's a heavy Docker setup, but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
  • Please follow ASF Code of Conduct for all communication including (but not limited to) comments on Pull Requests, Mailing list and Slack.
  • Be sure to read the Airflow Coding style.
  • Always keep your Pull Requests rebased, otherwise your build might fail due to changes not related to your commits.
    Apache Airflow is a community-driven project and together we are making it better 🚀.
    In case of doubts contact the developers at:
    Mailing List: dev@airflow.apache.org
    Slack: https://s.apache.org/airflow-slack

@jedcunningham
Member

Did you test on_finish_action="keep_pod"? We don't always delete the pods. I doubt we can do away with our istio handling like this, and I suspect that got broken somehow recently.

@jedcunningham
Member

I did a little digging, and that scenario was never covered when the feature was added in #33306 😞. I don't think we want to leave these pods still running.

@jedcunningham
Member

Yeah, it's tough to 1) do it generally, 2) not delete pods, and 3) not leave the pods running. That's why istio has (or tried to have, it turns out) special handling.

I'm actually excited about the new(ish) sidecar support in k8s. It should make this situation a lot easier to tackle. Istio also has support for it via ENABLE_NATIVE_SIDECARS. I'm not sure how well Airflow tolerates that, though; I just know I haven't personally tried it yet.
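For context, the native sidecar support mentioned here works by declaring an init container with `restartPolicy: Always`; the kubelet then runs it alongside the main containers and shuts it down automatically once they exit. A hedged illustration, written as a Python dict manifest (container names and images are hypothetical, not from this PR):

```python
# Illustrative pod manifest using Kubernetes native sidecars: an init
# container with restartPolicy "Always" keeps running alongside the main
# container and is terminated by the kubelet when the main container exits.

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-task-pod"},
    "spec": {
        "initContainers": [
            {
                "name": "log-shipper",      # a long-running sidecar, not a one-shot init step
                "image": "example/log-shipper:latest",
                "restartPolicy": "Always",  # this field is what marks it as a native sidecar
            }
        ],
        "containers": [
            {"name": "base", "image": "example/task:latest"},
        ],
    },
}
```

With a manifest like this, the pod reaches a terminal phase on its own once `base` exits, so no operator-side sidecar cleanup is needed.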

@jonathan-ostrander
Author

@jedcunningham I'm not sure I understand your point about OnFinishAction.KEEP_POD or why the istio sidecar needs to be treated as a special case. Shouldn't that finish action be respected regardless of what's running in the pod? This doesn't change the behavior of KubernetesPodOperator#process_pod_deletion at all. It just changes the condition for a pod to be considered completed.

I'm also excited by the introduction of sidecars, but sadly the GKE cluster that my team has Airflow deployed on is currently on k8s version 1.27.x, and our infra team doesn't plan to upgrade to 1.29.x for a couple of months... We have a workaround, which is to just enable the istio proxy (or just name the sidecar container istio-proxy), but this feels hacky considering we don't need istio.

Collaborator

@dirrao dirrao left a comment


In the case of additional sidecars, the sidecars (e.g. Splunk) have to wait for the completion of the base container and then self-terminate. If they don't, we need to kill them after a timeout. Are we handling that scenario?
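The scenario raised here (wait for the base container, then reap sidecars that fail to self-terminate within a grace period) could be sketched like this. This is purely hypothetical logic, not provider code: `is_base_done`, `running_sidecars`, and `kill_container` are injected callables standing in for real Kubernetes API calls.

```python
import time

# Hypothetical sketch: wait for the base container to finish, give any
# remaining sidecars a grace period to exit on their own, then force-kill
# the stragglers. All cluster interaction is abstracted behind callables.

def reap_sidecars(is_base_done, running_sidecars, kill_container,
                  grace_seconds=30, poll_interval=1,
                  clock=time.monotonic, sleep=time.sleep):
    # Block until the base container has completed.
    while not is_base_done():
        sleep(poll_interval)
    # Give sidecars a chance to self-terminate within the grace period.
    deadline = clock() + grace_seconds
    while running_sidecars() and clock() < deadline:
        sleep(poll_interval)
    # Force-terminate any sidecars that did not exit on their own.
    for name in running_sidecars():
        kill_container(name)
```

In a real implementation the timeout would need to interact with the operator's on_finish_action setting, which is exactly the tension discussed in this thread.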

Review comment on airflow/providers/cncf/kubernetes/operators/pod.py (outdated, resolved)
Member

@ashb ashb left a comment


Maybe it's elsewhere in the code path so doesn't show up in the diff, but where do we terminate the other containers in the pod?

@jonathan-ostrander
Author

jonathan-ostrander commented May 19, 2024

@dirrao
Collaborator

dirrao commented May 19, 2024

closes: #39625

Labels
area:providers provider:cncf-kubernetes Kubernetes provider related issues
Development

Successfully merging this pull request may close these issues.

KubernetesPodOperator with multiple containers hangs if container other than base container is still running
5 participants