[BUG] Flyte array plugin fails with "ResourceDeletedExternally" #5272

Open
pablocasares opened this issue Apr 23, 2024 · 2 comments

Labels
bug Something isn't working

@pablocasares
Describe the bug

In some workflow executions using map_task, the workflow fails with errors such as:

[498]: code:"ResourceDeletedExternally" message:"resource not found, name [ingestion-pipeline-production/jtep6yfsp4re7teyxnfs-n4-0-498]. reason: pods \"jtep6yfsp4re7teyxnfs-n4-0-498\" not found"
[500]: code:"ResourceDeletedExternally" message:"resource not found, name [ingestion-pipeline-production/jtep6yfsp4re7teyxnfs-n4-0-500]. reason: pods \"jtep6yfsp4re7teyxnfs-n4-0-500\" not found"
[501]: code:"ResourceDeletedExternally" message:"resource not found, name [ingestion-pipeline-production/jtep6yfsp4re7teyxnfs-n4-0-501]. reason: pods \"jtep6yfsp4re7teyxnfs-n4-0-501\" not found"
[502]: code:"ResourceDeletedExternally" message:"resource not found, name [ingestion-pipeline-production/jtep6yfsp4re7teyxnfs-n4-0-502]. reason: pods \"jtep6yfsp4re7teyxnfs-n4-0-502\" not found"
[503]: code:"ResourceDeletedExternally" message:"resource not found, name [ingestion-pipeline-production/jtep6yfsp4re7teyxnfs-n4-0-503]. reason: pods \"jtep6yfsp4re7teyxnfs-n4-0-503\" not found"
... and many more.

This seems to happen because the pods are removed before FlytePropeller checks their final status.
The pods listed above did not fail: they ran to completion, but they appear to be cleaned up before FlytePropeller reads their Succeeded status.
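
For orientation, here is a minimal flytekit sketch of the kind of workflow involved. The task and workflow names are hypothetical (not taken from the failing pipeline); the sketch only illustrates that each list element of a map task runs in its own pod, which is the level at which the errors above are reported.

```python
from typing import List

from flytekit import map_task, task, workflow


@task
def process_chunk(chunk_id: int) -> int:
    # Stand-in for the real per-element work; the actual task here is hypothetical.
    return chunk_id * 2


@workflow
def ingestion_wf(chunk_ids: List[int]) -> List[int]:
    # Each element of chunk_ids runs in its own pod (named like
    # <execution-id>-n4-0-<index>); the ResourceDeletedExternally errors are
    # reported per pod when its terminal status is read after deletion.
    return map_task(process_chunk)(chunk_id=chunk_ids)
```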

Expected behavior

The Flyte array plugin should not fail in this situation. The pod's final status should be checked before the pod is cleaned up.

Additional context to reproduce

No response

Screenshots

No response

Are you sure this issue hasn't been raised already?

  • Yes

Have you read the Code of Conduct?

  • Yes
@pablocasares pablocasares added the bug (Something isn't working) and untriaged (This issue has not yet been looked at by the Maintainers) labels on Apr 23, 2024

runllm bot commented Apr 23, 2024

To address the "ResourceDeletedExternally" error in Flyte when using the array plugin, consider enabling finalizers in your Flyte deployment. Finalizers prevent Kubernetes from garbage collecting resources prematurely, allowing Flyte to check the pod's final status before it is cleaned up. This issue often arises when pods are deleted externally due to Kubernetes operations like node draining or resource constraints, which can be mitigated by using finalizers.

For further investigation, ensure that the inject-finalizer configuration is set to true in your Flyte deployment, as discussed in this Flyte discussion. If the issue persists despite having finalizers enabled, it may be necessary to examine the Kubernetes and FlytePropeller logs for any anomalies during the pod lifecycle. Would you like more detailed steps on how to enable finalizers or how to investigate this issue further?
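
As a concrete reference, a minimal sketch of the relevant FlytePropeller k8s plugin configuration; exact placement depends on how your deployment is managed (for example, the flyte-propeller-config ConfigMap or your Helm values):

```yaml
# Sketch of the FlytePropeller k8s plugin configuration with finalizer
# injection enabled.
plugins:
  k8s:
    # Inject a finalizer on task pods so they are not deleted or garbage
    # collected before FlytePropeller has observed their terminal status.
    inject-finalizer: true
```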


@eapolinario eapolinario removed the untriaged (This issue has not yet been looked at by the Maintainers) label on May 2, 2024
@hamersaw
Contributor

hamersaw commented May 9, 2024

IIUC, you already have finalizers enabled and they are being set on the Pods created by the map task, per this conversation. The finalizer is meant to ensure that a Pod is not deleted until FlytePropeller removes the finalizer. If Pods are being cleaned up even though the finalizer exists, that suggests k8s is under very heavy load and is not respecting the finalizers. Do you have a sense of the order of magnitude of concurrent Pod executions when this behavior occurs? That would help explain why this is difficult to reproduce.
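
For context, an injected finalizer appears in the pod metadata roughly as sketched below; the finalizer string is a placeholder, not the exact value FlytePropeller injects (inspect a live pod with kubectl get pod <name> -o yaml to see the real one). Kubernetes should keep the Pod object around until that entry is removed.

```yaml
# Rough shape of a task pod carrying a propeller-managed finalizer.
apiVersion: v1
kind: Pod
metadata:
  name: jtep6yfsp4re7teyxnfs-n4-0-498
  namespace: ingestion-pipeline-production
  finalizers:
    - example.com/flyte-task-finalizer   # placeholder name, for illustration only
```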
