Requests sent to terminating pods #15211
Hi @skonto. I posted testing code here. The app folder has the Knative service, the tester folder sends simultaneous requests, and the Knative operator config is here. To replicate the issue: send a job with 200 requests, wait for the pods to start terminating, then send job-2 and observe 502 Bad Gateway errors in the responses. It does not happen every time. I have also noticed it does not happen if the requests run for a short time (10 or 30 seconds); it occurs on long requests, around 5 minutes. The error below is from the queue-proxy when this happens. We see the same behavior on Google Cloud GKE and on an on-premises Kubernetes cluster.
Below are similar issues I have found. Thanks for your help! Please let me know if you need anything else.
Here is another related issue: #9355
What version of Knative?
1.14.0
Expected Behavior
Knative should serve all requests when they are sent in groups of 200.
Requests should not be routed to terminating pods.
Actual Behavior
Requests are sent to Knative in groups of 200; each request takes about 5 minutes to process, and all pods finish with a 200 return code. When a second group of 200 requests is sent while pods from the first group are terminating, many of the requests return 502 Bad Gateway errors: they are being routed to terminating pods.
Steps to Reproduce the Problem
Send a group of 200 long-running requests, watch for the pods to start terminating, then send a second group. Kourier is the ingress, and the Knative autoscaler is in use.
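The reproduction above can be sketched as a small load-generator that fires a group of simultaneous requests and tallies the status codes, so any 502s from terminating pods show up in the summary. This is a minimal stand-in for the linked tester folder, not its actual code; `SERVICE_URL` and `N_REQUESTS` are hypothetical placeholders.

```python
import concurrent.futures
import urllib.error
import urllib.request
from collections import Counter

SERVICE_URL = "http://app.default.example.com"  # hypothetical Knative route
N_REQUESTS = 200

def send_one(url: str) -> int:
    """Send a single request and return its HTTP status code."""
    try:
        # Generous timeout: the issue reports ~5 minute request durations.
        with urllib.request.urlopen(url, timeout=360) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 502 Bad Gateway lands here

def tally(codes) -> Counter:
    """Summarize status codes, e.g. Counter({200: 180, 502: 20})."""
    return Counter(codes)

def run_job(url: str = SERVICE_URL, n: int = N_REQUESTS) -> Counter:
    """Fire n simultaneous requests and return a status-code tally."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        return tally(pool.map(send_one, [url] * n))

if __name__ == "__main__":
    # Run once, wait for pods to start terminating, then run again:
    # the second tally is where the 502s appear when the bug reproduces.
    print(run_job())
```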