How to rightsize istio deployment #50637
Comments
I'm curious, how many GB are actually used?

This is the resource setting of the ingress gateway spec:

You might want to use an HPA.
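To make the HPA suggestion concrete, here is a minimal sketch of a HorizontalPodAutoscaler targeting the standard `istio-ingressgateway` Deployment in `istio-system`. The names match the default Istio install, but the replica bounds and the 75% memory target are illustrative assumptions (chosen to scale out before reaching the 85% usage mentioned above), not recommended values.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  minReplicas: 2        # assumed floor; tune to your baseline traffic
  maxReplicas: 10       # assumed ceiling; raise if 5 replicas are saturating
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75  # scale out before hitting the 85% seen here
```

Note that memory-based scaling works best when gateway memory actually tracks load; since Envoy memory also grows with configuration size (number of proxies, routes, and clusters), a CPU-based or request-rate-based metric may be a better scaling signal in large meshes.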
(This is used to request new product features, please visit https://github.com/istio/istio/discussions for questions on using Istio)
Describe the feature request
We are running a Kubernetes cluster with 4K+ pods and 2K+ proxies, and we have configured the Istio ingress gateway to scale out to up to 5 pods. Recently we found that the memory usage of the ingress gateway pods went above 85%. This is a signal to me that I need to upsize and further scale out the ingress gateway. That's why I am asking: do you have guidelines or a formula that can help us make a more informed decision on the (CPU/memory) sizing of an Istio deployment?
Describe alternatives you've considered
NA
Affected product area (please put an X in all that apply)
[ ] Ambient
[ ] Docs
[ ] Dual Stack
[ ] Installation
[ ] Networking
[X] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Affected features (please put an X in all that apply)
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
Additional context