
How to rightsize istio deployment #50637

Open
kelvin-ko opened this issue Apr 23, 2024 · 3 comments

Comments

@kelvin-ko

(This is used to request new product features, please visit https://github.com/istio/istio/discussions for questions on using Istio)

Describe the feature request
We are running a Kubernetes cluster with more than 4K pods and 2K proxies, and we have configured the Istio ingress gateway to scale out to up to 5 replicas. Recently the memory usage of the ingress gateway pods went above 85%, which signals to me that I need to upsize and further scale out the gateway. That is why I am asking: do you have guidelines or a formula that can help us make a more informed decision on the CPU/memory sizing of an Istio deployment?
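
For reference, what I mean by sizing is the gateway's resource settings and replica bounds. A minimal sketch of the kind of adjustment in question, via an IstioOperator overlay (the numbers are illustrative assumptions, not what we currently run):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            # Raise requests/limits so utilization is measured against a realistic baseline
            resources:
              requests:
                cpu: 500m
                memory: 512Mi
              limits:
                cpu: "2"
                memory: 2Gi
            # Allow more replicas before hitting the current ceiling of 5
            hpaSpec:
              minReplicas: 3
              maxReplicas: 10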

Describe alternatives you've considered
NA

Affected product area (please put an X in all that apply)

[ ] Ambient
[ ] Docs
[ ] Dual Stack
[ ] Installation
[ ] Networking
[X] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane

Additional context

@wulianglongrd
Member

I'm curious, how many GB are actually used?

@kelvin-ko
Author

This is the resource setting of the ingress gateway:

    spec:
      affinity:
        nodeAffinity: {}
      containers:
      - args:
        - proxy
        - router
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --log_output_level=default:info
        env:
        - name: JWT_POLICY
          value: third-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod-1-19-3-distroless.istio-system.svc:15012
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: ISTIO_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              divisor: "0"
              resource: limits.cpu
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: ISTIO_META_WORKLOAD_NAME
          value: istio-ingressgateway
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: TRUST_DOMAIN
          value: cluster.local
        - name: ISTIO_META_UNPRIVILEGED_POD
          value: "true"
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NEW_RELIC_METADATA_KUBERNETES_CLUSTER_NAME
          value: aks-sea-emm-prd
        - name: NEW_RELIC_METADATA_KUBERNETES_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NEW_RELIC_METADATA_KUBERNETES_NAMESPACE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: NEW_RELIC_METADATA_KUBERNETES_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: docker.io/istio/proxyv2:1.19.3-distroless
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15021
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
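
For actual consumption (as opposed to the 1Gi limit above), a standard metrics-server query along these lines shows per-pod usage; the label selector is an assumption based on the default gateway labels:

    kubectl -n istio-system top pods -l app=istio-ingressgateway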

@j2gg0s
Contributor

j2gg0s commented Apr 28, 2024

You might want to use an HPA.
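
In case it helps, a minimal sketch of what that could look like for the gateway Deployment (autoscaling/v2; the utilization targets and replica bounds are assumptions):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: istio-ingressgateway
      namespace: istio-system
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway
      minReplicas: 3
      maxReplicas: 10
      metrics:
      # Scale on CPU and memory utilization relative to the container requests
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
      - type: Resource
        resource:
          name: memory
          target:
            type: Utilization
            averageUtilization: 75

Note that utilization targets are computed against the container requests, so with the current 128Mi memory request the memory target would trigger almost immediately; the request would need to be raised to something realistic first.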
