My services, routes, and pods in the cluster are up and running, and the Istio gateway has an external IP assigned, yet I cannot reach my services via curl from outside the Kubernetes cluster (from the same host where k8s is running).
I'm fairly sure something is missing in the configuration or the setup is incorrect, but I cannot trace where the problem is. Any help would be appreciated, as right now I don't see any clue in the logs as to where the problem could be.
* processing: http://192.168.9.1/v1/models/mlworkeralpha:predict
* Trying 192.168.9.1:80...
* Connected to 192.168.9.1 (192.168.9.1) port 80
> POST /v1/models/mlworkeralpha:predict HTTP/1.1
> Host: mlworkeralpha-predictor.kserve-dsml5.ml.proxy.mydomain.rnd
> User-Agent: curl/8.2.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 21
>
< HTTP/1.1 503 Service Unavailable
< content-length: 152
< content-type: text/plain
< date: Mon, 26 Feb 2024 12:25:20 GMT
< server: istio-envoy
<
* Connection #0 to host 192.168.9.1 left intact
upstream connect error or disconnect/reset before headers. reset reason: remote connection failure, transport failure reason: delayed connect error: 113
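For reference, a sketch of the request that produced the 503 above (gateway IP and host taken from the outputs in this issue; the JSON payload is a placeholder). Envoy routes on the Host header, so it must match the Knative route's domain exactly. Note that `delayed connect error: 113` is Linux errno EHOSTUNREACH, i.e. the gateway's Envoy resolved an upstream but could not open a TCP connection to it.

```shell
# Hypothetical reproduction of the failing call; values are copied from
# the curl trace above, not verified against any live cluster.
GATEWAY_IP="192.168.9.1"
HOST_HEADER="mlworkeralpha-predictor.kserve-dsml5.ml.proxy.mydomain.rnd"
REQUEST_URL="http://${GATEWAY_IP}/v1/models/mlworkeralpha:predict"
echo "POST ${REQUEST_URL} (Host: ${HOST_HEADER})"

# Uncomment to send the actual request (placeholder payload):
# curl -v -X POST "${REQUEST_URL}" \
#   -H "Host: ${HOST_HEADER}" \
#   -H "Content-Type: application/json" \
#   -d '{"instances": [[1.0]]}'
```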
Working pods/services/routing
NAME READY STATUS RESTARTS AGE
pod/mlworkeralpha-predictor-00001-deployment-d8f69f474-xm4sv 2/2 Running 6 (42m ago) 3d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mlworkeralpha ExternalName knative-local-gateway.istio-system.svc.cluster.local 3h40m
service/mlworkeralpha-predictor ExternalName knative-local-gateway.istio-system.svc.cluster.local 80/TCP 3d
service/mlworkeralpha-predictor-00001 ClusterIP 10.103.209.198 80/TCP,443/TCP 3d
service/mlworkeralpha-predictor-00001-private ClusterIP 10.98.6.87 80/TCP,443/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP 3d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mlworkeralpha-predictor-00001-deployment 1/1 1 1 3d
NAME DESIRED CURRENT READY AGE
replicaset.apps/mlworkeralpha-predictor-00001-deployment-d8f69f474 1 1 1 3d
NAME LATESTCREATED LATESTREADY READY REASON
configuration.serving.knative.dev/mlworkeralpha-predictor mlworkeralpha-predictor-00001 mlworkeralpha-predictor-00001 True
NAME CONFIG NAME K8S SERVICE NAME GENERATION READY REASON ACTUAL REPLICAS DESIRED REPLICAS
revision.serving.knative.dev/mlworkeralpha-predictor-00001 mlworkeralpha-predictor 1 True 1
kubectl --namespace istio-system get service istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.105.112.176 192.168.9.1 15021:30104/TCP,80:30686/TCP,443:31206/TCP 3d1h
kubectl get configmap/config-domain --namespace knative-serving -oyaml
apiVersion: v1
data:
  ml.proxy.mydomain.rnd: ""
  svc.cluster.local: |
    selector:
      app: secret
kind: ConfigMap
metadata:
  annotations:
    knative.dev/example-checksum: 26c09de5
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"_example":"################################\n# #\n# EXAMPLE CONFIGURATION #\n# #\n################################\n\n# This block is not actually functional configuration,\n# but serves to illustrate the available configuration\n# options and document them in a way that is accessible\n# to users that `kubectl edit` this config map.\n#\n# These sample configuration options may be copied out of\n# this example block and unindented to be in the data block\n# to actually change the configuration.\n\n# Default value for domain.\n# Routes having the cluster domain suffix (by default 'svc.cluster.local')\n# will not be exposed through Ingress. You can define your own label\n# selector to assign that domain suffix to your Route here, or you can set\n# the label\n# \"networking.knative.dev/visibility=cluster-local\"\n# to achieve the same effect. This shows how to make routes having\n# the label app=secret only exposed to the local cluster.\nsvc.cluster.local: |\n selector:\n app: secret\n\n# These are example settings of domain.\n# example.com will be used for all routes, but it is the least-specific rule so it\n# will only be used if no other domain matches.\nexample.com: |\n\n# example.org will be used for routes having app=nonprofit.\nexample.org: |\n selector:\n app: nonprofit\n"},"kind":"ConfigMap","metadata":{"annotations":{"knative.dev/example-checksum":"26c09de5"},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/name":"knative-serving","app.kubernetes.io/version":"1.13.1"},"name":"config-domain","namespace":"knative-serving"}}
  creationTimestamp: "2024-02-23T10:57:05Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: knative-serving
    app.kubernetes.io/version: 1.13.1
  name: config-domain
  namespace: knative-serving
  resourceVersion: "5449487"
  uid: 4298243d-a367-43d8-8097-d669d98d7ad7
config-istio: should the gateway/ingress setup be under _example?
kubectl get cm config-istio -n knative-serving -oyaml
apiVersion: v1
data:
  _example: |
    gateway.knative-serving.knative-ingress-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
    local-gateway.knative-serving.knative-local-gateway: "knative-local-gateway.istio-system.svc.cluster.local"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"_example":"################################\n# #\n# EXAMPLE CONFIGURATION #\n# #\n################################\n\n# This block is not actually functional configuration,\n# but serves to illustrate the available configuration\n# options and document them in a way that is accessible\n# to users that `kubectl edit` this config map.\n#\n# These sample configuration options may be copied out of\n# this example block and unindented to be in the data block\n# to actually change the configuration.\n\n# A gateway and Istio service to serve external traffic.\n# The configuration format should be\n# `gateway.{{gateway_namespace}}.{{gateway_name}}: \"{{ingress_name}}.{{ingress_namespace}}.svc.cluster.local\"`.\n# The {{gateway_namespace}} is optional; when it is omitted, the system will search for\n# the gateway in the serving system namespace `knative-serving`\ngateway.knative-serving.knative-ingress-gateway: \"istio-ingressgateway.istio-system.svc.cluster.local\"\n\n# A cluster local gateway to allow pods outside of the mesh to access\n# Services and Routes not exposing through an ingress. If the users\n# do have a service mesh setup, this isn't required and can be removed.\n#\n# An example use case is when users want to use Istio without any\n# sidecar injection (like Knative's istio-ci-no-mesh.yaml). Since every pod\n# is outside of the service mesh in that case, a cluster-local service\n# will need to be exposed to a cluster-local gateway to be accessible.\n# The configuration format should be `local-gateway.{{local_gateway_namespace}}.\n# {{local_gateway_name}}: \"{{cluster_local_gateway_name}}.\n# {{cluster_local_gateway_namespace}}.svc.cluster.local\"`. The\n# {{local_gateway_namespace}} is optional; when it is omitted, the system\n# will search for the local gateway in the serving system namespace\n# `knative-serving`\nlocal-gateway.knative-serving.knative-local-gateway: \"knative-local-gateway.istio-system.svc.cluster.local\"\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"net-istio","app.kubernetes.io/name":"knative-serving","app.kubernetes.io/version":"1.13.0","networking.knative.dev/ingress-provider":"istio"},"name":"config-istio","namespace":"knative-serving"}}
  creationTimestamp: "2024-02-23T11:01:44Z"
  labels:
    app.kubernetes.io/component: net-istio
    app.kubernetes.io/name: knative-serving
    app.kubernetes.io/version: 1.13.0
    networking.knative.dev/ingress-provider: istio
  name: config-istio
  namespace: knative-serving
  resourceVersion: "5457598"
  uid: 6a39eea4-2189-4b84-a529-3f1937afd555
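Regarding the question above: keys under _example are documentation only and are ignored by Knative; the example block's own comment says the sample options "may be copied out of this example block and unindented to be in the data block to actually change the configuration." A sketch of what config-istio would look like with the two gateway mappings promoted to real data keys (same gateway names as shown in this issue):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  gateway.knative-serving.knative-ingress-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
  local-gateway.knative-serving.knative-local-gateway: "knative-local-gateway.istio-system.svc.cluster.local"
```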
Virtual Service
kubectl get vs mlworkeralpha -n kserve-dsml5 -oyaml
Istio ingress gateway logs
2024-02-26T13:15:40.735235Z info FLAG: --concurrency="0"
2024-02-26T13:15:40.735277Z info FLAG: --domain="istio-system.svc.cluster.local"
2024-02-26T13:15:40.735288Z info FLAG: --help="false"
2024-02-26T13:15:40.735296Z info FLAG: --log_as_json="false"
2024-02-26T13:15:40.735308Z info FLAG: --log_caller=""
2024-02-26T13:15:40.735316Z info FLAG: --log_output_level="default:info"
2024-02-26T13:15:40.735325Z info FLAG: --log_rotate=""
2024-02-26T13:15:40.735333Z info FLAG: --log_rotate_max_age="30"
2024-02-26T13:15:40.735342Z info FLAG: --log_rotate_max_backups="1000"
2024-02-26T13:15:40.735352Z info FLAG: --log_rotate_max_size="104857600"
2024-02-26T13:15:40.735359Z info FLAG: --log_stacktrace_level="default:none"
2024-02-26T13:15:40.735379Z info FLAG: --log_target="[stdout]"
2024-02-26T13:15:40.735390Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2024-02-26T13:15:40.735398Z info FLAG: --outlierLogPath=""
2024-02-26T13:15:40.735407Z info FLAG: --profiling="true"
2024-02-26T13:15:40.735414Z info FLAG: --proxyComponentLogLevel="misc:error"
2024-02-26T13:15:40.735422Z info FLAG: --proxyLogLevel="warning"
2024-02-26T13:15:40.735429Z info FLAG: --serviceCluster="istio-proxy"
2024-02-26T13:15:40.735454Z info FLAG: --stsPort="0"
2024-02-26T13:15:40.735464Z info FLAG: --templateFile=""
2024-02-26T13:15:40.735474Z info FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2024-02-26T13:15:40.735484Z info FLAG: --vklog="0"
2024-02-26T13:15:40.735494Z info Version 1.20.2-5f5d657c72d30a97cae97938de3a6831583e9f15-Clean
2024-02-26T13:15:40.768548Z info Maximum file descriptors (ulimit -n): 1073741816
2024-02-26T13:15:40.768847Z info Proxy role ips=[10.10.246.147] type=router id=istio-ingressgateway-cdf98c974-vkqwv.istio-system domain=istio-system.svc.cluster.local
2024-02-26T13:15:40.768994Z info Apply mesh config from file defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  proxyMetadata: {}
  terminationDrainDuration: 20s
  tracing:
    zipkin:
      address: zipkin.istio-system:9411
defaultProviders:
  metrics:
  - prometheus
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local
2024-02-26T13:15:40.771289Z info cpu limit detected as 3, setting concurrency
2024-02-26T13:15:40.771611Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 3
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 20s
tracing:
  zipkin:
    address: zipkin.istio-system:9411
2024-02-26T13:15:40.771629Z info JWT policy is third-party-jwt
2024-02-26T13:15:40.771636Z info using credential fetcher of JWT type in cluster.local trust domain
2024-02-26T13:15:40.773572Z info Workload SDS socket not found. Starting Istio SDS Server
2024-02-26T13:15:40.773606Z info CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2024-02-26T13:15:40.773615Z info Opening status port 15020
2024-02-26T13:15:40.773668Z info Using CA istiod.istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2024-02-26T13:15:40.829401Z info ads All caches have been synced up in 94.663331ms, marking server ready
2024-02-26T13:15:40.829806Z info xdsproxy Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2024-02-26T13:15:40.829828Z info sds Starting SDS grpc server
2024-02-26T13:15:40.830429Z info starting Http service at 127.0.0.1:15004
2024-02-26T13:15:40.832617Z info Pilot SAN: [istiod.istio-system.svc]
2024-02-26T13:15:40.835137Z info Starting proxy agent
2024-02-26T13:15:40.835182Z info starting
2024-02-26T13:15:40.835292Z info Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields --log-format %Y-%m-%dT%T.%fZ %l envoy %n %g:%# %v thread=%t -l warning --component-log-level misc:error --concurrency 3]
2024-02-26T13:15:45.786780Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
2024-02-26T13:15:45.800231Z info cache generated new workload certificate latency=4.970434213s ttl=23h59m59.199779254s
2024-02-26T13:15:45.800335Z info cache Root cert has changed, start rotating root cert
2024-02-26T13:15:45.800406Z info ads XDS: Incremental Pushing ConnectedEndpoints:0 Version:
2024-02-26T13:15:45.800580Z info cache returned workload trust anchor from cache ttl=23h59m59.199428991s
2024-02-26T13:16:04.066630Z info ads ADS: new connection for node:istio-ingressgateway-cdf98c974-vkqwv.istio-system-1
2024-02-26T13:16:04.066764Z info cache returned workload certificate from cache ttl=23h59m40.933244722s
2024-02-26T13:16:04.066931Z info ads ADS: new connection for node:istio-ingressgateway-cdf98c974-vkqwv.istio-system-2
2024-02-26T13:16:04.067142Z info cache returned workload trust anchor from cache ttl=23h59m40.932872388s
2024-02-26T13:16:04.067423Z info ads SDS: PUSH request for node:istio-ingressgateway-cdf98c974-vkqwv.istio-system resources:1 size:1.1kB resource:ROOTCA
2024-02-26T13:16:04.067430Z info ads SDS: PUSH request for node:istio-ingressgateway-cdf98c974-vkqwv.istio-system resources:1 size:4.0kB resource:default
2024-02-26T13:16:04.191913Z info Readiness succeeded in 23.469068727s
2024-02-26T13:16:04.192626Z info Envoy proxy is ready
2024-02-26T13:46:03.726824Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
Thanks in advance for pointing me in the right direction. If any additional info, data, or logs are needed, let me know.
Do you have istio injection enabled in both the namespace where knative-serving is running and the namespace where your service is deployed? The Ready 1/1 on the Knative pods and 2/2 on the mlworkeralpha makes me think there is no Envoy sidecar (unless you are using ambient?)
/area networking
Hi, I have an on-premises Kubernetes setup.
Kubernetes: 1.29.1 (single node; the control plane is also a worker node)
Knative version: 1.13.1
Istio version: 1.20.2 (I mistakenly wrote 1.13.0 earlier; that was based on kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.13.0/net-istio.yaml)
I followed the tutorial here: https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/
I did not set up mTLS. I added the Serving HPA according to the guide.
Working pods/services/routing
NAME URL READY REASON
route.serving.knative.dev/mlworkeralpha-predictor http://mlworkeralpha-predictor.kserve-dsml5.ml.proxy.mydomain.rnd True
NAME URL LATESTCREATED LATESTREADY READY REASON
service.serving.knative.dev/mlworkeralpha-predictor http://mlworkeralpha-predictor.kserve-dsml5.ml.proxy.mydomain.rnd mlworkeralpha-predictor-00001 mlworkeralpha-predictor-00001 True
Knative pods running
NAME READY STATUS RESTARTS AGE
activator-58db57894b-jhxf4 1/1 Running 4 (57m ago) 3d1h
autoscaler-76f95fff78-qtptw 1/1 Running 3 (57m ago) 3d1h
autoscaler-hpa-85696784dd-lpqjj 1/1 Running 3 (57m ago) 3d1h
controller-7dd875844b-4btf6 1/1 Running 3 (57m ago) 3d1h
net-istio-controller-5576fc66d-g78xg 1/1 Running 3 (57m ago) 3d1h
net-istio-webhook-9965c55c5-tvblf 1/1 Running 3 (57m ago) 3d1h
webhook-d8674645d-rvppt 1/1 Running 3 (57m ago) 3d1h