
Some questions about Invoker. #5425

Open
QWQyyy opened this issue Jul 1, 2023 · 1 comment

Comments


QWQyyy commented Jul 1, 2023

The Invoker is the component that bridges an incoming request to the actual container that runs the OpenWhisk function. Does it contain an internal queue system like Kafka?
I ask because I once hit an error where requests that could not get a free container for a long time were discarded, since all function containers were running at full capacity.
So, in the K8s-helm deployment mode, can we configure the Invoker's container pool, the queue length for a specific function's containers, and how long a request may wait in the queue, through options in values.yaml?


Dakzh10 commented Jul 21, 2023

The Invoker is the component responsible for receiving activation requests and dispatching them to the appropriate function containers. It does not embed a queue system itself; in OpenWhisk, Kafka sits between the Controller and the Invokers and buffers activation messages before an Invoker picks them up.
The error you encountered is most likely because the function containers were running at full capacity. This can happen when too many requests are sent to the OpenWhisk API, or when the function containers are not scaled up to meet the demand.
In the K8s-helm deployment mode, you can set the number of function containers by using the replicas property in the openwhisk-invoker chart, and you can cap the number of requests that can be queued by using the queue-size property. A rough sketch of such an override is shown below.
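
As a rough illustration, these knobs would live in the Helm chart's values.yaml. The key names below (invoker.replicaCount, invoker.containerFactory.impl, whisk.containerPool.userMemory) are assumptions based on common openwhisk-deploy-kube layouts, not verified against any particular chart version, so treat this as a sketch and check your chart's values.yaml for the real keys.

```yaml
# Sketch of a Helm values override (key names are assumptions; verify
# against the values.yaml of the openwhisk-deploy-kube chart you deploy).
invoker:
  # Number of Invoker pods to run; more Invokers means more room for
  # concurrent action containers.
  replicaCount: 2
  containerFactory:
    impl: "kubernetes"

whisk:
  containerPool:
    # Total memory each Invoker's container pool may hand out; this bounds
    # how many action containers (e.g. 256 MB each) can run at once.
    userMemory: "4096m"
```

You would then apply the override with a standard Helm upgrade, e.g. `helm upgrade <release> <chart> -n openwhisk -f my-values.yaml` (release and chart names here are placeholders).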
If you are still seeing errors, you may need to increase the number of function containers or the queue size. You can also scale the Invoker pods automatically with the Kubernetes Horizontal Pod Autoscaler (HPA), as in the sketch below.
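
A minimal HPA manifest targeting the Invoker workload might look like the following. The target name owdev-invoker is an assumption (it depends on your Helm release name), the Invoker may be deployed as a StatefulSet rather than a Deployment depending on chart settings, and whether CPU utilization is a good scaling signal for Invokers is deployment-specific.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: invoker-hpa
  namespace: openwhisk
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment        # may be StatefulSet in some chart configurations
    name: owdev-invoker     # assumed name; depends on your Helm release
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Apply it with `kubectl apply -f invoker-hpa.yaml` and watch scaling decisions with `kubectl describe hpa invoker-hpa -n openwhisk`.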
