Pod priority enforcement without killing lower priority pods #10080
-
Is your feature request related to a problem? Please describe.
I have some resource-hungry, lower-priority processes that need to complete eventually, but they aren't time-sensitive and are not sensitive to being swapped to disk.
Describe the solution you'd like
Describe alternatives you've considered
This seems particularly relevant to k3s, where compute is often relatively fixed and cannot scale out, and we are trying to allocate resources on a single node as well as possible.
-
This isn't really a K3s question; the constructs you're asking about are all core Kubernetes. You might read up on how Kubernetes priority classes and QoS work. They are about scheduling and preempting (evicting) pods based on resource requests; they have nothing to do with niceness or the actual kernel-level scheduling priority of processes once the pods are running. While pods are running, they are given CPU time slices in proportion to their CPU resource requests.
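For illustration, a priority class is a cluster-wide object referenced by name from a pod spec, and CPU requests set the pod's scheduling weight. A minimal sketch (all names and values here are placeholders, not anything from this thread):

```yaml
# Low-priority class for batch work. preemptionPolicy: Never means pods in
# this class will not evict other pods in order to get scheduled, but they
# can still be preempted by higher-priority pods.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low
value: 1000
preemptionPolicy: Never
globalDefault: false
description: "Low-priority batch workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  priorityClassName: batch-low
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo working"]   # placeholder workload
    resources:
      requests:
        cpu: "250m"      # CPU time slices are weighted by this request
        memory: "256Mi"
      limits:
        memory: "512Mi"  # requests below limits => Burstable QoS class
```

Note that the priority value affects only scheduling and preemption order; once the pod is running, its CPU share comes entirely from the `requests` field.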
I don't think you're going to get what you want, since Kubernetes doesn't use "niceness" at all, and it sounds like you want to dynamically reduce the CPU resources allocated to a pod in response to other workloads scheduled to the node. Perhaps in-place resource resize gets close, but that is still in alpha.
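For reference, the in-place resize mentioned above (the `InPlacePodVerticalScaling` feature gate, alpha at the time of writing and subject to change) lets you lower a running pod's CPU request without restarting it, via the `resize` subresource. A hedged sketch, with the pod and container names carried over from a hypothetical example rather than anything in this thread:

```shell
# Requires a cluster with the InPlacePodVerticalScaling feature gate enabled.
kubectl patch pod batch-job --subresource resize --type merge \
  -p '{"spec":{"containers":[{"name":"worker","resources":{"requests":{"cpu":"100m"}}}]}}'
```

Even with this, something external (an operator or controller) would still have to decide when to shrink the pod; Kubernetes itself won't do it in response to node pressure.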
-
Thanks @brandond. I'm aware of the current functionality; that's why I was asking about this as a possibility. I think what you're saying is that k3s isn't in the business of adding custom priorities and the necessary infrastructure underneath to support nice-ing, which is fair enough. As I was saying, I thought this might be quite interesting for k3s, as k3s is probably the only k8s distribution that might well be run on a single node.
That sort of custom functionality is pretty far beyond what we have the capacity to develop and maintain. We generally do the bare minimum necessary to run Kubernetes in a single process, and are actively trying to reduce the number of patches that we carry.