Preoom 🧐


Retrieves & observes Kubernetes Pod resource (CPU, memory) utilisation.

Use case

If a node experiences a system OOM (out of memory) event before the kubelet is able to reclaim memory, oom_killer identifies and kills containers with the lowest quality of service that are consuming the largest amount of memory relative to their scheduling request.

These Pods are terminated and the termination reason is "OOMKilled", e.g.

Last State:   Terminated
Reason:       OOMKilled
Exit Code:    137

The problem is that Kubernetes performs OOM termination using SIGKILL (exit code 137, i.e. 128 + 9), so the Pod is given no time for a graceful shutdown.

Preoom lets you set up a regular check of memory usage and gracefully shut down the Kubernetes Pod before OOM termination occurs (see Using Preoom with Lightship to gracefully shut down a service before OOM termination).

Requirements

Kubernetes Metrics Server must be available in the cluster and the metrics.k8s.io API must be accessible by the anonymous service account.
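
preoom's isKubernetesCredentialsPresent() (used in the examples below) reports whether the process is running with in-cluster credentials. One way such a check can work — a sketch only; hasKubernetesCredentials is a hypothetical helper, and preoom's actual detection logic may differ — is to look for the environment variable and service account files that Kubernetes provides to every Pod:

```javascript
// Sketch: detect in-cluster Kubernetes credentials. A Pod running in a cluster
// has KUBERNETES_SERVICE_HOST set, and the service account token is mounted at
// a well-known path. hasKubernetesCredentials is an illustrative helper; in
// practice, use preoom's isKubernetesCredentialsPresent().
import fs from 'node:fs';

const SERVICE_ACCOUNT_PATH = '/var/run/secrets/kubernetes.io/serviceaccount';

const hasKubernetesCredentials = () => {
  return Boolean(process.env.KUBERNETES_SERVICE_HOST) &&
    fs.existsSync(`${SERVICE_ACCOUNT_PATH}/token`);
};

console.log(hasKubernetesCredentials());
```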

Usage

import {
  createResourceObserver,
  isKubernetesCredentialsPresent
} from 'preoom';

const main = async () => {
  const resourceObserver = createResourceObserver();

  if (isKubernetesCredentialsPresent()) {
    console.log(await resourceObserver.getPodResourceSpecification());

    // {
    //   containers: [
    //     {
    //       name: 'authentication-proxy',
    //       resources: {
    //         limits: {
    //           cpu: 500,
    //           memory: 536870912
    //         },
    //         requests: {
    //           cpu: 250,
    //           memory: 268435456
    //         }
    //       }
    //     },
    //     {
    //       name: 'monitoring-proxy',
    //       resources: {
    //         limits: {
    //           cpu: 1000,
    //           memory: 536870912
    //         },
    //         requests: {
    //           cpu: 500,
    //           memory: 268435456
    //         }
    //       }
    //     },
    //     {
    //       name: 'showtime-api',
    //       resources: {
    //         limits: {
    //           cpu: 2000,
    //           memory: 2147483648
    //         },
    //         requests: {
    //           cpu: 1000,
    //           memory: 1073741824
    //         }
    //       }
    //     }
    //   ],
    //   name: 'showtime-api-56568dd94-tz8df'
    // }

    console.log(await resourceObserver.getPodResourceUsage());

    // {
    //   containers: [
    //     {
    //       name: 'authentication-proxy',
    //       usage: {
    //         cpu: 0,
    //         memory: 101044224
    //       }
    //     },
    //     {
    //       name: 'monitoring-proxy',
    //       usage: {
    //         cpu: 1000,
    //         memory: 42151936
    //       }
    //     },
    //     {
    //       name: 'showtime-api',
    //       usage: {
    //         cpu: 0,
    //         memory: 1349738496
    //       }
    //     }
    //   ],
    //   name: 'showtime-api-56568dd94-tz8df'
    // }
  }
};

main();

Using Preoom with Lightship to gracefully shut down a service before OOM termination

Preoom lets you set up a regular check of memory usage and gracefully shut down the Kubernetes Pod before OOM termination occurs. Graceful termination can be implemented using Lightship, e.g.

import {
  createLightship
} from 'lightship';
import {
  createResourceObserver,
  isKubernetesCredentialsPresent
} from 'preoom';

const MAXIMUM_MEMORY_USAGE = 0.95;

const main = async () => {
  const lightship = createLightship();

  if (isKubernetesCredentialsPresent()) {
    const resourceObserver = createResourceObserver();

    resourceObserver.observe((error, podResourceSpecification, podResourceUsage) => {
      if (error) {
        // Handle error.
      } else {
        for (const containerResourceSpecification of podResourceSpecification.containers) {
          if (containerResourceSpecification.resources.limits && containerResourceSpecification.resources.limits.memory) {
            const containerResourceUsage = podResourceUsage.containers.find((container) => {
              return container.name === containerResourceSpecification.name;
            });

            if (!containerResourceUsage) {
              throw new Error('Unexpected state.');
            }

            if (containerResourceUsage.usage.memory / containerResourceSpecification.resources.limits.memory > MAXIMUM_MEMORY_USAGE) {
              lightship.shutdown();
            }
          }
        }
      }
    }, 5 * 1000);
  }

  lightship.signalReady();
};

main();
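
The threshold check inside the observer callback boils down to a simple ratio. Isolated as a pure function (isMemoryUsageCritical is an illustrative helper, not part of preoom's or Lightship's API):

```javascript
// Illustrative pure helper: returns true when a container's memory usage
// exceeds the given fraction of its memory limit. Both values are in bytes,
// as reported by preoom.
const isMemoryUsageCritical = (usedBytes, limitBytes, maximumFraction) => {
  return usedBytes / limitBytes > maximumFraction;
};

// 1000000000 / 1073741824 ≈ 0.93, below a 0.95 threshold.
console.log(isMemoryUsageCritical(1000000000, 1073741824, 0.95)); // false

// 1050000000 / 1073741824 ≈ 0.98, above the threshold.
console.log(isMemoryUsageCritical(1050000000, 1073741824, 0.95)); // true
```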

Units

  • CPUs are reported as milliCPU units (1000 = 1 CPU).
  • Memory is reported in bytes.
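
For display purposes these units convert with simple arithmetic (a sketch; milliCpuToCpus and bytesToMebibytes are made-up names for illustration, not preoom API):

```javascript
// Sketch: convert preoom's reported units into human-friendly values.
const milliCpuToCpus = (milliCpu) => milliCpu / 1000;
const bytesToMebibytes = (bytes) => bytes / (1024 * 1024);

console.log(milliCpuToCpus(250));         // 0.25 (the 250 milliCPU request above)
console.log(bytesToMebibytes(536870912)); // 512 (the 512 MiB limit above)
```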

Related projects

  • Iapetus – Prometheus metrics server.
  • Lightship – Abstracts readiness/liveness checks and graceful shutdown of Node.js services running in Kubernetes.
