As more enterprises adopt Kubernetes as their container orchestration platform of choice, the topic of Kubernetes optimization comes up more and more. Given the complexity of Kubernetes resource management, manual approaches to optimization can require days or weeks of tuning, tweaking, and troubleshooting.

This blog provides an overview of Kubernetes optimization: what it is, why it matters, and how you can achieve it.

What is Kubernetes Optimization?

To start, let’s define what we mean by Kubernetes optimization. Merriam-Webster defines ‘optimization’ as:

an act, process, or methodology of making something (such as a design, system, or decision) as fully perfect, functional, or effective as possible

If we apply that to Kubernetes, it means we want to make our cloud native/Kubernetes environment and the applications that run in that environment as perfect, functional, or effective as possible. The term “perfect, functional, or effective” may mean different things to different organizations, but leaving aside an application’s functional requirements, Kubernetes optimization consists of two key components:

  • An application’s performance and reliability. In other words, what is the response time of the application and how much downtime does it have?
  • The cost of running the application, which is a direct result of compute resources utilized in running the app, for example CPU, memory, and storage.

Simply put, Kubernetes optimization means that your application meets or exceeds business requirements for performance and reliability (as defined by SLAs and SLOs) at the lowest possible resource utilization and cost.
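
To make the cost half of that definition concrete, here is a minimal Python sketch (all numbers are hypothetical) showing how over-provisioning translates directly into idle, paid-for capacity:

```python
# Hypothetical requested vs. actually used resources for a single pod.
# Over-provisioning is capacity the cluster reserves -- and you pay for --
# even though the application never touches it.

def waste_fraction(requested: float, used: float) -> float:
    """Fraction of a requested resource that sits idle."""
    return max(0.0, (requested - used) / requested)

# A pod requesting 1000m CPU but averaging 250m leaves 75% of its
# reservation idle; the same pattern is common for memory.
cpu_waste = waste_fraction(requested=1000, used=250)   # millicores
mem_waste = waste_fraction(requested=2048, used=512)   # MiB

print(f"CPU idle: {cpu_waste:.0%}, memory idle: {mem_waste:.0%}")
```

Multiply that idle fraction across hundreds of workloads and the cost impact becomes obvious.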

Why does Kubernetes Optimization matter?

Kubernetes adoption continues to accelerate: recent data from Red Hat shows 70% of organizations using the popular container orchestration platform, and almost a third planning to significantly increase their use of containers in the next twelve months. Similarly, in its 2022 Kubernetes Adoption Survey, Portworx found that 87% of organizations expect Kubernetes to play a larger role in their infrastructure management over the next two to three years.

The growth in Kubernetes adoption is driven by many factors. The Portworx survey found the top three benefits expected from Kubernetes were:

  1. Faster time to deploy new apps (68% ranked in their top 3)
  2. Reduced IT/staffing costs (66%)
  3. Easier to update apps (63%)

These benefits all bring tremendous business value, but they’re not easy to achieve. In Canonical’s 2022 Kubernetes and cloud native operations report, respondents ranked “How can we optimize resource utilization?” as the second most important question ops people should care about, behind only “How secure is that thing?” Without optimization, it’s impossible to realize the promised value of Kubernetes.

Benefits of Kubernetes Optimization

It’s clear that Kubernetes optimization is a priority for enterprises, for several reasons:

  • Cost savings – With cloud costs making up an increasing portion of the overall cost of revenue (sometimes approaching 75-80%), and an estimated 47% of cloud spend going to waste, Kubernetes optimization is imperative, and the opportunity for cost savings is substantial.
  • User satisfaction – Kubernetes optimization means consistently meeting or exceeding SLAs and SLOs. The result? No more unacceptable response times or frustrated users, and fewer abandoned site visits.
  • Efficient resource utilization – While cost savings are important, resource utilization is another benefit that should be considered separately. Especially for organizations running on-premises in private clouds, compute resources can be reallocated for other uses, like additional testing environments.
  • Environmental responsibility – Kubernetes efficiency means fewer resources used, with the result being reduced carbon emissions from data centers. And while environmental responsibility is worthy on its own merits, it’s also valued by consumers, with 80% of consumers considering sustainability when making purchase decisions.

How to achieve Kubernetes Optimization

Given all the benefits, why is Kubernetes optimization still a top unsolved issue for so many organizations? It’s because of the perception that optimization is time-consuming and difficult. As one developer commented, “Who has time to optimize? The name of the game is to slap as many features together as possible as fast as possible and ship it!”

It’s true that most organizations consider time spent on anything other than developing new and differentiating capabilities to be time wasted. And given the complexity of Kubernetes resource management, manually tuning, tweaking, and troubleshooting an environment can eat up days or weeks. Nothing could be more frustrating for an engineer who just wants to work on cool technology and deliver business value.

Fortunately, new solutions like StormForge have applied machine learning and automation to make Kubernetes optimization virtually effortless. StormForge includes two solutions for a holistic approach to Kubernetes optimization.

Optimize Live: Observation-based Optimization

StormForge Optimize Live works by applying machine learning to analyze the observability data you’re already collecting using tools like Prometheus or Datadog. Optimize Live automatically right-sizes your pod CPU and memory (vertical autoscaling) while also setting the optimal target utilization for the horizontal pod autoscaler. This allows you to scale efficiently, minimizing waste without sacrificing performance or reliability. Optimize Live is simple to configure and provides fast time to value.

StormForge Optimize Live UI showing recommendations for demo-hpa/frontend
StormForge Optimize Live analyzes observability data to optimize Kubernetes applications in production.
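
To give a feel for the idea behind observation-based right-sizing, here is a toy heuristic (this is not StormForge’s machine learning, just a simplified illustration) that sizes a CPU request from observed usage samples rather than a hand-picked guess:

```python
# Toy right-sizing heuristic: set the request near a high percentile of
# observed demand, plus headroom. Real optimizers are far more
# sophisticated; this only illustrates the principle.

def recommend_request(usage_samples, percentile=0.95, headroom=1.15):
    """Recommend a resource request from observed usage samples."""
    samples = sorted(usage_samples)
    idx = min(len(samples) - 1, int(percentile * len(samples)))
    return samples[idx] * headroom

# Hypothetical CPU usage samples (millicores), e.g. scraped from Prometheus.
usage = [180, 210, 190, 250, 220, 300, 240, 260, 230, 205]
print(round(recommend_request(usage)), "millicores")
```

The key difference from a manual guess: as usage drifts, the recommendation drifts with it, which is why this works continuously in production.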

Optimize Pro: Experimentation-based Optimization

StormForge Optimize Pro takes an experimentation-based approach to Kubernetes optimization. It works in non-prod environments, using load testing to simulate any scenario. The patent-pending StormForge machine learning algorithm then recommends the optimal way to configure your app for deployment. Optimize Pro lets you explore, analyze, and better understand the behavior of your application, which can help identify architectural improvements. For example, you might find that you’re better off scaling out to several smaller replicas, or that fewer, larger replicas serve you better.

StormForge Optimize Pro UI showing experiment summary for pet-clinic-startup-logs
StormForge Optimize Pro uses experimentation in a non-prod environment to ensure apps are configured optimally prior to deployment.
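
As a hypothetical illustration of that replica-count tradeoff, the two strategies below have identical raw capacity and identical cost on paper; only experimentation can reveal which one actually performs better and fails more gracefully:

```python
# Two replica strategies with the same total capacity (numbers and the
# per-CPU price are hypothetical). On a spreadsheet they look identical;
# under load they rarely behave identically.

def hourly_cost(replicas: int, cpu_per_replica: float,
                price_per_cpu_hour: float = 0.04) -> float:
    """Naive cost model: total CPU reserved times a flat hourly price."""
    return replicas * cpu_per_replica * price_per_cpu_hour

scale_out = hourly_cost(replicas=8, cpu_per_replica=0.5)  # many small pods
scale_up = hourly_cost(replicas=2, cpu_per_replica=2.0)   # few large pods

# Same capacity, same cost on paper -- but latency distribution, failure
# isolation, and scheduling behavior differ, which is what load-test
# experiments are designed to measure.
print(f"scale-out: ${scale_out:.2f}/hr, scale-up: ${scale_up:.2f}/hr")
```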


With tools like StormForge, Kubernetes optimization is no longer an unattainable vision. In fact, given the value that can be gained with minimal effort, it’s now a must-have. StormForge can help you unlock the benefits of Kubernetes, reducing costs, minimizing resource waste, and ensuring application performance, all while freeing up your software engineers to innovate.

To see how StormForge can help in your environment, request a demo today.


Kubernetes application optimization is important because it allows applications to take advantage of the cloud-native architecture of Kubernetes to realize improved performance, efficiency, scalability, and availability. By optimizing applications for Kubernetes, organizations can reduce costs, improve user experience, and ensure their applications are always available and functioning properly. Kubernetes application optimization also helps to reduce the time and effort needed to deploy and manage applications.

There are two ways to optimize a Kubernetes environment:

  1. Handle the task manually. This involves a human manually changing settings or parameters, assessing how these “tweaks” have impacted results, and then employing a trial-and-error process until one set of results is acceptable and desirable. Clearly, this trial-and-error approach is time-consuming, error-prone, and ultimately ineffective for optimizing a Kubernetes environment.
  2. The alternative is automated, software-defined optimization of resource management and application performance, driven by artificial intelligence and machine learning. Given the huge number of variables involved with cloud-based applications, and the complexity of testing, adjusting, and re-testing millions of combinations of variables, the automated approach is far superior to any manual effort.
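
A rough back-of-the-envelope sketch (parameter names and value counts are hypothetical) shows why the manual approach breaks down: even a handful of knobs with a few candidate values each multiplies into a search space no human can explore by trial and error:

```python
# How configuration options compound. Each entry is a tunable parameter
# and a hypothetical count of candidate values worth trying for it.
from math import prod

knobs = {
    "cpu_request": 8,
    "memory_request": 8,
    "cpu_limit": 8,
    "memory_limit": 8,
    "replicas": 6,
    "jvm_heap_size": 6,
}

combinations = prod(knobs.values())
print(f"{combinations:,} possible configurations")  # far too many to test by hand
```

Six knobs already yield well over a hundred thousand combinations; real applications have many more, which is exactly the kind of search space machine learning handles well.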

Container optimization is the process of finding the set of configuration options that will result in application performance that meets or exceeds SLAs at the lowest possible cost. Configuration settings include CPU and memory requests and limits, replicas, and application-specific settings such as JVM heap size and garbage collection. This can be accomplished by tuning the environment in which the containers run, as well as the applications themselves.
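
As a sketch of the knobs just named (all values hypothetical), the container-level settings correspond to the resources stanza of a pod spec, and any optimizer must respect the basic invariant that requests do not exceed limits:

```python
# Container-level resource settings, expressed as the fields they map to
# in a Kubernetes pod spec. The specific values are hypothetical.

resources = {
    "requests": {"cpu": "500m", "memory": "512Mi"},  # what the scheduler reserves
    "limits": {"cpu": "1", "memory": "1Gi"},         # hard caps at runtime
}

def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity ('500m' or '1') to cores."""
    if quantity.endswith("m"):
        return float(quantity[:-1]) / 1000
    return float(quantity)

# Sanity check: a request larger than its limit is an invalid configuration.
assert parse_cpu(resources["requests"]["cpu"]) <= parse_cpu(resources["limits"]["cpu"])
print("requests <= limits: OK")
```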

Kubernetes clusters include one or more nodes that run containerized applications. Within this set of components, there are several opportunities for improvement in terms of performance and efficiency, including:

  • Resource optimization at the container level, including memory and CPU requests and limits, and replicas.
  • Resource and configuration settings for the applications inside the container, including worker process/thread counts, garbage collection settings, memory process allocation, and cache settings.
  • Resource settings and constraints at the node level, including CPU and memory available for scheduling workloads as well as restrictions and affinities on what type of workloads can be scheduled on the node.
  • At the node level, specialized hardware such as GPUs can also be added to assist with specialized workloads.
  • Networking and storage infrastructure can be balanced between performance, cost, and level of complexity.
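
The node-level constraints in the list above correspond to concrete pod-spec fields. Here is a minimal sketch (labels and values are hypothetical; field names follow the Kubernetes API) of steering a GPU workload onto the right nodes:

```python
# Node-level scheduling constraints as pod-spec fields: a node selector,
# a toleration for a dedicated node pool, and a GPU resource limit.
# The label keys and values here are hypothetical.

pod_spec = {
    "nodeSelector": {"workload-class": "gpu"},  # only nodes labeled for GPUs
    "tolerations": [
        {"key": "dedicated", "operator": "Equal",
         "value": "ml-training", "effect": "NoSchedule"},
    ],
    "containers": [{
        "name": "trainer",
        "resources": {"limits": {"nvidia.com/gpu": 1}},  # request one GPU
    }],
}

# A scheduler-style check: do a node's labels satisfy the pod's selector?
node_labels = {"workload-class": "gpu", "zone": "us-east-1a"}
matches = all(node_labels.get(k) == v for k, v in pod_spec["nodeSelector"].items())
print("schedulable on node:", matches)
```

Constraints like these are part of the optimization picture too: over-restrictive selectors strand capacity, while missing ones let workloads land on expensive hardware they don’t need.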