In recent years, many enterprise development teams have embraced container technology as a way to run applications securely and efficiently while ensuring code runs reliably across multiple computing environments. Many of these enterprises use Kubernetes, the open source container orchestration platform, to deploy, manage, and scale their containerized applications.

But while many enterprises use Kubernetes to manage their containers, some never think about optimizing it. Optimizing Kubernetes may sound like a complex task, but it doesn’t have to be, and companies that neglect it may be missing out on performance improvements and cost savings.

Here are some tips for optimizing Kubernetes:

Part 1 of Optimizing Kubernetes: Pay Attention to Resources

In many cases, we see Kubernetes users who aren’t conscious of their resource requests and limits. Users often stand up an application on Kubernetes, leave the resources section of the manifest blank, or use the default requests and limits from a Helm chart or another source. Then they don’t regularly revisit those resource allocations.

There are several potential problems with this scenario. In some cases, the resource allocations originally selected aren’t optimal for their unique application load/processes/traffic. As an application’s codebase grows and changes over time, in many cases, resource settings should be adjusted as well.
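As a minimal sketch, explicit requests and limits in a Deployment manifest look like this (the app name, image, and values below are placeholders; the right numbers depend on your application’s observed usage):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example/web:1.0   # placeholder image
        resources:
          requests:              # what the scheduler reserves for this container
            cpu: "250m"
            memory: "256Mi"
          limits:                # hard ceiling before CPU throttling or OOM kill
            cpu: "500m"
            memory: "512Mi"
```

Leaving the `resources` block empty means the scheduler has no idea what the container actually needs, which is exactly the scenario described above.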

Dev teams often don’t check on how efficiently the applications are running using the resources allocated through Kubernetes. As a result, they could be leaving a lot of wasted cost and resources on the table.

We sometimes compare Kubernetes to a high-end stereo sound system. Dev teams should be fiddling with the system’s knobs. But just like on a sound system, if one knob is tweaked, seven other settings may need to be changed to balance everything out.

With Kubernetes, instead of having one single volume knob that goes from one to 10, dev teams need to dial in multiple resource and parameter settings that can go from one to 10,000.

Often, dev teams set up Kubernetes by just “turning on the stereo,” when they could probably get much better performance if they checked the “tuning knobs” of their Kubernetes environments more often.

Dev teams should be conscious of their resource requests and limits. If they aren’t, they could be missing out. Even though Kubernetes is working, and even though an enterprise may be getting decent performance, they possibly could get double the performance for the same cost, or they could cut their cost in half for the same performance.

It’s possible to see big cuts in costs without reducing an enterprise’s performance goals, just by being more proactive. While dev teams may set resources too high, it’s also possible to assign an application too few resources in Kubernetes.

If an application gets a spike in traffic or has a heavy job, and it doesn’t have enough resources, it can unexpectedly crash. Then, the dev team needs to go hunt for the problem, kind of like finding a needle in a haystack. They adjust the parameters, hoping the application continues to run, then repeat the cycle if it doesn’t. That’s a bad cycle to get into.

Instead, dev teams should routinely opt to “dial in” and optimize resource limits and requests. There are tools that help check for optimal settings and recommend better resource allocations, but regular monitoring is key.
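One example of such a tool is the Kubernetes Vertical Pod Autoscaler (VPA). Run in recommendation-only mode, it observes actual usage and surfaces suggested requests without changing anything. A sketch, assuming a Deployment named `example-app`:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-app-vpa      # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app        # the workload to observe
  updatePolicy:
    updateMode: "Off"        # recommend only; don't evict or resize pods
```

With `updateMode: "Off"`, `kubectl describe vpa example-app-vpa` reports recommended requests in the status, which the team can compare against what the manifest currently asks for.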


Part 2 of Optimizing Kubernetes: Don’t Neglect Application Parameters

In addition to regularly checking their resource requests, dev teams should also regularly look at the parameters and settings of the applications they’re running in containers managed by Kubernetes.

Similar to the situation with resource allocations, applications may run perfectly one day, then need adjustments the next day. Correct might only be correct for a day. In some cases, an application may run more efficiently with different settings than originally assigned to it.

Kubernetes offers a standard way to configure apps: ConfigMaps inject configuration data into containers from a separate outside source, so configuration data doesn’t have to be hard-coded into the application.
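A minimal sketch of this pattern, with placeholder names and tuning parameters invented for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-app-config    # hypothetical name
data:
  CACHE_SIZE: "512"           # placeholder tuning parameters
  WORKER_THREADS: "8"
---
# Consuming the ConfigMap as environment variables in a pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: web
    image: example/web:1.0    # placeholder image
    envFrom:
    - configMapRef:
        name: example-app-config
```

Because the settings live outside the image, adjusting them later means editing the ConfigMap and restarting the pods, not rebuilding and redeploying the application.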

Our recommendation is for dev teams to check the configurations regularly, to ensure that apps are running as efficiently as possible. They could be paying for more resources than they need if their apps aren’t configured correctly.

Some Kubernetes optimization tools will test app configurations and can set parameters before deployment. These tools can recommend tested configurations and advise dev teams on which settings to use.

Going back to resource allocations, these tools also can tell dev teams how many resources an application typically uses, and they can suggest what additional resources an app may need under a heavy load scenario.

Use Available Kubernetes Tools to Your Advantage

While we’re on the subject of tools, it’s worth noting that there is a growing list of products that can help dev teams optimize Kubernetes. Automation tools can help dev teams manage both resource allocations and app configurations.

Kubernetes allows for tens of thousands of parameter combinations, far more than the human brain can weigh and calibrate on its own.

Instead, dev teams should leverage automation and the landscape of tools to make things easier. They shouldn’t try to do everything themselves. There are a lot of Kubernetes tools available, and it’s worth the time to get to know them.

The enterprise version of our own StormForge platform uses machine learning to proactively tune complex Kubernetes apps, automatically giving teams the correct configuration settings to use before they deploy. This works especially well for applications with 10 to 100+ different config “knobs,” running multiple workloads scoped to a namespace (or multiple namespaces). We also offer a free-for-life version for developers, which is great for simple applications running fewer than three pods.

Another interesting tool is Kubecost, which gives dev teams visibility into their Kubernetes resource costs, helping them reduce spend and visualize resource usage over time. This can be a great way to visualize, analyze, and report on your actual workload costs pre- and post-optimization. In addition to Kubecost, there are several other tools worth investigating, with new ones sprouting up all the time.

Aside from the few tools mentioned here, there are several good lists of useful Kubernetes tools available online. Start by keeping up with the tools on the Cloud Native Computing Foundation website. Dev teams would do themselves a favor by keeping up with the growing number of tools being released.

Request a demo of the StormForge platform.

For more information about Kubecost, check out the Kubecost website.

Check out the CNCF Landscape.