As an IT manager or DevOps team leader, you’re looking at Kubernetes and seeing solutions to many of your present challenges, particularly around application development and deployment. From everything you’ve read and heard, it seems that Kubernetes has what it takes to make your applications more portable and scalable, simpler to develop, and easier, faster and less expensive to deploy.

So, what’s keeping you from jumping on the K8s bandwagon? If you’re like a lot of other DevOps and IT pros, it’s the complexity that comes with this container orchestration system. Or, if you’ve already started, have complexity-related issues stopped you in your tracks? Are your next steps being delayed as you grapple with the issues?

Wherever you are on your migration journey, it’ll be helpful to peel back the layers to understand both the nature of Kubernetes’ complexity and ways to overcome it. So, let’s do that.

The Kubernetes Complexity Problem

One recent security assessment hit the nail on the head in describing the complexity of Kubernetes and the challenges it creates for teams:

“Kubernetes is a large system with significant operational complexity. The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls.”

Kubernetes gives teams seemingly endless options and choices as to how their applications should be configured and provisioned – and how they should ultimately run. Kubernetes creates a declarative world in which developers declare their choices, after which Kubernetes should automatically “make it so.”
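As a concrete illustration of that declarative model, here is a minimal Deployment manifest (the names and resource values are purely illustrative, not a recommendation): you declare the desired replica count and resource envelope, and Kubernetes continuously works to make the cluster match it.

```yaml
# Hypothetical example – app name, image, and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # declared desired state: keep 3 pods running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
          resources:
            requests:          # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:            # hard caps enforced at runtime
              cpu: 500m
              memory: 512Mi
```

Every value in this manifest – replicas, requests, limits, and many more not shown – is a tuning choice the team has to get right, and the choices interact.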

Too often, however, it doesn’t quite work out that way. Lots of teams have the technical skills to stand up their K8s clusters, but those same teams are at a loss for why the apps they have running in containers on those clusters aren’t performing as expected.

Maybe the goal is very high performance, super-flexible scalability, or rock-solid reliability. Or maybe it’s cost-effectiveness, lowering the cost of IT operations. Whatever the goals may be, it’s virtually impossible to achieve any of them with no visibility into how configuration changes affect app performance, and no understanding of how tuning one parameter affects other components of the application, or the app as a whole.

In other words, it’s a good goal to shoot for a 30% or 40% reduction in an app’s cloud costs. But achieving it has a lot less impact if it takes developers 3 or 4 weeks of tinkering with its configuration settings to get there. After all, the only thing more expensive than many companies’ cloud bill is the cost of headcount, so it’s critical for firms to always strive to use their people in the best and most strategic ways possible.

Relying on over-provisioning isn’t a viable strategy either; it casts “going big” in a new, negative light. In addition, the financial ramifications of taking that path tend to surface fairly rapidly, especially in the tight business conditions the pandemic has created.


Configuring and Tuning K8s

The biggest danger for teams – the real iceberg in their path – is the inability to configure and tune K8s apps effectively. From what the StormForge team has seen in the field, configuration and tuning miscues are the most common reasons why DevOps and IT teams slide past their delivery deadlines, miss their KPIs, and violate their SLAs with the business. They’re the main culprit behind Kubernetes migration failures and digital transformation initiatives getting driven into ditches.

One specific area where these problems crop up is in continuous integration/continuous delivery (CI/CD) pipelines. This methodology adds automation to various phases of the development process to enable teams to deliver apps faster and with more agility. Much of the value here is in the testing that takes place to verify code changes don’t break the application. More specifically, CI/CD introduces ongoing automation and continuous monitoring throughout the lifecycle of apps, including initial development, integration and testing, and delivery and deployment. But do you even test for optimization? The code runs, but is it efficient, and can it handle the expected load?

The inherent complexities of Kubernetes, and of containerized apps running in Kubernetes, tend to cause problems around the midpoint of these processes – as integration connects to delivery. That’s where altered declarations and definitions, and different configuration settings for apps, are applied and tested – and often don’t behave as expected. This is exactly where optimization matters most: you have to know that the app will perform in the desired way.

Intelligent Kubernetes Optimization to the Rescue

There’s a new, smarter, more effective way to manage the configuration and tuning of apps and services running in K8s containers. I’m proud to say that this innovation was pioneered by colleagues here at StormForge.

This approach starts with established data science methods as its foundation. It then applies advanced machine learning techniques to explore application parameters in unique ways. From there comes automated determination of the optimal configurations – ones that are guaranteed to deploy reliably and perform optimally. This involves intelligently analyzing and managing tens of interrelated variables with millions to billions of potential combinations to automatically select optimal settings for each application, and to do so in minutes or hours.
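To see why that search space gets so large – and why exhaustive trial-and-error tuning is hopeless – here’s a back-of-the-envelope sketch. The parameters and counts below are purely illustrative, but even this modest set of tunables for a single service multiplies out to millions of combinations:

```python
from math import prod

# Illustrative tunables for one containerized service: each entry maps a
# hypothetical parameter to the number of discrete values one might try.
parameter_values = {
    "cpu_request": 20,       # e.g. 100m to 2000m in 100m steps
    "memory_request": 16,    # e.g. 256Mi to 4Gi in 256Mi steps
    "replicas": 10,          # 1 to 10 pods
    "jvm_heap": 16,          # heap size steps
    "gc_algorithm": 3,       # choice of garbage collector
    "thread_pool_size": 12,  # worker thread counts
}

# Total configurations is the product of the per-parameter choices.
combinations = prod(parameter_values.values())
print(f"{combinations:,} possible configurations")
```

Six parameters with a handful of values each already yield over 1.8 million configurations; add a few more services or finer-grained steps and the count reaches into the billions, which is exactly where machine-learning-driven search pays off over manual tinkering.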

Machine learning elements also enable this technology to learn about apps and services over time, continually improving configuration management and tuning while also making the process more scalable and efficient.

In short, ML-powered systems like the StormForge Platform are knocking down the big hurdles that have kept success with Kubernetes migrations out of reach for many DevOps and IT teams. By doing so for all kinds of teams at all types of companies, they are paving the way for digital transformation success. And that’s pretty darned cool!