As with any environment, including Dev, you want to ensure that whatever is running in it performs as well as possible. Not only does that make the application stack more reliable, it also means fewer 2:00 AM calls for the on-call engineers and fewer support tickets from frustrated customers using your application.

In this blog post, you’ll learn about three key ways to optimize performance across all fronts with Karpenter, Kubecost, and StormForge.


To follow along from a hands-on perspective with this blog post, you’ll need to sign up for the 30-day StormForge trial.

The Benefits Of All Three

When you think about performance optimization, you may be wondering, “Why do I need three tools?”, and that’s a fair question. Technically, you don’t. You could use StormForge for all three needs (StormForge can work directly with the Cluster Autoscaler). However, when implementing tools and platforms, the key is to pick the best tool for each job. That’s why bringing in open-source tools like Kubecost and Karpenter to add value makes sense.

Let’s break down what each is for.

First, there’s Karpenter. Karpenter will be used to scale nodes up and down in the most performant way possible. If you’re wondering why you wouldn’t just use the built-in Cluster Autoscaler: you absolutely can, but it’s slower. From a performance benchmark perspective, Karpenter wins at both bringing nodes up and taking them down faster.

Next, there’s StormForge. StormForge will take care of resource optimization for the workloads running on the cluster. It manages the memory and CPU needs of the workloads, including all requests and limits. In short, Karpenter takes care of the infrastructure and StormForge takes care of the workloads.
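For context, the settings StormForge tunes live in each container’s resource spec. Below is a hypothetical Deployment fragment (the container name, image, and values are illustrative, not recommendations) showing the requests and limits it would right-size based on observed usage:

```yaml
# Hypothetical workload fragment; StormForge adjusts these values
# based on the actual CPU and memory usage it observes.
containers:
  - name: api                 # placeholder container name
    image: myorg/api:latest   # placeholder image
    resources:
      requests:
        cpu: 500m             # may be lowered if real usage is lower
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```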

Lastly, there’s Kubecost. Kubecost takes care of… you guessed it, the cost. It ensures that cost optimization is properly implemented within your Kubernetes environment, for both the cluster itself and the workloads running within it. It gives you suggestions on what can be optimized to save money while performing as well as before.

Learn more about the benefits of combining StormForge and Kubecost in our guide to Kubernetes optimization. You can walk through getting a snapshot of savings before optimization in Kubecost, optimizing a workload with StormForge, and seeing those costs go down in Kubecost.

Karpenter Configuration

The overall installation and configuration for Karpenter takes three key steps:

  1. Ensuring the proper IAM permissions.
  2. Installing Karpenter itself.
  3. Configuring Karpenter.

In this section, you’re going to learn how to set up all three to ensure Karpenter is properly running within your environment.

AWS Role

First things first: Karpenter needs permissions to scale Worker Nodes up and down. It doesn’t have the permissions it needs out of the box, so during the installation, you’ll point Karpenter to the proper IAM Role.

First, create a policy using the configuration in the policy.json below (this follows the controller policy from Karpenter’s getting-started documentation).


{
    "Statement": [
        {
            "Action": [
                "ec2:CreateFleet",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateTags",
                "ec2:DeleteLaunchTemplate",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeImages",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstances",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:RunInstances",
                "pricing:GetProducts",
                "ssm:GetParameter"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/Name": "*karpenter*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        }
    ],
    "Version": "2012-10-17"
}

This policy will ensure that Karpenter has the right permissions.

Next, create the policy.

aws iam create-policy --policy-name KarpenterControllerPolicy --policy-document file://policy.json

After the policy is created, associate the OIDC provider so the cluster can use IAM roles for service accounts.

eksctl utils associate-iam-oidc-provider --cluster k8squickstart-cluster --approve

Lastly, add the service account with the policy to your cluster.

eksctl create iamserviceaccount \
  --cluster "k8squickstart-cluster" --name karpenter --namespace karpenter \
  --role-name "KarpenterInstanceNodeRole" \
  --attach-policy-arn "arn:aws:iam::912101370089:policy/KarpenterControllerPolicy" \
  --approve


Now that the policy is ready, you can start the installation process for Karpenter.

First, configure a few environment variables containing:

  1. Your cluster’s name.
  2. The IAM Role ARN.
  3. The cluster endpoint.

An example is below.

export CLUSTER_NAME="k8squickstart-cluster"
export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::912101370089:role/KarpenterInstanceNodeRole"
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query 'cluster.endpoint' --output text)"

Next, install Karpenter with the associated environment variables.

helm upgrade --install --namespace karpenter --create-namespace \
  karpenter karpenter/karpenter \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
  --set clusterName=${CLUSTER_NAME} \
  --set clusterEndpoint=${CLUSTER_ENDPOINT} \
  --set aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME}

Lastly, install the Karpenter CRDs.

# Replace v0.32.1 with the Karpenter version you installed
kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v0.32.1/pkg/apis/crds/karpenter.sh_nodepools.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v0.32.1/pkg/apis/crds/karpenter.sh_nodeclaims.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v0.32.1/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml

You’re now ready for the Karpenter configuration.


Once Karpenter is installed, you’ll need to configure the Worker Nodes that you want it to manage, as Karpenter doesn’t do this out of the box.

As an example, the configuration below sets up Karpenter to manage Linux-based Spot nodes from the “a” or “m” instance categories.

Ensure that you specify the IAM Role name that you created for the config below.

kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["a", "m"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-k8squickstart-eks"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "k8squickstart-eks"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "k8squickstart-eks"
EOF

You can then test out the configuration to ensure that proper scaling takes place.

Deploy a test workload.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
EOF

Scale the workload.

kubectl scale deployment inflate --replicas 5

Check to ensure that Karpenter is working as expected.

kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
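You can also watch the scale-up from the cluster’s perspective (these commands assume the v1beta1 NodePool API used in the configuration above):

```shell
# NodeClaims represent the instances Karpenter has launched
kubectl get nodeclaims

# Watch new worker nodes join the cluster as the deployment scales
kubectl get nodes -w
```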

StormForge Configuration

Now that the Karpenter configuration is complete, you’ll implement the workload optimization piece: StormForge. In the prerequisites section of this blog post, you signed up for the StormForge trial.

Once you’re signed in, you’ll see a screen similar to the one below.

Click the green + Add Cluster button.


Next, go through the steps of putting in information about your cluster.


You’ll reach a point where you need to download a values.yaml file containing the credentials StormForge needs to access your Kubernetes cluster. Save the output in a values.yaml file.

clusterName: karpentertest
  clientID: ef0a6fd
  clientSecret: AwTG

The last step is to run the StormForge Agent installation.

helm install stormforge-agent oci:// \
  --namespace stormforge-system \
  --values values.yaml
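Once the install completes, you can confirm the agent is running before heading back to the StormForge UI (the namespace matches the one used in the install command above):

```shell
# The StormForge Agent pods should show a Running status
kubectl get pods -n stormforge-system
```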

Kubecost Configuration

When implementing Kubecost, you’ll most likely start by searching “How do I install Kubecost?”. You’ll see several articles that show the installation, but they show it with a Kubecost product key.

To get that product key, you have to contact Kubecost.

If you don’t want to do that, you can instead just install the 100% open-source version.

To install the open-source version, run the following Helm Chart.

helm upgrade --install kubecost cost-analyzer \
  --repo https://kubecost.github.io/cost-analyzer/ \
  --namespace kubecost --create-namespace

You can ensure the resources are properly running with the command below.

kubectl get all -n kubecost
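With the pods running, you can reach the Kubecost dashboard locally with a port-forward (the deployment name below is the default from the cost-analyzer chart):

```shell
# Forward the Kubecost UI, then open http://localhost:9090 in a browser
kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
```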

Wrapping Up

Consolidating the tools you use into one usable, manageable stack is key for any implementation. When it comes to overall performance optimization, this three-pronged stack makes the most sense to ensure that the application stack you’re running is as efficient as possible.