Resources
Frequently Asked Questions
StormForge’s customer support teams have fielded questions from a wide variety of potential customers. In this FAQ, we have assembled the most frequently asked questions.
StormForge Platform
The StormForge Platform uses patent-pending machine learning to optimize Kubernetes resource efficiency at scale. In pre-production, StormForge uses rapid experimentation combined with load testing to optimize for every possible scenario and provides in-depth application analysis and insights to drive key architectural improvements. In production, StormForge maximizes the value of your existing observability data to reduce resource usage and cost while still meeting SLAs. StormForge accelerates your competitive advantage by allowing developers to focus on innovating, not tuning Kubernetes.
1. StormForge Optimize Live – Turn observability into actionability
- Observation-based optimization in prod
- Leverages observability data already being collected
- Machine learning recommends CPU and memory to improve efficiency
- Recommendations can be automatically implemented or manually approved
- Simple configuration and fast time-to-value
2. StormForge Optimize Pro – Proactive Optimization with deep application insights
- Experimentation-based optimization in pre-prod
- Uses load testing to simulate a range of scenarios
- ML optimizes for any goal by tuning any parameter
- Highlights trade-offs to enable smart business decisions
- Deep application insights to drive architectural improvements
3. Performance Test – Scalable, easy-to-use Kubernetes load testing
- Create load tests in minutes with SaaS simplicity
- Scale to hundreds of thousands of requests per second for millions of concurrent users
- Simulate traffic from any geographic region
- Built for automation into CI/CD workflow
- Open workload model for accurate and real-world scenarios
Part of the StormForge Platform, StormForge Optimize Live uses machine learning to analyze observability data and make recommendations on container CPU and memory settings in order to reduce resource usage while still ensuring application performance.
StormForge Optimize Live ingests data from your existing observability tools. Our machine learning analyzes historical usage and trends and then makes recommendations for updated CPU and memory settings, at whatever frequency you specify. Updates can be configured for manual or automatic approval. If you choose manual approval, you can review the recommendation in detail before approving. If you choose automatic approval, the recommendation is automatically patched to the deployment in production. Optimize Live continues to watch your environment and repeat the process, so you can put your optimization on autopilot.
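To make the mechanics concrete, here is a minimal sketch, in Python with the official Kubernetes client, of what applying an approved CPU and memory recommendation to a Deployment roughly looks like. The deployment name, container name, namespace, and resource values are hypothetical examples; Optimize Live performs this patching for you when automatic approval is enabled.

```python
# Illustrative sketch only: applying a CPU/memory recommendation to a Deployment
# with the official Kubernetes Python client. All names and values below are
# hypothetical examples, not output from Optimize Live.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a cluster
apps = client.AppsV1Api()

# Hypothetical recommendation produced from analyzing observability data
recommended = {"cpu": "250m", "memory": "512Mi"}

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "web",  # hypothetical container name
                        "resources": {
                            "requests": recommended,
                            "limits": recommended,
                        },
                    }
                ]
            }
        }
    }
}

# Strategic-merge patch updates only the resource settings of the named container
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```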
Part of the StormForge Platform, StormForge Optimize Pro works in a non-production environment using a process of automated, rapid experimentation to simulate a wide variety of scenarios. Optimize Pro uses patent-pending machine learning to analyze data and recommend the optimal configuration prior to deployment.
StormForge Optimize Pro uses a process of rapid experimentation in a non-production environment. Load is placed on the system using a performance test, which can be created using StormForge Performance Testing or with a third-party tool. StormForge machine learning analyzes the outcomes of the load test based on your goals (e.g. performance and cost), then recommends a new configuration to try. After running several trials, StormForge machine learning homes in on the configuration that will result in the optimal outcomes prior to deploying your application.
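The self-contained Python sketch below illustrates the general experimentation loop described above: try a candidate configuration, run a load test, and keep the cheapest configuration that still meets a latency goal. It is a conceptual illustration only; the function names, cost model, and random search are hypothetical stand-ins for StormForge’s actual machine learning.

```python
# Conceptual sketch of experimentation-based tuning, not StormForge's product API
# or its machine learning. All names, numbers, and the scoring model are hypothetical.
import random

def run_load_test(cpu_m: int, memory_mi: int) -> dict:
    # Stand-in for a real load test: pretend latency improves with more resources.
    p95_ms = 400_000 / (cpu_m + memory_mi) + random.uniform(-10, 10)
    cost = cpu_m * 0.02 + memory_mi * 0.005  # hypothetical $/hour model
    return {"p95_ms": p95_ms, "cost": cost}

def run_experiment(trials: int = 20, slo_p95_ms: float = 300.0) -> dict:
    best = None
    for _ in range(trials):
        # A real optimizer would choose the next trial from past results;
        # random sampling keeps this sketch self-contained.
        candidate = {"cpu_m": random.randint(100, 2000),
                     "memory_mi": random.randint(128, 4096)}
        metrics = run_load_test(**candidate)
        if metrics["p95_ms"] <= slo_p95_ms and (best is None or metrics["cost"] < best["cost"]):
            best = {**candidate, **metrics}
    return best

print(run_experiment() or "no configuration met the latency goal")
```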
Software is not static; today’s software companies ship more releases, more often. Developers use StormForge to make sure their software is managed appropriately and runs with optimal efficiency, without spending time manually tuning applications. What does that mean from a value standpoint? It means they don’t have to choose between cost, performance, or other metrics. Developers also gain a much broader view of their environment from a resource management standpoint, enabling more intelligent decisions.
The StormForge platform is built to be integrated into the DevOps toolchain. It is hardened from a cloud-native perspective and vertically integrated, so developers can get up and running quickly and realize value fast.
StormForge is the only solution that closes the loop between pre-production and production with both experimentation-based and observation-based optimization in a single platform. Our patent-pending machine learning engine provides a level of sophistication that goes beyond the basic statistical modeling used by other solutions. And unlike observability and APM tools, which are important for seeing what’s happening in production, StormForge Optimize Live is proactive: it lets you automate the change or fix that the observed data calls for. In that way, observability and APM tools pair naturally with StormForge’s ML-driven optimization, which is a key differentiator for StormForge.
StormForge started as a machine learning lab. We were a Docker Swarm shop for the first few years, and we were trying to solve our own lift-and-shift challenges from Docker Swarm to Kubernetes. When our team of data scientists and engineers was performing this lift-and-shift, we realized how painful tuning is for developers.
We knew we weren’t alone, and immediately went out and talked to as many developers as we could to see if they were experiencing the same problems. Workloads are moving to Kubernetes at a breakneck pace. That was really the start of StormForge: we were born out of a real business problem that stalls organizational momentum. So we dedicated ourselves to solving the problem in a way that makes life easier for developers and gives businesses the intelligence and insight they need to make better resource decisions as they move to Kubernetes.
Getting started with StormForge is as easy as one-two-three. Talk to one of our Kubernetes experts, take it for a test drive, and start achieving your goals of increasing developer velocity, improving application performance and reducing cloud costs.
Kubernetes App Optimization
Kubernetes – or “k8s” – is an open-source software platform that automates Linux container operations. K8s enables IT and DevOps teams to manage workloads and services via a framework that runs containerized applications on distributed, cloud-based systems in automated and resilient ways.
Learn more about Kubernetes here.
With the continual growth and adoption of Kubernetes, companies need a way to manage the complexity of their Kubernetes applications. Monitoring workloads, tuning them manually, and managing applications at scale without over-provisioning resources that inflate your cloud bill are just a few of the reasons Kubernetes management is needed to simplify and automate your Kubernetes environment.
Learn more about Kubernetes management here.
Cloud-Native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. Cloud-native architecture takes full advantage of the distributed, scalable, flexible nature of the public cloud. “Going Cloud Native” means developing and deploying applications that abstract away many layers of infrastructure — networks, servers, applications, etc.
Learn more about Cloud Native here.
When organizations go cloud-native with their key business applications and manage them with Kubernetes, StormForge helps ensure that those apps maintain the desired performance while minimizing cost at all times. In Kubernetes-driven environments, StormForge addresses one of the biggest challenges: application performance optimization. Leveraging machine learning-powered automation, StormForge eliminates manual tasks and trial-and-error, replacing them with fast and dynamic analysis, adaptation, and action.
StormForge Optimization is purpose-built for Kubernetes. Its ground-breaking machine learning technology is a perfect fit for solving the complex problems that arise when going cloud-native. It is innovative both conceptually and in practice because it continuously optimizes and improves the Kubernetes operating environment.
Application optimization is the process of tuning, testing, and re-tuning an application’s parameters and configuration settings such that its operational performance is in line with the organization’s preferences – whether that be for the lowest cost, highest speed, or some other specific parameter.
One real-world example: many companies focus only on CPU and memory tuning, but they can and should consider many other parameters when establishing and maintaining an optimized Kubernetes environment. This is a major difference between StormForge’s proactive approach to optimization and conventional APM (Application Performance Management) products.
We recommend that you check out this StormForge blog post for a deeper dive into this topic: Improving performance cost-efficiency in Kubernetes Applications.
The simple answer is that Kubernetes is hard to work with because it was designed for managing containers at Google scale, alongside legacy services. Kubernetes, in short, supports a fast-moving, complex, and demanding environment.
It is complicated for three reasons: (1) deploying applications with Kubernetes involves advanced levels of automation and multifaceted levels of abstraction (i.e., everything is broken down into smaller and smaller pieces); (2) over the years there have been many major releases of Kubernetes (sometimes 3-4 per year) with many new features and changes, and this pace is not expected to slow down; and (3) the levels and varieties of configuration are far greater than ever before because of containerization, resource sharing, and microservices.
StormForge can tune any application running on Kubernetes. Some of the most common use cases, languages, and technologies tuned by our customers include:
Languages:
- Java
- .NET
- Python
- Javascript/Node.js
- Go
- PHP
- Any other language that runs in a container on Kubernetes
Technologies:
- Apache Spark
- Cassandra
- Drupal
- Elasticsearch
- Horizontal Pod Autoscaler
- Nginx
- PostgreSQL
- Vertical Pod Autoscaler
- WordPress
- Any other technology that runs in a container on Kubernetes, including all of your custom applications
There are two ways to optimize an application:
The first is handling the task manually. This involves a human manually changing settings or parameters, assessing how these “tweaks” have impacted results, and then employing a trial-and-error process until one set of results is acceptable and desirable. Clearly, this approach – a typical APM approach – is time-consuming, error-prone, and ultimately ineffective in the Kubernetes environment.
The alternative is automated optimization of application performance that is software-defined – driven by artificial intelligence and machine learning. Given the huge numbers of variables involved with cloud-based applications, and the complexity of testing, adjusting, and re-testing millions of combinations of variables, the automated approach is far superior to any manually-based efforts.
Getting an application’s performance optimized is important for several reasons. First, it means that the application will operate as users expect – in terms of availability, reliability, speed, responsiveness, etc. The second reason is financial. Over-provisioning cloud apps is one way to help ensure that they run well, but doing so can drive costs through the roof. Properly optimized applications perform well under load but do so in a cost-effective way with ‘just right’ provisioning.
Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters and thereby allow businesses to separate resources, security, and access to these resources.
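As a small illustration, assuming the official Kubernetes Python client, the snippet below creates a namespace and then lists only the pods inside it; the name "team-a" is a hypothetical example.

```python
# Minimal sketch: namespaces partition one cluster into virtual clusters.
# "team-a" is a hypothetical namespace name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Create a namespace for one team or environment
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a")))

# Queries scoped to the namespace see only that team's resources
for pod in core.list_namespaced_pod(namespace="team-a").items:
    print(pod.metadata.name)
```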
A “service” or “microservice” is a component or specific collection of pods that act as a logical slice of an application. In other words, a service is a portion of an application that executes a particular task.
An experiment defines what you want to test, how it will be tested, and what will be done to optimize the application. Via a series of scientific trials, StormForge automatically explores a range of parameters related to the cloud environment, Kubernetes, the application(s), load, and so on. StormForge’s machine learning algorithms vary the parameters used in these trials and automatically run multiple simulations, under load, to find the optimal parameters and ultimately the optimal configuration for all of the technologies involved.
Kubernetes Performance Testing
Application performance testing is the process of assessing the performance, stability, scalability, reliability, and resource usage of applications under a particular workload. By defining test scenarios (test cases) and using user-defined data sources, a real-world workload can be generated and the application tested against it. The metrics gathered during test runs are used to benchmark the current state of an application against its target state, as defined, for example, in an SLA. If performance testing reveals that an application is not performing at the desired level, the next task is to use performance data and other sources of information to locate and diagnose the problems, then take the necessary actions to resolve them. The overarching goal of performance testing, therefore, is not only to make an application fast but also to enable an organization to understand why the application behaves as it does and what its limitations are.
Companies are looking to automate, scale, deploy faster, and lower the cost of their applications. To determine whether they are actually gaining those benefits, organizations need to test application performance. By doing so, they get quantitative answers to key performance questions, such as: What are the actual speed, scalability, and stability of an application versus expectations? How does the application perform as the number of concurrent users rapidly increases or decreases? Is the application’s resource provisioning optimal?
With the StormForge Platform, teams can take advantage of Performance Testing as a Service (PTaaS). The platform enables them to create load tests in just minutes and scale tests from tens to hundreds of thousands of requests per second. The StormForge Platform also enables tests that replicate the activity of millions of concurrent users. The platform’s intuitive user interface allows teams to easily create repeatable, automated load tests to incorporate into their CI/CD workflows.
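For illustration only, the sketch below generates simple HTTP load from plain Python (asyncio and the third-party aiohttp library) and reports a p95 latency. It is not the StormForge test-case format; the target URL and request volumes are hypothetical, and a load-testing platform handles distribution, scaling, and reporting for you.

```python
# Conceptual load-generation sketch, not the StormForge test-case format.
# Requires aiohttp (third-party). The target URL and volumes are hypothetical.
import asyncio
import time
import aiohttp

TARGET = "https://example.com/"   # hypothetical system under test
CONCURRENT_USERS = 100
REQUESTS_PER_USER = 50

async def simulated_user(session: aiohttp.ClientSession) -> list:
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        async with session.get(TARGET) as resp:
            await resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        per_user = await asyncio.gather(
            *(simulated_user(session) for _ in range(CONCURRENT_USERS))
        )
    latencies = sorted(t for user in per_user for t in user)
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"requests: {len(latencies)}, p95 latency: {p95:.3f}s")

asyncio.run(main())
```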
At this time, any application reachable via HTTP can be tested with StormForge Performance. In some cases, closed systems (not reachable over the internet) can also be tested. Feel free to contact us to discuss your specific needs.
The focus of performance testing changes over a project’s lifetime as different aspects of a system need to be evaluated. StormForge supports the following testing types: load, smoke, endurance (soak), throttle, stress, peak, and scalability.
Learn more about the different types of performance tests here.
The maximum load generated for each test depends on the plan an organization has purchased and on what is defined in each test case. With the highest-tier plans, any kind of real-world load can be generated.
Of course. You can find the details in our documentation.
It’s pretty easy. Tests can be automated and scheduled using either the CLI in combination with your CI tooling or the scheduling feature in the WebUI.
Yes. Each test case is highly configurable to simulate real-world user behavior observed by, for example, an organization’s marketing team (user journey, time-on-page, etc.). You can upload your own data sources and define how they are used in a test case.
Yes. Load can be generated from any inhabited continent to simulate load from clients in these regions.
Yes. In more and more cases, organizations are launching their mobile apps in regions with limited or varying mobile coverage. With Throttle testing in StormForge Performance, organizations can make sure to serve all of their customers and launch in new markets with confidence.
Sure thing. To distinguish between real user traffic and StormForge traffic to, for example, a production system, you can either set a custom HTTP header or filter on the standard headers StormForge sends. In addition, you can use basic auth to allow load tests to access your test environments.
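A minimal sketch of the idea, assuming a hypothetical header name (X-Load-Test is an example, not a StormForge-defined header): tag load-test requests with a custom header so a production system or its analytics can separate them from real user traffic.

```python
# Minimal sketch: tag load-test traffic with a custom header so it can be
# filtered out downstream. "X-Load-Test" and its value are hypothetical examples,
# not headers defined by StormForge.
import requests

LOAD_TEST_HEADERS = {"X-Load-Test": "stormforge"}

# Sending side: a tagged request against the system under test (hypothetical URL).
requests.get("https://example.com/checkout", headers=LOAD_TEST_HEADERS)

# Receiving side (or log processing): drop tagged requests before computing
# real-user metrics.
def is_load_test(request_headers: dict) -> bool:
    return request_headers.get("X-Load-Test") == "stormforge"
```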
While StormForge Performance Testing is included as part of the StormForge platform along with application optimization, customers can purchase Performance Testing as a standalone solution. Please contact us for details.
StormForge Performance Testing fits in the CI/CD pipeline right after a change is deployed to an integration or testing environment. Performance testing should happen, in general, at the same time as integration and acceptance testing.
If you’ve purchased the StormForge platform including both Application Optimization and Performance Testing, we include enough performance testing capacity to meet the needs of almost any optimization scenario. If you do run into limitations, please contact us to discuss options for increasing your load testing capacity.
If you’ve purchased StormForge Performance Testing on its own, the number of users, collaborators, test cases, test data, and test runs depend on which plan you have purchased. Contact us for more details on what is included with each plan.
About StormForge
StormForge is the single, unified software company that emerged following Carbon Relay’s acquisition of StormForger in November of 2020. The company’s team of world-class data scientists, machine learning experts and seasoned DevOps engineers build products that drive operational improvements in the Cloud and on-prem through better software performance and more effective resource utilization. StormForge’s main software offerings are automated performance testing and performance optimization of Kubernetes applications.
Carbon Relay acquired StormForger in November of 2020, and rebranded the newly combined companies as StormForge. We invite you to view our Press Release for more details.
StormForge offers solutions for application optimization and performance testing for your Kubernetes applications. It also offers a full range of Professional Services designed to help customers make smooth transitions from monolithic apps to microservices running in containers orchestrated by Kubernetes. The company’s services help bridge customers from their legacy approaches to a next-generation environment while ensuring performance, reliability, and cost-efficiency.
StormForge offers a full range of Professional Services. The company’s Services team can guide customers through all stages of the DevOps lifecycle. Our experienced, hands-on team delivers the technical expertise, oversight, and guidance customers need to ensure reliability and performance before their products are released. Specific service offerings include Application QuickStart, Kubernetes Auditing & Guidance, and Education Service. For customers with unique or specific requirements, StormForge also offers Custom Consulting Services.
For more information on these offerings, visit the Professional Services page.
The most efficient way to contact StormForge is by visiting our Contact Us page. There you will find ways to contact the team, Press / Media inquiry information, our phone numbers, and office locations.
General inquiries can also be made by phone at +1 857-233-9831 or via email at info@stormforge.io.