On-Demand

Automatic Optimization of Kubernetes Pod Configuration Through Machine Learning

Air Date: March 2, 2021

Lars: Hi. Welcome to the webinar, which is about automatic optimization of Kubernetes pod configuration through machine learning.

What you see here is the Cloud Native landscape, and I'm pretty sure every one of you has seen this before. However, I really like to show this picture because it gives us an impression of the complexity we are dealing with on a day-to-day basis. The Cloud Native landscape shows vendors and products, and of course open source projects, for pretty much any problem we need to solve: from the cloud vendors' hosting services, through tooling around automation, configuration, security, networking and storage, database solutions, up to the application layer tooling, and of course monitoring, logging, and observability in general. So whatever application you want to create and run, you'll need and use some of these proposed solutions, and probably you will use a lot of them. The fact that we need so many solutions for our day-to-day business, and the fact that we must put them on a map, speaks, in my opinion, for itself when it comes to thinking about complexity. So of course I have to point at our very own logo here with the red arrow. This tiny pack of black pixels over there is what we will talk about today.

So let me introduce myself. I'm Lars Wolff. In 2014, I founded a performance testing software-as-a-service together with Sebastian Cohen; that product was called StormForger at the time. Last year, we joined forces with a company called Carbon Relay in Boston, with major expertise in machine learning based optimization. We now call ourselves StormForge, and that is where you are listening in right now. For some years now, I've been on a mission to help DevOps organizations achieve efficient, high-quality delivery at speed, and I know this sounds buzzwordy, but it's actually really important to me. I don't know anybody who wants to deliver slow and unreliable systems.

If you want to talk to me, you can find me on the net as @larvegas, or can just drop me a line at lars@stormforge.io.

So, excuse me, at StormForge we are here to help you ignite performance and crash the complexity of your own system. We deliver performance testing and optimization powered by machine learning, which helps you improve your systems proactively. So why? Creating and running applications, you will sooner or later face some of the challenges of successfully running cloud native workloads. Over the last years, we identified the following pillars. Each of them stands alone, but they are also interdependent on each other; that's yet another kind of complexity. I hope you agree, but I'm also happy to discuss it in the Q&A later if not. Performance is something that really matters; no one wants slow, unresponsive systems. Scalability is a key factor for the cloud: we want to allocate the resources we need at the time we need them, we want to make sure we can downscale the system if we don't need those resources, and of course we want to scale out fast if we need more resources to serve our users. Another thing is reliability, which is simply not negotiable. Most businesses heavily rely on software and its operation, and some businesses really depend on that software. So last but not least, it comes to efficiency, which is business critical. The first thing that comes to mind when thinking about efficiency is cost, and that's totally true, but in my personal opinion, it's not so much about actual cost savings. It's more about having an understanding of where you actually spend which budget for what you want to serve, and knowing this gives you more room to equip other innovative projects or initiatives in the company with resources which, until then, were maybe used wrongly or were unused.

So talking about understanding and knowledge, we should ask the question: how do we actually gain an understanding of the behavior of one of our complex systems? The answer is: it's about testing. It's about creating experiments. It's about observing the behavior of the system, and so learning about specific situations. One tool for this is performance testing. The bad news here: performance testing itself is another complex thing, I'm sorry. But the good news is, in the last years we worked hard building a product for and with our customers to make performance testing way easier. Starting performance testing early and doing it as often as possible is one of the most important things to actually face the mentioned challenges. So with StormForge Performance Testing, we focused from the very beginning on creating a tool which allows creating test scenarios in code, and so shifting left in the development process. For those who are not familiar with that: the shift-left approach means making sure that the creation and maintenance of test scenarios can be done immediately after a feature and its functional tests are created. So you don't have to wait for another team to test your component, and you also do not rely on, for example, external consultants. You own that non-functional aspect, the performance testing of your component, yourself. Further, it's super important to us to give you the ability to run tests anytime, at any scale, from any cloud region in the world. It doesn't matter if you just want to hit the start test run button in our UI or if you trigger a test run via an API call. One important thing about running a performance test is the management of the load generator itself, especially if you want to make sure you have reliable, comparable results and tests done on a regular basis.
Of course we need reporting data after we have done some test runs; we want to inspect the behavior of the system and actually have a look at that reporting data, and we make sure that you and your colleagues can access this data immediately after a test run. This may sound silly to you, but there are a lot of teams out there which actually wait for hours, sometimes days, to get the reporting on their performance. Here it is just seconds. So it's very important to us that you have immediate reporting after you created the test run. Our performance testing solution was built with automation in mind, so integration into pipelines and testing for non-functional requirements is done using our command line client, and if you like, you can create quality gates in your pipeline and you're ready to go. All of this gives you the ability to share knowledge about the behavior of your system with a broad audience in your organization, which is very important since performance testing as a discipline is a cross-functional undertaking itself. To create, run, and analyze performance tests, you need to talk to engineering of course, operations, the business itself, marketing, and sometimes other disciplines too. You gain the benefit of letting a performance culture evolve in your organization.
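To make the scenario-as-code idea concrete, here is a minimal sketch of what a test scenario defined in code might look like. The function name, field names, and target URL are illustrative assumptions only, not StormForge's actual scenario DSL.

```javascript
// Hypothetical sketch of a performance test scenario "in code".
// All names (defineScenario, phase/step shapes, the target URL)
// are illustrative assumptions, not StormForge's real API.
function defineScenario() {
  return {
    name: "checkout-flow",
    target: "https://staging.example.com", // assumed test environment
    phases: [
      { duration: "5m", rampToUsers: 50 }, // ramp up to 50 virtual users
      { duration: "10m", holdUsers: 50 },  // steady-state load
    ],
    steps: [
      { get: "/" },                            // visit the home page
      { get: "/search?q=widget" },             // run a search
      { post: "/cart", body: { sku: "W-1" } }, // add an item to the cart
    ],
  };
}
```

Because the scenario is plain code, it can live next to the component's source, be reviewed in the same pull request, and be triggered from a pipeline via a command line client, which is the shift-left point made above.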

All right, so at this point, we can create test scenarios in code, run tests any time we want, inspect the reporting data to learn about our system's behavior under specific conditions, even automatically. The teams share knowledge and are no longer blocked waiting for other teams or external consultants. They're able to test their own components and actually own those components fully. After doing this in practice for some time, teams will come to the question whether the given configuration of a particular component is the right one and whether it's sufficient for the expected workload. Doing tests like this, actually using that practice, is pretty much a feedback loop where you iterate over and over through performance tests, adjust the test scenarios, make sure you have the right test data and the right test environment, and learn about the behavior. But at one point it always, or mostly, ends at the point where you ask yourself about the configuration. And from then on, it's not about doing one test or a test on a regular basis anymore; it's about conducting a series of tests to learn about different configurations of sometimes tiny parts of the system. You may already know that configuration itself is another complex thing, because finding the right configuration needs time, and to test out configurations you need to define a configuration, deploy the new configuration, run the test, inspect the report, adjust the configuration again, deploy it again, run the test again, and so on, over and over, until, hopefully, you have found enough well-working configurations. There is a lot of guessing in here. Of course you will learn about some configuration combinations which somehow work, but there's still a lot of guessing in here.
So we actually watched teams doing this for weeks, and with weeks I mean full-time weeks. It may be fun to do something like that, because it's always good to work on tasks like this and learn about the system, but you can all think of more interesting things to do than running the same test over and over again, right?

So to explore the problems of configuration testing a bit more, I want to go on with an example. Say we want to give one pod memory resources from 100 megabytes up to maybe 1000 megabytes, and we also want to equip it with CPU resources from one millicore to one full CPU. Given that we allocate each of these resources in 10 steps, we actually, and this is easy math, need to do 100 test runs to check all of those combinations. We probably don't want to check all of the combinations, because we know the biggest one will work and the smallest one probably will not work, but we still have to do a large number of test runs. Given the fact that each of these test runs needs some minutes to run, let's say 30 minutes, and that we also need some time to adjust the configuration, do another deployment, and after a test run need some time to inspect the test results, we can, for example, say one test run will take two hours. And at that point, if you want to do a test series, one person is blocked for at least 25 working days of reconfiguring and running tests. And personally, I think this is a pretty boring task to do, and I feel for anybody who is doing this out there.
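The easy math above can be written out in a few lines. This just restates the numbers from the example (10 steps per dimension, roughly 2 hours per run, 8 working hours per day):

```javascript
// Back-of-the-envelope math for exhaustively testing the example
// configuration grid: 10 memory steps x 10 CPU steps.
const memorySteps = 10;
const cpuSteps = 10;
const combinations = memorySteps * cpuSteps; // 100 test runs

// ~30 min test plus reconfiguration, redeployment, and result
// inspection comes to roughly 2 hours per run.
const hoursPerRun = 2;
const totalHours = combinations * hoursPerRun; // 200 hours

// At 8 working hours per day, that is 25 full-time days.
const workingDays = totalHours / 8; // 25
```

And this is only a two-dimensional grid for a single pod; every extra parameter multiplies the number of combinations again.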

However, this is where StormForge Optimize comes into play. With StormForge Optimize, you automate this whole process. Optimize does the reconfiguration of the pod, the deployment itself, runs the test, looks at your objectives, and starts over with the next test run with the next guess at a maybe valid combination of configuration parameters. Optimize, so to say, searches for the best configuration under your objectives in a very smart way, and most importantly, it improves itself over the ongoing series. As a result, Optimize delivers the data of all test runs and a recommendation for the optimal parameter combination. Besides the recommendation itself, you can check out all configuration candidates and decide on the right tradeoffs to make. You can decide for yourself: is it throughput versus cost that is important to us, or is it something like performance versus throughput? And still, since you have all the data in place from all the test runs, you can decide for yourself which reports you actually want to inspect. Obviously Optimize gives you some recommendations; here in the diagram you can see five orange dots and also that better green dot, which are the recommendations in this picture. So, one underestimated thing is that doing this all manually is of course boring, and you also produce a lot of kind of useless information, because you learn a lot about different configurations which are just bad candidates; they're not sufficient and you actually never have to look at them again, but you had to learn that and you had to check that out. Further, by the time you're done with all the effort of doing these manual tests, you will realize that your system under test has actually changed, because in that time teams have probably deployed a new version of the software, and you are in the situation that you would actually have to start over and over again if you did that manually.
So that's the reason why, of course, you should not do that manually, and should let software like Optimize take over that effort. Further, in my example here, we only looked at one single pod. I don't know about you, but I've never seen a production application running only one pod, and Optimize gives you the ability to run these kinds of series against a large number of pods at the same time. Actually, against a large number of different configurations, not only pod configuration; you can also optimize application configuration, databases for example.

All right, so again, what do we have to do to actually run cloud native workloads successfully? The maybe most important thing is to get used to it and start early with regular testing, continuous performance testing of course, so you can make sure that you address the problem pillars from the very beginning: make sure that you challenge the performance, scalability, reliability, and efficiency of your system. Further, when it comes to the situation where you have to optimize your infrastructure setup, or your pods in this example, it is very time consuming to do this manually, and when it reaches a certain amount of complexity, when you have hundreds of pods for example, it's not doable anymore by any team member. And obviously no team member should do a boring job like that. Team members should gain free time from manual testing and focus on the business critical deliverables, and also make sure that collaboration on requirements and the shared knowledge of the behavior of the system reach a broader audience in the organization. So with that, StormForge helps DevOps teams to successfully run cloud native workloads and challenge the core problem pillars, performance, scalability, reliability, and efficiency, all without the need to focus on manual testing, and also gain better results in a proactive way.

All right, with that said, this was meant to just give an overview of the product. I'm really happy to be here to do some Q&A with the attendees. While we answer questions, you of course have the chance to do a free sign-up at stormforge.io to get started on our free plan, and get used to Performance Testing and also to Optimize. And again, thank you, and I'm happy to answer some questions. Please use the Q&A section here in the webinar to ask your questions. Thank you very much.

So there's one question already: is performance testing or performance engineering the preferable approach, or does that depend on the company, its team, and its culture?

So generally speaking, I would not say that it depends on any approach or any culture of the company. More important is that you actually want to do performance testing for the given reasons, because you're facing the challenges we talked about at the very beginning. You want to make sure that you learn about the behavior, to find solutions to actually face these challenges, and it will definitely help you have a more, let's say, modern culture or shared-knowledge culture. I would definitely say this is the way to go, but users of Performance Testing are also in very, let's say, strict environments where there is, for example, no given community of practice for performance testing. So in my personal opinion, it is important that it will introduce a culture shift, and as I said on some of the slides at the very beginning, it gives you the chance to actually break out of the silos, because this is one of the big problems when it comes to non-functional testing in general. It's often done by external teams or external consultants, which means that those teams in particular have a very good understanding of the non-functional requirements, and whether those requirements are violated or not, but the feedback loop to the engineering teams and to the operations teams is often broken, or at least very slow. And the slowness part is, in my opinion, the one with the biggest business effect, because it sometimes takes weeks until people can fix performance issues and do redeployments, but before they can redeploy that fixed thing, they actually have to wait for another test. And we actually solve that by giving that authority to the teams themselves. Again, the shift-left approach: making sure that the engineers and ops teams actually have the ability to run their tests on their own and making sure that they meet their requirements. I hope that answers the question; if not, please go ahead and ask another one.

How secure is StormForge?

So can you please elaborate on that? Because when it comes to the question of security, there are a lot of different angles. Can you give some more context?

Generally speaking, it's software as a service, so it is a secure service you can log in to. From a testing perspective, when it comes to security, one important thing is obviously that it's complicated to have the right test data in place. Generally speaking, it is not allowed for us as a service, and we don't recommend it at all, to use, for example, personal data as test data. So this is one privacy thing. Then there's another thing when it comes to regulated environments, let's say fintech or banking in general; there StormForge can definitely help you. Since we operate on the HTTP layer, there are a lot of possibilities to make sure that security guidelines are met. One classic example of things in the sphere of security is the question: how can traffic come into my, let's say, restricted network? Even for that there are several features available. First of all, all the traffic generated by performance testing is marked with special headers; you can also, of course, use basic HTTP authentication as another layer; you can then add another layer of allowed IP ranges; and you can also get dedicated IPs to make sure that that traffic is actually allowed to reach the very inner parts of that network. I hope that answers the question in the right direction. Again, the question of how secure StormForge is is a very broad thing, so please give some details and I will try to answer in more depth.

So there is another question regarding machine learning. Machine learning is definitely a versatile method, but are there cases where machine learning doesn't apply well to performance testing in terms of quality of results?

Of course, I would say definitely. So first of all, on the Performance Testing part: the machine learning aspects are found in the Optimize part, so there's no machine learning aspect in the Performance Testing part itself. There are of course some more, let's call it, sophisticated parts when it comes to reporting. I totally agree with you that it is maybe not a good idea to, for example, set up automatic traffic shaping or traffic generation from a machine learning model, and that is not what we're doing here. The machine learning part lives in Optimize, where it helps us to find the best configuration from the configuration candidates. I hope, Paul, that answered your questions.

So when it comes to testing in general, to add to that, it's very important that you have a controlled experiment on the scenario side, and also that you make sure you have a controlled so-called system under test, the target system you're actually testing against.

So another question: how do you define to StormForge what the variables are which could be adjusted within the infrastructure to identify the best deployment of resources?

So this is a very nice question. The easy answer obviously is: you use our YAML file and describe the variables or parameters you want to adjust, and as you can imagine, this can of course get very complex. So you can check out our documentation; there are guides for two ways. First of all, you can create the so-called experiment YAML file yourself, which will definitely be a big file. There's also another, let's call it, abstraction layer on top of that, which is called the app.yaml approach. You can check that out; it will definitely help you, and there may be some other features coming in the future to make that easier. I'm with you that this can get complicated. Hope that answers your question.

So there's a question in the chat. As a reminder, please send your questions to the Q&A section, but I just saw it: how much is the overhead in Kubernetes itself?

So this is also a good question and pretty easy to answer. It obviously depends on how complex your configuration setup is. If you have a lot of pods, with a lot of options, with a lot of parameters to look at to find the right configurations, we have to do continuous redeployment and re-testing, so we consume some resources in Kubernetes. Again, you probably want to make sure that this runs in a controlled environment, and you also want to give it some limits, but this is adjustable.

And the same person is asking: can you do a real demo?

Of course, I will not do this here in the webinar, but please feel free to just register on our request-a-demo page, which should be www.stormforge.io/request-demo/, and then we can do a demo of all the products working together. Happy to do that.

All right, so any more questions, please? Because I think this is the fun part of webinars, right? Is there a workshop or guided lab available? 

All right, that's a very good question, Joseph. There are workshops available; you can talk to our professional services. Usually it is a tailored workshop to make sure that it makes sense for you, not talking about general things like how to do performance testing and how to do configuration optimization; it's more about doing a deep dive on your particular problem. Happy to do that.

Also, there are some guide resources in our documentation which should clarify the common things. Let's say, how to create scenarios or record scenarios from and for a mobile client, or how to do recording from a browser-based session, or what the different types of performance testing actually are, from stress testing, over endurance testing, up to the disciplines of chaos engineering, and how performance tests, or the group of performance testing methods, can help you with that.

So an anonymous attendee is asking: what are the use cases for machine learning in StormForge, and how does it help in achieving the goals?

So this sounds a bit like either you did not really hear what I was talking about or I explained it badly; if so, I'm sorry for that. The main achievement is that we have automation in place which finds the right candidates for configuration, since finding the right configuration will lead you to a large number of test runs, and also a large number of, as we call them, trials to find that correct configuration. Machine learning helps to learn how to search for the best, or optimal, configuration, given that you provided some objectives, the things that matter to you, which are non-functional requirements. Things like: hey, I want the system to answer 99% of the requests in under one second, for example. Hope that answered your question.
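An objective like "99% of requests answered in under one second" is just a percentile check over observed latencies. Here is a small sketch of that check; the nearest-rank percentile helper is illustrative, not how StormForge computes its reporting internally.

```javascript
// Nearest-rank percentile: sort the samples and pick the value at
// the p-th rank. Illustrative only, not StormForge's internals.
function percentile(latenciesMs, p) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// "p% of requests must answer in under thresholdMs milliseconds."
function meetsObjective(latenciesMs, p, thresholdMs) {
  return percentile(latenciesMs, p) < thresholdMs;
}
```

For example, a run with 99 fast requests and one slow outlier still meets a p99-under-one-second objective, but not a p100 one, which is exactly why percentile-based objectives are more practical than "every request must be fast".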

All right, come on people! More questions. Again, this is the fun part. I mean there’s a $100 gift card!

Okay, it seems like this is it. Again, thank you for your time. I personally like that it's a shorter presentation with a good Q&A at the end; I really like that. Thank you very much. Again, if you have any questions, just drop a line to lars@stormforge.io, or register for a demo on our website.

Oh, there's a last question coming in: is there a best practice or two you can share for performance testing while employing DevSecOps?

All right, Paul. So first of all, DevSecOps: we could stay here and just discuss this whole topic, in my personal opinion. I think you are talking about also integrating different security testing approaches, for example in your pipeline, right? So if you ask things like, what should be done first: testing for non-functional requirements from a performance perspective, so running the load test before the security testing in the pipeline, that's a decision you actually have to make for yourself. This highly depends on your use case. For example, I was talking about quality gates when I was talking about how to integrate this into pipelines. This is something customers always ask for: can we create quality gates? That's the reason why I address it. I would not recommend creating quality gates from the results of performance tests in the very first place, because having the right quality gates in place is, in my opinion, a very sophisticated undertaking. You really have to understand what your requirements are and really have tested for them. And it's the complete other way around when it comes to security testing. For example, when you do SAST testing on your code base and there is a problem, you should definitely stop that pipeline. I think this is kind of obvious. And this is something we of course have to juggle with, but performance here is not as important as security. So this is actually one of the best practices. Actually, some people think that we do security testing because stress testing appears among the performance testing methods. The performance tests we're talking about here are actually not addressing any security testing at all, even though security people use load testing and stress testing, for example, to find out how a system behaves and to find security holes.
So again, to your question: one of the best practices is to definitely do quality gates with security tests, depending on your use case at least, and not to do quality gates with performance testing in the very first place, even though you can do that.

I would say that right now there's no other best practice which really comes to mind. Both topics obviously are super important: one is dealing with security, and the other one is dealing with the problems which come out of dealing with distributed systems. In the end, it's a business trade-off which one you start with first; eventually, you have to deal with both of them. Hope that answered that last question.

So forget that I said "last question". Just shoot, because I think now you guys are getting warmed up. And I think you actually came back with another question, so let me read that first.

So I'll read the question aloud: no doubt this is very dependent on the complexity of the system, but given that executing performance tests can be expensive, as you will want to emulate a production environment, how quickly can we identify where the bottlenecks are in the system? Can we define where we suspect the bottleneck might be to attempt adjustments first, or perhaps weigh where the adjustments should be prioritized?

So absolutely. This is kind of a pro question. Let me try to rephrase it. The question is: is your test strategy to have, let's call it, an end-to-end load test scenario which sends traffic to all the endpoints of your application, or do you focus on specific endpoints where you know that behind those endpoints there is infrastructure which is somehow weak, or where you suspect there may actually be a bottleneck? A good example of that is when people change databases. Let's say you're running an e-commerce shop and you change your search database from, I don't know, Solr to Elasticsearch, and then you want to find out: is there a bottleneck with the new search service, and so with the database? Then of course you will focus your scenario on that part of the whole system. And this is a very good approach; nevertheless, you should make sure that you also send traffic to the rest of the system, because usually, and this is a common thing and an artifact of distributed systems, you cannot be sure that there are no side effects. So in practice, think of a scenario definition in JavaScript, and let's say the first third is about visiting your home page and going to a search page, for example, and the last two thirds are about actually doing that search and going through a search result to a detail page. Given that, the easiest thing you can do is just get rid of the first third of the scenario for a particular test run. Or you can even try to set variables: for example, if I want to run the test run with an environment variable set, something like NO_HOME_PAGE=true, then you can respect that in the test scenario and make sure that you do not send traffic to the home page, but you do send the traffic to the search, for example, and with that you focus on a particular part.
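The environment-variable trick just described can be sketched in a few lines. The step shapes and the NO_HOME_PAGE variable name follow the spoken example; they are illustrative, not a specific StormForge API.

```javascript
// Build the scenario's step list, optionally skipping the home-page
// section so a run can focus traffic on the suspected bottleneck
// (here, the search backend). Step shapes are illustrative.
function buildSteps(env) {
  const steps = [];
  if (env.NO_HOME_PAGE !== "true") {
    steps.push({ get: "/" });             // first third: visit the home page
  }
  steps.push({ get: "/search?q=shoes" }); // focus: the search service
  steps.push({ get: "/products/123" });   // search result detail page
  return steps;
}
```

Running the same scenario file with `NO_HOME_PAGE=true` then yields a focused test, while the default run still covers the rest of the system to catch side effects.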

So there's one question; I'll read it: in performance testing, are you dealing only with CPU-intensive and memory-intensive tests, like IOPS, or are you also targeting scalability?

So that's a very good question. In general, it's not only about CPU and memory, of course, and we do not only target that. I took that as an example, and I'm sorry if this was misunderstood. In general, with the performance testing solution, you can target whatever optimization view you want. As you maybe know, there are a lot of different performance testing methods, so of course CPU and memory are among the things to check out or to test for, which are kind of obvious, but there are a lot of other aspects around that. For example, connection limits or networking configuration in general, and of course IOPS, as you just pointed out. And the performance testing solution itself is thought of as an HTTP-based load generator, which gives you the ability to actually create the scenarios in code; then you can create whatever traffic you want to make sure that you actually stress your system, and it's not only about stress testing, but about applying the traffic you need to the system to check out the things you want to aim for. And again, especially in a distributed system, things get very complex, and even if you just looked at CPU and/or memory, you would have a lot of work to do if you looked at each of the single components with single tests.

So somebody has a lot of good questions; I will just follow up. It's the same person, and the question is: how is machine learning used here? Could you please elaborate more on that?

Okay, so first of all the problem is that, as you know, we have a lot of configuration options in the cloud and in Kubernetes. Again, it's not only about CPU or memory; that is just an example. We can add connection limits here, for example. We can add, I don't know, specific database configuration. It depends highly on what your application consists of. And the machine learning tool, or Optimize itself, first of all takes all the objectives and all the configuration parameters you want to optimize. You give it a range, let's say, as in my example, the range for CPU resources, the range for the resource requests and limits, but maybe also configuration which is bound to an application pod, let's say for a database, where you actually change the configuration of the database itself. Optimize takes those, let's say, configuration scenarios, so the combinations of all the configuration parameters, does the redeployments of the pods or the whole application, tests against that, finds out whether the objectives were met, and if so, it will search for another configuration. And if the objectives were not met, it may stop the test run and also search for another combination out of the whole bunch of configuration parameters. The machine learning model itself helps here because it learns over time about the overall space of configuration combinations for the whole application and makes sure that we search it, so to say, in a very fast and efficient way. And over time, again, it learns what is kind of the fastest way to find the best configurations. I hope that answers your question; if not, please come up with another question on that.
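The trial loop just described can be sketched in heavily simplified form. Optimize's actual ML-driven search chooses the next candidate far more cleverly than walking a fixed list, but the loop shape (deploy, test, check objectives, keep the best passing candidate) is the same. All names here are illustrative.

```javascript
// Simplified optimization loop: run a trial per configuration
// candidate, discard trials that miss the objectives, and keep the
// cheapest candidate that still passes. Illustrative sketch only.
function findBestConfig(candidates, runTrial, objectivesMet) {
  let best = null;
  for (const config of candidates) {
    const result = runTrial(config);      // deploy + run the load test
    if (!objectivesMet(result)) continue; // failed trial: try the next one
    if (best === null || result.cost < best.result.cost) {
      best = { config, result };          // cheapest passing config so far
    }
  }
  return best; // recommendation, or null if nothing met the objectives
}
```

The point of the machine learning is that `candidates` is not a fixed grid: the model proposes the next trial based on what earlier trials revealed, so far fewer of the 100-plus grid combinations ever need to run.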

Okay, yeah. That’s a good one. There’s one person asking the following question: while going after CPU and memory is great, does it also consider tweaking the settings for pod disruption budgets or, say, the HPA?

And of course it does. I don’t have the blog link over here, but there’s an article on our blog where I think Thibaut elaborated on how to improve the HPA. I hope that fits; please go to our blog.

So another good question: have people tried the learning model in combination with chaos experiments? I think this is a very good question.

Yes, but I cannot tell you exactly how, because the definition of their experiments is not actually known to me. However, this is a very good thing to think about. Imagine you have a system under test which feels pretty stable, you’re running a test series against it, maybe operated by Optimize, and meanwhile you also do chaos engineering activities; then of course that will have an impact on the test results. This is, by the way, a good game-day exercise or a good experiment to do, but you should define the goal you’re actually aiming for and what you want to learn from it. In general, I would say these are different disciplines. One thing is doing chaos experiments, those kinds of chaos testing activities, in a safe or controlled environment. The other thing is applying a traffic model against your whole system while making sure you stay in control of the system under test and do not disturb it, with chaos engineering practices for example. But again, I would not say it’s not allowed, so why not do that.

So there’s another question: is agile required as a best practice for performance testing, or are there other methodologies that would work, such as spiral development?

Oh my gosh. I said that because I would not say that agile is required, and to be honest, I cannot really answer what that should mean, because the definition of agile is very broad. In my personal opinion, it depends very much on how your organization is structured and what kind of agile methodologies you’re actually using. But if you apply performance testing, it should fit into your process, whatever and however your process looks. And actually I have no idea about spiral development; I’ve never done it before, so I’m happy if you enlighten me later and send me an email about that. I’m happy to discuss it. It’s interesting. Thank you.

So there’s a nice open source question: since you mentioned cloud native with all the open source technologies, which open source technologies are covered by StormForge? It doesn’t matter what kind of technology you actually use. Think of it like this: you’re running your system, let’s say on Kubernetes, and in your system you run different pods, or you even run, let’s say, a service mesh using Istio or something like that. What performance testing will do is send traffic against your application as you define it in the scenario, and you use our Kubernetes controller from Optimize to actually do the redeployment with the right configuration of your application. So you of course have to define what should be done for the deployment, but we actually do not care about what products or technologies you are using to solve your business problems.
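To illustrate why the technology stack doesn’t matter, here is a small sketch of what the redeployment step conceptually does: take an application’s Deployment manifest and patch in the trial’s resource configuration before rolling it out. This is plain dictionary manipulation for illustration; in a real cluster the Optimize controller (or your own CD tooling) would apply the patched manifest, and the manifest shown is a deliberately minimal, hypothetical example.

```python
# Sketch: patch a trial's CPU/memory requests into a Deployment
# manifest. Illustrative only -- not the Optimize controller's code.

def apply_trial_resources(manifest, cpu_millicores, memory_mib):
    """Set resource requests on every container in the Deployment."""
    for container in manifest["spec"]["template"]["spec"]["containers"]:
        container["resources"] = {
            "requests": {
                "cpu": "%dm" % cpu_millicores,
                "memory": "%dMi" % memory_mib,
            },
        }
    return manifest

# Minimal hypothetical Deployment manifest (irrelevant fields omitted).
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "app", "image": "app:1.0"}],
            }
        }
    },
}

patched = apply_trial_resources(deployment, cpu_millicores=500, memory_mib=512)
```

Because the patch only touches generic Kubernetes fields, the same mechanism works regardless of what runs inside the containers.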

So, a pricing question. It’s an interesting question, and as you know, it depends. Please write us an email and we can elaborate on that.

So I’m going through the rest of the Q&A. By the way, my colleague just sent out the HPA article.

I don’t understand that question regarding open source technologies and the on top of the application…

All right, I’m going through the questions, give me a second…

So there’s an interesting question, which I actually cannot answer, to be honest: have you heard of anyone replacing or augmenting the kube-scheduler with machine learning models to optimize bin packing? This would be ML-tuned optimization at runtime rather than only at initial scheduling time.

I’ve never personally heard about that, and by the way, I’m not a Kubernetes super pro. However, that’s also an interesting question, because we think machine learning should be applied in situations where it makes sense. We don’t think that machine learning should make people obsolete; I don’t believe that. In my example, I showed that machine learning helps a lot in getting rid of the boring work, but we also need to stay in control of it. So I have no idea whether you would actually want to do that, but again, I’m happy if you write us an email and we discuss that question further.

All right, folks, if there are no more questions… Thank you very much for coming. I hope you enjoyed it. Again, feel free to sign up at www.stormforge.io to start with Performance Testing, and then go on to try Optimize. If you have any questions, just drop us a line and we’re happy to hear from you. Thank you very much.