Reducing cloud waste by optimizing Kubernetes with machine learning
Applications are proliferating, cloud complexity is exploding, and Kubernetes is prevailing as the foundation for application deployment in the cloud. That sounds like an optimization task ripe for machine learning, and StormForge is acting on that.
The cloud has become the de facto standard for application deployment. Kubernetes has become the de facto standard for application deployment in the cloud. Optimally tuning applications deployed on Kubernetes is a moving target, which means applications may be underperforming or overspending. Could that problem somehow be solved with automation?
That’s a reasonable question to ask, and one that others have asked as well. As Kubernetes evolves and grows more complex with each iteration, and as the options for deploying on the cloud proliferate, fine-tuning application deployment and operation becomes ever more difficult. That’s the bad news.
The good news is, we have now reached a point where Kubernetes has been around for a while, and tons of applications have used it throughout its lifetime. That means there is a body of knowledge — and crucially, data — that has been accumulated. What this means, in turn, is that it should be possible to use machine learning to optimize application deployment on Kubernetes.
StormForge has been doing that since 2016. So far, they have been targeting pre-deployment environments. As of today, they are also targeting Kubernetes in production. We caught up with CEO and Founder Matt Provo to discuss the ins and outs of StormForge’s offering.