OctoML announces the latest release of its platform, exemplifying growth in MLOps

OctoML is announcing the latest release of its platform to automate the deployment of production-ready models across the broadest array of clouds, hardware devices and machine learning acceleration engines.

Benchmark and deploy your machine learning models on AWS, Azure, and Google Cloud, or at the edge, on AMD, Arm, Intel, and Nvidia hardware. Improve performance using open source frameworks such as ONNX Runtime, TensorFlow, TensorFlow Lite, and TVM.
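To give a flavor of the kind of workflow the platform automates, here is a minimal, hypothetical sketch of benchmarking a model with ONNX Runtime, one of the acceleration engines mentioned above. This is not OctoML's API; the model file, input name, and input shape are placeholder assumptions.

```python
# Minimal sketch (not OctoML's API): rough latency benchmark of an ONNX model
# with ONNX Runtime. "model.onnx" and the input shape are hypothetical.
import time
import numpy as np
import onnxruntime as ort

# Choose an execution provider for the target hardware; CPU here, while
# "CUDAExecutionProvider" would target Nvidia GPUs with the GPU build installed.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed image-like input

# Time a batch of inference runs to estimate average latency.
runs = 50
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: dummy_input})
elapsed = time.perf_counter() - start
print(f"average latency: {elapsed / runs * 1000:.2f} ms")
```

In practice, a platform like OctoML's repeats this kind of measurement across many hardware targets and acceleration engines and picks the best-performing deployment option, rather than requiring the user to script it by hand.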

That’s OctoML’s offering in a nutshell. We think it paints a representative picture of today’s landscape in AI application deployment, a domain also known as MLOps. We have identified MLOps as a key part of the ongoing shift to machine learning-powered applications, and introduced OctoML in March 2021, on the occasion of its Series B funding round.

Launched today at TVMcon 2021, the conference around the open source Apache TVM framework for machine learning acceleration, OctoML’s new release brings a number of new features. We caught up with OctoML CEO and Co-founder Luis Ceze to discuss OctoML’s progress, as a proxy for MLOps progress at large.

The first thing to note in this progress report of sorts is that OctoML has exceeded the goals set out by Ceze in March 2021. Ceze noted back then that the goals for the company were to grow its headcount, expand to the edge, and make progress towards adding support for training machine learning models, beyond inference, which was already supported.

Read the full article on ZDNet


