Run:AI takes your AI and runs it, on the super-fast software stack of the future

Startup Run:AI exits stealth, promises a software layer to abstract over many AI chips

It’s no secret that machine learning in its various forms, most prominently deep learning, is taking the world by storm. Some side effects of this include the proliferation of software libraries for training machine learning algorithms, as well as specialized AI chips to run those demanding workloads.

The time and cost of training new models are the biggest barriers to creating new AI solutions and bringing them to market quickly. Experimentation is needed to produce good models, and slightly modified training workloads may be run hundreds of times before they are accurate enough to use. The result is very long time-to-delivery, with workflow complexity and costs growing along the way.
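To make that concrete, here is a minimal, hypothetical Python sketch of how a routine hyperparameter sweep multiplies into hundreds of training runs. The parameter names and the `train` entry point are illustrative, not anything from Run:AI:

```python
# Illustrative sketch: a naive grid search over training hyperparameters.
# Each combination is a full training run, which is how "slightly modified"
# workloads add up to hundreds of jobs before a model is good enough to ship.
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [32, 64, 128]
model_depths = [4, 8, 16]
random_seeds = range(5)

jobs = list(product(learning_rates, batch_sizes, model_depths, random_seeds))
print(f"{len(jobs)} training runs queued")  # 3 * 3 * 3 * 5 = 135 runs

for lr, batch_size, depth, seed in jobs:
    # Hypothetical training entry point; in practice, each of these runs
    # could occupy one or more GPUs for hours.
    # train(lr=lr, batch_size=batch_size, depth=depth, seed=seed)
    pass
```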

Today, Tel Aviv startup Run:AI exits stealth mode with the announcement of $13 million in funding for what sounds like an unorthodox solution: rather than offering yet another AI chip, Run:AI offers a software layer to speed up machine learning workload execution, on premises and in the cloud.

The company works closely with AWS and is a VMware technology partner. Its core value proposition is a management platform that bridges the gap between diverse AI workloads and the various hardware chips they run on, resulting in a fast, efficient AI computing platform.

When we first heard about it, we were skeptical. A software layer that sits on top of hardware sounds a lot like virtualization. Is virtualization really a good idea when the goal is to stay as close to the metal as possible and squeeze every last bit of performance out of AI chips? Here is what Omri Geller, Run:AI co-founder and CEO, thinks:

“Traditional computing uses virtualization to help many users or processes share one physical resource efficiently; virtualization tries to be generous. But a deep learning workload is essentially selfish, since it requires the opposite: it needs the full computing power of multiple physical resources for a single workload, without holding anything back. Traditional computing software just can’t satisfy the resource requirements for deep learning workloads.”
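To illustrate what Geller means, here is a minimal, generic PyTorch sketch of a "selfish" deep learning job, using standard DistributedDataParallel rather than anything specific to Run:AI's stack: one training process per GPU, with each device claimed in full for the duration of the run.

```python
# Generic PyTorch sketch of a "selfish" workload: one job spanning every
# available GPU, each device dedicated entirely to that single job.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int):
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    # One process per GPU; NCCL keeps their gradients in sync.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = DDP(torch.nn.Linear(1024, 1024).to(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    for _ in range(100):
        x = torch.randn(64, 1024, device=rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # all-reduce across every GPU, every step
        opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()  # the job grabs all of them
    torch.multiprocessing.spawn(train, args=(n_gpus,), nprocs=n_gpus)
```

Note the inversion of what classic virtualization optimizes for: instead of many tenants sharing one device, one tenant monopolizes many devices.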

So even though this sounds like virtualization, it is virtualization of a different kind. Run:AI claims to have completely rebuilt the software stack for deep learning to get past the limits of traditional computing, making training massively faster, cheaper, and more efficient.

Still, AI chip manufacturers have their own software stacks, too. Presumably, they know their own hardware better. Why would someone choose a third-party software layer like Run:AI? And which AI chips does Run:AI support?

Read the full article on ZDNet



