Running AI workloads is coming to a virtual machine near you, powered by GPUs and Kubernetes

Run:AI offers a virtualization layer for AI, aiming to simplify the management of AI infrastructure. The company is seeing strong traction and has just raised a $75M Series C funding round. Here’s how the evolution of the AI landscape has shaped its growth.

“Run:AI takes your AI and runs it on the super-fast software stack of the future.” That was the headline of our 2019 article on Run:AI, which had then just exited stealth. We like to think the headline remains accurate, and Run:AI’s unconventional approach has fueled rapid growth since.

Run:AI, which touts itself as an “AI orchestration platform”, today announced that it has raised $75M in a Series C round led by Tiger Global Management and Insight Partners, who also led the previous Series B round. The round includes the participation of existing investors TLV Partners and S Capital VC, bringing the total funding raised to date to $118M.

We caught up with Omri Geller, Run:AI CEO and co-founder, to discuss AI chips and infrastructure, Run:AI’s progress, and the interplay between them.

Read the full article on ZDNet
