Nvidia doubles down on AI language models and inference as a substrate for the Metaverse, in data centers, the cloud and at the edge

At Nvidia’s GTC event today, CEO Jensen Huang made a series of announcements that the company claims could transform multi-trillion-dollar industries. Here we cherry-pick from those announcements, focusing on the hardware and software infrastructure that powers the headline-grabbing applications.

GTC, Nvidia’s flagship event, is always a source of announcements around all things AI, and the fall 2021 edition is no exception. Huang’s keynote emphasized what Nvidia calls the Omniverse: its virtual-world simulation and collaboration platform for 3D workflows, which brings the company’s technologies together.

Based on what we’ve seen, we would describe the Omniverse as Nvidia’s take on the Metaverse. You can read more about the Omniverse in Stephanie Condon and Larry Dignan’s coverage here on ZDNet. What we can say is that for something like this to work, a confluence of technologies is indeed needed.

So let’s go through some of the updates in Nvidia’s technology stack, focusing on components such as large language models (LLMs) and inference.

Read the full article on ZDNet

