NVIDIA unveils Hopper, its new hardware architecture to transform data centers into AI factories
NVIDIA just announced Hopper, a new GPU architecture that promises significant performance improvements for AI workloads. We look under the hood to see whether the emphasis on Transformer AI models amounts to a radical redesign, and examine the updates in the software stack.
NVIDIA did it again, but this time with a twist: it appears to have borrowed a page from the competition's playbook. At NVIDIA GTC, which has grown into one of the AI industry's most important events, the company announced the latest iteration of its hardware architecture and products. Here's a breakdown of the announcements and what they mean for the ecosystem at large.
GTC, which began Monday and runs through Thursday, features 900+ sessions. More than 200,000 developers, researchers, and data scientists from 50+ countries have registered for the event. At his GTC 2022 keynote, NVIDIA founder and CEO Jensen Huang announced a wealth of news in data center and high-performance computing, AI, design collaboration and digital twins, networking, automotive, robotics, and healthcare.
Huang's framing was that "companies are processing, refining their data, making AI software … becoming intelligence manufacturers." If the goal is to transform data centers into "AI factories," as NVIDIA puts it, then placing Transformers at the heart of the effort makes sense.
The centerpiece of the announcements is the new Hopper GPU architecture, which NVIDIA dubs "the next generation of accelerated computing." Named for Grace Hopper, a pioneering U.S. computer scientist, the new architecture succeeds the NVIDIA Ampere architecture, launched two years ago. The company also announced its first Hopper-based GPU, the NVIDIA H100.