AI chips for big data and machine learning: Hard choices in the cloud and on-premise

How can GPUs and FPGAs help with data-intensive tasks such as operations, analytics, and machine learning, and what are the options?

Applications and infrastructure evolve in lock-step. That point has been amply made, and in this era of AI regeneration, infrastructure is both enabling AI applications to make sense of the world and evolving to better serve their needs.

As things usually go, the new infrastructure stack to power AI applications has been envisioned and given a name — Infrastructure 3.0 — before it is fully fledged. We set off to explore both the obvious, here and now, and the less obvious, visionary parts of this stack.

To keep things manageable, we will limit ourselves to “specialized hardware with many computing cores and high bandwidth memory” and call it AI chips for short. We look at how these AI chips can benefit data-centric tasks, from operational databases and analytics to machine learning (ML).
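The advantage of many-core hardware comes from data parallelism: the same operation applied independently to many data elements at once, whether that is an aggregation in an analytics query or the matrix math inside an ML model. As a minimal, illustrative sketch of that pattern, the snippet below fans a batch of independent vector operations out across CPU cores using only the Python standard library; the workload and numbers are stand-ins, not a benchmark, and a GPU or FPGA would apply the same idea at the scale of thousands of cores.

```python
# Sketch of the data-parallel pattern AI chips exploit: the same
# operation applied independently to many data elements. CPU processes
# stand in for the many cores of a GPU; workload is illustrative only.
from concurrent.futures import ProcessPoolExecutor


def dot(pair):
    """Dot product of two equal-length vectors -- the core operation
    behind both analytics aggregations and ML matrix math."""
    a, b = pair
    return sum(x * y for x, y in zip(a, b))


if __name__ == "__main__":
    # One "batch": many independent vector pairs, as in a matrix multiply.
    batch = [([1.0] * 64, [2.0] * 64) for _ in range(8)]
    # Each pair reduces independently, so the pairs can run in parallel.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(dot, batch))
    print(results[0])  # -> 128.0 for each pair
```

The point of the sketch is the shape of the computation, not the speed: because each element of the batch is independent, throughput scales with the number of cores, which is exactly what GPUs and FPGAs provide in abundance.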

Read the full article on ZDNet

Join the Orchestrate all the Things Newsletter

Stories about how Technology, Data, AI and Media flow into each other shaping our lives. Analysis, Essays, Interviews, News. Mid-to-long form, 1-3 times/month.
