Resisting the urge to be impressed, knowing what we talk about when we talk about AI

A brief history of recent advances in AI, what they mean, and why you should care — or not.

The barrage of new AI models released by the likes of DeepMind, Google, Meta and OpenAI is intensifying. Each is different in some way, and each renews the conversation about achievements, applications, and implications.

Imagen, like DALL-E 2, Gato, GPT-3 and other AI models before it, is impressive, but maybe not for the reasons you think. Here’s a brief account of where we are in the AI race, and what we have learned so far.

At this pace, it’s getting harder even to keep track of releases, let alone analyze them. Let’s start this timeline of sorts with GPT-3, which we choose as the baseline and starting point for a number of reasons.

OpenAI’s creation was announced in May 2020, which already feels like a lifetime ago. That has been enough time for OpenAI to build a commercial service around GPT-3, exposing it as an API via a partnership with Microsoft.

Read the full article on ZDNet
