4 AI research trends everyone is (or will be) talking about

Using AI in the real world remains challenging in many ways. Organizations are struggling to attract and retain talent, build and deploy AI models, define and apply responsible AI practices, and understand and prepare for compliance with emerging regulatory frameworks.

At the same time, the DeepMinds, Googles and Metas of the world are pushing ahead with their AI research. Their talent pool, experience and processes for operationalizing AI research rapidly and at scale put them on a different level from the rest of the world, creating a de facto AI divide.

These are 4 AI research trends that the tech giants are leading on, but which everyone else will be talking about and using in the near future.

One of the key talking points regarding the way forward in AI is whether scaling models up can yield qualitatively different capabilities. Recent work by a group of researchers from Google Research, Stanford University, UNC Chapel Hill and DeepMind says it can.

Their research discusses what they refer to as emergent abilities of large language models (LLMs). An ability is considered emergent if it is not present in smaller models but is present in larger models. The thesis is that the existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.
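To make the definition concrete, here is a minimal sketch of how one might flag an ability as emergent across a series of model scales. The numbers, the chance baseline, and the `is_emergent` helper with its `margin` parameter are all illustrative assumptions for this sketch, not data or code from the paper.

```python
# Minimal sketch: flagging an "emergent" ability across model scales.
# All numbers below are illustrative assumptions, not results from the paper.

RANDOM_BASELINE = 0.25  # chance accuracy on a hypothetical 4-way multiple-choice task

# Hypothetical accuracy on the same task at increasing model sizes (parameters).
scores_by_scale = {
    1e8: 0.24,   # smaller models hover at random chance...
    1e9: 0.26,
    1e10: 0.25,
    1e11: 0.71,  # ...then performance jumps well above chance at the largest scale
}

def is_emergent(scores, baseline, margin=0.05):
    """An ability is 'emergent' if all smaller models perform at chance
    while the largest model performs clearly above it."""
    sizes = sorted(scores)
    smaller_at_chance = all(abs(scores[s] - baseline) <= margin for s in sizes[:-1])
    largest_above_chance = scores[sizes[-1]] > baseline + margin
    return smaller_at_chance and largest_above_chance

print(is_emergent(scores_by_scale, RANDOM_BASELINE))  # True
```

The point of the sketch is the shape of the curve: performance stays near random across several orders of magnitude of scale, then jumps discontinuously, which is what makes such abilities hard to predict by extrapolating from smaller models.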

Read the full article on VentureBeat
