Massaging AI language models for fun, profit and ethics

Do AI language models really demonstrate intelligence? What about morality? Is it OK to tweak them, and if so, who gets to do it, and how do the rest of us know?

Do statistics amount to understanding? And does AI have a moral compass? On the face of it, both questions seem equally whimsical, with equally obvious answers. As the AI hype reverberates, however, such questions seem bound to be asked time and again. State-of-the-art research can help probe them.

Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e. profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent, writes Gary N. Smith on Mind Matters.

Smith is the Fletcher Jones Professor of Economics at Pomona College. His widely cited research on financial markets, statistical reasoning, and artificial intelligence often involves stock market anomalies, statistical fallacies, and the misuse of data. He is also an award-winning author of a number of books on AI.

In his article, Smith sets out to explore the degree to which Large Language Models (LLMs) may approximate real intelligence. The idea behind LLMs is simple: use massive datasets of human-produced text to train machine learning algorithms, with the goal of producing models that simulate how humans use language.
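To make that idea concrete, here is a toy sketch in Python. A simple next-token frequency counter stands in for the deep neural networks actual LLMs use, and the one-line corpus is an invented placeholder; it illustrates only the general "predict the next token from statistics of human text" principle, not Smith's argument or any real LLM implementation.

```python
# Toy sketch of the LLM idea: learn next-token statistics from a text
# corpus, then generate text by sampling from those statistics.
# A bigram counter stands in for the transformer networks real LLMs use.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."  # placeholder data
tokens = corpus.split()

# "Training": count how often each token follows each preceding token.
next_counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    next_counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next token."""
    out = [start]
    for _ in range(length):
        counts = next_counts.get(out[-1])
        if not counts:  # no observed continuation for this token
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look superficially fluent while resting on nothing but co-occurrence counts, which is precisely the gap between statistics and understanding that Smith's article interrogates.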

Read the full article on ZDNet



