Artifictional Intelligence: is the Singularity or the Surrender the real threat to humanity?


Artificial intelligence is one of those things: overhyped and yet mystical, the realm of experts and yet something everyone is inclined to have an opinion on. Harry Collins is no AI expert, and yet he seems to get it in a way we could only wish more experts did.

Collins is a sociologist. In his book “Artifictional Intelligence – Against Humanity’s Surrender to Computers”, out today from Polity, Collins does many interesting things. To begin with, he explains what qualifies him to have an opinion on AI.

Collins is a sociologist of science at the School of Social Sciences, Cardiff University, Wales, and a Fellow of the British Academy. Part of his expertise is dealing with human scientific expertise, and therefore, intelligence.

It sounds plausible that figuring out what constitutes human intelligence would be a good start toward figuring out artificial intelligence, and Collins does a great job of it.

The impossibility claims

The gist of Collins’ argument, and the reason he wrote the book, is to warn against what he sees as a real danger: trusting AI to the point of surrendering critical thinking, and entrusting AI with more than we really should. This is summarized by his two “impossibility claims”:

1. No computer will be fluent in natural language, pass a severe Turing test and have full human-like intelligence unless it is fully embedded in normal human society.

2. No computer will be fully embedded in normal human society as a result of incremental progress based on current techniques.

There is quite some work needed to back up those claims, of course, and this is what Collins does throughout the ten chapters of his book. Before we embark on this kind of meta-journey of summarizing his approach, however, it might be good to start with some definitions.

The Turing test is a test designed to identify “real” AI. At its core, it seems simple: a human tester interacts with an AI candidate in a conversational manner. If the human cannot distinguish the AI candidate from a human, then the AI has passed the Turing test and is said to display real human-like intelligence.

The Singularity is the hypothesis that the appearance of “real” artificial intelligence will lead to artificial superintelligence, bringing unforeseen consequences and unfathomable changes to human civilization. Views on the Singularity are typically polarized, seeing the evolution of AI as either ending human suffering and cares or ending humanity altogether.

This is actually a good starting point for Collins to ponder the anthropomorphizing of AI. Why, Collins asks, do we assume that AIs would want the same things that humans want, such as dominance and affluence, and thus pose a threat to humanity?

This is a far-reaching question. It serves as a starting point to ask more questions about humanity, such as why people are, or are seen as, individualistic, how people learn, and what role society plays in learning.

Social Science

Science, and learning, argues Collins, do not progress in a monotonous way, but in a modulated one. What this means is that rather than seeing knowledge acquisition as uncovering and unlocking a set of predefined, eternal truths or rules, its progress also depends on interpretation and social cues. It is, in other words, subject to co-production.

This applies, to begin with, to the directions knowledge acquisition will take. A society for which witches are a part of the mainstream discourse, for example, will have very different priorities than one in which symptomatic medicine is the norm.

But it also applies to the way observations, and data, are interpreted. This is a fundamental aspect of science, according to Collins: the data are *always* out there. Our capacity for collecting them may fluctuate with technical progress, but it is the ability to interpret them that really constitutes intelligence, and that does have a social aspect.

Collins leverages his experience from social embedding as practiced in sociology to support his view. When dealing with a hitherto unknown and incomprehensible social group, a scholar would not be able to understand its communication unless s/he is in some way embedded in it.

All knowledge is social, according to Collins. Image: biznology

Collins argues for the central position of language in intelligence, and ties it to social embedding. It would not be possible, he says, to understand a language simply by statistical analysis. Not only would that miss all the subtle cues of non-verbal communication, but, as opposed to games such as Go or chess that have been mastered by computers, language is open-ended and ever-evolving.

Collins also introduces the concept of interactional expertise, and substantiates it based on his own experience over a long period of time with a group of physicists working in the field of gravitational waves.

Even though he will never be an expert who produces knowledge in the field, Collins has been able to master the topics and the language of the group over time. This has not only gotten him accepted as a member of the community, but has also enabled him to pass a blind test.

A blind test is similar to a Turing test: a judge, who is a practising member of the community, was unable to distinguish Collins, a non-practising member, from another practising member, based on their answers to domain specific questions. Collins argues this would never have been possible had he not been embedded in the community, and this is the core of the support for his first impossibility claim.

Top-down or Bottom-up?

As for the second impossibility claim, it has to do with the way AI works. Collins has one chapter dedicated to the currently prevalent technique in AI called Deep Learning. He explains how Deep Learning works in an approachable way, which boils down to pattern recognition based on a big enough and good enough body of precedents.

The fact that there are more data (digitized precedents) and more computing power (thanks to Moore’s Law) today is what has enabled this technique to work. It is not really new; it has been around for decades. It’s just that until now we did not have enough data and processing power to make it work reliably and fast enough.
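To make the “pattern recognition based on precedents” idea concrete, here is a minimal, illustrative sketch (not from the book): a single artificial neuron, the simplest ancestor of today’s deep networks, adjusts its weights from a handful of example precedents until its outputs reproduce the pattern in the data. The training data (the logical AND of two inputs) and all names are invented for illustration.

```python
# A single-neuron (perceptron) learner: bottom-up pattern recognition
# from precedents. Deep Learning stacks many layers of such units,
# but the principle -- adjust weights to fit examples -- is the same.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # Predict 1 if the weighted sum crosses the threshold
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge weights toward reducing the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "precedents": inputs and the outcome we want reproduced (AND)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in examples]
print(predictions)  # [0, 0, 0, 1] -- the learned pattern matches the precedents
```

Crucially, the neuron never “knows” what AND means; it only fits the pattern in the precedents it was shown, which is exactly the limitation Collins goes on to probe.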

In the spirit of investigating the principle, not the technicalities, behind this approach, Collins concedes some points to its proponents. First, he assumes technical capacity will not slow down, and will soon reach the point of being able to use all human communication in transcribed form.

Second, he accepts a simplified model of the human brain as used by Ray Kurzweil, one of AI’s more prominent proponents. According to this model, the human brain is composed of a large number of pattern recognition elements. In this view, all intelligence boils down to advanced pattern recognition, or bottom-up discovery of pre-existing patterns.

Top-down, or bottom-up? Image: Organizational Physics

Collins argues, however, that although pattern recognition is a necessary precondition for intelligence, it is not sufficient. Patterns alone do not equal knowledge: there needs to be some meaning attached to them, and for this, language and social context are required. Language and social context are top-down constructs.

Collins, therefore, introduces an extended model of the human brain, in which additional inputs are processed, coming from social context. This, in fact, is related to another approach in AI, labeled symbolic AI. In this top-down approach, instead of relying exclusively on pattern recognition, the idea is to encode all available knowledge in a set of facts and rules.
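A toy sketch can show what “facts and rules” means in the symbolic, top-down approach: knowledge is written down explicitly, and new facts are derived by chaining rules, rather than learned from precedents. The specific facts and rules below are invented for illustration and are not from the book.

```python
# Symbolic AI in miniature: explicit facts, explicit if-then rules,
# and a forward-chaining loop that derives everything the rules imply.

facts = {"socrates_is_human"}
rules = [
    # (set of premises, conclusion)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are already known
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

The contrast with the perceptron is the point: here nothing is learned from data, and everything the system “knows” had to be encoded by hand, which is why encoding all knowledge, especially tacit knowledge, is the hard part of this approach.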

Collins admits that his second impossibility claim is weaker than the first one. The reason is that technical capacity may reach a point that enables us to encode all available knowledge, even tacit knowledge, a task that seems out of reach today. But then again, many things that are commonplace today seemed out of reach yesterday.

In fact, the combination of bottom-up and top-down approaches to intelligence that Collins stands behind is one many AI experts advocate as well: the most promising path to AI may not be Deep Learning alone, but a combination of Deep Learning and symbolic AI. To his credit, Collins is open-minded about this, has had very interesting conversations with leading experts in the field, and has incorporated them in the book.

Technical understanding and Ideology

There are many more interesting details that could not possibly fit in a book review: Collins’ definition of six levels of AI, the fractal model of knowledge, an exploration of what an effective Turing test would be, and more.

The book is a tour de force of epistemology for the masses: easy to follow, and yet precise and well-informed. Collins tiptoes his way around philosophy and science, from Plato to Wittgenstein to AI pioneers, in a coherent way.

He also touches on issues such as the roots of capitalism or what drives human behavior, although he seems to have made a conscious choice not to go into them, possibly in the spirit of not derailing the conversation or alienating readers. In any case, his book will not only make AI approachable, but will also make you think about a variety of topics.

And, in the end, it does achieve what it set out to do. It gives a vivid warning against the Surrender, which should be about technical understanding, but perhaps even more so about ideology.

Collins, Harry M. (2018). Artifictional Intelligence: Against Humanity’s Surrender to Computers. Cambridge, UK; Malden, Massachusetts: Polity. ISBN 9781509504121.
