Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies: The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, going from the most general to the most specific. Based on the categories of Aristotle, it was introduced by the 3rd century CE logician Porphyry and became a major influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the lattermost term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not as a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar, a new word has lately risen to prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we’re told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to DeepMind’s head of research Nando de Freitas, ‘the game is over’, with merely the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones.

This is often a valuable shortcut (call it the ‘tree-climbing heuristic’): all animals with a kidney belong to the chordates, thus, if you encounter an animal with a kidney, you might want to conclude that it is a chordate. The differentia ‘belongs to the phylum chordata’ thus occupies a higher branch on the Porphyrian tree than ‘has a kidney’. But there is no necessity about this: a starfish-like animal (belonging to the phylum echinodermata) with a kidney seems possible in principle, even if none is known. Lumping it in with the chordates on the basis of its having a kidney would thus be a mistake—the heuristic misfires.

Something of this sort is, I think, at the root of recent claims that LaMDA, a chatbot AI developed by Google, has attained sentience. The reasoning appears to be the following: LaMDA exhibits (some trappings of) intelligence; intelligent beings are sentient (the tree-climbing heuristic: all intelligent beings we know—i.e., humans—are sentient); hence, LaMDA is sentient. Whether this logic is sound then depends on whether it is possible to be intelligent, yet not sentient.
