by Joseph Shieber: Discussions of artificial intelligence are hard to avoid. A recent Pew study, for example, found that 90% of Americans have heard at least something about artificial intelligence – which is astounding when you consider that probably only about 70% of Americans know who the current Vice President is.
Despite this growing awareness of AI, one could argue that, if anything, people are too slow to recognize its potential. That same Pew study noted that only 18% of US adults have tried ChatGPT. This is disturbing, given how wide-ranging the effects of AI promise to be.
For example, as a recent explainer from Our World in Data summarizes, the capabilities of AI systems have improved remarkably in just the past 10 years – particularly in the areas of reading and language comprehension.
This increase in the capacities of AI systems has been mirrored by growing academic interest in artificial intelligence, with the number of scholarly publications related to AI more than doubling over the past decade.
Since I’m going to be arguing that these systems are neither artificial nor intelligent, it will be useful to designate them differently. I’ll call them LLMs, or large language models.
I’ll focus mostly on the question of whether LLMs are intelligent, but it actually seems odd to term them “artificial” as well. Consider other tools that we employ to simplify our lives – washing machines, robot vacuum cleaners, or cars, say. We don’t call our washing machines “artificial”; nor do we say that we drive artificial cars. The reason is that such tools aren’t fake or phony. Rather, they’re genuine aids that make our lives easier.
When you contemplate the remarkable advances of LLMs, it’s hard to deny that such systems are also genuine aids that promise to make our lives easier in a variety of ways. For example, the top-performing large language models continue to improve at a rapid pace, achieving scores ever closer to the performance of human experts on benchmark tests of general and domain-specific knowledge; and though such systems still lag behind in problem-solving ability in mathematics and coding, they continue to make strides in those areas as well.
So it’s actually a misnomer to refer to such systems as “artificial”; far from being fake or phony, they are genuine tools with potentially wide-ranging applications. Given this, however, and considering the remarkable benchmarks these systems have already achieved, it might seem strange to suggest that these systems, though widely referred to as “artificial intelligence,” are not in fact intelligent. Let me explain why I think it is nonetheless a mistake to use the term “artificial intelligence” to refer to these systems.
First, let me say how I won’t be arguing against the intelligence of LLMs. I won’t be appealing to the claim that “LLMs cannot understand, interact with, or comprehend reality” (as a recent article summarized the views of Meta’s chief AI scientist Yann LeCun). Nor will I be arguing that the lack of intelligence of LLMs stems from the fact that they are not embodied (as the philosopher and cognitive scientist Anthony Chemero has suggested). I also won’t appeal to the fact that LLMs are incapable of conscious experience to establish that they aren’t intelligent (as the AI researcher Michael Wooldridge seems to argue when he laments that “LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves.”)
The argument closest to the one that resonates with me appeared in a recent Substack post by Arnold Kling. In that post, Kling suggests that LLMs cannot be intelligent because intelligence “is not a thing at all. It is an ongoing process. It is like science. You should not think of science as a body of absolute truth. Instead, think of the scientific method as a way of pursuing truth.”
Borrowing from Jonathan Rauch, Kling calls intelligence – the process by which we seek truth and avoid error – the “Constitution of Knowledge.” He suggests that LLMs “are not the artificial equivalent of the process of improving knowledge” because “they do not perform the functions of the Constitution of Knowledge—trying new ideas, testing them, keeping what works, and discarding the rest.”