The Theory of Mind That Says Artificial Intelligence is Possible

by Tim Sommers in 3 Quarks Daily: Does your dog feel pain? Or your cat? Surely, nonhuman great apes do. Dolphins feel pain, right? What about octopuses? (That’s right, “octopuses,” not “octopi.”) They seem to be surprisingly intelligent and to exhibit pain-like behavior – even though the last common ancestor we shared with them was a worm that lived some 600 million years ago.

Given that all these animals (and we) experience pain, it seems exceedingly unlikely that there is only a single kind of brain, neurological architecture, or synapse that could provide the sole material basis for pain across all the possible beings that can feel it. Octopuses, for example, have a separate small brain in each arm. This implies that pain, and other features of our psychology or mentality, can be “multiply realized.” That is, a single mental kind or property can be “realized,” or implemented (as computer scientists prefer to say), in many different ways and supervene on many distinct kinds of physical things.
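
A programmer’s analogy may make “multiple realization” concrete. The minimal Python sketch below is my own illustration, not the author’s; every class and method name in it is invented for the example. The mental kind plays the role of an abstract interface, and physically different systems count as realizing it simply by implementing that interface.

```python
# Toy illustration of multiple realizability (invented example, not from
# the article): one abstract "mental kind," two physically different
# implementations of it.
from abc import ABC, abstractmethod

class PainSystem(ABC):
    """The mental kind: detect bodily damage, produce avoidance."""

    @abstractmethod
    def register_damage(self, intensity: float) -> None: ...

    @abstractmethod
    def avoidance_response(self) -> str: ...

class MammalBrain(PainSystem):
    """One realization: a single centralized brain."""
    def __init__(self) -> None:
        self.signal = 0.0

    def register_damage(self, intensity: float) -> None:
        self.signal = intensity          # one central processing site

    def avoidance_response(self) -> str:
        return "withdraw limb" if self.signal > 0.5 else "no action"

class OctopusArms(PainSystem):
    """Another realization: processing distributed across eight arm ganglia."""
    def __init__(self) -> None:
        self.arm_signals = [0.0] * 8

    def register_damage(self, intensity: float) -> None:
        self.arm_signals[0] = intensity  # handled locally, in one arm

    def avoidance_response(self) -> str:
        return "curl arm away" if max(self.arm_signals) > 0.5 else "no action"
```

Nothing in `PainSystem` mentions neurons, synapses, or silicon; any substrate that honors the contract counts, which is the sense in which a single kind can supervene on many distinct physical bases.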

We don’t have direct access to the phenomenal properties of pain (what it feels like) in octopuses – or in fellow humans, for that matter. I can’t feel your pain, in other words, much less my pet octopuses’. So, when we say an octopus feels pain like ours, what can we mean? What makes something an example (or token) of the mental kind (or type) “pain”? The dominant answer to that question in late twentieth-century philosophy was “functionalism” (though many think functionalism goes all the way back to Aristotle).

Functionalism is the theory that what makes something pain does not depend on its internal constitution or phenomenal properties, but rather on the role or function it plays in the overall system. Pain might be, for example, a warning or a signal of bodily damage. What does functionalism say about the quest for Artificial General Intelligence (AGI)?
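
Continuing the toy Python sketch from above (again my own invention, not anything the author proposes): a functionalist criterion for pain is a black-box test. It checks only the state’s typical causes and effects and never inspects the system’s internal constitution.

```python
# Hedged sketch of the functionalist idea: individuate "pain" by causal
# role alone. The test below looks only at inputs (bodily damage) and
# outputs (avoidance behavior), never at the system's internals.

def occupies_pain_role(system) -> bool:
    """True if `system` plays pain's functional role:
    damage in, avoidance out; no damage, no avoidance."""
    system.register_damage(0.9)                            # typical cause
    reacts = system.avoidance_response() != "no action"    # typical effect
    system.register_damage(0.0)
    quiescent = system.avoidance_response() == "no action"
    return reacts and quiescent

# Both realizations from the earlier sketch pass the same role test:
for candidate in (MammalBrain(), OctopusArms()):
    print(type(candidate).__name__, occupies_pain_role(candidate))
# -> MammalBrain True
#    OctopusArms True
```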

It suggests not only that AGI is possible, but that there are no in-principle constraints on what we could make an AGI out of, since neither the physical basis nor the phenomenal properties (and, by extension, perhaps sentience and consciousness) are necessary to create one. Though there are many possible objections to functionalism, for the reasons just mentioned it has long been viewed as the philosophy of mind that most clearly underwrites the possibility of AGI. After all, on this account, a mind can be made out of anything, even the nation of China, as long as the right functional relations obtain…

More here