What I propose to consider here is the magic and metaphysics of words. Most simply put, the question is whether words have a content. If words have a content, artificial intelligence is impossible.
Words certainly seem to contain information, but where is it? The only information explicitly contained in a word is its letters or its sound pattern. Otherwise, we have dictionary definitions along with an unlimited number of examples of actual usage. These examples and definitions demonstrate the external relations or contexts in which a given word may be properly used. To some degree, such patterns may be programmed into a computer, which will in turn display a more or less rudimentary language skill. This accounts for the physical aspect and use of words. But is there anything more, anything non-mechanistic or metaphysical?
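The point can be made concrete with a minimal sketch, assuming only a toy lexicon and hypothetical function names. The program below "knows" a word solely through its spelling, a stored definition, and stored examples of proper usage; it judges a new sentence purely by comparing character strings, which is exactly the external, physical aspect of the word and nothing more:

```python
# Toy lexicon: each word is just a string paired with a definition
# string and example usage contexts. (Hypothetical data and names.)
LEXICON = {
    "cat": {
        "definition": "a small domesticated carnivorous mammal",
        "contexts": ["the cat sat on the mat", "a cat chased the mouse"],
    },
}

def fits_usage(word, sentence):
    """Judge proper usage purely by string comparison.

    Nothing here grasps a meaning: the program only checks whether
    the sentence's tokens overlap with stored example contexts.
    """
    entry = LEXICON.get(word)
    if entry is None:
        return False
    tokens = set(sentence.lower().split())
    if word not in tokens:
        return False
    # Crude similarity: share at least one other token with an example.
    return any(tokens & (set(c.split()) - {word})
               for c in entry["contexts"])

print(fits_usage("cat", "the cat sat quietly"))  # True
print(fits_usage("cat", "cat xyzzy"))            # False
```

Such a program displays a rudimentary language skill in the sense described above, while containing nothing that could be called a meaning.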
Certainly we are subject to the strong impression that words do have an intrinsic individual content or essence that we refer to as their meaning. But no one has ever been able to point to a meaning, the way we can point to a definition.
There is also the fact of language acquisition. Human infants have a remarkable ability to pick up languages and then communicate with little or no formal instruction. We do seem to possess an innate grammatical ability that is not reflected in any of the brute-force procedures used to instruct machines in the use of language.
The evolutionary explanation for this ability remains elusive. Where in our full complement of 30,000 genes does this innate ability reside? The fact that only a few percent of our genes differ significantly from those of our non-verbal primate ancestors should make us wonder about the material basis for this skill. We could, at most, have evolutionarily acquired the equivalent of just a few lines of computer code devoted specifically to language. Why, then, has the horde of programmers working on language processing not been able even to approximate this natural ability?
What seems intrinsic to words is their ability to refer to other things. The fact is that there is nothing physical about referring. Referring is not a physical process, per se, although it may have physical aspects. There is virtual unanimity among philosophers of all stripes that referring is an act rather than a process. Action, as opposed to mere process, requires the participation of an agent. It is the agent that determines the act of reference.
Any act of reference must transcend a purely causal nexus. If I am physically constrained to utter the word 'cat' each time I happen to encounter one, I am not thereby engaged in an act of reference, although a naive observer might be led to interpret this purely physical event as such.
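The cat example can be rendered as a sketch, under the assumption of a purely deterministic stimulus-response mapping (the function name and stimulus labels are hypothetical). Everything the system does is fixed by its input; there is process here, but no agent and hence no act:

```python
def constrained_utterance(stimulus):
    """Emit 'cat' whenever a cat is encountered; otherwise stay silent.

    A deterministic mapping from input to output: a naive observer
    might interpret the emitted word as an act of reference, but
    nothing in this causal chain refers to anything.
    """
    return "cat" if stimulus == "cat-encounter" else None

print(constrained_utterance("cat-encounter"))  # cat
print(constrained_utterance("dog-encounter"))  # None
```

The output is indistinguishable from what a genuine referrer might say, which is precisely why the naive observer is misled.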
Is this too easy? Have I just disproved artificial intelligence? Have billions of dollars been wasted on AI just because the computer scientists were unaware of the distinction between act and process? The proponents of 'Strong' AI are well aware of this de facto distinction and they take it as their primary objective to overcome it by building a robot with a mind.
We have come full circle. The true believers in science are at once the deconstructors and the would-be reconstructors of the mind. If there is to be a paradigm shift of the sort we have discussed, then this group is the primary opposition in any future public discussion. Until then they may serve as a straw man.