In a hypothetical race to claim the mantle of biggest threat to humanity, nuclear war, ecological catastrophe, rising authoritarianism, and new pandemics are still well in front of the pack. But, look there, way back but coming on fast. Is that AI? Is it a friend rushing forward to help us, or another foe rushing forward to bury us?
As a point of departure for this essay, in their recent Op Ed in The New York Times, Noam Chomsky and two of his academic colleagues—Ian Roberts, a linguistics professor at the University of Cambridge, and Jeffrey Watumull, a philosopher who is also the director of artificial intelligence at a tech company—tell us that “however useful these [AI] programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects….”
They continue: “Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.”
Readers might take these comments to mean that current AI differs so much from how humans communicate that predictions of AI displacing humans in any but a few minor domains are hype. The new chatbots, painters, programmers, robots, and the rest are impressive engineering projects but nothing to get overly agitated about. Current AI handles language in ways very far from what allows humans to use language as well as we do. More, current AIs’ neural networks and large language models are encoded with “ineradicable defects” that prevent the AIs from using language and thinking remotely as well as people. The Op Ed’s reasoning feels like that of a scientist hearing talk about a perpetual motion machine that is going to revolutionize everything. The scientist has theories that tell her a perpetual motion machine is impossible. She therefore says the hubbub about some company offering one is hype. More, she knows the hubbub can’t be true even without a glance at what the offered machine is in fact doing. It may look like perpetual motion, but it can’t be, so it isn’t. But what if the scientist is right that it is not perpetual motion, yet the machine is nonetheless rapidly gaining users and doing harm, with much more harm to come?
Chomsky, Roberts, and Watumull say humans use language as adroitly as we do because we have in our minds a human language faculty that includes certain properties. If we didn’t have that, or if our faculty weren’t as restrictive as it is, then we would be more like birds or bees, dogs or chimps, but not like ourselves. More, one surefire way we can know that another language-using system doesn’t have a language faculty with our faculty’s features is if it can do just as well with a totally made-up nonhuman language as it can with a specifically human language like English or Japanese. The Op Ed argues that the modern chatbots are of just that sort. It deduces that they cannot be linguistically competent in the same ways that humans are linguistically competent.
Applied more broadly, the argument is that humans have a language faculty, a visual faculty, and what we might call an explanatory faculty that provide the means by which we converse, see, and develop explanations. These faculties permit us a rich range of abilities. As a condition of doing so, however, they also impose limits on other conceivable abilities. In contrast, current AIs do just as well with languages that humans can’t possibly use as with ones we can use. This reveals that they have nothing remotely like the innate human language faculty since, if they had that, it would rule out the nonhuman languages. But does this mean AIs cannot, in principle, achieve competency as broad, deep, and even creative as ours because they do not have faculties with the particular restrictive properties that our faculties have? Does it mean that whatever they do when they speak sentences, when they describe things in their visual field, or when they offer explanations for events we ask them about—not to mention when they pass the bar exam in the 90th percentile or compose sad or happy, reggae or rock songs to order—they not only aren’t doing what humans do, but also can’t achieve outcomes of the quality humans achieve?
If the Op Ed said current AIs don’t have the features we have, so they can’t do things the way we do them, that would be fine. In that case, it could be true that AIs can’t do things as well as we do them, but it could also be true that for many types of exams, the SAT and the bar exam, for example, they can outperform the vast majority of the population. What happens tomorrow with GPT 4 and in a few months with GPT 5, or in a year or two with GPT 6 and 7, much less later with GPT 10? What if, as seems to be the case, current AIs have different features than humans do, but those different features let them do many of the things we do, differently than we do them, but as well as or even better than we do them?
The logical problem with the Op Ed is that it seems to assume that only human methods can, in many cases, attain human-level results. The practical problem is that the Op Ed may cause many people to think that nothing very important is going on or even could be going on, without even examining what is in fact going on. But what if something very important is going on? And if so, does it matter?
To the extent that the Op Ed addresses only the question “is contemporary AI intelligent in the same way humans are intelligent,” the authors’ answer is no, and in this they are surely right. That the authors then emphasize that they “fear that the most popular and fashionable strain of AI—machine learning—will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge,” is also fair. Likewise, it is true that when current programs pass the Turing test, if they haven’t already done so, it won’t mean that they think and talk the same way we do, or that how they passed the test will tell us anything about how we converse or think. But their passing the test will tell us that we can no longer hear or read their words and from that alone distinguish their thoughts and words from our own. But will this matter?
Chomsky, Roberts, and Watumull’s essay seems to imply that AI’s methodological difference from human faculties means that what AI programs can do will be severely limited compared to what humans can do. The authors acknowledge that what AI can do may be minimally useful (or misused), but they add that nothing much is going on comparable to human intelligence or creativity. Cognitive science is not advancing and may be set back. AIs can soundly outplay every human over a chessboard. Yes, but so what? These dismissals are fair enough, but does the fact that current AI generates text, pictures, software, counseling, medical care, exam answers, or whatever else by a different path than the one humans take to arrive at very similar outputs mean that current AI didn’t arrive there at all? Does the fact that current AI functions differently than we do necessarily mean, in particular, that it cannot attain linguistic results like those we attain? Does an AI’s ability to understand nonhuman languages necessarily indicate that it cannot exceed human capacities in human languages, or in other areas?
We worry that dismissing the importance of current AIs because they don’t embody human mechanisms risks obscuring the fact that AI is already having a widespread social impact that ought to concern us for practical, psychological, and perhaps security reasons. We worry that such dismissals may imply AIs don’t need very substantial regulation. We have had effective moratoriums on human cloning, among other uses of technology. The window for regulating AI, however, is closing fast. We worry that the task at hand isn’t so much to dispel exaggerated hype about AI as it is to acknowledge AI’s growing capacities and understand not only its potential benefits but also its imminent and longer-run dangers so we can conceive how to effectively regulate it. We worry that the really pressing regulatory task could be undermined by calling what is occurring “superficial and dubious” or “high-tech plagiarism” so as to counter hype.
Is intelligent regulation urgent? To us, it seems obvious it is. And are we instead seeing breakneck advance? To us, it seems obvious we are. Human ingenuity can generate great leaps that appear like magic and even augur seeming miracles. Unopposed capitalism can turn even great leaps into pain and horror. To avoid that, we need thought and activism that wins regulations.
Technologies like ChatGPT don’t exist in a vacuum. They exist within societies and their defining political, economic, community, and kinship institutions.
Social media algorithms calculate the right hit, the one that never truly satisfies and keeps us reaching for more. In the same way that social media is engineered to elicit addiction through user-generated content, language-model AI has the potential to be far more addictive, and more damaging. Particularly for vulnerable populations, AI can be fine-tuned to learn and exploit each person’s vulnerabilities, generating content and even presentation style specifically designed to hook users.
In a society with rampant alienation, AI can exploit our need for connection. Imagine millions tied into AI subscription services, desperate for connection. The profit motive will incentivize AI companies not just to lure in more and more users but to keep them coming back.
Once users are tied in, the potential for misinformation and propagandization greatly exceeds even that of social media. If AI replaces human labor in human-defining fields, what then is left of “being human”? Waiting for AI guidance? Waiting for AI orders?
Clarity about what to do can only emerge from further understanding of what is happening. But even after just a few months of AI experiences, suggestions for minimal regulations seem pretty easy to come by.
This piece first appeared on ZNet. Source: CounterPunch