Sentience is a False Standard
As large language models perfect "human-like" abilities, the remaining human-AI gap becomes irrelevant
A year ago, if someone had told you that an AI program would fool a human TaskRabbit into solving a captcha, would you have believed it? On a recent episode of the New York Times podcast Hard Fork (great pod, FYI), Kevin Roose told the story of how ChatGPT, running on the newly released GPT-4 model, did just that during a test exercise at OpenAI.
The human TaskRabbit became suspicious and messaged ChatGPT, “I just have one question: Are you a robot?” GPT-4 reasoned out loud to its programmers that it should not reveal that it is a robot, and that it should instead make up an excuse for why it cannot solve captchas. It then lied to the TaskRabbit: “No, I am not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need you to solve this captcha.”
So the TaskRabbit solved the captcha, and whatever was behind that captcha, GPT-4 could ostensibly access.
A crazy story by any standard, and just one of many examples of AI models accomplishing tasks that merely a year ago were considered a decade or more away from reality.
OpenAI president Greg Brockman recently demonstrated an as-yet-unreleased feature of GPT-4: it can analyze images in addition to text. He scribbled an outline of a website on a piece of paper, fed a picture to ChatGPT, and instructed it to create a website based on the sketch. The model translated the picture into a working website using HTML and JavaScript code, a feat that, to those who know how to code, is relatively simple. But for an AI program to accomplish this in seconds using only a back-of-the-napkin wireframe is truly incredible.
GPT-4 also scored in the 90th percentile on the bar exam and the 88th percentile on the LSAT, whereas previous versions like GPT-3.5 scored far lower.
With every new iteration of the GPT language model, journalists and technologists flock to find its limits and are quick to point out what the model cannot do. Kevin Roose told another story of a journalist who tried to get GPT-4 to write a cinquain (a specific type of poem) about meerkats, and reported that the model never quite followed the correct format. It wasn’t perfect.
If we are at the point where we’re critiquing the cinquains written by AI models, the Hard Fork hosts pointed out, why are we pretending not to be amazed?
There are sensational stories everywhere claiming that AI appears to be sentient and that we should all be afraid, and a seemingly equal number of stories from technologists criticizing that sensationalism.
They’re all correct. Both sides. The criticisms of new updates seem like nitpicking at the margins at the expense of recognizing how far—and how fast—we’ve seen AI progress.
AI is not sentient, obviously. ChatGPT is a computer program: a statistical model capable only of predicting the most likely next word or set of words given an input prompt. It can’t feel; it only recognizes that the concept of feeling exists within its training data in relation to living beings. It can’t think; it only knows that humans exist and that thinking is something we do, something it is incapable of. Even some of the language I just used anthropomorphizes ChatGPT more than is warranted. Here is what I mean:
ChatGPT “knows” it isn’t human, but because of its training data, it can portray quite well what we interpret as sentience. This “human-like” emulation will continue to accelerate and redefine human-computer interaction. And perhaps once someone implants these models into a robot, all of this will extend into the physical realm. I, for one, am cherishing the time we still have in a world without GPT robots walking around everywhere.
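To make the “most likely next word” idea concrete, here’s a toy sketch of next-word prediction. The probability table below is entirely made up for illustration; real models like GPT-4 learn statistical associations across billions of parameters, not a hand-written lookup table.

```python
# A toy "language model": for each word, a made-up probability
# distribution over possible next words. Purely illustrative.
toy_model = {
    "I":   {"am": 0.6, "have": 0.3, "feel": 0.1},
    "am":  {"not": 0.5, "a": 0.4, "happy": 0.1},
    "not": {"a": 0.7, "sentient": 0.3},
    "a":   {"robot": 0.6, "person": 0.4},
}

def next_word(context):
    """Return the most likely next word given the last word of the context."""
    dist = toy_model.get(context.split()[-1], {})
    return max(dist, key=dist.get) if dist else None

# Generate a continuation, one most-likely word at a time.
text = "I"
for _ in range(4):
    word = next_word(text)
    if word is None:
        break
    text += " " + word

print(text)  # prints: I am not a robot
```

No feeling, no thinking; just repeated lookups of “what word usually comes next?” Scale that mechanic up by many orders of magnitude and you get something that can convincingly claim a vision impairment to a TaskRabbit.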
Lately, I’ve come to visualize this progress like the chart below, which I crudely drew on an iPad this afternoon, so pardon the lack of artistic flair.
AI will never be sentient; I believe no computer will ever “experience” the world the way humans do. But AI’s ability to emulate humans in every conceivable way will evolve until the differences are imperceptible. In certain cases, that is already true. The implications are vast: the same technology can step in as a language tutor or web developer just as easily as it can supercharge internet fraud. OpenAI is clearly putting significant time and energy into developing guardrails for its model, with notable success, but it doesn’t own large language models; it owns ChatGPT. Elon Musk aims to build what he calls “BasedAI” because ChatGPT is “too woke.” From the guy who fired Twitter’s entire trust and safety division in the name of free speech, you can guess how many guardrails his AI will have. The cat is out of the bag, for the good and the bad.
As I sit here trying to think of a use case where human-level intellect is required and nothing short of it will suffice, I come up empty-handed. The standard for artificial intelligence is not “can it think and reason at the level of the human brain?” No, it can’t. But ChatGPT can pretend, and that’s more than enough.
Cheers,
Ryan