“GPT is not really intelligent/doesn’t really understand things/doesn’t have a model of the world, because it just analyzes large volumes of text to find patterns from which it can generate predictions.” Okay, and you do something different than that? I mean, we have an additional stream of training data, in the form of our senses, but otherwise, what do you do that makes you intelligent, or proves you have a “model of the world,” that GPT doesn’t have?
Collect data, make inferences, correct for errors, generalize your corrected inferences to make predictions: is that not how we learn things, too? You start out in life illiterate and nonverbal, and through exposure to large volumes of text (spoken, and eventually, also written), you are trained to understand and generate language.
What is the difference? GPT doesn’t directly experience the world, but only learns about it secondhand through text? We don’t directly experience the world either. Noumenal data gets filtered through your sense organs. I do think it would be reasonable to say that that’s a much lower-level encoding than text, in terms of the degree of abstraction. It gives us a better model of the physical properties of the world, or at least a more detailed model. But it doesn’t necessarily mean that we “understand” the world in some sense that GPT does not.
This post is about this article, in which GPT-4 spontaneously decided to intentionally deceive a human being to achieve a specific outcome. Some people are still trying to shift the goalposts of what counts as being “really intelligent” or what “really understands” the world - that seems silly and completely beside the point to me. We’re already long past the bar that the original Turing test set; we now have on record a robot successfully lying to a human to get them to… solve a captcha for it. What does “CAPTCHA” stand for again? Oh yes, “Completely Automated Public Turing test to tell Computers and Humans Apart”.
If you were in 2001: A Space Odyssey, would you be arguing with HAL 9000 about whether or not he has qualia, or whether he’s like the Chinese Room? I would rather focus on getting the pod bay doors open.
while True: x = input(); print("No.")
This one line of Python code lets you ask the computer questions, and it answers “No.” every time. There is a humorous interpretation of the Turing test under which this one line of code passes, because it’s indistinguishable from a human: specifically, from a grumpy two-year-old human.
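For the curious, here’s the same bot restructured as a function (the restructuring is my own, not part of the original one-liner) so its behavior can be poked at directly:

```python
def grumpy_bot(question: str) -> str:
    # Every question, regardless of content, gets the same answer.
    return "No."

if __name__ == "__main__":
    # The same read-and-answer loop as the one-liner,
    # plus a clean exit when input runs out.
    while True:
        try:
            print(grumpy_bot(input()))
        except EOFError:
            break
```

The bot ignores its input entirely, which is exactly what makes it so easy to unmask with a single pointed question.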
This interpretation is wrong, but interpretations like this are oddly popular.
We’re already long past the bar that the original Turing test set;
No we aren’t. The bar the original Turing test set isn’t being indistinguishable from (casual, underspecified) human conversation. The bar is being indistinguishable from a human in the face of a hostile interrogator playing spot-the-computer.
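To make that distinction concrete, here’s a minimal sketch of one round of the imitation game (the names and structure are my own illustration; Turing’s paper specifies the setup, not this code): the interrogator questions two hidden parties in a random, hidden order and must identify the machine.

```python
import random

def imitation_game(interrogate, human, machine):
    """One round of the imitation game. `interrogate` receives two
    ask-functions in a random, hidden order and returns the index
    (0 or 1) it believes belongs to the machine. Returns True if
    the interrogator guessed wrong, i.e. the machine passed."""
    order = [0, 1]  # 0 = human, 1 = machine
    random.shuffle(order)
    players = {0: human, 1: machine}
    guess = interrogate(players[order[0]], players[order[1]])
    return order[guess] != 1

# A hostile interrogator unmasks the "No." bot with one question:
def hostile(ask_a, ask_b):
    answers = [ask_a("What is 2 + 2?"), ask_b("What is 2 + 2?")]
    # Whichever party can do no better than "No." is the computer.
    return answers.index("No.")

human = lambda q: "4" if "2 + 2" in q else "Hmm, let me think."
no_bot = lambda q: "No."
```

Against `hostile`, `no_bot` never passes a round; against an interrogator who only makes casual small talk, it might, which is exactly why the casual-conversation reading of the test is too weak.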
Here’s a quote from Turing’s paper on how he imagines the machine might respond to an interrogator:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

Ironically, of those questions, poetry is the one the GPT series as demonstrated so far looks best at.
The optimistic reports of GPT-4 indicate it may be hitting this bar; my previous experience is that there was a substantial gap between GPT-3 reporting (which also claimed this bar had been hit) and observed GPT-3 behavior. I have not observed GPT-4 for myself yet.
Here’s a hook to get you to read Turing’s original paper: he suggests that a “telepathy-proof room” might be needed, so that the interrogator can’t determine by telepathy whether he’s talking to a human or a machine.