article:

ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting!
It’s important that we use accurate terminology when discussing how AI
chatbots make up information

https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/

begin quote << So sometimes ChatGPT says false things. In recent years, as
we have become accustomed to AI, people have started to refer to
these falsehoods as “AI hallucinations.” While this language is
metaphorical, we think it’s not a good metaphor.

Consider Shakespeare’s paradigmatic hallucination in which Macbeth sees a
dagger floating toward him. What’s going on here? Macbeth is trying to use
his perceptual capacities in his normal way, but something has gone wrong.
And his perceptual capacities are almost always reliable—he doesn’t usually
see daggers randomly floating about! Normally his vision is useful in
representing the world, and it is good at this because of its connection to
the world.

Now think about ChatGPT. Whenever it says anything, it is simply trying to
produce humanlike text. The goal is simply to make something that sounds
good. This is never directly tied to the world. When it goes wrong, it
isn’t because it hasn’t succeeded in representing the world this time; it
never tries to represent the world! Calling its falsehoods “hallucinations”
doesn’t capture this feature.

Instead we suggest, in a June report in Ethics and Information Technology,
that a better term is “bullshit.” As mentioned, a bullshitter just doesn’t
care whether what they say is true.

So if we do regard ChatGPT as engaging in a conversation with us—though
even this might be a bit of a pretense—then it seems to fit the bill. As
much as it intends to do anything, it intends to produce convincing
humanlike text. It isn’t trying to say things about the world. It’s just
bullshitting. And crucially, it’s bullshitting even when it says true
things!

Why does this matter? Isn’t “hallucination” just a nice metaphor here? Does
it really matter if it’s not apt? We think it does matter for at least
three reasons:

First, the terminology we use affects public understanding of technology,
which is important in itself. If we use misleading terms, people are more
likely to misconstrue how the technology works. We think this in itself is
a bad thing.

Second, how we describe technology affects our relationship with that
technology and how we think about it. And this can be harmful. Consider
people who have been lulled into a false sense of security by “self-driving”
cars. We worry that talking of AI “hallucinating”—a term usually used for
human psychology—risks anthropomorphizing the chatbots. The ELIZA effect
(named after a chatbot from the 1960s) occurs when people attribute human
features to computer programs. We saw this in extremis in the case of the
Google employee who came to believe that one of the company’s chatbots was
sentient. Describing ChatGPT as a bullshit machine (even if it’s a very
impressive one) helps mitigate this risk.

Third, if we attribute agency to the programs, this may shift blame away
from those using ChatGPT, or its programmers, when things go wrong. If, as
appears to be the case, this kind of technology will increasingly be used
in important matters such as health care, it is crucial that we know who is
responsible when things go wrong. >> end quote

Harry
"
