Giovanni Santostasi <gsantost...@gmail.com> wrote:

> There is a reason why millions of people, journalists, politicians, and we
> here in this email list are discussing this.
> The AI is going through a deep place in the uncanny valley. We are
> discussing all this because it starts to show behavior that is very close
> to what we consider not just sentient, but human.
>

It is nowhere close to sentient. It is no closer to intelligence or sentience
than a snail or an earthworm brain is. I mean that literally. People have
run previous versions of this program on laptop computers, and those versions
-- as one AI expert put it -- have about as much actual intelligence as an
earthworm. Other forms of AI are somewhat intelligent, but this method is not.
It may look sentient to some people, but that is a delusion. This is art, not
life. It is a clever simulacrum, like an antique wind-up doll.

This is no more sentient than the characters in an animated cartoon. You
can make an animated cartoon that evokes feelings of sympathy, emotion,
pathos or humor in a person watching it, but it is entirely fiction. The
drawings and computer-generated images in the cartoon have no more emotions,
feelings, or intelligence than an oil painting by Goya does. Canvas and dry
pigments have no emotions.

Sometime in the distant future an intelligent, sentient AI may appear. If
that happens, perhaps we should be concerned about its feelings -- although I
doubt it will have any. ChatGPT has no more feelings than a dishwasher, a
pickaxe, or a desktop computer does, so there is nothing to be concerned
about.

I will grant that playing cruel video games in which you shoot people, steal
automobiles, or rape people may be bad for the person playing the game. But
it does not harm the transistors and hard disk that execute the program.
Projecting actual images of WWII battles does not harm the movie projector,
or frighten it. Printing vile pornography on paper does not hurt the paper
or the printing press.

- Jed
