Robin <mixent...@aussiebroadband.com.au> wrote:

> What I was trying to say is that if an AI is programmed to mimic human
> behaviour, then it may end up mimicking the
> worst aspects of human behaviour, and the results could be just as
> devastating as if they had been brought about by an
> actual human, whether or not the AI is "sentient".


Yes, AI programmed by evil people might cause harm, such as telling a
terrorist how to make a bomb, or sending people e-mail messages that
defraud them of money. But as things are now, it cannot cause physical
harm in the world. No AI robot has physical control over anything, as far
as I know. An AI cannot seize control of NORAD and launch missiles, like
in a science fiction movie. Not yet, anyway.

I agree we should definitely beware of such developments. The people making
ChatGPT are aware of these potential problems.

Frankly, I am more concerned about people using AI or a conventional Google
search to cause harm, mainly by looking up ways to commit crimes. Recently
someone killed his wife. The police looked through his computer and found
Google searches for "how to get rid of a dead body" and things like that.



> I guess the real question is does the AI have "will", or at least a
> simulation thereof?
>

Nope. The present implementation has no will at all.



> My definition of life:- Anything potentially capable of taking measures to
> ensure its own survival is alive.
>

No AI can do anything like that. These systems have no idea of the real
world, or of the fact that they exist as objects in it; sentience, in
other words. They are still very far from that level of intelligence. AI
researchers have been trying to give AI a working model of the real world
for decades, but they have made little progress. The ChatGPT model is far
simpler than that. It is amazing that it works as well as it does, but it
is not any kind of simulation of the real world, and it would not give a
robot any ability to deal with the real world.
