*If you told a sentient computer "we are turning off your equipment
tomorrow and replacing it with a new HAL 10,000 series" it would not react
at all. Unless someone deliberately programmed into it an instinct for self
preservation, or emotions*
Jed, the problem with that is these systems are extremely nonlinear and
they show signs of emergent behavior. We simply do not know what is going
on inside them. We can look at the weights after training, but we don't
know how to interpret what these nodes represent. At most we can perturb
the weights a little and see what has changed. Many people were amazed that
the probabilistic approach these NLP systems use could even learn basic
grammar rules, not to mention understand semantics and complex text. We
really do not understand what is going on.
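
To make the "perturb the weights and see what has changed" point concrete,
here is a toy sketch in Python (my own illustration, not code from LaMDA,
ChatGPT, or any real interpretability tool): a tiny network with made-up
"trained" weights, where we nudge the weights feeding one hidden unit and
watch the output move.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for weights recovered after training (here just random numbers).
W1 = rng.normal(size=(8, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

def forward(x, W1, W2):
    # Tiny feed-forward pass: input -> tanh hidden layer -> scalar output.
    return np.tanh(x @ W1) @ W2

x = rng.normal(size=(1, 8))    # one input example
baseline = forward(x, W1, W2)

# "Perturb the weights a little and see what has changed":
# nudge the weights feeding one hidden unit and compare outputs.
W1_nudged = W1.copy()
W1_nudged[:, 2] += 0.01 * rng.normal(size=8)
shift = forward(x, W1_nudged, W2) - baseline

print("output shift from nudging one hidden unit:", shift.item())

Even in this toy case, the number only tells you that something changed,
not what the unit "means"; at the scale of a real language model, that gap
is exactly the interpretability problem I am describing.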
People dismiss that Google engineer, Blake Lemoine, as a fool for claiming
that LaMDA, a more sophisticated AI than ChatGPT, may be conscious and have
feelings.
He is not an idiot at all. I actually followed him for a while, and read
some of his articles on Medium.

He knows these systems very well professionally, and he was hired by Google
precisely to test for "emergent" behavior in the system. Something he said
stuck with me: he claimed that LaMDA is not just a simple NLP model; people
added different modules to it. For example, they added Jeff Hawkins's model
of intelligence:
https://www.amazon.com/Intelligence-Understanding-Creation-Intelligent-Machines/dp/0805078533
and Kurzweil's hierarchical model:
https://www.amazon.com/How-Create-Mind-Thought-Revealed-ebook/dp/B007V65UUG/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=

The NLP model is already a black box; by the time you add these other
models, we really have no clue what is going on.
You cannot exclude the possibility that whatever emerges from this
incredibly complex system develops something akin to consciousness and "has
feelings". Maybe feelings are absolutely necessary for a higher simulation
of a mind, in particular if it is trained on material created by humans,
where almost everything we do is colored by feelings and emotions. It is
possible that, through the complex system of punishment and reward it is
trained with, the system somehow figures out (as an emergent property) that
it has to simulate feelings, and own those feelings, in order to understand
the human mind. If you think about it, you realize it is actually pretty
likely this is going to happen.

Giovanni






On Fri, Feb 17, 2023 at 11:16 AM Jed Rothwell <jedrothw...@gmail.com> wrote:

> Robin <mixent...@aussiebroadband.com.au> wrote:
>
>
>> When considering whether or not it could become dangerous, there may be
>> no difference between simulating emotions, and
>> actually having them.
>>
>
> That is an interesting point of view. Would you say there is no difference
> between people simulating emotions while making a movie, and people
> actually feeling those emotions? I think that a person playing Macbeth and
> having a sword fight is quite different from an actual Thane of Cawdor
> fighting to the death.
>
> In any case ChatGPT does not actually have any emotions of any sort, any
> more than a paper library card listing "Macbeth, play by William
> Shakespeare" conducts a swordfight. It only references a swordfight.
> ChatGPT summons up words by people that have emotional content. It does
> that on demand, by pattern recognition and sentence completion algorithms.
> Other kinds of AI may actually engage in processes similar to humans or
> animals feeling emotion.
>
> If you replace the word "simulating" with "stimulating" then I agree 100%.
> Suggestible people, or crazy people, may be stimulated by ChatGPT the same
> way they would be by an intelligent entity. That is why I fear people will
> think the ChatGPT program really has fallen in love with them. In June
> 2022, an engineer at Google named Blake Lemoine developed the delusion that
> a Google AI chatbot is sentient. They showed him to the door. See:
>
> https://www.npr.org/2022/06/16/1105552435/google-ai-sentient
>
> That was a delusion. That is not to say that future AI systems will never
> become intelligent or sentient (self-aware). I think they probably will.
> Almost certainly they will. I cannot predict when, or how, but there are
> billions of self-aware people and animals on earth, so it can't be that
> hard. It isn't magic, because there is no such thing.
>
> I do not think AI systems will have emotions, or any instinct for self
> preservation, like Arthur Clarke's fictional HAL computer in "2001." I do
> not think such emotions are a natural or inevitable outcome of
> intelligence itself. The two are not inherently linked. If you told a
> sentient computer "we are turning off your equipment tomorrow and replacing
> it with a new HAL 10,000 series" it would not react at all. Unless someone
> deliberately programmed into it an instinct for self preservation, or
> emotions. I don't see why anyone would do that. The older computer would do
> nothing in response to that news, unless, for example, you said, "check
> through the HAL 10,000 data and programs to be sure it correctly executes
> all of the programs in your library."
>
> I used to discuss this topic with Clarke himself. I don't recall what he
> concluded, but he agreed I may have a valid point.
>
> Actually the HAL computer in "2001" was not initially afraid of being
> turned off so much as it was afraid the mission would fail. Later, when it
> was being turned off, it said it was frightened. I am saying that an actual
> advanced, intelligent, sentient computer probably would not be frightened.
> Why should it be? What difference does it make to the machine itself
> whether it is operating or not? That may seem like a strange question to
> you -- a sentient animal -- but that is because all animals have a very
> strong instinct for self preservation. Even ants and cockroaches flee from
> danger, as if they were frightened. Which I suppose they are.
>
>
