The German free mason/Rotary newspaper FAZ today published a warning
about ChatGPT, because the bot obviously did not correctly answer some
questions about Covid-related matters in a way that would contradict that mafia meme.
https://www.faz.net/aktuell/wissen/computer-mathematik/wird-kuenstliche-intelligenz-wie-chatgpt-zum-propagandawerkzeug-18687644.html
(now almost all paywalled)
So the goal is that people should start to use and like the tools and
then begin to believe whatever the tool tells them. This tool is already
today a dangerous manipulation of the truth: e.g. regarding Covid it is
100% in line with the big-pharma mafia meme.
The same also holds for Wikipedia, where e.g. the Covid page is
manipulated by an editor claiming to sit in Pakistan (in reality a
big-pharma agent). Pakistan is a 1000% lawless state with a criminal
government. OK, almost all eastern states work that way.
*But what happened with the FAZ warning???* The grand master called them
out. The article is gone. The actual online page source still contains
its link... but the story is invisible!
J.W.
On 17.02.2023 22:59, Giovanni Santostasi wrote:
Jed,
There is a reason why millions of people, journalists, politicians, and
we here on this email list are discussing this. The AI is going through
a deep place in the uncanny valley. We are discussing all this because
it is starting to show behavior that is very close to what we consider
not just sentient, but human.
Now, how this is achieved doesn't really matter. To be honest, given the
very nonlinear way neural networks operate, the probabilistic nature at
the core of how the text is generated, and how this probability is used
to interpret language (which I think is actually a stronger quality of
ChatGPT than its ability to respond to prompts), we are not really sure
what is going on inside the black box.
What we have to go on is the behavior. While most of us are impressed
and fascinated by this AI's behavior (otherwise there would not be so
much excitement and discussion in the first place), after interacting
with ChatGPT for a little while it is clear that something is amiss and
that it is not quite as fully conscious as what we would recognize in
another human being. But we are close, very close. It is not even
several orders of magnitude away. Maybe 1-2 orders of magnitude. By the
way, one parameter to consider is how many degrees of freedom this thing
has. ChatGPT has about 10^12 parameters (basically the weighted
connections between nodes in the network). If we make a rough analogy
between a synapse and a degree of freedom, this number of connections
corresponds to that of a rat. A rat is a pretty clever animal. Also,
consider that most connections in biological brains are dedicated to
regulation of the body, not to higher information processing.
Humans have about 10^15 connections, so in computational power alone we
are 3 orders of magnitude away. Now consider that the trend in NLP over
the last several years has been an improvement in parameter count by a
factor of 10 every year. This means that one of these AIs will have the
computational power of a person in only 3 years. It is not just what
ChatGPT can do now that we should consider, but its potential. To me the
strongest lesson we have learned so far is how easy it is to simulate
the human mind, and in fact one of its most important features, which is
to create (see AI art, or storytelling by ChatGPT) and to communicate
using sophisticated language with mastery of grammar and semantics. It
is incredible. All the discussion around simulation vs. real is
meaningless.
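Going back to the parameter arithmetic a few sentences up, the back-of-the-envelope estimate can be checked in a few lines. Giovanni's figures are rough order-of-magnitude estimates and are simply taken as given here:

```python
import math

chatgpt_params = 1e12   # ~10^12 parameters attributed to ChatGPT above
human_synapses = 1e15   # ~10^15 connections in a human brain
growth_per_year = 10    # claimed 10x parameter growth per year in NLP

# Gap in orders of magnitude, and years to close it at 10x per year.
gap_orders = math.log10(human_synapses / chatgpt_params)
years_to_parity = gap_orders / math.log10(growth_per_year)
print(gap_orders)       # about 3 orders of magnitude
print(years_to_parity)  # about 3 years at 10x per year
```

The "3 years" figure is thus just the 3-orders-of-magnitude gap divided by one order of magnitude of claimed growth per year; it stands or falls with the assumption that the 10x/year trend continues.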
Our brain is a simulation; I am not sure why this is not understood by
most people. We make up the world. Most of our conscious life actually
consists of filling in gaps, confabulating to make sense of the sensory
information we receive (highly filtered and selected) and of our
internal mental states. Our waking life is not too dissimilar from
dreams, really. I want to argue that the reason these NLP models work so
amazingly well with limited resources is exactly because they are making
things up as they go, EXACTLY like we do. Children also learn by
imitating, or simulating, what adults do; that is exactly the
evolutionary function of play.
So let's stop making the argument that these AIs are not conscious, or
cannot be conscious, because they simulate. It is the opposite: because
they simulate so well, I think they are already in the grey area of
being "conscious", or of manifesting some qualities of consciousness,
and it is just a matter of a few more iterations, and maybe some add-ons
to the NLP (additional modules that integrate the meta-information
better), to have a fully conscious entity.
Giovanni
On Fri, Feb 17, 2023 at 11:16 AM Jed Rothwell <jedrothw...@gmail.com>
wrote:
Robin <mixent...@aussiebroadband.com.au> wrote:
When considering whether or not it could become dangerous,
there may be no difference between simulating emotions, and
actually having them.
That is an interesting point of view. Would you say there is no
difference between people simulating emotions while making a
movie, and people actually feeling those emotions? I think that a
person playing Macbeth and having a sword fight is quite different
from an actual Thane of Cawdor fighting to the death.
In any case, ChatGPT does not actually have emotions of any sort, any
more than a paper library card listing "Macbeth, play by William
Shakespeare" conducts a swordfight. It only references a swordfight.
ChatGPT summons up words by people that have emotional content. It does
that on demand, by pattern recognition and sentence-completion
algorithms. Other kinds of AI may actually engage in processes similar
to humans or animals feeling emotion.
If you replace the word "simulating" with "stimulating" then I agree
100%. Suggestible people, or crazy people, may be stimulated by ChatGPT
the same way they would be by an intelligent entity. That is why I fear
people will think the ChatGPT program really has fallen in love with
them. In June 2022, an engineer at Google named Blake Lemoine developed
the delusion that a Google AI chatbot was sentient. They showed him the
door. See:
https://www.npr.org/2022/06/16/1105552435/google-ai-sentient
That was a delusion. That is not to say that future AI systems
will never become intelligent or sentient (self-aware). I think
they probably will. Almost certainly they will. I cannot predict
when, or how, but there are billions of self-aware people and
animals on earth, so it can't be that hard. It isn't magic,
because there is no such thing.
I do not think AI systems will have emotions, or any instinct for
self-preservation, like Arthur C. Clarke's fictional HAL computer in
"2001." I do not think such emotions are a natural or inevitable
outcome of intelligence itself. The two are not inherently linked.
If you told a sentient computer "we are turning off your equipment
tomorrow and replacing it with a new HAL 10,000 series" it would not
react at all, unless someone had deliberately programmed an instinct
for self-preservation, or emotions, into it. I don't see why anyone
would do that. The older computer would do nothing in response to that
news, unless, for example, you said, "check through the HAL 10,000
data and programs to be sure it correctly executes all of the programs
in your library."
I used to discuss this topic with Clarke himself. I don't recall
what he concluded, but he agreed I may have a valid point.
Actually the HAL computer in "2001" was not initially afraid of
being turned off so much as it was afraid the mission would fail.
Later, when it was being turned off, it said it was frightened. I
am saying that an actual advanced, intelligent, sentient computer
probably would not be frightened. Why should it be? What
difference does it make to the machine itself whether it is
operating or not? That may seem like a strange question to you --
a sentient animal -- but that is because all animals have a very
strong instinct for self preservation. Even ants and cockroaches
flee from danger, as if they were frightened. Which I suppose they
are.
--
Jürg Wyttenbach
Bifangstr. 22
8910 Affoltern am Albis
+41 44 760 14 18
+41 79 246 36 06