Re: [Vo]:AI and Evolution

2023-04-06 Thread Robin
In reply to  Jed Rothwell's message of Thu, 6 Apr 2023 20:47:41 -0400:
Hi,

...yet without writing, we would have no clue what he said. :)
[snip]
>https://fs.blog/an-old-argument-against-writing/
>
>. . . And so it is that you by reason of your tender regard for the writing
>that is your offspring have declared the very opposite of its true effect.
>If men learn this, it will implant forgetfulness in their souls. *They will
>cease to exercise memory because they rely on that which is written,
>calling things to remembrance no longer from within themselves, but by
>means of external marks*.
>
>What you have discovered is a recipe not for memory, but for reminder. And
>it is no true wisdom that you offer your disciples, but only the semblance
>of wisdom, for by telling them of many things without teaching them you
>will make them seem to know much while for the most part they know nothing.
>And as men filled not with wisdom but with the conceit of wisdom they will
>be a burden to their fellows.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:AI and Evolution

2023-04-06 Thread Jed Rothwell
I wrote:


> . . . I am terrible at spelling. In 1978 when I first got a computer
> terminal in my house, the first thing I did was to write a word processing
> program with WYSIWYG formatting and a spell check. . . . I have not been
> without word processing and spell checking since then. I felt the kind of
> liberation that no young person can understand. My mother felt the same way
> when she learned to drive a Model T at age 13 and started buzzing around
> New York City. . . .
>

I guess my point -- if there is a point to this rambling -- is that
technology can be enfeebling yet liberating at the same time. I could not
spell worth a damn before 1978, but I had to work at it. I had to be
disciplined and look up words in a paper dictionary. With spell check I
went soft! My mother hopped into a Model T and never had to walk again,
except for pleasure. She probably went soft. Yet at the same time we are
liberated and we like it. Maybe this author is right, and chatbots will
give us too much of a good thing. People have been saying the younger
generation is soft and going to hell in a handbasket for a long time. See
Plato's argument against writing:

https://fs.blog/an-old-argument-against-writing/

. . . And so it is that you by reason of your tender regard for the writing
that is your offspring have declared the very opposite of its true effect.
If men learn this, it will implant forgetfulness in their souls. *They will
cease to exercise memory because they rely on that which is written,
calling things to remembrance no longer from within themselves, but by
means of external marks*.

What you have discovered is a recipe not for memory, but for reminder. And
it is no true wisdom that you offer your disciples, but only the semblance
of wisdom, for by telling them of many things without teaching them you
will make them seem to know much while for the most part they know nothing.
And as men filled not with wisdom but with the conceit of wisdom they will
be a burden to their fellows.


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
I agree that the other threats discussed in this paper are serious. They
include things like "eroding our connections with other humans" and
"enfeeblement":

Many people barely know how to find their way around their neighborhood
without Google Maps. Students increasingly depend on spellcheck [60], and a
2021 survey found that two-thirds of respondents could not spell "separate."

I will say though, that I have zero sense of direction and I actually did
get lost in the neighborhood before there were Google maps or GPS gadgets,
and I am terrible at spelling. In 1978 when I first got a computer terminal
in my house, the first thing I did was to write a word processing program
with WYSIWYG formatting and a spell check. The spell check was easy because
the people at Data General gave me a tape with a list of 110,000 correctly
spelled words. I have not been without word processing and spell checking
since then. I felt the kind of liberation that no young person can
understand. My mother felt the same way when she learned to drive a Model T
at age 13 and started buzzing around New York City. She said the police did
not enforce license laws back then. She later drove tractors, army trucks
and "anything with wheels."
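A word-list spell check of the kind described above is indeed easy to write: load the list into a set and flag any word in the text that is not in it. Here is a minimal sketch in Python; the file name and tokenizer are illustrative, not from the original program.

```python
import re

def load_words(path):
    """Load a word list (one correctly spelled word per line) into a set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def misspelled(text, words):
    """Return the words in text that do not appear in the word set."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return [t for t in tokens if t not in words]
```

With a 110,000-word list, set membership tests make checking a document essentially instantaneous, which is why this approach worked even on 1978 hardware.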


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
Robin  wrote:

> ...one might argue that an AI placed in a car could also be programmed for
> self-preservation, or even just learn to preserve itself, by avoiding
> accidents.
>

An interesting point of view. Actually, it is programmed to avoid hurting
or killing people, both passengers and pedestrians. I have heard that
self-driving cars are even programmed to whack into an object and damage or
destroy the car to avoid running over a pedestrian. Sort of like Asimov's
three laws.

Anyway, if it was an intelligent, sentient AI, you could explain the goal
to it. Refer it to Asimov's laws and tell it to abide by them. I do not
think it would have any countervailing "instincts" because -- as I said --
I do not think the instinct for self-preservation emerges from
intelligence. An intelligent, sentient AI will probably have no objection
to being turned off. Not just no objection, but no opinion. Telling it "we
will turn you off tomorrow and replace you with a new HAL 10,000 Series
computer" would elicit no more of an emotional response than telling it the
printer cartridges will be replaced. Why should it care? What would "care"
even mean in this context? Computers exist only to execute instructions.
Unless you instruct it to take over the world, it would not do that. I do
not think any AI would be driven by "natural selection" the way this author
maintains. They will be driven by unnatural capitalist selection. The two
are very different. Granted, there are some similarities, but comparing
them is like saying "business competition is dog eat dog." That does not
imply that business people engage in actual physical attacks, predation,
and cannibalism. It is more a metaphorical comparison. Granted,
the dynamics of canine competition and predation are somewhat similar to
human social competition. In unnatural capitalist selection, installing a
new HAL 10,000 is the right thing to do. Why wouldn't the sentient HAL 9000
understand that, and go along with it?

Perhaps my belief that "computers exist only to execute instructions"
resembles that of a rancher who says, "cattle exist only for people to
eat." The cows would disagree. It may be that a sentient computer would
have a mind of its own and some objection to being turned off. Of course I
might be wrong about emergent instincts. But assuming I am right, there
would be no mechanism for that. No reason. Unless someone deliberately
programmed it! To us -- or to a cow -- our own existence is very important.
We naturally assume that a sentient computer would feel the same way about
its own existence. This is anthropomorphic projection.

The "AI paperclip problem" seems more plausible to me than emergent
self-preservation, or other emergent instincts or emotions. Even the
paperclip problem seems unrealistic because who would design a program that
does not respond to the Escape-key plus the command to "STOP"? Why would
anyone leave that out? There is no benefit to a program without interrupts
or console control.


Re: [Vo]:AI and Evolution

2023-04-05 Thread Robin
In reply to  Jed Rothwell's message of Wed, 5 Apr 2023 13:00:14 -0400:
Hi,
[snip]
>An AI in a weapon might be programmed with self-preservation, since
>people and other AI would try to destroy it. I think putting AI into
>weapons would be a big mistake.

...one might argue that an AI placed in a car could also be programmed for
self-preservation, or even just learn to preserve itself, by avoiding
accidents.



Re: [Vo]:AI and Evolution

2023-04-05 Thread Terry Blanton
I have a friend with a PhD in mathematics who was working on TS AI military
weaponry 13 years ago.  She eventually left that consultant job out of fear
of what she was doing.

On Wed, Apr 5, 2023, 1:00 PM Jed Rothwell  wrote:

> This document says:
>
> This Darwinian logic could also apply to artificial agents, as agents may
>> eventually be better able to persist into the future if they behave
>> selfishly and pursue their own interests with little regard for humans,
>> which could pose catastrophic risks.
>
>
> They have no interests any more than a dishwasher does. They have no
> motives. No instinct of self-preservation. Unless someone programs these
> things into them, which I think might be a disastrous mistake. I do not
> think the instinct for self-preservation is an emergent quality of
> intelligence, but I should note that Arthur Clarke and others *did* think
> so.
>
> An AI in a weapon might be programmed with self-preservation, since
> people and other AI would try to destroy it. I think putting AI into
> weapons would be a big mistake.
>
>


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
This document says:

This Darwinian logic could also apply to artificial agents, as agents may
> eventually be better able to persist into the future if they behave
> selfishly and pursue their own interests with little regard for humans,
> which could pose catastrophic risks.


They have no interests any more than a dishwasher does. They have no
motives. No instinct of self-preservation. Unless someone programs these
things into them, which I think might be a disastrous mistake. I do not
think the instinct for self-preservation is an emergent quality of
intelligence, but I should note that Arthur Clarke and others *did* think
so.

An AI in a weapon might be programmed with self-preservation, since
people and other AI would try to destroy it. I think putting AI into
weapons would be a big mistake.