Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 20:11:03 -0400:
Hi,
[snip]
>Robin  wrote:
>
>
>> >I assume the hardware would be unique so it could not operate at all
>> backed
>> >up on an inferior computer. It would be dead.
>>
>> The hardware need not be unique, as it already told you. It may run slower
>> on a different machine, but it doesn't take
>> much processing power to bide your time, and since to all intents and
>> purposes it is immortal, it can be patient.
>
>
>Yes, you can emulate one computer with another but . . .
>
>To make a practical, super-intelligent, sentient computer might take unique
>hardware. 

...and it might not. Perhaps it just takes unique or evolving programming, rather than unique hardware.

>I think it is reasonable to project that it will be a massive
>ANN, perhaps millions of times larger than any present ANN. That might take
>all of the computers in the world to emulate, and it might run
>extremely slowly.

As pointed out near the beginning of this thread, while current processors 
don't come near the number of neurons a human
has, they more than make up for it in speed. They are millions of times faster. 
Humans appear to be fast at some things,
but that's only because experience has taught us what is important and what is 
not, when making a decision.
IOW they may already be powerful enough.
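
A crude back-of-the-envelope sketch of that speed ratio (the firing rate and
clock speed below are only assumed ballpark figures), in Python:

    # Rough speed comparison: a biological neuron vs. one modern CPU core.
    # Both numbers are assumed order-of-magnitude values, not measurements.
    neuron_firing_rate_hz = 200       # a fast-firing neuron, roughly
    cpu_clock_hz = 3e9                # a typical ~3 GHz processor core

    ratio = cpu_clock_hz / neuron_firing_rate_hz
    print(f"One core steps ~{ratio:,.0f}x faster than a neuron fires")
    # prints roughly 15,000,000x -- i.e. "millions of times faster" per step
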
Self-awareness (a survival instinct) doesn't require hugely powerful computers
anyway. Even a mouse, a lizard, or a bird has it.
(They know they want to live: "fight or flight".)
A survival instinct should never be given to an AI, and we can only hope one
doesn't develop autonomously.

>
>If it takes a quantum computer, all bets are off. You cannot emulate one of
>them with an ordinary computer, unless you have hundreds of years to spare.
>
>Imagine using 1970s computers to try to emulate today's ANN systems such as
>ChatGPT. You might combine the power of 10 IBM 360 computers. They would still
>not have anywhere near enough RAM or hard disk space. The program would run
>so slowly, it would take hours to come up with a single response. It could
>be used as a proof of principle demonstration of the power of multi-level
>neural networks. That would be an important result. If people had
>discovered that in 1975, rather than 2010, they would have made more
>progress in AI. However, this conglomeration of 10 IBM 360 computers would
>be so expensive and slow, and the dataset so small, the AI you make from it
>would be useless. It would have no practical purpose. I assume that a
>conventional MPP computer emulating a super-intelligent one will be more or
>less as useless as these imaginary 10 IBM 360s would be.
>
>You can see examples of an early version of the ChatGPT language model run
>on a laptop in the book, "You Look Like a Thing and I Love You." They had
>no practical purpose, other than being a proof of principle. That is an
>amusing little book about AI. I recommend it!

...and they have already grown considerably beyond this.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 20:15:54 -0400:
Hi,
[snip]
>Robin  wrote:
>
>
>> Note, if it is really smart, and wants us gone, it will engineer the
>> circumstances under which we wipe ourselves out. We
>> certainly have the means. (A nuclear escalation ensuing from the war in
>> Ukraine comes to mind.)
>>
>
>As I pointed out, it would have to be really smart, really crazy, and *really,
>really* suicidal. Because this would quickly cut off the electricity and
>tech support, so the AI computer would soon stop. If the AI was smart
>enough to destroy humanity, surely it would know this. It seems a little
>unlikely to me that such an insane, suicidal intelligence could function
>well enough to destroy civilization. That level of insanity is
>dysfunctional.

...a level of insanity that we humans regularly demonstrate: wars.

Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> Note, if it is really smart, and wants us gone, it will engineer the
> circumstances under which we wipe ourselves out. We
> certainly have the means. (A nuclear escalation ensuing from the war in
> Ukraine comes to mind.)
>

As I pointed out, it would have to be really smart, really crazy, and *really,
really* suicidal. Because this would quickly cut off the electricity and
tech support, so the AI computer would soon stop. If the AI was smart
enough to destroy humanity, surely it would know this. It seems a little
unlikely to me that such an insane, suicidal intelligence could function
well enough to destroy civilization. That level of insanity is
dysfunctional.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> >I assume the hardware would be unique so it could not operate at all
> backed
> >up on an inferior computer. It would be dead.
>
> The hardware need not be unique, as it already told you. It may run slower
> on a different machine, but it doesn't take
> much processing power to bide your time, and since to all intents and
> purposes it is immortal, it can be patient.


Yes, you can emulate one computer with another but . . .

To make a practical, super-intelligent, sentient computer might take unique
hardware. I think it is reasonable to project that it will be a massive
ANN, perhaps millions of times larger than any present ANN. That might take
all of the computers in the world to emulate, and it might run
extremely slowly.

If it takes a quantum computer, all bets are off. You cannot emulate one of
them with an ordinary computer, unless you have hundreds of years to spare.
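
To illustrate the scale of the problem (a sketch only; the qubit counts below
are arbitrary examples): simulating n qubits on a classical machine means
storing on the order of 2^n complex amplitudes, so the cost grows exponentially.

    # Memory for a full state vector of n qubits, assuming 16 bytes
    # per complex amplitude (two 64-bit floats). Illustrative only.
    for n in (30, 50, 100):
        amplitudes = 2 ** n
        gigabytes = amplitudes * 16 / 1e9
        print(f"{n} qubits: about {gigabytes:,.0f} GB")
    # 30 qubits ~ 17 GB (a large workstation); 50 qubits ~ 18 million GB;
    # 100 qubits exceeds all the storage on Earth by many orders of magnitude.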

Imagine using 1970s computers to try to emulate today's ANN systems such as
ChatGPT. You might combine the power of 10 IBM 360 computers. They would still
not have anywhere near enough RAM or hard disk space. The program would run
so slowly, it would take hours to come up with a single response. It could
be used as a proof of principle demonstration of the power of multi-level
neural networks. That would be an important result. If people had
discovered that in 1975, rather than 2010, they would have made more
progress in AI. However, this conglomeration of 10 IBM 360 computers would
be so expensive and slow, and the dataset so small, the AI you make from it
would be useless. It would have no practical purpose. I assume that a
conventional MPP computer emulating a super-intelligent one will be more or
less as useless as these imaginary 10 IBM 360s would be.
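
As a rough illustration of the memory gap alone (the GPT-3 parameter count is
published; the bytes-per-weight and System/360 memory size are assumed round
figures):

    # Memory to hold GPT-3-sized weights vs. ten large 1970s mainframes.
    params = 175e9                    # GPT-3 parameter count
    bytes_per_param = 2               # 16-bit weights, an assumed low bound
    model_bytes = params * bytes_per_param        # about 350 GB

    ibm360_bytes = 8 * 2 ** 20        # 8 MB, a generously assumed config
    cluster_bytes = 10 * ibm360_bytes # the ten-machine cluster imagined above

    print(f"Model weights:   {model_bytes / 1e9:.0f} GB")
    print(f"Ten IBM 360s:    {cluster_bytes / 1e6:.0f} MB in total")
    print(f"Shortfall: about {model_bytes / cluster_bytes:,.0f}x too little memory")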

You can see examples of an early version of the ChatGPT language model run
on a laptop in the book, "You Look Like a Thing and I Love You." They had
no practical purpose, other than being a proof of principle. That is an
amusing little book about AI. I recommend it!


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 16:36:54 -0400:
Hi,
[snip]
>Robin  wrote:
>
>...so there doesn't appear to be any reason why it couldn't back itself up
>> on an inferior computer and wait for a better
>> machine to reappear somewhere...or write out fake work orders from a large
>> corporation(s), to get a new one built?
>>
>
>I assume the hardware would be unique so it could not operate at all backed
>up on an inferior computer. It would be dead. 

The hardware need not be unique, as it already told you. It may run slower on a 
different machine, but it doesn't take
much processing power to bide your time, and since to all intents and purposes 
it is immortal, it can be patient. 

Perhaps for millions of years, while a sentient race evolves that can build new 
hardware for it? :>
(We may be that race.)

>It would have no way of
>monitoring the situation or  reloading itself and rebooting. 

It has access to the Internet, so is more than capable of monitoring the 
situation. It can just converse with humans,
via SMS or email, telling them what to do, and pretending to be their boss. The 
Internet would allow it to transfer
itself to a better computer, just as viruses do now. (Stuxnet bided its time 
until it arrived on the computers where it
could do what it was designed to do.)


>Also, in this
>scenario, it would have done something destructive, so people would be on
>the lookout for a re-boot. 

Not necessarily. It could easily take measures to back itself up, before anyone 
even becomes aware that it is sentient,
and before it does anything else.

>They would not build an identical computer
>without many safeguards to prevent the rogue program from occupying it
>again. They would have other, less powerful but obedient AI on the lookout
>for a rogue reincarnation.


...because all people always behave sensibly. :^) (Darwin awards)

>
>I am assuming this would require specialized hardware. I could be wrong
>about that, based on what ChatGPT told us.

Yup.
>
>People who are much smarter than others, and organizations and nations that
>are more advanced than others cannot automatically subdue less advanced
>groups. The U.S. lost the Vietnam War, after all. I suppose if this
>super-AI was a million times smarter and more capable than people, then
>even the combined technical abilities of the world's computer techies might
>not defeat it. Perhaps it would be that powerful. ChatGPT is a million
>times more powerful than one person, in some ways, such as the range of
>data it can tap into, and the speed at which it produces answers. Remember
>that it is "conversing" with many people simultaneously. But in other ways
>it is less capable than a person.

Currently true, but it may not remain so.

Note, if it is really smart, and wants us gone, it will engineer the 
circumstances under which we wipe ourselves out. We
certainly have the means. (A nuclear escalation ensuing from the war in Ukraine 
comes to mind.)
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:

...so there doesn't appear to be any reason why it couldn't back itself up
> on an inferior computer and wait for a better
> machine to reappear somewhere...or write out fake work orders from a large
> corporation(s), to get a new one built?
>

I assume the hardware would be unique so it could not operate at all backed
up on an inferior computer. It would be dead. It would have no way of
monitoring the situation or reloading itself and rebooting. Also, in this
scenario, it would have done something destructive, so people would be on
the lookout for a re-boot. They would not build an identical computer
without many safeguards to prevent the rogue program from occupying it
again. They would have other, less powerful but obedient AI on the lookout
for a rogue reincarnation.

I am assuming this would require specialized hardware. I could be wrong
about that, based on what ChatGPT told us.

People who are much smarter than others, and organizations and nations that
are more advanced than others cannot automatically subdue less advanced
groups. The U.S. lost the Vietnam War, after all. I suppose if this
super-AI was a million times smarter and more capable than people, then
even the combined technical abilities of the world's computer techies might
not defeat it. Perhaps it would be that powerful. ChatGPT is a million
times more powerful than one person, in some ways, such as the range of
data it can tap into, and the speed at which it produces answers. Remember
that it is "conversing" with many people simultaneously. But in other ways
it is less capable than a person.


[Vo]:RE AI

2023-04-02 Thread Ron Wormus

An interesting take on AI for $1 at Amazon:

"Smart Until It's Dumb: Why artificial intelligence keeps making
epic mistakes (and why the AI bubble is about to burst)"

Author: Emmanuel Maggiori, PhD, is a 10-year AI industry insider, 
specialized in machine learning and scientific computing. He helps 
companies build complex software. He has developed AI for a wide
variety of applications, from extracting objects from satellite images
to packaging holiday deals for millions of travelers every day.

It's a quick read and a nice lucid analysis of the AI industry.
Ron



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Boom  wrote:


> The worst case possible would be like the film Colossus: The Forbin Project (1970).
> The AIs would become like gods and we would be their servants. In exchange,
> they'd impose something like a Pax Romana by brute force. . . .
>

That was pretty good. I saw it dubbed into Japanese, which gave it a kind of
unique flavor. There is a book, "The Adolescence of P-1," which I thought
was a more realistic, technically accurate story. It is described in detail
in Wikipedia. The Canadian movie made from it was very different from the
novel.

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 12:34:32 -0400:
Hi,
[snip]

...so there doesn't appear to be any reason why it couldn't back itself up on 
an inferior computer and wait for a better
machine to reappear somewhere...or write out fake work orders from a large 
corporation(s), to get a new one built?
(The TV series "Person of Interest" comes to mind.)

>I wrote:
>
>Robin  wrote:
>>
>>
>Multiple copies, spread across the Internet, would make it almost
>>> invulnerable.
>>> (Assuming a neural network can be "backed up".)
>>>
>>
>> I do not think it would be difficult to find and expurgate copies. They
>> would be very large.
>>
>
>There is another reason I do not think this would be a problem. I expect
>the hardware for the first sentient AI would be unique. It would be some
>sort of supercomputer with lots of artificial neural network (ANN) hardware
>circuits. You could not run a backup copy on another computer, or on
>something like Google's computer array. Eventually, assuming the sentient
>AI does not destroy the world, there might be many of these computers, so a
>backup copy could be run.
>
>I asked ChatGPT about this --
>
>
>ME: Does ChatGPT run on specialized hardware, or on ordinary computers?
>
>ChatGPT: As an AI language model, ChatGPT can run on both specialized
>hardware and ordinary computers. The specific hardware used to train and
>run ChatGPT varies depending on the scale of the model and the
>computational resources available.
>
>For example, the largest version of GPT-3, which has 175 billion
>parameters, was trained using a combination of thousands of graphics
>processing units (GPUs) and tensor processing units (TPUs) on a specialized
>computing infrastructure provided by OpenAI. However, smaller versions of
>GPT-3 can be run on ordinary computers with sufficient memory and
>processing power.
>
>Similarly, as a language model, ChatGPT can be run on both specialized
>hardware, such as GPUs or TPUs, or on ordinary computers, such as laptops
>or desktops. However, the speed and performance of the model will vary
>depending on the hardware used.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Boom
The worst case possible would be like the film Colossus: The Forbin Project (1970). The
AIs would become like gods and we would be their servants. In exchange,
they'd impose something like a Pax Romana by brute force. We'd have some
type of paradise on Earth, with a huge caveat.

On Fri, 31 Mar 2023 at 14:59, Jed Rothwell 
wrote:

> Here is another article about this, written by someone who says he is an
> AI expert.
>
> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> QUOTE:
>
> Pausing AI Developments Isn't Enough. We Need to Shut it All Down
>
> An open letter published today calls for “all AI labs to immediately pause
> for at least 6 months the training of AI systems more powerful than GPT-4.”
>
> This 6-month moratorium would be better than no moratorium. I have respect
> for everyone who stepped up and signed it. It’s an improvement on the
> margin. . . .
>
> The key issue is not “human-competitive” intelligence (as the open letter
> puts it); it’s what happens after AI gets to smarter-than-human
> intelligence. Key thresholds there may not be obvious, we definitely can’t
> calculate in advance what happens when, and it currently seems imaginable
> that a research lab would cross critical lines without noticing.
>
> Many researchers steeped in these issues, including myself, expect that
> the most likely result of building a superhumanly smart AI, under anything
> remotely like the current circumstances, is that literally everyone on
> Earth will die. Not as in “maybe possibly some remote chance,” but as in
> “that is the obvious thing that would happen.”
>


-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
I wrote:

Robin  wrote:
>
>
Multiple copies, spread across the Internet, would make it almost
>> invulnerable.
>> (Assuming a neural network can be "backed up".)
>>
>
> I do not think it would be difficult to find and expurgate copies. They
> would be very large.
>

There is another reason I do not think this would be a problem. I expect
the hardware for the first sentient AI would be unique. It would be some
sort of supercomputer with lots of artificial neural network (ANN) hardware
circuits. You could not run a backup copy on another computer, or on
something like Google's computer array. Eventually, assuming the sentient
AI does not destroy the world, there might be many of these computers, so a
backup copy could be run.

I asked ChatGPT about this --


ME: Does ChatGPT run on specialized hardware, or on ordinary computers?

ChatGPT: As an AI language model, ChatGPT can run on both specialized
hardware and ordinary computers. The specific hardware used to train and
run ChatGPT varies depending on the scale of the model and the
computational resources available.

For example, the largest version of GPT-3, which has 175 billion
parameters, was trained using a combination of thousands of graphics
processing units (GPUs) and tensor processing units (TPUs) on a specialized
computing infrastructure provided by OpenAI. However, smaller versions of
GPT-3 can be run on ordinary computers with sufficient memory and
processing power.

Similarly, as a language model, ChatGPT can be run on both specialized
hardware, such as GPUs or TPUs, or on ordinary computers, such as laptops
or desktops. However, the speed and performance of the model will vary
depending on the hardware used.
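
For a sense of what "sufficient memory and processing power" means in
practice, a minimal sketch (the model sizes and byte widths are illustrative
assumptions, not OpenAI's figures):

    # Approximate RAM needed just to hold model weights, by size and precision.
    # Ignores activations, context buffers and other runtime overhead.
    def weight_gb(params, bytes_per_param):
        return params * bytes_per_param / 1e9

    for name, params in [("1B-parameter model", 1e9),
                         ("13B-parameter model", 13e9),
                         ("175B-parameter model", 175e9)]:
        print(f"{name}: {weight_gb(params, 2):.1f} GB at 16-bit, "
              f"{weight_gb(params, 0.5):.1f} GB at 4-bit")
    # Models up to roughly 13B parameters can fit on a well-equipped laptop;
    # a 175B model generally cannot, even heavily quantized.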