From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 04, 2003 9:33 AM
Subject: RE: [agi] Playing with fire
>
> David,
>
> What you're suggesting is closely related to the "global brain" idea
>
> http://pespmc1.vub.ac.be/GBRAINREF.html
>
> Francis He
Behalf Of C. David Noziglia
> Sent: Tuesday, March 04, 2003 8:39 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Playing with fire
>
>
> It seems to me that a lot of the us-against-them-or-it flavor of this
> conversation is based on the assumption that both machine AI and hum
d Wyble" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, March 03, 2003 5:47 PM
Subject: Re: [agi] Playing with fire
>
> > Extra credit:
> > I've just read the Crichton novel PREY. Totally transparent movie-script but
> > a perfect textbook on how
Alan,
I've asked you repeatedly not to make insulting or anti-Semitic comments on
this list. Yet you keep referring to Eliezer as "the
rabbi" and making other similar choice comments. This is not good!
As list moderator, I am hereby forbidding you to post to the AGI list
> Philip: I personally think humans as a society are
> capable of saving themselves from their own individual and collective
> stupidity. I've worked explicitly on this issue for 30 years and still
> retain some optimism on the subject.
> Colin: I'm with Pei Wang. Let's explore and deal with it.
OK
should be able to handle AGI properly for
our benefit, even though there is always a danger of mishandling, like any
technology.
Pei
- Original Message -
From:
Philip Sutton
To: [EMAIL PROTECTED]
Sent: Monday, March 03, 2003 9:43 PM
Subject: Re: [agi] Playing with fire
Hi Pei / Colin,

> Pei: This is the conclusion that I have been most afraid of from this
> "Friendly AI" discussion. Yes, AGI can be very dangerous, and I don't
> think any of the solutions proposed so far can eliminate the danger
> completely. However I don't think this is a valid reason t
One thing I should add:
It's the same hubris I mentioned in my previous message that prompted us to send out
satellites effectively bearing our home address and basic physiology on a plaque in
the hope that aliens would find it and come to us. Even NASA scientists seem to have
no fear of anything
> Extra credit:
> I've just read the Crichton novel PREY. Totally transparent movie-script but
> a perfect textbook on how to screw up really badly. Basically the formula
> is 'let the military finance it'. The general public will see this
> inevitable movie and we will be drawn towards the mor
> (1) Since we cannot accurately predict the future implications of our
> actions, almost all research can lead to deadly results --- just see
> what has been used as weapons in the current world. If we ask for a
> guarantee of safety before doing research, then we cannot do anything. I don't
>
> I would point out that our legal frameworks are designed under the
> assumption that there is rough parity in intelligence between all
> actors in the system. The system breaks badly when you have extreme
> disparities in the intelligence of the actors because you are breaking
> one of the unde
On Mon, 2003-03-03 at 15:40, Alan Grimes wrote:
> I am OK with _INDEPENDENT_ AI so long as it exists within the same legal
> framework as any human...
I would point out that our legal frameworks are designed under the
assumption that there is rough parity in intelligence between all actors
in th
Anand wrote:
> The following are some of the reasons why I believe Friendly AI is the
> safest advanced technology to develop *first*, and thus the best
> advanced technology to selectively support *first* (the main source of
> my views is Eli's CFAI analysis of a moral AI's theoretical possibili
Over time, human societies move forward, for benefit or harm,
though usually for benefit. It's important for us to try to selectively
choose how we *positively* move forward, rather than attempting to stand
still or move backwards. Each of us can use our resources (time, contacts,
money, intelli
From: "Philip Sutton" <[EMAIL PROTECTED]>
> > Ben: I don't know how society is going to react to the creation of a
> > super-smart AGI. But clearly one thing it depends on is the rate of
> > advance. If Eliezer is right, the transition from a pretty smart AGI
> > to a superintelligent AGI will b
Ben,
> Ben: I don't know how society is going to react to the creation of a
> super-smart AGI. But clearly one thing it depends on is the rate of
> advance. If Eliezer is right, the transition from a pretty smart AGI
> to a superintelligent AGI will be quick enough that the slow
> mechanisms of