Re: [agi] Playing with fire

2003-03-04 Thread C. David Noziglia
From: <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Tuesday, March 04, 2003 9:33 AM Subject: RE: [agi] Playing with fire > David, what you're suggesting is closely related to the "global brain" idea > http://pespmc1.vub.ac.be/GBRAINREF.html > Francis Heylighen...

RE: [agi] Playing with fire

2003-03-04 Thread Ben Goertzel
...On Behalf Of C. David Noziglia Sent: Tuesday, March 04, 2003 8:39 AM To: [EMAIL PROTECTED] Subject: Re: [agi] Playing with fire > It seems to me that a lot of the us-against-them-or-it flavor of this conversation is based on the assumption that both machine AI and human...

Re: [agi] Playing with fire

2003-03-04 Thread C. David Noziglia
From: "Brad Wyble" <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Monday, March 03, 2003 5:47 PM Subject: Re: [agi] Playing with fire > > Extra credit: I've just read the Crichton novel PREY. Totally transparent movie-script, but a perfect textbook on how...

FW: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Ben Goertzel
Alan, I've asked you repeatedly not to make insulting or anti-Semitic comments on this list. Yet you keep referring to Eliezer as "the rabbi" and making other similar choice comments. This is not good! As list moderator, I am hereby forbidding you to post to the AGI list...

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
> Philip: I personally think humans as a society are capable of saving themselves from their own individual and collective stupidity. I've worked explicitly on this issue for 30 years and still retain some optimism on the subject. > Colin: I'm with Pei Wang. Let's explore and deal with it. OK...

Re: [agi] Playing with fire

2003-03-03 Thread Pei Wang
...should be able to handle AGI properly for our benefit, even though there is always a danger of mishandling, like any technology. Pei - Original Message - From: Philip Sutton To: [EMAIL PROTECTED] Sent: Monday, March 03, 2003 9:43 PM Subject: Re: [agi] Playing with fire

Re: [agi] Playing with fire

2003-03-03 Thread Philip Sutton
Hi Pei / Colin, > Pei: This is the conclusion that I have been most afraid of from this "Friendly AI" discussion. Yes, AGI can be very dangerous, and I don't think any of the solutions proposed so far can eliminate the danger completely. However, I don't think this is a valid reason to...

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
One thing I should add: it's the same hubris I mentioned in my previous message that prompted us to send out satellites effectively bearing our home address and basic physiology on a plaque, in the hope that aliens would find it and come to us. Even NASA scientists seem to have no fear of anything...

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
> Extra credit: I've just read the Crichton novel PREY. Totally transparent movie-script, but a perfect textbook on how to screw up really badly. Basically the formula is 'let the military finance it'. The general public will see this inevitable movie and we will be drawn towards the mor...

RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales
> (1) Since we cannot accurately predict the future implications of our actions, almost all research can lead to deadly results --- just see what has been used as weapons in the current world. If we ask for a guarantee of safety before doing research, then we cannot do anything. I don't...

Re: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Alan Grimes
> I would point out that our legal frameworks are designed under the assumption that there is rough parity in intelligence between all actors in the system. The system breaks badly when you have extreme disparities in the intelligence of the actors because you are breaking one of the underlying...

Re: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread James Rogers
On Mon, 2003-03-03 at 15:40, Alan Grimes wrote: > I am OK with _INDEPENDENT_ AI so long as it exists within the same legal framework as any human... I would point out that our legal frameworks are designed under the assumption that there is rough parity in intelligence between all actors in the system...

Re: Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Alan Grimes
Anand wrote: > The following are some of the reasons why I believe Friendly AI is the safest advanced technology to develop *first*, and thus the best advanced technology to selectively support *first* (the main source of my views is Eli's CFAI analysis of a moral AI's theoretical possibility...

Selectively supporting the safest advanced tech [Re: [agi] Playing with fire]

2003-03-03 Thread Anand
Over time, human societies move forward, for benefit or harm, though usually for benefit. It's important for us to try to choose selectively how we *positively* move forward, rather than attempting to stand still or move backwards. Each of us can use our resources (time, contacts, money, intelligence...

Re: [agi] Playing with fire

2003-03-03 Thread Pei Wang
From: "Philip Sutton" <[EMAIL PROTECTED]> > > Ben: I don't know how society is going to react to the creation of a > > super-smart AGI. But clearly one thing it depends on is the rate of > > advance. If Eliezer is right, the transition from a pretty smart AGI > > to a superintelligent AGI will b

[agi] Playing with fire

2003-03-03 Thread Philip Sutton
Ben, > Ben: I don't know how society is going to react to the creation of a super-smart AGI. But clearly one thing it depends on is the rate of advance. If Eliezer is right, the transition from a pretty smart AGI to a superintelligent AGI will be quick enough that the slow mechanisms of...