Re: [singularity] The humans are dead...

2007-05-27 Thread Samantha Atkins
On May 27, 2007, at 5:48 PM, Stathis Papaioannou wrote: On 28/05/07, Shane Legg wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Samantha Atkins
On May 27, 2007, at 12:37 PM, Abram Demski wrote: Alright, that's sensible. The reason I asked was that it seemed to me that it would need to keep humans around to build hardware, feed it mathematical info, et cetera. It is not at all sensible. Today we have no real idea how to build a ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Richard Loosemore
Joshua Fox wrote: Abram, Let's say that the builders want to keep things safe and simple for starters, and concentrate on the best possible AGI theorem-prover, rather than some complex do-gooding machine. The best way for the machine to achieve its assigned goal is to improve not only its own software but also ...

Re: [singularity] The humans are dead...

2007-05-27 Thread Jef Allbright
On 5/27/07, Stathis Papaioannou wrote: On 28/05/07, Shane Legg wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. ...

Re: [singularity] The humans are dead...

2007-05-27 Thread Stathis Papaioannou
On 28/05/07, Shane Legg wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing a super intelligent ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Abram Demski
Alright, that's sensible. The reason I asked was that it seemed to me that it would need to keep humans around to build hardware, feed it mathematical info, et cetera. On 5/27/07, Joshua Fox wrote: Abram, Let's say that the builders want to keep things safe and simple for starters ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Joshua Fox
Abram, Let's say that the builders want to keep things safe and simple for starters, and concentrate on the best possible AGI theorem-prover, rather than some complex do-gooding machine. The best way for the machine to achieve its assigned goal is to improve not only its own software but also ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Abram Demski
Joshua Fox, could you give an example scenario of how an AGI theorem-prover would wipe out humanity? On 5/27/07, Richard Loosemore wrote: Joshua Fox wrote: [snip] When you understand the following, you will have surpassed most AI experts in understanding the risks: If the first AGI is given or decides to try for almost any goal ...

Re: [singularity] The humans are dead...

2007-05-27 Thread Richard Loosemore
Shane Legg wrote: http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing a super intelligent ...

[singularity] The humans are dead...

2007-05-27 Thread Shane Legg
http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing a super intelligent ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Richard Loosemore
Joshua Fox wrote: [snip] When you understand the following, you will have surpassed most AI experts in understanding the risks: If the first AGI is given or decides to try for almost any goal, including a simple "harmless" goal like being as good as possible at proving theorems, then humanity ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread John Ku
On 5/27/07, Stathis Papaioannou wrote: Every society has sociopaths, and generally recognises them as such and deals with them. The problem is that through most of history, "normal" people have thought it was OK to treat slaves, women, Jews, homosexuals etc. in ways considered ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread Stathis Papaioannou
On 27/05/07, John Ku wrote: On 5/26/07, Stathis Papaioannou wrote: What if the normative governance system includes doing terrible things? I think for some people, namely sociopaths, it probably sometimes does. Now evolution has given most of us ...

Re: [singularity] "Friendly" question...

2007-05-27 Thread John Ku
I would feel relieved if there were miscommunication between us. I was mostly concerned with the issue of what we *should* care about and what cares we *should* be acting upon. If you are simply talking about some cognitive biases we have that you concede ought to be overcome (or that we ought to ...