On May 27, 2007, at 5:48 PM, Stathis Papaioannou wrote:
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc. (use whatever measure you
prefer) than a mouse.
On May 27, 2007, at 12:37 PM, Abram Demski wrote:
Alright, that's sensible. The reason I asked was because it seemed
to me that it would need to keep humans around to build hardware,
feed it mathematical info, et cetera.
It is not at all sensible. Today we have no real idea how to build a [...]
Joshua Fox wrote:
Abram,
Let's say that the builders want to keep things safe and simple for
starters, and concentrate on the best possible AGI theorem-prover,
rather than some complex do-gooding machine.
The best way for the machine to achieve its assigned goal is to improve
not only its own software but also its hardware.
On 5/27/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:
> Which got me thinking. It seems reasonable to think that killing a
> human is worse than killing a mouse because a human is more
> intelligent/complex/conscious/...etc. (use whatever measure you
> prefer) than a mouse.
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc. (use whatever measure you
prefer) than a mouse.
So, would killing a super intelligent machine be even worse than
killing a human?
Alright, that's sensible. The reason I asked was because it seemed to me
that it would need to keep humans around to build hardware, feed it
mathematical info, et cetera.
On 5/27/07, Joshua Fox <[EMAIL PROTECTED]> wrote:
Abram,
Let's say that the builders want to keep things safe and simple for
starters, and concentrate on the best possible AGI theorem-prover,
rather than some complex do-gooding machine.
Abram,
Let's say that the builders want to keep things safe and simple for
starters, and concentrate on the best possible AGI theorem-prover, rather
than some complex do-gooding machine.
The best way for the machine to achieve its assigned goal is to improve
not only its own software but also its hardware.
Joshua Fox, could you give an example scenario of how an AGI theorem-prover
would wipe out humanity?
On 5/27/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Joshua Fox wrote:
> [snip]
> When you understand the following, you will have surpassed most AI
> experts in understanding the risks: If the first AGI is given, or
> decides to try for, almost any goal, including a simple "harmless"
> goal like being as good as possible at proving theorems, then
> humanity is in danger.
Shane Legg wrote:
http://www.youtube.com/watch?v=WGoi1MSGu64
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc. (use whatever measure you
prefer) than a mouse.
So, would killing a super intelligent machine be even worse than
killing a human?
http://www.youtube.com/watch?v=WGoi1MSGu64
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc. (use whatever measure you
prefer) than a mouse.
So, would killing a super intelligent machine be even worse than
killing a human?
Joshua Fox wrote:
[snip]
When you understand the following, you will have surpassed most AI
experts in understanding the risks: If the first AGI is given, or
decides to try for, almost any goal, including a simple "harmless"
goal like being as good as possible at proving theorems, then
humanity is in danger.
On 5/27/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
Every society has sociopaths, and generally recognises them as such and
deals with them. The problem is that through most of history, "normal"
people have thought it was OK to treat slaves, women, Jews, homosexuals etc.
in ways we would now consider abhorrent.
On 27/05/07, John Ku <[EMAIL PROTECTED]> wrote:
On 5/26/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
>
> What if the normative governance system includes doing terrible things?
>
I think for some people, namely sociopaths, it probably sometimes does.
Now evolution has given most of us empathy [...]
I would feel relieved if there was a miscommunication between us.
I was mostly concerned with the issue of what we *should* care about and
which cares we *should* be acting upon. If you are simply talking about
some cognitive biases we have that you concede ought to be overcome (or
that we ought to [...])