Re: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-31 Thread Roland Pihlakas
On 5/31/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote: On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands of small constraints. A system that is constrain

Re: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-31 Thread Chuck Esterbrook
On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands of small constraints. A system that is constrained in a thousand different directions simply cannot fail in a
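The excerpt above gives the general shape of Loosemore's proposal: a motivational system gated by a very large number of small, independent constraints rather than a single overriding goal. As a rough, purely illustrative sketch of that general idea (not his actual design; every name and threshold below is hypothetical), the gate might look like this:

    # Illustrative sketch only: an action is allowed only while it satisfies
    # every member of a large set of small, independent constraints, so no
    # single failure mode can dominate the system's behavior.
    from typing import Callable, Dict, Iterable, List

    Constraint = Callable[[Dict[str, float]], bool]  # one small, independent check

    def action_permitted(action: Dict[str, float], constraints: Iterable[Constraint]) -> bool:
        """True only if the proposed action violates none of the constraints."""
        return all(check(action) for check in constraints)

    # In the proposal there would be thousands of these tiny checks, not three.
    constraints: List[Constraint] = [
        lambda a: a.get("estimated_harm", 0.0) < 0.01,
        lambda a: a.get("irreversibility", 0.0) < 0.1,
        lambda a: a.get("resource_use", 0.0) < 1.0,
        # ... thousands more small constraints ...
    ]

    print(action_permitted({"estimated_harm": 0.0, "irreversibility": 0.0, "resource_use": 0.2}, constraints))  # True

The point of the all(...) gate is that weakening or subverting any single constraint still leaves the rest of the base in force.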

RE: [singularity] The humans are dead...

2007-05-30 Thread Keith Elis
Samantha Atkins wrote: > Keith Elis wrote: >> There is some unique point in the space of moral calculations where the potential existence of billions of superintelligences outweighs the current existence of one. Not knowing where this point lies, I have to generate my

RE: Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-30 Thread Keith Elis
Thanks for your response, Richard. I'm not on equal footing when it comes to cognitive science, but I do want to comment on one idea. Richard Loosemore wrote: > Instead, what you do is build the motivational system in such a way that it must always operate from a massive base of thousands o

Re: SPAM: Re: SPAM: Re: [singularity] The humans are dead...

2007-05-30 Thread Samantha Atkins
they will do if they are true intellectuals. Jon From: Samantha Atkins [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 11:28 PM To: singularity@v2.listbox.com Subject: SPAM: Re: SPAM: Re: [singularity] The humans are dead... On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But

RE: SPAM: Re: SPAM: Re: [singularity] The humans are dead...

2007-05-30 Thread Jonathan H. Hinck
MAIL PROTECTED] Sent: Tuesday, May 29, 2007 11:28 PM To: singularity@v2.listbox.com Subject: SPAM: Re: SPAM: Re: [singularity] The humans are dead... On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But does there need to be consensus among the experts for a public issue to be r

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 7:03 PM, Keith Elis wrote: I understand that you value intelligence and capability, but I can't see my way to the destruction of humanity from there. I was quite careful to say that that was not what I would choose. The existence of superintelligence (a fact of the

Re: SPAM: Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But does there need to be consensus among the experts for a public issue to be raised? Regarding other topics that have been on the public discussion plate for a while, how often has this been the case? Perhaps with regard to issues s

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 12:43 PM, Jonathan H. Hinck wrote: Thanks for your response (below). To clarify, I wasn't talking about the need to initiate a public policy (at least not at the front end of the process). Rather, I was talking about the need for an open dialog and discussion, such as we n

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
l and material benefits to all regardless of employment in one great leap. So what is the grading along the way? From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 10:02 AM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Sorry, me again.

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
be almost completely detrimental where there are any results at all. - samantha Jon -Original Message- From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 9:15 AM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Is a broad-based polit

RE: [singularity] The humans are dead...

2007-05-29 Thread Keith Elis
Samantha Atkins wrote: > I very very much mind. But would I sacrifice such a vast intelligence to protect humanity? That is a highly rhetorical question I hope to never need to answer in reality. Whatever my answer might be it would not be automatic. If I knew beyond a sha

RE: SPAM: Re: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
ial classism/elitism than anything else? Jon From: Russell Wallace [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 4:40 PM To: singularity@v2.listbox.com Subject: SPAM: Re: [singularity] The humans are dead... On 5/29/07, Jonathan H. Hinck

Re: [singularity] The humans are dead...

2007-05-29 Thread Russell Wallace
On 5/29/07, Jonathan H. Hinck <[EMAIL PROTECTED]> wrote: There should therefore be more "politics" posts and discussions, such as we are having now. *laughs* Take a look back over this thread; there's no agreement whatsoever on any aspect of the topic, or even on meta-topics like whether the

RE: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
. There should therefore be more "politics" posts and discussions, such as we are having now. Jon -Original Message- From: Mark [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 2:22 PM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Jon,

RE: [singularity] The humans are dead...

2007-05-29 Thread Mark
Jon, regarding your politics post - My impression is that, as a general principle, proposals for radical change, of almost any kind, are not well-received by the general public, and that such change is more likely to occur if its ideology, presentation, and development are broken into gradua

RE: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
e the rubber hits the road. Jon -Original Message- From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 9:15 AM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Is a broad-based political/social movement to (1) raise consciousne

Re: [singularity] The humans are dead...

2007-05-29 Thread Jef Allbright
On 5/29/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: On 29/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote: > I. Any instance of rational choice is about an agent acting so as to promote its own present values into the future. The agent has a model of its reality, and this model will

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
[mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 9:15 AM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Is a broad-based political/social movement to (1) raise consciousness regarding the potential of A.I. and its future implications and to, in turn, (2

RE: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
-Original Message- From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 9:15 AM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Is a broad-based political/social movement to (1) raise consciousness regarding the potential of A.I. and its

RE: [singularity] The humans are dead...

2007-05-29 Thread Natasha Vita-More
Ben Goertzel wrote: > But once a powerful AGI is actually created by person X, the prior mailing list posts of X are likely to be scrutinized, and interpreted by people whose points of view are as far from transhumanism as you can possibly imagine ... but who may have plenty of power in

RE: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
.listbox.com Subject: RE: [singularity] The humans are dead... Is a broad-based political/social movement to (1) raise consciousness regarding the potential of A.I. and its future implications and to, in turn, (2) stimulate public discussion about this whole issue possible at this time? Or is

RE: [singularity] The humans are dead...

2007-05-29 Thread Jonathan H. Hinck
disregard for the "geeks") for this to be possible? (Please, please, thoughts anyone?) Jon -Original Message- From: Keith Elis [mailto:[EMAIL PROTECTED] Sent: Monday, May 28, 2007 9:19 PM To: singularity@v2.listbox.com Subject: RE: [singularity] The humans are dead... Ben Goer

Safe forms of AGI [WAS Re: [singularity] The humans are dead...]

2007-05-29 Thread Richard Loosemore
Keith Elis wrote: Answer me this, if you dare: Do you believe it's possible to design an artificial intelligence that won't wipe out humanity? Yes, most certainly I do. I can hardly stress this enough. Did you read my previous post on the subject of motivation systems? This contained m

Re: [singularity] The humans are dead...

2007-05-29 Thread Richard Loosemore
Keith Elis wrote: Richard Loosemore wrote: > Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary". I'm on your side, too, Richard. I understand this, and I

Re: [singularity] The humans are dead...

2007-05-29 Thread Stathis Papaioannou
On 29/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote: I. Any instance of rational choice is about an agent acting so as to promote its own present values into the future. The agent has a model of its reality, and this model will contain representations of the perceived values of other agents, b

Re: [singularity] The humans are dead...

2007-05-29 Thread Shane Legg
Bill, Our future is already decided behind closed doors. Or maybe you don't understand how politics works? I never said anything about how things currently work. I am asking whether open discussion or behind closed doors is a better way to address these difficult problems. Which would you

Re: [singularity] The humans are dead...

2007-05-29 Thread BillK
On 5/29/07, Shane Legg wrote: But then what happens? Potentially very important issues, indeed probably the most important ones since they are likely to be some of the most "scary", disappear out of the scope of open discussion. Instead these issues get worked through in private behind closed

Re: [singularity] The humans are dead...

2007-05-29 Thread Shane Legg
Keith, Shane, you might not believe this, but I'm on your side. You might be on my side, but are you on humanity's side? What I mean is: Sure, if I avoid debates about issues that I think are going to be very important then that might save my skin in the future if somebody wants to take my wor

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: We are actually in some functional agreement here. My own efforts (though not so much in the day job currently) will be directed toward IA via supportive software (some AI flavored, some not) for the time-being (this side of financial inde

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 9:10 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: Without AI or such IA to be almost the same thing I don't have much reason to believe humanity will see 3007. *nods* Or rather - in my opinion - it probably will last that long eith

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: Without AI or such IA to be almost the same thing I don't have much reason to believe humanity will see 3007. *nods* Or rather - in my opinion - it probably will last that long either way, but the chance of longer term survival might hav

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 8:11 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: I think you know well enough that most of us who have considered such things for significant time have done considerable work to get beyond "metal men". Yep. So had I. Then I discov

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: I think you know well enough that most of us who have considered such things for significant time have done considerable work to get beyond "metal men". Yep. So had I. Then I discovered considerable work is nowhere near enough, alas. A

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 6:52 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: So you are happily provincial in this respect. Addendum: I think it is my view that is unprovincial. We're programmed to think intelligence = humanlike, because for the last million

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 6:23 PM, Keith Elis wrote: Samantha Atkins wrote: On what basis is that answer correct? Do you mean factual in that it is the choice that you would make and that you believe proper? Or are you saying it is more objectively correct? If so, on what basis? Mere assertion an

RE: [singularity] The humans are dead...

2007-05-28 Thread Keith Elis
Ben Goertzel wrote: > Right now, no one cares what a bunch of geeks and freaks say about AGI and the future of humanity. But once a powerful AGI is actually created by person X, the prior mailing list posts of X are likely to be scrutinized, and interpreted by people whose points of

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: So you are happily provincial in this respect. Addendum: I think it is my view that is unprovincial. We're programmed to think intelligence = humanlike, because for the last million years the only forms of general intelligence in existe

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 5:44 PM, Keith Elis wrote: Richard Loosemore wrote: Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary". I'm on your side, too, Richard. Answer

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 4:29 PM, Joel Pitt wrote: On 5/29/07, Keith Elis <[EMAIL PROTECTED]> wrote: In the end, my advice is pragmatic: Anytime you post publicly on topics such as these, where the stakes are very, very high, ask yourself, Can I be taken out of context here? Is this position, wh

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: What do you mean that you don't believe in superhuman intelligent machines? I mean in the same way I don't believe in machines having bird-level flight; I wrote a longer explanation elsethread and posted it to the canonizer the other day,

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 3:32 PM, Russell Wallace wrote: On 5/28/07, Shane Legg <[EMAIL PROTECTED]> wrote: If one accepts that there is, then the question becomes: Where should we put a super human intelligent machine on the list? If it's not at the top, then where is it and why? I don't claim to

RE: [singularity] The humans are dead...

2007-05-28 Thread Keith Elis
Samantha Atkins wrote: > On what basis is that answer correct? Do you mean factual in that it is the choice that you would make and that you believe proper? Or are you saying it is more objectively correct? If so, on what basis? Mere assertion and braggadocio will not do for

Re: [singularity] The humans are dead...

2007-05-28 Thread Benjamin Goertzel
Unfortunately, I have come to agree with Keith on this issue. Discussing issues like this [comparative moral value of humans versus superhuman AGIs] on public mailing lists seems fraught with peril for anyone who feels they have a serious chance of actually creating AGI. Words are slippery, and

Re: [singularity] The humans are dead...

2007-05-28 Thread Richard Loosemore
Shane Legg wrote: On 5/27/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: What possible reason do we have for assuming that the "badness" of killing a creature is a linear, or even a monotonic, function of the intelligence/complexity/consciousness of th

RE: [singularity] The humans are dead...

2007-05-28 Thread Keith Elis
Richard Loosemore wrote: > Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary". I'm on your side, too, Richard. Answer me this, if you dare: Do you believe it'

Re: [singularity] The humans are dead...

2007-05-28 Thread Joel Pitt
On 5/29/07, Keith Elis <[EMAIL PROTECTED]> wrote: In the end, my advice is pragmatic: Anytime you post publicly on topics such as these, where the stakes are very, very high, ask yourself, Can I be taken out of context here? Is this position, whether devil's advocate or not, going to come back an

RE: [singularity] The humans are dead...

2007-05-28 Thread Keith Elis
Shane Legg wrote: > Are you suggesting that I avoid asking questions that might entail unpleasant answers? Maybe, if we all go around not discussing scary stuff, when super intelligence arrives everything will be just fine? Rather than setting up a website to intimidate people who try to

Re: [singularity] The humans are dead...

2007-05-28 Thread Russell Wallace
On 5/28/07, Shane Legg <[EMAIL PROTECTED]> wrote: If one accepts that there is, then the question becomes: Where should we put a super human intelligent machine on the list? If it's not at the top, then where is it and why? I don't claim to have answers to any of these questions, I'm just wond

Re: [singularity] The humans are dead...

2007-05-28 Thread Shane Legg
On 5/27/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: What possible reason do we have for assuming that the "badness" of killing a creature is a linear, or even a monotonic, function of the intelligence/complexity/consciousness of that creature? You produced two data points on the graph, and

Re: [singularity] The humans are dead...

2007-05-28 Thread Shane Legg
Keith, killing me and the people I care about, I'm probably going to have to do something about you, too, since you're the guy trying to build the damn things. Are you suggesting that I avoid asking questions that might entail unpleasant answers? Maybe, if we all go around not discussing sca

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Keith Elis wrote: Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the correct

Re: [singularity] The humans are dead...

2007-05-28 Thread Richard Loosemore
Keith Elis wrote: Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the correct

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Keith Elis wrote: Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the correct

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Shane Legg wrote: http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing

Re: [singularity] The humans are dead...

2007-05-28 Thread Jef Allbright
On 5/28/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: On 28/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote: > > Before you consider whether killing the machine would be bad, you have to consider whether the machine minds being killed, and how much it minds being killed. You can't

RE: [singularity] The humans are dead...

2007-05-28 Thread Keith Elis
Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the correct answer out there fir

Re: [singularity] The humans are dead...

2007-05-28 Thread Stathis Papaioannou
On 28/05/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: Before you consider whether killing the machine would be bad, you have to consider whether the machine minds being killed, and how much it minds being killed. You can't actually prove that death is bad as a mathematical theorem; it is somet

Re: [singularity] The humans are dead...

2007-05-28 Thread Stathis Papaioannou
On 28/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote: Before you consider whether killing the machine would be bad, you have to consider whether the machine minds being killed, and how much it minds being killed. You can't actually prove that death is bad as a mathematical theorem; it is s

Re: [singularity] The humans are dead...

2007-05-27 Thread Samantha Atkins
On May 27, 2007, at 5:48 PM, Stathis Papaioannou wrote: On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever mea

Re: [singularity] The humans are dead...

2007-05-27 Thread Jef Allbright
On 5/27/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote: > Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use what ev

Re: [singularity] The humans are dead...

2007-05-27 Thread Stathis Papaioannou
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing a super in

Re: [singularity] The humans are dead...

2007-05-27 Thread Richard Loosemore
Shane Legg wrote: http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing

[singularity] The humans are dead...

2007-05-27 Thread Shane Legg
http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use whatever measure you prefer) than a mouse. So, would killing a super intelligent