Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Tom McCabe
These questions, although important, have little to do with the feasibility of FAI. I think we can all agree that the space of possible universe configurations without sentient life of *any kind* is vastly larger than the space of possible configurations with sentient life, and designing an AGI to

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Tom McCabe
--- Samantha Atkins <[EMAIL PROTECTED]> wrote: > On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote: > > We can't "know it" in the sense of a mathematical proof, but it is a trivial observation that out of the bazillions of possible ways to configure matter, only a ridiculou

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Matt Mahoney
I think I am missing something on this discussion of friendliness. We seem to tacitly assume we know what it means to be friendly. For example, we assume that an AGI that does not destroy the human race is more friendly than one that does. We also want an AGI to obey our commands, cure disease,

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Kaj Sotala
On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote: Dividing things into us vs. them, and calling those that side with us friendly seems to be instinctually human, but I don't think that it's a universal. Even then, we are likely to ignore birds, ants that are outside, and other things that

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Samantha Atkins
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote: We can't "know it" in the sense of a mathematical proof, but it is a trivial observation that out of the bazillions of possible ways to configure matter, only a ridiculously tiny fraction are Friendly, and so it is highly unlikely that a selected