Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > > My assumption is friendly AI under the CEV model. Currently, FAI is unsolved. > > CEV only defines the problem of friendliness, not a solution. As I understand it, CEV defines AI as friendly if on average it gives humans what they …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: Suppose that the collective memories of all the humans make up only one billionth of your total memory, like one second of memory out of your human lifetime. Would it make much difference if it was erased …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-27 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Suppose that the collective memories of all the humans make up only one billionth of your total memory, like one second of memory out of your human lifetime. Would it make much difference if it was erased to make …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-26 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious things that would happen in this future is that humans will be able to *choose* what intelligence level to be experiencing, on a day to day basis …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious things that would happen in this future is that humans will be able to *choose* what intelligence level to be experiencing, on a day to day basis …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Richard Loosemore
Matt Mahoney wrote: Richard, I have no doubt that the technological wonders you mention will all be possible after a singularity. My question is about what role humans will play in this. For the last 100,000 years, humans have been the most intelligent creatures on Earth. Our reign will end in a few decades …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
Richard, I have no doubt that the technological wonders you mention will all be possible after a singularity. My question is about what role humans will play in this. For the last 100,000 years, humans have been the most intelligent creatures on Earth. Our reign will end in a few decades. Who i…

RE: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread YOST Andrew
This is a perfect example of how one person comes up with some positive, constructive ideas and then someone else waltzes right in, pays no attention to the actual arguments, pays no attention to the relative probability of different outcomes …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
candice schuster wrote: Hi Richard, Without getting too technical on you...how do you propose implementing these ideas of yours? In what sense? The point is that "implementation" would be done by the AGIs, after we produce a blueprint for what we want. Richard Loosemore

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
This is a perfect example of how one person comes up with some positive, constructive ideas and then someone else waltzes right in, pays no attention to the actual arguments, pays no attention to the relative probability of different outcomes, but just sneers at the whole idea with a …

RE: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread candice schuster
Hi Richard, Without getting too technical on you...how do you propose implementing these ideas of yours? Candice …

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: Let's assume for the moment that the very first AI is safe and friendly, and not an intelligent worm bent on swallowing the Internet. And let's also assume that once this SAFAI starts self-improving, it quickly advances to the point where it …