Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Stan Nilsen
Richard Loosemore wrote: I am not sure I understand. There is every reason to think that "a currently-envisionable AGI would be millions of times "smarter" than all of humanity put together." Simply build a human-level AGI, then get it to bootstrap to a level of, say, a thousand times huma

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Just what do you want out of AGI? Something that thinks like a person or > > something that does what you ask it to? > > Either will do: your suggestion achieves neither. > > If I ask your non-AGI the following questio

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: > I am only saying that I see no particular limitations, given the things > that I know about how to build an AGI. That is the best I can do. Sorry to flood everybody's mailbox today; I will make this my last message. I'm not looking to impose a viewpoint on anybody; you have c

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: Richard Loosemore: > I am not sure I understand. > > There is every reason to think that "a currently-envisionable AGI would > be millions of times "smarter" than all of humanity put together." > > Simply build a human-level AGI, then get it to bootstrap to a level of, >

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: I asked: > Imagine we have an "AGI". What exactly does it do? What *should* it do? Note that I think I roughly understand Matt's vision for this: roughly, it is Google, and it will gradually get better at answering questions and taking commands as more capable systems ar

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: > I am not sure I understand. > > There is every reason to think that "a currently-envisionable AGI would > be millions of times "smarter" than all of humanity put together." > > Simply build a human-level AGI, then get it to bootstrap to a level of, > say, a thousand times human

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Samantha Atkins writes: > Beware the wish granting genie conundrum. Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie to grant, I'm wondering specifically what we want from the machines that Ben and Richard and Matt and so on are thinking about and building. Si

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: Matt Mahoney writes: > Just what do you want out of AGI? Something that thinks like a person or > something that does what you ask it to? I think this is an excellent question, one I do not have a clear answer to myself, even for my own use. Imagine we have an "AGI". Wha

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
I asked: > Imagine we have an "AGI". What exactly does it do? What *should* it do? Note that I think I roughly understand Matt's vision for this: roughly, it is Google, and it will gradually get better at answering questions and taking commands as more capable systems are linked in to the net

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it. Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a l

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Samantha Atkins
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote: Matt Mahoney writes: > Just what do you want out of AGI? Something that thinks like a person or > something that does what you ask it to? The "or" is interesting. If it really "thinks like a person" and at at least human level then I doubt

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Matt Mahoney writes: > Just what do you want out of AGI? Something that thinks like a person or > something that does what you ask it to? I think this is an excellent question, one I do not have a clear answer to myself, even for my own use. Imagine we have an "AGI". What exactly does it do? Wh

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Perhaps you have not read my proposal at > http://www.mattmahoney.net/agi.html > > or don't understand it. > > Some of us have read it, and it has nothing whatsoever to do with > Artificial Intelligence. It is a labor-in

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Matt Mahoney wrote: --- Mike Tintner <[EMAIL PROTECTED]> wrote: My point was how do you test the *truth* of items of knowledge. Google tests the *popularity* of items. Not the same thing at all. And it won't work. It does work because the truth is popular. Look at prediction markets. Look a

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Mike Tintner <[EMAIL PROTECTED]> wrote: > My point was how do you test the *truth* of items of knowledge. Google tests > the *popularity* of items. Not the same thing at all. And it won't work. It does work because the truth is popular. Look at prediction markets. Look at Wikipedia. It is