Richard Loosemore wrote:
I am not sure I understand.
There is every reason to think that "a currently-envisionable AGI would
be millions of times "smarter" than all of humanity put together."
Simply build a human-level AGI, then get it to bootstrap to a level of,
say, a thousand times human
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Just what do you want out of AGI? Something that thinks like a person or
> > something that does what you ask it to?
>
> Either will do: your suggestion achieves neither.
>
> If I ask your non-AGI the following question
Richard Loosemore:
> I am only saying that I see no particular limitations, given the things
> that I know about how to build an AGI. That is the best I can do.
Sorry to flood everybody's mailbox today; I will make this my last message.
I'm not looking to impose a viewpoint on anybody; you have c
Derek Zahn wrote:
Richard Loosemore:
> I am not sure I understand.
>
> There is every reason to think that "a currently-envisionable AGI would
> be millions of times "smarter" than all of humanity put together."
>
> Simply build a human-level AGI, then get it to bootstrap to a level of,
> say, a thousand times human
Derek Zahn wrote:
I asked:
> Imagine we have an "AGI". What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly,
it is google, and it will gradually get better at answering questions
and taking commands as more capable systems are linked in to the net
Richard Loosemore:
> I am not sure I understand.
>
> There is every reason to think that "a currently-envisionable AGI would
> be millions of times "smarter" than all of humanity put together."
>
> Simply build a human-level AGI, then get it to bootstrap to a level of,
> say, a thousand times human
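For a sense of scale, the "thousand times human" figure needs only a modest compounding assumption: if each generation of the system designs a successor roughly twice as capable, ten generations already pass a factor of a thousand, since 2^10 = 1024. A toy Python loop, with the per-generation doubling purely assumed for illustration:

    # Toy compounding model of the "bootstrap" step.
    # The 2x gain per generation is an assumption for illustration only,
    # not a claim about any real system.
    capability = 1.0            # 1.0 = human level
    generations = 0
    while capability < 1000.0:
        capability *= 2.0
        generations += 1
    print(generations, capability)   # -> 10 1024.0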
Samantha Atkins writes:
> Beware the wish granting genie conundrum.
Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie
to grant, I'm wondering specifically what we want from the machines that Ben
and Richard and Matt and so on are thinking about and building.
Si
Derek Zahn wrote:
Matt Mahoney writes:
> Just what do you want out of AGI? Something that thinks like a person or
> something that does what you ask it to?
I think this is an excellent question, one I do not have a clear answer
to myself, even for my own use.
Imagine we have an "AGI". What exactly does it do? What *should* it do?
I asked:
> Imagine we have an "AGI". What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly, it is
google, and it will gradually get better at answering questions and taking
commands as more capable systems are linked in to the net
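Read that way, the "gets better as more systems are linked in" picture is less a single mind than a growing registry of narrow answerers. Below is a minimal sketch of that routing idea; every name in it (link_in, ask, the confidence scores) is invented for illustration and is not the design in Matt's agi.html proposal:

    # Hypothetical sketch of a Google-like answer service that improves as
    # more specialist systems are "linked in".  All names are invented.
    from typing import Callable, List, Optional, Tuple

    # A specialist returns (confidence in [0, 1], answer), or None if it cannot help.
    Specialist = Callable[[str], Optional[Tuple[float, str]]]

    registry: List[Specialist] = []

    def link_in(specialist: Specialist) -> None:
        """Register one more system on the net; coverage grows with each peer."""
        registry.append(specialist)

    def ask(question: str) -> str:
        """Broadcast the question and return the most confident answer, if any."""
        candidates = [r for s in registry if (r := s(question)) is not None]
        return max(candidates)[1] if candidates else "no answer yet"

    # Two toy specialists.
    link_in(lambda q: (0.9, "4") if "2 + 2" in q else None)
    link_in(lambda q: (0.7, "Paris") if "capital of France" in q else None)

    print(ask("what is 2 + 2?"))          # -> 4
    print(ask("capital of France?"))      # -> Paris
    print(ask("meaning of life?"))        # -> no answer yet

The point of the sketch is only that coverage, and hence apparent intelligence, grows with the number of registered specialists, which seems to be the property being described here.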
Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Perhaps you have not read my proposal at
> http://www.mattmahoney.net/agi.html
> > or don't understand it.
>
> Some of us have read it, and it has nothing whatsoever to do with
> Artificial Intelligence. It is a labor-in
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
> Just what do you want out of AGI? Something that thinks like a person or
> something that does what you ask it to?
The "or" is interesting. If it really "thinks like a person" and at
at least human level then I doubt
Matt Mahoney writes:
> Just what do you want out of AGI? Something that thinks like a person or
> something that does what you ask it to?
I think this is an excellent question, one I do not have a clear answer to
myself, even for my own use.
Imagine we have an "AGI". What exactly does it do? What *should* it do?
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Perhaps you have not read my proposal at
> http://www.mattmahoney.net/agi.html
> > or don't understand it.
>
> Some of us have read it, and it has nothing whatsoever to do with
> Artificial Intelligence. It is a labor-in
Matt Mahoney wrote:
--- Mike Tintner <[EMAIL PROTECTED]> wrote:
> My point was how do you test the *truth* of items of knowledge. Google tests
> the *popularity* of items. Not the same thing at all. And it won't work.
It does work because the truth is popular. Look at prediction markets. Look
at Wikipedia. It is
--- Mike Tintner <[EMAIL PROTECTED]> wrote:
> My point was how do you test the *truth* of items of knowledge. Google tests
> the *popularity* of items. Not the same thing at all. And it won't work.
It does work because the truth is popular. Look at prediction markets. Look
at Wikipedia. It is
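The "truth is popular" step leans on a standard aggregation argument: if many sources answer independently and each is right a bit more than half the time, the majority answer is right far more often than any single source (the Condorcet jury theorem). A small Python simulation of that assumption; the 101 sources and 65% per-source accuracy are invented for illustration:

    # Majority vote over independent, weakly reliable sources.
    # The numbers (101 sources, 65% per-source accuracy) are assumptions.
    import random

    def majority_accuracy(n_sources: int = 101, p_correct: float = 0.65,
                          trials: int = 10_000) -> float:
        wins = 0
        for _ in range(trials):
            correct_votes = sum(random.random() < p_correct for _ in range(n_sources))
            if correct_votes > n_sources / 2:
                wins += 1
        return wins / trials

    print(majority_accuracy())   # ~0.999, far better than any single 65% source

Whether sources on contested questions really are independent and better than chance is exactly what Mike is disputing, so this only illustrates the assumption rather than settling it.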