There is a big difference between being able to fake something for a brief period of time and being able to do it correctly. All of your phrasing clearly indicates that *you* believe that your systems can only "fake it" for a brief period of time, not "do it correctly". So why are you belaboring the point? I don't get it, since your own points seem to undercut your own argument.

And even if you can do it for small, toy conversations where you recognize the exact same assertions -- that is nowhere close to what you're going to need in the real world.

>When the average librarian is able to answer veterinary questions to
>the satisfaction of a licensing board conducting an oral examination,
>then we will be living in the era of agi, won't we?

Depends upon your definition of AGI. That could be just a really kick-ass decision support system -- and I would actually bet a pretty fair chunk of money that 15 years *is* entirely within reason for the scenario you suggest.

----- Original Message ----- From: "Linas Vepstas" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Monday, November 12, 2007 7:28 PM
Subject: Re: [agi] What best evidence for fast AI?


On Mon, Nov 12, 2007 at 06:56:51PM -0500, Mark Waser wrote:
>>It will happily include "irrelevant" facts
>
>Which immediately makes it *not* relevant to my point.
>
>Please read my e-mails more carefully before you hop on with ignorant
>flames.

I read your emails, and, mixed in with some insightful and highly
relevant commentary, there are also many flames. Repeatedly so.

"Relevence" is not an easy problem, nor is it obviously a hard one.
To provide relevent answers, one must have a model of who is asking.
So, in building a computer chat system, one must first deduce things
about the speaker.  This is something I've been trying to do.

Again, with my toy system, I've gotten as far as letting the speaker
proclaim that "this is boring", and having the system remember it, so
that, in future conversations, the "boring" assertions are not
revisited.
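
For the record, the bookkeeping involved is trivial. Here is a minimal
Python sketch of the idea (illustration only: the file format, the
triple-style assertions, and the class name are all made up, not my
actual code):

    import json

    class BoringFilter:
        """Remember which assertions the speaker has declared boring,
        so that the judgment survives across conversations."""

        def __init__(self, path="boring.json"):
            self.path = path
            try:
                with open(path) as f:
                    self.boring = {tuple(a) for a in json.load(f)}
            except FileNotFoundError:
                self.boring = set()

        def mark_boring(self, assertion):
            # Called when the speaker says "this is boring" about the
            # assertion the system just offered.
            self.boring.add(assertion)
            with open(self.path, "w") as f:
                json.dump([list(a) for a in self.boring], f)

        def fresh(self, assertions):
            # Drop anything this speaker has already dismissed.
            return [a for a in assertions if a not in self.boring]

So after mark_boring(("horse", "isa", "genus-equus")), that assertion
is simply never offered to this speaker again. A per-speaker version
would just key the store on the speaker's identity.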

Now, "boring" is a tricky thing: "a horse is genus equus" may be boring
for a child, and yet interesting to young adults. So the problem of
relevent answers to questions is more about creating a model of the
person one is conversing with, than it is about NLP processing,
representation of knowledge, etc. Conversations are contextual;
modelling that context is what is interesting to me.
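
Concretely, this is the kind of per-listener model I mean (again, just
an illustration; the audience tags, facts, and scores are invented):

    # Illustration only: a crude audience-sensitive interest model.
    INTEREST = {
        ("a horse is genus equus", "child"):       0.1,  # boring to a child
        ("a horse is genus equus", "young adult"): 0.7,  # interesting here
        ("horses can sleep standing up", "child"): 0.9,
    }

    def worth_saying(fact, audience, threshold=0.5):
        # Offer a fact only if the model of *this* listener rates it
        # above threshold; unknown pairs default to a neutral 0.5.
        return INTEREST.get((fact, audience), 0.5) >= threshold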

The result of hooking up a reasoning system, a knowledge base like
OpenCyc or SUMO, an NLP parser, and a homebrew contextualizer is
not "agi".  It's little more than a son-et-lumière show.  But it
already does the things that you are claiming to be "unadulterated BS".
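
The wiring itself is shallow; it amounts to a pipeline like the
following (a sketch with stand-in names: parser, kb, reasoner, and
speaker_model are placeholders for whatever components you bolt
together, not real APIs):

    def answer(utterance, parser, kb, reasoner, speaker_model):
        # Parse the input, pull candidate facts from the knowledge
        # base, let the reasoner extend them, then filter everything
        # through the model of the current speaker.
        parse = parser.parse(utterance)
        facts = kb.lookup(parse)
        facts += reasoner.infer(parse, facts)
        # The contextualizer is the only non-off-the-shelf part.
        return [f for f in facts if speaker_model.relevant(f)]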

And regarding
>If and when you find a human who is capable of having conversations
>about horses with small farmers, rodeo riders, vets, children
>and biomechanicians, I'll bet that they won't have a clue about
>galaxy formation or enzyme reactions. Don't set the bar above
>human capabilities.

>Go meet your average librarian.  They won't know the information off the
>top of their heads (yet), but they'll certainly be able to get it to you --

Go meet Google. Or Wikipedia. Cheeses.

>and the average librarian fifteen years from now *will* be able to.

When the average librarian is able to answer veterinary questions to
the satisfaction of a licensing board conducting an oral examination,
then we will be living in the era of agi, won't we?

--linas
