On Mon, Nov 12, 2007 at 07:46:15PM -0500, Mark Waser wrote:
>    There is a big difference between being able to fake something for a 
> brief period of time and being able to do it correctly.  All of your 
> phrasing clearly indicates that *you* believe that your systems can only 
> "fake it" for a brief period of time, not "do it correctly".  Why are you 
> belaboring the point?  I don't get it since your own points seem to deny 
> your own argument.

I don't think BenG claimed to be able to build an AGI in 6 months,
but rather something that can fake it for a brief period of time.
I was rising to the defense of that.

> >When the average librarian is able to answer veterinary questions to
> >the satisfaction of a licensing board conducting an oral examination,
> >then we will be living in the era of agi, won't we?
> 
> Depends upon your definition of AGI.  That could be just a really kick-ass 
> decision support system -- and I would actually bet a pretty fair chunk of 
> money that 15 years *is* entirely within reason for the scenario you 
> suggest.

Actually, I agree with that. Or, to paraphrase: NLP-speaking,
know-it-all librarians seem reasonable within 15 years, since they
appear to be just shiny, polished versions of things we have today.

So perhaps the AGI question is, "what is the difference between 
a know-it-all mechano-librarian, and a sentient being?" 

--linas
