Ben Goertzel wrote:
> Yes -- it is true, we have not created a human-level AGI yet. No
> serious researcher disagrees. So why is it worth repeating the point?
Long ago I put Tintner in my killfile -- he's the only one there, and it's 
regrettable, but it was either that or start taking blood pressure medicine... 
so *plonk*.  It's not that I disagree with most of his (usually rather 
obvious) points, or think his own ideas (about image schemas or whatever) are 
worse than other stuff floating around, but his toxic personality makes the 
benefit not worth the cost.  Now I only have to suffer the collateral damage 
in responses.
 
However, I went to the archives to fetch this message.   I do think it would be 
nice to have "tests" or "problems" that one could point to as partial 
progress... but that's really hard.  Any such test has to be fairly rigorously 
specified (otherwise we'll argue all day about whether it has been solved or 
not -- see Tintner's "Creativity" problem as an obvious example), and it needs 
to not be "AGI-complete" itself, which is the hard part.  For example, 
Tintner's Narrative Visualization task strikes me as needing all the machinery 
plus a very large knowledge base, so by the time a system could do a decent job 
of it in a general context, it would already have demonstrably solved the whole 
problem.
 
The other common criticism of "tests" is that they can often be solved by 
Narrow-AI means (say, current face recognizers, which are often better at the 
task than humans).  I don't think this necessarily disqualifies them, 
though... if the solution is provided in the context of a particular 
architecture, with a plausible argument for how the system could have produced 
the specifics itself, that seems like some sort of progress.
 
I sometimes wonder if a decent measurement of AGI progress might be the ease 
with which the system can be adapted by its builders to solve narrow AI 
problems -- a sort of "cognitive enhancement" measurement.  Such an approach 
makes a decent programming language and development environment a tangible 
early step toward AGI, but maybe that's not all bad.
 
At any rate, if there were some clearly specified tests that are not 
AGI-complete, and yet not easily attackable with straightforward software 
engineering or Narrow-AI techniques, that would, in my opinion, be a huge 
boost to this field.  I can't think of any, though, and they might not exist.  
If it is in fact impossible to find such tasks, what does that say about AGI 
as an endeavor?
 

-------------------------------------------
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/