Derek Zahn wrote:
> Ben Goertzel:
> > Yes -- it is true, we have not created a human-level AGI yet. No serious
> > researcher disagrees. So why is it worth repeating the point?
> Long ago I put Tintner in my killfile -- he's the only one there, and
> it's regrettable but it was either that or start taking blood pressure
> medicine... so *plonk*. It's not necessarily that I disagree with most
> of his (usually rather obvious) points or think his own ideas (about
> image schemas or whatever) are worse than other stuff floating around,
> but his toxic personality makes the benefit not worth the cost. Now I
> only have to suffer the collateral damage in responses.
Yes, he was in my killfile as well for a long time, then I decided to
give him a second chance. Now I am regretting it, so back he goes ...
*plonk*.
Mike: the only reason I am now ignoring you is that you persistently
refuse to educate yourself about the topics discussed on this list, and
instead you just spout your amateur opinions as if they were fact. Your
inability to distinguish real science from your amateur opinion is why,
finally, I have had enough.
I apologize to the list for engaging him. I should have just ignored
his ravings.
> However, I went to the archives to fetch this message. I do think it
> would be nice to have "tests" or "problems" that one could point to as
> partial progress... but it's really hard. Any such tests have to be
> fairly rigorously specified (otherwise we'll argue all day about whether
> they are solved or not -- see Tintner's "Creativity" problem as an
> obvious example), and they must not be "AGI-complete" themselves, which
> is really hard. For example, Tintner's Narrative Visualization task
> strikes me as needing all the machinery and a very large knowledge base,
> so by the time a system could do a decent job of it in a general context
> it would already have demonstrably solved the whole problem.
It looks like you, Ben and I have now all said exactly the same thing,
so we have a strong consensus on this.
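
Just to make the "rigorously specified" point concrete, here is a minimal
sketch of what it would mean for pass/fail to be computed rather than
argued over. This is purely hypothetical -- the ProgressTest structure
and run_test function are my own inventions for illustration, not any
existing benchmark API:

from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass(frozen=True)
class ProgressTest:
    # A test is an explicit set of cases plus a fixed scoring rule,
    # so there is nothing left to argue about after the fact.
    name: str
    cases: Iterable[Tuple[object, object]]    # (input, expected) pairs
    score: Callable[[object, object], float]  # per-case score in [0, 1]
    pass_threshold: float                     # mean score needed to pass

def run_test(test: ProgressTest, system: Callable[[object], object]) -> bool:
    """Run the candidate system on every case; pass/fail is computed,
    not debated."""
    scores = [test.score(system(x), y) for x, y in test.cases]
    return bool(scores) and sum(scores) / len(scores) >= test.pass_threshold

The point is only that the cases, the scoring function, and the threshold
are all fixed in advance, so "solved or not" stops being a matter of
opinion.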
> The other common criticism of "tests" is that they can often be solved
> by narrow-AI means (say, current face recognizers, which are often
> better at that task than humans). I don't necessarily think this is a
> disqualification though... if the solution is provided in the context of
> a particular architecture with a plausible argument for how the system
> could have produced the specifics itself, that seems like some sort of
> progress.
> I sometimes wonder if a decent measurement of AGI progress might be to
> measure the ease with which the system can be adapted by its builders to
> solve narrow-AI problems -- sort of a "cognitive enhancement"
> measurement. Such an approach makes a decent programming language and
> development environment a tangible early step toward AGI, but maybe
> that's not all bad.
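
That "cognitive enhancement" idea could at least be logged concretely.
A hypothetical sketch (the AdaptationRecord fields and the choice of
person-hours as the effort unit are my assumptions, not an established
metric): record the builder effort behind each narrow-AI adaptation and
summarize tasks solved per unit of effort, on the theory that the more
the architecture does by itself, the cheaper each adaptation gets:

from dataclasses import dataclass

@dataclass
class AdaptationRecord:
    task: str            # the narrow problem the system was adapted to
    person_hours: float  # builder time spent on the adaptation
    lines_changed: int   # code added or modified to achieve it
    solved: bool         # did the adapted system actually solve the task?

def enhancement_score(records: list[AdaptationRecord]) -> float:
    """Narrow tasks solved per person-hour of adaptation effort; a higher
    score suggests the architecture is doing more of the work itself."""
    solved = sum(1 for r in records if r.solved)
    hours = sum(r.person_hours for r in records)
    return solved / hours if hours > 0 else 0.0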
> At any rate, if there were some clearly-specified tests that are not
> AGI-complete and yet not easily attackable with straightforward software
> engineering or narrow-AI techniques, that would in my opinion be a huge
> boost to this field. I can't think of any though, and they might not
> exist. If it is in fact impossible to find such tasks, what does that
> say about AGI as an endeavor?
My own feeling about this is that when a set of ideas starts to gel into
one coherent approach to the subject, with a description of those ideas
being assembled as a book-length manuscript, and when you read those
ideas and they *feel* like progress, you will know that substantial
progress is happening.
Until then, the only people who might get an advance feeling that such
a work is on the way are the people on the front lines, who see all the
pieces coming together just before they are assembled for public
consumption.
Whether or not someone could write down tests of progress ahead of that
point, I do not know.
Richard Loosemore