On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>  Jim: This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mindset of a group.  It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an overly simplistic solution to a problem that is,
> by its very potential, full of possibilities.
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/explanation is narrow
> AI. Being able to generate more and more possible explanations, which could
> all be valid, is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs.
> divergent thinking, the difference between which still seems to escape
> Dave & Ben & most AGI-ers.
>

Well, I agree with what (I think) Mike was trying to get at, except that I
understood that Ben, Hutter, and especially David were not talking about
prediction only as the specification of a single prediction in cases where
many possible predictions (i.e., expectations) deserved consideration.

For some reason none of you ever seem to talk about methods that could be
used to react to a situation with the flexibility to integrate the
recognition of different combinations of familiar events, and to classify
unusual events so that they could be interpreted either as more familiar
*kinds* of events or as novel forms of events which might then be
integrated.  To me, that seems to be one of the unsolved problems.  Being
able to say that "the squares move to the right in unison" is a better
description than "the squares are dancing an Irish jig" is not really
cutting edge.

As for David's comment that he was only dealing with the "core issues": I
am sorry, but you were not dealing with the core issues of contemporary AGI
programming.  You were dealing with a primitive problem that has been
considered for many years, but it is not a core research issue.  Yes, we
have to work with simple examples to explain what we are talking about, but
there is a difference between an abstract problem that may be central to
your recent work and a core research issue that hasn't really been solved.

The whole problem with dealing with complicated situations is that narrow
AI methods haven't really worked on them.  That is the core issue.

Jim Bromer



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
