--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > My point is that when AGI is built, you will have to trust its answers
> > based on the correctness of the learning algorithms, and not by
> > examining the internal data or tracing the reasoning.
> 
> Agreed...
> 
> > I believe this is the fundamental flaw of all AI systems based on
> > structured knowledge representations, such as first-order logic, frames,
> > connectionist systems, term logic, rule-based systems, and so on.
> 
> I have a few points in response to this:
> 
> 1) Just because a system is "based on logic" (in whatever sense you
> want to interpret that phrase) doesn't mean its reasoning can in
> practice be traced by humans.  As I noted in recent posts,
> probabilistic logic systems will regularly draw conclusions based on
> synthesizing (say) tens of thousands or more weak conclusions into one
> moderately strong one.  Tracing this kind of inference trail in detail
> is pretty tough for any human, pragmatically speaking...
> 
> 2) IMO the dichotomy between "logic based" and "statistical" AI
> systems is fairly bogus.  The dichotomy serves to separate extremes on
> either side, but my point is that when a statistical AI system becomes
> really serious it becomes effectively logic-based, and when a
> logic-based AI system becomes really serious it becomes effectively
> statistical ;-)
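
(An aside on point 1: a minimal sketch of why such inference trails resist
tracing.  It combines N independent weak evidence sources in log-odds form,
naive-Bayes style; the function and the numbers are purely illustrative.)

import math

# Illustrative only: naive-Bayes-style combination of many weak, independent
# sources of evidence for a hypothesis, working in log-odds space.
def combine(probs, prior=0.5):
    prior_lo = math.log(prior / (1 - prior))
    lo = prior_lo + sum(math.log(p / (1 - p)) - prior_lo for p in probs)
    return 1 / (1 + math.exp(-lo))

# 10,000 weak conclusions, each barely better than a coin flip, synthesize
# into one essentially certain conclusion:
print(combine([0.505] * 10000))  # ~1.0 (the log-odds sum is about 200)

Auditing that conclusion means auditing all 10,000 terms of the sum, which
is hopeless for a human.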

I see your point that there is no sharp boundary between structured knowledge
and statistical approaches.  What I mean is that the normal software
engineering practice of breaking down a hard problem into components with
well-defined interfaces does not work for AGI.  We usually try things like:

input text --> parser --> semantic extraction --> inference engine --> output text
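
(To make the decomposition concrete, here is a toy sketch of that pipeline
with stand-in stages; every name and implementation below is mine, purely
for illustration.)

def parse(text):
    # stand-in for a real parser: text -> token list
    return text.lower().split()

def extract_facts(tokens):
    # stand-in semantic extraction: tokens -> (subject, relation, object)
    return [(tokens[i], "precedes", tokens[i + 1])
            for i in range(len(tokens) - 1)]

def infer(facts, query):
    # stand-in inference engine: retrieve stored triples mentioning the query
    return [f for f in facts if query in f]

def answer(text, query):
    return infer(extract_facts(parse(text)), query)

print(answer("the cat sat on the mat", "cat"))

Each interface looks clean in isolation, which is exactly what makes the
design seductive.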

The fallacy is believing that the intermediate representation will be more
comprehensible than the input or output.  It can't be, because of the sheer
amount of data.  In a toy system you might have 100 facts that you can
compress down to a diagram that fits on a sheet of paper.  In reality you
might have a gigabyte of text, roughly 10^9 characters; English carries about
one bit per character, so no representation can compress it much below 10^9
bits.  Whatever form that representation takes, it can't be more
comprehensible than the input or output text.
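
(You can check the arithmetic with any off-the-shelf compressor; this assumes
some large English text file such as enwik8 sitting on disk.)

import bz2

data = open("enwik8", "rb").read()  # any large English text file
bpc = 8 * len(bz2.compress(data, 9)) / len(data)
print("%.2f bits per character" % bpc)
# bz2 lands near 2 bpc on English text; the best statistical models approach
# Shannon's estimate of ~1 bpc, i.e. ~10^9 bits for a gigabyte of text.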

I think it is actually liberating to remove the requirement for transparency
that was typical of GOFAI.  For example, your knowledge representation could
still be any of the existing forms, but it could also be a huge matrix with
billions of elements.  Building such a system will require a different
approach, though: not so much engineering as experimental science, where you
test different learning algorithms at the inputs and outputs only.
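
(For instance, a test harness that only ever sees inputs and outputs; the
candidate learners and their models here are hypothetical stand-ins.)

# Score candidate learners purely on input/output behavior; the model's
# internals (which might be a billion-element matrix) are never inspected.
def evaluate(train, corpus, test_pairs):
    model = train(corpus)  # returns an opaque object with a predict() method
    hits = sum(model.predict(x) == y for x, y in test_pairs)
    return float(hits) / len(test_pairs)

def best_learner(candidates, corpus, test_pairs):
    # candidates: list of (name, train_fn) pairs, all hypothetical
    return max(candidates,
               key=lambda c: evaluate(c[1], corpus, test_pairs))[0]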


-- Matt Mahoney, [EMAIL PROTECTED]
