Ben,

   I agree with the vast majority of what I believe you mean, but . . .

1) Just because a system is "based on logic" (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...

However, suppose the system could say to the human: "I've got a hundred thousand separate cases, from which I've extracted 622 variables that each individually increase the probability of x by half a percent to one percent; several of them are positively entangled and only two are negatively entangled (and I can even explain the increase in probability in 64% of the cases via my logic subroutines)." Wouldn't it then be pretty easy for the human to debug anything with the system's assistance? The fact that humans are slow and eventually capacity-limited has no bearing on my argument that a true AGI is going to have to be able to explain itself (if only to itself).
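As a minimal sketch of what that kind of evidence aggregation could look like -- not any actual system's mechanism; the feature names, lift values, and the independence assumption are all made up to match the numbers above:

import math

# Sketch (hypothetical data): combine many weak, roughly independent
# pieces of evidence for hypothesis x via naive-Bayes-style log-odds
# addition, then report the top contributors so a human can audit them.

def logit(p):
    return math.log(p / (1.0 - p))

prior = 0.50                       # prior probability of x
# each variable nudges P(x) up by 0.5%..1% on its own, per the example above
features = {f"feature_{i}": 0.005 + 0.005 * (i % 2) for i in range(622)}

log_odds = logit(prior)
for name, lift in features.items():
    # treat each weak conclusion as an independent likelihood-ratio update
    log_odds += logit(prior + lift) - logit(prior)

posterior = 1.0 / (1.0 + math.exp(-log_odds))
top = sorted(features.items(), key=lambda kv: -kv[1])[:5]

print(f"P(x) after 622 weak updates: {posterior:.3f}")
print("largest individual contributors:", [n for n, _ in top])

The point of the printout is exactly the kind of summary I describe above: the human never traces all 622 updates, but can spot-check the largest ones with the system's assistance.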

The only real case where a human couldn't understand the machine's reasoning here is where there are so many entangled variables that the human can't hold them all in mind at once -- and I'll continue to contend that this case is rare enough that it isn't going to be a problem for creating an AGI.

My only concern with systems of this type is where the weak conclusions are unlabeled and unlabelable, and thus may result from incorrectly over-fitting questionable data -- creating too many variables and degrees of freedom -- so that the model fails to predict new cases . . . . (i.e. the cases where the system's "explanation" is wrong).
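A toy illustration of that failure mode, on synthetic data (everything here is made up): give a model enough free parameters and it will "explain" pure noise on the cases it was fit to, and the explanation evaporates on new cases.

import numpy as np

# Fit an ordinary least-squares model with nearly as many free
# variables (95) as training cases (100), where the labels are
# pure noise -- then score it on fresh noise.

rng = np.random.default_rng(0)
n_train, n_features = 100, 95
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(size=n_train)          # labels are pure noise

w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y):
    resid = y - X @ w
    return 1.0 - resid.var() / y.var()

X_new = rng.normal(size=(1000, n_features))
y_new = rng.normal(size=1000)

print(f"train R^2:    {r_squared(X_train, y_train):.2f}")  # spuriously high
print(f"new-case R^2: {r_squared(X_new, y_new):.2f}")      # near zero or below

The training fit looks like a genuine explanation; the held-out score shows it was just too many degrees of freedom chasing noise.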

2) IMO the dichotomy between "logic based" and "statistical" AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)

I think I know what you mean, but I would phrase this *very* differently: an AGI is going to have to be able to perform both logic-based and statistical operations, and any AGI limited to one of the two is doomed to failure. If you can contort statistics into effectively doing logic, or logic into effectively doing statistics, then you're fine -- but I really don't see that happening. I am also becoming more and more aware of how critical feature extraction and isolation are to my view of AGI.
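To make the combination concrete, here is a sketch of one way a logical inference rule can carry statistics: term-logic deduction (A implies B, B implies C, therefore A implies C) where each link holds a conditional probability, combined under an independence assumption. This is in the spirit of the PLN-style rules Ben mentions below, but the particular formula and all the numbers here are just my illustration:

# Term-logic deduction with probabilistic truth values.
# P(C|A) is estimated from the A-and-B worlds (which reach C
# via B -> C) plus the A-without-B worlds (which get C's
# leftover probability mass), assuming independence.

def deduction(p_b_given_a, p_c_given_b, p_b, p_c):
    leftover = (p_c - p_b * p_c_given_b) / (1.0 - p_b)
    return p_b_given_a * p_c_given_b + (1.0 - p_b_given_a) * leftover

# hypothetical strengths: "ravens are birds" (0.95), "birds fly" (0.80),
# with made-up base rates P(bird) = 0.30 and P(flies) = 0.35
print(f"P(flies | raven) ~= {deduction(0.95, 0.80, 0.30, 0.35):.2f}")

The rule's *shape* is pure logic; the numbers flowing through it are pure statistics -- which is exactly why I say an AGI needs both.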




----- Original Message ----- From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Sunday, December 03, 2006 11:30 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis


Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.

Agreed...

I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such as
first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.

I have a few points in response to this:

1) Just because a system is "based on logic" (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...

2) IMO the dichotomy between "logic based" and "statistical" AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)

For example, show me how a statistical procedure learning system is
going to learn how to carry out complex procedures involving
recursion.  Sure, it can be done -- but it's going to involve
introducing structures/dynamics that are accurately describable as
versions/manifestations of logic.

Or, show me how a logic based system is going to handle large masses
of uncertain data, as comes in from perception.  It can be done in
many ways -- but all of them involve introducing structures/dynamics
that are accurately describable as "statistical."

Probabilistic inference in Novamente includes

-- higher-order inference that works somewhat like standard term and
predicate logic
-- first-order probabilistic inference that combines various heuristic
probabilistic formulas with distribution-fitting and so forth .. i.e.
"statistical inference" wrappedin a term logic framework...

It violates the dichotomy you (taking your cue from the standard
literature) propose/perpetuate....  But it is certainly not the only
possible system to do so.

3) Anyway, trashing "logic incorporating AI systems" based on the
failings of GOFAI is sorta like trashing "neural net systems" based on
the failings of backprop, or trashing "statistical learning systems"
based on the failings of linear discriminant analysis or linear
regression.

Ruling out vast classes of AI approaches based on what (vaguely
defined) terms they have associated with them ("logic", "statistics",
"neural net") doesn't seem like a good idea to me.   Because I feel
that all these standard paradigms have some element of correctness and
some element of irrelevance/incorrectness to them, and any one of them
could be grown into a working AGI approach -- but, in the course of
this growth process, the apparent differences between these various
approaches will inevitably be overcome and the deeper parallels made
more apparent...

-- Ben G
