Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers based on the correctness of the learning algorithms, and not by examining the internal data or tracing the reasoning.
Agreed...
I believe this is the fundamental flaw of all AI systems based on structured knowledge representations, such as first order logic, frames, connectionist systems, term logic, rule based systems, and so on.
I have a few points in response to this:

1) Just because a system is "based on logic" (in whatever sense you want to interpret that phrase) doesn't mean its reasoning can in practice be traced by humans. As I noted in recent posts, probabilistic logic systems will regularly draw conclusions by synthesizing (say) tens of thousands or more weak conclusions into one moderately strong one. Tracing this kind of inference trail in detail is pretty tough for any human, pragmatically speaking...

2) IMO the dichotomy between "logic-based" and "statistical" AI systems is fairly bogus. The dichotomy serves to separate extremes on either side, but my point is that when a statistical AI system becomes really serious it becomes effectively logic-based, and when a logic-based AI system becomes really serious it becomes effectively statistical ;-)

For example, show me how a statistical procedure-learning system is going to learn how to carry out complex procedures involving recursion. Sure, it can be done -- but it's going to involve introducing structures/dynamics that are accurately describable as versions/manifestations of logic. Or, show me how a logic-based system is going to handle large masses of uncertain data, as comes in from perception. It can be done in many ways -- but all of them involve introducing structures/dynamics that are accurately describable as "statistical."

Probabilistic inference in Novamente includes:

-- higher-order inference that works somewhat like standard term and predicate logic

-- first-order probabilistic inference that combines various heuristic probabilistic formulas with distribution-fitting and so forth ... i.e. "statistical inference" wrapped in a term logic framework

It violates the dichotomy you (taking your cue from the standard literature) propose/perpetuate... But it is certainly not the only possible system to do so.
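To make point 1 concrete, here is a minimal sketch (my own illustration, not anything from Novamente or Matt's work) of how thousands of individually-negligible pieces of evidence can combine into a near-certain conclusion. It assumes independent evidence combined in log-odds space, naive-Bayes style; the function name and the likelihood-ratio values are made up for the example:

```python
import math

def combine_evidence(prior: float, likelihood_ratios) -> float:
    """Combine independent pieces of evidence in log-odds space.

    Each likelihood ratio barely above 1.0 is a 'weak conclusion';
    summing their logs lets tiny nudges accumulate.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))

# 20,000 weak signals, each shifting the odds by a mere 0.05%
weak_signals = [1.0005] * 20_000
posterior = combine_evidence(0.5, weak_signals)
```

Starting from a 50/50 prior, the posterior ends up above 0.99 -- yet no single one of the 20,000 inputs is remotely decisive, which is exactly why "tracing the reasoning" step by step tells a human almost nothing.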
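And to illustrate "statistical inference wrapped in a term logic framework": below is a toy independence-based deduction rule in the general spirit of probabilistic term logic (PLN-style). This is my own sketch, not Novamente's actual formula or code; the strength values in the example are invented. Syntactically it is a term-logic syllogism (A inherits from B, B inherits from C, therefore A inherits from C), but the strength calculation is pure probability theory:

```python
def deduce(sAB: float, sBC: float, sB: float, sC: float) -> float:
    """Estimate P(C|A) from P(B|A), P(C|B) and the term probabilities
    P(B), P(C), assuming independence where the data is silent.

    Decomposes P(C|A) over B:  P(C|A) = P(B|A)P(C|B) + P(~B|A)P(C|~B),
    with P(C|~B) recovered from the base rates as (sC - sB*sBC)/(1 - sB).
    """
    if sB >= 1.0:
        return sBC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# Toy syllogism: cat->mammal (0.98), mammal->warm_blooded (0.99),
# with invented base rates P(mammal)=0.1, P(warm_blooded)=0.15
strength = deduce(0.98, 0.99, sB=0.1, sC=0.15)
```

The point of the sketch is the hybrid character: the rule's shape is logical, but its numbers come from statistics -- neither description alone covers it.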
3) Anyway, trashing "logic-incorporating AI systems" based on the failings of GOFAI is sorta like trashing "neural net systems" based on the failings of backprop, or trashing "statistical learning systems" based on the failings of linear discriminant analysis or linear regression.

Ruling out vast classes of AI approaches based on what (vaguely defined) terms they have associated with them ("logic", "statistics", "neural net") doesn't seem like a good idea to me. I feel that all these standard paradigms have some element of correctness and some element of irrelevance/incorrectness to them, and any one of them could be grown into a working AGI approach -- but, in the course of this growth process, the apparent differences btw these various approaches will inevitably be overcome and the deeper parallels made more apparent...

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303