Pei,

Another issue with a KB inference engine, as contrasted with a FOL theorem prover, is that the former seeks answers to queries, while the latter typically proves a theorem by refutation - deriving a contradiction from the theorem's negation. Cycorp therefore could not reuse much of the research from the automated theorem proving community. And on the other hand, the database community generally did not investigate deep inference.
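[Editor's note: the contrast above can be made concrete with a toy backward-chaining query answerer - a sketch in Python with invented predicates, nothing like Cyc's actual representation. It enumerates the variable bindings that satisfy a goal, whereas a resolution prover would negate the goal and search for the empty clause, reporting only yes or no.]

```python
# Toy illustration (invented predicates, not Cyc's representation) of the
# difference described above: a query-answering engine enumerates variable
# bindings that satisfy a goal; a resolution-based refutation prover would
# instead negate the goal and search for a contradiction, answering yes/no.

FACTS = [("parent", "tom", "bob"), ("parent", "bob", "ann")]

def ancestors(x, facts):
    """Backward chaining on ancestor(x, Z): yield every binding for Z.

    Rules encoded procedurally:
      ancestor(X, Z) :- parent(X, Z).
      ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    """
    for pred, a, b in facts:
        if pred == "parent" and a == x:
            yield b                          # base rule: a direct parent
            yield from ancestors(b, facts)   # recursive rule

print(list(ancestors("tom", FACTS)))  # ['bob', 'ann']
```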
As the Semantic Web community continues to develop new deductive inference engines tuned to inference (i.e. query answering) over large RDF KBs, I expect to see open-source forward-chaining and backward-chaining inference engines that can be optimized in the same way I described for Cyc.

-Steve

Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

----- Original Message ----
From: Pei Wang <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, February 18, 2008 10:47:43 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

Steve,

I also agree with what you said, and what Cyc uses is no longer pure resolution-based FOL. A purely resolution-based inference engine is mathematically elegant, but completely impractical: after all the knowledge is transformed into the clause form required by resolution, most of the semantic information in the knowledge structure is gone, and the result is "equivalent" to the original knowledge in truth value only. It is hard to control the direction of the inference without that semantic information.

Pei

On Feb 18, 2008 11:13 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Pei: Resolution-based FOL on a huge KB is intractable.
>
> Agreed.
>
> However, Cycorp spent a great deal of programming effort (i.e. many
> man-years) finding deep inference paths for common queries. The
> strategies were:
>
> - prune the rule set according to the context
> - substitute procedural code for modus ponens in common query paths
>   (e.g. isa-links inferred via graph traversal)
> - structure the inference engine as a nested set of iterators, so that
>   easy answers are returned immediately and harder-to-find answers
>   trickle out later
> - establish a battery of inference engine controls (e.g. time bounds,
>   speed vs. completeness - whether to employ expensive inference
>   strategies for greater coverage of answers) and have the inference
>   engine automatically apply the optimal control configuration for
>   each query
> - determine rule utility via machine learning and apply prioritized
>   inference modules within the given time constraints
>
> My last in-house talk at Cycorp, in the summer of 2006, described a
> notion of mine that Cyc's deductive inference engine behaves as an
> interpreter, and that for a certain set of queries a dramatic speed
> improvement (e.g. four orders of magnitude) could be achieved by
> compiling the query, and possibly preprocessing incoming facts to suit
> expected queries. The queries that interested me were those embedded
> in an intelligent application, and which could be viewed as a query
> template with parameters. The compilation process I described would
> explore the parameter space with programmer-chosen query examples.
> Then the resulting proof trees would be compiled into executable
> code - avoiding entirely the time-consuming candidate rule search and
> rule application when the query executes. My notion for Cyc's
> deductive inference engine optimization is analogous to SQL query
> optimization technology.
>
> I expect to use this technique in the Texai project at the point when
> I need a deductive inference engine.
>
> -Steve
>
> Stephen L. Reed
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
> ----- Original Message ----
> From: Pei Wang <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Monday, February 18, 2008 6:17:59 AM
> Subject: Re: [agi] would anyone want to use a commonsense KB?
>
> On Feb 17, 2008 9:42 PM, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
> >
> > So far I've been using resolution-based FOL, so there's only one
> > inference rule and this is not a big issue. If you're using
> > nonstandard inference rules, perhaps even approximate ones, I can
> > see that this distinction is important.
>
> Resolution-based FOL on a huge KB is intractable.
>
> Pei
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
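[Editor's note: two of the strategies Steve lists above - structuring the inference engine as nested iterators, and substituting graph traversal for modus ponens over isa-links - can be sketched together. This is a Python sketch over hypothetical data, not Cyc's actual implementation: direct assertions are yielded before any search begins, and transitively inferred answers trickle out afterwards.]

```python
from collections import deque
from itertools import islice

# Direct isa assertions (hypothetical data): term -> collections it belongs to.
ISA = {
    "Fido": {"Dog"},
    "Dog": {"Mammal"},
    "Mammal": {"Animal"},
}

def query_isa(term, isa=ISA):
    """Yield every collection `term` falls under, easiest answers first."""
    # Stage 1 (inner iterator): direct lookup, returned before any search.
    direct = sorted(isa.get(term, set()))
    yield from direct
    # Stage 2 (outer iterator): transitive closure by breadth-first graph
    # traversal, standing in for repeated rule application over the
    # transitive links of the hierarchy.
    seen = set(direct)
    frontier = deque(direct)
    while frontier:
        node = frontier.popleft()
        for parent in sorted(isa.get(node, set())):
            if parent not in seen:
                seen.add(parent)
                yield parent
                frontier.append(parent)

print(list(query_isa("Fido")))             # ['Dog', 'Mammal', 'Animal']
print(list(islice(query_isa("Fido"), 1)))  # ['Dog'] - only the easy answer
```

In this sketch the time-bound and speed-vs-completeness controls reduce to how far the caller drains the iterator: stopping after stage 1 trades coverage for speed.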