On Fri, 15 Sep 2006, Phillip Lord wrote:

"WB" == William Bug <[EMAIL PROTECTED]> writes:

 WB> CLASSes represent UNIVERSALs or TYPEs.  The TBox is the set of
 WB> CLASSes and the ASSERTIONs associated with CLASSes.

 WB> INSTANCEs represent EXISTENTIALs or INDIVIDUALs instantiating a
 WB> CLASS in the real world.  The ABox is the set of INSTANCEs and
 WB> the ASSERTIONs associated with those INSTANCEs.

I'd take a slight step back from this. You can think of classes and
instances in this way. But in the OWL sense, a class is a logical
construct with a set of computational properties. "Instances" is a
more difficult term: OWL actually has individuals, and the instance
store uses "instances" precisely because they are not really OWL
individuals. There is also a philosophical concept of what a class
is, what a universal is and so on, which may be somewhat different,
and is also open to debate.
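
To make the TBox/ABox split concrete in RDF terms, here is a minimal
sketch using rdflib; the example.org namespace and the class and
individual names are invented for illustration:

from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/")
g = Graph()

# TBox: terminological axioms about classes.
g.add((EX.Neuron, RDF.type, OWL.Class))
g.add((EX.Cell, RDF.type, OWL.Class))
g.add((EX.Neuron, RDFS.subClassOf, EX.Cell))

# ABox: an assertion about a particular individual.
g.add((EX.neuron42, RDF.type, EX.Neuron))

# Note that nothing above asserts (EX.neuron42, RDF.type, EX.Cell);
# that triple is merely entailed, and deriving it is the reasoner's
# job, not the store's.
print(len(g))  # 4 asserted triples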

 WB> Properly specified CLASSes are defined in the context of the
 WB> INSTANCEs whose PROPERTIES and RELATIONs they formally
 WB> represent.

 WB> Properly specified INSTANCEs are defined via their reference to
 WB> an appropriate set of CLASSes.

I think this would be circular. An OWL class is defined by the
individuals that it might have in any model which fits the
ontology, not just the individuals it has in a specific model.

 WB> Reasoners (RacerPro, Pellet, FaCT++) generally have
 WB> optimizations specific to either reasoning on the TBox or
 WB> reasoning on the ABox, but it's difficult (i.e., experts such
 WB> as Phil and others can cite no existing examples) to optimize
 WB> for reasoning on the TBox, the ABox, AND - most importantly -
 WB> TBox + ABox together (across these sets).

ABox reasoning is more complex than TBox reasoning, although I
believe the difference is not that profound (i.e., they are both
really complex). For a DL as expressive as the one OWL is based on,
the worst-case complexities are always really bad. In other words,
no reasoner can ever guarantee to scale well in all circumstances.

Once again: pure production/rule-oriented systems *are* built to
scale well in *all* circumstances. This is the primary advantage
they have over DL reasoners, i.e., reasoners tuned specifically to
DL semantics. The distinction is critical: not every reasoner is the
same, and this is why there is interest in translations to datalog
and other logic programming systems, per Ian Horrocks' suggestion
below:

Another interesting approach that has only recently been presented by
Motik et al is to translate a DL terminology into a set of disjunctive
datalog rules, and to use an efficient datalog engine to deal with
large numbers of ground facts. This idea has been implemented in the
Kaon2 system, early results with which have been quite encouraging (see
http://kaon2.semanticweb.org/). It can deal with expressive languages
(such as OWL), but it seems to work best in data-centric applications,
i.e., where the terminology is not too large and complex.

I'd go a step further and suggest that even large terminologies
aren't a problem for such systems, since their primary bottlenecks
are memory (which is very cheap) and the complexity of the rule set,
and the set of horn-like rules that express DL semantics is *very*
small.
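
As a rough illustration of how small such a rule set can be, here is
a sketch of naive forward chaining over two RDFS-style horn rules
(transitivity of subClassOf, and inheritance of rdf:type through
subClassOf); the triples and names are invented, and this is nothing
like a complete DL-to-datalog translation, just the shape of the
idea:

# Naive forward chaining, as a sketch only; real engines (e.g. the
# disjunctive datalog in KAON2) are far more sophisticated, but the
# rule set itself stays small.

SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

def chain(facts):
    """Apply the two rules to a set of (s, p, o) triples until fixpoint."""
    facts = set(facts)
    while True:
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                # rdfs11: subClassOf is transitive
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new.add((s, SUBCLASS, o2))
                # rdfs9: instances inherit superclass membership
                if p == TYPE and p2 == SUBCLASS and o == s2:
                    new.add((s, TYPE, o2))
        if new <= facts:  # fixpoint reached, nothing new derived
            return facts
        facts |= new

# One small TBox chain plus one ABox fact (made-up names):
kb = {("Neuron", SUBCLASS, "Cell"),
      ("Cell", SUBCLASS, "Thing"),
      ("n42", TYPE, "Neuron")}

closed = chain(kb)
assert ("n42", TYPE, "Thing") in closed

The point is only that each rule is a one-line join over the fact
set; the cost lies in the volume of ground facts, which is exactly
the memory-bound behaviour described above.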

Chimezie Ogbuji
Lead Systems Analyst
Thoracic and Cardiovascular Surgery
Cleveland Clinic Foundation
9500 Euclid Avenue/ W26
Cleveland, Ohio 44195
Office: (216)444-8593
[EMAIL PROTECTED]
