This is a debugging problem, not a deployment problem. If your data is inconsistent, you need to fix it. Usually such inconsistencies are either errors in the data, or indications that you need to get clearer about what you want to say. In this case you need to choose whether you want to speak at what we called in [1] the 'statement level' or the 'domain level'. If at the domain level, you need to put your neck on the line and say which experiment is right. If at the statement level, you need to remodel so that you clearly communicate that you are representing author statements.
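To make the statement-level option concrete, here is a rough sketch in Python with rdflib (all class, property, and instance names below are made up for illustration, not taken from SenseLab):

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/senselab#")  # hypothetical namespace
    g = Graph()

    # Domain level: both experimental results are asserted as facts about
    # the world. If the two classes are declared disjoint, a reasoner will
    # (correctly) report a contradiction.
    g.add((EX.receptorX, RDF.type, EX.ExpressedInHippocampus))
    g.add((EX.receptorX, RDF.type, EX.NotExpressedInHippocampus))

    # Statement level: each result is modelled as a statement made by an
    # experiment, so both can coexist without logical inconsistency.
    g.add((EX.finding1, RDF.type, EX.ResearchStatement))
    g.add((EX.finding1, EX.reportedBy, EX.experiment1))
    g.add((EX.finding1, EX.claims, EX.ExpressionInHippocampus))

    g.add((EX.finding2, RDF.type, EX.ResearchStatement))
    g.add((EX.finding2, EX.reportedBy, EX.experiment2))
    g.add((EX.finding2, EX.claims, EX.NoExpressionInHippocampus))

At the statement level the reasoner no longer sees the two findings as facts about the receptor itself, so the knowledge base stays consistent while still recording who claimed what.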

-Alan

[1] Sections 2 and 3 of http://owl-workshop.man.ac.uk/acceptedLong/submission_26.pdf

On Apr 17, 2007, at 9:53 PM, [EMAIL PROTECTED] wrote:



I think *if the ontology classifies reasonably at all*, then this
sort of query approach can achieve reasonable performance for this
rough application profile with a reasonable amount of engineering
effort in many cases.

Oh, but this is quite an important 'if'! We can expect that most ontologies based on 'real data' are inconsistent, if not highly inconsistent -- not because of errors on the part of the ontology designers, but because the represented information itself is contradictory. For example, we found an inconsistency in one of our SenseLab OWL versions that was caused by the results of two experiments entered into the knowledge base contradicting each other. Of course, this is a good example of the utility of an OWL reasoner, because it pointed us to a (potentially interesting or important) contradiction in the literature.

However, such contradictions could cause a reasoning-based approach to querying to fail, or at least make it less performant, as you said.
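One pragmatic safeguard is to check consistency before trusting inferred query answers, and to fall back to the asserted triples if the check fails. A rough sketch with a Python toolchain such as owlready2 (the file name is made up; sync_reasoner runs the HermiT reasoner by default):

    from owlready2 import get_ontology, sync_reasoner
    from owlready2 import OwlReadyInconsistentOntologyError

    # Hypothetical local file; any OWL ontology could be loaded here.
    onto = get_ontology("file://./senselab.owl").load()

    try:
        sync_reasoner()  # classify; raises if the ontology is inconsistent
    except OwlReadyInconsistentOntologyError:
        # Fall back to querying only the asserted (unreasoned) triples.
        print("Ontology is inconsistent; using asserted facts only.")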


cheers,
Matthias Samwald




