Pat Hayes wrote:
At 6:31 PM -0400 6/26/08, Ogbuji, Chimezie wrote:
Hey, Pat.  Comments below
> I would disagree about this case being the exception.
> Negation as failure can be validly used to infer from a
> failure if the data is controlled (which is especially the
> case with well-designed experiments where it would be
> irresponsible to do otherwise).
>
> What are you referring to by "well-designed experiments"?

"Well-designed experiments" is probably not a useful characterization,
so let me try again.  Let's say you are a nurse performing a history and
physical assessment on a patient in order to make entries into his/her
medical record and one of the questions you ask *routinely* is whether
or not the patient has a particular symptom/problem: headaches for
instance.  If the patient says: "no", and you are conforming to default
negation as part of a subsequent analysis, then it would seem sufficient
to not make any assertion about the existence of a headache.  Otherwise,
you would need to be able to either infer that the patient doesn't have
a headache (provably false) *or* have an explicit assertion of absence:

I.

_:a a cpr:patient

II.

_:a a cpr:patient
_:a a cpr:PersonWithoutHeadache

Just as an aside, it's hardly fair to do this in RDF, which doesn't have negation of /any/ kind.


My point is that, in the first model you *can* infer that the patient
doesn't have a headache because the assertion is missing and you *know*
that the question was asked.
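To make the contrast concrete, here is a minimal sketch (my own, not from the thread) modelling the two records as sets of triples. The class name cpr:PersonWithHeadache is a hypothetical counterpart to the cpr:PersonWithoutHeadache used above:

```python
# Record I: only asserts patient-hood; Record II also asserts absence of headache.
record_I = {("_:a", "rdf:type", "cpr:patient")}

record_II = {("_:a", "rdf:type", "cpr:patient"),
             ("_:a", "rdf:type", "cpr:PersonWithoutHeadache")}

def naf_no_headache(record):
    """Negation as failure: conclude 'no headache' from the *absence* of a
    headache assertion. Only sound if the record is closed-world with
    respect to headaches, i.e. the question was actually asked."""
    return ("_:a", "rdf:type", "cpr:PersonWithHeadache") not in record

def explicit_no_headache(record):
    """Monotonic (open-world) reading: conclude 'no headache' only from an
    explicit assertion of absence."""
    return ("_:a", "rdf:type", "cpr:PersonWithoutHeadache") in record

# Under NAF both records license the conclusion; under the open-world
# reading only record II does.
print(naf_no_headache(record_I), explicit_no_headache(record_I))    # True False
print(naf_no_headache(record_II), explicit_no_headache(record_II))  # True True
```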

Well, let me push on this. Let's suppose that whoever wrote the record did indeed know this, and they used the 'if I don't say it, it's false' strategy, saving themselves some work. But now, this is all written down in RDF. Send this RDF somewhere else, where someone else reads it. How do /they/ know that it's OK to use NAF on this RDF? The RDF itself doesn't describe the nurse's data-recording conventions, and it doesn't say that it's a closed world with respect to having headaches. All it does is not refer to headaches at all. There might be any number of reasons for this. Maybe the nurse just didn't think about headaches; maybe (like my wife's endocrinologist) the doctor just didn't consider headaches to be in his focus of attention. Maybe this RDF was extracted from a larger data set by a SPARQL query which didn't happen to refer to headaches. In general, you *don't* know anything more than what is *explicitly told* to you. At any rate, that has to be the ground assumption of an ontology engine, especially in a Web setting where you have absolutely no control over what happened to the data while it was on its way to you, and nobody is under any obligation to tell you.
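The SPARQL-extraction case above can be sketched in the same set-of-triples style (again my own illustration, with the hypothetical cpr:PersonWithHeadache class): a record that *does* assert a headache, passed through an extraction that only asks about patient-hood, licenses the opposite conclusion downstream under NAF:

```python
full_record = {("_:a", "rdf:type", "cpr:patient"),
               ("_:a", "rdf:type", "cpr:PersonWithHeadache")}

# Stand-in for a SPARQL CONSTRUCT query that only mentions cpr:patient;
# headache triples are silently dropped, not denied.
extracted = {t for t in full_record if t[2] == "cpr:patient"}

def naf_no_headache(record):
    """Conclude 'no headache' from the absence of a headache assertion."""
    return ("_:a", "rdf:type", "cpr:PersonWithHeadache") not in record

print(naf_no_headache(full_record))  # False: the headache is asserted
print(naf_no_headache(extracted))    # True: NAF now draws the wrong conclusion
```

Nothing in the extracted triples records that the closure was lost in transit, which is the point: the receiver cannot tell this graph apart from one where the question was asked and answered "no".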
Pat is right here. I am speaking from personal experience: the closed-world assumption is not taken in medical training. I was a medical student (a long, long time ago). When we started as interns, we were explicitly told to check and write down everything about a patient, even if (we thought) most of what we checked would be negative symptoms. We didn't like it because it took forever to write down a patient's history. Only experienced doctors (after residency) are allowed to check only the symptoms they think are relevant. But the difference here is that experienced doctors are allowed not to check everything, not to assume that what they didn't check is negative. In fact, experienced doctors still write down the negative symptoms which they think are relevant. I don't think this is a valid case to argue for the closed-world assumption.

Xiaoshu
