sorry for the long delay...see below...Dan
Kashyap, Vipul wrote:
You are correct that classes in HL7 may have sub-classes.
[VK] I think the interesting question is whether these
classes are metaclasses, i.e., whether they belong to layer 1
or whether they are in layer 2.
<dan> Classes and subclasses in a UML model cannot represent
"different layers" in your layered hierarchy...a subclass in a UML
"isa" hierarchy "is" the same class as its parent; it is just
constrained by an additional set of attributes. />
[VK] Agreed. However, UML supports the metaclass stereotype, which
can be used to represent Layer 1 classes. The issue is whether
a blood pressure observation class is an instance of the
Observation metaclass, or whether the blood pressure observation class is
a subclass of the Observation class in HL7.
<dan> I'm confused...can you illustrate in UML, perhaps with the blood
pressure example? />
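The two modeling choices under discussion can be sketched directly in Python, whose metaclass machinery mirrors the layer distinction. All names here (Observation, BloodPressureObservation, the attributes) are illustrative stand-ins, not actual HL7 artifacts:

```python
# Choice 1: BloodPressureObservation as a SUBCLASS of Observation.
# Both classes live in the same layer; the subclass merely adds
# attributes/constraints, per Dan's point above.
class Observation:
    pass

class BloodPressureObservation(Observation):
    systolic_mmHg: int
    diastolic_mmHg: int

# Choice 2: Observation as a METACLASS (Layer 1); the blood pressure
# observation class is then an *instance* of it, one layer down (Layer 2).
class ObservationMeta(type):
    pass

class BloodPressureObservation2(metaclass=ObservationMeta):
    systolic_mmHg: int
    diastolic_mmHg: int

# In choice 1 the relationship is subsumption (same layer):
assert issubclass(BloodPressureObservation, Observation)
# In choice 2 it is instantiation (crossing a layer), not subsumption:
assert isinstance(BloodPressureObservation2, ObservationMeta)
assert not issubclass(BloodPressureObservation2, ObservationMeta)
```

The `issubclass` vs. `isinstance` distinction is exactly the layer-1 vs. layer-2 question: the two choices are not interchangeable, which is why it matters which one the RIM intends.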
To be more specific, by definition, once a class in HL7 is
instantiated, the classCode and the moodCode can never be
changed throughout the lifecycle of the instance.
[VK] Was wondering whether, instead of having multiple class codes
and mood codes, it would be possible to represent
them as individual classes.
I believe the BRIDG model follows this approach.
<dan> Correct. There is no semantic difference between representing
the classCodes and moodCodes as separate classes in UML and
"superloading" the current classes with classCode and moodCode
attributes. The classCode and moodCode attributes in the RIM are
simply a method for extending the model through vocabulary
manipulations. The BRIDG model elected not to use classCode and
moodCode in the UML for two reasons (the most important being
that the BRIDG is a "pre-RIM mapping analysis model" for the
domain experts). A later mapping of the BRIDG to the RIM for
purposes of use in documents, services, and messages would
collapse the various classes into the appropriate moodCode and
classCode representations. />
[VK] It appears to me, then, that the mood
codes and class codes were introduced just to make the representation
more compact, and that they do not add new semantic information. This is one
of the reasons why the RIM is difficult to understand.
I would prefer a modeling approach similar to the BRIDG model, where
all the mood and class codes are explicitly represented as
subclasses.
<dan> Everyone has preferences on modeling conventions (and I don't
necessarily disagree with your preferences)...The important thing is to
understand what the models mean...If the concepts are the same, the
concepts are the same...the pictures are just techniques for
communicating the concepts. />
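Dan's claim that the two representations carry the same information can be illustrated with a toy sketch. The class and code names below are illustrative (OBS/EVN/INT are meant in the spirit of RIM vocabulary codes, but the mapping itself is invented for the example):

```python
from dataclasses import dataclass

# RIM-style "superloading": one Act class, discriminated by attributes.
@dataclass
class Act:
    classCode: str   # e.g. "OBS" for an observation
    moodCode: str    # e.g. "EVN" (event) or "INT" (intent)

# BRIDG-style: each (classCode, moodCode) combination as its own class.
class ObservationEvent: ...
class ObservationIntent: ...

# The two carry the same information; collapsing the explicit classes
# into codes (the "later mapping" Dan describes) is purely mechanical.
def to_rim(obj) -> Act:
    mapping = {ObservationEvent: ("OBS", "EVN"),
               ObservationIntent: ("OBS", "INT")}
    cc, mc = mapping[type(obj)]
    return Act(classCode=cc, moodCode=mc)

assert to_rim(ObservationEvent()) == Act("OBS", "EVN")
assert to_rim(ObservationIntent()) == Act("OBS", "INT")
```

The lossless round trip is the point: the choice between the two styles is a matter of readability and diagram size, not semantics.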
Therefore, operationally, the HL7 RIM ontology is
definitively declared when the instance is created.
[VK] This is interesting, because typically one first creates
ontologies and then instantiates them.
<dan> depends what one means when one says they "create" an ontology. An
ontology is just another name for a belief system. When one writes down
one's beliefs, one is not really creating an ontology. />
<dan> In small domains, that is true. However, in large domains,
where information models and terminology model techniques are
integrated, the "small domain" techniques lead to huge
ontologic combinatorial explosions. />
[VK] I don't think this is a small vs. large domain issue; it is
more a matter of modeling approach. The combinatorial explosion is due to
the underlying complexity of the domain, which will not go
away. For example, there are a huge number of classes in GALEN
and SNOMED. In the semantic web approach, instances are classified
into one or more classes. In programming languages, one declares a
variable to be of a particular type. But in both these cases the
types and classes are defined ahead of time. So, I am not clear
in what sense you mean the above statement.
<dan> looks like the antecedent to my statement "In small domains..." is
lost somewhere above. In any case, in small domains, one can easily get
a picture of all the classes on a small diagram that is easy for people
to look at together. In large domains, the multitude of classes makes
the diagram huge and makes it difficult to express the essentials on one
computer screen or piece of paper (too many trees to see the forest).
The HL7 UML model of the RIM, which represents mood and class codes as attributes,
is simply a pictorial approach that assists discussion in many venues,
i.e. one doesn't need a huge piece of paper on the wall! Again, not to
get hung up in pictures of concepts. Focus on the concepts. />
Further granularity in the semantic meaning of the instance
is declared in the "code" attribute, which contains a series
of fields: Original Text;
mapping of original text to an expression from a published
vocabulary (e.g. SNOMED);
[VK] If we view SNOMED as an ontology, this effectively
declares that instance to be an instance of the class
described by the SNOMED expression.
<dan> Correct...The instance must simultaneously be an expression
of any hierarchies and other associations in SNOMED and of any
hierarchies and associations in the HL7 RIM. />
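This "dual classification" can be sketched as follows. The hierarchies and term names below are toy stand-ins, not actual RIM or SNOMED content:

```python
# Toy parent maps for the two hierarchies the instance must satisfy at once.
rim_parents = {"BloodPressureObservation": "Observation",
               "Observation": "Act"}
snomed_parents = {"systolic blood pressure": "blood pressure finding",
                  "blood pressure finding": "clinical finding"}

def ancestors(node, parents):
    """Walk up a parent map, collecting every ancestor of `node`."""
    out = set()
    while node in parents:
        node = parents[node]
        out.add(node)
    return out

# One instance, classified in both systems simultaneously: its RIM class
# places it in the RIM hierarchy, its code places it in the terminology.
instance = {"rim_class": "BloodPressureObservation",
            "code": "systolic blood pressure"}

assert "Act" in ancestors(instance["rim_class"], rim_parents)
assert "clinical finding" in ancestors(instance["code"], snomed_parents)
```

Checking that the two classifications do not contradict each other, across real-sized hierarchies, is exactly the hard part the next paragraph describes.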
The essential rule of Term Info in HL7 is that none of these
parts of an "expression" may contradict the other, although
each part may contribute to the total semantic meaning of the
"expression." It is also important that the semantic meaning
of the "class" within its hierarchy in the RIM and the
meaning of the published code within its hierarchy in the
published coding system not contradict each other. However,
much work remains in order to remove contradictions in the
hierarchies of all these ontologies when used together.
[VK] This is exactly where having a common representational
formalism and framework to represent information models and
terminologies would be very useful!
<dan> Ergo...The Term Info project. Should this group join efforts
with HL7 TermInfo, since both groups are trying to achieve the
same ends? />
[VK] Would like to propose a task force where the OWL can be
offered as a candidate formalism to support the requirements
identified by TermInfo. Would this be of interest to you, Samson,
Peter, Stan Huff and the rest of the HL7/TermInfo gang?
(As noted earlier, the RIM is a compromise between the very
abstract, raw, models like ASN.1 or EAV and the more concrete
models often found in database schemas for a narrow domain.)
[VK] This sort of validates my opinion that it is more of a
meta-model, i.e., it belongs to Layer 1.
<dan> Correct. The RIM in your model belongs in Layer 1 and the
domain specific, derived models from the RIM, e.g. Clinical
Statement Pattern, implementable RMIM, CDA, service models,
belong in your Layer 2. />
[VK] Great!
What are called Archetypes in OpenEHR correspond to HL7
structures called Care Structures in HL7 Patient Care. These
"Care Structures" represent aggregations of classes used to
represent a medical record construct such as a problem list
or care plan. Care Structures typically provide the "context"
for very granular concepts. For example, by itself, the term
"diabetes Type 2" is merely a concept. Once diabetes is
placed within a problem list care structure for a specific
patient, the "sense" of what is meant by "diabetes Type 2"
in a particular assertion of the term is clearer.
[VK] Would be interested in understanding the semantics
underlying the "Care Structure". Maybe one could
model specific classes for a Problem and a Care Plan, and maybe
Diabetes Type 2 could be a subclass or an instance of the
Problem metaclass or class. Just throwing out some alternate
modeling approaches ... Would like to know the fallacies, if
any.
<dan> These models that are specific to problems and care plans already
exist under the Care Structure Domain in Care Provision. />
<dan> When the SNOMED code for diabetes is used in the Observation
class in the RIM, one is creating an instance of the combined
relationships found in the RIM and in SNOMED. You aren't really
adding any new modeling approaches here. />
[VK] Probably not, what I am trying to do is reinterpret and make
explicit the semantics in the context of a multi-layered
representation.
In HL7 templated CDA documents (like CCD), templates are used to
bind to a Schematron conformance test that validates that
certain XML Care Structures (again, aggregations of classes,
attributes, and vocabulary) do not extend beyond a specific set of
allowable constraints. Therefore, templates don't really add
semantic meaning. However, they do enforce semantic meaning, and
therefore support improved interoperability.
[VK] Agree CDA documents do not add to the semantics. We are more
interested in the information model or R-MIM underlying the CDA.
<dan> you missed the point that templateID is part of the RMIM of
the CDA. The templates are used to enforce the combined
information model and terminology models conformance statements. />
[VK] OK, then what you are suggesting is that a template is
logically equivalent to a set of constraints on the information
model. Would be interested in representing these conformance
statements as a set of OWL axioms.
<dan> I agree...Adding an OWL version of these conformance statements
would be a great next step. />
I hope this long-winded description helps in this "multi-layered
Knowledge Representation" discussion. How one classifies the
concept of "context" for a given concept, or the concept of
"conformance testing the constraints on an aggregation of
structure and vocabulary" in a multi-layer Knowledge
Representation is not clear to me.
[VK] Some thoughts on this are as follows:
- A context can be typically represented as a MetaClass or a
Class.
<dan> Context for an instance of a class is represented by all the
many class associations that exist for an individual class
instance. Computationally, in an EHR, the context extends to
anything previously recorded in the EHR as well as all the
associations to references outside of the EHR, e.g. knowledge
links to country information, terminology information, basic
science information, facility information, etc. Don't think too
small on context! />
- A given concept can be a class, which can be represented as
an instance or a subclass of the context, or associated with a
context through well-defined semantic relationships.
- Can you present a concrete definition of conformance? I am
assuming for the purposes of this discussion Conformance =
Semantic Subsumption.
Assuming that we have represented concepts and aggregation
structures in a common formalism, conformance would
correspond to checking for subsumption.
<dan> There are many kinds of "conformance." One basic example is
testing the contents of a data entry field before committing the
contents to the database to make sure the contents have the right
kinds of characters, e.g. numeric, alphabetic, etc.
[VK] This is basically syntax checking, which checks the
format in which data is represented and is not an information
modeling or semantics issue.
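The character-level check Dan describes is the simplest kind of conformance. A minimal sketch, using only the standard library (the field format is an illustrative assumption):

```python
import re

def valid_numeric_field(value: str) -> bool:
    # Accept an optional sign, digits, and an optional decimal part;
    # reject anything containing other characters.
    return re.fullmatch(r"[+-]?\d+(\.\d+)?", value) is not None

assert valid_numeric_field("120")
assert valid_numeric_field("98.6")
assert not valid_numeric_field("12O")   # letter O, not zero
```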
Schematron testing in CDA tests the conformance of the XML
structure and the codes and other values within the XML structure
(think terminology) to make sure the wrong codes aren't used in a
specific XML structure.
[VK] XML structure testing can be tricky because the healthcare IT
community has used XML Schema to represent information models. XML
Schema is a language designed to describe the format and structure
of XML documents, in contrast with languages
such as RDF, OWL and UML, which seek to describe the semantics
underlying these documents. So "checking for conformance of XML
structure" could either (A) check for the validity of the
structure of the XML document, or (B) check for the validity of the information
model (R-MIM) underlying the XML document. What would be relevant
is (B), and we could try to use OWL axioms to describe the type of
conformance statements represented by (B).
Finally, matching terminologies is a semantics issue, and
OWL/Description Logics have been used to represent SNOMED;
terminology matching can be expressed in terms of OWL subsumption.
<dan> Again...agreed...OWL is a natural tool for this task />
I'm sure that a broader definition of conformance can be created
that includes things as basic as character validation and as
complex as information model/vocabulary model validation. />
[VK] What can easily be implemented using OWL is information
model/vocabulary validation.
Cheers,
---Vipul