Brian Bray wrote:
>The most complete source is probably the HL7 RIM model. It should be
>available from the HL7 site www.hl7.org
>
The HL7 RIM is not a domain model - i.e. it is not a model of concrete
concepts in the (clinical medicine) domain. It is a model of mostly
non-volatile, abstract concepts which (according to the HL7 analysis)
can be used to express any concrete concept in the domain (such as the
act of observing a blood pressure, or ordering a medicine).
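To make the distinction concrete, here is a rough sketch in Python. The
class and field names are invented for illustration; they are not the
actual RIM classes.

    from dataclasses import dataclass, field
    from typing import Any

    # A "domain model" defines one class per concrete concept:
    @dataclass
    class BloodPressureMeasurement:
        systolic_mmHg: int
        diastolic_mmHg: int

    # A model of abstract concepts instead defines a few generic classes;
    # concrete concepts become *instances* of them, not new classes:
    @dataclass
    class Observation:
        code: str                               # what was observed
        values: dict[str, Any] = field(default_factory=dict)

    bp = Observation(code="blood pressure",
                     values={"systolic": 120, "diastolic": 80,
                             "units": "mm[Hg]"})

The first style needs a new class for every concept; the second keeps the
class model small and stable, and pushes the domain knowledge into data.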
Domain models created in UML or similar, and designed for software
and/or database schema implementation, are nearly always useless, and at
worst are the dangerous fantasies of data modellers in national and
state health departments. Why useless? I have included some reasons from
my archetypes paper below. Why dangerous? Because people who have no
idea what they are doing divert important resources into large dead-end
projects.
The HL7 model is also, by the way, a model conceived for defining
messages about things, rather than an EHR. For some understanding of
this issue, see the "HIS Manifesto" at
http://www.deepthought.com.au/health/openEHR/openEHR.html.
None of the models for EHR standards such as GEHR or CEN 13606 are
domain models either; they are small models of abstract concepts which
can be used to express all entities in the domain in some way.
Collaboration efforts between openEHR, CEN, PROREC and the EuroRec
Institute are currently underway to achieve convergence between these
models, and to adopt an archetype methodology for modelling domain
concepts in a way which is computable. For the current GEHR Australia
submission to this process, see the openEHR reference model links at
http://www.deepthought.com.au/health/openEHR/openEHR.html.
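Roughly, the archetype idea separates a small, stable reference model
(implemented in software) from domain concepts expressed as constraint
data. The sketch below is a simplification with invented names; it does
not reproduce the GEHR/openEHR reference model.

    from dataclasses import dataclass

    @dataclass
    class Element:                    # reference model: generic, stable
        name: str
        value: float
        units: str

    @dataclass
    class Observation:
        concept: str
        elements: list[Element]

    # An "archetype" is data, not code: it constrains how the generic
    # classes may be used to express one domain concept.
    BLOOD_PRESSURE = {
        "concept": "blood pressure",
        "elements": {
            "systolic":  {"units": "mm[Hg]", "min": 0, "max": 300},
            "diastolic": {"units": "mm[Hg]", "min": 0, "max": 200},
        },
    }

    def conforms(obs: Observation, archetype: dict) -> bool:
        """Check an Observation against an archetype's constraints."""
        wanted = archetype["elements"]
        for e in obs.elements:
            c = wanted.get(e.name)
            if c is None or e.units != c["units"]:
                return False
            if not (c["min"] <= e.value <= c["max"]):
                return False
        return set(wanted) <= {e.name for e in obs.elements}

    bp = Observation("blood pressure",
                     [Element("systolic", 120, "mm[Hg]"),
                      Element("diastolic", 80, "mm[Hg]")])
    assert conforms(bp, BLOOD_PRESSURE)

Adding a new clinical concept then means authoring a new archetype, not
changing and redeploying the software.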
- thomas beale
Problems with Domain models being used to build systems.
-----------------------------------------------------------
(these statements apply to single-level models, which attempt to capture
all the semantics of a system being built in one class model; domain
models used as a basis for software are even worse, since they try to
capture everything)
taken from: http://www.deepthought.com.au/it/archetypes.html
* The model encodes only the requirements found during the current
development, along with best guesses about future ones.
* Models containing both generic and domain concepts in the same
  inheritance hierarchy are problematic: the model can be unclear,
  since very general concepts may be mixed with very specific ones,
  and later specialisation of generic classes is effectively
  prevented. The model is also not easily reusable in other domains.
* Technical problems such as the "fragile base class" problem (see
  [9.]) must be understood and avoided both initially and during
  maintenance; a toy illustration follows this list.
* It is often difficult to complete models satisfactorily, since the
  number of domain concepts may be large, and ongoing requirements
  gathering can lead to an explosion of domain knowledge, all of which
  has to be incorporated into the final model. In domains where the
  number of concepts is very large, such as health, this problem can
  retard software system completion significantly.
* There may be a problem of semantic fitness. It is often not possible
  to clearly model domain concepts directly in the classes, methods and
  attributes of typical object formalisms. Domain concepts have
  significant variability, and often require constraints expressed in
  predicate logic to complete their definition. A more powerful
  "language" for domain concepts may be needed. See "The Problem of
  Variability" in the full paper.
* Modelling can be logistically difficult to manage, due to the fact
  that two types of people are involved: domain specialists and
  software developers. Domain specialists are forced to express their
  concepts in a software formalism, such as UML or a programming
  language (assuming they understand such formalisms), or more usually,
  make their requirements known through an ad hoc interface of
  discussions and document reviews with developers. Software developers
  often have difficulty dealing with numerous concepts they don't
  understand. The two processes of investigation, which would naturally
  proceed at different rates, are forced into an unhappy synchrony,
  since domain investigation is typically more involved, whereas
  software requirements capture and analysis is usually driven by
  project deadlines. The typical result is a substandard modelling
  process in which domain concepts are often lost or incorrectly
  expressed, and software which doesn't really do what users want.
* Introduction of new concepts requires software changes, and
  typically rebuilding, testing and redeployment, which are expensive
  and risky. If conversion of legacy data and/or significant downtime
  is also involved, the costs can become a serious problem. All of
  these cost factors have routinely made existing systems uneconomic in
  the past, and mandated complete redevelopment or replacement.
* Even when some level of interoperability is initially achieved, it
  generally degrades over time, due to systems diverging from agreed
  common models as they follow differing local requirements. See [10.]
  for a discussion of interoperability issues.
* Standardisation is difficult to achieve. With large domain models,
  it is logistically and technically (and often politically) difficult
  for different vendors and users to agree on a common model. Lack of
  standardisation not only makes interoperability difficult to achieve,
  it makes automated processing (such as decision support or data
  mining) nearly impossible, since there are almost no general
  assumptions such systems can make about the underlying model.
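As promised above, a toy Python sketch of the "fragile base class"
problem (the class names are invented for illustration):

    class Record:
        def add(self, item):
            self._store(item)

        def add_all(self, items):
            # originally implemented in terms of add()
            for item in items:
                self.add(item)

        def _store(self, item):
            print("stored", item)

    class AuditedRecord(Record):
        # subclass relies on add_all() going through add()
        def add(self, item):
            self.audit(item)
            super().add(item)

        def audit(self, item):
            print("audited", item)

    # Later, a maintainer "optimises" the base class:
    #     def add_all(self, items):
    #         for item in items:
    #             self._store(item)      # bypasses add()
    # AuditedRecord did not change at all, yet add_all() now silently
    # skips auditing -- the subclass is broken by a base-class edit.

In a large domain model with deep inheritance, such breakage is hard to
detect and has to be guarded against at every maintenance step.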
Many of the above shortcomings apply also to "standardised" domain
models created by standards bodies, industry groups and governments. The
worst feature of these typically very large models is that they embody
no single point of view (in fact they are an amalgam of many), and as
such cannot be used to build software. Large models do not create any
significant improvement in the ability of their target systems to deal
with change, although widespread adoption may make for reasonable
interoperability, at least initially. However, local requirements need
to be catered for, and doing this while remaining faithful to the
standard model mandates a level of discipline in change management not
found in most organisations.
In short, the single-model methodology produces systems which may work
for the present, but whose utility degrades to a point where they become
uneconomic.
>
>
>-Brian
>
>Rob Cecil wrote:
>
>>Hello,
>>
>>Can someone please direct me to projects that have published database
>>schemas (DDL, ERD), or domain object models (UML) for common healthcare
>>objects, e.g. Patient, Episode, etc., both clinical and financial objects.
>>
>>Thanks
>>
>>Rob Cecil
>>
>
--
..............................................................
Deep Thought Informatics Pty Ltd
Information and Knowledge Systems Engineering
phone: +61 7 5439 9405
mailto:[EMAIL PROTECTED]
Electronic Health Record - http://www.gehr.org/
openEHR proposals - http://www.deepthought.com.au/health/openEHR/openEHR.html
Knowledge Methodology - http://www.deepthought.com.au/it/archetypes.html
Community Informatics - http://www.deepthought.com.au/ci/rii/Output/mainTOC.html
..............................................................