On Mon, 18 Dec 2000 07:29:12   Wayne Wilson wrote:
>Andrew po-jung Ho wrote:
>> 
>> 
>> So - in essence, HL7 was inadequate as an intermediate schema.  On the other
>> hand, the combination of HL7 and the "interface engine" was a more functional
>> "intermediate schema" for your setting.
>> 
>You can view it that way, but it is not quite what I meant. From what I
>know, every large scale health care organization in the U.S., at least,
>has used interface engine technology; this is not a unique situation.

If HL7 alone is inadequate for every organization in the U.S., doesn't that suggest 
something fundamentally flawed about its approach?

>  It was not the failure of HL7 per se, but rather the failure of an
>approach that tried to do hundreds of one-to-one transformations.

So, how does the "interface engine" technology fix these failures?
It would seem to me that the "interface engine" is actually doing nearly 1:1 mapping 
between the nodes.

>Even with the best of agreements, enough difference creeps in
>that expecting any consistency across all those one to one
>transformations is too much to ask.  

This is exactly my point.  Even when we pay the price for a "reference intermediate 
schema", what we get may be less than what we bargained for.

>The intermediary provides a single
>point for agreement to be mediated by using one transformation per
>system.  

Again, this is the rationale for having a common intermediary.  What you have 
indicated is that it does not really work as promised.  Of course, the question is how 
much work the "interface engine" must do.  The larger the "interface engine", the less 
useful the common intermediary.

If your argument is that it is not perfect but is the best solution possible, then you 
will have to support that with a proof that no better solution is theoretically 
possible. 

>This is pure math, based on the fact that each transformation
>loses or changes some of the intended information.  

If you accept this, then you must agree that going from A==>B without an intermediary 
schema will introduce the least error.
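To make the "pure math" concrete, here is a minimal, hypothetical sketch (the field names are illustrative, not drawn from HL7) of how loss compounds when each hop in a transformation chain keeps only the fields its target schema understands:

```python
# Hypothetical sketch: each schema transformation keeps only the fields
# the target schema knows about, so information loss compounds per hop.

record = {"systolic": 120, "diastolic": 80,
          "cuff_size": "adult", "posture": "sitting"}

def transform(rec, known_fields):
    """Map a record into a target schema that models only some fields."""
    return {k: v for k, v in rec.items() if k in known_fields}

# Direct A==>B: B understands three of the four source fields.
direct = transform(record, {"systolic", "diastolic", "posture"})

# Via an intermediary C that happens not to model 'posture':
via_c = transform(transform(record, {"systolic", "diastolic", "cuff_size"}),
                  {"systolic", "diastolic", "posture"})

print(len(direct))  # 3 fields survive the direct path
print(len(via_c))   # 2 fields survive the chained path
```

The intermediary here silently drops 'posture' even though both endpoints model it, which is exactly the kind of drift that no amount of agreement about C can prevent.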

>The more
>transformations you put in the path, the more the end result deviates
>from the original.

Right.  Therefore, why force all transformations from A==>B to go through 
C (= HL7 = common reference schema)?

In the architecture that I am proposing, it is not mandatory that all mediations be 
direct (i.e., A==>B); however, direct transformation is permitted when going through 
intermediates is inadequate.  Basically, the number of intermediates can go from 0 to 
n.  When intermediates=0, it is a direct mediation (A==>B).  When 
intermediates=2, the path is A==>C, C==>D, D==>B.
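One way to sketch this architecture (purely illustrative Python; the hop functions are stand-ins for real schema transformations) is to treat a mediation path as a list of hops that happens to be empty in the direct case:

```python
from functools import reduce

def mediate(record, hops):
    """Apply each hop's transformation in order.  An empty list is the
    direct A==>B case; a three-hop list is A==>C, C==>D, D==>B."""
    return reduce(lambda rec, hop: hop(rec), hops, record)

# Stand-in hops that just record the path taken.
a_to_c = lambda r: {**r, "via": r.get("via", []) + ["C"]}
c_to_d = lambda r: {**r, "via": r["via"] + ["D"]}
d_to_b = lambda r: {**r, "via": r["via"] + ["B"]}

print(mediate({"bp": "120/80"}, []))           # direct: record unchanged
print(mediate({"bp": "120/80"},
              [a_to_c, c_to_d, d_to_b])["via"])  # ['C', 'D', 'B']
```

The point of the sketch is that "through HL7" becomes just one possible path among many, rather than the mandatory one.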

>  With enough transformations, the original data has been so 'corrupted'
>that any aggregate function becomes hopeless.  

For schemas that cannot tolerate n > m intermediaries, the construction/discovery 
of a path where n <= m is necessary.  This is really no different from hand 
tuning the HL7 interface engine.

>Our analysis of legacy
>system transformations showed that hand tuning (accomplished at the
>interface engine) of the transformation (into HL7) based on an intimate
>understanding of the source information was necessary.  Even at that, we
>had to make many changes and legacy-system-wide sweeps of the information
>store to 'cleanse' the legacy information itself!  I would like to
>believe that some magic 'mediators' exist that can do this without human
>intervention, but even Renner, et al. don't support that conclusion.

While flexible, reusable, and modular mediators are not as desirable as 'magic 
mediators', I would think that they are better than hand tuning the "interface engine" 
for each and every organization in the U.S.

>  There is a reason why a very viable market exists in health care for
>interface engines and in data warehousing for 'cleansing' solutions.

This is the current state of the art.  Shouldn't we try to produce a better solution?

>The bringing together of multiple viewpoints for single  point computer
>processing demands it.
>
>  My point about scale is this:  With a small enough set of records that
>a human being can read them all, the 'cleansing' and 'transformation'
>takes place inside the human's conscious understanding.  It's only when
>your record sets get large enough (per an individual's retrieval set)
>that you need a computer on a daily basis to assist with these acts,
>and that you start to see problems.

I agree that the current state of the art is clumsy and imperfect.  The question that I 
have is whether breaking the data transformation/interchange problem into smaller 
problems by performing transformations on a smaller schema (not a smaller number of 
records) could be a solution.

Rather than interchanging the entire medical dictionary, for example, we could have a 
mediator for just blood pressure (an item) or a small collection of terms (a form).  
Thus, the use of a common reference schema is still possible - but is not required.  
Also, with this flexible approach, terms that cannot tolerate going through an 
intermediary can be directly transformed (e.g., severity of pain).  On the other hand, 
terms that tend to be resistant to error/drift can go through more intermediaries 
(e.g., date).
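As a rough sketch of this per-term routing (all names, routes, and formats here are hypothetical, chosen only to illustrate the idea, not taken from any standard):

```python
# Hypothetical sketch: each term gets its own mediation route - a direct
# transform for drift-sensitive terms, a chained path for robust ones.

def direct_pain(rec):
    return {"pain_severity": rec["pain"]}   # A==>B, no intermediary

def date_to_iso(rec):
    y, m, d = rec["date"].split("/")        # A==>C
    return {"date": f"{y}-{m}-{d}"}

def iso_passthrough(rec):
    return {"visit_date": rec["date"]}      # C==>B

ROUTES = {
    "pain": [direct_pain],                  # cannot tolerate intermediaries
    "date": [date_to_iso, iso_passthrough], # tolerates a longer path
}

def mediate_term(term, rec):
    """Apply the route registered for a single term, hop by hop."""
    for step in ROUTES[term]:
        rec = step(rec)
    return rec

print(mediate_term("pain", {"pain": 7}))
print(mediate_term("date", {"date": "2000/12/18"}))
```

The route table makes the tolerance decision explicit and local to each term, instead of forcing one global answer for the whole dictionary.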

Thank you for the careful review and comments,

Andrew
---
Andrew P. Ho, M.D.
OIO: Open Infrastructure for Outcomes
www.TxOutcome.Org
Assistant Clinical Professor
Department of Psychiatry, Harbor-UCLA Medical Center
University of California, Los Angeles


