Open source efforts like OpenMRS, WorldVistA (VistA Office, etc.),
OSCAR and others are focused on diffusion/uptake and continuous
improvement. All of them need practical tools and methods to work
effectively in the heterogeneous health IT ecosystem. Building on
Dave's view:

 >> I believe that with a modest upfront investment one can go a long way
 >> toward interoperability.  The open source community should be leading
 >> in this area, because of the increased cooperation.

What would that modest investment be? Who would be willing to 
collaborate to make it happen? How does a practical approach dance 
effectively with and benefit from the vision/work of the 
"interoperability" expert community?  How can we leverage the OSHCA 
meeting in May to help the open source health community take the 
leadership role?


Joseph


Will Ross wrote:
> What a wonderful discussion.   I am so glad to have Regenstrief's  
> OpenMRS at the table!   I also know there are other lurkers out there  
> (you know who you are!) who can add to the robust discussion.  But my  
> purpose here is to highlight one point.   Paul, Dave and Tim have all  
> mentioned not allowing the pursuit of "perfect" semantic  
> interoperability to interfere with simple incremental improvements  
> that can be realized immediately.   This is in fact one of the  
> hallmarks of the decades of dramatic real-world demonstrations that  
> Regenstrief has brought to central Indiana.   And it is the central  
> tenet of the Connecting For Health (USA version) effort to make  
> records portable and electronic without requiring a rip and replace  
> changeout of all legacy health record systems.   And it was one of  
> the key points in Andy Grove's "Shift Left" address at Stanford this  
> past November.
> 
>    http://news-service.stanford.edu/news/2006/november8/med-grove-110806.html
> 
> But we all know this is a marathon, not a sprint.   This year's TEPR  
> conference is the 23rd annual meeting devoted to the imminent  
> transition from paper to digital charting.
> 
>    http://www.medrecinst.com/conference/tepr/index.asp
> 
> Meanwhile, in my rural region of California, 2007 may be the year we  
> see adoption of EHR rise above 10% among small practices.   The  
> arrival of new FOSS projects like OpenMRS can only help improve our  
> rate of adoption.
> 
> With best regards,
> 
> [wr]
> 
> - - - - - - - -
> 
> On Feb 17, 2007, at 9:24 PM, David Forslund wrote:
> 
>> Tim Churches wrote:
>>> David Forslund wrote:
>>>
>>>> I've seen no real effort in the open source community to embrace
>>>> interoperability.  Certainly interoperability has been opposed by
>>>> much of industry until recently, but there is no good reason for
>>>> the open source community to not embrace it.
>>>>
>>> Dave, interoperability, although good in theory, is not an end in
>>> itself. Thus you have to ask the question: in the settings in which
>>> open source health information systems are or are likely to be
>>> deployed, what are the "business drivers" or the "business case" for
>>> interoperability, and what sort of interoperability?
>>>
>>> Thus, although there is indeed no good reason not to embrace
>>> interoperability, there may be, in many open source deployment
>>> settings, no good reason to embrace it, either, given that
>>> supporting interoperability is not without some cost.
>>>
>> I agree with you, with a caveat.  If you plan for interoperability,
>> the cost isn't very high.  Adding it later is much more expensive.
>> For the patient, the value of interoperability is very high.  Clearly,
>> for implementers, the demand for interoperability is not high, since
>> it might take away from the local business model.
>>> For example, the COAS spec document is 260 pages long, but if you go
>>> to the "Interoperation" chapter in it, it refers you to four other
>>> CORBA specifications, each also several hundred pages long, which
>>> need to be assimilated first. So that's a thousand pages. And that's
>>> even before one works out how to implement all this. That's the
>>> cost. So unless there are strong reasons to do this, in the
>>> always-resource-constrained world of open source development, it is
>>> no wonder it is hardly ever implemented.
>>>
>> Have you tried to read the WS-* web services documentation?  It is far
>> more complex than the CORBA specs.  Clearly the OMG specs require an
>> implementer to understand something about CORBA and IDL, but these
>> have been available in book stores for years and there are numerous
>> free implementations around with voluminous tutorials.  The discipline
>> of having well-defined interfaces between services is well worth the
>> time invested to understand them.  You don't have to read all of CORBA
>> to understand the value of COAS.  The UML models it contains should go
>> a long way toward helping you see the value of the approach and toward
>> adopting some of the interface principles, which would make the
>> implementation of interoperability much easier in the future.  And
>> there are implementations that can be studied.
>>>> Sending HL7 messages over encrypted SMTP email is a wonderful idea
>>>> for someone who is trying to get the most money for support from a
>>>> customer, but has little to do with building truly distributed
>>>> systems.
>>>>
>>> Tell that to the people using encrypted SMTP mail. I suppose it
>>> depends on what one means by "truly distributed systems".
>>>
>> Of course.  I'm speaking of a system that supports the ability to be
>> viewed as a "single" system distributed over a network of machines.
>> P2P is far from a "single" system image.
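
[A concrete aside on the "HL7 over encrypted SMTP" point being debated
above: mechanically it is quite simple. The Python below is only an
illustration, not anyone's production code; the host, credentials,
addresses and the HL7 v2 message content are all made up, and here the
"encryption" is just STARTTLS on the SMTP channel (message-level
encryption such as S/MIME would be layered on top).]

    # Sketch: send an HL7 v2.x message as the body of an email over an
    # SMTP connection upgraded to TLS.  All values are hypothetical.
    import smtplib
    from email.message import EmailMessage

    hl7_message = "\r".join([
        "MSH|^~\\&|CLINIC_A|LAB|CLINIC_B|EHR|20070218||ORU^R01|MSG00001|P|2.3",
        "PID|||12345^^^CLINIC_A||DOE^JANE",
        "OBX|1|NM|GLU^Glucose||95|mg/dL|70-110|N",
    ])

    msg = EmailMessage()
    msg["From"] = "results@clinic-a.example.org"
    msg["To"] = "intake@clinic-b.example.org"
    msg["Subject"] = "HL7 ORU^R01 result"
    msg.set_content(hl7_message)

    with smtplib.SMTP("smtp.clinic-a.example.org", 587) as smtp:
        smtp.starttls()                        # encrypt the channel
        smtp.login("results", "app-password")  # placeholder credentials
        smtp.send_message(msg)
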
>>>> I think that one should avoid asynchronous, time-delayed
>>>> coordination of updates to the same record in multiple locations.
>>>> What we have done in COAS is basically to provide versioning of a
>>>> record so that all versions are available.
>>>>
>>> That skirts the issue of coming up with the currently definitive
>>> version of a record for analysis purposes, but doesn't solve it.
>>> Which version should be used when analysing the data?
>>>
>> There obviously is no way of telling this.  It depends on how the
>> record is used and on the type of analysis.  One might want to analyze
>> what changes have occurred in the record for audit purposes.
>> Typically one is only interested in the latest version of a record.
>> If you want an algorithm to create a "new" version of a record based
>> on previous versions, this could be done, but I don't believe there is
>> one good solution to this problem.
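
[Another inline sketch, to make the versioning point concrete: keep
every version, read the latest by default, keep the full history for
audit. This is a toy Python illustration, not the COAS interface; all
names are invented.]

    # Toy versioned record store: updates are appended, never overwritten.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class VersionedRecord:
        versions: list = field(default_factory=list)  # [(timestamp, data), ...]

        def update(self, data: dict) -> None:
            self.versions.append((datetime.utcnow(), data))

        def latest(self) -> dict:
            return self.versions[-1][1]    # the usual view for analysis

        def history(self) -> list:
            return list(self.versions)     # the audit view: every version

    record = VersionedRecord()
    record.update({"glucose": 95})
    record.update({"glucose": 101})
    print(record.latest())         # {'glucose': 101}
    print(len(record.history()))   # 2 versions retained
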
>>>> The B-Safer web application in OpenEMed was used in a distributed
>>>> environment.  We had very heterogeneous feeds (available in the
>>>> clients/translate directory) from a variety of data sources (no two
>>>> alike).  Users of the data had views that were potentially different
>>>> for each site.
>>>>
>>> Differing views are what need to be avoided (at least eventually,
>>> when all nodes in a network have caught up with each other).
>>>
>> Not necessarily.  The different views in our case were driven by the
>> security requirement of not being able to see other participants' data
>> except in the aggregate.  In the GCPR project the views were to be in
>> the form the user was familiar with: a DoD record would take on a VA
>> view when viewed by someone in the VA, so that they would see a
>> uniform set of records, and vice versa for a DoD person viewing VA
>> data.
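
[To make the "see other participants' data only in the aggregate" idea
concrete, here is a small, invented Python illustration; the sites,
fields and records are hypothetical and this is not how B-Safer or GCPR
implement their views.]

    # Hypothetical per-site view: the viewer's own site in full detail,
    # every other site reduced to aggregate counts.
    from collections import Counter

    records = [
        {"site": "clinic_a", "patient": "p1", "diagnosis": "flu"},
        {"site": "clinic_a", "patient": "p2", "diagnosis": "asthma"},
        {"site": "clinic_b", "patient": "p3", "diagnosis": "flu"},
    ]

    def site_view(viewer_site: str) -> dict:
        own = [r for r in records if r["site"] == viewer_site]
        other = Counter(r["diagnosis"] for r in records
                        if r["site"] != viewer_site)
        return {"own_records": own, "other_sites_aggregate": dict(other)}

    print(site_view("clinic_a"))
    # clinic_a patients are listed in full; clinic_b shows up only as counts
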
>>>> It does appear that programming languages are the biggest barrier
>>>> for this particular open source community.  Some like Java, some
>>>> like Python, some like PHP, etc.  That was the value of the IDL used
>>>> in COAS: it is language independent and really quite easy to read as
>>>> an interface (as opposed to trying to read WSDL).
>>>>
>>> If I type "python IDL" into Google, I get hits for Python and IDL,
>>> the matrix language (like Matlab), but not IDL as in COAS. In other
>>> words, CORBA support is not exactly mainstream, or even a
>>> well-supported niche.  Neither is openEHR (yet, maybe some day).
>>>
>> Of course, but IDL is an ISO standard.  Frequency of appearance in
>> Google doesn't always mean much, since Google has a hard time
>> deciphering the context.  The OMG IDL is near the top of the list.
>> I'm not sure what this has to do with anything, however.
>>>> It is important for interoperability to have interfaces specified in
>>>> a language-neutral way, so that, in fact, Python-built systems can
>>>> interoperate with Java, etc.
>>>>
>>> Well, it depends what the aim is. If clinic A simply needs to share
>>> data with clinics B and C, and none of them currently have
>>> information systems, then installing the same software in all of them
>>> and using less general means to share data between those clinics may
>>> be the easiest path to take. Yes, that's short-sighted, but in many
>>> places it is important to walk before you can run. In any case,
>>> systems are not set in stone - all information systems have a limited
>>> life span and it is wrong to forgo an adequate but sub-optimal system
>>> now while waiting for a perfect system tomorrow. With open source,
>>> you can largely have both.
>>>
>>> If the aim is to drop software into the midst of a modern large
>>> hospital in a developed country, then what you say is correct.
>>>
>> These are certainly the drivers that keep people from sharing data
>> over a larger region.  They obviously have merit, which is why people
>> have followed them.  I agree, however, with the maxim to think
>> globally and act locally.  Interoperable interface standards can
>> actually make things easier when working locally and prepare one for
>> the future.  People underestimate the importance of sharing their data
>> over a wide area.  This may be even more important for a third-world
>> country that wants to have remote medical assistance.  I don't
>> advocate waiting for the "perfect" system.  But using modular
>> techniques with well-defined interfaces allows one to evolve into the
>> future at a lower cost, in my opinion.  Others may not share this
>> view, of course.
>>
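
[To make "modular techniques with well-defined interfaces" and the
earlier point about language-neutral IDL concrete, here is a small,
invented sketch of an observation-query contract written as a Python
abstract base class. It is not the COAS interface and the names are
hypothetical; the point is that the same contract could equally be
written in OMG IDL or as a Java interface, and any implementation that
honours it can interoperate with the others.]

    # Hypothetical, heavily simplified observation-query contract.
    from abc import ABC, abstractmethod
    from datetime import datetime
    from typing import Dict, List

    class ObservationAccess(ABC):
        """Contract for querying clinical observations (invented example)."""

        @abstractmethod
        def get_observations(self, patient_id: str,
                             start: datetime, end: datetime) -> List[Dict]:
            """Return observations for one patient within a time window."""

    class InMemoryObservationAccess(ObservationAccess):
        """One possible implementation; a remote CORBA- or web-service-
        backed implementation could satisfy the same contract."""

        def __init__(self, store: List[Dict]):
            self._store = store

        def get_observations(self, patient_id, start, end):
            return [o for o in self._store
                    if o["patient_id"] == patient_id
                    and start <= o["when"] <= end]
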
>> I believe that with a modest upfront investment one can go a long way
>> toward interoperability.  The open source community should be leading
>> in this area, because of the increased cooperation.  Unfortunately,
>> it seems to be lagging behind.
>>
>> Dave
>>> Tim C
>>>
>>>
> 
> 
> [wr]
> 
> - - - - - - - -
> 
> will ross
> project manager
> mendocino informatics
> 216 west perkins street, suite 206
> ukiah, california  95482  usa
> 707.462.6369 [office]
> 707.462.5015 [fax]
> www.minformatics.com
> 
> - - - - - - - -
> 
> "Getting people to adopt common standards is impeded by patents."
>          Sir Tim Berners-Lee,  BCS, 2006
> 
> - - - - - - - -