David Forslund wrote:
> Joseph Dal Molin wrote:
>> Open source efforts/software like OpenMRS, WorldVistA (VistA Office, 
>> etc.), OSCAR, etc. are focused on diffusion/uptake and continuous 
>> improvement. All of them need practical tools, methods, etc. to work 
>> effectively in the heterogeneous health IT ecosystem. Building on 
>> Tim's view:
>>
>>  >> I believe that with a modest upfront investment one can go a long way
>>  >> toward interoperability.  The
>>  >> open source community should be leading in this area, because of the
>>  >> increased cooperation.
>>
>> What would that modest investment be? Who would be willing to 
>> collaborate to make it happen? How does a practical approach dance 
>> effectively with, and benefit from, the vision and work of the 
>> "interoperability" expert community? How can we leverage the OSHCA 
>> meeting in May to help the open source health community take the 
>> leadership role?
>>
>>
>> Joseph
>>   
> The above quote was from me, not Tim.  I don't know if he has the same 
> view or not.

I am not in any way antithetical to investing effort in
interoperability. However, I do not regard it as an end in itself. The
goal of open source health informatics must always be to improve the
health and health care of people. If widespread and ongoing
interoperability is important, in a given setting or sub-domain, to
achieving those goals, then a lot of effort should be put into
implementing highly generalised, standards-based interoperability. If
only limited intraoperability is required between, say, a few clinics
all running the same software, then I believe it is perfectly
permissible to take shortcuts and go for easier-to-implement,
non-standard interoperability mechanisms, particularly when software
development resources are tight, as they almost always are in open
source projects. And if interoperability is simply not needed, then
there is no point building it in.

All these views are modified by the level of resources and the
expected longevity of the software. If millions of dollars and tens or
hundreds of person-years are being ploughed into a project, then it
would be silly not to consider standards-based interoperability right
from the start. But if, like most open source projects, the budget
ranges from zero to a few hundred thousand dollars and a few
person-years of effort or less are involved, then a more Zen approach
can be taken: regard the software as ephemeral, to be evolved or
recreated on a regular basis, perhaps even every year or so. In that
case, failing to build in complex, standards-based interoperability at
the early stages is not such a disaster, even if it is needed later.
Better to get the project up on its feet first.
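
As a rough illustration of what such a shortcut can look like, here is
a minimal sketch in Python. The record layout and function names are
entirely hypothetical, not taken from any of the projects named above:
two sites running the same software simply agree to exchange patient
records as newline-delimited JSON rather than standing up a full
standards-based interface.

    import json

    def export_patients(patients, path):
        """Dump each patient record (a dict) as one JSON object per
        line, in whatever field layout our own software uses."""
        with open(path, "w") as f:
            for p in patients:
                f.write(json.dumps(p) + "\n")

    def import_patients(path):
        """Read the dump back in. This only works because both ends
        run the same software and so share the same field names."""
        with open(path) as f:
            return [json.loads(line) for line in f]

The trade-off is plain: the field names are a private contract between
the two installations, and the moment a site running different
software joins, a real standard is needed after all.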


> The "modest
> investment" is in the design of a system up front.  It always saves time 
> to go through a design process
> rather than just start coding.  The design process involves 
> understanding and documenting the underlying
> abstractions of the process.  This can lead to well-designed interfaces 
> which properly divide up the labor involved
> more efficient development.  It is at this point that one reviews the 
> literature to see how well the interfaces
> match to existing standards or systems.

I agree with this to a degree, although I am utterly convinced that
the traditional "waterfall" method of designing everything on paper or
as thought-experiments, encoding that in written specs, and then
slavishly implementing those specs, is completely broken (yet I still
see it used all the time for software projects, half of which then
fail). Better to keep the initial design phase brief, then start
coding, reviewing the outcome and refining the design with highly
iterative, agile development methods.
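
To make the point about abstraction-first design concrete, here is a
minimal sketch (again in Python, with hypothetical names drawn from no
existing project) of the kind of interface David describes. The
application codes against an abstraction of where patient records
live, and a standards-based implementation can be slotted in later
without disturbing the callers:

    from abc import ABC, abstractmethod

    class PatientStore(ABC):
        """Abstraction over wherever patient records are kept."""

        @abstractmethod
        def fetch(self, patient_id: str) -> dict: ...

        @abstractmethod
        def save(self, record: dict) -> None: ...

    class InMemoryStore(PatientStore):
        """Quick non-standard implementation to get a project up on
        its feet."""

        def __init__(self):
            self._records = {}

        def fetch(self, patient_id):
            return self._records[patient_id]

        def save(self, record):
            self._records[record["id"]] = record

    # Later, a class talking to, say, an HL7 FHIR server could
    # implement the same PatientStore interface and replace
    # InMemoryStore without touching any calling code.

This is also where the literature review David mentions pays off: if
the abstract interface already resembles the lifecycle of an existing
standard's resources, the eventual standards-based implementation
comes cheaply.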

Tim C
