On Sat, 2003-02-08 at 03:32, [EMAIL PROTECTED] wrote:
> When you think about what the options 
> are for the Big IT boss for acquiring a new capability for their organization, 
> there are three basic approaches.  For a given set of required functionality 
> the basic options are:
> 1.  Buy a total COTS (Commercial Off The Shelf) Solution.  This assumes of 
> course that there is a product out there that meets the requirements.
> 2.  Develop the solution from scratch.  This can be an "inhouse" development 
> project, an outsourced development project or a combination of inhouse and 
> outsourced work.
> 3.  Start with an NDI (Non-Developmental Item) to meet a given amount of the 
> requirements and meet the rest of the requirements by doing development.  The 
> NDI in this case could be a third party proprietary product, a piece of 
> shareware/freeware or opensource code.  Regardless of the source, it is a 
> software component that provides a known quantity of capability for the team, 
> and to make up the difference the team develops the additional components to 
> meet the full requirement set.
> 
> If I understand what you have written above then you are in favor of the 
> third approach especially if the developed code is added to the opensource 
> and made available as opensource too. Or in the event one has to take 
> approach 2 then the code developed is put out as opensource.  Am I correct?

Yes, spot on, although the name "Non-Developmental Item" may be a bit
misleading for open source items. Yes, there is a whole lot of
development already done, which doesn't have to be re-done, but an open
source "NDI" offers the possibility of direct further development, rather
than having to treat it as a "black box" and build an interface to it
or a wrapper around it in order to extend its functionality.

Here is a concrete example, one which we are using in our project. There
is a stunningly good FOSS statistics and statistical graphics package
available called R (see http://www.r-project.org), based on the S
statistics language developed by AT&T in the 1980s. The R team features
many eminent applied statisticians from around the world, and the
package is mature and robust. However, at its heart it is a stats
package, and thus it is not a good choice for building large, complex
applications (and was never designed for that purpose): the S language
which is used to drive R is functional (as in "everything is a
function") and only partly object-oriented. Very efficient for
mathematical work, but not for complex systems of applications, user
interfaces etc.

On the other hand we have a FOSS language such as Python (see
http://www.python.org), which is highly object-oriented, is popular and
is almost self-documenting (plus has lots of documentation tools) and
was designed from the ground up for building large, complex
applications. There are many mature Python libraries in a range of
critical areas such as network infrastructure, GUI and Web-app toolkits
and interfaces to enterprise data stores, as well as for handling large
chunks of array data (the latter developed by physicists at Los Alamos
and Lawrence Livermore labs in the US).
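
A quick taste of that array handling, for the curious - the package is
the Numeric extension, if memory serves:

# Numeric: fast array arithmetic for Python.
import Numeric

a = Numeric.array([1.0, 2.0, 3.0, 4.0])
print(Numeric.sum(a * a))   # elementwise multiply, then sum -> 30.0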

But interfacing Python and R by traditional methods is a tedious, slow
and error-prone process.
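
By "traditional methods" I mean something like the following - a rough
sketch of my own, not production code - driving the command-line R
interpreter as a subprocess and scraping its text output:

# Drive R as a child process and parse its printed output. Every
# value crosses the boundary as a formatted string, which is where
# the tedium and the errors creep in.
import os

data = [2.9, 3.1, 3.5, 2.8, 3.3, 3.0]

# Build an R expression by string formatting...
script = "cat(mean(c(%s)))" % ", ".join([str(x) for x in data])

# ...pipe it through R, and hope the output parses cleanly.
pipe = os.popen("echo '%s' | R --slave --no-save" % script)
output = pipe.read()
pipe.close()

mean = float(output.strip())   # fragile: any stray warning breaks this
print(mean)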

Enter RPy, a very neat bit of work by Walter Moreira, a maths post-grad
based in Montevideo, Uruguay (see http://rpy.sf.net). RPy embeds the
whole of the R environment inside the Python virtual machine (pcode
interpreter), and provides an elegant object-oriented interface between
them. So now we have an enterprise-class development environment (Python
and friends), state-of-the-art stats and graphs (R) and interfaces to
just about every database and network resource under the sun. All we
need is a higher-level framework for data analysis and some end-user
interfaces (Web and GUI), and bingo, we would have a fabulous health data
analysis tool which would rival commercial suites such as SAS (see
http://www.sas.com). Of course, that last bit - the framework - is
non-trivial, but it is more inspiration than perspiration - all the
really time-consuming work has been done. (That's not to say inspiration
is not to be found in Python and R - in fact, they are replete with it.)
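
From memory, the RPy interface looks something like this (a sketch
only - the exact names may differ between releases):

# R functions appear as attributes of RPy's "r" object; dots in R
# names become underscores in Python, and results are converted
# back into ordinary Python objects.
from rpy import r

data = [2.9, 3.1, 3.5, 2.8, 3.3, 3.0]

print(r.mean(data))               # calls R's mean() directly
print(r.sd(data))                 # R's sd()

result = r.t_test(data, mu=3.0)   # R's t.test(), returned as a dict
print(result['p.value'])

No string formatting, no output scraping - the R environment is simply
there, as live objects.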

Hence, I don't think that "non-developmental item" quite captures the
benefits of building on FOSS components.

> 
> There are obvious advantages of taking approach 3 especially if one starts 
> with a well written, high quality piece of NDI code that meets a significant 
> bulk of requirements or at a minimum provides a solid set of software 
> infrastructure components that can be used to compose the system and then 
> focus development on the applications needed to meet the requirements.  

Yes, the story related above is far from unique.

> 
> In small software projects (less than 1000 function points of capability) 
> this approach can appear to be "magical" in the sense of being truly low cost 
> and meeting the schedule (assuming these expectations were set realistically 
> to begin with).  In medium size software projects (1000-10,000 function 
> points) there is still a significant effect on cost and schedule but it may 
> not be so obvious to management as the project moves forward in time.  This 
> is because they tend to get focused on the ongoing struggle of what's being 
> developed and forget that if they had to develop the functionality that was 
> in the NDI they wouldn't be as far along as they are.  For large scale 
> (greater than 10,000 function points) it has been my experience that 
> regardless of the amount of capability one starts with in the NDI, management 
> always loses sight of that fact and becomes focused just on what is being 
> developed.  This is probably a good thing because it's the large scale 
> projects (greater than 10,000 function points) that have the highest failure 
> rate.  In my view, the use of the NDI as a starting block coupled with an 
> effective software engineering shop is what makes the large scale project 
> even feasible to begin with.
> 
> In terms of dollar savings the most dramatic effect is seen for the smaller 
> projects.  As the size of the concomitant development effort goes up, the 
> savings realized by the project team in approach 3 are not from the fact that 
> they started with an NDI but that they have an effective/efficient 
> development/test team.  This is especially true for large scale projects 
> where more than 10,000 function points of capability are being added to what 
> was already in the NDI software. 

I think that is an excellent exposition of the effects of scale, and I
agree with everything you say. My only comment is that any project with
more than 10,000 function points is just too large, and the Big IT
bosses need to learn to be content with smaller successes (but more
often). Lack of continuity amongst Big IT bosses (and big bosses in
general) may be a problem here - if they are on 3 or 5 year contracts,
the perception may be that they have only one shot at it, so it might as
well be a big one.
 
> 
> So I guess the million dollar question then is, "How many function points of 
> capability do we think the EHR will require and how much of that can be "jump 
> started" with approach 3 using NDI and how much will remain to be developed?  
> If the remaining development is less than 1000 function points then the use 
> of opensource as the NDI is "magical" in its effect.  If remaining 
> development is between 1000 to 10000 function points then the cost savings 
> effect of opensource as the NDI is less remarkable and if the amount of 
> develoment is greater than 10,000 function points the use of NDI increases 
> the likelihood of success of the project but the largest cost savings and 
> schedule savings come from having an effective software engineering process.

Yes, and my take on most of the announced EHR projects (as opposed to
EHR standards/methodologies like OpenEHR) is that they are
over-ambitious, given that there are huge social adjustments to be made
by everyone (the public, healthcare industry and healthcare
professionals), not to mention, IMHO, a number of as-yet unresolved
technical issues with respect to security, confidentiality and privacy.
Some, like the Australian HealthConnect initiative, are hastening slowly
and concentrating on small pilots, and that's a very good thing. I'm 
not sure whether the measured progress is by design or due to bureaucratic
inertia, but either way, it's a Good Thing. At least another 5 years of
pilots is required before going the whole hog and building a national
EHR.

> 
> If one accepts what I've said above then depending on the approach taken and 
> the amount of new development to be done, one can structure a rational 
> argument for the use of opensource as the NDI in the project.  The emphasis 
> is a little different for the 3 levels of scale.

Yes indeed, but I suspect that if you look at health IT project budgets
at the organisational level, a significant proportion is spent on small
and medium sized projects, and this is where the funded FOSS approach
(either as the main development or as fall-back) can be a real winner,
as you point out. Maybe in the very large projects too, but I agree
that's a much harder sell at this stage.

Tim C

