Franklin Valier wrote:
> In science this type of study only has value as to its scientifically
> agreed-upon use.  Its ability to be relied upon to draw reliable
> conclusions from the methodology has to be taken into perspective when
> reading the study.  It has value, but in science you don't take it too
> seriously.  We rely on empirical studies for serious evaluation of a
> phenomenon.  If they haven't been done, all you can say is this is all
> we have and this is all we know right now.  Not much.  I wouldn't get
> too upset about this.

I think that you are being overly dismissive of observational studies.
Controlled experiments are great, but a) they can be hard to arrange
when the thing being tested is a hospital-wide information system which
costs tens of millions of dollars to implement and b) controlled trials
can introduce their own sets of biases and limit generalisability due to
overly tight selection criteria. And how practical is it to randomise
whole hospitals to "get the computer system" or "stay with paper"?
Politically that is rather hard to do.

Certainly in the case of evaluations of implementations of hospital and
other clinical information systems it is best to use a before-and-after
study design, in which the hospital acts as its own matched control, and
the same survey instruments and methods are used before and after the
implementation of the system. It is easy to say that in retrospect, but
getting money from management to commission an expensive evaluation
study of a new information system BEFORE the system has even begun to be
installed can be a challenge, I suspect.
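
Just to sketch what the analysis of such a before-and-after design might
look like in the simplest case (a hypothetical illustration, not taken
from any of the studies under discussion): if the same survey instrument
is administered to matched respondents before and after go-live, a paired
comparison of the within-respondent change is the natural first pass.
The file name and column names below are made up for the example.

    import pandas as pd
    from scipy import stats

    # Hypothetical survey data: one row per respondent, with the same
    # instrument scored before and after the system went live.
    df = pd.read_csv("survey_scores.csv")  # columns: respondent_id, pre_score, post_score

    # Paired comparison: each respondent (and hence the hospital) acts
    # as its own control, so we test the within-pair differences.
    t_stat, p_value = stats.ttest_rel(df["pre_score"], df["post_score"])

    diff = df["post_score"] - df["pre_score"]
    print(f"mean change: {diff.mean():.2f} (sd {diff.std():.2f})")
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

In practice one would of course worry about secular trends, staff
turnover and response bias between the two survey waves, which is where
the "matched control" aspect of the design earns its keep.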

Tim C
