Tony Grivell wrote:

> I think that's a reasonable suggestion - especially since different 
> people interpret these (and related) terms differently.   I've picked 
> out a couple of 'definitions' in the form that pathologists use (and 
> in my experience this is a group who are precise and pedantic in a 
> very professional way! - and they are the generators of much of the 
> quantitative data in an EHR).  In fact, they have come from a draft 
> European standard from CEN/TC and are consistent with ISO and other 
> international bodies.  (Taken from R. Haeckel (Ed.) "Evaluation 
> Methods in Laboratory Medicine") 

Tony, can you give me the full reference to this? I'll quote it in the 
Reference Model

> "ACCURACY is the closeness of the agreement between the result of a 
> measurement and a true value of the measurand".  I.e. it depends on 
> knowledge of the TRUE value of the thing being measured, as during 
> analyser calibration, when a consistent BIAS might be noted.  It 'is 
> usually expressed numerically by statistical measures of the inverse 
> concept, INACCURACY of measurement', which is defined as "discrepancy 
> between the result of a measurement and the true value of a measurand" 
> - and which 'is usually expressed numerically as the error of 
> measurement'.  'Inaccuracy, when applied to sets of results, describes 
> a combination of random error of measurement and the systematic error 
> of measurement.' 

So this means that "+/-5%" is a statistically based measure of the 
inaccuracy of the instrument or method being used. It does not make any 
statement about an individual measured value with respect to the "true" 
value - only that, statistically, results are known to lie within a 
band 10% wide centred on the true value.
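
As a concrete illustration (my own sketch, not from any standard - the 
function names are invented), a "+/-5%" figure just defines a relative 
band around the true value:

```python
# Illustrative only: interpret a +/-5% inaccuracy figure as a band
# 10% wide centred on the true value, and check whether a reported
# result falls inside it.

def inaccuracy_band(true_value, relative_inaccuracy):
    """Return (low, high) bounds for a +/- relative inaccuracy."""
    delta = true_value * relative_inaccuracy
    return true_value - delta, true_value + delta

def within_band(result, true_value, relative_inaccuracy=0.05):
    low, high = inaccuracy_band(true_value, relative_inaccuracy)
    return low <= result <= high

# e.g. a notional glucose "true" value of 5.0 mmol/L with +/-5%
low, high = inaccuracy_band(5.0, 0.05)   # band is (4.75, 5.25)
```
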

> "PRECISION of a measurement is the closeness of agreement between 
> independent results of measurement obtained by a measurement procedure 
> under prescribed conditions".  I.e. the variation obtained with 
> repeated measurements on a single specimen.  Precision thus 'depends 
> only on the distribution of random errors of measurement. It is 
> usually expressed numerically by statistical measures of imprecision 
> of measurements'.  "IMPRECISION is the dispersion of independent 
> results of measurements obtained by a measurement procedure under 
> specified conditions".  'It is usually expressed numerically as the 
> repeatability standard deviation or reproducibility standard deviation 
> of the results of measurement.' When applied to sets of results, 
> imprecision 'depends solely on the dispersion of random error of 
> measurement and does not relate to the true value of the measurable 
> quantity'. 
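
By way of illustration (my own example, not from the draft standard), 
the repeatability standard deviation mentioned above is just the sample 
standard deviation of repeated results on one specimen:

```python
import statistics

# Hypothetical repeated measurements of a single specimen under
# repeatability conditions (same analyser, operator, run).
results = [4.98, 5.03, 5.01, 4.97, 5.02, 4.99]

mean = statistics.mean(results)
repeatability_sd = statistics.stdev(results)  # sample SD, n-1 denominator

# Imprecision is often quoted as a coefficient of variation (CV%):
cv_percent = 100 * repeatability_sd / mean
```
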

a) so what about the "definition" of precision as the number of 
significant figures in a number, i.e. the level of precision to which a 
numerical result is reported? This is logically related to the 
definition above, since there is no point reporting to a higher degree 
of precision than is actually available from the real-world measuring 
process...

b) the above definition would imply that we should report a standard 
deviation of a notional population of measurements of the same actual 
value...

c) should there be a merged definition of these concepts, as per this 
suggestion in HL7:

(quoting Gunther Schadow from CQ/MnM lists in HL7)
The NIST guide for uncertainty in measurements says that the
traditional notions of accuracy vs. precision should be superseded
by the one concept of uncertainty. So, any given measurement you
take is really a probability distribution over the measurement
domain. The probability distribution is typically described
parametrically. The NIST guide goes into quite specifics about
that and I have to say that it went a little bit past my memory.
But one of the ways they do specify their uncertainty is by
giving the mean and a standard deviation. That's often assuming
that your distribution is normal, which it often is due to the
central limit theorem.

But if it isn't, you need to know what your distribution type
and its parameters are.

In HL7 v3 we have a data type called Parametric Probability
Distribution, which is a generic type extension that works with
any base data type. In most cases we will have a PPD<PQ>.
The PPD<PQ> ends up having the properties:

   mean value
   unit
   distribution type code
   standard deviation

The distribution type code can distinguish normal, gamma, beta,
X^2, etc. The table of distribution types also summarizes how the
parameters mu and sigma relate to the specific parameters that
are usually used for each distribution type.
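
A minimal sketch of what such a generic type might look like (the field 
and code names here are my own illustration, not the normative HL7 v3 
definitions):

```python
from dataclasses import dataclass
from enum import Enum

class DistributionType(Enum):
    # A few of the codes such a distribution-type table might carry
    # (illustrative values, not the HL7 code system).
    NORMAL = "N"
    GAMMA = "G"
    BETA = "B"
    CHI_SQUARED = "X2"

@dataclass
class PPD:
    """Parametric probability distribution over a physical quantity."""
    mean: float                      # mu
    unit: str                        # e.g. "mmol/L"
    distribution: DistributionType
    standard_deviation: float        # sigma

    def interval(self, k: float = 2.0):
        """Rough mu +/- k*sigma interval; a receiver that does not
        understand self.distribution can still make use of this."""
        d = k * self.standard_deviation
        return self.mean - d, self.mean + d

glucose = PPD(5.0, "mmol/L", DistributionType.NORMAL, 0.1)
```
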

During the design of this we thought we would better use the
specific parameters for each distribution type, but those turned
out to be all derivable from mu and sigma. The advantage of sending
mean and standard deviation consistently is that even if a
receiver does not understand the distribution type, he will get
a pretty good idea about the measurement from just looking at
mu and sigma.

I would encourage anyone with an interest in these matters to
review the V3DT semantics spec and particularly the table of
distribution types. This part has not received the same amount
of review as other parts, so errors are possible.

Recently in work we are doing here, I came to appreciate the
advantage of using moments as parameters instead of mean and
standard deviation. Well, first moment is the same as mean
but second moment has advantages over standard deviation when
combining population statistics. However, second moment and
standard deviation are also easily derivable. But I understand
that one could specify higher order moments to describe a
distribution.
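
The point about moments can be illustrated like this (my own sketch): 
first and second raw moments of two populations combine by simple 
size-weighted averaging, whereas standard deviations cannot be averaged 
directly.

```python
import statistics

def raw_moments(xs):
    """First and second raw moments of a sample: E[x], E[x^2]."""
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    return n, m1, m2

def combine(stats_a, stats_b):
    """Pool two populations by size-weighting their raw moments."""
    (na, m1a, m2a), (nb, m1b, m2b) = stats_a, stats_b
    n = na + nb
    m1 = (na * m1a + nb * m1b) / n
    m2 = (na * m2a + nb * m2b) / n
    return n, m1, m2

a, b = [4.9, 5.0, 5.1], [5.2, 5.3]
n, m1, m2 = combine(raw_moments(a), raw_moments(b))
pooled_sd = (m2 - m1 * m1) ** 0.5      # population SD from moments
direct_sd = statistics.pstdev(a + b)   # same thing, computed directly
```
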

regards
-Gunther



