On 11 August, in a thread about "How to determine adequate samples", Ken
Mintz <[EMAIL PROTECTED]> wrote:


The margin of error (me) around the mean at a given confidence level is:

     me =       sd  / sqrt(n)  (68% conf)
     me = (1.96*sd) / sqrt(n)  (95% conf)
     me = (2.58*sd) / sqrt(n)  (99% conf)

  where sd is the std dev and sd/sqrt(n) is the std err (se) of the mean.
  Suppose you want to be 99% confident that the population avg is within
  +/-3% of the sample avg.  Then me = 0.03*avg.  (We choose 3%, or
  whatever, arbitrarily.)  The minimum sample size (n) is then:

     x = (2.58*sd) / (0.03*avg)
     n = x*x
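
To check my understanding of the formula, here is a quick sketch of the
same calculation in Python (the avg and sd figures below are invented
purely for illustration):

    from math import ceil, sqrt

    def min_sample_size(sd, avg, rel_error=0.03, z=2.58):
        # Smallest n such that z*sd/sqrt(n), the margin of error at the
        # chosen confidence level, is no more than rel_error*avg.
        x = (z * sd) / (rel_error * avg)
        return ceil(x * x)

    # e.g. scores averaging 60% with a std dev of 10 percentage points:
    print(min_sample_size(sd=10, avg=60))   # -> 206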


My question is: is it sensible to use this formula to do something else?

Situation:  35 'examiners' award a percentage score for the performance
of examinees.  In the course of a year each examiner will see about 500
examinees, perhaps 30 or so in a single session.

I'm using a Microsoft Access database to *enter* the scores awarded (not
to analyse them! - that's to be done in Minitab).  I want to set up a
'rule' in Access that flags when there is less than 99% certainty that a
session is within +/-3% (or some similar arbitrary cut-off) of a 'gold
standard'.

(E.g. compare - A, the session mean for a single examiner against the
grand mean of all that examiner's scores; and - B, the session mean for
that single examiner against the grand mean for the whole population of
examiners.  The basic question is: are A and B within the +/-3% bounds?
If yes, accept; if no, check and adjust.)
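
One way the rule might look, sketched in Python rather than in Access
(the session scores and the gold standard below are invented, and
treating "accept" as "the 99% confidence interval for the session mean
sits entirely inside the +/-3% band" is only my reading of the rule):

    from math import sqrt
    from statistics import mean, stdev

    def session_ok(scores, gold_standard, rel_tol=0.03, z=2.58):
        # True if the 99% confidence interval for the session mean lies
        # entirely within +/- rel_tol of the gold standard (e.g. the
        # examiner's own grand mean, or the grand mean over all examiners).
        n = len(scores)
        m = mean(scores)
        me = z * stdev(scores) / sqrt(n)    # 99% margin of error
        band = rel_tol * gold_standard
        lower, upper = m - me, m + me
        return (gold_standard - band) <= lower and upper <= (gold_standard + band)

    # e.g. one session's scores against a grand mean of 62%:
    session = [58, 61, 65, 60, 63, 59, 64, 62, 61, 60]
    print(session_ok(session, gold_standard=62))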

Help and advice appreciated.

Jonathan Robbins 

J H Robbins FRSA FRPS posting from Dorset in the UK.

