Correction procedure

2001-06-03 Thread Bekir

Dear Donald Burrill,

Thank you for your reply.
Sorry, I wrote it by mistake. My aim was "to compare groups 2, 3, 4 with
control (group 1)"; it should have been "to compare groups 2, 3, 4, 5
with control (group 1)". Anyway, you had already corrected it.

The reviewer had written to me: "Accordingly, a statistical penalty
needs to be paid in order to account for the increased risk of a Type
1 error due to multiple comparisons. The easiest way to achieve this
goal is to adjust the P value required to declare significance using
the Bonferroni correction."

1. What is the correct meaning of the last sentence? What must I do? 
As you wrote, must I find the adjusted p values or declare the
adjusted significance level alpha?

2. There are apparently exactly three groups, namely groups 1, 3, and 5,
that had the same proportions of translocation. Therefore, would it be
appropriate to compare only groups 2 and 3 with the control?
Thus, there would be two comparisons, and the p values 0.008 (0.008 x 2
= 0.016) and 0.02 (0.02 x 2 = 0.04) would be significant. Is that right?





Re: Correction procedure

2001-06-03 Thread Donald Burrill

On 3 Jun 2001, Bekir wrote, in part:

 My aim was to compare groups 2, 3, 4, 5 with control (group 1). ... 
 
 The reviewer had written to me:  "Accordingly, a statistical penalty 
 needs to be paid in order to account for the increased risk of a Type 
 1 error due to multiple comparisons.  The easiest way to achieve this
 goal is to adjust the P value required to declare significance using
 the Bonferroni correction."
 
 1.  What is the correct meaning of the last sentence?  What must I do? 
 As you wrote, must I find the adjusted p values or declare the
 adjusted significance level alpha?

Your choice:  the two ways of approaching the problem are equivalent. 
Either divide the criterion significance level (alpha) by the number 
of comparisons, as Duncan Smith recommended, and compare the p-values 
reported by your statistical routine to this adjusted value;  or 
adjust the p-values by multiplying the reported values by the number of 
comparisons.  Thus p = 0.02 > adjusted alpha = 0.0125 for one of your 
comparisons, or adjusted p = 0.08 > nominal alpha = 0.05.  If I 
understand your reviewer correctly, (s)he seems to be requesting the 
latter:  adjusting the p-value.  
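
A minimal sketch (not part of the original exchange, and in Python only 
for illustration), assuming the four planned comparisons and using the 
two p-values quoted in the thread, 0.008 and 0.02; it shows that the two 
forms of the Bonferroni adjustment lead to the same decisions:

    # Bonferroni correction, done both ways; placeholder script, not the actual analysis.
    alpha = 0.05
    k = 4                       # number of planned comparisons (groups 2-5 vs. control)
    p_values = [0.008, 0.02]    # the two p-values mentioned in the thread

    # Way 1: shrink the significance criterion and keep the raw p-values.
    adjusted_alpha = alpha / k                          # 0.0125
    decisions_1 = [p < adjusted_alpha for p in p_values]

    # Way 2: inflate the p-values (capped at 1) and keep the nominal alpha.
    adjusted_p = [min(1.0, p * k) for p in p_values]    # 0.032, 0.08
    decisions_2 = [p < alpha for p in adjusted_p]

    print(decisions_1)   # [True, False]
    print(decisions_2)   # [True, False] -- the same decisions either way

Either way, 0.008 survives the correction and 0.02 does not.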

 2.  There are apparently exactly three groups, namely groups 1, 3, and 5, 
 that had the same proportions of translocation. 

Wasn't it groups 1, 4, 5 that had the same proportions?

 Therefore, would it be appropriate to compare only groups 2 and 3 with 
 the control?  

Such a comparison may be appropriate (but see below);  but this does not 
change the situation.  Had groups 4 and 5 NOT had proportions equal to 
group 1's, you would surely have wanted to make those two comparisons also. 
The question is not how many comparisons were useful or significant, 
but how many comparisons you would have chosen to consider before you 
observed the results of this particular experiment.  By your description, 
you certainly considered AT LEAST the four comparisons mentioned in your 
first paragraph above.

 Thus, there would be two comparisons, and the p values 0.008 (0.008 x 2 
 = 0.016) and 0.02 (0.02 x 2 = 0.04) would be significant.  Is that right?

As explained above, and as Duncan Smith responded, No.
Duncan mentioned Dunnett's test.  This might indeed be appropriate for 
your design, but not for the analyses you have so far done.  Dunnett's 
test would normally follow the finding of a significant F value in a 
one-way analysis of variance (testing the formal hypothesis that the 
true (population) proportions in the five groups are all identical).  
Such an analysis could be undertaken with your data, but some persons 
(possibly including your reviewer?  I don't know) would object to 
carrying out an analysis of variance (ANOVA) with dichotomous data.
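
As a rough illustration only (this is not from the thread; the group 
sizes and success counts are invented placeholders, and only the design 
of five groups with a dichotomous outcome comes from the discussion), 
such an omnibus F test could be run on 0/1-coded data like this:

    # A sketch: one-way ANOVA on 0/1 (translocation yes/no) outcomes.
    # Group sizes and success counts are hypothetical, NOT the real data.
    import numpy as np
    from scipy import stats

    def group(n, n_success):
        """A 0/1 vector with n_success ones out of n cases."""
        return np.array([1] * n_success + [0] * (n - n_success))

    groups = [
        group(20, 2),   # group 1 (control) -- made-up counts
        group(20, 10),  # group 2
        group(20, 9),   # group 3
        group(20, 2),   # group 4
        group(20, 2),   # group 5
    ]

    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # Only after a significant F would a follow-up such as Dunnett's test be run.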

One advantage to ANOVA is the possibility of drawing conclusions more 
complex, and possibly more interesting, than the pairwise comparisons 
that you had originally envisioned.  In particular, you could test the 
contrast between Groups 2 and 3 combined and Groups 1, 4, and 5 
combined, since it seems clear that this is the only thing that is 
going on in your data.  Testing that contrast by the Scheffe' method, 
which offers experimentwise protection against the null hypotheses for 
any imaginable contrast, might be useful:  that contrast, involving all 
100 cases, is more powerfully tested than the series of pairwise 
comparisons, and may well be significant even against the conservative 
Scheffe' criterion.  
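
Continuing the illustrative sketch above (same invented counts, not the 
real data; only the contrast itself comes from the thread), the Scheffe' 
criterion for that particular contrast could be checked along these lines:

    # Scheffe test of the contrast (groups 2 + 3) vs. (groups 1 + 4 + 5).
    import numpy as np
    from scipy import stats

    def group(n, n_success):
        return np.array([1] * n_success + [0] * (n - n_success))

    groups = [group(20, 2), group(20, 10), group(20, 9), group(20, 2), group(20, 2)]
    k = len(groups)
    N = sum(len(g) for g in groups)
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])

    # Pooled within-group mean square (MSE) from the one-way ANOVA.
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
    mse = sse / (N - k)

    # Contrast: average of groups 2 and 3 minus average of groups 1, 4, and 5.
    c = np.array([-1/3, 1/2, 1/2, -1/3, -1/3])
    L = (c * means).sum()
    ss_contrast = L ** 2 / ((c ** 2 / ns).sum())

    # Scheffe criterion: significant if SS_contrast / MSE > (k - 1) * F_crit.
    f_crit = stats.f.ppf(0.95, k - 1, N - k)
    print(f"contrast = {L:.3f}, significant: {ss_contrast / mse > (k - 1) * f_crit}")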
  Whether that is useful _for_your_purposes_ is another matter entirely. 
If there is some useful meaning and interpretation to be gained in 
observing that only groups 2 and 3 differ from the control group and 
that groups 4 and 5 are indistinguishable from the control group, then 
this contrast would be useful to test formally.  If that outcome does 
not lend itself to useful interpretation (and the advance of knowledge 
in the field), you would probably be better off staying with the four 
pairwise comparisons you started with.

 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128






Re: fit-ness

2001-06-03 Thread Rich Ulrich

On Thu, 31 May 2001 12:05:24 +0100, Alexis Gatt [EMAIL PROTECTED]
wrote:

 Hi,
 
 A basic question from an MSc student in England. First of all, yes, I read
 the FAQ and I didn't find anything answering my question, which is fairly
 simple: I am trying to analyse how well several mathematical methods perform
 in modelling a scanner. So I have, for every input datum, the corresponding
 output given by the scanner and the values given by the mathematical models
 I am using.
 First, given the distribution of the errors, I can use the usual mean-StdDev

I can think of two or three meanings of 'scanner', and not one of 
them would have a simple, indisputable measure of 'error.'
 1) Some measures would be biased toward one 'method' 
or another, so a winner would be obvious.
 2) Some samples to be tested would be biased (similarly)
toward a winner by one method or another.  So you select
your winner by selecting your mix of samples.

If you have fine measures, then you can give histograms of your
results (assuming  1-dimensional, as your alternatives suggest).

Is it enough to have the picture?
What would your audience demand?  What is your need?


 if the distro is normal, or median-95th percentile otherwise. Any other
 known methods to enhance the pertinence of the analysis? Any ideas welcome.

Average squared error (giving SD) is popular.  
Average absolute error de-emphasizes the extremes.
Count of errors beyond a critical limit sometimes fills a need.

A more complicated way is to build in a cost function.
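
A small sketch (mine, not from the original post; the 'measured' and 
'predicted' arrays and the error limit of 5.0 are invented placeholders) 
of the summaries just listed:

    # Error summaries for a model vs. measured scanner output; placeholder data.
    import numpy as np

    rng = np.random.default_rng(0)
    measured = rng.normal(100.0, 5.0, size=200)             # stand-in scanner readings
    predicted = measured + rng.normal(0.5, 2.0, size=200)   # stand-in model output
    errors = predicted - measured

    rmse = np.sqrt(np.mean(errors ** 2))        # average squared error (root taken)
    mae = np.mean(np.abs(errors))               # average absolute error; softer on extremes
    n_bad = int(np.sum(np.abs(errors) > 5.0))   # count of errors beyond a critical limit

    # A simple cost function: errors beyond the limit are penalized disproportionately.
    cost = np.mean(np.where(np.abs(errors) > 5.0, 10.0 * np.abs(errors), errors ** 2))

    print(f"RMSE = {rmse:.2f}, MAE = {mae:.2f}, beyond limit: {n_bad}, cost = {cost:.2f}")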

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html

