On 29 Apr 2004 at 14:32, [EMAIL PROTECTED] wrote:
> On 19 Mar 2004 at 9:54, Phillip Good wrote:
>
> > I was merely commenting on your broad claim that MLE's are desirable
> > for all distributions. What did you mean by that?
> >
> > "Robert J. MacG. Dawson" <[EMAIL PROTECTED]> wrote:Philip Good
> > wrote:
> >
> > Alas, the desirable properties of an estimate arise for an MLE only
> > if that distribution is normal.
> >
>
> (delayed reply)
> You still didn't tell us the desiderata allowing you to
> dispose of ML estimators. We are waiting. In the meantime I have a
> question about one of your books: P. Good, Permutation Tests, 1st edition
> (in case there are more), page 58, section 4.4.1, Missing combinations.
>
> There you do a somewhat mystical bootstrap without explaining
> why some more usual variant doesn't work.
>
I got one private reply from Good, but it still did not answer the
question about the methods in his book. So I give more details here,
so that others can help me judge the reasonableness of this method.
From page 58:
The problem concerns the effect of age on the mediation of the immune
response. What was measured was the anti-SRBC response of spleen cells
derived from C57BL mice of various ages. In one set of trials, the
cells were derived entirely from the spleens of young mice, in a
second set of trials, they came from the spleens of old mice, and in
a third they came from a mixture (50%-50%) of the two.
For young mice we have:
x_{2,1,k} = \mu + \alpha + e_{2,1,k},
for old mice:
x_{1,2,k} = \mu - \alpha + e_{1,2,k},
and for the mixture with proportion p from young and (1-p) from old,
where actually p = 0.5:
x_{2,2,k} = p(\mu + \alpha) + (1-p)(\mu - \alpha) - \gamma + e_{2,2,k}
          = \mu + (2p-1)\alpha - \gamma + e_{2,2,k}
(the book contains an algebraic error here, corrected above),
where \gamma is an interaction parameter, and the interest is in
\gamma. It is also known that the distribution of the errors differs
between the groups, so a residuals-based bootstrap is no good.
It then seems natural to seek a simple estimator of \gamma and
bootstrap that (by a stratified bootstrap). Such a natural estimator is
\hat{\gamma} = p * mean{x_{2,1,.}} + (1-p) * mean{x_{1,2,.}} - mean{x_{2,2,.}}
(with p = 0.5 the first two weights are equal), which is \gamma plus
some noise. This can then be bootstrapped.
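Substituting the model equations confirms that such a mean-based
estimator is unbiased for \gamma whatever \mu is (a one-line check;
the weight p goes with the young-mouse mean, though for p = 0.5 the
ordering of the two weights is immaterial):

```latex
E[\hat{\gamma}]
  = p(\mu + \alpha) + (1-p)(\mu - \alpha)
    - \bigl(\mu + (2p-1)\alpha - \gamma\bigr)
  = \bigl(\mu + (2p-1)\alpha\bigr) - \bigl(\mu + (2p-1)\alpha\bigr) + \gamma
  = \gamma .
```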
But now Good goes on to say that this is not possible because 1) we
cannot be sure that \mu = 0 (which seems irrelevant, since any value
of \mu cancels in the estimator above), or 2) the groups do not have
the same number of elements, which also seems irrelevant.
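To back up the point about objection 1, here is a quick simulation (my
own sketch, not from the book; all parameter values and error scales
are arbitrary, chosen only for illustration) showing that the
mean-based estimator recovers \gamma for wildly different values of
\mu:

```python
# Sketch (not from the book): simulate the three groups and check that
# the value of mu does not affect the mean-based estimator of gamma.
# ALPHA, GAMMA, error scales and N are arbitrary illustration values.
import random

random.seed(1)
ALPHA, GAMMA, P, N = 2000.0, 1500.0, 0.5, 20000

def gamma_hat(mu):
    # Different error scales per group, as in the problem description.
    young = [mu + ALPHA + random.gauss(0, 100) for _ in range(N)]   # x_{2,1,k}
    old   = [mu - ALPHA + random.gauss(0, 300) for _ in range(N)]   # x_{1,2,k}
    mix   = [mu + (2*P - 1)*ALPHA - GAMMA + random.gauss(0, 200)
             for _ in range(N)]                                      # x_{2,2,k}
    mean = lambda xs: sum(xs) / len(xs)
    return P*mean(young) + (1 - P)*mean(old) - mean(mix)

# gamma_hat stays close to GAMMA = 1500 regardless of mu:
print(gamma_hat(0), gamma_hat(5000.0), gamma_hat(1e6))
```

Whatever \mu one plugs in, the estimate differs from \gamma only by
sampling noise, which is exactly why objection 1 seems irrelevant.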
So he proposes to use the estimator above, but without the "mean of"
part: one observation is chosen at random from each group, and that
is what is bootstrapped. This is a strange, very inefficient
estimator, and bootstrapping an inefficient estimator does not seem
reasonable. The reasons Good gives for choosing his strange estimator
do not seem valid either.
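To put a rough number on the inefficiency, a small simulation (again
my own hypothetical sketch, with made-up parameters and a common error
scale for simplicity) compares the sampling variance of the mean-based
estimator with the one-observation-per-group estimator, using the
group sizes from the data below (5, 4, 3):

```python
# Sketch: compare the spread of the mean-based estimator with the
# one-observation-per-group estimator. Setup and parameter values are
# hypothetical, for illustration only.
import random
import statistics

random.seed(2)
MU, ALPHA, GAMMA, P = 5000.0, 2000.0, 1500.0, 0.5

def draw_groups(n_y=5, n_o=4, n_m=3):
    # Group sizes 5/4/3 mirror the young/old/mixture samples below.
    young = [MU + ALPHA + random.gauss(0, 1000) for _ in range(n_y)]
    old   = [MU - ALPHA + random.gauss(0, 1000) for _ in range(n_o)]
    mix   = [MU + (2*P - 1)*ALPHA - GAMMA + random.gauss(0, 1000)
             for _ in range(n_m)]
    return young, old, mix

def mean_based(y, o, m):
    mean = lambda xs: sum(xs) / len(xs)
    return P*mean(y) + (1 - P)*mean(o) - mean(m)

def single_obs(y, o, m):
    # One observation chosen at random from each group.
    return P*random.choice(y) + (1 - P)*random.choice(o) - random.choice(m)

reps = [draw_groups() for _ in range(5000)]
v_mean   = statistics.variance([mean_based(*g) for g in reps])
v_single = statistics.variance([single_obs(*g) for g in reps])
print(v_single / v_mean)   # substantially greater than 1
```

Under this toy setup the single-observation estimator has roughly
three times the variance of the mean-based one, which is what
"inefficient" means here.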
So the questions are: will Good explain to us the reasons for the
strange choices in his published book, where no reason is given for
them? Or will someone else comment on my criticism here?
Kjetil Halvorsen
> Code from R: entering the data, doing a usual stratified (by the
> three groups) bootstrap, and then your version, based on the last
> equation on page 59.
>
> (Using the R boot package)
>
> > Good <- data.frame(resp=scan(),
> group=factor(c(rep("Y",5),rep("O",4),
> + rep("M", 3))))
> 1: 5640 5120 5780 4430 7230
> 6: 1150 2520 900 50
> 10: 7100
> 11: 11020
> 12: 13065
> 13:
> Read 12 items
> > summary(Good)
> resp group
> Min. : 50 M:3
> 1st Qu.: 2178 O:4
> Median : 5380 Y:5
> Mean : 5334
> 3rd Qu.: 7133
> Max. :13065
> # usual bootstrap:
> > boot.Good1 <- function(data, ind) {
> + rs <- data[ind,1]
> + gr <- data[ind,2]
> + means <- tapply(rs, gr, mean)
> + gamma <- 0.5*means[2]+0.5*means[3]-means[1]
> + return(gamma) }
> > boot1 <- boot(Good, boot.Good1, R=999, strata=Good$group)
> > boot.ci(boot1)
> BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
> Based on 999 bootstrap replicates
>
> CALL :
> boot.ci(boot.out = boot1)
>
> Intervals :
> Level Normal Basic
> 95% ( -9891, -4150 ) (-10233, -4456 )
>
> Level Percentile BCa
> 95% (-9539, -3762 ) (-9338, -3496 )
> Calculations and Intervals on Original Scale
> Warning message:
> Bootstrap variances needed for studentized intervals in:
> boot.ci(boot1)
> > boot.Good2 <- function(data, ind) {
> + x22 <- data[ind,1][10]
> + x21 <- data[ind,1][1]
> + x12 <- data[ind,1][6]
> + gamma <- 0.5*x12 + 0.5*x21 - x22
> + return(gamma) }
> # Good's bootstrap:
> > boot2 <- boot(Good, boot.Good2, R=999, strata=Good$group)
> > boot.ci(boot2)
> BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
> Based on 999 bootstrap replicates
>
> CALL :
> boot.ci(boot.out = boot2)
>
> Intervals :
> Level Normal Basic
> 95% (-5491, 4764 ) (-4500, 3070 )
>
> Level Percentile BCa
> 95% (-10480, -2910 ) ( -8010, -2225 )
> Calculations and Intervals on Original Scale
> Warning : BCa Intervals used Extreme Quantiles
> Some BCa intervals may be unstable
> Warning messages:
> 1: Bootstrap variances needed for studentized intervals in:
> boot.ci(boot2)
> 2: Extreme Order Statistics used as Endpoints in: norm.inter(t,
> adj.alpha)
> >
>
> Note that the bootstrap confidence intervals are rather different. Can
> you justify your rather strange bootstrap? It is not justified in the
> book. Or explain why the more usual one is no good for this example
> (not done in the book).
>
> Cheers,
>
> Kjetil Halvorsen
>
>
> > Now, why would you say that? Until you say what "the" desirable
> > properties of an estimate are I certainly can't confute your claim,
> > but I am rather confident that if you *do* I can either find a
> > counterexample or reassure myself that you desire some rather odd
> > things from estimates that I needn't trouble myself about.
> >
> > So let's see your desiderata and we'll get this cleared up.
> >
> > -Robert
> > .
> > .
> > =================================================================
> > Instructions for joining and leaving this list, remarks about the
> > problem of INAPPROPRIATE MESSAGES, and archives are available at: .
> > http://jse.stat.ncsu.edu/ .
> > =================================================================
> >
> >
> > Phillip Good
> > http.ms//www.statistician.usa
> > "Never trust anything that can think for itself if you can't see
> > where it keeps its brain." JKR
> >
>
>
>