Frank and all,
The point you were looking for was on a page linked from the
referenced page - I apologize for the confusion. Please take a look at
the last two paragraphs here:
http://people.revoledu.com/kardi/tutorial/Bootstrap/examples.htm
Possibly it's my ignorance, or maybe it's yours, but you missed the
important point again. It is that you just don't estimate the mean, CI,
or variance on PK profile data! It would be like trying to estimate the
mean, CI, and variance of a "Toccata_&_Fugue_in_D_minor.wav" file. What
for? The point is in the music! Would the mean or CI or variance tell
you anything about that? Besides, everybody knows the variance (or
variability?) is there and can estimate it without spending time on
calculations.
What I am trying to do is comparable to compressing a wave into an mp3:
to predict the wave using as few data points as possible. I have a
bunch of similar waves, and I'm trying to find a common equation to
predict them all. I am *not* looking for the variance of the mean!
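To make that concrete, here is a minimal R sketch of the idea; the
single-peak curve, the sampling times, and all the numbers are invented
for illustration - my real model and data are different:

## Fit one common equation to several similar "waves" at once.
## The Bateman-type curve and all constants here are made up.
set.seed(1)
times <- c(0.5, 1, 2, 4, 6, 8, 12)            # sampling times (h)
profiles <- replicate(10,                     # 10 similar profiles
  20 * (exp(-0.3 * times) - exp(-1.5 * times)) * runif(1, 0.8, 1.2) +
    rnorm(length(times), sd = 0.3))
dat <- data.frame(time = rep(times, 10), conc = as.vector(profiles))

## One shared set of parameters that "predicts them all"
fit <- nls(conc ~ A * (exp(-ke * time) - exp(-ka * time)),
           data = dat, start = list(A = 15, ke = 0.2, ka = 1))
coef(fit)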
I could be wrong (though it seems less and less likely), but you keep
talking about the same irrelevant parameters (CI, variance) on and on.
Well, yes - we are at a standstill, but not because of Davison &
Hinkley's book. I could try reading it, but as I stated above, it is
not even "remotely related" to what I am trying to do, so I'll skip it
- life is too short.
Nevertheless, I thank you (all) for the relevant criticism of the
procedure (on the points where it was relevant). I plan to use this
methodology further, and it was good to find out that it withstood your
criticism. I will look into the penalized methods, though.
--
Michal J. Figurski
Frank E Harrell Jr wrote:
Michal Figurski wrote:
Tim,
If I understand correctly, you are saying that one can't improve on
estimating a mean by bootstrapping and summarizing the means of many
such steps. As far as I understand (again), you're saying that this
way one can only add bias without any improvement...
Well, this contradicts some guides to the bootstrap that I found on the
web (I did my homework), for example this one:
http://people.revoledu.com/kardi/tutorial/Bootstrap/Lyra/Bootstrap Statistic Mean.htm
Where on that web site does it state anything that is even remotely
related to your point? It shows how to use the bootstrap to estimate
the bias, but it does not show that the bias is important (it isn't:
the simulation is from a normal distribution and the sample mean is
perfectly unbiased; you are just seeing sampling error in the bias
estimate).
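A short simulation makes that concrete (a sketch, not code from that
tutorial):

## The sample mean is unbiased; the bootstrap bias estimate is pure noise.
set.seed(1)
x <- rnorm(30)                                # simulate from a normal
theta.hat <- mean(x)
boot.means <- replicate(2000, mean(sample(x, replace = TRUE)))
mean(boot.means) - theta.hat                  # "bias": ~0, just sampling error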
It is all confusing, guys... Somebody once said that there are as many
opinions on a topic as there are statisticians...
Also, translating your statements into the example of the hammer and
the rock: you are saying that one cannot use a hammer to break rocks
because it was created to drive nails.
With all due respect, despite my limited knowledge, I do not agree.
The big point is that the mean, standard error, or confidence intervals
of the data themselves are *meaningless* in a pharmacokinetic dataset.
These data are time series of a highly variable quantity that is known
to display a peak (or two, in the case of Pawinski's paper). It is as
if you tried to calculate the mean of a chromatogram (an example for
the chemists, sorry).
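A quick illustration of why (a sketch with invented numbers): when each
profile peaks at a slightly different time, the pointwise mean flattens
and broadens the peak, so it no longer resembles any real profile.

## Profiles that peak at different times; their mean smears out the peak.
set.seed(1)
t <- seq(0, 12, by = 0.25)
## 20 subjects whose peaks occur between 1 and 3 h
profiles <- sapply(runif(20, 1, 3),
                   function(tmax) 100 * dgamma(t, shape = 4, rate = 3 / tmax))
range(apply(profiles, 2, max))   # heights of the individual peaks
max(rowMeans(profiles))          # the "mean profile" peak is lower and wider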
Nevertheless, I thank all of you experts for your insight and advice.
In the end, I learned a lot, though I keep my initial view.
Summarizing your criticism of the procedure described in Pawinski's
paper:
If you think that you can learn statistics easily, when I would have a
devil of a time learning chemistry, and if you are not willing to read,
for example, the Davison and Hinkley bootstrap text, I guess we are at
a standstill.
Frank Harrell
- Some of you say that this isn't the bootstrap at all. On terminology
I totally defer to you, because I know too little. Would anyone suggest
a name?
- Most of you say that this procedure is not the best one and that
there are better ways. I will definitely do my homework on penalized
regression, though none of you has actually discredited this
methodology. Therefore, though possibly not optimal, it remains valid.
- The criticism of "predictive performance" is that one also has to
take into account other important quantities, like bias, variance,
etc. Fortunately, I did that in my work, using RMSE and log residuals
from the validation process (sketched below). I just observed that
models with relatively small RMSE and log residuals (compared to other
models) usually possess good predictive performance, and vice versa.
Predictive performance also has a great advantage over RMSE or
variance or anything else suggested here: it is easily understood by
non-statisticians. I don't think it is /too simple/ in Einstein's
terms; it's just simple.
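For clarity, the validation summaries I mean look roughly like this;
obs and pred are stand-ins for the observed and model-predicted values,
and the numbers are simulated:

## Sketch of the validation summaries (obs = observed, pred = predicted)
set.seed(1)
obs  <- rlnorm(25, meanlog = 4, sdlog = 0.3)
pred <- obs * rlnorm(25, sdlog = 0.15)        # a model with ~15% log-scale error
rmse      <- sqrt(mean((pred - obs)^2))
log.resid <- log(pred / obs)                  # log residuals
c(RMSE = rmse, mean.log.resid = mean(log.resid))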
Kind regards,
--
Michal J. Figurski
Tim Hesterberg wrote:
I'll address the question of whether you can use the bootstrap to
improve estimates, and whether you can use the bootstrap to "virtually
increase the size of the sample".
Short answer - no, with some exceptions (bumping / Random Forests).
Longer answer:
Suppose you have data (x1, ..., xn) and a statistic ThetaHat, that you
take a number of bootstrap samples (all of size n), and that you let
ThetaHatBar be the average of the bootstrap statistics from those
samples.
Is ThetaHatBar better than ThetaHat? Usually not. Usually it is worse.
You have not collected any new data; you are just using the existing
data in a different way, one that is usually harmful:
* If the statistic is the sample mean, all this does is add some noise
to the estimate.
* If the statistic is nonlinear, this gives an estimate that has
roughly double the bias, without improving the variance.
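Both claims are easy to check in R (a sketch):

## Averaging bootstrap statistics: noise for the mean, doubled bias otherwise.
set.seed(1)
x <- rexp(30)                                 # true mean 1
B <- 5000
theta.hat <- mean(x)
theta.bar <- mean(replicate(B, mean(sample(x, replace = TRUE))))
c(theta.hat, theta.bar)                       # theta.bar = theta.hat + noise

g.hat <- exp(mean(x))                         # a nonlinear statistic, biased up
g.bar <- mean(replicate(B, exp(mean(sample(x, replace = TRUE)))))
c(g.hat, g.bar)                               # g.bar has roughly twice the bias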
What are the exceptions? The prime example is tree models (random
forests): taking bootstrap averages helps smooth out the
discontinuities in tree models. For a simple example, suppose that a
simple linear regression model really holds:
y = beta x + epsilon
but that you fit a tree model; the tree model's predictions are a step
function. If you bootstrap the data, the boundaries of the step
function will differ from one sample to another, so averaging across
the bootstrap samples smears out the steps, getting closer to the
smooth linear relationship.
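Here is a sketch of that smoothing effect, using rpart as the tree
fitter (any tree package would do; all constants are invented):

## Bagging a regression tree when the truth is linear: y = 2x + noise.
library(rpart)
set.seed(1)
n <- 200
dat <- data.frame(x = runif(n, 0, 10))
dat$y <- 2 * dat$x + rnorm(n)
grid <- data.frame(x = seq(0, 10, by = 0.1))

one.tree <- predict(rpart(y ~ x, data = dat), grid)     # a step function
bagged <- rowMeans(replicate(200, {                     # average of 200 trees
  b <- dat[sample(n, replace = TRUE), ]
  predict(rpart(y ~ x, data = b), grid)
}))
## Mean squared distance from the true line 2x: bagging is much closer.
c(single = mean((one.tree - 2 * grid$x)^2),
  bagged = mean((bagged - 2 * grid$x)^2))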
Aside from such exceptions, the bootstrap is used for inference
(bias, standard error, confidence intervals), not improving on
ThetaHat.
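Those standard uses look like this (a sketch with the 'boot' package
and an arbitrary statistic):

## Bootstrap inference: bias, standard error, confidence interval.
library(boot)
set.seed(1)
x <- rexp(30)
stat <- function(d, i) exp(mean(d[i]))        # some nonlinear statistic
b <- boot(x, stat, R = 2000)
b                                             # prints bias and std. error
boot.ci(b, type = "perc")                     # percentile confidence interval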
Tim Hesterberg
Hi Doran,
Maybe I am wrong, but I think the bootstrap is a general resampling
method which can be used for different purposes... Usually it works
well when you do not have a representative sample set (maybe with a
limited number of samples). Therefore, I side with Michal...
P.S. Overfitting, in my opinion, describes the case when you get a
model which is quite specific to the training dataset but cannot be
generalized to new samples...
Thanks,
--Jerry
2008/7/21 Doran, Harold <[EMAIL PROTECTED]>:
I used the bootstrap to virtually increase the size of my
dataset; it should result in estimates closer to those from
the population - isn't that the purpose of the bootstrap?
No, not really. The bootstrap is a resampling method for variance
estimation. It is often used when there is no easy way, or no
closed-form expression, for estimating the sampling variance of a
statistic.
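For example, the sampling variance of the median has no simple closed
form, but the bootstrap estimates it in a few lines (a sketch):

## Bootstrap standard error of the median (no closed form needed).
set.seed(1)
x <- rlnorm(40)
meds <- replicate(2000, median(sample(x, replace = TRUE)))
sd(meds)         # bootstrap estimate of the median's standard error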
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.