Mike,
Thanks. I just got the Steyerberg et al., 2001 article and will give it a squiz.
Tony
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 10, 2004 11:22 AM
To: [EMAIL PROTECTED]
Subject: Re: [edstat] Split half designs
A simulation paper by Steyerberg a few years ago showed that the split
half approach is probably too conservative. You're better off estimating
an a priori model and then using the bootstrap for validation. If you've
"always" had success before wiht the split half method, I'd say you've
been a very lucky fellow till now :-)
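For concreteness, here is a minimal sketch of the bootstrap-validation idea (optimism-corrected performance in the style of the Steyerberg/Harrell work): fit the prespecified model on the full sample, then use bootstrap resamples to estimate how optimistic the apparent fit is. The data, model, and metric below are all hypothetical illustrations, assuming numpy and scikit-learn are available; it is not the exact procedure from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical data: 500 patients, 5 predictors, binary outcome.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))).astype(int)

def fit_auc(X_fit, y_fit, X_eval, y_eval):
    """Fit the prespecified model on one sample, return AUC on another."""
    m = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return roc_auc_score(y_eval, m.predict_proba(X_eval)[:, 1])

# Apparent performance: the model evaluated on the data it was fit to.
apparent = fit_auc(X, y, X, y)

# Bootstrap estimate of optimism: refit in each resample, and compare
# performance in the resample with performance back on the original data.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)        # resample rows with replacement
    Xb, yb = X[idx], y[idx]
    optimism.append(fit_auc(Xb, yb, Xb, yb) - fit_auc(Xb, yb, X, y))

corrected = apparent - np.mean(optimism)
print(f"apparent AUC {apparent:.3f}, optimism-corrected AUC {corrected:.3f}")
```

Unlike a split-half design, every observation is used both for fitting and for validation, which is why the bootstrap tends to give more stable (and less conservative) estimates in modest samples.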
Mike Babyak
Tony Baglioni <[EMAIL PROTECTED]> wrote:
>
> I have a sample of 2590 patients that I have randomly divided into two
> groups - one for exploratory work and one for validation. When I check the
> randomization process by comparing the groups on 15 predictor variables
> there are no significant differences. However, when I develop a model on
> one split half and attempt to validate it on the second split half, the
> results are abysmal. I have used this same process on other models and
> they've always validated.
>
> When the samples are not significantly different on any of the predictor
> variables, what would cause a model to fail to validate? Whilst I know it
> is not appropriate, I've re-randomized the split-halves several times with
> the same results.
>
> Any help much appreciated.
>
> Tony
>
> _______________________
>
> The Epsilon Group, LLC
> 1410 Sachem Place
> Charlottesville, VA 22901 USA
> 434.975.0097 x302
> 434.975.0477 (fax)
> [EMAIL PROTECTED]
> www.epsilongroup.com
>
>
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
http://jse.stat.ncsu.edu/
=================================================================
