On 10 Apr 2004 at 9:37, Roger Levy wrote:

> Richard Ulrich <[EMAIL PROTECTED]> wrote in message

.
.
.
> Each of my covariates is three-valued.  So the situation for which ML
> and exact logistic regression were giving me substantially different
> results was with a half-dozen covariates, i.e. 3^6=729 possible
> covariate vectors, and 300 datapoints, therefore the covariate space
> was sparsely populated.  I was not including any interaction terms,
> and in most cases each datapoint had a unique set of predictor values,
> so there were only seven parameters in my model and overfitting is
> almost certainly not an issue.
> 
> So to restate my confusion, what I don't understand is the technical
> reason why asymptotic ML estimates for parameter confidence intervals

That depends on the asymptotics you use. A direct normal approximation 
to the distribution of the ML estimator can be poor, but you can 
instead use a chi-square approximation to the difference in -2 log 
likelihood (the deviance). With modern, fast computers that is 
practicable via likelihood profiling.

Likelihood profiling is implemented in R, for instance.
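
As a rough sketch (the data here are simulated and the variable names 
are made up, just to echo the setup you describe), you can compare 
Wald intervals against profile-likelihood intervals for a logistic 
regression: confint() on a glm fit profiles the likelihood, while 
confint.default() uses the normal approximation to the MLE.

    ## Simulated stand-in for the setup above: 300 observations,
    ## six three-valued covariates treated as numeric (7 parameters).
    library(MASS)   # supplies profile/confint methods for glm fits
    set.seed(1)
    n <- 300
    mydata <- data.frame(replicate(6, sample(0:2, n, replace = TRUE)))
    names(mydata) <- paste0("x", 1:6)
    mydata$y <- rbinom(n, 1, 0.3)  # outcome simulated only to make this run

    fit <- glm(y ~ ., family = binomial, data = mydata)

    confint.default(fit)  # Wald intervals: normal approximation to the MLE
    confint(fit)          # profile intervals: chi-square approximation to
                          # the -2 log likelihood (deviance) difference

In a sparsely populated covariate space the two sets of intervals can 
differ noticeably, and the profile intervals preserve any asymmetry 
in the likelihood rather than forcing it to be symmetric.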

Kjetil Halvorsen

> and p-values would be unreliable in such a situation, since sample
> size is relatively large in absolute terms.
> 
> Many thanks for the help.
> 
> Best,
> 
> Roger


=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
                  http://jse.stat.ncsu.edu/
=================================================================
