The following is a code snippet from a power simulation
program that I'm using:
 
estbeta <- fixef(fitmodel)                      # fixed-effect estimates
sdebeta <- sqrt(diag(vcov(fitmodel)))           # their standard errors
for (l in 1:betasize) {
  # confidence bound used to check whether the interval traps zero
  cibeta <- estbeta[l] - sgnbeta[l] * z1score * sdebeta[l]
  # count this replicate toward power when the interval excludes zero
  if (beta[l] * cibeta > 0) powaprox[[l]] <- powaprox[[l]] + 1
  sdepower[l, iter] <- as.numeric(sdebeta[l])   # store the SE for this iteration
}
 
Here estbeta recovers the fixed effects from a model fitted using lmer,
and beta, defined elsewhere, is a user-specified input that relates the
data generated in the simulation to an outcome.
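
For reference, those first two lines are just the standard lme4
extractors; for example, on the package's built-in sleepstudy data
(not my simulated data):

library(lme4)
fitmodel <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
estbeta  <- fixef(fitmodel)              # named vector of fixed-effect estimates
sdebeta  <- sqrt(diag(vcov(fitmodel)))   # their standard errors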
So it seems pretty clear that the if(beta[l]*cibeta>0) line is a clever
test of whether the confidence interval traps 0.  My question is: why
use beta[l]*cibeta>0 rather than estbeta[l]*cibeta>0?  Is that because,
in the long run, the model parameter estimates tend toward the betas
specified by the user?  In other words, what really matters here is the
standard errors, right?
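
To make the question concrete, here is a toy version of the test with
made-up numbers, assuming sgnbeta[l] is just sign(beta[l]) and z1score
is the usual normal critical value (both are defined elsewhere in my
program, so these are only assumptions for illustration):

beta_l    <- 0.5                    # true, user-specified coefficient
estbeta_l <- 0.62                   # estimate from one simulated fit
sdebeta_l <- 0.20                   # its standard error
z1score   <- qnorm(0.975)
cibeta    <- estbeta_l - sign(beta_l) * z1score * sdebeta_l
beta_l    * cibeta > 0              # the test as written in my snippet
estbeta_l * cibeta > 0              # the alternative I am asking about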
