Hello everyone, 

I need to pick your brains! I am running a mixed-effects
logistic regression model (glmer) on some binomial data, where the
dependent variable is coded as either 0 (failure) or 1 (success). I have
two factors, A and B, each with two levels.

I have applied
contr.sum(2) coding to both factors because I want the output to be
interpreted as in an ANOVA - so the intercept should represent the grand
mean (in logits) and the main effects should represent the deviations
from the grand mean (correct me if I am wrong).
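As a sanity check on that interpretation: contr.sum(2) assigns the two levels of a factor the codes +1 and -1, so each coded column (and the interaction column) sums to zero across the four cells; averaging the model's prediction over the cells therefore cancels all the coded terms and leaves just the intercept. A minimal sketch of that arithmetic (in Python purely for illustration - the codes themselves are what R's contr.sum(2) produces):

```python
# contr.sum(2) maps the two levels of each factor to +1 and -1,
# so in a model answer ~ A*B each cell gets codes (a, b, a*b)
# with a, b in {+1, -1}.
codes = [(a, b, a * b) for a in (+1, -1) for b in (-1, +1)]

# Every coded column sums to zero across the four cells, which is
# why averaging the linear predictor over the cells leaves only
# the intercept - the "grand mean" interpretation.
col_sums = [sum(cell[i] for cell in codes) for i in range(3)]
print(col_sums)  # [0, 0, 0]
```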

To double-check that I
am using the correct contrast coding, I run lrm models alongside the
glmer models on the same data - because the lrm model does not have random
effects, the intercept it estimates should correspond to the actual
grand mean, am I right? I have 768 successes and 300 failures, so the total
number of answers is 1068, which should give a grand mean in logits of
.94.
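That .94 is simply the log odds of the pooled counts; a quick sketch of the arithmetic (shown in Python for illustration, though the computation is the same in any language):

```python
import math

successes = 768
failures = 300

# Grand mean in logits = log odds of the pooled data
grand_mean_logit = math.log(successes / failures)
print(round(grand_mean_logit, 2))  # 0.94
```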

When I run the lrm model lrm(answer ~ A*B), I do NOT get the grand
mean of .94 as the intercept, but something much higher: 1.23. When I
run lrm models with only one predictor, I get an intercept of .94 (the
grand mean) when I include only B, and one of 1.23 when I include only A
(the same intercept as when I include both predictors and their
interaction). I do not understand why.

Below I give the data
breakdown by condition - the numerator gives the number of successes and
the denominator the total number of answers in the condition:

            A level 1   A level 2
B level 1   135/263     250/272
B level 2   139/266     244/267
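One thing worth checking with these counts: with sum coding and the interaction included, the intercept of a logistic model is the unweighted average of the four cell logits, which is not the same as the logit of the pooled proportion unless the cell logits happen to balance out. A sketch of that comparison (Python, purely to show the arithmetic; the counts are the ones tabulated above):

```python
import math

# Cells: (successes, total), taken from the table above
cells = {
    ("A1", "B1"): (135, 263),
    ("A2", "B1"): (250, 272),
    ("A1", "B2"): (139, 266),
    ("A2", "B2"): (244, 267),
}

# Logit (log odds) of each cell's success proportion
cell_logits = {k: math.log(s / (n - s)) for k, (s, n) in cells.items()}

# Unweighted mean of the four cell logits
mean_cell_logit = sum(cell_logits.values()) / len(cell_logits)

# Logit of the pooled (grand) proportion
total_s = sum(s for s, n in cells.values())
total_n = sum(n for s, n in cells.values())
pooled_logit = math.log(total_s / (total_n - total_s))

print(round(mean_cell_logit, 2))  # 1.23
print(round(pooled_logit, 2))     # 0.94
```

Under these counts the two quantities come out to 1.23 and .94 respectively, i.e. the same pair of values reported above.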


At first I thought that it might have to do with the fact that the data
set is not fully balanced. However, I get the same thing when I use new
versions of A and B that have been 'scaled as numeric' (I believe this
scaling addresses the imbalance).

Here is the distribution of the
number of observations across conditions:

            A level 1   A level 2
B level 1   263         272
B level 2   266         267

Does anyone have any clue as to why
this is happening? I need to understand what I am doing, because later I
need to analyse (actually re-analyse) some more complex data. I have
only recently started using contrasts (partly as a result of reading
'Parsimonious mixed models' by Bates et al.) and I have found that using
(appropriate) contrasts makes a big difference to how easily a model
converges, at least on my data.

Thanks!

Maria Nella
Carminati

  
