Hello,

Setup: I have data with ~10K observations drawn from 16 different
laboratories ("labs"). I am interested in how a continuous predictor,
X, affects my dependent variable, Y, but there are big differences in
both the mean and the variance of Y across labs.

I first run this model, which controls for mean differences between labs
(but not variance differences):

lm(Y ~ X + as.factor(labs))

The effect of X is highly significant (p < .00001).

I then fit these models using lme4:

lmer(Y ~ X + (1 | labs))  # controls for mean differences between labs
lmer(Y ~ X + (X | labs))  # also allows for slope heterogeneity between labs

For both of these latter models, the effect of X is non-significant (|t| <
1.5).
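(A sketch of one quick check, using simulated toy data in place of my real
data frame: slope heterogeneity can be tested inside the fixed-effects
framework itself by comparing the lab-dummy model against a lab-by-X
interaction model.)

```r
## Toy data standing in for the real thing: 16 labs, lab-specific
## intercepts and slopes, ~800 rows.
set.seed(2)
labs <- factor(rep(1:16, each = 50))
X    <- rnorm(800)
Y    <- rnorm(16)[labs] * X + rnorm(16)[labs] + rnorm(800)

f0 <- lm(Y ~ X + labs)   # common slope, lab-specific intercepts
f1 <- lm(Y ~ X * labs)   # lab-specific slopes as well
anova(f0, f1)            # a small p-value => slopes differ across labs
```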

What might this be telling me about my data? My guess is that the
random-slope model, (X | labs), is telling me that the slopes differ
substantially across labs, and that the average slope is not significant
against the backdrop of 16 slopes that vary quite a bit among themselves.
Is that right? (Still, the enormous jump in p-value is surprising!) What
I'm less clear on is why the random-intercept model, (1 | labs), is so
discrepant from the fixed-effects model that simply controls for the lab
means.
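(A simulated sketch of the mechanism I suspect, not my actual data: if each
lab has its own slope, drawn with mean zero, there is no population-level X
effect, yet the pooled lm with lab dummies can still report a tiny p-value,
because it treats all ~10K rows as independent evidence about one slope. A
two-stage analysis, one slope per lab and then a t-test on the 16 slopes,
is a rough base-R stand-in for what the random-slope lmer fit does.)

```r
set.seed(1)
n_labs <- 16
n_per  <- 625                                 # ~10K rows total
labs   <- factor(rep(seq_len(n_labs), each = n_per))
b_lab  <- rnorm(n_labs, mean = 0, sd = 0.5)   # lab-specific slopes, mean 0
X      <- rnorm(n_labs * n_per)
Y      <- b_lab[labs] * X + rnorm(n_labs * n_per)

## Pooled model with lab dummies: the SE reflects 10K rows, so it is
## far too small when the slopes themselves vary by lab.
pooled <- summary(lm(Y ~ X + labs))$coefficients["X", ]

## Two-stage stand-in for (X | labs): one slope per lab, then test
## whether the 16 slopes differ from zero on average.
per_lab <- sapply(levels(labs), function(l)
  coef(lm(Y ~ X, subset = labs == l))["X"])
two_stage <- t.test(per_lab)

pooled["Pr(>|t|)"]   # typically tiny, despite no true average effect
two_stage$p.value    # typically unremarkable: only 16 slopes' worth of info
```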

Any help in interpreting these data would be appreciated. When I first saw
the data, I jumped for joy, but now I'm muddled and uncertain if I'm
overlooking something. Is there still room for optimism (with respect to X
affecting Y)?

JJ


______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.