I fit the following model:

    AC <- glmer(Accuracy ~ RT*Group + (1+RT|Group:subject) + (1+RT|Group:Trial),
                data = da, family = binomial, verbose = TRUE)


Here I predict Accuracy from RT, Group (coded 0 or 1), and the interaction of
RT and Group (the fixed effects). I also estimate random intercepts and random
RT slopes for subjects and for trials. These random effects are nested within
Group: the subject random effects are estimated separately for group 0 and for
group 1, and likewise the trial random effects.
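
To make the nesting concrete, this is roughly the grouping factor I understand
`Group:subject` to create (a minimal sketch, assuming `da` has factor columns
Group, subject, and Trial):

    ## Sketch: Group:subject groups on the Group/subject combination, so each
    ## subject gets its own intercept and RT slope within its group.
    da$Group   <- factor(da$Group)
    da$subject <- factor(da$subject)
    grp_subj   <- interaction(da$Group, da$subject, sep = ":", drop = TRUE)
    head(levels(grp_subj))   # levels like "0:251001", "1:251001", ...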

The results are as follows:

    Random effects:
     Groups        Name        Variance Std.Dev. Corr
     Group:subject (Intercept) 0.9785   0.9892
                   RT          0.1434   0.3787   -0.77
     Group:Trial   (Intercept) 0.7694   0.8772
                   RT          0.1047   0.3236   -0.68
    Number of obs: 39401, groups:  Group:subject, 438; Group:Trial, 180

    Fixed effects:
                Estimate Std. Error z value Pr(>|z|)
    (Intercept)  2.72834    0.11997  22.742  < 2e-16 ***
    RT          -0.98367    0.05909 -16.647  < 2e-16 ***
    Group1      -0.12424    0.16829  -0.738  0.46036
    RT:Group1    0.23286    0.08163   2.853  0.00434 **

The random-effects estimates above pool group 0 and group 1 together, without
differentiating between them. I would like to get the following:

1) estimates of the subject and trial random effects (variances and
correlations) in group 0 and in group 1 separately (see the sketch after this
list).

2) estimates of the correlations between the subjects' random slopes in group
0 and group 1.
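
To make (1) and (2) concrete, one reparametrization I wonder about (a sketch
only, using my variable names; I don't know whether it is sensible here) puts
Group inside the random term so that each group level gets its own variance:

    ## Sketch: with Group as a factor, 0 + Group + Group:RT inside the random
    ## term should give separate intercept and RT-slope variances for group 0
    ## and group 1, plus the correlations between them.
    AC_sep <- glmer(Accuracy ~ RT * Group +
                      (0 + Group + Group:RT | subject) +
                      (0 + Group + Group:RT | Trial),
                    data = da, family = binomial)
    VarCorr(AC_sep)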

Questions:

3) Can lme4 and lmerTest do this? If yes, how?

4) If they cannot, is it justified to fit separate models for group 0 and for
group 1 and then compare the results (as sketched just below)? The problem with
this approach is that I don't get a statistical test of the RT:Group1
interaction.
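
By "separate models" I mean something like this (a sketch with the same
variable names as above):

    ## Sketch: fit group 0 and group 1 separately and compare their variance
    ## components.
    AC_g0 <- glmer(Accuracy ~ RT + (1 + RT | subject) + (1 + RT | Trial),
                   data = subset(da, Group == 0), family = binomial)
    AC_g1 <- update(AC_g0, data = subset(da, Group == 1))
    VarCorr(AC_g0)
    VarCorr(AC_g1)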

5) Is it justified to extract the random effects for each group and then
calculate the variances and correlations manually (see the sketch below)? If
so, is it more reasonable to extract the random effects from the model that
includes the RT-by-Group interaction, or from the models fitted separately per
group (as in question 4)? I know that this gives different results than letting
lme4 estimate the variance components itself, due to the marginal
probabilities...
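
The manual approach I have in mind is roughly this (a sketch; the element name
`Group:subject` comes from the model above):

    ## Sketch: split the conditional modes (BLUPs) by the group prefix of the
    ## row names and compute per-group variances and correlations by hand.
    re_subj <- ranef(AC)$`Group:subject`
    grp     <- sub(":.*$", "", rownames(re_subj))   # "0" or "1"
    lapply(split(re_subj, grp), function(x)
        list(variances = apply(x, 2, var), correlations = cor(x)))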

Thanks!

*EDIT A*
Roland from CrossValidated suggested specifying the random effects as follows:

    (RT * Group | Group:subject) + (RT * Group | Group:Trial)

This is what I got:

    Random effects:
     Groups        Name        Variance Std.Dev. Corr
     Group:subject (Intercept) 0.88355  0.9400
                   RT          0.11654  0.3414   -0.87
                   Group1      0.68278  0.8263   -0.32  0.26
                   RT:Group1   0.12076  0.3475   -0.01 -0.28 -0.24
     Group:Trial   (Intercept) 0.64182  0.8011
                   RT          0.09434  0.3071   -0.76
                   Group1      0.75896  0.8712   -0.37  0.29
                   RT:Group1   0.15605  0.3950    0.29 -0.53 -0.52
    Number of obs: 39401, groups:  Group:subject, 438; Group:Trial, 180

    Fixed effects:
                     Estimate Std. Error z value Pr(>|z|)
    (Intercept)       2.70777    0.11273  24.021  < 2e-16 ***
    RT               -0.98825    0.05821 -16.976  < 2e-16 ***
    Group1           -0.08302    0.16997  -0.488  0.62525
    RT:Group1         0.25620    0.08793   2.914  0.00357 **
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    convergence code: 0
    unable to evaluate scaled gradient
    Model failed to converge: degenerate Hessian with 5 negative eigenvalues


*A1*) This looks like what I was looking for, especially when I look at the
output. However, the model did not converge:

    convergence code: 0
    unable to evaluate scaled gradient
    Model failed to converge: degenerate Hessian with 5 negative eigenvalues

What should I do with these warnings?
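
For context, these are the kinds of things I have seen recommended for such
warnings (a sketch; I have not checked whether any of it helps here, and `mod`
stands for the EDIT A model):

    ## Sketch: try a different optimizer with more iterations, or compare all
    ## available optimizers with allFit(); rescaling RT may also help.
    mod_b <- update(mod, control = glmerControl(optimizer = "bobyqa",
                                                optCtrl = list(maxfun = 2e5)))
    summary(allFit(mod))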

*A2*) I am a bit baffled when I extract the random effects:

    ranef(mod)$`Group:subject`

                 (Intercept)            RT       Group1     RT:Group1
    0:251001    -1.308168428  0.4780271048  0.352869565 -0.0737619415
    0:251002     1.050036079 -0.3071004273 -0.294625317 -0.0334146992
    0:251003    -1.220858015  0.4676770866  0.326114487 -0.0949017322
    0:251004     0.944849620 -0.2545466823 -0.268350172 -0.0564150418
    ...
    1:251001    -0.197649527  0.0839724493 -0.649897297 -0.1228681971
    1:251002     0.710716899 -0.2103765167  0.006884114 -0.2151618897
    1:251003    -0.402869078  0.1326561677 -0.344966110  0.0257983193
    1:251004    -0.321174375  0.0874198115  0.191529601  0.1521126993

The row names already encode the nesting (0:251001 means subject 251001 in
group 0), and yet each subject also has a value for group 0 (the Intercept
column) and for group 1 (the Group1 column), and likewise for the slope
columns. What do these data show me?

What is the difference between defining the random factors as
`1+RT|Group:subject` and then looking at the Intercept and RT values for
0:subject1, 0:subject2, ..., 1:subject1, 1:subject2, ...,

and

defining the random factors as `RT*Group|subject` and looking at the various
columns (Intercept, RT, Group1, RT:Group1) for subject1, subject2, etc.?

Thank you,
Dominik
