I was about to suggest a similar strategy using the effects package:

require(lme4)
require(effects)
data("lexdec", package = "languageR")
lexdec.lmer <- lmer(RT ~ Trial*PrevType + (1|Subject) + (1|Word),
                    contrasts = list(PrevType = "contr.sum"), data = lexdec)
summary(lexdec.lmer)
eff <- effect("Trial:PrevType", lexdec.lmer,
              xlevels = list(Trial = c(1, 100)))
as.data.frame(eff)
# Trial PrevType      fit         se    lower    upper
#     1  nonword 6.449084 0.03747189 6.375587 6.522581
#   100  nonword 6.416167 0.03510482 6.347313 6.485022
#     1     word 6.368973 0.03760717 6.295210 6.442736
#   100     word 6.354509 0.03513726 6.285591 6.423427

The output "lower" and "upper" bounds are the 95% CI.

On Thu, May 12, 2016 at 9:05 AM, Henrik Singmann <
singm...@psychologie.uzh.ch> wrote:

>
>
> Hi Florian,
>
> Sorry for the late reply.
>
> An alternative is to use lsmeans, which provides this functionality as
> shown below. I hope this example data works well enough as an illustration.
>
> Note that lsmeans by default uses pbkrtest to calculate the standard
> errors, which can be both time- and memory-consuming. To disable this, run:
> lsm.options(disable.pbkrtest = TRUE)
>
> Hope that helps,
> Henrik
>
>
> require(afex)
> require(lsmeans)
> data("lexdec",package = "languageR")
>
> lexdec.lmer <- mixed(RT ~ Trial*PrevType + (1|Subject) + (1|Word),
>                      data = lexdec)
> lexdec.lmer
> #           Effect         df F.scaling         F p.value
> # 1          Trial 1, 1577.79      1.00   7.06 **    .008
> # 2       PrevType 1, 1581.80      1.00 14.86 ***   .0001
> # 3 Trial:PrevType 1, 1578.48      1.00      1.07     .30
>
> lsmeans(lexdec.lmer, "PrevType", at = list(Trial = c(1)))
> # NOTE: Results may be misleading due to involvement in interactions
> #  PrevType   lsmean         SE    df lower.CL upper.CL
> #  nonword  6.449084 0.03747297 30.30 6.372586 6.525582
> #  word     6.368973 0.03760838 30.74 6.292244 6.445702
> #
> # Confidence level used: 0.95
>
> pairs(lsmeans(lexdec.lmer, "PrevType", at = list(Trial = c(1))))
> # NOTE: Results may be misleading due to involvement in interactions
> #  contrast         estimate         SE      df t.ratio p.value
> #  nonword - word 0.08011087 0.02066816 1581.83   3.876  0.0001
>
> lsmeans(lexdec.lmer, "PrevType", at = list(Trial = c(100)))
> #  PrevType   lsmean         SE    df lower.CL upper.CL
> #  nonword  6.416167 0.03510497 23.35 6.343607 6.488728
> #  word     6.354509 0.03513742 23.44 6.281896 6.427122
> #
> # Confidence level used: 0.95
>
> pairs(lsmeans(lexdec.lmer, "PrevType", at = list(Trial = c(100))))
> # NOTE: Results may be misleading due to involvement in interactions
> #  contrast         estimate          SE      df t.ratio p.value
> #  nonword - word 0.06165852 0.008597497 1587.14   7.172  <.0001
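>
> If you also want a confidence interval for the contrast itself (not just
> its p-value), something like the following should work (a sketch, assuming
> the default 0.95 level):
>
> confint(pairs(lsmeans(lexdec.lmer, "PrevType", at = list(Trial = c(100)))))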
>
>
>
>
>
> On 10.05.2016 at 01:54, T. Florian Jaeger wrote:
>
>> Hi ling-R-lang-lers
>>
>> I'm looking for ideas to deal with the following situation. I have an
>> analysis in which there's an interaction of a categorical variable,
>> treatment, and a continuous variable, trial.
>>
>> I'd like to estimate the effect of treatment at trial = x. Specifically,
>> I'd also like to calculate the CI or significance of treatment at trial =
>> x (so I can't just calculate the predicted effect). I'd like to do so
>> without giving up the linearity assumption for trial (i.e., I can't just
>> recode the model to a simple effects specification).
>>
>> I guess I could just sample from the model and calculate significance
>> over the samples (e.g., with the sim function from the arm package),
>> but I feel there should be a more straightforward way to do this, based
>> on the variance-covariance matrices. Any ideas?
>>
>> Thank you,
>>
>> Florian
>>
>
>
>


-- 
-----------------------------------------------------
Dan Mirman
Assistant Professor
Department of Psychology
Drexel University
http://www.danmirman.org
-----------------------------------------------------
