Daniel Lakeland dlakelan at street-artists.org writes:
We have used the lmer package to fit various models for the various
experiments that she has done (random effects from multiple
measurements for each animal or each trial, and fixed effects for
developmental stage, genotype, etc.).
I am helping my wife do some statistical analysis. She is a biologist,
and she has performed some measurements on various genotypes of
mice. My background is in applied mathematics and engineering, and I
have a fairly good statistics background, but I am by no means a PhD
level expert in
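A model of the kind described above can be sketched in lme4 syntax. All variable names here (response, genotype, stage, animal) and the simulated data are hypothetical stand-ins, not the poster's actual data:

```r
library(lme4)

# Hypothetical data layout: repeated measurements per animal, with genotype
# and developmental stage as fixed effects (names are stand-ins, not the
# poster's actual variables).
set.seed(1)
mice <- data.frame(animal = gl(20, 3),
                   genotype = gl(2, 30, labels = c("wt", "ko")),
                   stage = gl(3, 1, 60))
mice$response <- rnorm(60, mean = as.numeric(mice$genotype))

# Random intercept per animal absorbs the repeated measurements.
m <- lmer(response ~ genotype * stage + (1 | animal), data = mice)
summary(m)
```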
[R] lmer coefficient distributions and p values
Dear all,
I want to estimate a crossed-random-effects model (i.e., measurements,
students, schools) where students migrate between schools over time.
I'm interested in the fixed effects of SES, age and their
interaction on read (reading achievement) while accounting for the
sample design. Based
On 8/7/07, Daniel Caro [EMAIL PROTECTED] wrote:
Dear all,
I want to estimate a crossed-random-effects model (i.e., measurements,
students, schools) where students migrate between schools over time.
I'm interested in the fixed effects of SES, age and their
interaction on read (reading
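A sketch of the crossed-random-effects specification described above, using the poster's variable names (read, SES, age, student, school) with simulated stand-in data. Crossing the grouping factors is what accommodates students who move between schools, since neither factor is nested in the other:

```r
library(lme4)

# Simulated stand-ins for the poster's variables; with crossed (non-nested)
# random effects, student and school are separate grouping factors, so a
# student's school is free to change across measurement occasions.
set.seed(1)
d <- data.frame(student = gl(50, 4),
                school  = factor(sample(1:5, 200, replace = TRUE)),
                SES     = rnorm(200),
                age     = rnorm(200, 12, 2))
d$read <- 50 + 2 * d$SES + d$age + rnorm(200, 0, 5)

# Two independent random-intercept terms give crossed random effects.
fit <- lmer(read ~ SES * age + (1 | student) + (1 | school), data = d)
```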
Dear all,
Here's my question about lmer.
This function seems very sensitive to the data set: sometimes it works,
other times it does not.
I attach the data file; the code I tried is below.
Please let me know what the problem is and how to solve
I'm using lmer to fit mixed-effect logistic regression models. This
is for a small data set.
First, I fit a constant:
Generalized linear mixed model fit using Laplace
Formula: propm ~ (1 | study)
Data: inducedSR71507.dat
Family: binomial(logit link)
AIC BIC logLik deviance
183.7
Hi all,
I try to fit a glmm model with binomial distribution and I would like to
verify that the scale parameter is close to 1...
the lmer function gives the following result :
Estimated scale (compare to 1 ) 0.766783
But I would like to know how this estimation (0.766783) is performed,
and I
- Original Message
From: Douglas Bates [EMAIL PROTECTED]
To: Steven McKinney [EMAIL PROTECTED]
Cc: Iasonas Lamprianou [EMAIL PROTECTED]; r-help@stat.math.ethz.ch
Sent: Tuesday, 15 May, 2007 2:17:34 AM
Subject: Re: [R] lmer function
On 5/14/07, Steven McKinney [EMAIL PROTECTED
Hi All.
I'm trying to run a simple model from Baayen, Davidson, Bates and getting
a confusing error message. Any ideas what I'm doing wrong here?
# Here's the data.
Subj <- factor(rep(1:3, each = 6))
Item <- factor(rep(1:3, 6))
SOA <- factor(rep(0:1, 3, each = 3))
RT <-
On 5/16/07, Rick DeShon [EMAIL PROTECTED] wrote:
Hi All.
I'm trying to run a simple model from Baayen, Davidson, Bates and getting
a confusing error message.
On 5/14/07, Iasonas Lamprianou [EMAIL PROTECTED] wrote:
Does anyone know if the lmer function of lme4 works fine for unbalanced
designs? I have the examination results of 1000 pupils on three subjects, one
score every term. So, I have three scores for English (one for every term),
three
On 4/2/07, Seyed Reza Jafarzadeh [EMAIL PROTECTED] wrote:
Hi,
I am getting a warning message when I am fitting a generalized linear
mixed model (m1.2 below).
Dear Douglas Bates,
Thanks for your attention. Please see the warnings with additional
control argument for the models that failed.
m1.2 <- lmer(o ~ pv1o + pv2o + pv1toa + pv2toa + sesblf + (pv1o | prov) + (1
| pm), data = mydata[1:1392,], family = quasipoisson, control =
list(msVerbose =
Hi,
I am getting a warning message when I am fitting a generalized linear
mixed model (m1.2 below).
CHOLMOD warning: matrix not positive definite
Error in objective(.par, ...) : Cholmod error `matrix not positive
definite' at file:../Supernodal/t_cholmod_super_numeric.c, line 614
Any idea?
I have data consisting of several binary responses from a large
number of subjects on seven similar items. I have been using lmer
with (crossed) random effects for subject and item. These effects are
almost always (in the case of subject, are always) significant
additions to my model,
Dear R users
I am trying to obtain p-values for (quasi)poisson lmer models, including
Markov-chain Monte Carlo sampling and the command summary.
My problem is that the p-values derived from these two methods are
totally different. My questions are:
(1) Is there a bug in my code, and
(2) How can I
Dear R users,
Please ignore my first email titled Lme and p values it was an unfinished
email. I am trying to calculate p values for lmer. I have read the posts on
http://wiki.r-project.org/rwiki/doku.php?id=guides:lmer-tests. My
question is why are the p-values for my model so
Dear all,
I am currently analyzing count data from a hierarchical design, and I've
tried to follow the suggestions for a correct estimation of p-values as
discussed at R-Wiki
(http://wiki.r-project.org/rwiki/doku.php?id=guides:lmer-tests).
However, I have the problem that my
On Mon, 2007-02-12 at 13:58, Christoph Scherber wrote:
Dear Henric,
Thanks, now it works; but how reliable are these estimates? Especially
with p-values close to 0.05 it is of course important that the range of
the estimates is not too large. I've just run several simulations, each
of which sometimes yielded quite different p-values.
Best wishes
Christoph Scherber Christoph.Scherber at agr.uni-goettingen.de writes:
Dear Henric,
Thanks, now it works; but how reliable are these estimates?
Dear R,
I am currently working on a mixed effects model, family poisson. Could
someone please explain to me how to check the residuals? I have read
another post with the same question but unfortunately no answer. I have
tried resid and qqnorm.
Many thanks
Beth Strain
Dear all,
We are dealing with a variable (BA) which indicates the overlap between
small mammal home ranges. It varies between 0 and 1 and it can be
interpreted as the probability of two home ranges to overlap,
therefore we would have modelled it with the binomial family, also
supported by the
Hello,
I have some problems trying to write up the formula for lmer.
I have 43 subjects (random factor) who were seen twice (Visit: repeated
measure, fixed). On each visit the patient performed a graded effort exercise
(Effort: repeated measures, ordered, fixed, 4 levels).
So Subject is
Dear All,
I am using lmer to fit a glmm model using the PQL approximation.
My data is not a Bernoulli (0/1) but a binomial one. The first thing I
noticed is that the method gives me the approximated covariance matrix for the
random effects but nothing about the residuals. In addition, I
Hello,
I've just located the illuminating explanation by Douglas Bates on degrees
of freedom in mixed models.
The take-home message appears to be: don't trust the p-values from lme.
Questions:
Should I give up hypothesis testing for fixed effects terms in mixed models?
Has my time spent reading
Try using mcmcsamp() to sample from the posterior distribution of the
parameter estimates. You can calculate a p-value from that, if that is
your desire. Instructions are in the R wiki:
http://wiki.r-project.org/rwiki/doku.php?id=guides:lmer-tests
HTH,
Simon.
Dan Bebber wrote:
Hello,
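A minimal sketch of the approach recommended above. Note that mcmcsamp() belongs to the lme4 of this era and was removed from later releases; in current lme4 the closest equivalents are profile or parametric-bootstrap confidence intervals:

```r
library(lme4)

# mcmcsamp() from the era of this thread no longer exists in current lme4.
# Modern equivalents for quantifying uncertainty in the fixed effects:
m <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Profile confidence intervals for all parameters:
ci <- confint(m, method = "profile")

# Parametric-bootstrap intervals (closest in spirit to the MCMC sample);
# commented out because it is slow:
# ci.boot <- confint(m, method = "boot", nsim = 500)
```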
Greetings all,
I am using lmer (lme4 package) to analyze data where the response is a
proportion (0 to 1). It appears to work, but I am wondering if the analysis
is treating the response appropriately - i.e. can lmer do this?
I have used both family=binomial and quasibinomial - is one more
Dear Cameron,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Cameron Gillies
Sent: Sunday, December 03, 2006 1:58 PM
To: r-help@stat.math.ethz.ch
Subject: [R] lmer and a response that is a proportion
Greetings all,
I am using lmer (lme4
On Sun, 3 Dec 2006, John Fox wrote:
Brian Ripley [EMAIL PROTECTED] wrote:
Subject: Re: [R] lmer and a response that is a proportion
Dear Brian and John,
Thanks for your insight. I'll clarify a couple of things
in case it changes your advice.
My response is a ratio of two measures taken during a bird's
path, which varies from 0 to 1, so I
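When the proportion arises from known counts, the standard binomial specification uses a two-column (successes, failures) response. The sketch below uses hypothetical names (success, trials, site, x) and simulated data; for a continuous 0-1 ratio like the one described above, with no underlying counts, a binomial family is much harder to justify:

```r
library(lme4)

# Hypothetical counts: 'success' out of 'trials' per observation, with a
# random intercept per site.
set.seed(1)
d <- data.frame(site = gl(10, 8), x = rnorm(80), trials = 20)
d$success <- rbinom(80, size = d$trials, prob = plogis(0.5 * d$x))

# Two-column (successes, failures) response; at the time of this thread the
# call was lmer(..., family = binomial), which current lme4 spells glmer().
m <- glmer(cbind(success, trials - success) ~ x + (1 | site),
           family = binomial, data = d)
```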
Cameron Gillies cgillies at ualberta.ca writes:
Hello Simon and John,
I'm afraid I need to include random effects, both a random intercept and
possibly random coefficients and it doesn't look like betareg can do that.
Kevin Wright has posted wish on R-wiki for beta mixed effects model. There
Hi
I know that p-values no longer appear in the summary of a linear
mixed model with lmer. However, if I do a mixed logistic regression with
lmer using family=binomial, the summary includes p-values for the fixed
effects.
Is this normal? Could I use those p-values to interpret the fixed
On 11/29/06, Julien [EMAIL PROTECTED] wrote:
Hi
I know that p-values no longer appear in the summary of a linear
mixed-model with lmer.
I use lmer to fit a model m.
What is the predict function for this one?
I tried predict(m, new_data)
but it doesn't work.
Aimin
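At the time of this post, lmer fits had no predict() method; current lme4 does. A sketch of both routes, using the sleepstudy data as a stand-in for the poster's model and new_data:

```r
library(lme4)

m <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
new_data <- sleepstudy[1:5, ]

# Modern lme4: predict() works directly on merMod fits
# (includes the random effects by default; re.form = NA drops them).
p1 <- predict(m, newdata = new_data)

# By-hand population-level prediction (the workaround in older lme4):
X  <- model.matrix(~ Days, new_data)
p2 <- drop(X %*% fixef(m))
```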
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
Alan Juilland Alan.Juilland at unil.ch writes:
2/ I read somewhere that lme is more adequate when heteroscedasticity is
strong. Do I have to use lme instead of lmer?
This is correct, because currently lme has the weights argument to handle this
(i.e. weights=varPower()), while lmer is
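A sketch of the lme variance-function approach mentioned above, using the poster's names (resp, var1, var2, rand) with simulated stand-in data; varPower() models the residual standard deviation as a power of the fitted values:

```r
library(nlme)

# Simulated stand-ins for the poster's resp, var1, var2, rand:
set.seed(1)
dat <- data.frame(var1 = gl(2, 30), var2 = gl(2, 5, 60), rand = gl(6, 10))
dat$resp <- exp(rnorm(60, mean = as.numeric(dat$var1) + as.numeric(dat$var2)))

# lme supports variance functions via 'weights'; lmer (lme4) does not.
fit <- lme(resp ~ var1 * var2, random = ~ 1 | rand,
           weights = varPower(), data = dat)
```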
Hi everybody,
I'm trying to analyse a set of data with a non-normal response, 2 fixed
effects and 1 nested random effect with strong heteroscedasticity in the
model.
I planned to use the function lmer : lmer(resp~var1*var2 + (1|rand)) and
then use permutations based on the t-statistic given
Hi all,
I am having issues comparing models with lmer. As an example, when
I run the code below the model summaries (AIC, BIC, loglik) differ between
the summary() and anova() commands. Can anyone clear up what's wrong?
Thank you!
Darren Ward
library(lme4)
data(sleepstudy)
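The usual explanation for this discrepancy (at least in current lme4) is that lmer fits by REML by default, while anova() refits the models by maximum likelihood before comparing them, so the AIC/BIC/logLik it reports differ from summary()'s REML-based values. A sketch:

```r
library(lme4)

# Default fits use REML; summary() reports REML-based criteria.
fm1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
fm2 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# anova() refits both models with ML so the likelihood-ratio test is valid;
# its AIC/BIC/logLik therefore differ from summary()'s REML values.
anova(fm1, fm2)

# Fitting with REML = FALSE up front makes the two outputs agree:
fm1.ml <- update(fm1, REML = FALSE)
```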
On 18 Oct 2006 16:21:11 -0400, Darren M. Ward
[EMAIL PROTECTED] wrote:
Hi all,
I am having issues comparing models with lmer. As an example, when
I run the code below the model summaries (AIC, BIC, loglik) differ between
the summary() and anova() commands. Can anyone clear up what's wrong?
Thank you for the pointer to the FAQ. Thought I had searched the FAQ
thoroughly, obviously I didn't.
Unfortunately, my stats aren't up to fully understanding the explanation
and the proposed solution in the FAQ.
For the time being, I would recommend using a Markov Chain Monte Carlo
sample
When I do lmer models I only get Estimate, Standard Error and t value in
the output for the fixed effects.
Is there a way I get degrees of freedom and p values as well?
I'm very new to R, so sorry if this is a stupid question.
Thank you
- Mike
Mike Ford
Centre for Speech and Language
On Fri, 2006-10-06 at 17:05 +0100, Mike Ford wrote:
When I do lmer models I only get Estimate, Standard Error and t value in
the output for the fixed effects.
Is there a way I get degrees of freedom and p values as well?
I'm very new to R, so sorry if this is a stupid question.
Thank
list,
I am using lmer to fit multilevel models and trying to use anova to compare the
models. However, whenever I run the anova, the AIC, BIC and logLik are
different from the original model output, as below. Can someone help me out
with why this is happening? (I'm hoping the output
The first example in the 'lmer' help page is the following:
(fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy))
If this does not answer your question, please provide commented,
minimal, self-contained, reproducible code with your explanation of why
and how it does not.
Dear all,
I wish to thank Christoph Buser and John Wilkinson for their input, and
especially John for his examples and for pointing me to the thread
'Doubt about nested aov output' where the rat-example was hiding...:
Dear Henrik
There is an article in the R-News Fitting linear mixed models
in R in which you can find some examples for the syntax of
nested and non-nested design.
http://cran.r-project.org/doc/Rnews/Rnews_2005-1.pdf
Hope this helps
Christoph
Dear all,
During my pre-R era I tried (yes, tried) to understand mixed models by
working through the 'rat example' in Sokal and Rohlf's Biometry (2000),
3rd ed., pp. 288-292. The same example was later used by Crawley (2002) in his
Statistical Computing p 363-373 and I have seen the same data being
Dear readers,
Is it possible to specify a model
y=X %*% beta + Z %*% b ; b=(b_1,..,b_k) and b_i~N(0,v^2) for i=1,..,k
that is, a model where the random slopes for different covariates are i.i.d.,
in lmer() and how?
In lme() one needs a constant grouping factor (e.g.: all=rep(1,n)) and would
On 7/4/06, John Christie [EMAIL PROTECTED] wrote:
Hi,
I have been having a tedious issue with lmer models with lots of
factors and lots of levels. In order to get the basic information at
the beginning of the print out I also have to generate these enormous
tables as well. Is there
Hi,
I have been having a tedious issue with lmer models with lots of
factors and lots of levels. In order to get the basic information at
the beginning of the print out I also have to generate these enormous
tables as well. Is there a method command to leave off all of the
I'm estimating two models for data with n = 179 with four clusters (21,
70, 36, and 52) named siteid. I'm estimating a logistic regression model
with random intercept and another version with random intercept and
random slope for one of the independent variables.
fit.1 <-
http://www.student.kuleuven.be/~m0390867/dimitris.htm
- Original Message -
From: Rick Bilonick [EMAIL PROTECTED]
To: R Help r-help@stat.math.ethz.ch
Sent: Thursday, June 29, 2006 3:52 PM
Subject: [R] lmer - Is this reasonable output?
I'm estimating two models for data with n = 179
On Fri, 2006-06-23 at 21:38 -0700, Spencer Graves wrote:
Permit me to try to repeat what I said earlier a little more clearly:
When the outcomes are constant for each subject, either all 0's or all
1's, the maximum likelihood estimate of the between-subject variance is
Inf. Any
see inline
Rick Bilonick wrote:
On Fri, 2006-06-23 at 21:38 -0700, Spencer Graves wrote:
Permit me to try to repeat what I said earlier a little more clearly:
When the outcomes are constant for each subject, either all 0's or all
1's, the maximum likelihood estimate of the
Rick Bilonick wrote:
I guess the moral is before you do any computations you have to make
sure the procedure makes sense for the data.
Is this a candidate for the fortunes package? (an oxymoronic profound, but
obvious comment).
-- Bert Gunter
Genentech Non-Clinical Statistics
South
Permit me to try to repeat what I said earlier a little more clearly:
When the outcomes are constant for each subject, either all 0's or all
1's, the maximum likelihood estimate of the between-subject variance is
Inf. Any software that returns a different answer is wrong. This is
On Tue, 2006-06-20 at 20:27 +0200, Göran Broström wrote:
On 6/19/06, Rick Bilonick [EMAIL PROTECTED] wrote:
On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
If I understand correctly, Rick is trying to fit a model with random
effects on a binary response when there are either 1 or 2
You could think of 'lmer(..., family=binomial)' as doing a separate
glm fit for each subject, with some shrinkage provided by the assumed
distribution of the random effect parameters for each subject. Since
your data are constant within subject, the intercept in your model
without
On Wed, 2006-06-21 at 08:35 -0700, Spencer Graves wrote:
You could think of 'lmer(..., family=binomial)' as doing a separate
glm fit for each subject, with some shrinkage provided by the assumed
distribution of the random effect parameters for each subject. Since
your data are
On 6/19/06, Rick Bilonick [EMAIL PROTECTED] wrote:
On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
If I understand correctly, Rick is trying to fit a model with random
effects on a binary response when there are either 1 or 2 observations
per group.
If you look at Rick's examples,
On Sat, 2006-06-17 at 09:46 -0700, Spencer Graves wrote:
'lmer' RETURNS AN ERROR WHEN SAS NLMIXED RETURNS AN ANSWER
Like you, I would expect lmer to return an answer when SAS
NLMIXED does, and I'm concerned that it returns an error message instead.
Your example is not
On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
If I understand correctly, Rick is trying to fit a model with random
effects on a binary response when there are either 1 or 2 observations
per group. I think that is very optimistic because there is so little
information available per
On Wed, 14 Jun 2006, Martin Henry H. Stevens wrote:
Hi folks,
Warning: I don't know if the result I am getting makes sense, so this
may be a statistics question.
The fitted values from my binomial lmer mixed model seem to
consistently overestimate the cell means, and I don't know why. I
If I understand correctly, Rick is trying to fit a model with random
effects on a binary response when there are either 1 or 2 observations
per group. I think that is very optimistic because there is so little
information available per random effect (exactly 1 or 2 bits of
information per random
'lmer' RETURNS AN ERROR WHEN SAS NLMIXED RETURNS AN ANSWER
Like you, I would expect lmer to return an answer when SAS
NLMIXED does, and I'm concerned that it returns an error message instead.
Your example is not self contained, and I've been unable to
get the result you
see inline
Martin Henry H. Stevens wrote:
Hi folks,
Warning: I don't know if the result I am getting makes sense, so this
may be a statistics question.
The fitted values from my binomial lmer mixed model seem to
consistently overestimate the cell means, and I don't know why. I
Hi folks,
Warning: I don't know if the result I am getting makes sense, so this
may be a statistics question.
The fitted values from my binomial lmer mixed model seem to
consistently overestimate the cell means, and I don't know why. I
assume I am doing something stupid.
Below I include
I'm using FC4 and R 2.3.1 to fit a mixed effects logistic regression.
The response is 0/1 and both the response and the age are the same for
each pair of observations for each subject (some observations are not
paired). For example:
id response age
1  0        30
1  0        30
2  1        55
2
: Fri 5/19/2006 7:54 PM
To: Frank E Harrell Jr
Cc: Douglas Bates; r-help
Subject:Re: [R] lmer, p-values and all that
On Fri, 2006-05-19 at 17:44 -0500, Frank E Harrell Jr wrote:
Douglas Bates wrote:
Users are often surprised and alarmed that the summary of a linear
. . . .
Doug
Users are often surprised and alarmed that the summary of a linear
mixed model fit by lmer provides estimates of the fixed-effects
parameters, standard errors for these parameters and a t-ratio but no
p-values. Similarly the output from anova applied to a single lmer
model provides the sequential
Douglas Bates wrote:
Users are often surprised and alarmed that the summary of a linear
. . . .
Doug,
I have been needing this kind of explanation. That is very helpful.
Thank you. I do a lot with penalized MLEs for ordinary regression and
logistic models and know that getting sensible
On Fri, 2006-05-19 at 17:44 -0500, Frank E Harrell Jr wrote:
Douglas Bates wrote:
Users are often surprised and alarmed that the summary of a linear
. . . .
Doug,
I have been needing this kind of explanation. That is very helpful.
Thank you. I do a lot with penalized MLEs for ordinary
Amelie LESCROEL lescroel_cebc at no-log.org writes:
Hello,
I'm trying to fit the following model:
Dependent variable: MAXDEPTH (the maximum depth reached by a penguin during
a given dive)
Fixed effects: SUCCESSMN (an index of the individual quality of a bird),
STUDYDAY (the day of
Hello,
I'm trying to fit the following model:
Dependent variable: MAXDEPTH (the maximum depth reached by a penguin during
a given dive)
Fixed effects: SUCCESSMN (an index of the individual quality of a bird),
STUDYDAY (the day of the study, from -5 to 20, with 0=Dec 20), and the
Hi, Doug:
I just confirmed a discrepancy (reported by Paul Cossens) between the
output of lme and lmer using the Oxide example in Pinheiro and Bates,
pp. 167-170. It appears that coef(lmer(...)) adds only the ranef for
Wafer and omits the one for Lot.
Consider the
Sorry to be late coming to this discussion. I was just talking with
Harold on the telephone and we decided why the inconsistency exists.
The lme function is designed to fit models with strictly nested random
effects. (It can be used to fit models with crossed random effects
but the process is
Paul:
I may have found the issue (which is similar to your conclusion). I
checked using egsingle in the mlmRev package as these individuals are
strictly nested in this case:
library(mlmRev)
library(nlme)
fm1 <- lme(math ~ year, random = ~1|schoolid/childid, egsingle)
fm2 <- lmer(math ~ year
My question relates to problems that I'm having matching lme and lmer
examples in PB.
using Matrix 0.995
In the Oxide example in p167-170 I can't get the level 2 coefficient
estimates to match
the fm1Oxide model in lme is
data(Oxide,package=nlme)
lme(Thickness~1,Oxide)
which I translate in
Hi!
I've just updated my version of R, from 2.0.1 to 2.2.1.
When I run my old lmer models in the new version, I get no p-values for
the fixed factors; I got them with the older version of R.
Also, when I try to run a binomial model in lmer, with the AGQ method I
get the following message:
Error in
Dear Erlend
For an answer to your first question about the degrees of
freedom, have a look at:
https://stat.ethz.ch/pipermail/r-help/2006-January/016384.html
To answer the second question: I think that the AGQ method is
not yet implemented as the error message implies. lmer is a
function that
Will your data support using lme in the 'nlme' package? If yes, I
suggest you switch. This is consistent with a response given recently
by Doug Bates to a crudely related question
(http://finzi.psych.upenn.edu/R/Rhelp02a/archive/68340.html).
You probably know that lmer
Hello!
I would like to use lmer() to fit data, which are some estimates and
their standard errors, i.e. a kind of meta-analysis. I wonder if the weights
argument is the right one to use to include uncertainty (standard
errors) of data into the model. I would like to use lmer(), since I
would like
RSiteSearch("lmer nested") produced 85 hits, the first of which looks
to me like it would answer your question
(http://finzi.psych.upenn.edu/R/Rhelp02a/archive/61571.html): Have you
tried replacing state with region:state something like the following:
lmer (y ~ black*female + (1 |
On 1/14/06, Leo Gürtler [EMAIL PROTECTED] wrote:
Dear altogether,
is it possible to integrate weights arguments within lmer to
incorporate statements to handle heteroscedasticity as it is possible
with lme?
I searched the R-archive but found nothing, so I assume it is not
possible, but
Dear altogether,
is it possible to integrate weights arguments within lmer to
incorporate statements to handle heteroscedasticity as it is possible
with lme?
I searched the R-archive but found nothing, so I assume it is not
possible, but as lmer is under heavy development, maybe something
The version of lmer based on the supernodal Cholesky factorization,
which we will release real soon, does not crash on this example. It
does give very large estimates of the variances in that model fit, at
least for the simulation that I ran.
It is best if you use set.seed(123454321) (or
Thanks to some help by Doug Bates (and the updated version of the Matrix
package), I've refined my question about fitting nested and non-nested
factors in lmer(). I can get it to work in linear regression but it
crashes in logistic regression. Here's my example:
# set up the predictors
1 - 100 of 128 matches