Hoyt's ANOVA and Cronbach's alpha are the same statistic. I think there is an
example in Nunnally and Bernstein. I have no idea what a W statistic is. Can
you explain that?
-Original Message-
From: [EMAIL PROTECTED] on behalf of thamrin hm
Sent: Mon 9/3/2007 3:37 AM
To: R-help@stat.math
Ahh, the key to getting what you want is to ask the same question over
and over again. This question is not about R and an answer can be found
in all basic books on hierarchical linear models.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Simon P
You can get this using alpha() or alpha.Summary() in the MiscPsycho package.
Stratified alpha coefficients are coming for the next release, BTW.
-Original Message-
From: [EMAIL PROTECTED] on behalf of Steve Powell
Sent: Tue 8/21/2007 10:02 AM
To: r-help@stat.math.ethz.ch
Subject: [R] sta
> -Original Message-
> From: Thomas Lumley [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, August 07, 2007 11:06 AM
> To: Doran, Harold
> Cc: Lucy Radford; r-help@stat.math.ethz.ch
> Subject: Re: [R] Robust Standard Errors for lme object
>
> On Tue, 7 Aug 2
Lucy:
Why are you interested in robust standard errors from lme? Typically,
robust standard errors are sought when there is model misspecification
due to ignoring some covariance among units within a group.
But a mixed model is designed to directly account for covariances among
units within a group.
Gang:
I think what Peter is asking is for you to put some of your output
in an email. If the values of the fixed effects are the same across
models, but the F-tests are different, then there is a whole other
thread we will point you to for an explanation. (I don't presume to
speak for other pe
Maybe use the sink() function
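A minimal sketch of that approach (the file and object names here are only illustrative):

sink("expression.txt")   # divert console output to a file
print(myLongExpression)  # whatever object or expression needs saving
sink()                   # restore output to the console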
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Yuchen Luo
> Sent: Thursday, August 02, 2007 10:17 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Saving an expression to a file
>
> Dear Friends.
> I have a very long
at.math.ethz.ch; Doran, Harold
> Subject: Re: [R] Matrix Multiplication, Floating-Point, etc.
>
> Thank you for responding!
>
> I realize that floating point operations are often inexact,
> and indeed, the difference between the two answers is within
> the all.equal tolerance
This is giving you exactly what you are asking for. The operator * does
element by element multiplication. So, .48 + -.48 =0, right? Is there
another mathematical possibility you were expecting?
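To make the distinction concrete, a small sketch (the .6/.8 values are just an illustration tied to the .48 figure above):

a <- matrix(c(.6, .8, .8, -.6), ncol = 2)   # columns are orthogonal
a * a          # element-by-element: every entry squared
crossprod(a)   # t(a) %*% a; the off-diagonal is .6*.8 + .8*(-.6) = .48 - .48 = 0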
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tal
Michael
Assume your data frame is called "data" and your variable is called
"V1". Converting this to a factor is:
data$V1 <- factor(data$V1)
Creating the classes can be done using ifelse(). Something like
data$class <- ifelse(data$V1 < .21, "A", ifelse(data$V1 < .41, "B", "C"))
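If there are more than a few cut points, cut() may be tidier than nested ifelse() calls; a sketch using the same (assumed) breaks:

data$class <- cut(data$V1, breaks = c(-Inf, .21, .41, Inf),
                  labels = c("A", "B", "C"), right = FALSE)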
Harold
> -Orig
I think you want to use detach()
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Weiwei Shi
> Sent: Tuesday, July 03, 2007 3:27 PM
> To: r-help@stat.math.ethz.ch
> Cc: [EMAIL PROTECTED]
> Subject: [R] reinforce library to re-load
>
> Hi,
>
> I am
Just take advantage of R's vectorized calculations:
x <- 1:600
.8^x
Harold
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of livia
> Sent: Tuesday, July 03, 2007 12:05 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] sequences
>
>
> Hi, I
All of my resources for numerical analysis show that the spectral
decomposition is
A = CBC'
where C is the matrix of eigenvectors and B is a diagonal matrix of eigenvalues.
Now, using the eigen function in R
# Original matrix
aa <- matrix(c(1,-1,-1,1), ncol=2)
ss <- eigen(aa)
# This results yields bac
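For reference, a quick check that the eigen() output reconstructs the original (symmetric) matrix:

ss$vectors %*% diag(ss$values) %*% t(ss$vectors)   # recovers aa, i.e., C B C'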
/2007 3:32 PM
To: Doran, Harold
Cc: Jean-Baptiste Ferdy; r-help@stat.math.ethz.ch
Subject: Re: [R] degrees of freedom in lme
On Mon, 2007-06-25 at 13:15 -0400, Doran, Harold wrote:
> This is such a common question that it has an "FAQ-like" response from Doug
> Bates. Google &
This is such a common question that it has an "FAQ-like" response from Doug
Bates. Google "lmer p-values and all that" to find the response.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Jean-Baptiste Ferdy
> Sent: Monday, June 25, 2007 12:
I dealt with something like this recently.
x <- data.frame(plot = gl(2,5), tree = rnorm(10))
y <- split(x, x$plot)
ss <- character(2)
for (i in 1:2) {
ss[i] <- sample(row.names(y[[i]]), 1)
}
z <- x[ss,]
People help out of the goodness of their hearts and not for publication
recognition.
In addition to Peter's comments, the following link summarizes the issue
as well. This is a direct response to the SAS/lmer DF issue.
https://stat.ethz.ch/pipermail/r-help/2006-May/094765.html
Harold
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
Thanks for the clarification, Marc. Also, I should correct my error below. I
wrote Excel's limit is 16^2. But it is 2^16 - 1.
-Original Message-
From: Marc Schwartz [mailto:[EMAIL PROTECTED]
Sent: Fri 6/1/2007 6:23 PM
To: Doran, Harold
Cc: Guanrao Chen; r-help@stat.math.ethz.ch
Su
There is no maximum size. This will be driven by (at least) two issues.
First, how much memory you have on your own computer, and second, what
data you have in each cell. For instance, an integer takes less memory
than a double-precision floating-point value.
Other spreadsheet programs like Excel limit the number of rows
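As a rough illustration of that memory difference (exact sizes vary a bit by platform and R version):

object.size(integer(1e6))   # about 4 MB: 4 bytes per integer
object.size(numeric(1e6))   # about 8 MB: 8 bytes per double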
Rina
p-values are not generated as part of the lmer object returned. If you read
the ranef help page, it will show you two things. First, how you can get a
"caterpillar plot" and second how you can get the posterior variances of the
random effects. Then, you can do your test to see if they are diff
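A sketch of the sort of calls being described, using the lme4 syntax of that era (newer versions spell the argument condVar rather than postVar, and sleepstudy is just a stand-in data set):

library(lme4)
library(lattice)
fm <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
re <- ranef(fm, postVar = TRUE)   # condVar = TRUE in newer lme4
dotplot(re)                       # the "caterpillar plot"
attr(re$Subject, "postVar")       # posterior variances of the random effects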
By the end of the day I will have a vignette completed for MiscPsycho.
This vignette lays out the mathematical details of the primary
functions in the package and provides substantive examples of how to use
these functions in a sample session.
This vignette will ultimately end up being distribute
I have just submitted MiscPsycho to CRAN. MiscPsycho contains functions
for miscellaneous psychometrics that may be useful for applied
psychometricians. MML estimation already exists in the ltm package.
Hence, a jml option is provided for users who prefer this method. The
jml function gives back ra
> The best, of course, would be to get rid of Perl altogether.
In Python, it is possible to make standalone executables. Is it possible
to do this in Perl as well? Then one could eliminate a Perl install. Or is
it possible to use Python to accomplish what Perl is currently doing? I
may be getting
Hi Gabor:
I tried the link below, but it seems to be broken.
> -Original Message-
> From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
> Sent: Friday, May 04, 2007 10:05 AM
> To: Doran, Harold
> Cc: Duncan Murdoch; r-help@stat.math.ethz.ch
> Subject: [SPAM] -
ase, this may be too disheartening.
Harold
> -Original Message-
> From: Duncan Murdoch [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 03, 2007 3:51 PM
> To: Doran, Harold
> Cc: Gabor Grothendieck; r-help@stat.math.ethz.ch
> Subject: [SPAM] - Re: [SPAM] - Re: [R] R packag
age-
> From: Duncan Murdoch [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 03, 2007 3:24 PM
> To: Doran, Harold
> Cc: Gabor Grothendieck; r-help@stat.math.ethz.ch
> Subject: [SPAM] - Re: [R] R package development in windows -
> Bayesian Filter detected spam
>
> On 5/3/2007
othendieck [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 03, 2007 2:50 PM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] R package development in windows
>
> It can't find sh.exe, so you haven't installed Rtools.
>
> There are several HowTo
I'm attempting to build an R package for distribution and am working
from the directions found at
http://www.maths.bris.ac.uk/~maman/computerstuff/Rhelp/Rpackages.html#Win-Win
I've read through Writing R Extensions and various other "helpful" web
sites. I've installed all relevant software (perl,
citation()
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tomas Mikoviny
> Sent: Wednesday, May 02, 2007 11:44 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] reference in article
>
> Hi all R positive,
>
> does anyone know how to refer R in a
In the near future I will release MiscPsycho, a package that contains
various functions useful for applied psychometricians. I would like to
include some data sets for distribution in the package, but I have not
created any of these on my own; instead, I have used data distributed in other
packages such as
sample(1:2, 10, replace=TRUE)
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of raymond chiruka
> Sent: Wednesday, April 25, 2007 9:45 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] creating random numbers
>
> I want to create a column of 1
No, use a while loop. Something like
change <- 1
while (abs(change) > .001) {
  # do stuff here
  change <- updated.change   # placeholder for the newly computed change
}
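A minimal runnable sketch of that pattern (the square-root iteration is just a stand-in for "do stuff"):

x.old  <- 10
change <- 1
while (abs(change) > .001) {
  x.new  <- 0.5 * (x.old + 2 / x.old)   # one Newton step toward sqrt(2)
  change <- x.new - x.old
  x.old  <- x.new
}
x.old   # approximately sqrt(2)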
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of rach.s
> Sent: Thursday, April 19, 2007 8:00 AM
> To: r-help@stat.math.
I think there are many who can help, but this question is quite vague.
This assumes we have access to the book you note and can make sense of
your question without sample data.
If you cannot find a sample data set, please create a sample data file.
However, there are so many sample data sets in the mlm
x <- c(1,4,15,6,7)
which(x==max(x))
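A related base R shortcut, in case only the first maximum is needed:

which.max(x)   # index of the first maximum only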
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Sent: Wednesday, April 04, 2007 11:14 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] argmax
>
> Hello,
>
> Is there any function that re
One option is to use lmer in a Monte Carlo simulation. I just did this last
week. Check out the article published in The American Statistician, which can be
found at http://maven.smith.edu/~nhorton/R/r.pdf.
The article is not about power per se, but is about R as a toolbox for
mathematical statist
As Doug noted yesterday, you may have a limit on the memory needed to
create the model matrix for the fixed effects. But, aside from that,
based on what I see below, it appears you have manually created a model
matrix yourself. If that's true, you don't need to do that. You can
create a single colu
The function integrate() performs adaptive quadrature. There are other
functions for Gaussian quadrature in the statmod package that I really like.
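For example, a small sketch with statmod's gauss.quad(); Gauss-Hermite nodes and weights use the weight function exp(-x^2), so the weights alone integrate a constant:

library(statmod)
gh <- gauss.quad(20, kind = "hermite")
sum(gh$weights)                # integral of exp(-x^2) over the real line: sqrt(pi)
sum(gh$weights * gh$nodes^2)   # integral of x^2 * exp(-x^2): sqrt(pi)/2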
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Caio
> Lucidius Naberezny Azevedo
> Sent: Wednesday, March 21, 2007 5
Suppose I have longitudinal data and want to use the econometric strategy of
"de-meaning" a model matrix by time. For sake of illustration 'mat' is a model
matrix for 3 individuals each with 3 observations where ``1'' denotes that
individual i was in group j at time t or ``0'' otherwise.
mat <
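Since 'mat' itself was cut off, here is a hedged sketch of the de-meaning step once the matrix and a time index exist; the layout below is an assumption:

time <- rep(1:3, 3)                          # 3 individuals x 3 time points
mat  <- matrix(rbinom(18, 1, .5), ncol = 2)
mat.demeaned <- mat - apply(mat, 2, function(col) ave(col, time))   # subtract time-period means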
It's been an interesting game of "telephone". Actually, the thread
started with a recommendation to have a place on CRAN for prospective
employers to place job adverts that require R as a skill.
I think the sig is a good idea. But, I think it would be much easier to
have something akin to what Pyt
The other day, CNN had a story on working at Google. Out of curiosity, I
went to the Google employment web site (I'm not looking, but just
curious). In perusing their job posts for statisticians, preference is
given to those who use R and Python. Other languages, S-Plus and
something called SAS wer
Hi Dave
We had a bit of an off-list discussion on this. You're correct, it can
be negative IF the inter-item covariances are, on balance, negative, i.e.,
if the sum of the item covariances is negative. Equivalently, the sum of
the individual item variances then exceeds the total score variance, and
that is the condition that makes alpha go negative.
Weiwei
Something is wrong. Coefficient alpha is bounded between 0 and 1, so
negative values are outside the parameter space for a reliability
statistic. Recall that reliability is the ratio of "true score" variance
to "total score variance". That is
var(t) / (var(t) + var(e))
If all variance is tru
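For reference, a minimal sketch of coefficient alpha computed from an item-score matrix (generic code, not necessarily what any particular package does):

cronbach.alpha <- function(items) {
  # items: a numeric matrix or data frame, one column per item
  C <- cov(items)
  k <- ncol(C)
  (k / (k - 1)) * (1 - sum(diag(C)) / sum(C))
}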
representation. I need to walk through this and see what
turns up.
Thanks for the recommendation.
> -Original Message-
> From: Gabor Grothendieck [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 24, 2007 11:06 AM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
&
Perfect, thxs
> -Original Message-
> From: Dimitris Rizopoulos
> [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, January 24, 2007 10:49 AM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] Replace missing values in lapply
>
> you need
I have some matrices stored as elements in a list that I am working
with. One example is provided below as TP[[18]]
> TP[[18]]
level2
level1 1 2 3 4
1 79 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
Now, using prop.table on this gives
> prop.table(TP[[18]],1)
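(The output itself was cut off above. If the issue is that rows summing to zero come back as NaN, that is the 0/0 division in prop.table(); one hedged fix:)

p <- prop.table(TP[[18]], 1)
p[rowSums(TP[[18]]) == 0, ] <- 0   # or leave the NaN rows, depending on what the zeros mean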
I have matrices stored within a list, something like the following:
a <- list(matrix(rnorm(50), ncol=5), matrix(rnorm(50), ncol=5))
b <- list(matrix(rnorm(50), nrow=5), matrix(rnorm(50), nrow=5))
I don't recall how to perform matrix multiplication on each list element
such that the result is a new li
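One way is to map "%*%" over the paired list elements:

ab <- Map("%*%", a, b)                        # list of a[[i]] %*% b[[i]]
ab <- mapply("%*%", a, b, SIMPLIFY = FALSE)   # equivalent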
aov() is not the right function for this problem; lmer() is designed for
multilevel modeling. There are a lot of resources, but start with the
following vignette:
library(lme4)
vignette("MlmSoftRev")
And then turn to Pinheiro and Bates.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto
help(package='lme4') will tell you
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Sent: Monday, January 15, 2007 8:37 AM
> To: Martin Maechler
> Cc: [EMAIL PROTECTED]; r-help@stat.math.ethz.ch;
> [EMAIL PROTECTED]
> Subject
I don't know of any functions specifically designed to handle DIF (e.g.,
Mantel-Haenszel). But ltm does give you the point estimates and standard
errors so you can do a t-test between the focal and reference groups.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECT
I'm a bit new with Python, but have found it extremely easy to learn and
use. I have been using it to pre-process some text files that we often
deal with and that need to be formatted in a certain way before they can be
used for statistical analysis in another software program.
I suppose there is one t
I don't run the program this way so I don't know. But it makes me wonder if the
problem is with \Sexpr{} or the way she is trying to run Sweave and compile.
So, is it possible that code chunks are working but not \Sexpr{}?
If you have something like
<<>>=
R code here
@
Do you get that output?
Suzette
I have not experienced any problems with \Sexpr{} and new R versions. It might
be helpful if you could provide a minimal example of your .Rnw and how \Sexpr{}
is used within.
It is obvious what OS you're using given the path directories below, but
normally it might be useful to be expli
?VarCorr
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Rick Bilonick
> Sent: Wednesday, December 13, 2006 11:40 AM
> To: R Help
> Subject: [R] Obtaining Estimates of the Random Effects Parameters
>
> I'm running simulation using lme and sometime
BTW, Mac OS sits on Darwin, a Unix system. So you have all the
benefits of the Mac and can access Unix via the terminal (Steve Jobs is
brilliant). Things like Emacs are waiting for you to use on the Mac. I
haven't explored whether one can install R on the Mac and use it via the
Unix interface (or whe
t: Wednesday, December 06, 2006 12:07 PM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] intercept value in lme
>
> It is bounded, you're right. In fact it is -25 <= X <= 0
>
> These are cross-national survey data (I investigated 7
> countries
As Andrew noted, you need to provide more information. But, what I see
is that your model assumes X is continuous but you say it is bounded,
-25 < X < 0
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of victor
> Sent: Wednesday, December 06, 2006 3:3
This question is slightly off topic, but I'll use R to try and make it
as relevant as possible. I'm working on a problem where I want to
compare estimates from a posterior distribution under a uniform prior
with those obtained from a frequentist approach. Under these conditions
the estimates should
David:
Is this what you are looking for?
set.seed(10)
tmp <- data.frame(group = gl(3,10), something = rnorm(30))
with(tmp, tapply(something, group, mean))
1 2 3
-0.4906568 0.3695915 -0.9129640
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAI
I would turn this on its head. The problem with social science grad
schools is that students are not expected to know R. In my org doing
psychometrics, we won't even consider an applicant if they only know
SPSS.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] O
You might try the statmod package, which provides nodes and weights for
Gaussian quadrature.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Xiaofan Cao
> Sent: Wednesday, November 08, 2006 12:43 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Nume
Or, it could mean you need to recenter your time variable.
-Original Message-
From: [EMAIL PROTECTED] on behalf of Marc Bernard
Sent: Fri 11/3/2006 7:24 AM
To: r-help@stat.math.ethz.ch
Subject: [R] correaltion equal 1
Dear All,
I wonder if this is a technical or an interpretation
Not currently.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of John Fox
> Sent: Thursday, November 02, 2006 9:51 AM
> To: 'R-help'
> Subject: [R] correlation argument for lmer?
>
> Dear r-help members,
>
> Can lmer() in the lme4 package fit model
You could do this
> V1 <- c("apple","honey","milk","bread","butter")
> V2 <- c("bread","milk")
> intersect(V1,V2)
[1] "milk" "bread"
> setdiff(V1,V2)
[1] "apple" "honey" "butter"
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Antje
> Sent: Wed
I would like to be able to place a normal distribution surrounding the
predicted values at various places on a plot. Below is some toy code
that creates a scatterplot and plots a regression line through the data.
library(MASS)
mu <- c(0,1)
Sigma <- matrix(c(1,.8,.8,1), ncol=2)
set.seed(123)
x <- m
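Since the toy code was cut off above, here is a hedged completion of the idea; the sample size, the point x0, and the scaling of the density curve are all assumptions:

library(MASS)
mu <- c(0, 1)
Sigma <- matrix(c(1, .8, .8, 1), ncol = 2)
set.seed(123)
xy <- as.data.frame(mvrnorm(200, mu, Sigma))
names(xy) <- c("x", "y")
fit <- lm(y ~ x, data = xy)
plot(xy$x, xy$y)
abline(fit)
# draw a normal density, oriented vertically, around the fitted value at x0
x0  <- 1
s   <- summary(fit)$sigma
dev <- seq(-3 * s, 3 * s, length.out = 100)
lines(x0 + dnorm(dev, sd = s), predict(fit, data.frame(x = x0)) + dev)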
This question comes up periodically, probably enough to give it a proper
thread and maybe point to this thread for reference (similar to the
'conservative anova' thread not too long ago).
Moving from lme syntax (lme is the function found in the nlme package)
to lmer syntax (lmer is found in lme4) is not
See ?read.table and the FAQs on importing data
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Lorenzo Isella
> Sent: Wednesday, October 18, 2006 11:09 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] Automatic File Reading
>
> Dear All,
> I am gi
Because .rda files are saved R objects. Use load(), not read.table().
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Rita Gottloiber
> Sent: Thursday, October 12, 2006 10:36 AM
> To: r-help@stat.math.ethz.ch; r-help@stat.math.ethz.ch
> Subject: [R] import rda
?vcov
From: [EMAIL PROTECTED] on behalf of zhijie zhang
Sent: Thu 10/12/2006 9:56 AM
To: R-help@stat.math.ethz.ch
Subject: [R] how to get the variance-covariance matrix/information of alpha and
beta after fitting a GLMs?
Dear friends,
After fitting a generali
Hi David:
In looking at your original post it is a bit difficult to ascertain
exactly what your null hypothesis was. That is, you want to assess
whether there is a treatment effect at time 3, but compared to what? I
think your second post clears this up. You should refer to pages 224-
225 of Pinhi
It's not really possible to help without knowing what errors you received and
maybe some reproducible code. I think I remember this, though. From what I
recall, there was no distinction between box and chick, so you cannot estimate
both variance components.
> -Original Message-
> From:
Do you mean something like this:
plot(approx(Day,V), type='l')
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Sent: Monday, October 02, 2006 10:32 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] line plot through NA
>
>
Dan:
lmer cannot currently be used for the 2PL. As you note, it is straightforward
to estimate the 1PL, but the a-parameters present a current challenge. Doug
mentioned to me the other day he is doing some work on this, so I have copied
him on this reply.
Harold
-Original Message-
Fr
Peter,
There is a much easier way to do this. First, you should consider
organizing your data as follows:
set.seed(1) # for replication only
# Here is a sample dataframe
tmp <- data.frame(city = gl(3, 10, labels = c("London", "Rome", "Vienna")),
q1 = rnorm(30))
# Compute the means
with(tmp, tappl
It depends a bit on what function you are using. For example,
set.seed(1)
xx <- c(NA, rnorm(10))
> mean(xx)
[1] NA
> mean(xx, na.rm=TRUE)
[1] 0.1322028
That is how you would use this to compute a mean.
Harold
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Beh
Suppose I have a square matrix P
P <- matrix(c(.3,.7, .7, .3), ncol=2)
I know that
> P * P
returns the element-by-element product, whereas
> P%*%P
returns the matrix product.
Now, P^2 also returns the element-by-element product. But is there a
slick way to write
P %*% P %*% P
Obviously,
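A couple of sketches: one in base R, and one using the %^% operator from the expm package (if it is installed):

matpow <- function(P, n) Reduce(`%*%`, replicate(n, P, simplify = FALSE))
matpow(P, 3)               # same as P %*% P %*% P
# library(expm); P %^% 3   # alternative with the expm package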
Also, it is probably easier to use gl() than to coerce your data into a
factor:
fact <- gl(3, 3, labels = c("A", "B", "C"))
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Liaw, Andy
> Sent: Tuesday, September 12, 2006 11:32 AM
> To: Afshartous, David;
Just add the following to your code
new.fact <- fact[1:6, drop = TRUE]
> new.fact
[1] A A A B B B
Levels: A B
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Afshartous, David
> Sent: Tuesday, September 12, 2006 11:23 AM
> To: r-help@stat.math.ethz.
I don't know, this seems pretty simple and intuitive (to me)
# Create a sample data set with 439 variables
tmp <- data.frame(matrix(rnorm(4390), ncol = 439))
colnames(tmp) <- paste("col", 1:439, sep = "")
# rename a certain variable in that data set
names(tmp)[which(names(tmp) == 'col1')] <- 'NewName'
names(data) <- c('Apple', 'Orange')
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Ethan Johnsons
> Sent: Monday, September 11, 2006 12:49 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] rename cols
>
> A quick question please!
>
> How do you r
Hi Duncan
Here is a bit more detail; this is a bit tough to explain, so sorry for not
being clear. Ordering is not important because the vector I am creating
is used as a sufficient statistic in an optimization routine to get some
MLEs. So, any combination of the vector that sums to X is OK. But, the
Dear list
Suppose I have the following vector:
x <- c(3,4,2,5,6)
Obviously, this sums to 20. Now, I want a second vector, call it
x2, that sums to a value X where 5 <= X <= 20, but there are constraints.
1) The new vector must be the same length as x
2) No element of the new vector can be 0
3) Ele
This is a very basic question, but I am a bit confused with optim. I
want to get the MLEs using optim, which could replace the Newton-Raphson
code I have below that also gives the MLEs. The function takes as input
a vector x denoting whether a respondent answered an item correctly
(x=1) or not (x=0
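For orientation, a minimal sketch of what the optim() replacement might look like for the ability estimate in a Rasch-type model; the response vector and item difficulties below are made up:

x <- c(1, 1, 0, 1, 0)          # hypothetical item responses
b <- c(-1, -0.5, 0, 0.5, 1)    # hypothetical item difficulties

negloglik <- function(theta, x, b) {
  p <- plogis(theta - b)       # probability of a correct response
  -sum(x * log(p) + (1 - x) * log(1 - p))
}

optim(par = 0, fn = negloglik, x = x, b = b, method = "BFGS")$par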
Why are the results not reliable?
From: ESCHEN Rene [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 23, 2006 3:48 AM
To: Spencer Graves; r-help@stat.math.ethz.ch
Cc: Doran, Harold
Subject: RE: [R] Random structure of
mailto:[EMAIL PROTECTED] On Behalf Of Iuri
Gavronski
Sent: Thursday, August 17, 2006 1:26 PM
To: Doran, Harold
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Variance Components in R
I am trying to replicate Finn and Kayandé (1997) stud
This will give you many
examples.
vignette('MlmSoftRev')
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Iuri Gavronski
Sent: Thursday, August 17, 2006 11:16 AM
To: Doran, Harold
Subject: Re: [R] Vari
Iuri:
The lmer function is optimal for large data with crossed random effects.
How large are your data?
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Iuri Gavronski
> Sent: Thursday, August 17, 2006 11:08 AM
> To: Spencer Graves
> Cc: r-help@stat
6 PM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
> Subject: Re: [R] read.csv issue
>
> On Wed, 16 Aug 2006, Prof Brian Ripley wrote:
>
> > Set allowEscapes = FALSE when reading. See the help page
> for more details.
> >
> > There is perhaps an argum
om: jim holtman [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 16, 2006 3:10 PM
To: Doran, Harold
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] read.csv issue
Try 'gsub'
> y
schid
I'm trying to read in some data from a .csv format and have come across
the following issue. Here is a simple example for replication
# A sample .csv format
schid,sch_name
331-802-7081,School One
464-551-7357,School Two
388-517-7627,School Three \& Four
388-517-4394,School Five
Note the third lin
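For the archives, a sketch of the two fixes suggested in this thread (the file name is illustrative; allowEscapes is passed through read.csv() to read.table()):

dat <- read.csv("schools.csv", allowEscapes = FALSE)             # keep "\&" as literal text
dat$sch_name <- gsub("\\\\", "", as.character(dat$sch_name))     # or strip the backslashes afterwards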
Can you provide the summary(m2) results?
> -Original Message-
> From: Simon Pickett [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, August 16, 2006 7:14 AM
> To: Doran, Harold
> Cc: r-help@stat.math.ethz.ch
> Subject: [SPAM] - RE: [R] REML with random slopes and random
&
I don't think this is because you are using REML. The BLUPs from a mixed model
experience some shrinkage, whereas the OLS estimates would not.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Simon Pickett
> Sent: Tuesday, August 15, 2006 11:34 AM
> To: r
Let's maybe back up a bit on this. You said you are interested in
learning about the application of the Gibbs sampler for IRT models. I
don't think opening the C++ code would be the best approach for this.
Let me recommend the following article
Patz, R. J., and Junker, B. W. (1999). A straig
> (out <- lmer(p ~ f - 1 + (1|h/t) + (1|j), set))
Doug:
It seems the nesting syntax is handled a bit differently. Is (1|h/t)
equivalent to the old lme nesting syntax?
>
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/lis
> To sum up, I can't figure out how MLWin calculates the
> standardized residuals. But I understand that this is not a
> question for the R list.
> Nevertheless, it would help if someone could point me to some
> arguments why not to use them and stick to the results
> obtainable by ranef().
I'm not sure why the help says that
the standardized random effects are divided by the corresponding SE.
Maybe he can clarify if he has time.
I hope that helps
Harold
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Doran, Harold
>
> > Why do the results differ although the estimates (random
> effects and
> > thus their variances) are almost identical? I noticed that
> lme() does
> > not compute the standard errors of the variances of the
> random effects
> > - for several reasons, but if this is true, how does
> ran
You can have one observation per subject with multiple subjects nested in a
group. If you only have 1 observation per group, then there is no multilevel
structure to your data.
For example, 30 students in a classroom or 20 employees in an office division
are appropriate data structures. On the
Nantachai
It seems as though you have created the model matrices from the matrix
representation of the model. This is unnecessary, as lme will construct those for
you from your data frame.
The first thing I recommend you do is use lmer instead of lme. Second, you
should look in the vignette in t
I just sent this to you in a personal response, but for purposes of
archives, the following is one reference:
@book{mccu:sear:2002,
  author = {Charles E. McCulloch and Shayle Searle},
  year = {2002},
  title = {Generalized, Linear, and
One option is substr()
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Wade Wall
> Sent: Monday, July 24, 2006 9:42 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] trim function in R
>
> Hi all,
> I am looking for a function in R to trim the last
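A sketch of that idea (the strings, and the assumption that "trim the last" means dropping trailing characters, are both illustrative):

x <- c("plotA1", "plotB2")
substr(x, 1, nchar(x) - 1)              # drop the last character of each string
gsub("^\\s+|\\s+$", "", "  padded  ")   # or strip surrounding whitespace instead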