[R] Difference between cor.test() and formula everyone says to use

2014-10-16 Thread Jeremy Miles
I'm trying to understand how cor.test() is calculating the p-value of
a correlation. It gives a p-value based on t, but every text I've ever
seen gives the calculation based on z.

For example:
> data(cars)
> with(cars[1:10, ], cor.test(speed, dist))

Pearson's product-moment correlation

data:  speed and dist
t = 2.3893, df = 8, p-value = 0.04391
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.02641348 0.90658582
sample estimates:
  cor
0.6453079

But when I use the regular formula:
> library(psych)   # fisherz() comes from the psych package
> r <- cor(cars[1:10, ])[1, 2]
> r.z <- fisherz(r)
> se <- 1/sqrt(10 - 3)
> z <- r.z / se
> (1 - pnorm(z))*2
[1] 0.04237039

My p-value is different.  The help file for cor.test doesn't (seem to)
have any reference to this, and I can see in the source code that it
is doing something different. I'm just not sure what.
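For reference, the t-based calculation that cor.test() reports can be
reproduced like this (a minimal sketch using the r and n from the output
above; df = n - 2):

r <- 0.6453079
n <- 10
t.stat <- r * sqrt(n - 2) / sqrt(1 - r^2)   # 2.3893, as reported
2 * pt(-abs(t.stat), df = n - 2)            # 0.04391, the cor.test() p-value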

Thanks,

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] mice - undefined columns selected

2014-09-11 Thread Jeremy Miles
I've got a problem with the mice package that I don't understand.

Here's the code:
library(mice)
d <- read.csv("https://dl.dropboxusercontent.com/u/24381951/employment.csv";,
 as.is=TRUE, row.names=1)d.imp <- mice(data=d, m=1)

Result is:
Error in `[.data.frame`(data, , jj) : undefined columns selected

I hope I'm doing something foolish,

thanks,

Jeremy


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] metafor package: changing decimal in forest plot to midline decimal

2014-07-07 Thread Jeremy Miles
I've found that if you want really fine control over an issue like this in
a chart, the easiest thing to do is to export it as PDF, and then directly
edit the chart in Illustrator (not free) or Inkscape (free).
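Something like this (a minimal sketch; it assumes the rma object 'res' from
Wolfgang's example further down):

pdf("forest_plot.pdf", width = 8, height = 6)   # export as PDF
forest(res)                                      # draw the forest plot into the file
dev.off()                                        # then edit the PDF in Inkscape/Illustrator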




On 7 July 2014 10:21, Viechtbauer Wolfgang (STAT) <
wolfgang.viechtba...@maastrichtuniversity.nl> wrote:

> I tried this:
>
> library(metafor)
> data(dat.bcg)
> res <- rma(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg,
> slab=paste(author, year, sep=", "))
> options(OutDec="\xB7")
> forest(res)
>
> No warning, no scrambling, and all decimals shown in midline (also on the
> x-axis). But this is on Windows.
>
> My guess it's a font issue. There may be others that can give more useful
> advice.
>
> Best,
> Wolfgang
>
> > -Original Message-
> > From: Dietlinde Schmidt [mailto:schmidt.dietli...@web.de]
> > Sent: Monday, July 07, 2014 10:09
> > To: Viechtbauer Wolfgang (STAT); r-help@r-project.org
> > Subject: Re: [R] metafor package: changing decimal in forest plot to
> > midline decimal
> >
> > Thanks for that link, Wolfgang. Unfortunately, there comes the Warning
> > with it:
> > "(process:3634): Pango-WARNING **: Invalid UTF-8 string passed to
> > pango_layout_set_text()"
> > and decimal being "scrambled" in forest plot and not displaying the
> > midline decimal.
> >
> > I think it has to do with the fact that only 1-byte codes are allowed
> > for options(OutDec="\xB7").
> > Or does it have to do with my using Ubuntu?
> >
> > Apart from that, the options() command does not seem to change the
> > decimal separator of the values on the "true" x-axis under the plot.
> >
> > Still searching for a solution.
> >
> > Cheers,
> > Linde
> >
> > Am 05.07.2014 19:06, schrieb Viechtbauer Wolfgang (STAT):
> > > I found this:
> > >
> > > https://stat.ethz.ch/pipermail/r-help/2012-August/321057.html
> > >
> > > So, use this before drawing the forest plot:
> > >
> > > options(OutDec="\xB7")
> > >
> > > Best,
> > > Wolfgang
> > >
> > > --
> > > Wolfgang Viechtbauer, Ph.D., Statistician
> > > Department of Psychiatry and Psychology
> > > School for Mental Health and Neuroscience
> > > Faculty of Health, Medicine, and Life Sciences
> > > Maastricht University, P.O. Box 616 (VIJV1)
> > > 6200 MD Maastricht, The Netherlands
> > > +31 (43) 388-4170 | http://www.wvbauer.com
> > > 
> > > From: r-help-boun...@r-project.org [r-help-boun...@r-project.org] On
> > Behalf Of Dietlinde Schmidt [schmidt.dietli...@web.de]
> > > Sent: Thursday, July 03, 2014 3:07 PM
> > > To: r-help@r-project.org
> > > Subject: [R] metafor package: changing decimal in forest plot to
> > midline decimal
> > >
> > > Dear R-Community,
> > >
> > > I need to change the punctuation of the reported weights, effect sizes
> > > and confidence intervals in a forest plot created with the
> > > forest()-function in the metafor-package.
> > >
> > > Midline decimal means that it looks like this (23*6) rather than that
> > > (23.6).
> > >
> > > Do I need to change the forest() function, and if so, which part
> > > exactly? Or is there another way to do it, perhaps by changing the
> > > rma() object that the forest() function is then applied to?
> > >
> > > Thanks for any hints and tips!
> > >
> > > Cheers, Linde
> > >
> > > __
> > > R-help@r-project.org mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide http://www.R-project.org/posting-
> > guide.html
> > > and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R for Android

2014-05-08 Thread Jeremy Miles
It exists:
https://play.google.com/store/apps/details?id=com.appsopensource.R

No graphics.

Jeremy




On 8 May 2014 05:44, Kevin E. Thorpe  wrote:

> This is a question asked purely out of idle curiosity (and may also be in
> the wrong list). Are there plans for porting R to Android devices or
> chromebooks? Maybe it's as simple as compiling the source, but I don't know
> what tools are available.
>
> One of the current advantages of R is it runs on all commonly used
> platforms. If chromebooks and android devices get into greater use, it
> would be cool if R were available.
>
> Kevin
>
> --
> Kevin E. Thorpe
> Head of Biostatistics,  Applied Health Research Centre (AHRC)
> Li Ka Shing Knowledge Institute of St. Michael's
> Assistant Professor, Dalla Lana School of Public Health
> University of Toronto
> email: kevin.tho...@utoronto.ca  Tel: 416.864.5776  Fax: 416.864.3016
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lavaan Model Specification

2014-04-30 Thread Jeremy Miles
No picture attached, and you don't tell us what your trouble is, but your
model has -4 df, so it's incorrectly specified.

Take out these lines:

congressmanAttitudes ~~ congressmenPerceptConstiuentAttitudes
congressmenPerceptConstiuentAttitudes ~~ constiuentAttitudes
rollCallBehav ~~ congressmenPerceptConstiuentAttitudes
rollCallBehav ~~ congressmanAttitudes

And your model runs, with 0 df.

Your latent variable does nothing, because you only have one variable
which relates to it.

Jeremy

P.S. There's a lavaan group on google groups.



On 30 April 2014 08:02, Patzelt, Edward  wrote:

> R Community -
>
> I'm trying to build the model in the image below, but having troubles
> correctly specifying the syntax:
>
> library(lavaan)
>
> library(semPlot)
>
> lower <- matrix(c(1, 0, 0, 0, .475, 1, 0, 0, .738, .643, 1, 0, .608, .721,
> .823, 1), 4, 4, byrow = TRUE)
>
>
> colnames(lower) <- rownames(lower) <- c("constiuentAttitudes",
> "congressmanAttitudes", "congressmenPerceptConstiuentAttitudes",
> "rollCallBehav")
>
>
> mod1 <- '
>
> # latent vars
>
> constiuentAttitudes =~ congressmenPerceptConstiuentAttitudes
>
>
> # regressions
>
>
>
> congressmanAttitudes ~ congressmenPerceptConstiuentAttitudes
>
> rollCallBehav ~ congressmenPerceptConstiuentAttitudes +
> congressmanAttitudes
>
>
> # error
>
> congressmanAttitudes ~~ congressmenPerceptConstiuentAttitudes
>
> congressmenPerceptConstiuentAttitudes ~~ constiuentAttitudes
>
> rollCallBehav ~~ congressmenPerceptConstiuentAttitudes
>
> rollCallBehav ~~ congressmanAttitudes
>
> '
>
>
> mod1fit <- sem(mod1, sample.cov = lower, sample.nobs = 116)
>
>
> semPaths(mod1fit, what = "est", layout = "tree", title = TRUE, style
> ="LISREL"
> , nCharNodes = 10)
>
>
>
>
>
>
>
> --
>
> Edward H Patzelt | Clinical Science PhD Student, Psychology | Harvard
> University
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lavaan fit indices & Cronbach's alphas

2014-03-26 Thread Jeremy Miles
On 26 March 2014 10:13, Dimitri Liakhovitski  wrote:

> Hello!
>
> I've run SEM using lavaan and after I used summary(myfit) I saw the
> following fit indices:
>
> Model Chi Squared
> CFI
> TLI
> RMSEA
> SRMR
>
> I was wondering if these are the only fit indices lavaan produces, e.g.:
> GFI
> AGFI
> RMR
>
>
GFI and AGFI are pretty frowned upon, and not much use. What's the use of
RMR? It's not meaningful unless it's standardized.


> Also - does lavaan automatically estimate Cronbach's alphas for
> measurement models present?
>
>
No. Cronbach's alpha is tangentially related to SEM. Do you mean composite
reliability? That's not automatic either, but it can be programmed.
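A minimal sketch of composite reliability computed from standardized loadings
(the fitted model 'fit' and the factor name 'visual' are assumptions, not from
this thread, and the formula assumes no correlated residuals; semTools also
offers a packaged reliability() function):

std <- standardizedSolution(fit)
lam <- std$est.std[std$op == "=~" & std$lhs == "visual"]   # standardized loadings
cr  <- sum(lam)^2 / (sum(lam)^2 + sum(1 - lam^2))          # composite reliability
cr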

Jeremy


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Converting code to R Question

2013-02-25 Thread Jeremy Miles
Here's a direct translation:
 Variable <- 0
 Variable <- ifelse(item1 == 1, Variable + 1, Variable)
 Variable <- ifelse(item2 == 2, Variable + 1, Variable)
 Variable <- ifelse(item3 == 1, Variable + 1, Variable)
 Variable <- ifelse(item4 == 2, Variable + 1, Variable)

Here's another way to do it:

Variable <- 0 + (item1 == 1) + (item2 == 2) + (item3 == 1) + (item4 == 2)

Note that I haven't worried about missing data - do you have NAs in
your items? If you do, and you want an NA to count as not meeting the
condition (rather than making the result NA):

Variable <- rowSums(cbind(item1 == 1, item2 == 2, item3 == 1, item4 == 2),
                    na.rm = TRUE)


Jeremy


On 25 February 2013 17:02, Craig J  wrote:
> I'm learning R and am converting some code from SPSS into R.  My background
> is in SAS/SPSS so the vectorization is new to me and I'm trying to learn
> how to NOT use loops...or use them sparingly.  I'm wondering what the
> most efficient to tackle a problem I'm working on is. Below is an example
> piece of code.  Essentially what it does is set a variable to zero, loop
> through item responses, and add one if a condition is met. For example, if
> item one was responded as a 1 then add one to the final variable.  If item
> two was responded as a 2 then add one to the final variable.  I have to do
> this for five scales with each scale having 6 items half of which may have
> the 1 response pattern and half the 2 pattern.
>
> Any suggestions on how best to tackle this in R would be helpful.
>
> Craig
>
> **
> Old SPSS code sample
> **
>
> Compute Variable = 0.
>
> IF (item1 = 1) Variable = Variable +1 .
>
> IF (item2= 2) Variable = Variable +1 .
>
> IF (item3 = 1) Variable = Variable +1.
>
> IF (item4 = 2) Variable = Variable +1.
>
> EXECUTE .
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] MIMIC latent variable with PLS Path Modelling with R ?

2013-02-16 Thread Jeremy Miles
By MIMIC do you mean multiple indicator/multiple cause?  Something like
this: http://www.jeremymiles.co.uk/misc/fun/img059.gif

If so, you can use sem, lavaan, or OpenMx.

Jeremy




On 13 February 2013 05:11, Hervé Guyon  wrote:

> I want to estimate a MIMIC latent variable with R in a Monte Carlo simulation.
> The packages plspm and semPLS don't allow MIMIC variables to be introduced,
> only reflective or formative variables.
> The only program which seems to permit MIMIC latent variables with PLSPM
> is XLSTAT, which cannot be used to simulate a lot of data sets.
> It is a real challenge to develop a package with PLSPM and MIMIC latent
> variables…. And I would prefer to use a package which already exists.
> Does anyone have a solution in R?
>
> Hervé
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpret R-squared and cor in R

2013-02-15 Thread Jeremy Miles
On 15 February 2013 21:26, Janesh Devkota  wrote:

> Hi I am trying to find the relationship between two variables.
>
> First I fitted a linear model between two variables and I found the
> following results:
> Residual standard error: 0.03253 on 2498 degrees of freedom
> Multiple R-squared: 0.5551, Adjusted R-squared: 0.5549
> F-statistic:  3116 on 1 and 2498 DF,  p-value: < 2.2e-16
>
> Then I used the cor function to see the correlation between two variable
> I get the following result
> -0.7450344
>
>
r is a correlation (it actually stands for regression).

R (upper case) is a multiple correlation. But you only have one predictor,
so it's a correlation.

R squared is R (or r), squared.  So (-0.7450344)^2 = 0.555.
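A quick check (simulated x and y stand in for the two variables from the
question, which were not posted):

set.seed(1)
x <- rnorm(200)
y <- -0.75 * x + rnorm(200, sd = 0.7)            # stand-in data
r <- cor(x, y)
c(r = r, r.squared = r^2,
  lm.r.squared = summary(lm(y ~ x))$r.squared)   # r^2 equals the lm() R-squared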



> How can we interpret the result based on R-squared and correlation ? From
> the p-value we can see that there is very strong relationship between
> variables as it is  way less that 0.001
>
>

The p-value doesn't tell you about the strength of the relationship.


> Can anyone kindly explain the difference between Multiple R squared,
> adjusted R-squared and correlation and how to report these values while
> writing a report ?
>
>
I can suggest a number of books that do this much better than I could in an
email. But you probably have a favorite of your own.

Jeremy


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Question about Linear Regression

2012-12-28 Thread Jeremy Miles
You can run that as it is. The term to search for on Google is 'dummy
coding'.
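A minimal sketch with simulated data (the names x1, x2, x3 follow the question;
the values and coefficients are made up for illustration):

set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rbinom(100, 1, 0.5)  # x3 is 0/1
y  <- 1 + 0.5 * x1 - 0.3 * x2 + 0.8 * x3 + rnorm(100)
summary(lm(y ~ x1 + x2 + x3))   # the x3 coefficient is the adjusted group difference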

Jeremy

On 28 December 2012 07:45, Lorenzo Isella  wrote:

>
> where x3 is a dichotomous variable assuming only 0 and 1 values (x1 and x2
> are continuous variables).
> Is there any particular caveat I should be aware of? Can I code this as a
> simple multiple linear regression into R or is there anything else I should
> bear in mind?
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Non-linear regression analysis in R

2012-12-19 Thread Jeremy Miles
Could you provide the code that you're running, so we can see what
you're trying to do?  Even better would be a repeatable example.

Jeremy

On 19 December 2012 09:42, Yann Labou  wrote:
> Hey all,
>
> I'm trying to fit a non-linear model y ~ a * constant ^ b * x ^ c and
> estimate the parameters a, b and c.
>
> Using the nls function, I'm getting following error message:
>
> Error in nlsModel(formula, mf, start, wts) :
>   singular gradient matrix at initial parameter estimates
>
> If I logarithmize the whole equation log(y) ~ log (a) + b * log(constant) +
> c * log(x) and fit the equation with lm, I get only NAs estimates for the
> second term on the right side:
>
> Coefficients: (1 not defined because of singularities)
>
> Do you have any hints on how to fit this equation or any alternatives to
> nls?
>
> Thanks,
> Yann
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help for a function

2012-12-04 Thread Jeremy Miles
What are you expecting?

What do you get?

What is the problem?

J

On 4 December 2012 06:01, anoumou  wrote:
> Hello all,
> I need a help.
> I am modeling a disease and a create a R function like that:
>
> Lambda<-function (x,date1,r,h,a){
>   ndate1 <- as.Date(date1, "%d/%m/%Y")
>   t1 <- as.numeric(ndate1)
>   x[order(x$i),]
>   t <-x[,"t"]
>   i <-x[,"i"]
>   CONTAGIEUX <-x[,"CONTAGIEUX"]
>   while ( t1 < min(t) ){
>   for (i in 1:length(i) ){
> {for (j in 1:CONTAGIEUX[length(CONTAGIEUX)]){
>   res1[j] <-(a*h)
>   res2 <-sum( res1[j])
> }
>  }
>   lambda[i] <- r*res2
>   }
>   }
> x<-data.frame(x,lambda)
> x
> }
>
> on such data :
>
> DATEi   Symptomes   t   Incubation  CONTAGIEUX
> 1   2009-04-29   Canada 13  14363   13  13
> 2   2009-05-01   Israel 2   14365   2   2
> 3   2009-05-09  argentina   1   14373   1   1
> 5   2009-05-09  australia   1   14373   1   1
> 6   2009-05-10  australia   1   14374   2   2
> 7   2009-04-29  Austria 1   14363   1   1
> 8   2009-04-30  Austria 1   14364   2   2
> 9   2009-05-01  Austria 1   14365   2   3
> 10  2009-05-02  Austria 1   14366   2   4
> 11  2009-05-03  Austria 1   14367   2   5
> 17  2009-05-09  Austria 1   14373   2   7
> 18  2009-05-10  Austria 1   14374   2   7
> 19  2009-05-08  brasil  4   14372   4   4
> 20  2009-05-09  brazil  6   14373   6   6
> 21  2009-05-10  brazil  6   14374   12  12
> 22  2009-05-02  canada  51  14366   51  51
> 23  2009-05-03  canada  85  14367   136 136
> 24  2009-05-04  canada  101 14368   186 237
> 31  2009-04-27  Canada  6   14361   6   6
> 32  2009-04-28  Canada  6   14362   6   6
> 33  2009-04-30  Canada  19  14364   25  25
> 34  2009-05-01  Canada  34  14365   53  59
> 35  2009-05-01  China,HongKong, SAR 1   14365   1   1
> 36  2009-05-02  China,HongKong, SAR 1   14366   2   2
> 37  2009-05-03  China,HongKong, SAR 1   14367   2   3
> 38  2009-05-04  China,HongKong, SAR 1   14368   2   4
> 44  2009-05-10  China,HongKong, SAR 1   14374   2   7
> 45  2009-05-04  colombia1   14368   1   1
> 46  2009-05-05  colombia1   14369   2   2
> 47  2009-05-06  colombia
>
> But I do not get the results; I have tried everything, but I don't understand
> the problem.
> Thanks for your help.
>
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Help-for-a-function-tp4652054.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Replacing string in matrix with zero

2012-11-14 Thread Jeremy Miles
You can use ifelse()

#Create data for example
x <- matrix(data=c(Inf, 2, 3, 4, Inf, 6, 7, 8, Inf), nrow=3)
#Turn Inf into zero.
x <- ifelse(x == Inf, 0, x)
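An alternative, assuming the matrix is numeric with Inf values as in the
example above, is direct indexing:

x[is.infinite(x)] <- 0   # also catches -Inf, and keeps the matrix structure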

Jeremy





On 14 November 2012 14:13, Nick Duncan  wrote:

> Dear All,
>
> I have a matrix in which the diagonal has the string "Inf" in it. In
> order to be able to do cluster analysis this needs to be replaced with
> a Zero.
> I can do this by putting it into Excel, replacing  and putting it back
> into R but it's tedious, and I am sure there is a simple way to do it
> in R.
> If you have the route to do this, it would be much appreciated.
> Best wishes
> Nick Duncan
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R and SPSS

2012-11-06 Thread Jeremy Miles
I think we'll need to see some output so we can tell what the differences are.
(And data and code would be useful too, if you could provide a small example.)

One thought is that the programs might remove a variable that is completely
collinear, but the different programs might remove different variables - so
check that the same variables have been removed.
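In R you can see what was dropped like this (a sketch; 'fit' stands in for
your lm object):

summary(fit)   # coefficients dropped for perfect collinearity show up as NA
alias(fit)     # lists the exact linear dependencies among the predictors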


Jeremy


On 6 November 2012 13:39, Hui Du  wrote:

> Hi group:
>
>
> I have a data set which has a severe collinearity problem. While running
> linear regression in R and SPSS, I got different models. I am wondering if
> somebody knows how to make the two software output the same results. (I
> guess the way R and SPSS handling singularity is different, which leads to
> different models.)
>
>
> Thanks.
>
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Error: object 'CO2' not found

2012-10-22 Thread Jeremy Miles
You need to load the dataset.

First, run

data(CO2)

Then it should work.

Jeremy


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] negative AIC and BIC values in gls

2012-08-22 Thread Jeremy Miles
It's fine. Just interpret them as you would any other (lower is better).
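For reference, AIC = -2*logLik + 2k, so a positive log-likelihood easily gives
a negative AIC. A quick check against the numbers quoted below (k = 16
parameters is implied by the reported values, not stated in the post):

logLik.val <- 345.989
k <- 16
-2 * logLik.val + 2 * k   # -659.978, matching the reported AIC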


On 22 August 2012 16:43, Gary Dong  wrote:
> Dear R users,
>
> I obtained negative AIC and BIC and positive Loglik values in a gls model.
> Is this normal? how should I interpret them? Thanks!
>
>AIC   BIC   logLik
>   -659.978  -587.5541   345.989
>
> Best
> Gary
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Metafor package: Including multiple (categorical) predictors

2012-08-02 Thread Jeremy Miles
The test of moderator coefficients (QM) is chi-square distributed. You
can use the change in this value when you add a predictor to the model
as a chi-square test, with df equal to the change in df.
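A sketch of that comparison with rma() (the variable names yi, vi, cont1,
cont2, cat2level, cat4level and the data frame 'dat' are assumptions, not from
the post):

library(metafor)
fit0 <- rma(yi, vi, mods = ~ cont1 + cont2 + cat2level, data = dat)
fit1 <- rma(yi, vi, mods = ~ cont1 + cont2 + cat2level + factor(cat4level),
            data = dat)
q.diff  <- fit1$QM - fit0$QM   # change in the moderator test statistic
df.diff <- fit1$p  - fit0$p    # change in the number of model coefficients
pchisq(q.diff, df = df.diff, lower.tail = FALSE)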

Jeremy

On 2 August 2012 05:54, Bexkens, Anika  wrote:
> Dear Metafor users,
>
> I'd like to test a model with 2 continuous and 2 categorical moderators in a 
> meta regression. One categorical parameter has 2 levels and the other has 4 
> levels. If I understand correctly, when I include all moderators in the 
> model, Metafor returns main effects of the continuous parameters and 
> contrasts of each level of categorical moderators with the intercept (which 
> includes the reference level of the categorical parameters).
>
> This makes it possible to see whether different levels of the categorical 
> moderator are differentially related to effect size. I include multiple 
> moderators and would like to report for each variable whether it is 
> significantly moderating effect size. Is it possible to obtain an overall 
> main effect of each categorical variable, instead of the contrast effects? Or 
> can I only obtain this by including one categorical moderator at a time and 
> reporting the omnibus moderator test?
>
> Many thanks,
>
> Anika
>
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help in interpreting paired t-test

2011-09-21 Thread Jeremy Miles
> cor(A, B)
[1] 0.9986861

The data are very, very highly correlated. The higher the correlation,
the greater the power of the t-test to detect the same difference
between the means.

Jeremy

On 20 September 2011 10:46, Pedro Mardones  wrote:
> Dear all;
>
> A very basic question. I have the following data:
>
> 
>
> A <- 1/1000*c(347,328,129,122,18,57,105,188,57,257,53,108,336,163,
> 62,112,334,249,45,244,211,175,174,26,375,346,153,32,
> 89,32,358,202,123,131,88,36,30,67,96,135,219,122,
> 89,117,86,169,179,54,48,40,54,568,664,277,91,290,
> 116,80,107,401,225,517,90,133,36,50,174,103,192,150,
> 225,29,80,199,55,258,97,109,137,90,236,109,204,160,
> 95,54,50,78,98,141,508,144,434,100,37,22,304,175,
> 72,71,111,60,212,73,50,92,70,148,28,63,46,85,
> 111,67,234,65,92,59,118,202,21,17,95,86,296,45,
> 139,32,21,70,185,172,151,129,42,14,13,75,303,119,
> 128,106,224,241,112,395,78,89,247,122,212,61,165,30,
> 65,261,415,159,316,182,141,184,124,223,39,141,103,149,
> 104,71,259,86,85,214,96,246,306,11,129)
>
> B <- 1/1000*c(351,313,130,119,17,50,105,181,58,255,51,98,335,162,
> 60,108,325,240,44,242,208,168,170,27,356,341,150,31,
> 85,29,363,185,124,131,85,35,27,63,92,147,217,117,
> 87,119,81,161,178,53,45,38,50,581,661,254,87,281,
> 110,76,100,401,220,507,94,123,36,47,154,99,184,146,
> 232,26,77,193,53,264,94,110,128,87,231,110,195,156,
> 95,51,50,75,93,134,519,139,435,96,37,21,293,169,
> 70,80,104,64,210,70,48,88,67,140,26,52,45,90,
> 106,63,219,62,91,56,113,187,18,14,95,86,284,39,
> 132,31,22,69,181,167,150,117,42,14,11,73,303,109,
> 129,106,227,249,111,409,71,88,256,120,200,60,159,27,
> 63,268,389,150,311,175,136,171,116,220,30,145,95,148,
> 102,70,251,88,83,199,94,245,305,9,129)
>
> 
>
> plot(A,B)
> abline(0,1)
>
> At a glance, the data look very similar. Data A and B are two
> measurements of the same variable but using different devices (on a
> same set of subjects). Thus, I thought that a paired t-test could be
> appropriate to check if the diff between measurement devices = 0.
>
> t.test(A-B)
>
> 
>
> One Sample t-test
>
> data:  A - B
> t = 7.6276, df = 178, p-value = 1.387e-12
> alternative hypothesis: true mean is not equal to 0
> 95 percent confidence interval:
>  0.002451622 0.004162903
> sample estimates:
>  mean of x
> 0.003307263
>
> 
> The mean diff is 0.0033 but the p-value indicates a strong evidence to
> reject H0.
>
> I was expecting to find no differences so I'm wondering whether the
> t-test is the appropriate test to use. I'll appreciate any comments or
> suggestions.
>
> BR,
> PM
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Differences in SAS and R defaults

2011-08-29 Thread Jeremy Miles
Do you mean things like treatment of categorical variables in regression
procedures (which have different defaults in different procedures in SAS),
and different defaults for the reference category in logistic regression?
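For example, R's default for unordered factors is treatment contrasts with the
*first* level as reference, whereas SAS procedures typically use the last level
(a sketch; the SAS side is an assumption worth checking against your SAS
parameterization):

options("contrasts")                          # "contr.treatment" for unordered factors
f <- factor(c("a", "b", "c"))
contrasts(f)                                  # reference level is "a"
contrasts(f) <- contr.treatment(3, base = 3)  # mimic a last-level reference
contrasts(f)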

Jeremy



On 29 August 2011 04:46, n  wrote:

> Hello all,
>
> I am looking for theories and statistical analyses where the defaults
> employed in R and SAS are different. As a result, the outputs under
> the defaults should (at least slightly) differ for the same input.
>
> Could anyone kindly point any such instance?
>
> Thanks
>
> Nikhil
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] questions about "metafor" package

2011-08-17 Thread Jeremy Miles
.
>
> - Firstly, for each observation, I have means for a treatment and for 
> a control, but I don’t always have corresponding standard deviations (52 of a 
> total of 93 observations don’t have standard deviations). Nevertheless I have 
> the sample sizes for all observations so I wonder if it was possible to 
> weight observations by sample size in the package « metafor ».

Following what Wolfgang said, do you have some other information, such
as p-values, or standard errors of the difference, or confidence
intervals, which would allow you to calculate (or approximate) the
pooled SD?

jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Automatic creation of binary logistic models

2011-08-04 Thread Jeremy Miles
Sounds like you want best subsets regression; the bestglm() function,
found in the bestglm package, will do the trick.
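A minimal sketch (it assumes a data frame Xy with the predictors first and the
0/1 outcome as the last column, which is the layout bestglm() expects):

library(bestglm)
res <- bestglm(Xy, family = binomial, IC = "AIC")
res$BestModel   # the glm() fit with the lowest AIC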

Jeremy

On 4 August 2011 12:23, Paul Smith  wrote:

> Dear All,
>
> Suppose that you are trying to create a binary logistic model by
> trying different combinations of predictors. Has R got an automatic
> way of doing this, i.e., is there some way of automatically generating
> different tentative models and checking their corresponding AIC value?
> If so, could you please direct me to an example?
>
> Thanks in advance,
>
> Paul
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to search for R related topics in search engines?

2011-07-27 Thread Jeremy Miles
Use rseek.org.

Jeremy

On 27 July 2011 07:12, Paul Menzel  wrote:
> Dear R folks,
>
>
> I am having problems getting good results when searching for R-related
> topics; that is, I have not yet found out what keywords I should use
> to get only relevant results. Most of the time I also get MATLAB-related
> things, or nothing relevant at all. The reason, of course, is that the
> name R is just one letter.
>
> What keywords do you use? Or do you just go to certain sites and do not
> use any search engine at all?
>
> Until now I am always adding the keywords »r statistical« to my search,
> but it is not optimal.
>
>
> Thanks,
>
> Paul
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Message for R-help mailing list

2011-07-26 Thread Jeremy Miles
This is clearly a message for the R-help mailing list, since it was
sent to the R help mailing list.

 fisher.test(x)[1]
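Spelling that out a little: fisher.test(x)$p.value pulls the p-value by name.
A sketch for many tables (the list name 'tables' and the output file name are
assumptions, not from the message):

p.values <- sapply(tables, function(tab) fisher.test(tab)$p.value)
write.table(data.frame(snp = names(tables), p = p.values),
            file = "fisher_pvalues.txt", row.names = FALSE, quote = FALSE)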

Jeremy




On 26 July 2011 14:51, Zmarz, Pawel  wrote:
> Dear r-helpers,
>
> I would be very grateful if you could post the message below on the r-help 
> discussion board. Thank you very much!
>
> Best Wishes,
> Pawel
>
>
> Hello R community,
>
> I am generating lots of results using the fisher.test function, testing many 
> 2x2 tables of SNPs for association with a particular phenotype.
>
> A typical output of the fisher.test function would be (for example):
>
> data:  data1
> p-value = 0.9837
> alternative hypothesis: true odds ratio is greater than 1
> 95 percent confidence interval:
>  0.4162551       Inf
> sample estimates:
> odds ratio
>  0.6262607
>
> That's lovely, but my problem is that I am only interested in the "p-value" 
> result for each SNP that I check. If it is possible, I would like to extract 
> all the p-values I generate (I use a loop to generate all of them), chuck 
> them into a matrix, and then into a text file. Or perhaps directly export 
> them into a text file, without using a matrix -- whatever is easier. However, 
> I am stuck on how to do this...
>
> Maybe one way would be to save the full results (as they are above) into a 
> text file and then get R to read just the specific part of the file (i.e. the 
> p-value part) and then build a matrix...but I do not know what the code would 
> be for the "specific reading" part..?
>
> Would anyone have any ideas??
>
> Thank you so much for your help!!!
>
> Best Wishes,
> Pawel
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] function lm, get back the coefficient

2011-07-26 Thread Jeremy Miles
Will:

result[["time"]]

give you what you want?

Jeremy



On 26 July 2011 08:21, ascoquel  wrote:
> Hi,
>
> I've done a linear fit on my data and I would like to get back the a (time)
> coefficient ...
>
> mod<-lm(res_sql2$Lx0x~0+time)
> result<-data.frame()
> result<-coef(mod)
> print("result")
> print(result)
> [1] "result"
>      time
> 0.02530191
>
> But I would like just the value 0.02530191 ... I tried result$time but it
> doesn't work ...
>
> Thanks for your help.
> Anne-Sophie
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/function-lm-get-back-the-coefficient-tp3696109p3696109.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Extract elements from objects in a list

2011-06-28 Thread Jeremy Miles
Excellent, thanks.
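A variant that pulls the element out by name and drops the repeated "Median"
labels (using the list 'x' from the original message below):

medians <- unname(sapply(x, "[[", "Median"))
head(medians)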

On 28 June 2011 16:36, jim holtman  wrote:
> forgot to sent the sapply solution:
>
>> sapply(x, '[', 3)
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.4769 0.4880 0.4916 0.4021 0.4474 0.4449 0.5169 0.5067 0.5189 0.4088
> 0.4887 0.5392 0.4964 0.4141 0.5155 0.4461
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.4918 0.4910 0.5432 0.4784 0.5482 0.6263 0.5420 0.4933 0.5534 0.5066
> 0.5900 0.4553 0.4859 0.5721 0.5442 0.5105
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.4580 0.5268 0.4833 0.5178 0.5210 0.5808 0.4720 0.5457 0.5910 0.5796
> 0.5329 0.5178 0.4674 0.4280 0.4061 0.5665
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.4963 0.5013 0.4791 0.5329 0.4770 0.5926 0.4709 0.6042 0.5020 0.4788
> 0.5261 0.5010 0.4394 0.5339 0.5655 0.5200
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.5586 0.5362 0.5719 0.4851 0.4831 0.5458 0.5331 0.5611 0.4336 0.4727
> 0.5497 0.4768 0.5305 0.5261 0.5667 0.5107
> Median Median Median Median Median Median Median Median Median Median
> Median Median Median Median Median Median
> 0.5209 0.5635 0.4789 0.5428 0.5372 0.5403 0.5086 0.5470 0.4219 0.4758
> 0.4824 0.5165 0.5035 0.4833 0.4754 0.5227
> Median Median Median Median
> 0.6169 0.4904 0.4773 0.4779
>
> On Tue, Jun 28, 2011 at 7:22 PM, Jeremy Miles  wrote:
>> Hi All,
>>
>> I want to extract elements of elements in a list.
>>
>> Here's an example of what I mean:
>>
>> If I create a list:
>>
>> x <- as.list(100)
>> for(loop in c(1:100)) {
>>        x[[loop]] <- summary(runif(100))
>>        }
>>
>>
>>> head(x)
>> [[1]]
>>   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
>> 0.02271 0.25260 0.58130 0.52120 0.77270 0.99670
>>
>> [[2]]
>>    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
>> 0.006796 0.259700 0.528100 0.515500 0.781900 0.993100
>>
>> [[3]]
>>   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
>> 0.00927 0.22800 0.40780 0.46410 0.69460 0.98780
>>
>> I want to extract (say) the medians as a vector.  This would be:
>> x[[1]][[3]]
>> x[[2]][[3]]
>> x[[3]][[3]]
>>
>> I thought there would be a way of doing this with something like
>> apply(), but I cannot work it out.  Is there a way of doing this
>> without a loop?  Thanks,
>>
>> Jeremy
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jim Holtman
> Data Munger Guru
>
> What is the problem that you are trying to solve?
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Extract elements from objects in a list

2011-06-28 Thread Jeremy Miles
Hi All,

I want to extract elements of elements in a list.

Here's an example of what I mean:

If I create a list:

x <- as.list(100)
for(loop in c(1:100)) {
x[[loop]] <- summary(runif(100))
}


> head(x)
[[1]]
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.02271 0.25260 0.58130 0.52120 0.77270 0.99670

[[2]]
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
0.006796 0.259700 0.528100 0.515500 0.781900 0.993100

[[3]]
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.00927 0.22800 0.40780 0.46410 0.69460 0.98780

I want to extract (say) the medians as a vector.  This would be:
x[[1]][[3]]
x[[2]][[3]]
x[[3]][[3]]

I thought there would be a way of doing this with something like
apply(), but I cannot work it out.  Is there a way of doing this
without a loop?  Thanks,

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Factor Analysis with orthogonal and oblique rotation

2011-06-22 Thread Jeremy Miles
Varimax is orthogonal, promax is oblique.  Varimax is generally not
recommended.  See: Preacher, K. J., & MacCallum, R. C. (2003).
Repairing Tom Swift's electric factor analysis machine. Understanding
Statistics, 2(1), 13-43.   (Google the title and you'll find a PDF).

The fa() function in the psych package has more flavors of extraction
and rotation.
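For example (a sketch; 'dat' is an assumed numeric data frame and nfactors = 2
is purely illustrative):

library(psych)
fa(dat, nfactors = 2, rotate = "oblimin")   # oblique rotation
fa(dat, nfactors = 2, rotate = "varimax")   # orthogonal rotation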

Jeremy


On 22 June 2011 14:02, Rosario Garcia Gil  wrote:
> Hello
> I seem to find only two types of rotation for the factanal function in R, the 
> Varimax and Promax, but is it possible to run orthogonal and oblique
> rotations in R?
>
> Thanks in advance
> Rosario
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Bartlett's Test of Sphericity

2011-06-17 Thread Jeremy Miles
cortest.bartlett() in the psych package.

I've never seen a non-significant Bartlett's test.
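For example (a sketch; 'dat' is the assumed raw data frame used for the PCA):

library(psych)
cortest.bartlett(cor(dat), n = nrow(dat))   # Bartlett's test of sphericity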

Jeremy



On 17 June 2011 12:43, thibault grava  wrote:
> Hello Dear R user,
>
> I want to conduct a principal components analysis and I need to run two
> tests to check whether I can do it or not. I found how to run the KMO
> test; however, I cannot find an R function for Bartlett's test of
> sphericity. Does somebody know if it exists?
>
> Thanks for your help!
>
> Thibault
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] BIZARRE results from wilcox.test()

2011-06-14 Thread Jeremy Miles
The results weren't BIZARRE (or even bizarre).  You didn't understand
them, but that doesn't make them bizarre.  (I didn't understand them
either, but thanks to the replies, now I do).

Why not send something more similar to your dataset to ensure you get
relevant answers?

Jeremy



On 14 June 2011 15:26, genecleaner  wrote:
> Dear Daniel and Sarah,
>
> Thank you for your rude replies.
> The script that I provided was only an example, to illustrate the
> problem. It makes perfect sense to use the Wilcoxon test on my datasets.
> However, your replies were nonsensical, since you could not solve the problem
> but rather just bullied me.
>
> Anyway, this is the solution to the problem: the exact=TRUE statement should
> be added
>
>> w <- wilcox.test(c(1:50),(c(1:50)+100))
>> w$p.value
> [1] 7.066072e-18
>> w <- wilcox.test(c(1:50),(c(1:50)+100), exact=TRUE)
>> w$p.value
> [1] 1.982331e-29
>
> Best regards,
> genecleaner
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/BIZARRE-results-from-wilcox-test-tp3597818p3598039.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Results of CFA with Lavaan

2011-06-08 Thread Jeremy Miles
What do you mean by latent estimate?

The table of variances has variances for each factor.

Is there something different in the sem output that you don't see here?

Yes, this looks normal.

Jeremy



On 8 June 2011 13:14, R Help  wrote:
> I've just found the lavaan package, and I really appreciate it, as it
> seems to succeed with models that were failing in sem::sem.  I need
> some clarification, however, in the output, and I was hoping the list
> could help me.
>
> I'll go with the standard example from the help documentation, as my
> problem is much larger but no more complicated than that.
>
> My question is, why is there one latent estimate that is set to 1 with
> no SD for each factor?  Is that normal?  When I've managed to get
> sem::sem to fit a model this has not been the case.
>
> Thanks,
> Sam Stewart
>
> HS.model <- ' visual  =~ x1 + x2 + x3
>              textual =~ x4 + x5 + x6
>              speed   =~ x7 + x8 + x9 '
> fit <- sem(HS.model, data=HolzingerSwineford1939)
> summary(fit, fit.measures=TRUE)
> Lavaan (0.4-8) converged normally after 35 iterations
>
>  Number of observations                           301
>
>  Estimator                                         ML
>  Minimum Function Chi-square                   85.306
>  Degrees of freedom                                24
>  P-value                                        0.000
>
> Chi-square test baseline model:
>
>  Minimum Function Chi-square                  918.852
>  Degrees of freedom                                36
>  P-value                                        0.000
>
> Full model versus baseline model:
>
>  Comparative Fit Index (CFI)                    0.931
>  Tucker-Lewis Index (TLI)                       0.896
>
> Loglikelihood and Information Criteria:
>
>  Loglikelihood user model (H0)              -3737.745
>  Loglikelihood unrestricted model (H1)      -3695.092
>
>  Number of free parameters                         21
>  Akaike (AIC)                                7517.490
>  Bayesian (BIC)                              7595.339
>  Sample-size adjusted Bayesian (BIC)         7528.739
>
> Root Mean Square Error of Approximation:
>
>  RMSEA                                          0.092
>  90 Percent Confidence Interval          0.071  0.114
>  P-value RMSEA <= 0.05                          0.001
>
> Standardized Root Mean Square Residual:
>
>  SRMR                                           0.065
>
> Parameter estimates:
>
>  Information                                 Expected
>  Standard Errors                             Standard
>
>
>                   Estimate  Std.err  Z-value  P(>|z|)
> Latent variables:
>  visual =~
>    x1                1.000
>    x2                0.554    0.100    5.554    0.000
>    x3                0.729    0.109    6.685    0.000
>  textual =~
>    x4                1.000
>    x5                1.113    0.065   17.014    0.000
>    x6                0.926    0.055   16.703    0.000
>  speed =~
>    x7                1.000
>    x8                1.180    0.165    7.152    0.000
>    x9                1.082    0.151    7.155    0.000
>
> Covariances:
>  visual ~~
>    textual           0.408    0.074    5.552    0.000
>    speed             0.262    0.056    4.660    0.000
>  textual ~~
>    speed             0.173    0.049    3.518    0.000
>
> Variances:
>    x1                0.549    0.114    4.833    0.000
>    x2                1.134    0.102   11.146    0.000
>    x3                0.844    0.091    9.317    0.000
>    x4                0.371    0.048    7.778    0.000
>    x5                0.446    0.058    7.642    0.000
>    x6                0.356    0.043    8.277    0.000
>    x7                0.799    0.081    9.823    0.000
>    x8                0.488    0.074    6.573    0.000
>    x9                0.566    0.071    8.003    0.000
>    visual            0.809    0.145    5.564    0.000
>    textual           0.979    0.112    8.737    0.000
>    speed             0.384    0.086    4.451    0.000
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] best subset regression in R

2011-05-04 Thread Jeremy Miles
On 4 May 2011 09:47, FMH  wrote:
> Dear All,
>
> Could someone please give some advice on how to do linear modelling via best
> subset regression in R? I'd really appreciate your kindness.
>


Google is your friend here:
http://www.google.com/search?q=best+subsets+regression+R , and sends
me to this page:
http://www.statmethods.net/stats/regression.html
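One common option is the leaps package; a minimal sketch (the data frame 'dat'
with outcome y and candidate predictors is an assumption):

library(leaps)
fits <- regsubsets(y ~ ., data = dat, nvmax = 8)
summary(fits)$adjr2   # adjusted R-squared for the best model of each size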

Jeremy

-- 
Jeremy Miles
Support Dan and Alex's school: Vote for Goethe Charter School to
receive a grant from Pepsi to help build a library:
http://www.refresheverything.com/gicslibrary

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fisher exact for > 2x2 table

2011-04-29 Thread Jeremy Miles
On 29 April 2011 08:43, viostorm  wrote:
>
> After I shared comments form the forum yesterday with the biostatistician he
> indicated this:
>
> "Fisher's exact test is the non-parametric analog for the Chi-square
> test for 2x2 comparisons. A version (or extension) of the Fisher's Exact
> test, known as the Freeman-Halton test applies to comparisons for tables
> greater than 2x2. SAS can calculate both statistics using the following
> instructions.
>
>  proc freq; tables a * b / fisher;"
>


SAS documentation says:

"Fisher's exact test was extended to general R×C tables by Freeman and
Halton (1951), and this test is *also* known as the Freeman-Halton
test."

Emphasis mine.
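In R, fisher.test() already handles general r x c tables, which is exactly this
extension; a toy sketch (the counts are made up):

tab <- matrix(c(8, 2, 5, 5, 1, 9), nrow = 3, byrow = TRUE)   # a 3 x 2 table
fisher.test(tab)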

Jeremy


-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rcmdr vs SPSS in hungarian

2011-04-21 Thread Jeremy Miles
Just because it comes from a book does not make it true or correct.
Books are subject to considerably less peer review than journal
articles.  Publishers will publish a book written by (almost) anyone -
I know this, because I've written some of them and they were
published.

There really isn't much difference, most of the time, between
different sorts of residuals; usually they are used for eyeballing
potential problems in your data, in which case it doesn't matter which
you use.  If you want residuals where you know the distribution under
the null hypothesis, then you should use the studentized (which SPSS
calls studentized-deleted).

Jeremy



2011/4/21 Tamas Barjak :
> I hope someone reads this too, understands it, and replies to my problem.
>
> My problem is the following:
>
> I computed the residuals with R Commander and also with SPSS, and this is
> where the trouble really starts. For the studentized deleted residuals, SPSS
> gave the same result as Rcmdr gave for the studentized residuals. The SPSS
> output is certainly correct, because I used data from a book and the results
> match the book's results. The Rcmdr output, however, does not. I don't
> understand: if I want to compute the studentized residuals with Rcmdr, why do
> I get the studentized deleted residuals??? How do I get the studentized
> residuals???
>
> Rcmdr--> Models--> Add observation statistics to data--> Studentized
> residuals
>
> I raised this problem on the forum yesterday evening, but because of my
> limited language skills many people did not understand it.
>
> I hope someone replies!
>
> Thank you very much in advance!
>
>        [[alternative HTML version deleted]]
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rcmdr vs SPSS

2011-04-20 Thread Jeremy Miles
What's the mistake?  They look like the same numbers to me.  (Although
I didn't check them all).

Oh, hang on, are you saying that they're different kinds of residuals,
but they are the same?  This is because SPSS names its residuals
wrongly.

SPSS has standardized residuals, these are residuals divided by the
standard deviation of the residuals, calculated as the overall SD.

SPSS has studentized residuals.  Everyone else calls these
standardized residuals.

SPSS has studentized deleted residuals, which follow a student's
t-distribution.  Everyone else calls these studentized residuals
(because they follow that distribution).  John Fox's regression book
<http://socserv.mcmaster.ca/jfox/Books/Companion/> explains all of
this nicely.

Jeremy




On 20 April 2011 14:36, Tamas Barjak  wrote:
> Hy all!
>
> Excuse me for the inaccurate composition, but I do not speak well in
> English.
>
> I noticed a mistake in Rcmdr (?) -- Models menu --- Add observation
> statistics to data --- Studentized residuals.
> My output :
>
> (Rcmdr !!!)
>
> rstudent.RegModel.1 (= *Studentized residuals*)
>
> -1.5690952
> -0.0697492
> -0.6830684
> 1.0758056
> 0.2719739
> 0.3626101
> 0.8361803
> 1.0180479
> 0.8936783
> -0.4630021
> -3.2972946
>
> AND!!!
>
> SPSS (= *Studentized DELETED residuals*)
>
> -1,56910
> -0,06975
> -0,68307
> 1,07581
> 0,27197
> 0,36261
> 0,83618
> 1,01805
> 0,89368
> -0,46300
> -3,29729
>
> This wrong???
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Monte Carlo Simulation

2011-04-15 Thread Jeremy Miles
On 15 April 2011 12:03, Shane Phillips  wrote:
> Here's a script of what I have so far.  I have a few problems.  First, the 
> correlations.  Next, recoding that categorical variable into dichotomous 
> variables.  Finally, the iterative filename thing.
>

 Where?

Perhaps give the list one question at a time?

Here's a start for one of the questions, selected (almost) at random:
give 80% of people a score of 1 on x1, and 20% of people a score of zero.

tempVar <- runif(1000)
x1 <- ifelse(tempVar < 0.8, 1, 0)
rm(tempVar)


Tell us how far you've got.  Do you need to know about write.table()
for saving the files, or is the problem with splitting a large
file, or generating the 1000 names ...
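
In case the file names are the sticking point, a minimal sketch (the file
name pattern and the simulated data are made up):

for (i in 1:1000) {
  tempVar <- runif(100)
  dat <- data.frame(x1 = ifelse(tempVar < 0.8, 1, 0))
  write.table(dat,
              file = sprintf("sim_%04d.txt", i),  # sim_0001.txt, sim_0002.txt, ...
              row.names = FALSE, quote = FALSE)
}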


Jeremy



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Correlation Matrix

2011-04-07 Thread Jeremy Miles
On 7 April 2011 12:09, Dmitry Berman  wrote:
> Listers,
>
> I have a question regarding correlation matrices. It is fairly straight
> forward to build a correlation matrix of an entire data frame. I simply use
> the command cor(MyDataFrame). However, what I would like to do is construct
> a smaller correlation matrix using just three of the variable out of my data
> set.
>
> When I run this:
> cor(MyDataFrame$variable1, MyDataFrame$variable2,MyDataFrame$variable3) I
> get an error.
>
> Is there a way to do this through a built in function or is this something I
> have to construct manually?
>


You can use cbind().

cor(cbind(MyDataFrame$variable1, MyDataFrame$variable2,MyDataFrame$variable3) )
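
Or subset the data frame by name, which keeps the variable names in the
output.  A small self-contained illustration (the data here are made up):

MyDataFrame <- data.frame(variable1 = rnorm(20),
                          variable2 = rnorm(20),
                          variable3 = rnorm(20),
                          variable4 = rnorm(20))

cor(MyDataFrame[, c("variable1", "variable2", "variable3")])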

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural equation modeling in R(lavaan,sem)

2011-04-03 Thread Jeremy Miles
On 3 April 2011 12:38, jouba  wrote:
>
> Daer all,
> I have a question concerning longitudinal data:
> When we have a longitudinal data and we have to do sem analysis there is in 
> the package lavaan some functions,options in this package that help to do 
> this or we can treat these data like non longitudinal data
>


No, and (qualified) no.

1. There are (AFAIK) no options or functions that are specific to
longitudinal data.

2. You don't treat these data as non-longitudinal data; rather, you add
the parameters that are appropriate.  For example, look at the
model shown on http://lavaan.ugent.be.  dem60 and dem65 are two
measures of the same construct at different timepoints, so there are
correlations over time for each pair of measured variables that are
measures of that construct - i.e. y1 ~~ y5 (a sketch follows after point 3).

3. You would get much better answers on the SEM mailing list - semnet.
You can join it here: http://www2.gsu.edu/~mkteer/semnet.html#Joining.
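
To illustrate point 2, a minimal sketch based on (and simplified from) the
example on the lavaan website, using the PoliticalDemocracy data that ship
with lavaan:

library(lavaan)

model <- '
  # the same construct measured at two timepoints
  dem60 =~ y1 + y2 + y3 + y4
  dem65 =~ y5 + y6 + y7 + y8

  # residual correlations between the same indicator over time
  y1 ~~ y5
  y2 ~~ y6
  y3 ~~ y7
  y4 ~~ y8
'

fit <- cfa(model, data = PoliticalDemocracy)
summary(fit, fit.measures = TRUE)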

Jeremy



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural equation modeling in R(lavaan,sem)

2011-03-29 Thread Jeremy Miles
sem (the package) documentation is not intended to teach you how to do
SEM (the technique) (there's very little R documentation that is
intended to teach you how to do a particular statistical technique).

There are several good books out there, but here's a free access
journal article, which will help.

http://www.biomedcentral.com/1756-0500/3/267

Might I also suggest you take a look at the semnet list, which is
populated by practitioners of SEM.

Jeremy



On 29 March 2011 12:25, jouba  wrote:
> Dear all,
> There is some where documentation to understand all  indices in the output
> of the function sem(package lavaan ) ??
> for example  Chi-square test baseline model, Full model versus baseline
> model, Loglikelihood and Information Criteria, Root Mean Square Error of
> Approximation, Standardized Root Mean Square Residual…
> Th same question for the sem funtion (sem package)
> Thanks in advance for your help
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Structural-equation-modeling-in-R-lavaan-sem-tp3409642p3415954.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural equation modeling in R(lavaan,sem)

2011-03-28 Thread Jeremy Miles
On 28 March 2011 09:00, jouba  wrote:

Your syntax is not very tidy. That makes it hard to check.


> x1 <->x1, sigmma7, NA
> for me this  an exogen variable and i am not obliged to specify this
> equation
>
> model.se<-specify.model()
> x1->x2,gamm1,NA
> x2->x3,gamm2,NA
> x3>x4,gamm3,NA
>

That's probably wrong.


> x4->x5,gamm4,NA
> x7->x6,gamm5,NA
> x6->x5,gamm6,NA
>

Are the above two correct?


> x2 <->x2 ,sigmma1,NA
> x3 <->x3 ,simma2,NA
> x4 <->x4 ,sigmma3,NA
> x5 <->x5 ,sigmma4,NA
> x7 <->x7 ,sigmma5,NA
>
x6 <->x6 ,sigmma6,NA
>
>

It's a somewhat unusual looking model. What are you trying to do?

Jeremy


-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural equation modeling in R(lavaan,sem)

2011-03-27 Thread Jeremy Miles
On 27 March 2011 12:12, jouba  wrote:

> I am a new user of the function sem in package sem and lavaan for
> structural
> equation modeling
> 1. I don’t know what is the difference between this function and CFA
> function, I know that cfa for confirmatory analysis but I don’t  know what
> is the difference between confirmatory analysis and  structural equation
> modeling in the package lavaan.
>

Confirmatory factor analyses are a class of SEMs.  All CFAs are SEMs; some
SEMs are CFAs.  Usually (but definitions vary), if you have a measurement
model only, that's a CFA.  If you have a structural model too, that's an SEM.

If you don't understand this distinction, might I suggest a little more
reading before you launch into the world of lavaan?  Things can get quite
tricky quite quickly.
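
To make the distinction concrete, a minimal sketch using the
HolzingerSwineford1939 data that ship with lavaan (the variable choices
here are arbitrary):

library(lavaan)

# measurement model only -> usually called a CFA
cfa.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'
fit.cfa <- cfa(cfa.model, data = HolzingerSwineford1939)

# add a structural (regression) part between the latents -> usually called SEM
sem.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  textual ~ visual
'
fit.sem <- sem(sem.model, data = HolzingerSwineford1939)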


> 2. I have data that I want to analyse but I have some missing data I must
> to
> impute these missing data and I use this package or there is a method that
> can handle missing data (I want to avoid to delete observations where I
> have
> some missing data)
>

No, you can use full information maximum likelihood estimation (= direct ML)
to model data in the presence of missing data.


> 3. I have to use variables that arn’t normally distributed , even if I
> tried
> to do some transformation to theses variables t I cant success to have
> normally distributed data , so I decide to  work with these data non
> normally distributed, my question  my result will be ok even if I have non
> normally distributd data.
>

Depends.  Lavaan can do things like the Satorra-Bentler scaled chi-square,
which is robust to non-normality and corrects your chi-square for
(multivariate) kurtosis.
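
A sketch of the lavaan arguments for points 2 and 3 (using the illustrative
model and data from above, not your data; check the lavaan help pages for
the exact option names in your version):

library(lavaan)

model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'

# point 2: full information ML in the presence of missing data
fit.fiml <- cfa(model, data = HolzingerSwineford1939, missing = "ml")

# point 3: Satorra-Bentler scaled chi-square / robust standard errors
fit.mlm  <- cfa(model, data = HolzingerSwineford1939, estimator = "MLM")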


> 4. If I work with the package ggm for separation d , without latent
> variables we will have the same result as SEM function I guess
>

Not familiar with ggm.  I'll leave that for someone else.


> 5. How about when we have the number of observation is small n, and what
>  is
> the method to know that we have the minimum of observation required??
>
>
>
>
Another very difficult question.  Short answer:  it depends.  Sometimes you
see recommendations based on the number of participants per parameter, which
is usually around 5-10.  These are somewhat flawed, but it's better than
nothing.

Again, I should reiterate that you have a hard road in front of you, and it
will be made much easier if you read a couple of introductory SEM texts,
which will  answer this sort of question.


Jeremy



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] testing power of correlation

2011-03-05 Thread Jeremy Miles
Can you clarify what you mean?  The strength of the correlation is the
correlation. One (somewhat) useful definition is Cohen's, who said 0.1
is small, 0.3 is medium and 0.5 is large.

Or do you (as your subject says) want to get the power for a
correlation?  This is a different thing.
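
If it is power you're after, a sketch using the pwr package (an add-on
package; the numbers here are made up):

library(pwr)
pwr.r.test(r = 0.3, power = 0.80, sig.level = 0.05)  # n needed to detect r = .3
pwr.r.test(r = 0.3, n = 50, sig.level = 0.05)        # power for r = .3 with n = 50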

Jeremy



On 5 March 2011 12:02, Anna Gretschel  wrote:
> Dear List,
>
> does anyone know how I can test the strength of a correlation?
>
> Cheers, Anna
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem problem - did not converge

2011-02-14 Thread Jeremy Miles
40 has 1 in the diag, but any 0
>
> what should i do? i tryed several things...
>
> all value positive..
>
> #
>
>> eigen(hetdados40)$values
>  [1] 14.7231030  4.3807378  1.6271780  1.4000193  1.0670784  1.0217670
>  [7]  0.8792466  0.8103790  0.7397817  0.7279262  0.6909955  0.6589746
> [13]  0.6237204  0.6055884  0.550  0.5712017  0.5469284  0.5215437
> [19]  0.5073809  0.4892339  0.4644124  0.4485545  0.4372404  0.4290573
> [25]  0.4270672  0.4071262  0.3947753  0.3763811  0.3680527  0.3560231
> [31]  0.3537934  0.3402836  0.3108977  0.3099143  0.2819351  0.2645035
> [37]  0.2548654  0.2077900  0.2043732  0.1923942
>> eigen(dados40.cov)$values
>  [1] 884020.98 337855.95 138823.30 126291.58  87915.21  79207.04  73442.71
>  [8]  68388.11  60625.26  58356.54  55934.05  54024.00  50505.10  48680.26
> [15]  46836.47  45151.23  43213.65  41465.42  40449.59  37824.73  37622.43
> [22]  36344.34  35794.22  33959.29  33552.64  32189.94  31304.44  30594.85
> [29]  30077.32  29362.66  26928.12  26526.72  26046.47  24264.50  23213.18
> [36]  21503.97  20312.55  18710.97  17093.24  14372.21
>
> #
>
>
> there are 40 variables and 1004 subjects, should not be a problem the number
> of variables also!
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/sem-problem-did-not-converge-tp3305200p3305200.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Revolution Analytics reading SAS datasets

2011-02-10 Thread Jeremy Miles
On 10 February 2011 12:01, Matt Shotwell  wrote:
> On Thu, 2011-02-10 at 10:44 -0800, David Smith wrote:
>> The SAS import/export feature of Revolution R Enterprise 4.2 isn't
>> open-source, so we can't release it in open-source Revolution R
>> Community, or to CRAN as we do with the ParallelR packages (foreach,
>> doMC, etc.).
>
> Judging by the language of Dr. Nie's comments on the page linked below,
> it seems unlikely this feature is the result of a licensing agreement
> with SAS. Is that correct?
>


There was some discussion of this on the SAS email list.  People who
seem to know what they were talking about said that they would have
had to reverse engineer it to decode the file format.  It's slightly
tricky legal ground - the file format can't be copyrighted, but
publishing the algorithm might not be allowed.  I guess if they
release it as open source, that could be construed as publishing the
algorithm.  (SPSS and WPS can both open SAS files, and I'd be surprised
if SAS licensed the format to them - especially WPS, whom SAS are (or
were) suing for all kinds of things in court in London.)

Jeremy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generate data from correlation matrix

2011-02-07 Thread Jeremy Miles
Hi All,

I was wondering if anyone knew of a function which would generate data
from a pre-specified correlation matrix (as in the Stata command
r2corr) or sampled from a population with a specific
covariance/correlation.  (I thought I'd check before I wrote something
inelegant and slow.)
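
For reference, one possible approach, as a sketch (the target matrix below
is made up): MASS::mvrnorm() with empirical = TRUE reproduces the
correlation matrix exactly in the sample, and with empirical = FALSE it
samples from a population with that correlation structure.

library(MASS)

R <- matrix(c(1.0, 0.5, 0.3,
              0.5, 1.0, 0.4,
              0.3, 0.4, 1.0), nrow = 3)

x <- mvrnorm(n = 100, mu = rep(0, 3), Sigma = R, empirical = TRUE)
round(cor(x), 3)   # matches R up to rounding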

Jeremy

-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] HLM Model

2011-01-27 Thread Jeremy Miles
The empirical statement on the proc mixed line gives you robust
standard errors; I don't think you get them in R.

In SAS you specify that the predictors are to be dummy coded using the
class statement.  Are they factors in R?  I can't tell from the SAS output,
because the formatting has been lost.  However, it appears that in R
you did not dummy code them.  It also appears you haven't given us
all of the SAS output.
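
A hedged sketch of the factor-coding step (the data below are simulated
stand-ins with the same variable names as your model, not your data):

set.seed(1)
test <- data.frame(
  Pre    = rnorm(120, 600, 40),
  trt    = sample(0:1, 120, replace = TRUE),
  pairs  = sample(1:6, 120, replace = TRUE),
  grade  = sample(3:5, 120, replace = TRUE),
  school = sample(1:2, 120, replace = TRUE),
  team   = sample(1:12, 120, replace = TRUE)
)

# lmer dummy-codes factors in the way SAS's CLASS statement would,
# so the categorical predictors need to be factors first:
test$pairs  <- factor(test$pairs)
test$grade  <- factor(test$grade)
test$school <- factor(test$school)
test$team   <- factor(test$team)

library(lme4)
mixed <- lmer(Pre ~ trt + pairs + grade + school + (1 | team), data = test)
summary(mixed)   # one fixed-effect row per non-reference level of each factor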

Jeremy





On 27 January 2011 15:52, Belle  wrote:
>
> Hi Harold:
>
> I know the outputs are different between SAS and R, but the results that I
> got have big difference.
>
> Here is part of the result based on the SAS code I provided earlier:
>
> Cov Parm    Subject     Estimate      Error     Value     Pr > Z
> UN(1,1)     team          177.53     273.66      0.65     0.2583
> Residual                 2161.15    67.1438     32.19     <.0001
>
>                     Solution for Fixed Effects
>
>                                              Standard
> Effect      pairs  grade  school   Estimate     Error   DF   t Value   Pr > |t|
> Intercept                           638.82     4.6127    5    138.49     <.0001
> trt                                 -0.2955    3.4800    5     -0.08     0.9356
> pairs        1                       0.1899    7.1651    5      0.03     0.9799
> pairs        2                      31.1293    6.0636    5      5.13     0.0037
>   .
>   .
>   .
>
> In R:
> library(lme4)
> mixed<- lmer(Pre~trt+pairs+grade+school+(1|team), test)
>
> result:
>
> Random effects:
>  Groups     Name         Variance  Std.Dev.
>  team       (Intercept)    568.61    23.846
>  Residual                 2161.21    46.489
>
> Fixed effects:
>              Estimate  Std. Error  t value
> (Intercept)   540.402      43.029   12.559
> trt             7.291      13.084    0.557
> pairs          -3.535       6.150   -0.575
>
> In random effect, the variance of team in SAS is 177.53, but it is 568.61 in
> R. Also I have negative estimate for trt in SAS but positive estimate for
> trt in R. I am wondering how this happened, and how can I solve this problem
> so that I can get similar result from both software.
>
> Also does R provides result for fixed effect of each level? For example, the
> result of pair1, pair2,pair3,..., and grade1, grade2, grade3,...
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/HLM-Model-tp3242999p3243475.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Merge() error

2010-12-13 Thread Jeremy Miles
Hi All,

I'm getting some weird problems with merge(), which give me the

Error in match.names(clabs, names(xi)) :
  names do not match previous names

Error.

I've found other people discussing this error, but they don't seem to match
my situation, and the strange thing is that changing the order of the data
frames that I'm merging can remove the error.

For example:


bt <- merge(assessmentb, assessmentb2,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbassessment2,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbarms2,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbstudydetail,  by=("caseid"), all=TRUE)

But if I change it to:


bt <- merge(assessmentb, assessmentb2,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbarms2,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbstudydetail,  by=("caseid"), all=TRUE)
bt <- merge(bt, tbassessment2,  by=("caseid"), all=TRUE)  # <<< This one moved from second place to the end


The error occurs.

Similarly, I find:

bt <- merge(np, bt,  by=("caseid"), all=TRUE)

Gives the error, but swapping the order of the data frames does not

bt <- merge( bt, np,  by=("caseid"), all=TRUE)  # Changed from np, bt to bt, np.


The code always worked fine before, until someone 'helpfully' duplicated
some of the variables across the data frames, so in addition to caseid, each
also contains AuthorYr and StudyDesign.

Thanks,

Jeremy









-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how do I make a correlation matrix positive definite?

2010-10-21 Thread Jeremy Miles
Let me rephrase the answer. :)

Correlation matrices are a kind of covariance matrix, where all of the
variances are equal to 1.00.

From what I understand of make.positive.definite() [which is very
little], it (effectively) treats the matrix as a covariance matrix,
and finds a matrix which is positive definite.   This now comprises a
covariance matrix where the variances are not 1.00.  Imagine you had
some data which generated that covariance matrix.  You could calculate
the correlations - that's exactly what you do, by standardizing the
data, or more easily by standardizing the matrix, which turns it from
a (positive definite) covariance matrix to the equivalent (positive
definite) correlation matrix.
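
A sketch of that two-step idea, using the matrix from your message
(make.positive.definite() is in the corpcor package):

library(corpcor)

corr.mat <- matrix(c( 1.00, -0.95, -0.28, -0.64,
                     -0.95,  1.00, -0.81, -0.38,
                     -0.28, -0.81,  1.00, -0.11,
                     -0.64, -0.38, -0.11,  1.00), nrow = 4)

pd <- make.positive.definite(corr.mat)  # positive definite, but diagonal != 1
cov2cor(pd)                             # rescale back to a correlation matrix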

Maybe I've misunderstood, but this seems to be what you're after.  If
not, can you explain what you are after (or perhaps wait for another
answer, from someone who has not misunderstood). :)

Jeremy






On 21 October 2010 16:14, HAKAN DEMIRTAS  wrote:
>
> I know.
>
> Let me re-phrase the question: How do I convert a non-positive definite
> correlation matrix to a positive-definite correlation matrix in R? I don't
> think cov2cor is relevant here.
>
> Example:
>
>>  print(corr.mat)
>
>     [,1]  [,2]  [,3]  [,4]
> [1,]  1.00 -0.95 -0.28 -0.64
> [2,] -0.95  1.00 -0.81 -0.38
> [3,] -0.28 -0.81  1.00 -0.11
> [4,] -0.64 -0.38 -0.11  1.00
>>
>> is.positive.definite(corr.mat)
>
> [1] FALSE
>>
>> make.positive.definite(corr.mat)
>
>          [,1]       [,2]       [,3]       [,4]
> [1,]  1.2105898 -0.7221551 -0.1246443 -0.4971036
> [2,] -0.7221551  1.2465138 -0.6419150 -0.2253951
> [3,] -0.1246443 -0.6419150  1.1146085 -0.0045829
> [4,] -0.4971036 -0.2253951 -0.0045829  1.0969628
>
>
>
> - Original Message -
> You could use cov2cor() to convert from covariance matrix to
> correlation matrix.  If the correlation is >1, the matrix won't be
> positive definite, so you can restandardize the matrix to get a pos
> def correlation matrix.
>
> Jeremy
>
>
> On 21 October 2010 15:50, HAKAN DEMIRTAS  wrote:
>>
>> Hi,
>>
>> If a matrix is not positive definite, make.positive.definite() function in
>> corpcor library finds the nearest positive definite matrix by the method
>> proposed by Higham (1988).
>>
>> However, when I deal with correlation matrices whose diagonals have to be
>> 1 by definition, how do I do it? The above-mentioned function seem to mess
>> up the diagonal entries. [I haven't seen this complication, but obviously
>> all entries must remain in (-1,1) range after conversion.]
>>
>> Any R tools to handle this?
>>
>> I'd appreciate any help.
>>
>> Hakan Demirtas
>>
>>
>> [[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jeremy Miles
> Psychology Research Methods Wiki: www.researchmethodsinpsychology.com
>
>
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how do I make a correlation matrix positive definite?

2010-10-21 Thread Jeremy Miles
You could use cov2cor() to convert from covariance matrix to
correlation matrix.  If the correlation is >1, the matrix won't be
positive definite, so you can restandardize the matrix to get a pos
def correlation matrix.

Jeremy


On 21 October 2010 15:50, HAKAN DEMIRTAS  wrote:
> Hi,
>
> If a matrix is not positive definite, make.positive.definite() function in 
> corpcor library finds the nearest positive definite matrix by the method 
> proposed by Higham (1988).
>
> However, when I deal with correlation matrices whose diagonals have to be 1 
> by definition, how do I do it? The above-mentioned function seem to mess up 
> the diagonal entries. [I haven't seen this complication, but obviously all 
> entries must remain in (-1,1) range after conversion.]
>
> Any R tools to handle this?
>
> I'd appreciate any help.
>
> Hakan Demirtas
>
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] FDR

2010-10-07 Thread Jeremy Miles
This is correct.  Wikipedia is not bad, and provides some references.
 Another web page:
http://courses.ttu.edu/isqs6348-westfall/images/6348/BonHolmBenHoch.htm
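
For what it's worth, a sketch of the Benjamini-Hochberg arithmetic, which
shows why the three different p-values in your second example all end up
at 0.05 (and why three identical 0.05s stay at 0.05):

p <- c(0.05, 0.04, 0.03)
n <- length(p)
o <- order(p, decreasing = TRUE)           # largest p first
adj <- pmin(1, cummin(p[o] * n / (n:1)))   # p * n / rank, then a running minimum
adj[order(o)]                              # 0.05 0.05 0.05, in the original order

p.adjust(p, "fdr")                         # same result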



Jeremy





On 7 October 2010 10:37,  wrote:

> Dear R users,
>
> I am wondering about the following results:
> > p.adjust(c(0.05,0.05,0.05),"fdr")
> [1] 0.05 0.05 0.05
> > p.adjust(c(0.05,0.04,0.03),"fdr")
> [1] 0.05 0.05 0.05
>
> Why does p.adjust(..., "fdr") not adjust p-values, if they are constant?
> Does somebody have an explanation or can point to a reference?
>
> Thanks in advance,
>
> Will
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] can I add line breaks to the paste() function?

2010-09-30 Thread Jeremy Miles
Try using cat instead.  Then "\n" is the new line character.

E.g.
 cat("1st line\n2nd line\n")

Jeremy




On 30 September 2010 13:30, David LeBauer  wrote:
> Can I add a line break to the paste() function to return the following:
>
> 'this is the first line'
> 'this is the second line'
>
> instead of
> 'this is the first line this is the second line'
>
> ?
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R Founding

2010-09-16 Thread Jeremy Miles
> I know from organizing a conference in Germany that the only really good way
> was and is ordinary money transfer via BIC and IBAN numbers. Unfortunately,
> this system is pretty unknown in the US. Europeans can easily use money
> transfer to the R foundation.
>

Paypal?

Many open source  projects have a 'donate with paypal' button.

Jeremy


-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] which one give clear picture-pdf, jpg or tiff?

2010-08-19 Thread Jeremy Miles
PNG is better for black and white graphs; tiff is better for more colors,
but gives a big file.  JPG has lossy compression, and is good with
detailed colors (like photos) if you want a smaller file.  You can't
import (AFAIK) a PDF into Word.
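
A sketch of writing the same plot to the different devices (file names are
made up; the width/height/res values are just examples):

png("myplot.png", width = 1600, height = 1200, res = 300)   # good for line art
plot(dist ~ speed, data = cars)
dev.off()

tiff("myplot.tiff", width = 1600, height = 1200, res = 300,
     compression = "lzw")                                   # lossless, big file
plot(dist ~ speed, data = cars)
dev.off()

pdf("myplot.pdf", width = 6, height = 4)                    # vector format
plot(dist ~ speed, data = cars)
dev.off()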

On 19 August 2010 20:32, Roslina Zakaria  wrote:
> Hi,
>
> I need some opinion.  I would like to use graph that I generate from R code 
> and
> save it into word document.  Which format is better? pdf, jpeg or tiff?
>
> Thank you.
>
>
>
>        [[alternative HTML version deleted]]
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] meta-analysis in R

2010-08-19 Thread Jeremy Miles
>
>
> I am trying to explore the citation bias by perfroming meta-analysis. I need 
> to plot a forest plot > on some other proportions other than the usual effect 
> size OR,RR, RD.
>

For meta-analysis, it does not matter what the effect size is
(usually). One calculates the effect size, and one calculates the
standard error of that effect size.  The effect size and standard
error are then fed into the meta-analysis procedure.  You tell us
what you don't want to do, but you don't tell us what you want to do.
However, if you can calculate the statistic, you can use it.

I like the metafor package, which uses escalc() to calculate effect
sizes (although you can get your effect sizes from anywhere) and then
rma() to meta-analyze them.  It will do things like forest plots.
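
For example, a sketch for proportions (the counts below are made up): xi is
the number of events and ni the sample size in each study.

library(metafor)

xi <- c(12, 30,  9, 45)
ni <- c(50, 90, 40, 120)

dat <- escalc(measure = "PLO", xi = xi, ni = ni)   # logit-transformed proportions
res <- rma(yi, vi, data = dat)
forest(res, transf = transf.ilogit)                # back-transform to proportions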

> I still do not have any idea after searching google and reading relevant 
> books. Can anyone
> kindly help? Thank you in advance.
>

We can try, but tell us what you want to do.

Jeremy






-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A %nin% operator?

2010-08-05 Thread Jeremy Miles
A related hint, Google doesn't let you search for %nin%, because it
ignores % symbols (and most other punctuation), but cuil does allow
you to search:
http://cuil.com/search?q=%25nin%25+R

On 5 August 2010 08:53, David Winsemius  wrote:
> The examples in the help page for "%in%" (shared by "match") has the
> definition of a "%w/o%" binary operator.
>
> "%w/o%" <- function(x,y) x[!x %in% y] #-- x without y
> since:
>  "%in%" <- function(x, table) match(x, table, nomatch = 0) > 0
> It appears that you have just re-invented the without-wheel. (which also
> seems to be happening a lot in Formula 1 races lately.)
> --
> David.
> On Aug 5, 2010, at 11:19 AM, Ken Williams wrote:
>
>> Sometimes I write code like this:
>>
>>> qf.a <- subset(qf, pubid %in% c(104, 106, 107, 108))
>>> qf.b <- subset(qf, !pubid %in% c(104, 106, 107, 108))
>>
>> and I get a little worried that maybe I've remembered the precedence rules
>> wrong, so I change it to
>>
>>> qf.a <- subset(qf, pubid %in% c(104, 106, 107, 108))
>>> qf.b <- subset(qf, !(pubid %in% c(104, 106, 107, 108)))
>>
>> and pretty soon my code looks like fingernail clippings (or Lisp) and I'm
>> thinking about precedence rather than my original task.  So I write a
>> %nin%
>> operator which I define as:
>>
>>> `%nin%` <- function (x, table) match(x, table, nomatch = 0L) == 0L
>>
>> and then I'm happy again.
>>
>> I wonder, would something like this find a home in core R?  Or is that too
>> much syntactic sugar for your taste?
>>
>> --
>> Ken Williams
>
>
> David Winsemius, MD
> West Hartford, CT
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Function to return variable name

2010-07-28 Thread Jeremy Miles
I'd like a function that returns the variable name.

As in:

MyData$Var1

Would return:

Var1

There should be a straightforward way to do this, but I can't see it.
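
For reference, one sketch based on substitute()/deparse(), keeping only the
part after the final "$" (though I suspect there's something neater):

varName <- function(x) {
  full <- deparse(substitute(x))   # e.g. "MyData$Var1"
  sub(".*\\$", "", full)           # drop everything up to the last "$"
}

MyData <- data.frame(Var1 = 1:3)
varName(MyData$Var1)               # "Var1"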

Thanks,

Jeremy



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Note on PCA (not directly with R)

2010-06-30 Thread Jeremy Miles
See if you can track down Thurstone's box problem dataset. It comes (I
believe) on a CD with Loehlin's book 'latent variable models', but I'd
be surprised if you couldn't find it elsewhere.  Thurstone measured
boxes, and using EFA (rather than PCA, but they might be similar
enough to start off with) found that they were three dimensional.


Jeremy



On 28 June 2010 02:27, Christofer Bogaso  wrote:
> Dear all, I am looking for some interactive study materials on Principal
> component analysis. Basically I would like to know what we are actually
> doing with PCA? What is happening within the dataset at the time of doing
> PCA.
>
> Probably a 3-dimensional interactive explanation would be best for me.
>
> I have gone through some online materials specially Wikipedia etc, however
> what I need a "movable explanation" to understand that.
>
> Any suggestion please?
>
> Thanks for your time
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logistic regression with multiple imputation

2010-06-29 Thread Jeremy Miles
Hi Daniel

First, newer versions of SPSS have dramatically improved their ability
to do stuff with missing data - I believe it's an additional module,
and in SPSS-world, each additional module = $$$.

Analyzing missing data is a three-step process.  First, you impute,
creating multiple datasets; then you analyze each dataset in the
conventional way; then you combine (pool) the results.  There are two (that
I know of) packages for imputation - these are mi and mice.  rseek.org
will find them for you.
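
A sketch of the three steps with mice, using the nhanes2 data that ship
with the package (your model will obviously differ):

library(mice)

imp  <- mice(nhanes2, m = 5, seed = 1)                      # 1. impute
fits <- with(imp, glm(hyp ~ age + bmi, family = binomial))  # 2. analyze each dataset
pool(fits)                                                  # 3. combine the results
summary(pool(fits))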

Hope that helps,

Jeremy




On 29 June 2010 22:14, Daniel Chen  wrote:
> Hi,
>
> I am a long time SPSS user but new to R, so please bear with me if my
> questions seem to be too basic for you guys.
>
> I am trying to figure out how to analyze survey data using logistic
> regression with multiple imputation.
>
> I have a survey data of about 200,000 cases and I am trying to predict the
> odds ratio of a dependent variable using 6 categorical independent variables
> (dummy-coded). Approximatively 10% of the cases (~20,000) have missing data
> in one or more of the independent variables. The percentage of missing
> ranges from 0.01% to 10% for the independent variables.
>
> My current thinking is to conduct a logistic regression with multiple
> imputation, but I don't know how to do it in R. I searched the web but
> couldn't find instructions or examples on how to do this. Since SPSS is
> hopeless with missing data, I have to learn to do this in R. I am new to R,
> so I would really appreciate if someone can show me some examples or tell me
> where to find resources.
>
> Thank you!
>
> Daniel
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Popularity of R, SAS, SPSS, Stata...

2010-06-24 Thread Jeremy Miles
I think you need speech marks though:

http://www.google.com/insights/search/#q=%22r%20code%20for%22%2C%22sas%20code%20for%22%2C%22spss%20code%20for%22&cmpt=q

(There's not a lot of people looking for SPSS code ...)

Jeremy

On 24 June 2010 16:56, Joris Meys  wrote:
> Nice idea, but quite sensitive to search terms, if you compare your
> result on "... code" with "... code for":
> http://www.google.com/insights/search/#q=r%20code%20for%2Csas%20code%20for%2Cspss%20code%20for&cmpt=q
>
> On Thu, Jun 24, 2010 at 10:48 PM, Dario Solari  wrote:
>> First: excuse for my english
>>
>> My opinion: a useful font for measuring "popoularity" can be Google
>> Insights for Search - http://www.google.com/insights/search/#
>>
>> Every person using a software like R, SAS, SPSS needs first to learn
>> it. So probably he make a web-search for a manual, a tutorial, a
>> guide. One can measure the share of this kind of serach query.
>> This kind of results can be useful to determine trends of
>> "popularity".
>>
>> Example 1: "R tutorial/manual/guide", "SAS tutorial/manual/guide",
>> "SPSS tutorial/manual/guide"
>> http://www.google.com/insights/search/#q=%22r%20tutorial%22%2B%22r%20manual%22%2B%22r%20guide%22%2B%22r%20vignette%22%2C%22spss%20tutorial%22%2B%22spss%20manual%22%2B%22spss%20guide%22%2C%22sas%20tutorial%22%2B%22sas%20manual%22%2B%22sas%20guide%22&cmpt=q
>>
>> Example 2: "R software", "SAS software", "SPSS software"
>> http://www.google.com/insights/search/#q=%22r%20software%22%2C%22spss%20software%22%2C%22sas%20software%22&cmpt=q
>>
>> Example 3: "R code", "SAS code", "SPSS code"
>> http://www.google.com/insights/search/#q=%22r%20code%22%2C%22spss%20code%22%2C%22sas%20code%22&cmpt=q
>>
>> Example 4: "R graph", "SAS graph", "SPSS graph"
>> http://www.google.com/insights/search/#q=%22r%20graph%22%2C%22spss%20graph%22%2C%22sas%20graph%22&cmpt=q
>>
>> Example 5: "R regression", "SAS regression", "SPSS regression"
>> http://www.google.com/insights/search/#q=%22r%20regression%22%2C%22spss%20regression%22%2C%22sas%20regression%22&cmpt=q
>>
>> Some example are cross-software (learning needs - Example1), other can
>> be biased by the tarditional use of that software (in SPSS usually you
>> don't manipulate graph, i think)
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Joris Meys
> Statistical consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Applied mathematics, biometrics and process control
>
> tel : +32 9 264 59 87
> joris.m...@ugent.be
> ---
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there a non-parametric repeated-measures Anova in R ?

2010-06-16 Thread Jeremy Miles
It's possible to use the ordinal regression model if your data are
ordered categories.  The standard non-parametric test is the Friedman
test.

?friedman.test
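
A minimal sketch with made-up data, one observation per subject in each
condition:

dat <- data.frame(
  subject   = factor(rep(1:10, each = 3)),
  condition = factor(rep(c("A", "B", "C"), times = 10)),
  score     = rnorm(30)
)

friedman.test(score ~ condition | subject, data = dat)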

Jeremy

On 16 June 2010 10:22, Tal Galili  wrote:
> Hello Prof. Harrell and dear R-help mailing list,
>
> I wish to perform a non-parametric repeated measures anova.
>
> If what I read online is true, this could be achieved using a mixed Ordinal
> Regression model (a.k.a: Proportional Odds Model).
> I found two packages that seems relevant, but couldn't find any vignette on
> the subject:
> http://cran.r-project.org/web/packages/repolr/
> http://cran.r-project.org/web/packages/ordinal/
>
> So being new to the subject matter, I was hoping for some directions from
> people here.
>
> Are there any tutorials/suggested-reading on the subject?  Even better, can
> someone suggest a simple example code for how to run and analyse this in R
> (e.g: "non-parametric repeated measures anova") ?
>
> I waited a week to repost this question.  If I should have waited longer, or
> not repost this at all - then I am truly sorry.
>
> Thanks for any help,
> Tal
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Dataframe to word, using R2wd

2010-05-14 Thread Jeremy Miles
Hi All,

I'm trying to use R2wd to send a dataframe to Word.  The dataframe
isn't huge - 300 rows, 12 variables, although it has some long strings
in it.

Using:

wdTable(format(myDataFrame))

or

wdTable(myDataFrame)

Produces a very complex table, which Word struggles to process and
layout.  (I can't work out what the table is - it seems to be nested
tables. Converting to text gives one long column.)

Using

wdBody(MyDataFrame)

or

wdNormal(MyDataFrame)

Is there another way to use R2wd to send the dataframe to word?

Thanks (in advance)

Jeremy




-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Nonparametric generalization of ANOVA

2010-03-05 Thread Jeremy Miles
Two links for you which will get your answer much quicker than a mailing list:

http://lmgtfy.com/?q=non-parametric+anova+R

or

http://www.justfuckinggoogleit.com/search.pl?query=non+parametric+anova+R

Jeremy


On 5 March 2010 05:19, blue sky  wrote:
> My interpretation of the relation between 1-way ANOVA and Wilcoxon's
> test (wilcox.test() in R) is the following.
>
> 1-way ANOVA is to test if two or multiple distributions are the same,
> assuming all the distributions are normal and have equal variances.
> Wilcoxon's test is to test two distributions are the same without
> assuming what their distributions are.
>
> In this sense, I'm wondering what is the generalization of Wilcoxon's
> test to more than two distributions. And, more general, what is the
> generalization of Wilcoxon's test to multi-way ANOVA with arbitrary
> complex model formula? What are the equivalent F statistics and t
> statistics in the generalization of Wilcoxon's test?
>
> Note that I'm not interested in looking for a specific nonparametric
> test for a particular dataset right now, although this is important in
> practice. What I'm interested the general nonparametric statistical
> framework that parallels ANOVA. Could somebody give some hints on what
> references I should look for? I have google searched this topic, but
> don't find a page that exactly answered my question.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R ANOVA gives diferent results than SPSS

2010-02-11 Thread Jeremy Miles
I've always found exactly the same results.

If you post code that allows us to reproduce this, I suspect someone
would be able to shed light on it.  And output too.

J





On 11 February 2010 06:47, Protzko  wrote:
>
> I guess my subject says it all.  But I loaded a dataset in spss and used the
> foreign package to read and save it in R.  Running an anova (using the aov
> command) gives a different F and p value in R than it does in SPSS.  ANy
> idea what is going on?
> --
> View this message in context: 
> http://n4.nabble.com/R-ANOVA-gives-diferent-results-than-SPSS-tp1477322p1477322.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural Equation Models(SEM)

2009-12-02 Thread Jeremy Miles
In the world of SEM, GLS has pretty much fallen by the wayside - I
can't recall anything I've seen arguing for its use in the past 10
years, and I also can't recall anyone using it over ML.   The
recommendations for non-normal distributions tend to be robust ML, or
robust weighted least squares.  These are more computationally
intensive, and I *think* that John Fox (author of sem) has written
somewhere that it wouldn't be possible to implement them within R
without using a lower-level language - or rather that it might be
possible, but it would be really, really slow.

However, ML and GLS are pretty similar; if you dug around in the
source code, you could probably make the change (see
http://www2.gsu.edu/~mkteer/discrep.html, for example, for the
equations; in fact GLS is somewhat computationally simpler, as you
don't need to invert the implied covariance matrix at each iteration).
However, the fact that it's not hard to make the change, and that no
one has made it, is another argument that it's not a change that
needs to be made.
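
For reference, a sketch of the two discrepancy functions in their standard
textbook form (not code taken from the sem package); S is the sample
covariance matrix, Sigma the model-implied one:

F.ML <- function(S, Sigma) {
  p <- nrow(S)
  log(det(Sigma)) + sum(diag(S %*% solve(Sigma))) - log(det(S)) - p
}

F.GLS <- function(S, Sigma) {
  W <- (S - Sigma) %*% solve(S)   # solve(S) is fixed, so no inversion per iteration
  0.5 * sum(diag(W %*% W))
}

S     <- cov(attitude)            # an arbitrary example covariance matrix
Sigma <- diag(diag(S))            # a deliberately bad 'independence' model
F.ML(S, Sigma); F.GLS(S, Sigma)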

Jeremy



2009/12/2 Ralf Finne :
> Hi R-colleagues.
>
> I have been using the sem(sem) function.  It uses
> maximum likelyhood as optimizing. method.
> According to simulation study in Umeå Sweden
> (http://www.stat.umu.se/kursweb/vt07/stad04mom3/?download=UlfHolmberg.pdf
> Sorry it is in swedish, except the abstract)
> maximum likelihood is OK for large samples and normal distribution
> the SEM-problem should be optimized by GLS (Generalized Least Squares).
>
>
> So to the question:
>
> Is there any R-function that solves SEM with GLS?
>
>
> Ralf Finne
> Novia University of Applied Science
> Vasa  Finland
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Simple 2-Way Anova issue in R

2009-11-08 Thread Jeremy Miles
If I've understood correctly, you have cell sizes of 1.  This is not enough.

ANOVA compares within-group variance to between-group variance, and with
only one observation per cell there is no within-cell variance left to
estimate (zero residual degrees of freedom) - which is why no F value or
Pr(>F) is printed.

You need more observations per cell, or to collapse some cells.
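
If the two replicates are meant to be two observations per cell, one fix (a
sketch with simulated stand-ins for your csv, using the same column names)
is to stack the replicate columns into long format first:

data.ex2 <- expand.grid(Year = factor(1999:2001),
                        Depth = factor(c(10, 15, 20, 25)))
data.ex2$Replicate1 <- rnorm(12, 18, 3)
data.ex2$Replicate2 <- rnorm(12, 18, 3)

long <- reshape(data.ex2, direction = "long",
                varying = c("Replicate1", "Replicate2"),
                v.names = "Value", timevar = "Replicate")

ANOVA <- aov(Value ~ Year * Depth, data = long)
summary(ANOVA)                        # residual df > 0, so F and Pr(>F) appear

boxplot(Value ~ Year * Depth, data = long)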

Jeremy




2009/11/8 znd :
>
> Hello, I'm new to R and have been following many guides including the two-way
> anova (http://www.personality-project.org/r/r.anova.html).  Using that
> walkthrough including the supplied data I do str(data.ex2) and receive the
> appropriate types of data as follows:
>> str(data.ex2)
> 'data.frame':   16 obs. of  4 variables:
>  $ Observation: int  1 2 3 4 5 6 7 8 9 10 ...
>  $ Gender     : Factor w/ 2 levels "f","m": 2 2 2 2 2 2 2 2 1 1 ...
>  $ Dosage     : Factor w/ 2 levels "a","b": 1 1 1 1 2 2 2 2 1 1 ...
>  $ Alertness  : int  8 12 13 12 6 7 23 14 15 12 ...
>
> aov.ex2 = aov(Alertness~Gender*Dosage,data=data.ex2)
>
> summary(aov.ex2)
>
> Outputs:
>              Df  Sum Sq Mean Sq F value Pr(>F)
> Gender         1  76.562  76.562  2.9518 0.1115
> Dosage         1   5.062   5.062  0.1952 0.6665
> Gender:Dosage  1   0.063   0.063  0.0024 0.9617
> Residuals     12 311.250  25.938
>
> However, when I got to use my data that I made in csv format I have to tell
> R to interpret my factors which are year and depth as factors...
> datafilename="C:/Rclass/hmwk1pt2.csv"
> data.ex2=read.csv(datafilename,header=T)
> data.ex2$Year<-as.factor(data.ex2$Year)
> data.ex2$Depth<-as.factor(data.ex2$Depth)
> data.ex2
> str(data.ex2)
>
> This outputs what I would expect:
>
>> str(data.ex2)
> 'data.frame':   12 obs. of  4 variables:
>  $ Year      : Factor w/ 3 levels "1999","2000",..: 1 1 1 1 2 2 2 2 3 3 ...
>  $ Depth     : Factor w/ 4 levels "10","15","20",..: 1 2 3 4 1 2 3 4 1 2 ...
>  $ Replicate1: num  14.3 15.1 16.7 17.3 16.3 17.4 18.6 20.9 22.9 23.9 ...
>  $ Replicate2: num  14.7 15.6 16.9 17.9 16.4 17.2 19.6 21.3 22.7 23.3 ...
>
> But something is not causing my anova to carry through...this is what I
> have.
>
> ANOVA = aov(Replicate1~Year*Depth,data=data.ex2)
> summary(ANOVA)
>
> which outputs:
>
>> summary(ANOVA)
>            Df  Sum Sq Mean Sq
> Year         2 143.607  71.803
> Depth        3  17.323   5.774
> Year:Depth   6   2.587   0.431
>
> There is no F-value or Pr(>F) columns.
>
> I also can't boxplot this correctly, again following the example at that
> website above they have:
>
> boxplot(Alertness~Dosage*Gender,data=data.ex2)
>
> which outputs:
>
> http://old.nabble.com/file/p26258684/87o3uicpf6dt4kkdyvfv.jpeg
>
> My code is:
>
> boxplot(Replicate1~Year*Depth,data=data.ex2)
>
> which outputs:
>
> http://old.nabble.com/file/p26258684/gik02vyhvvbmcvw3ia2h.jpeg
>
> This is incorrect, it's multiplying my factors but I thought that when I did
> the str() on my data it recognized the Year and Depth as factors, not
> numbers or integers.
>
> My csv file is:
> http://old.nabble.com/file/p26258684/hmwk1pt2.csv hmwk1pt2.csv
>
> Any help on what is going one would be greatly appreciated because I need to
> perform one-way, two-way, nested, and factorial anovas but I first need to
> solve this problem before I can continue.
>
>
> --
> View this message in context: 
> http://old.nabble.com/Simple-2-Way-Anova-issue-in-R-tp26258684p26258684.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.