Re: [R] metafor package: study level variation

2012-09-10 Thread Jarrett Byrnes
Thanks for this.  A few things:

First, yes, my lmer syntax was indeed bad - I was writing this as an example of 
what my data & code look like.  Apologies. So, 1|studyID indeed.  Also 
1/variance.  

I've also been wondering - I often have more than 2 drugs - so, T1, T2, T3, 
etc.  It depends on what slice of the data I'm looking at.  As some of the data 
sets have, say, T1 and T2, and others have T1 and T3, while still others have 
just T1  - perhaps the way to go is to look at each effect size for each drug 
independently, rather than pool them in one analysis.  That does not account 
for the non-independence, though, of different effect sizes sometimes being 
calculated using the same control - which you rightly identify.

I'm curious, what other packages in R would be useful for this?  I can also 
change these from log ratios to log odds ratios - or at least obtain the 
number of subjects with a desirable response v. the number with a negative 
response for each treatment - perhaps this would be another way to go about 
it using metabin() in the meta package - again, looking at each metric 
separately.
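
(For the log odds ratio route, I'm picturing something like the following, 
with the count column names as placeholders for whatever I end up extracting:

dat <- escalc(measure="OR", ai=resp.trt, bi=noresp.trt,
              ci=resp.ctrl, di=noresp.ctrl, data=mydata)
res <- rma(yi, vi, mods=~Drug.Type, data=dat)

or the equivalent metabin() call from the meta package.)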

On Sep 10, 2012, at 4:52 AM, Viechtbauer Wolfgang (STAT) wrote:

> As usual, Michael was faster than I in responding. Let me add a few thoughts 
> of my own. See comments below in text.
> 
> Best,
> Wolfgang
> 
> --  
> Wolfgang Viechtbauer, Ph.D., Statistician  
> Department of Psychiatry and Psychology  
> School for Mental Health and Neuroscience  
> Faculty of Health, Medicine, and Life Sciences  
> Maastricht University, P.O. Box 616 (VIJV1)  
> 6200 MD Maastricht, The Netherlands  
> +31 (43) 388-4170 | http://www.wvbauer.com  
> 
> 
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> > On Behalf Of Jarrett Byrnes
> > Sent: Friday, September 07, 2012 16:02
> > To: R help
> > Subject: [R] metafor package: study level variation
> >
> > Hello.  A quick question about incorporating variation due to study in the
> > metafor package.  I'm working with a particular data set for meta-analysis
> > where some studies have multiple measurements.  Others do not.  So, let's
> > say the effect I'm looking at is response to two different kinds of drug
> > treatment - let's call their effect sizes T1 and T2.  Some studies have
> > multiple experiments measuring  T1 and T2.  Some have one of each.  Some
> > only have T1 or T2.
> 
> I assume there is also a control group/condition in each of these studies, so 
> in other words, you have a bunch of studies where some are two-arm studies 
> comparing Trt1 *or* Trt2 to control and some are three-arm studies comparing 
> both Trt1 *and* Trt2 to control.
> 
> > Now, in metafor, I've been using
> >
> > rma(yi = logRatio, vi=varLogRatio, mods=~ Drug.Type, data=mydata)
> 
> So, Drug.Type is a dummy variable (either Trt1 or Trt2), and the code above 
> will fit the model:
> 
> yij = beta0 + beta1 Trt2 + uij + eij,
> 
> where yij is the jth observed outcome in the ith study, beta0 then 
> corresponds to the (average) outcome for Trt1, beta1 indicates how much 
> higher or lower the (average) outcome is for Trt2 compared to Trt1, uij ~ 
> N(0, tau^2), and eij ~ N(0, varLogRatio). This model will treat three-arm 
> studies as if they were two (independent) two-arm studies. Probably not ideal.
> 
> > This works fine.  Out of curiosity, I ran a quickie model in lme4
> >
> > lmer(logRatio ~ Drug.Type + (1+studyID), data=mydata, weights=varLogRatio)
> >
> > and I noticed that the results are quite different, and this appears due
> > to some variation due to study (after inspecting ranef - note, I included
> > Drug.Type as a fixed effect as there were only two levels).
> 
> 1) Did you use (1+studyID) or (1 | studyID)? The latter is probably what you 
> meant/want to use.
> 2) You need to specify the *inverse* of the variances as weights.
> 3) This model assumes that the sampling variances are known up to a 
> proportionality constant, not exactly known. You will therefore get what is 
> sometimes called a multiplicative model for heterogeneity, with heterogeneity 
> reflected in a residual variance estimate larger than 1. This model is 
> different from the additive model (which is typically used), where the 
> sampling variances are assumed to be known exactly and we *add* an additional 
> random effect to reflect heterogeneity.
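> 
> In code, points (1) and (2) together would make the call something like:
> 
> lmer(logRatio ~ Drug.Type + (1 | studyID), weights=1/varLogRatio, data=mydata)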
> 
> So, with (1 | studyID) and inverse sampling variance weights, you get the 
> model:
> 
> yij = beta0 + beta1 Trt2 + ui + eij,
> 
> where ui ~ N(0, tau^2), eij ~ N(0, sigma^2 * varLogRatio). Now tau^2 reflects 
> study-level variability and sigma^2 reflects multiplicative heterogeneity.
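> 
> (If you instead want the additive, study-level version of this directly in 
> metafor, a sketch -- assuming a metafor version that provides rma.mv():
> 
> rma.mv(logRatio, varLogRatio, mods=~Drug.Type, random=~1|studyID, data=mydata)
> 
> This adds a random intercept per study while treating the sampling variances 
> as exactly known.)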

[R] metafor package: study level variation

2012-09-07 Thread Jarrett Byrnes
Hello.  A quick question about incorporating variation due to study in the 
metafor package.  I'm working with a particular data set for meta-analysis 
where some studies have multiple measurements.  Others do not.  So, let's say 
the effect I'm looking at is response to two different kinds of drug treatment 
- let's call their effect sizes T1 and T2.  Some studies have multiple 
experiments measuring  T1 and T2.  Some have one of each.  Some only have T1 or 
T2.

Now, in metafor, I've been using

rma(yi = logRatio, vi=varLogRatio, mods=~ Drug.Type, data=mydata)

This works fine.  Out of curiosity, I ran a quickie model in lme4

lmer(logRatio ~ Drug.Type + (1+studyID), data=mydata, weights=varLogRatio)

and I noticed that the results are quite different, and this appears due to 
some variation due to study (after inspecting ranef - note, I included 
Drug.Type as a fixed effect as there were only two levels).

So, I went back to metafor and ran

rma(yi = logRatio, vi=varLogRatio, mods=~ Drug.Type+studyID, data=mydata)

which yielded the error

Error in qr.solve(wX, diag(k)) : singular matrix 'a' in solve
In addition: Warning message:
In rma(yi = logRatio, vi = varLogRatio, data = mydata, mods = ~Drug.Type  :
  Cases with NAs omitted from model fitting.

which appears to be due to the unbalanced nature of the dataset (some studies 
having T1 and T2, some having multiple measures of T1 and T2).

So, is there a way to properly incorporate studyID in a metafor using rma?  Is 
there an argument I'm missing, or perhaps should be using a different function?

Thanks!

-Jarrett
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] turning coefficients into an lm obect

2011-07-15 Thread Jarrett Byrnes
I'm working with a dataset and fitting and comparing various lms.  I also have 
fitted model parameter values and SEs estimated from the literature.  In doing 
my comparison, I'd like to turn these estimates into an lm object itself for 
ease of use with some of the code I'm writing.  While putting in the 
coefficients is a simple matter - just take a fitted model object and change 
the values of mylm$coefficients, for example - it is not transparent to me 
how I could incorporate the parameter variance and, say, the unexplained 
variance in the previous fit.

Although, thinking about it further, the unexplained variance is specific to 
that dataset - so, I shouldn't have to worry about that.  But how can I 
incorporate known variance in the parameter estimates?
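
To make the question concrete, this is the sort of thing I have been doing 
(numbers made up):

fit <- lm(y ~ x, data=my.data)     # any fitted model with the right terms
fit$coefficients[] <- c(1.2, 0.8)  # swap in the literature estimates
lit.vcov <- diag(c(0.05, 0.01)^2)  # squared SEs from the literature
dimnames(lit.vcov) <- list(names(coef(fit)), names(coef(fit)))

and then using lit.vcov wherever my code would otherwise call vcov(fit).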

Thanks!

-Jarrett
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] multicore mclapply error

2010-08-12 Thread Jarrett Byrnes
Ah.  Indeed, this is from the glmulti package.  I had not realized there would be 
problems using Java.  Is there a way around this to still use a multicore 
approach?  For what other packages that use multiple cores will this not be a 
problem?
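
Would a socket-based cluster (fresh R processes rather than forks) sidestep 
the Java issue?  A sketch of what I have in mind, using snow (with my objects 
from the code below):

library(snow)
cl <- makeCluster(2, type="SOCK")      # spawns new R processes; no fork()
clusterEvalQ(cl, library(glmulti))     # each worker gets its own Java/rJava
clusterExport(cl, c("make_formula", "sp", "df"))
mmi_fits <- parLapply(cl, responses, function(a_response){
    glmulti(glm(make_formula(a_response, sp), data=df, family=binomial))
})
stopCluster(cl)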

-Jarrett

On Aug 12, 2010, at 11:02 AM, Thomas Lumley wrote:

> On Thu, 12 Aug 2010, Jarrett Byrnes wrote:
> 
>> I'm running R 2. on a Mac running OS X 10.6.4 (a dual-core MacBook Pro).  I'm 
>> having a funny time with multicore.  When I run it with 2 cores, mclapply, R 
>> borks with the following error.
>> 
>> The process has forked and you cannot use this CoreFoundation functionality 
>> safely. You MUST exec().
>> Break on 
>> __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
>>  to debug.
>> 
>> 
>> If, however, I crank the # of cores back to 1, it runs just fine.
>> 
>> The code looks as follows:
>> 
>> 
>> mmi_fits<-mclapply(responses, function(a_response){
>>  gmult<-glmulti(glm(make_formula(a_response, sp), 
>> data=df, family=binomial))
>>  return(gmult)
>>  },
>>  mc.cores=2)
> 
> You don't say what glmulti() is. If you mean the function from glmulti 
> package, that package uses Java and rJava, and it wouldn't be altogether 
> surprising if the connection to Java or the Java environment reacted badly to 
> being forked.
> 
>   -thomas
> 
> Thomas Lumley
> Professor of Biostatistics
> University of Washington, Seattle
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] multicore mclapply error

2010-08-12 Thread Jarrett Byrnes
I'm running R 2. on a Mac running OS X 10.6.4 (a dual-core MacBook Pro).  I'm 
having a funny time with multicore.  When I run it with 2 cores, mclapply, R 
borks with the following error.

The process has forked and you cannot use this CoreFoundation functionality 
safely. You MUST exec().
Break on 
__THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__()
 to debug.


If, however, I crank the # of cores back to 1, it runs just fine.

The code looks as follows:


mmi_fits <- mclapply(responses, function(a_response){
    gmult <- glmulti(glm(make_formula(a_response, sp),
                         data=df, family=binomial))
    return(gmult)
}, mc.cores=2)


-Jarrett
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] a question regarding updating formulas with coefficients

2010-08-11 Thread Jarrett Byrnes
I have formulae with coefficients that I would like to update.  However, I get 
some strange results.  For example, see the following:

For the formula y ~ d + 0*r + t, I want to add a variable p, so

> update(y~d+0*r+t, .~.+p)

produces

y ~ d + t + p - 1

If the coefficient is not 0, but rather, something else - say, 3, I get the 
following:

> update(y~d+3*r+t, .~.+p)

Error in terms.formula(tmp, simplify = TRUE) : 
  invalid model formula in ExtractVars
> 


Is there a way to do this, or a different call I should be trying?
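
Incidentally, for the fixed-coefficient part itself, would offset() be the 
idiomatic route?  A quick sketch of what I mean:

f <- y ~ d + offset(3 * r) + t   # offset() hard-codes a known coefficient
update(f, . ~ . + p)
# y ~ d + offset(3 * r) + t + p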

-Jarrett
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem by variable x

2010-07-20 Thread Jarrett Byrnes
You may want to take a look at the lavaan package and use the multigroup 
analysis there (and see if you even need to group by country as well).

Otherwise, you could do something like

library(sem)
library(plyr)

cfa_func <- function(a.df){
    # note: the sample size goes to sem(), not to cov()
    cfa <- sem(ses.model, cov(a.df[,2:7]), nrow(a.df))
    print(summary(cfa))
}
d_ply(data, "idcntry", cfa_func)

-Jarrett

On Jul 20, 2010, at 6:22 AM, Daniel Caro wrote:

> Hi R users,
> 
> I am new in R. I would like to perform confirmatory factor analysis
> for a data set of countries. My data are:
> 
> data <- read.csv("ses.raw", header = TRUE)
> attach(data)
> names(data)
> 
> [1] "idcntry" "momed"   "daded"   "dadocu"  "momocu"  "hompos"  "finan"
> 
> 
> The country id is "idcntry", my model is "ses.model", and variables to
> be included in the analysis are "momed"   "daded"   "dadocu"  "momocu"
> "hompos"  "finan" . How can I run
> 
> cfa<-sem(ses.model, cov(data[,2:7]), nrow(data))
> summary(cfa)
> 
> by country? I am able to perform sem on all data but not by country. I tried
> 
> by(data[,2:7], idcntry, function(x) sem(ses.model, cov(data[,2:7]), 
> nrow(data)))
> 
> but the output is the same for all countries.
> 
> Thank you,
> Daniel
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] SEM interaction

2010-05-25 Thread Jarrett Byrnes
Have you looked into multigroup analysis?  Is that what you're after?

If so, you can do multigroup analysis in lavaan fairly easily or, if you're 
using the sem package, drop me a line.  I have some scripts that will do 
multigroup analysis for an equal sample size.

See 
http://faculty.chass.ncsu.edu/garson/PA765/structur.htm#multiplegroupanalysis
and
http://www.structuralequations.com/3.html

for some examples.
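
On the contrast itself: since the four path estimates come from separate 
groups, the standard error of (A1 - A2) - (B1 - B2) is the square root of the 
sum of the squared standard errors, rather than their average.  A quick 
sketch with made-up numbers:

est <- c(A1=0.40, A2=0.25, B1=0.35, B2=0.30)  # unstandardized weights (hypothetical)
se  <- c(A1=0.08, A2=0.07, B1=0.09, B2=0.08)  # their standard errors (hypothetical)
z <- ((est["A1"] - est["A2"]) - (est["B1"] - est["B2"])) / sqrt(sum(se^2))
2 * pnorm(-abs(z))   # two-sided p-value for the interaction contrast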

On May 25, 2010, at 8:38 AM, Anthony Dick wrote:

> Hello all,
> 
> This is a general stats question--I realize it is an R help list, so tell me 
> to go away if it is inappropriate.
> 
> I have a 2 X 2 design, and I have specified four identical path models (one 
> for each level of each factor). I want to test for an interaction at each 
> path--essentially (A1 - A2) - (B1 - B2) != 0. I was thinking of computing a 
> contrast for each path of interest, such that I compute the difference of the 
> difference in the (non-standardized) path weights (as above), and then divide 
> this by the pooled standard errors of the estimates (average of the standard 
> errors).
> 
> Does this seem statistically sound, or am I way off base? I typically compare 
> path weights across two levels (e.g., age differences in a path weight for 
> the same theoretical model) using the stacked model approach, but am not sure 
> how to apply this to the interaction.
> 
> Anthony
> 
> -- 
> Anthony Steven Dick, Ph.D.
> Post-Doctoral Fellow
> Human Neuroscience Laboratory
> Department of Neurology
> The University of Chicago
> 5841 S. Maryland Ave. MC-2030
> Chicago, IL 60637
> Phone: (773)-834-7770
> Email: ad...@uchicago.edu
> Web: http://home.uchicago.edu/~adick/
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] calculate response probabilities using sem-analysis

2010-03-22 Thread Jarrett Byrnes

Did you back-calculate to estimate an intercept?
Alternatively, I've been working on a function that takes a fitted sem  
and gets predicted values given an input.  Contact me off-list and  
I'll send it to you.


On Mar 22, 2010, at 8:37 AM, Tryntsje Wesselius wrote:


Hi everyone,

I just conducted a structural equation model for estimating a response
model. This model should predict the probability that someone is
responding to a direct mailing. I used the sem package for this. When I
have my coefficients I want to know how well my model predicts the
probability of response. How can I calculate these probabilities?
I tried to use the unstandardized coefficients, just like a regression
coefficient, in the following formula:
Y = b1*x1 + b2*x2
But then I get values larger than 1, so those aren't probabilities.
Has anyone dealt with this problem before?
You can be of great help to me!!

Kind regards,

Tryntsje

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] SEM error

2010-02-22 Thread Jarrett Byrnes
I have often found this to happen if the scale of one variable is  
orders of magnitude different than the scale of other variables.  Have  
you tried inspecting the covariance matrix and log transforming any  
such variables?
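
For example (dat standing in for your raw data frame, income a hypothetical 
offender):

round(diag(cov(dat)), 4)        # variances spanning orders of magnitude are the red flag
dat$income <- log(dat$income)   # rebuild the covariance matrix and refit after rescaling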


On Feb 22, 2010, at 8:14 AM, Uwe Ligges wrote:




On 20.02.2010 08:51, Dan Edgcumbe wrote:
I'm trying to do some confirmatory factor analysis on some data. My SEM
model solves in 22 iterations, but when I try to look at the modification
indices, using mod.indices, I get the following error message:

Error in solve.default(hessian) :
  system is computationally singular: reciprocal condition number =
4.40283e-18

What does this mean?


That the method you apply tries to invert some object called "hessian"
(maybe a hessian? ;-)) but fails since a singular matrix cannot be
inverted. Perhaps (as I often found for people doing sem analyses) you
have fewer observations than parameters to estimate, or only certain
combinations for some factors?


Uwe Ligges







Many thanks,

Dan

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] suppress printing within a function

2010-02-16 Thread Jarrett Byrnes
I'm working with a few functions (e.g. do.base.descriptions in the  
netstat package) that, in addition to returning an object with  
variables I want to extract, also print output.  There is no way to  
turn this default printing behavior off in many of the functions.

Is there a blanket way to suppress such printing, say, within a loop  
or a ddply statement?
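
For instance, is wrapping the call in capture.output(), or bracketing it 
with sink(), the sanctioned blanket approach?  (do.base.descriptions and 
my.network are just placeholders here.)

junk <- capture.output(res <- do.base.descriptions(my.network))  # res still gets assigned

sink(tempfile())   # or silence a whole block by redirecting to a throwaway file
res <- do.base.descriptions(my.network)
sink()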

Thanks!

-Jarrett





Jarrett Byrnes
Postdoctoral Associate, Santa Barbara Coastal LTER
Marine Science Institute
University of California Santa Barbara
Santa Barbara, CA 93106-6150
http://www.lifesci.ucsb.edu/eemb/labs/cardinale/people/byrnes/index.html


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] interpreting error estimate in SEM

2010-02-09 Thread Jarrett Byrnes
If it is not standardized, I have the following script that I use to  
calculate the r^2 for each variable of a model.  I do wonder, however,  
if there is a way to calculate a model-wide r^2 like index.

-Jarrett

##
#returns the r^2 for endogenous variables and error path coefficients
#
# Last updated 8/18/08
##

rsquare.sem <- function(sem.obj){
    output.info <- list()

    for (index in 1:length(sem.obj$S[,1])){
        var.name <- sem.obj$var.names[index]
        obs.var <- sem.obj$S[index,index]
        est.var <- sem.obj$P[index,index]
        r.sq <- 1 - (est.var/obs.var)
        std.error.coef <- sqrt(1 - r.sq)

        # if it's 0, or close to 0 (due to rounding error), it's exogenous
        if(r.sq > 1e-10){
            output.info <- c(output.info, var.name, obs.var, est.var,
                             r.sq, std.error.coef)
        }
    }

    output.matrix <- matrix(output.info, ncol=5, byrow=TRUE)
    colnames(output.matrix) <- c("Variable", "Observed.Variance",
                                 "Estimated.Variance", "R^2",
                                 "Standardized.Error.Coefficient")
    output.matrix
}
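
Usage is then just

rsquare.sem(my.sem.fit)

where my.sem.fit is an object returned by sem().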






Jarrett Byrnes
Postdoctoral Associate, Santa Barbara Coastal LTER
Marine Science Institute
University of California Santa Barbara
Santa Barbara, CA 93106-6150
http://www.lifesci.ucsb.edu/eemb/labs/cardinale/people/byrnes/index.html

On Feb 9, 2010, at 4:17 AM, John Fox wrote:

> Dear Kathryn,
>
> I assume that MWDError is the error variance associated with MWD. If  
> MWD is
> a standardized variable (i.e., with a variance of 1) then 59% of its
> variance is unaccounted for.
>
> I hope this helps,
> John
>
> 
> John Fox
> Senator William McMaster
>  Professor of Social Statistics
> Department of Sociology
> McMaster University
> Hamilton, Ontario, Canada
> web: socserv.mcmaster.ca/jfox
>
>> -Original Message-
>> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org 
>> ]
> On
>> Behalf Of Kathryn Barto
>> Sent: February-09-10 6:23 AM
>> To: r-help@r-project.org
>> Subject: [R] interpreting error estimate in SEM
>>
>> hi,
>>
>> I'm using the sem package, and I want to make sure I'm interpreting  
>> the
>> output correctly. Here is an excerpt of my output. When I report  
>> the MWD
>> error, should I say that 59% of the variation in MWD is not  
>> explained by
> the
>> model, or that 59% of the variation is explained, or something  
>> entirely
>> different.
>>
>> Parameter Estimates
>>                 Estimate   Std Error  z value  Pr(>|z|)
>> NplantRoots     0.462029   0.120803   3.82466  1.3095e-04  plantRoots <--- N
>> ...
>> CoError         1.11       0.151850   6.58553  4.5325e-11  Co <--> Co
>> carbonateError  1.09       0.156217   6.40142  1.5394e-10  carbonate <--> carbonate
>> MWDError        0.589788   0.092128   6.40185  1.5351e-10  MWD <--> MWD
>>
>> Thanks
>> Kathryn
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Structural Equation Models(SEM)

2009-12-15 Thread Jarrett Byrnes
Joerg Everman has a great solution to this.  He changed the middle of  
the sem() code to include a variable, fit, and then used the following  
approach around where the objective functions are defined:


if (fit=="ml") {
   objective.1 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Cinv <- solve(C)
   f <- sum(diag(S %*% Cinv)) + log(det(C))
   attributes(f) <- list(C=C, A=A, P=P)
   f
   }
   objective.2 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Cinv <- solve(C)
   f <- sum(diag(S %*% Cinv)) + log(det(C))
   grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %* 
% Cinv %*% J %*% I.Ainv

   grad.A <- grad.P %*% P %*% t(I.Ainv)
   gradient <- rep(0, t)
   gradient[unique.free.1] <- tapply(grad.A[arrows. 
1.free],ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
   gradient[unique.free.2] <- tapply(grad.P[arrows. 
2.free],ram[ram[,1]==2 & ram[,4]!=0, 4], sum)

   attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
   f
   }
}
if (fit=="gls") {
   objective.1 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Sinv <- solve(S)
   f <- 0.5 * sum(diag(   ((S - C) %*% Sinv) %*%  ((S - C) %*%  
Sinv) ))

   attributes(f) <- list(C=C, A=A, P=P)
   f
   }
   objective.2 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Sinv <- solve(S)
   f <- sum(diag(   ((S - C) %*% Sinv) %*%  ((S - C) %*% Sinv) ))
#grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %* 
% Cinv %*% J %*% I.Ainv

#grad.A <- grad.P %*% P %*% t(I.Ainv)
#gradient <- rep(0, t)
#gradient[unique.free.1] <- tapply(grad.A[arrows. 
1.free],ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
#gradient[unique.free.2] <- tapply(grad.P[arrows. 
2.free],ram[ram[,1]==2 & ram[,4]!=0, 4], sum)

#attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
   attributes(f) <- list(C=C, A=A, P=P)
   f
   }
}   
if (fit=="uls") {
   objective.1 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Sinv <- solve(S)
   f <- 0.5 * sum(diag(   (S - C) %*%  (S - C)  ))
   attributes(f) <- list(C=C, A=A, P=P)
   f
   }
   objective.2 <- function(par){
   A <- P <- matrix(0, m, m)
   val <- ifelse (fixed, ram[,5], par[sel.free])
   A[arrows.1] <- val[one.head]
   P[arrows.2t] <- P[arrows.2] <- val[!one.head]
   I.Ainv <- solve(diag(m) - A)
   C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
   Sinv <- solve(S)
   f <- 0.5 * sum(diag(   (S - C) %*%  (S - C)  ))
#grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %* 
% Cinv %*% J %*% I.Ainv

#grad.A <- grad.P %*% P %*% t(I.Ainv)
#gradient <- rep(0, t)
#gradient[unique.free.1] <- tapply(grad.A[arrows. 
1.free],ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
#gradient[unique.free.2] <- tapply(grad.P[arrows. 
2.free],ram[ram[,1]==2 & ram[,4]!=0, 4], sum)

#attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
   attributes(f) <- list(C=C, A=A, P=P)
   f
   }
}   
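
With that patch (and assuming fit is also added to the function's formal 
arguments), the call would then be, e.g.:

sem.mod <- sem(model, mod.cov, N=109, fit="gls")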


On Dec 2, 2009, at 10:22 AM, Jeremy Miles wrote:


In the world of SEM, GLS has pretty much fallen by the wayside - I
can't recall anything I've seen arguing for it's use in the past 10
years, and I also can't recall anyone using it over ML.   The
recommendations for non-normal distributions tend to be robust-ML, or
robust weighted least squares.  These are more computationally
intensive, and I *think* that John Fox (author of sem) has written
somewhere that it wouldn't be possible to implement them within R,
without using a lower level language - or rather that it might be
possible, but it would be really, really slow.

However, ML and GLS are pretty similar; if you dug around in the source
code, you could probably make the change.

Re: [R] Structural Equation Models(SEM)

2009-12-02 Thread Jarrett Byrnes
Indeed, looking at sem.R in the package, we see that at the heart of  
sem is a version of the maximum likelihood discrepancy function.  It  
should be easy to use, say, another flag (e.g. set the default to  
method="ML" for the current behavior) and for other methods, use  
different discrepancy functions.  One would only need an if statement.

A little extra work might be needed to incorporate ADF methods, but it  
should not be intractable.  Note, the sem package is on r-forge.

-Jarrett


--------
Jarrett Byrnes
Postdoctoral Associate, Santa Barbara Coastal LTER
Marine Science Institute
University of California Santa Barbara
Santa Barbara, CA 93106-6150
http://www.lifesci.ucsb.edu/eemb/labs/cardinale/people/byrnes/index.html

On Dec 2, 2009, at 10:22 AM, Jeremy Miles wrote:

> In the world of SEM, GLS has pretty much fallen by the wayside - I
> can't recall anything I've seen arguing for its use in the past 10
> years, and I also can't recall anyone using it over ML.   The
> recommendations for non-normal distributions tend to be robust-ML, or
> robust weighted least squares.  These are more computationally
> intensive, and I *think* that John Fox (author of sem) has written
> somewhere that it wouldn't be possible to implement them within R,
> without using a lower level language - or rather that it might be
> possible, but it would be really, really slow.
>
> However, ML and GLS are pretty similar, if you dug around in the
> source code, you could probably make the change (see,
> http://www2.gsu.edu/~mkteer/discrep.html for example, for the
> equations; in fact GLS is somewhat computationally simpler, as you
> don't need to invert the implied covariance matrix at each iteration).
> However, the fact that it's not hard to make the change, and that no
> one has made the change, is another argument that it's not a change
> that needs to be made.
>
> Jeremy
>
>
>
> 2009/12/2 Ralf Finne :
>> Hi R-colleagues.
>>
>> I have been using the sem(sem) function.  It uses
>> maximum likelihood as its optimizing method.
>> According to a simulation study in Umeå, Sweden
>> (http://www.stat.umu.se/kursweb/vt07/stad04mom3/?download=UlfHolmberg.pdf
>> Sorry, it is in Swedish, except the abstract)
>> maximum likelihood is OK for large samples and normal distributions;
>> otherwise the SEM-problem should be optimized by GLS (Generalized Least
>> Squares).
>>
>>
>> So to the question:
>>
>> Is there any R-function that solves SEM with GLS?
>>
>>
>> Ralf Finne
>> Novia University of Applied Science
>> Vasa  Finland
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> -- 
> Jeremy Miles
> Psychology Research Methods Wiki: www.researchmethodsinpsychology.com
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] adding points to a wireframe

2009-08-18 Thread Jarrett Byrnes
A quick question.  I'm trying to plot a surface from a fitted model  
along with the original points, as in the following example:



df<-data.frame(expand.grid(100*runif(1:100), 100*runif(1:100)))
df$Var3<-rnorm(length(df$Var1), mean=df$Var1*df$Var2, sd=10)

my.lm<-lm(Var3 ~ Var1*Var2, data=df)

my.fun<-function(x,y) predict(my.lm, data.frame(Var1=x, Var2=y),  
type="response")


df2<-data.frame(expand.grid(x=seq(1,100,10), y=seq(1,100,10)))
df2$z<-my.fun(df2$x, df2$y)

wireframe(z ~ x*y, data=df2, col="grey",
panel.3d.wireframe=function(x,y,z,...){
panel.3dwire(x = x, y = y, z = z, ...)
panel.3dscatter(x=df$Var1,
y=df$Var2,
z=df$Var3,
...)
})

Which I coded according to what I saw here:
http://www.nabble.com/add-points-to-wireframe-td12901155.html
http://www.nabble.com/wireframe---add-data-points-td16984174.html

However, this plots the wireframe, then puts the error message:
Error using packet 1
'x' and 'units' must have length > 0

Any thoughts on what is incorrect here?

Thanks.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using ddply but preserving some of the outside data

2009-08-05 Thread Jarrett Byrnes
I have a bit of a quandary.  I'm working with a data set for which I  
have sampled sites at a variety of dates.  I want to use this data,  
and get a running average of the sampled values for the current and  
previous date.


I originally thought something like ddply would be ideal for this,  
however, I cannot break up my data by date, and then apply a function  
that requires information about the previous dates.


I had thought to use a for loop and merge, but that doesn't quite seem  
to be working.


So, my questions are twofold

1) Is there a way to use something like the plyr library to do this  
efficiently
	1a) Indeed, is there a way to use ddply or its ilk to have a function  
that returns a vector of values, and then assign the variables you are  
sorting by to the whole vector?  Or maybe making each value its own  
column in the new data frame, and then using reshape is the answer.   
Hrm.  Seems clunky.


2) Or, can a for loop around a plyr-kind of statement do the trick  
(and if so, pointers on why the below code won't work) (also, it, too,  
seems clunkier than I would like)



sites<-c("a", "b", "c")
dates<-1:5

a.df<-expand.grid(sites=sites, dates=dates)
a.df$value<-runif(15,0,100)
a.df<-as.data.frame(a.df)


#now, I want to get the running average of the values for the current and previous date
mean2<-function(df, date){
sub.df<-subset(df, df$dates-date<1 &
df$dates-date>-1 )
return(mean(df$value))
}

my.df<-data.frame(sites=NA, dates=NA, V1=NA)
for(a.date in a.df$dates){
new.df<-ddply(a.df, "sites", function(df) mean2 (df, a.date))
my.df<-merge(my.df, new.df) #doesn't seem to work
}

my.df

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Another SEM question

2009-07-21 Thread Jarrett Byrnes

You also need coefficients for your error terms.

On Jul 21, 2009, at 12:36 AM, Stein, Luba (AIM SE) wrote:


Hello Jarrett,

Thank you very much indeed for your help. I was able to solve my problem,  
and you were right that I had to specify the connections in the model  
correctly.

Thus the entry
model <- specify.model()
Z -> M, z1, NA
Z -> USM, z2, NA
Z -> R, z3, 1
M <-> M
USM <-> USM
R <-> R
Z <-> Z

works and gave me moreover a really good fit.

So thank you for your support once again.

Best wishes,
Luba







-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Stein, Luba (AIM SE)

Sent: Tuesday, July 21, 2009 09:13
To: Jarrett Byrnes
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

Hello,

Perhaps this is a good point. I use the Eclipse platform. The  
problem was that when I first used the structure
Z -> M, z1, NA the compiler took only the value Z ->M. Thus I erased  
it totally.

Maybe it is really an Eclipse problem.

Do you know how to solve this difficulty?

Thanks for all your support,
Luba






-Original Message-
From: Jarrett Byrnes [mailto:byr...@msi.ucsb.edu]
Sent: Tuesday, July 21, 2009 09:01
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: AW: [R] Another SEM question

Ah, the larger problem is in how you specify your model.  You provide
no parameter names, nor starting estimates (even an NA).  See the sem
help file for an example.  Basically, it must look something like the
following:


model <- specify.model()
Z -> M, zm, NA
Z -> I, zi, NA

etc.


On Jul 20, 2009, at 11:37 PM, Stein, Luba (AIM SE) wrote:



Hi,


[,1]  [,2]  [,3]
[1,]  4.820719e-03 -5.558801e-05 -5.718939e-05
[2,] -5.558801e-05  4.763194e-06 -7.661872e-06
[3,] -5.718939e-05 -7.661872e-06  1.662150e-03

This is mod.cov. It is the covariance matrix of (R, I, M).
R, I and M are vectors of length 109 which are contained in the file
data4.csv.

As far as I understood the package sem, I consider R, I and M as the
external variables and Z as the latent variable which I will receive
as a result after calculating the estimated errors and parameters.
This is what actually is missing in the output.
Moreover, the output provides the information about the quality of
the fitted model. I have to admit that this model does not fit quite
well.
Nevertheless, it should provide the estimated errors like it does
just for the first variable Z ->M.

Thanks a lot for your help,
Luba


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jarrett Byrnes
Sent: Tuesday, July 21, 2009 08:19
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

You don't appear to be defining Z here.

Might that be the problem?

Or, I, M, and R may not be defined either.  It is unclear.  What does
mod.cov look like?

On Jul 20, 2009, at 11:11 PM, Stein, Luba (AIM SE) wrote:


Thank you for your advice. So I am sending the whole code

data.dir <- file.path(home.dir, "Data")
file <- file.path(data.dir, "data4.csv")

SEM <- read.csv(file)
print(SEM)

library(sem)
SEM1 <- as.matrix(cbind(SEM$R1, SEM$I1, SEM$M1))
print(SEM1)
mod.cov <- cov(SEM1)
print(mod.cov)

I <- SEM$I1
M <- SEM$M1
R <- SEM$R1

model <- specify.model()
Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

sem.mod <- sem(model, mod.cov, N=109)
summary(sem.mod)



All vectors have a length of 109.


Thank you for your help once again.

Best wishes,
Luba



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jarrett Byrnes
Sent: Monday, July 20, 2009 18:25
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

Luba,

If you could provide the code you ran, perhaps the listserv can be  
of

help.

On Jul 20, 2009, at 7:55 AM, Stein, Luba (AIM SE) wrote:


Hello,

I use the function sem the following way
sem.mod <- sem(model, mod.cov, N=109) where the variables are
modelled:

Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

The output is
...

Normalized Residuals
Min. 1st Qu. Median Mean 3rd Qu. Max.
-7.3300 -0.2750 -0.2670 -0.1290 -0.0369 9.0300

Parameter Estimates
Estimate Std Error z value Pr(>|z|)
0.0021625 0.00017037 12.693 0 M <--- Z

Iterations = 13


In "Structural Equation Modeling With the sem Package in R" by John
Fox is stated that there should be an output for each external
variable.

Where is my mistake, that I receive the output only for the first
variable?


Thanks for your help,
Luba





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list

Re: [R] Another SEM question

2009-07-21 Thread Jarrett Byrnes
Ah, the larger problem is in how you specify your model.  You provide  
no parameter names, nor starting estimates (even an NA).  See the sem  
help file for an example.  Basically, it must look something like the  
following:



model <- specify.model()
Z -> M, zm, NA
Z -> I, zi, NA

etc.


On Jul 20, 2009, at 11:37 PM, Stein, Luba (AIM SE) wrote:



Hi,


 [,1]  [,2]  [,3]
[1,]  4.820719e-03 -5.558801e-05 -5.718939e-05
[2,] -5.558801e-05  4.763194e-06 -7.661872e-06
[3,] -5.718939e-05 -7.661872e-06  1.662150e-03

This is mod.cov. It is the covariance matrix of (R, I, M).
R, I and M are vectors of length 109 which are contained in the file  
data4.csv.


As far as I understood the package sem, I consider R, I and M as the  
external variables and Z as the latent variable which I will receive  
as a result after calculating the estimated errors and parameters.  
This is what actually is missing in the output.
Moreover, the output provides the information about the quality of  
the fitted model. I have to admit that this model does not fit quite  
well.
Nevertheless, it should provide the estimated errors like it does  
just for the first variable Z ->M.


Thanks a lot for your help,
Luba


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jarrett Byrnes

Sent: Tuesday, July 21, 2009 08:19
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

You don't appear to be defining Z here.

Might that be the problem?

Or, I, M, and R may not be defined either.  It is unclear.  What does
mod.cov look like?

On Jul 20, 2009, at 11:11 PM, Stein, Luba (AIM SE) wrote:


Thank you for your advice. So I am sending the whole code

data.dir <- file.path(home.dir, "Data")
file <- file.path(data.dir, "data4.csv")

SEM <- read.csv(file)
print(SEM)

library(sem)
SEM1 <- as.matrix(cbind(SEM$R1, SEM$I1, SEM$M1))
print(SEM1)
mod.cov <- cov(SEM1)
print(mod.cov)

I <- SEM$I1
M <- SEM$M1
R <- SEM$R1

model <- specify.model()
Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

sem.mod <- sem(model, mod.cov, N=109)
summary(sem.mod)



All vectors have a length of 109.


Thank you for your help once again.

Best wishes,
Luba



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jarrett Byrnes
Sent: Monday, July 20, 2009 18:25
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

Luba,

If you could provide the code you ran, perhaps the listserv can be of
help.

On Jul 20, 2009, at 7:55 AM, Stein, Luba (AIM SE) wrote:


Hello,

I use the function sem the following way
sem.mod <- sem(model, mod.cov, N=109) where the variables are
modelled:

Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

The output is
...

Normalized Residuals
Min. 1st Qu. Median Mean 3rd Qu. Max.
-7.3300 -0.2750 -0.2670 -0.1290 -0.0369 9.0300

Parameter Estimates
Estimate Std Error z value Pr(>|z|)
0.0021625 0.00017037 12.693 0 M <--- Z

Iterations = 13


In "Structural Equation Modeling With the sem Package in R" by John
Fox is stated that there should be an output for each external
variable.

Where is my mistake, that I receive the output only for the first
variable?


Thanks for your help,
Luba





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Another SEM question

2009-07-20 Thread Jarrett Byrnes

You don't appear to be defining Z here.

Might that be the problem?

Or, I, M, and R may not be defined either.  It is unclear.  What does  
mod.cov look like?


On Jul 20, 2009, at 11:11 PM, Stein, Luba (AIM SE) wrote:


Thank you for your advice. So I am sending the whole code

data.dir <- file.path(home.dir, "Data")
file <- file.path(data.dir, "data4.csv")

SEM <- read.csv(file)
print(SEM)

library(sem)
SEM1 <- as.matrix(cbind(SEM$R1, SEM$I1, SEM$M1))
print(SEM1)
mod.cov <- cov(SEM1)
print(mod.cov)

I <- SEM$I1
M <- SEM$M1
R <- SEM$R1

model <- specify.model()
Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

sem.mod <- sem(model, mod.cov, N=109)
summary(sem.mod)



All vectors have a length of 109.


Thank you for your help once again.

Best wishes,
Luba



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jarrett Byrnes

Sent: Monday, July 20, 2009 18:25
To: Stein, Luba (AIM SE)
Cc: r-help@r-project.org
Subject: Re: [R] Another SEM question

Luba,

If you could provide the code you ran, perhaps the listserv can be of
help.

On Jul 20, 2009, at 7:55 AM, Stein, Luba (AIM SE) wrote:


Hello,

I use the function sem the following way
sem.mod <- sem(model, mod.cov, N=109) where the variables are
modelled:

Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

The output is
...

Normalized Residuals
Min. 1st Qu. Median Mean 3rd Qu. Max.
-7.3300 -0.2750 -0.2670 -0.1290 -0.0369 9.0300

Parameter Estimates
Estimate Std Error z value Pr(>|z|)
0.0021625 0.00017037 12.693 0 M <--- Z

Iterations = 13


In "Structural Equation Modeling With the sem Package in R" by John
Fox is stated that there should be an output for each external
variable.

Where is my mistake, that I receive the output only for the first
variable?


Thanks for your help,
Luba





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Another SEM question

2009-07-20 Thread Jarrett Byrnes

Luba,

If you could provide the code you ran, perhaps the listserv can be of  
help.


On Jul 20, 2009, at 7:55 AM, Stein, Luba (AIM SE) wrote:


Hello,

I use the function sem the following way
sem.mod <- sem(model, mod.cov, N=109) where the variables are  
modelled:


Z -> M
Z -> I
Z -> R
M <-> M
I <-> I
R <-> R
Z <-> Z

The output is
...

Normalized Residuals
Min. 1st Qu. Median Mean 3rd Qu. Max.
-7.3300 -0.2750 -0.2670 -0.1290 -0.0369 9.0300

Parameter Estimates
Estimate Std Error z value Pr(>|z|)
0.0021625 0.00017037 12.693 0 M <--- Z

Iterations = 13


In "Structural Equation Modeling With the sem Package in R" by John  
Fox is stated that there should be an output for each external  
variable.


Where is my mistake, that I receive the output only for the first  
variable?



Thanks for your help,
Luba





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] altering a global variable

2009-06-15 Thread Jarrett Byrnes
I'm working on a program that loads several large data files.  I'm  
using ddply (plyr is really awesome) but I want to minimize the amount  
of times a large data file is read in.  One solution is to keep track  
of which data file is open with a global variable and then change the  
currently open data file globally as needed.  However, I'm unclear on  
how to alter a global variable from within a function.  For example

m<-3
a<-function(x) m<-4

a(3)

m

shows that m is still 3 at the end of it all.  With the code above,  
how could I modify the function a to actually set m to whatever value  
I wanted globally?
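
For instance, is the <<- operator (or an explicit assign() into the global 
environment) the sanctioned way?

m <- 3
a <- function(x) m <<- 4   # or: assign("m", 4, envir=globalenv())
a(3)
m                          # now 4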

Thanks for any pointers!

-Jarrett




--------
Jarrett Byrnes
Postdoctoral Associate, Santa Barbara Coastal LTER
Marine Science Institute
University of California Santa Barbara
Santa Barbara, CA 93106-6150
http://www.lifesci.ucsb.edu/eemb/labs/cardinale/people/byrnes/index.html


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] SEM/path question

2009-05-30 Thread Jarrett Byrnes
I've enjoyed Jim Grace's Structural Equation Modeling and Natural  
Systems


http://www.amazon.com/Structural-Equation-Modeling-Natural-Systems/dp/0521546532/ref=sr_1_2?ie=UTF8&s=books&qid=1243719710&sr=8-2

 as well as Rex Kline's  Principles and Practice of Structural  
Equation Modeling


http://www.amazon.com/Principles-Practice-Structural-Equation-Methodology/dp/1572306904/ref=pd_sim_b_9

On May 29, 2009, at 12:27 PM, Erin Hodgess wrote:


Dear R People:

Could someone recommend a "baby book" on path analysis and SEM,  
please?


Or if someone has an example that they use in the classroom setting,
that would be very cool too.

Thanks in advance,
Sincerely,
Erin


--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodg...@gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CFA in R/sem package

2009-04-09 Thread Jarrett Byrnes
Sure, something like that.  Store each model as an element of a list,  
and then use something like


for(i in 1:4){
    indices <- combn(1:4, i)
    for (j in 1:ncol(indices)){
        new.model <- combine.models(model.pieces[ indices[,j] ])
        # code for analyzing each combined model goes here
    }
}

Or, if this doesn't fit your problem exactly, some similar approach.

On Apr 9, 2009, at 4:23 PM, Iuri Gavronski wrote:


Jarret,

I've donwloaded the zip file and installed, but maybe have lost some
pre-req check. I have manually installed sna.

Anyway, which would be the approach you suggest? Making (using my
example) 4 different models, one for each construct, then use
combine.models and add.to.models to create the 12 models to be
compared?

Best,

Iuri.

On Thu, Apr 9, 2009 at 8:13 PM, Jarrett Byrnes   
wrote:
install.packages("sem-additions",repos="http://R-Forge.R- 
project.org")


Sorry, it's sem-additions on r-forge.  Not sem.additions, which is  
what I
had originally called it.  But they won't take . in the name of a  
package.


On Apr 9, 2009, at 4:07 PM, Iuri Gavronski wrote:


Jarret,

Look:


install.packages("sem.additions", repos="http://R-Forge.R-project.org 
")


Warning message:
package ‘sem.additions’ is not available




Best,

Iuri.

On Thu, Apr 9, 2009 at 3:10 PM, Jarrett Byrnes 
wrote:


Iuri,

I recently put together the sem.additions package over at R forge in part
for just such a multiple model problem.  There are a variety of methods
that make it easy to add/delete links that could be automated with a for
loop and the combn() function, I think.

http://r-forge.r-project.org/projects/sem-additions/

-Jarrett

On Apr 9, 2009, at 6:39 AM, Iuri Gavronski wrote:


Hi,

I am not sure if R-help is the right forum for my question. If not,
please let me know.

I have to do some discriminant validity tests with some constructs. I
am using the method of doing a CFA constraining the correlation of a
pair of the constructs to 1 and comparing the chi-square of this
constrained model to the unconstrained model. If the chi-square
difference is not significant, then I cannot reject the null
hypothesis that the two constructs are equal.

Well, if you are going to test, say, 4 constructs (A, B, C, and D),
you will have to have 2*C(4,2) = 12 models to test, 5 constructs, 20
models, and so forth. A tedious and error prone process...

So far, I have been using AMOS for that sake, given that 1) my
university has the license, 2) my other colleagues use it, and 3) I
know it ;)

I would like to know if any of you use R, namely the sem package, for
that application and if you can share your thoughts/experiences on
using it. I don't think I would have problems "porting" my models to
R/sem, but I would like to know if there is an optimized process of
doing those tests, without manually coding all the dozens of models.


Best,

Iuri.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CFA in R/sem package

2009-04-09 Thread Jarrett Byrnes

install.packages("sem-additions",repos="http://R-Forge.R-project.org";)

Sorry, it's sem-additions on r-forge.  Not sem.additions, which is  
what I had originally called it.  But they won't take . in the name of  
a package.


On Apr 9, 2009, at 4:07 PM, Iuri Gavronski wrote:


Jarret,

Look:
install.packages("sem.additions", repos="http://R-Forge.R- 
project.org")

Warning message:
package ‘sem.additions’ is not available




Best,

Iuri.

On Thu, Apr 9, 2009 at 3:10 PM, Jarrett Byrnes   
wrote:

Iuri,

I recently put together the sem.additions package over at R forge in part
for just such a multiple model problem.  There are a variety of methods
that make it easy to add/delete links that could be automated with a for
loop and the combn() function, I think.

http://r-forge.r-project.org/projects/sem-additions/

-Jarrett

On Apr 9, 2009, at 6:39 AM, Iuri Gavronski wrote:


Hi,

I am not sure if R-help is the right forum for my question. If not,
please let me know.

I have to do some discriminant validity tests with some  
constructs. I

am using the method of doing a CFA constraining the correlation of a
pair of the constructs to 1 and comparing the chi-square of this
constrained model to the unconstrained model. If the chi-square
difference is not significant, then I cannot reject the null
hypothesis that the two constructs are equal.

Well, if you are going to test, say, 4 constructs (A, B, C, and D),
you will have to have 2*C(4,2) = 12 models to test, 5 constructs, 20
models, and so forth. A tedious and error prone process...

So far, I have been using AMOS for that sake, given that 1) my
university has the license, 2) my other colleagues use it, and 3) I
know it ;)

I would like to know if any of you use R, namely the sem package,  
for

that application and if you can share your thoughts/experiences on
using it. I don't think I would have problems "porting" my models to
R/sem, but I would like to know if there is an optimized process of
doing those tests, without manually coding all the dozens of models.

Best,

Iuri.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CFA in R/sem package

2009-04-09 Thread Jarrett Byrnes

Iuri,

I recently put together the sem.additions package over at R forge in  
part for just such a multiple model problem.  There are a variety of  
methods that make it easy to add/delete links that could be automated  
with a for loop and the combn() function, I think.


http://r-forge.r-project.org/projects/sem-additions/

-Jarrett

On Apr 9, 2009, at 6:39 AM, Iuri Gavronski wrote:


Hi,

I am not sure if R-help is the right forum for my question. If not,
please let me know.

I have to do some discriminant validity tests with some constructs. I
am using the method of fitting a CFA that constrains the correlation
of a pair of the constructs to 1 and comparing the chi-square of this
constrained model to that of the unconstrained model. If the
chi-square difference is not significant, then I cannot reject the
null hypothesis that the two constructs are equal.

Well, if you are going to test, say, 4 constructs (A, B, C, and D),
you will have to fit 2*C(4,2) = 12 models; with 5 constructs, 20
models; and so forth. A tedious and error-prone process...

So far, I have been using AMOS for that task, given that 1) my
university has the license, 2) my other colleagues use it, and 3) I
know it ;)

I would like to know if any of you use R, namely the sem package, for
that application, and if you can share your thoughts/experiences on
using it. I don't think I would have problems "porting" my models to
R/sem, but I would like to know if there is an optimized process for
doing those tests, without manually coding all the dozens of models.

Best,

Iuri.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




Re: [R] Ryan's Q Post-Hoc for ANOVA

2008-11-21 Thread Jarrett Byrnes
I realize this is a little late, but I recently ended up rolling my  
own Ryan's Q in R.  If anyone is interested, or wishes to make  
improvements, you can find it here:


http://homes.msi.ucsb.edu/~byrnes/r_files/ryans_q.r
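
For the curious, the core of the procedure is a step-down studentized
range test with an adjusted alpha; a minimal sketch of the critical
values, assuming a balanced one-way design:

# Ryan's Q (REGWQ) alpha for a stretch of p adjacent ordered means out
# of k groups: 1 - (1 - alpha)^(p/k) for p < k - 1, plain alpha otherwise
ryan.alpha <- function(p, k, alpha = 0.05) {
  if (p >= k - 1) alpha else 1 - (1 - alpha)^(p / k)
}
# the critical value comes from the studentized range distribution
ryan.crit <- function(p, k, df, alpha = 0.05) {
  qtukey(1 - ryan.alpha(p, k, alpha), nmeans = p, df = df)
}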

On Oct 25, 2005, at 10:15 AM, Jarrett Byrnes wrote:


I'm using lm to run an ANOVA, and would like to use Ryan's Q as my
post-hoc (as recommended by Day and Quinn, 1989, Ecological
Monographs).  I can't seem to find any methods in the base stats
package that implement this post-hoc.  Is there a good package of
post-hoc methods out there, or has someone written a method for Ryan's
Q previously?

Thanks!

-Jarrett




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Displaying Equations in Documentation

2008-08-25 Thread Jarrett Byrnes
I'm currently working on writing up some documentation for some of my
code, but am having the darndest time coding in equations.  For
example, take the equation in the following:


\details{ Calculates the R Squared for observed endogenous variables
in a structural equation model, as well as several other useful
summary statistics about the error in those variables.

R Squared values are calculated as

\deqn{R^{2} = 1-\frac{estimated variance}{observed variance}}

Standardized error coefficients are then calculated as sqrt(1 - R^2).
}

While it shows normally using R CMD Rd2dvi, when I actually compile
and load the package, it displays as follows:


R^{2} = 1-frac{estimated variance}{observed variance}


I have also tried

\deqn{R^{2} = 1-\frac{{estimated variance}{observed variance}}}

and

\deqn{R^{2} = \frac{1-{estimated variance}{observed variance}}}

with the same result - Rd2dvi is happy, but the display is still wonky  
in practice.  I've also tried subbing in \eqn{R^{2}} in the rest of  
the text in a few places, but, again, it shows as R^{2}.  Is there  
something I'm missing about inserting equations into R documentation?
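
One likely fix: both \deqn and \eqn accept an optional second argument
giving a plain-text version, which the text help format shows verbatim
instead of the stripped LaTeX.  For the equation above, something like:

\deqn{R^{2} = 1-\frac{estimated variance}{observed variance}}{R^2 = 1 - estimated variance / observed variance}

and \eqn{R^{2}}{R^2} for the inline uses.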


-Jarrett

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] question about se of predicted glm values

2008-05-05 Thread Jarrett Byrnes
Hey, all.  I had a quick question about fitting new glm values and
then looking at the error around them.  I'm working with a glm using a
Gamma distribution and a log link with two types of treatments.
However, when I then look at the predicted values for each category, I
find that for the one close to 0, the error (using se.fit=TRUE with
predict()) actually makes it overlap 0.  This should not be possible,
as negative values have no meaning.

Am I missing something in the interpretation?  I'm sure I am.  Is
there a better way to plot this that is accurate?  Below is some
sample code for an example problem.  Note that the error bar for the
value for category a (plotted at the end) does cross 0.

Note: this is a simple example.  The model I'm using has covariates,
etc., but I wanted to work out the correct method for the simple
example so that I could take it back to my more complex model.  Thanks!

#data
x<-as.factor(c(rep("a",4),rep("b",4)))
y<-c(0.5,2,0.3,1.2,32.6,43,23.8,2.92)

#plot the raw data
plot(y ~ as.factor(x))

#fit a glm
my.glm<-glm(y ~ x, family=Gamma(link=log))

#get predicted values and their error
a.fit<-predict(my.glm, data.frame(x="a"), se.fit=T)
b.fit<-predict(my.glm, data.frame(x="b"), se.fit=T)

#visualize it and see the SE cross 0
plot(1:2,c(a.fit$fit,b.fit$fit), pch=19, ylim=c(-2,4))
segments(1:2, c(a.fit$fit-a.fit$se.fit, b.fit$fit-b.fit$se.fit),
1:2, c(a.fit$fit+a.fit$se.fit, b.fit$fit+b.fit$se.fit))
lines(0:3,rep(0,4), lty=2)
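
A sketch of one way around the problem, using a 95% interval via the
normal approximation: build the interval on the link (log) scale, then
back-transform with exp(), which keeps the bounds strictly positive:

#interval on the log scale, then exponentiate; the back-transformed
#bounds can never cross 0
a.lo   <- exp(a.fit$fit - 1.96 * a.fit$se.fit)
a.hi   <- exp(a.fit$fit + 1.96 * a.fit$se.fit)
a.mean <- exp(a.fit$fit)
c(lower = a.lo, mean = a.mean, upper = a.hi)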
        

-Jarrett





Jarrett Byrnes
Population Biology Graduate Group, UC Davis
Bodega Marine Lab
707-875-1969
http://www-eve.ucdavis.edu/stachowicz/byrnes.shtml



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] glht with a glm using a Gamma distribution

2008-04-14 Thread Jarrett Byrnes
Quick question about the usage of glht.  I'm working with a data set
from an experiment where the response is continuous, bounded at 0, and
has a variance that increases with the mean.  A Gamma error
distribution with a log link seemed like the logical choice, and so
I've modeled it as such.

However, when I use glht to look for differences between groups, I get
significant differences where there are none.  Now, I'm all for
eyeballing means +/- 95% CIs, but I've had reviewers and committee
members all tell me that I needed formal tests.  Oy.  Here's the code
and some of the sample data; when visualized, the data are clearly not
different in the comparisons I'm making, and yet glht (at least the
way I'm using it, which might be improper) says that the differences
are there.

Hrm.

I'm guessing I'm just using glht improperly, but, any help would be  
appreciated!

trt<-c("d", "b", "c", "a", "a", "d", "b", "c", "c", "d", "b", "a")
trt<-as.factor(trt)

resp <- c(0.432368576, 0.265148862, 0.140761439, 0.218506998,
          0.105017007, 0.140137615, 0.205552589, 0.081970097,
          0.24352179, 0.158875904, 0.150195422, 0.187526698)

#take a gander at the lack of differences
boxplot(resp ~ trt)

#model it
a.glm<-glm(resp ~ trt, family=Gamma(link="log"))

summary(a.glm)

#set up the contrast matrix
contra<-rbind("A v. B" = c(-1,1,0,0),
"A v. C" = c(-1,0,1,0),
"A v. D" = c(-1,0,0,1))
library(multcomp)   
summary(glht(a.glm, linfct=contra))
  ---
Yields:

Linear Hypotheses:
 Estimate Std. Error z value p value
A v. B == 0   1.9646 0.6201   3.168 0.00314 **
A v. C == 0   1.6782 0.6201   2.706 0.01545 *
A v. D == 0   2.1284 0.6201   3.433 0.00137 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported)
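
For what it's worth, the likely culprit is the contrast matrix: with
R's default treatment contrasts, coef(a.glm) is (Intercept), trtb,
trtc, trtd, so trtb is already the difference B - A on the log scale.
The rows above subtract the intercept from those differences, which is
not a pairwise comparison.  A sketch of contrasts that do test the
pairwise differences:

contra <- rbind("A v. B" = c(0, 1, 0, 0),
                "A v. C" = c(0, 0, 1, 0),
                "A v. D" = c(0, 0, 0, 1))
summary(glht(a.glm, linfct = contra))
#or, equivalently, let multcomp build the comparisons to baseline:
#summary(glht(a.glm, linfct = mcp(trt = "Dunnett")))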


-Jarrett





Jarrett Byrnes
Population Biology Graduate Group, UC Davis
Bodega Marine Lab
707-875-1969
http://www-eve.ucdavis.edu/stachowicz/byrnes.shtml



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] manova with non-normal error distribution

2008-03-13 Thread Jarrett Byrnes
This may be a silly question to ask, but is it possible to do a
MANOVA-style analysis with a generalized linear model?  I have a data
set in which each response variable (time in this case, as it's a
repeated-measures MANOVA) is fit much better by a glm than by a
traditional linear model, even after transformation: I'm
log-transforming the data for lm, whereas under glm the data are most
appropriate for a model with a Poisson error and a log link.

However, under glm, if I try to fit my matrix of response variables, I
get the error
Error: (subscript) logical subscript too long

Is there a library or method of analysis that handles this type of  
problem?
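
In case a sketch helps: glm() does not accept a multi-column response
the way lm() does for MANOVA, but reshaping to long format and
modeling time as a factor gets at a similar question.  The data frame
and column names below are hypothetical, and this treats the repeated
measures as independent; a mixed model would handle the within-subject
correlation:

#reshape wide repeated measures into long format
long <- reshape(mydata, direction = "long",
                varying = c("t1", "t2", "t3"), v.names = "count",
                timevar = "time")
#one glm with a treatment-by-time interaction
fit <- glm(count ~ treatment * factor(time),
           family = poisson(link = "log"), data = long)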

Much obliged!

-Jarrett




--------
Jarrett Byrnes
Population Biology Graduate Group, UC Davis
Bodega Marine Lab
707-875-1969
http://www-eve.ucdavis.edu/stachowicz/byrnes.shtml



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Asking, are simple effects different from 0

2008-03-04 Thread Jarrett Byrnes
Hello, R-i-zens.  I'm working on a data set with a factorial ANOVA
that has a significant interaction.  I'm interested in seeing whether
the simple effects are different from 0, and I'm pondering how to do
this.  So, I have

my.anova<-lm(response ~ trtA*trtB)

The output for which gives me a number of coefficients and whether  
they are different from 0.  However, I want the simple effects only,  
incorporating their intercepts, with their error estimates.  Can I  
somehow manipulate this object to get that?  Or, would a good shortcut  
be

my.simple.anova<-lm(response ~ trtA:trtB + 0)

and then use those coefficients and their error estimates?

If so, I realize that R gives me t-tests for each coefficient.  Now,
for those I know I'm using the residual degrees of freedom.  Would it
then be more appropriate to use those, all with the same residual df,
and apply a Bonferroni correction, or to use the mean and SE estimate
with the sample size for that particular treatment and perform an
uncorrected one-sample t-test to see if the value is different from 0?
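
A sketch of the cell-means route with the Bonferroni correction
applied afterwards, assuming trtA and trtB are factors in a data frame
dat (names hypothetical):

my.simple.anova <- lm(response ~ trtA:trtB + 0, data = dat)
#each coefficient is now a cell mean, tested against 0 with the pooled
#residual df
p.raw <- summary(my.simple.anova)$coefficients[, "Pr(>|t|)"]
p.adjust(p.raw, method = "bonferroni")
#confidence intervals for each cell mean, same pooled df
confint(my.simple.anova)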

Sorry for the bonehead question, but it's a situation I haven't seen  
before.

-Jarrett

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] difference between lme and lmer in df calculation

2008-02-17 Thread Jarrett Byrnes
Hello all.  I'm currently working with mixed models, and have noticed
a curious difference between the nlme and lme4 packages.  While I
realize that model selection with mixed models is a tricky issue, the
two packages currently produce different AIC scores for the same
model, and they systematically differ by 2.  Looking at the logLik
objects for each method, I find that their df indeed differ by 1.  So,
the following code:

utils::data(npk, package="MASS")

library(lme4)
a<-lmer(yield ~ 1+(1|block), data=npk)
logLik(a)

library(nlme)
b<-lme(yield ~ 1, random=~1|block, data=npk)
logLik(b)

produces a df of 2 for a, and a df of 3 for b.  I'm guessing that lmer  
is not accounting for the level-1 variance.  Is this the case, and, if  
so, will this be fixed?
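
A quick check of the bookkeeping (since AIC = -2*logLik + 2*df, a
one-df difference shifts AIC by exactly 2):

ll.a <- logLik(a); ll.b <- logLik(b)
#df is carried as an attribute of the logLik object
-2 * as.numeric(ll.a) + 2 * attr(ll.a, "df")
-2 * as.numeric(ll.b) + 2 * attr(ll.b, "df")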

I see that this issue was brought up sometime back.  Is there a reason  
it has not been addressed?
https://stat.ethz.ch/pipermail/r-help/2006-March/102520.html

Incidentally, I'm also curious what folk think about the approach to  
using the conditional AIC value as posted here
https://stat.ethz.ch/pipermail/r-help/2008-February/154389.html

Thanks!

-Jarrett

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] extracting AIC scores from lmer and other objects

2008-02-04 Thread Jarrett Byrnes
I have a slight conundrum.  I'm attempting to write a script that will
take a number of objects (lm, glm, and lmer) and return AIC scores
and weights.  I've run into 3 problems, and was wondering if anyone
had any pointers.

1) is there any convenient way to extract the name of the objects?   
Simply, if I have a vector of objects c(my.lm, my.lmer) and I want to  
get a character vector c("my.lm", "my.lmer"), how would one do this?   
I see this as far simpler than jumping into the ugly details of each  
object type and extracting coefficient lists and such - not to  
mention tidier.

2) I'm repeatedly getting the error

Error in UseMethod("logLik") : no applicable method for "logLik"

in a variety of different contexts.  The first is when I try to get
an AIC for an lmer object.  AIC(my.lmer) gives me the error above.
However, I can circumvent this with a very silly solution -

myAIC <- function(object) {
  a <- logLik(object)
  -2 * a[1] + 2 * attr(a, "df")
}

I use this, and I do not get an error.

3) I do, however, get the above error if I have a vector of model
objects.  So, again, if I have something like model.list<-c(my.lm,
my.lmer) or even just c(my.lm, my.lm2) and then call the following on
the vector of models

aiclist <- vector()
for (index in 1:length(model.list)) {
  aiclist <- c(aiclist, myAIC(model.list[index]))
}

it again yields the Error in UseMethod("logLik").  Given that this is
true for lm, glm, and lmer objects alike, I'm guessing there's a more
general issue here that I'm missing.  Any pointers?
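
A sketch of the likely fix for both 1) and 3): c() flattens model
objects (which are themselves lists) into one long list of their
components, so collect the models with list() and index with [[ ]];
names can be attached at the same time:

model.list <- list(my.lm = my.lm, my.lmer = my.lmer)
#[[ ]] returns the model itself; [ ] would return a one-element list
aiclist <- sapply(model.list, myAIC)
names(model.list)   #the character vector asked about in 1)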

Thanks!

-Jarrett





Jarrett Byrnes
Population Biology Graduate Group, UC Davis
Bodega Marine Lab
707-875-1969
http://www-eve.ucdavis.edu/stachowicz/byrnes.shtml



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.