Joerg Evermann has a great solution to this. He changed the middle of the code in sem.mod (his modified copy of sem) to include a fit variable, and then used the following approach around the point where the objective functions are defined:

    if (fit == "ml") {
        objective.1 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            Cinv <- solve(C)
            f <- sum(diag(S %*% Cinv)) + log(det(C))
            attributes(f) <- list(C=C, A=A, P=P)
            f
            }
        objective.2 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            Cinv <- solve(C)
            f <- sum(diag(S %*% Cinv)) + log(det(C))
            grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
            grad.A <- grad.P %*% P %*% t(I.Ainv)
            gradient <- rep(0, t)
            gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
            gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
            attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
            f
            }
        }
    if (fit == "gls") {
        objective.1 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            Sinv <- solve(S)
            f <- 0.5 * sum(diag(((S - C) %*% Sinv) %*% ((S - C) %*% Sinv)))
            attributes(f) <- list(C=C, A=A, P=P)
            f
            }
        objective.2 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            Sinv <- solve(S)
            # 0.5 factor added so that objective.2 matches objective.1
            f <- 0.5 * sum(diag(((S - C) %*% Sinv) %*% ((S - C) %*% Sinv)))
            # The analytic gradient below is the ML gradient, presumably
            # left commented out because it does not apply to GLS:
            # grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
            # grad.A <- grad.P %*% P %*% t(I.Ainv)
            # gradient <- rep(0, t)
            # gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
            # gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
            # attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
            attributes(f) <- list(C=C, A=A, P=P)
            f
            }
        }
    if (fit == "uls") {
        objective.1 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            # no inversion of S or C is needed for ULS
            f <- 0.5 * sum(diag((S - C) %*% (S - C)))
            attributes(f) <- list(C=C, A=A, P=P)
            f
            }
        objective.2 <- function(par){
            A <- P <- matrix(0, m, m)
            val <- ifelse(fixed, ram[,5], par[sel.free])
            A[arrows.1] <- val[one.head]
            P[arrows.2t] <- P[arrows.2] <- val[!one.head]
            I.Ainv <- solve(diag(m) - A)
            C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
            f <- 0.5 * sum(diag((S - C) %*% (S - C)))
            # The ML gradient, commented out as above, since it does not
            # apply to ULS:
            # grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
            # grad.A <- grad.P %*% P %*% t(I.Ainv)
            # gradient <- rep(0, t)
            # gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
            # gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
            # attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
            attributes(f) <- list(C=C, A=A, P=P)
            f
            }
        }
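
With that change in place, the patched function would be called with the new fit argument. A minimal sketch of the intended usage, assuming the patched copy is named sem.mod and keeps sem's original (ram, S, N) signature; mod.ram, S.obs, and N.obs are placeholders, not objects from the post above:

    # hypothetical calls to the patched function
    fit.ml  <- sem.mod(mod.ram, S.obs, N.obs, fit = "ml")
    fit.gls <- sem.mod(mod.ram, S.obs, N.obs, fit = "gls")
    fit.uls <- sem.mod(mod.ram, S.obs, N.obs, fit = "uls")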


On Dec 2, 2009, at 10:22 AM, Jeremy Miles wrote:

In the world of SEM, GLS has pretty much fallen by the wayside - I
can't recall anything I've seen arguing for its use in the past 10
years, and I also can't recall anyone using it over ML. The
recommendations for non-normal distributions tend to be robust ML or
robust weighted least squares. These are more computationally
intensive, and I *think* that John Fox (the author of sem) has written
somewhere that it wouldn't be possible to implement them in R without
using a lower-level language - or rather, that it might be possible,
but it would be really, really slow.

However, ML and GLS are pretty similar; if you dig around in the
source code, you can probably make the change (see
http://www2.gsu.edu/~mkteer/discrep.html, for example, for the
equations; in fact, GLS is somewhat computationally simpler, since you
don't need to invert the implied covariance matrix at each iteration).
On the other hand, the fact that the change is easy to make, and yet
no one has made it, is another argument that it's not a change that
needs to be made.
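
To make the comparison concrete, here is a small illustration of the
two discrepancy functions (a sketch with made-up toy matrices, not
code from the sem source; the ML value drops the constant terms
log(det(S)) and p, as the code above does, since they do not affect
the optimum):

    # toy observed (S) and model-implied (C) covariance matrices
    S <- matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2)
    C <- matrix(c(1.0, 0.4, 0.4, 1.0), 2, 2)

    # ML discrepancy: requires solve(C), and C changes at every iteration
    F.ml <- sum(diag(S %*% solve(C))) + log(det(C))

    # GLS discrepancy: only S is inverted, and S is fixed, so solve(S)
    # could be computed once before the iterations start
    Sinv <- solve(S)
    D <- (S - C) %*% Sinv
    F.gls <- 0.5 * sum(diag(D %*% D))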

Jeremy



2009/12/2 Ralf Finne <ralf.fi...@novia.fi>:
Hi R-colleagues.

I have been using the sem() function from the sem package. It uses
maximum likelihood as its optimization method. According to a
simulation study from Umeå, Sweden
(http://www.stat.umu.se/kursweb/vt07/stad04mom3/?download=UlfHolmberg.pdf
- sorry, it is in Swedish except for the abstract), maximum likelihood
is OK for large samples and normally distributed data; otherwise the
SEM problem should be optimized by GLS (Generalized Least Squares).


So to the question:

Is there any R-function that solves SEM with GLS?


Ralf Finne
Novia University of Applied Sciences
Vasa, Finland

--
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com
