[R] Question about installing the rapache package
Hi, this is Charlie and I am trying to embed R in my Apache server. However, I am having a problem with the installation. The R project says that we can install an add-on package with the command

$ R CMD INSTALL rapache.0.1.4.tar.gz

However, when I try to use it in a command window (Windows XP) an error appears. Please tell me where I should run the command (in C:\ or in the same folder as the file). Thanks.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Re: [R] Simplify simple code
On Apr 16, 2007, at 1:37 AM, Dong-hyun Oh wrote:

> Dear expeRts,
>
> I would like to simplify the following code.
>
> youtput <- function(x1, x2){
>   n <- length(x1)
>   y <- vector(mode="numeric", length=n)
>   for(i in 1:n){
>     if(x1[i] >= 5 & x1[i] <= 10 & x2[i] >= 5 & x2[i] <= 10)
>       y[i] <- 0.631 * x1[i]^0.55 * x2[i]^0.65
>     if(x1[i] >= 10 & x1[i] <= 15 & x2[i] >= 5 & x2[i] <= 10)
>       y[i] <- 0.794 * x1[i]^0.45 * x2[i]^0.65
>     if(x1[i] >= 5 & x1[i] <= 10 & x2[i] >= 10 & x2[i] <= 15)
>       y[i] <- 1.259 * x1[i]^0.55 * x2[i]^0.35
>     if(x1[i] >= 10 & x1[i] <= 15 & x2[i] >= 10 & x2[i] <= 15)
>       y[i] <- 1.585 * x1[i]^0.45 * x2[i]^0.35
>   }
>   y
> }
>
> Can anyone help me?

I hope someone comes up with something better, but here is one way:

youtput <- function(x1, x2) {
  co1 <- matrix(c(0.631, 0.794, 1.259, 1.585), 2, 2)
  co2 <- c(0.55, 0.45)
  co3 <- c(0.65, 0.35)
  p1 <- findInterval(x1, c(5, 10, 15))
  p2 <- findInterval(x2, c(5, 10, 15))
  return(diag(co1[p1, p2]) * x1^co2[p1] * x2^co3[p2])
}

It is not at all clear what you wanted to happen when x1 and/or x2 is not between 5 and 15, so I did not deal with those cases. The above will choke in that case, and should be modified according to what you want.

> Sincerely,
>
> Dong H. Oh
> Ph.D. Candidate
> Techno-Economics and Policy Program
> College of Engineering, Seoul National University,
> Seoul, 151-050, Republic of Korea
> E-mail: [EMAIL PROTECTED]
> Mobile: +82-10-6877-2109
> Office: +82-2-880-9142
> Fax: +82-2-880-8389

Haris Skiadas
Department of Mathematics and Computer Science
Hanover College
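A quick sketch of the lookup idea above, using invented test inputs. The helper name `youtput_vec` is made up; it also uses matrix indexing with `cbind()` as an alternative to `diag(co1[p1, p2])`, which avoids building the full n-by-n matrix that `diag()` then discards. Like the posted solution, it assumes both inputs lie in [5, 15).

```r
# Coefficient tables: rows indexed by the x1 interval, columns by the x2 interval
co1 <- matrix(c(0.631, 0.794, 1.259, 1.585), 2, 2)
co2 <- c(0.55, 0.45)
co3 <- c(0.65, 0.35)

youtput_vec <- function(x1, x2) {
  p1 <- findInterval(x1, c(5, 10, 15))
  p2 <- findInterval(x2, c(5, 10, 15))
  # cbind() indexing extracts one element per (row, col) pair directly
  co1[cbind(p1, p2)] * x1^co2[p1] * x2^co3[p2]
}

youtput_vec(c(6, 12, 7, 14), c(6, 6, 12, 12))
```

Each result matches what the original loop would compute for the same pair of inputs, branch by branch.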
[R] priceIts and yahoo
Dear R People: About 2 years ago, there were a few messages about the function priceIts from library(its) generating error messages. One of the suggested fixes at that time was to check security software and such. I'm getting the same message tonight, and have checked both from Windows and a Linux installation. The other suggestion was to determine if there is still a problem with the yahoo Finance website. This may still be the problem. Does anyone have any other suggestions, please? Thanks in advance! Sincerely, Erin

Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]
[R] Simplify simple code
Dear expeRts, I would like to simplify the following code.

youtput <- function(x1, x2){
  n <- length(x1)
  y <- vector(mode="numeric", length=n)
  for(i in 1:n){
    if(x1[i] >= 5 & x1[i] <= 10 & x2[i] >= 5 & x2[i] <= 10)
      y[i] <- 0.631 * x1[i]^0.55 * x2[i]^0.65
    if(x1[i] >= 10 & x1[i] <= 15 & x2[i] >= 5 & x2[i] <= 10)
      y[i] <- 0.794 * x1[i]^0.45 * x2[i]^0.65
    if(x1[i] >= 5 & x1[i] <= 10 & x2[i] >= 10 & x2[i] <= 15)
      y[i] <- 1.259 * x1[i]^0.55 * x2[i]^0.35
    if(x1[i] >= 10 & x1[i] <= 15 & x2[i] >= 10 & x2[i] <= 15)
      y[i] <- 1.585 * x1[i]^0.45 * x2[i]^0.35
  }
  y
}

Can anyone help me?

Sincerely,

Dong H. Oh
Ph.D. Candidate
Techno-Economics and Policy Program
College of Engineering, Seoul National University,
Seoul, 151-050, Republic of Korea
E-mail: [EMAIL PROTECTED]
Mobile: +82-10-6877-2109
Office: +82-2-880-9142
Fax: +82-2-880-8389
[R] newbie rgl (3d interacting plotting) question
I'm looking for a way to 'reuse' existing rgl device windows. Right now, every time I run my script I have to close the preexisting windows and the new windows get assigned ever-increasing numbers. I know how to do it for regular R plotting device windows but can not find a solution for rgl. thanks

--
David Cottrell
http://www.math.mcgill.ca/~cottrell
[R] indexing a subset dataframe
Hello, I am having problems indexing a subset dataframe, which was created as:

> waspsNoGV <- subset(wasps, site != "GV")

Fitting a linear model revealed some data points with high leverage, so I attempted to redo the regression without these data points:

> wasps.lm <- lm(r ~ Nt, data = waspsNoGV[-c(61, 69, 142), ])

which resulted in a "subscript out of bounds" error. I'm pretty sure the problem is that the high-leverage points identified in the regression were labeled by the row names carried over from the original dataframe, which had 150 rows. When I try to remove data point #142 from the subset dataframe, that number is treated as a numerical index, but there are only 130 data points in the subset dataframe, hence the "subscript out of bounds" message. So my question is: how do I reference the data points to drop from the regression by name? Thanks, Mandy
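A minimal sketch of the name-based answer, with made-up data (the column names `site`, `Nt`, `r` echo the post; the values are invented): after `subset()`, rows keep the labels from the original data frame, so drop observations by matching on `rownames()` rather than by position.

```r
# Toy stand-in for the wasps data: 10 rows, first 5 at site "GV"
wasps <- data.frame(site = rep(c("GV", "XX"), each = 5),
                    Nt = 1:10, r = rnorm(10))
waspsNoGV <- subset(wasps, site != "GV")   # keeps original row names "6".."10"

# Negative positional indexing would mismatch these labels; match names instead:
drop <- c("7", "9")
waspsNoGV2 <- waspsNoGV[!(rownames(waspsNoGV) %in% drop), ]
nrow(waspsNoGV2)
```

The regression can then be refit on the reduced frame, e.g. `lm(r ~ Nt, data = waspsNoGV2)`.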
[R] Marginal Effects from GLM
Dear Friends, Is there a direct way to extract the marginal effects when running discrete choice models such as Probit or Logit using glm? Thanks and Regards, Anup
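glm() itself does not report marginal effects, but for a probit the average marginal effect of a regressor can be computed by hand as mean(dnorm(xb)) * beta_j, where xb is the linear predictor. A sketch on simulated data (variable names and the data-generating process are invented for illustration):

```r
# Simulate a probit model and fit it with glm()
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, pnorm(0.5 + 0.8 * x))
fit <- glm(y ~ x, family = binomial(link = "probit"))

# Average marginal effect of x: scale the coefficient by the mean
# of the standard-normal density evaluated at the linear predictor
xb  <- predict(fit, type = "link")
ame <- mean(dnorm(xb)) * coef(fit)["x"]
ame
```

For a logit, replace `dnorm(xb)` with the logistic density `exp(xb) / (1 + exp(xb))^2`.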
Re: [R] Reasons to Use R (no memory limitations :-))
This thread discussed R memory limitations, comparing how they are handled in S and SAS. Since I routinely use R to process multi-gigabyte data sets on computers with sometimes 256 MB of memory, here are some comments on that. Most memory limitations vanish if R is used with any relational database. [My personal preference is SQLite (the RSQLite package) because of its speed and no-admin setup (it is used in embedded mode).] The comments below apply to any relational database, unless otherwise stated.

Most people appear to think about database tables as dataframes - that is, to store and load the _whole_ dataframe in one go - probably because the appropriate function names suggest this approach. It is also a natural mapping. This is convenient if the data set can fit fully in memory - but it limits the size of the data set the same way as not using a database at all. However, using the SQL language directly, one can expand the size of the data set R is capable of operating on - we just have to stop treating database tables as 'atomic'. For example, assume we have a set of several million patients and want to analyze some specific subset - the following SQL statement

SELECT * FROM patients WHERE gender='M' AND age BETWEEN 30 AND 35

will bring into R a much smaller dataframe than selecting the whole table. [Such a subset selection may also take _less_ time than selecting the whole table - assuming the table is properly indexed.] Direct SQL statements can also be used to pre-compute some characteristics inside the database and bring only the summaries to R:

SELECT gender, AVG(age) FROM patients GROUP BY gender

will bring back a data frame of only two rows. Admittedly, if the data set is really large and we cannot operate on its subsets, the above does not help - though I do not believe that this is the majority of situations. Naturally, going for a 64-bit system with enough memory will solve some problems without using a database - but not all of them.
Relational databases can be very efficient at selecting subsets, as they do not have to do linear scans [when the tables are indexed] - while R has to do a linear scan every time (??? I did not look at the source code of R - please correct me if I am wrong). Two other areas where a database is better than R, especially for large data sets:

- verification of data correctness for individual points [a frequent problem with large data sets]
- combining data from several different types of tables into one dataframe

In summary: using SQL from R allows one to process extremely large data sets in limited memory, sometimes even faster than if we had a large memory and kept our data set fully in it. A relational database perfectly complements R's capabilities.
[R] adjusting a power model in R
Dear R-gurus, How can I fit a power model in R? I would like to fit Y = b0*X^b1 or something similar. Kind regards, Miltinho, Brazil.
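A sketch of two standard approaches, on simulated data (the true values b0 = 2, b1 = 1.5 and the noise level are invented): linearize with logs and fit by lm(), or fit the power curve directly with nls(), using the log-scale estimates as starting values.

```r
# Simulated power-law data: Y = 2 * X^1.5 with multiplicative noise
set.seed(42)
X <- runif(100, 1, 10)
Y <- 2 * X^1.5 * exp(rnorm(100, sd = 0.1))

# (1) Linearize: log Y = log b0 + b1 * log X, then back-transform b0
lin <- lm(log(Y) ~ log(X))
b0_start <- exp(coef(lin)[[1]])
b1_start <- coef(lin)[[2]]

# (2) Nonlinear least squares on the original scale
nl <- nls(Y ~ b0 * X^b1, start = list(b0 = b0_start, b1 = b1_start))
coef(nl)
```

The two fits minimize different criteria (multiplicative vs. additive error), so which is appropriate depends on how the noise enters the model.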
[R] Does the smooth terms in GAM have a functional form?
Hi all, Does anyone know how to get the functional form of the smooth terms in a GAM? E.g., I fit y = a + b*s(x), where s is the smooth function. After fitting this model with GAM in R, I want to know the form of s(x). Any suggestion is appreciated. Thanks, Jin
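A spline smooth has no simple closed-form expression, but its fitted contribution can be evaluated at any x. A sketch using the mgcv package (a recommended package shipped with R; the simulated data here are invented): `predict(..., type = "terms")` returns the value of each smooth term, including at new x values via `newdata`.

```r
library(mgcv)

# Simulated data with a sinusoidal signal
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.2)
fit <- gam(y ~ s(x))

# Evaluate the smooth term s(x) on a grid of new x values;
# the "s(x)" column holds the (centered) contribution of the smooth
newd <- data.frame(x = seq(0.1, 0.9, length.out = 5))
sx <- predict(fit, newdata = newd, type = "terms")[, "s(x)"]
sx
```

This gives the smooth as a lookup/interpolation rather than a formula, which is usually what optimization or sensitivity analysis needs.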
Re: [R] mac ghostscript help
Erin, Here's one way to try to find where Ghostscript is on your system: R> system("which gs") On my Mac it's in /usr/local/bin/gs So I would use R> Sys.putenv(R_GSCMD="/usr/local/bin/gs") Hope this helps, Stephen Rochester, Minn., USA On 4/15/07, Erin Berryman <[EMAIL PROTECTED]> wrote: > Hello R community, > > I am hoping to use a new package that I just installed called > "grImport" to import ps images into R for further manipulation using > either base graphics or grid. I downloaded the most recent version > of Ghostscript from http://www.cs.wisc.edu/~ghost/ that I could find > (v.8.56) for Mac OSX. However, I am apparently quite ignorant about > how the required Ghostscript software works. > When I run PostScriptTrace after loading the grImport package, I get > this response: > > > PostScriptTrace("/Users/erinberryman/Documents/data.ps") > > Error in PostScriptTrace("/Users/erinberryman/Documents/data.ps") : > status 256 in running command 'gs -q -dBATCH -dNOPAUSE - > sDEVICE=pswrite -sOutputFile=/dev/null -sstdout=data.ps.xml > capturedata.ps' > ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 > > I am confused by the mention of a Ghostscript 7.07.1, because that is > not the version that I have installed on my computer. After running a > search of the R archives, I began to think that maybe I needed to > tell R where my Ghostscript is, so I began to look for it. Now here > is where I feel I am missing some key concept, because I find a > folder called "ghostscript-8.56" that contains many files and folders > with more files, and I have no idea which file is THE ghostscript > that grImport wants to use. 
> I used the following code to try to direct R to the correct folder for Ghostscript:
>
> > Sys.putenv(R_GSCMD="/Users/erinberryman/ghostscript-8.56")
> > PostScriptTrace("/Users/erinberryman/Documents/data.ps")
> Error in PostScriptTrace("/Users/erinberryman/Documentsdata.ps") :
>   status 32256 in running command '/Users/erinberryman/ghostscript-8.56 -q -dBATCH -dNOPAUSE -sDEVICE=pswrite -sOutputFile=/dev/null -sstdout=data.ps.xml capturedata.ps'
> /bin/sh: line 1: /Users/erinberryman/ghostscript-8.56: is a directory
>
> Right, it is a directory indeed, but I do not know which specific file to give, since there are so many of them in that Ghostscript install.
>
> Hopefully there is a solution that a non-computer whiz like me can handle?
>
> Thank you,
>
> Erin
Re: [R] unable to find inherited method for function "edges", for signature "ugsh", "missing"
Søren Højsgaard <[EMAIL PROTECTED]> writes:

> I am new to using S4 methods and have run into this problem (on Windows XP using R 2.4.1): I am writing a package in which I use the graph package. I define my own classes of graphs as:
>
> setOldClass("graphsh")
> setOldClass("ugsh")
> setIs("ugsh", "graphsh")
>
> (I know that I "should have" used setClass instead - and I will eventually - but right now I am just puzzled about the reason for my problem...)

It isn't clear that your problems aren't being caused by your non-standard approach to defining classes, and I would recommend you fix this part of your code first. If you are depending on the graph package, I'm surprised you don't want to extend one of the graph classes there. Perhaps:

setClass("graphsh", contains="graph")

or

setClass("graphsh", contains="graphNEL")

You can override whatever methods you need to, but don't have to write new methods for those that work as you want.

> I need an 'edges' method for ugsh graphs, so I set:
>
> if (!isGeneric("edges")) {
>   if (is.function("edges"))
>     fun <- edges
>   else
>     fun <- function(object, which) standardGeneric("edges")
>   setGeneric("edges", fun)
> }
> setMethod("edges", signature(object = "graphsh"),
>           function(object, which) {
>             .links(object)
>           })

Do you want to have your own generic distinct from the edges generic defined in the graph package, or do you want to simply attach new methods to the edges generic defined in graph? I see no benefit to this conditional approach and it _can_ cause confusion.

> I can get this to work in the sense that it passes R CMD check. However, if I add the following (to me innocently looking) function to my package I get problems:
>
> nodeJoint <- function(bn, set, normalize=TRUE){
>   vars <- set
>   a <- vallabels(gmd)[vars]
                   ^^^ Where is that defined?
>   levs <- as.data.frame(table(a))
>   levs <- levs[, 1:length(a)]
>   levs2 <- do.call("cbind", lapply(levs, as.character))
>   p <- sapply(1:nrow(levs2), function(i)
>     pevidence(enterEvidence(bn, nodes=vars, states=levs2[i,]))
>   )
>   if (normalize)
>     p <- p / sum(p)
>   levs$.p <- p
>   return(levs)
> }

I can't see where a call to edges is made. Is there one hiding in one of the function calls?

> When running R CMD check I get:
>
> > ug <- ugsh(c("me","ve"),c("me","al"),c("ve","al"),c("al","an"),c("al","st"),c("an","st"))
> > edges(ug)
> Error in function (classes, fdef, mtable) :
>   unable to find an inherited method for function "edges", for signature "ugsh", "missing"
> Execution halted
>
> (I never use the function nodeJoint in my .Rd files, so it just "sits there" and causes problems.)
>
> I am puzzled about what the error message means and about why this function causes problems. Can anyone help? Thanks in advance.

Does your package have a name space? What does your package's DESCRIPTION file look like? Do any of the examples call library() or require()?

+ seth

--
Seth Falcon | Computational Biology | Fred Hutchinson Cancer Research Center
http://bioconductor.org
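For readers new to S4, here is a minimal, self-contained sketch of the setClass/setGeneric/setMethod pattern Seth recommends. The class name `ugsh2`, the generic `edgesOf`, and the slot `links` are invented here to avoid clashing with the graph package's own `edges` generic; dispatch is on the first argument only, which sidesteps the "signature ..., missing" issue in the thread.

```r
library(methods)

# A simple graph class holding its edge list in a slot
setClass("ugsh2", slots = c(links = "list"))

# A generic that dispatches on 'object' only; extra args pass through via ...
setGeneric("edgesOf", function(object, ...) standardGeneric("edgesOf"))

# Method for the new class: just return the stored edge list
setMethod("edgesOf", "ugsh2", function(object, ...) object@links)

g <- new("ugsh2", links = list(c("me", "ve"), c("me", "al")))
edgesOf(g)
```

Because `which` is not part of the dispatch signature, calling `edgesOf(g)` with no second argument works without needing a method for signature `("ugsh2", "missing")`.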
[R] Use estimated non-parametric model for sensitivity analysis
Dear all, I fitted a non-parametric model using the gam() function in R, i.e.,

gam(y ~ s(x1) + s(x2))  # where s() is the smooth function

Then I obtained the coefficients (a and b) for the non-parametric terms, i.e., y = a*s(x1) + b*s(x2). Now, if I want to use this estimated model to do optimization or sensitivity analysis, I am not sure how to incorporate the smooth function, since s() may not be recognized outside the GAM environment. Thank you in advance! Jin Huang, North Carolina State University
[R] correlation between multiple adjacency matrix graphs
I'm looking for a way to do (product-moment) graph correlation between multiple unlabeled graphs G1 to Gn. Basically I have 900 individual samples of a 48x48 adjacency matrix, which I've stacked as the 3rd dimension of a single array, so it looks something like [48, 48, i] where i indexes each individual subject's adjacency matrix. If I run the gcor function on any two graphs, for example [,,1] and [,,2], it returns a single graph-correlation value; however, if I run it across the entire 3-dimensional array I get a 48x48 graph-correlation matrix. For example, it sort of looks like this, except that it would be 48x48:

[, , 1]
  A B C D E
A 0 0 0 0 0
B 1 0 0 0 0
C 0 0 0 0 0
D 1 1 0 0 0
E 0 1 0 1 0

[, , 2]
  A B C D E
A 0 0 0 0 0
B 1 0 0 0 0
C 0 1 0 0 0
D 1 0 0 0 0
E 0 1 0 0 0

all the way to

[, , 900]
  A B C D E
A 0 0 0 0 0
B 1 0 0 0 0
C 0 1 0 0 0
D 1 0 0 0 0
E 0 1 0 0 0

Is there a way to generate a single Pearson product-moment correlation coefficient across all 900 individual adjacency matrices? The only way I can think of so far is to write my own function that loops over pairs of graphs using gcor, but I am hopeful there is a known way to do this more simply. I'm also unclear about the standard nomenclature for this: sometimes I hear "multidimensional array", other times people refer to it as a multiple array list. This might help, as maybe I'm just looking up the wrong thing. Thanks.

Namanh Vu Hoang
Department of Sociology: Undergraduate
[EMAIL PROTECTED]
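One base-R sketch of the loop-free route, using small simulated graphs (the 5x5x10 array and edge probability are invented; this treats the graphs as labeled, i.e. it correlates corresponding cells, which is what a product-moment graph correlation of aligned adjacency matrices does): flatten each slice of the array into a column, then a single `cor()` call gives all pairwise graph correlations at once.

```r
# 10 toy 5x5 adjacency matrices stacked along the 3rd dimension
set.seed(1)
A <- array(rbinom(5 * 5 * 10, 1, 0.5), dim = c(5, 5, 10))

# Flatten: one column per graph (25 cells x 10 graphs)
flat <- apply(A, 3, as.vector)

# Pairwise Pearson correlations between all graphs in one call
gcors <- cor(flat)

# A single summary number, e.g. the mean off-diagonal correlation
mean(gcors[lower.tri(gcors)])
</ignore>
```

Note the trailing caveat: for unlabeled graphs, where node permutations matter, this cell-wise correlation is not equivalent to what sna's gcor does with exchangeable nodes.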
[R] unable to find inherited method for function "edges", for signature "ugsh", "missing"
I am new to using S4 methods and have run into this problem (on Windows XP using R 2.4.1): I am writing a package in which I use the graph package. I define my own classes of graphs as:

setOldClass("graphsh")
setOldClass("ugsh")
setIs("ugsh", "graphsh")

(I know that I "should have" used setClass instead - and I will eventually - but right now I am just puzzled about the reason for my problem...) I need an 'edges' method for ugsh graphs, so I set:

if (!isGeneric("edges")) {
  if (is.function("edges"))
    fun <- edges
  else
    fun <- function(object, which) standardGeneric("edges")
  setGeneric("edges", fun)
}
setMethod("edges", signature(object = "graphsh"),
          function(object, which) {
            .links(object)
          })

I can get this to work in the sense that it passes R CMD check. However, if I add the following (to me innocently looking) function to my package I get problems:

nodeJoint <- function(bn, set, normalize=TRUE){
  vars <- set
  a <- vallabels(gmd)[vars]
  levs <- as.data.frame(table(a))
  levs <- levs[, 1:length(a)]
  levs2 <- do.call("cbind", lapply(levs, as.character))
  p <- sapply(1:nrow(levs2), function(i)
    pevidence(enterEvidence(bn, nodes=vars, states=levs2[i,]))
  )
  if (normalize)
    p <- p / sum(p)
  levs$.p <- p
  return(levs)
}

When running R CMD check I get:

> ug <- ugsh(c("me","ve"),c("me","al"),c("ve","al"),c("al","an"),c("al","st"),c("an","st"))
> edges(ug)
Error in function (classes, fdef, mtable) :
  unable to find an inherited method for function "edges", for signature "ugsh", "missing"
Execution halted

(I never use the function nodeJoint in my .Rd files, so it just "sits there" and causes problems.) I am puzzled about what the error message means and about why this function causes problems. Can anyone help? Thanks in advance. Søren
[R] mac ghostscript help
Hello R community, I am hoping to use a new package that I just installed called "grImport" to import ps images into R for further manipulation using either base graphics or grid. I downloaded the most recent version of Ghostscript from http://www.cs.wisc.edu/~ghost/ that I could find (v.8.56) for Mac OS X. However, I am apparently quite ignorant about how the required Ghostscript software works. When I run PostScriptTrace after loading the grImport package, I get this response:

> PostScriptTrace("/Users/erinberryman/Documents/data.ps")
Error in PostScriptTrace("/Users/erinberryman/Documents/data.ps") :
  status 256 in running command 'gs -q -dBATCH -dNOPAUSE -sDEVICE=pswrite -sOutputFile=/dev/null -sstdout=data.ps.xml capturedata.ps'
ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1

I am confused by the mention of a Ghostscript 7.07.1, because that is not the version that I have installed on my computer. After running a search of the R archives, I began to think that maybe I needed to tell R where my Ghostscript is, so I began to look for it. Now here is where I feel I am missing some key concept, because I find a folder called "ghostscript-8.56" that contains many files and folders with more files, and I have no idea which file is THE ghostscript that grImport wants to use.

I used the following code to try to direct R to the correct folder for Ghostscript:

> Sys.putenv(R_GSCMD="/Users/erinberryman/ghostscript-8.56")
> PostScriptTrace("/Users/erinberryman/Documents/data.ps")
Error in PostScriptTrace("/Users/erinberryman/Documentsdata.ps") :
  status 32256 in running command '/Users/erinberryman/ghostscript-8.56 -q -dBATCH -dNOPAUSE -sDEVICE=pswrite -sOutputFile=/dev/null -sstdout=data.ps.xml capturedata.ps'
/bin/sh: line 1: /Users/erinberryman/ghostscript-8.56: is a directory

Right, it is a directory indeed, but I do not know which specific file to give, since there are so many of them in that Ghostscript install.
Hopefully there is a solution that a non-computer whiz like me can handle? Thank you, Erin
[R] nls.control() has no influence on nls()!
Dear Friends, I tried to use nls.control() to change 'minFactor' in nls(), but it does not seem to work. I used the nls() function and encountered the error message "step factor 0.000488281 reduced below 'minFactor' of 0.000976563". I then tried the following:

1) Put "nls.control(minFactor = 1/(4096*128))" inside the brackets of nls(), but the same error message shows up.
2) Put "nls.control(minFactor = 1/(4096*128))" as a separate command before the command that uses the nls() function; again, the same thing happens, although R responds to the nls.control() call immediately:

$maxiter
[1] 50

$tol
[1] 1e-05

$minFactor
[1] 1.907349e-06

I am wondering how I may change minFactor to a smaller value? The manual that comes with R about nls() is very sketchy --- the only relevant example I see is a separate command like 2). A more relevant question might be: is lowering 'minFactor' the only solution to the problem? What are the other options? Best Wishes, Yuchen Luo
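The reason neither attempt works is that nls.control() called on its own only builds (and prints) a list of settings; nls() never sees it. The settings take effect only when passed through the `control` argument of nls(). A sketch on simulated data (the exponential model and starting values are invented):

```r
# Simulated exponential-decay data
set.seed(1)
x <- 1:20
y <- 3 * exp(-0.2 * x) + rnorm(20, sd = 0.05)

# nls.control() must be handed to nls() via the 'control' argument
fit <- nls(y ~ a * exp(b * x),
           start   = list(a = 2, b = -0.1),
           control = nls.control(minFactor = 1/(4096 * 128),
                                 maxiter   = 100))
coef(fit)
```

When the step factor still underflows, better starting values or a reparameterization of the model are usually more effective than lowering minFactor further.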
[R] Export multiple data files as gpr-files
Dear R-users, I have 10 data files in gpr format (dat1.gpr, ..., dat10.gpr). I want to read these files into R one by one and then add one extra column (called log) to each data file, as below:

data.file = sort(dir(path = 'C:/Documents and Settings/Mina dokument/data1',
                     pattern = ".gpr$", full.names = TRUE))
num.data.files <- length(data.file)
num.data.files

i = 1
### read one data file
data <- read.table(file = data.file[i], skip = 31, header = T, sep = '\t', na.strings = "NA")
### define the log ratio using values in columns 2 and 8
log = as.matrix(log((data[,2]) / (data[,8])))
### append a column called log to the data frame for the current file
data = cbind(data, log)

### read remaining data files
for (i in 2:num.data.files) {
  data <- read.table(file = data.file[i], header = T, skip = 31, sep = '\t', na.strings = "NA")
  log = as.matrix(log((data[,2]) / (data[,8])))
  data = cbind(data, log)
}

Now I want to export these files (with an extra column in each) as gpr files in a folder called data2, but I don't know exactly how to do it. Can you help me out? Thanks for your help, Jenny
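A sketch of the export step with `write.table()`, using an invented toy data frame and a temporary directory in place of the real paths (note that the 31 header lines skipped on input are not reproduced, so the output is tab-delimited data rather than a full gpr file):

```r
# Output folder, standing in for ".../data2"
out.dir <- file.path(tempdir(), "data2")
dir.create(out.dir, showWarnings = FALSE)

# Toy stand-in for one augmented data frame with its extra 'log' column
dat <- data.frame(a = 1:3, b = 4:6, log = log((1:3) / (4:6)))

# Write it back out tab-delimited, one file per input file
out.file <- file.path(out.dir, "dat1.gpr")
write.table(dat, file = out.file, sep = "\t",
            row.names = FALSE, quote = FALSE)
file.exists(out.file)
```

In the real loop, each augmented `data` would be written inside the `for` body, deriving the output name from `basename(data.file[i])`.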
Re: [R] Fit sem: problem of 'Error in solve.default(C[ind, ind]) : Lapack routine dgesv: system is exactly singular.'
Dear adschai,

This model is almost surely underidentified, but it's hard to figure that out for sure because of its very odd structure: you have the three latent variables all influencing each other mutually, all with exactly the same observed indicators, but with the structural disturbances all specified to be uncorrelated. Additionally, there are observed exogenous variables that affect the latent endogenous variables, but these exogenous variables are also specified to be uncorrelated. It's hard for me to imagine that you really intended this, and even if the model is identified, I seriously doubt that you can fit it to the data. Finally, you should verify that the input covariance matrix is positive-definite.

I think that the issues of model specification go well beyond how to use the software, and I strongly suggest that you try to find someone local to talk to about your research.

John

John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140 x23604
http://socserv.mcmaster.ca/jfox

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
> Sent: Sunday, April 15, 2007 2:16 PM
> To: [EMAIL PROTECTED]
> Subject: [R] Fit sem: problem of 'Error in solve.default(C[ind, ind]) : Lapack routine dgesv: system is exactly singular.'
>
> Hi - I would need help another time. I built the model and wired the correlation matrix into the sem() method, and I got "the system is exactly singular". Is my model underidentified? Here is my model below, along with the result from the traceback() function. Any help would be really appreciated. Thank you again!
> Average_Daily_Traffic        -> Lat_Env,    gam1_1, NA
> Average_Daily_Truck_Traffic  -> Lat_Env,    gam1_2, NA
> BridgeAges                   -> Lat_Env,    gam1_3, NA
>
> Design_Load                  -> Lat_Specs,  gam2_1, NA
> Type_of_Service_On_Bridge    -> Lat_Specs,  gam2_2, NA
> Railroad_Beneath             -> Lat_Specs,  gam2_3, NA
> BridgeAges                   -> Lat_Specs,  gam2_4, NA
> Highway_Beneath              -> Lat_Specs,  gam2_5, NA
>
> Main_Structure_Material      -> Lat_Design, gam3_1, NA
> Main_Structure_Design        -> Lat_Design, gam3_2, NA
> Length_of_Maximum_Span       -> Lat_Design, gam3_4, NA
> Number_of_Spans_In_Main_Unit -> Lat_Design, gam3_5, NA
> BridgeAges                   -> Lat_Design, gam3_6, NA
>
> Lat_Env    -> Lat_Specs,  beta2_1, NA
> Lat_Env    -> Lat_Design, beta3_1, NA
> Lat_Specs  -> Lat_Env,    beta1_2, NA
> Lat_Specs  -> Lat_Design, beta3_2, NA
> Lat_Design -> Lat_Env,    beta1_3, NA
> Lat_Design -> Lat_Specs,  beta1_2, NA
>
> Lat_Env    -> Operating_Rating,           NA,      1
> Lat_Env    -> Deck_Cond_Rating,           lamy2_1, NA
> Lat_Env    -> Superstructure_Cond_Rating, lamy3_1, NA
> Lat_Env    -> Substructure_Cond_Rating,   lamy4_1, NA
> Lat_Specs  -> Operating_Rating,           NA,      1
> Lat_Specs  -> Deck_Cond_Rating,           lamy2_2, NA
> Lat_Specs  -> Superstructure_Cond_Rating, lamy3_2, NA
> Lat_Specs  -> Substructure_Cond_Rating,   lamy4_2, NA
> Lat_Design -> Operating_Rating,           NA,      1
> Lat_Design -> Deck_Cond_Rating,           lamy2_3, NA
> Lat_Design -> Superstructure_Cond_Rating, lamy3_3, NA
> Lat_Design -> Substructure_Cond_Rating,   lamy4_3, NA
>
> Lat_Env    <-> Lat_Specs,  psi2_1, NA
> Lat_Env    <-> Lat_Design, psi3_1, NA
> Lat_Specs  <-> Lat_Design, psi3_2, NA
> Lat_Env    <-> Lat_Env,    psi1_1, NA
> Lat_Specs  <-> Lat_Specs,  psi2_2, NA
> Lat_Design <-> Lat_Design, psi3_3, NA
>
> Operating_Rating <-> Operating_Rating, thesp1, NA
> Deck_
Re: [R] Expression for pedices
On 4/15/2007 2:05 PM, Cressoni, Massimo (NIH/NHLBI) [F] wrote:

> I know that this may be a trivial question. I am not able to plot pedices in graph axes. Instead I am able to plot different math symbols:

I think you mean subscripts.

> XLABEL <- expression(paste(cmH, lim(f(x), x %->% 0), "O PEEP"))
> works well
>
> XLABEL <- expression(paste(cmH, [2], "O PEEP"))
> is considered a wrong expression.

Yes, you don't want the comma before the bracket:

XLABEL <- expression(paste(cmH[2], "O PEEP"))

Duncan Murdoch
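A quick way to confirm the corrected expression parses and plots, drawn to a null pdf device so nothing is written to disk (the plot data here are just `1:10` for illustration):

```r
# The corrected subscript expression from the reply above
lab <- expression(paste(cmH[2], "O PEEP"))

pdf(NULL)                     # null device: render without creating a file
plot(1:10, xlab = lab)        # axis label shows "cmH" with subscript 2
dev.off()
```

The tighter plotmath form `expression(cmH[2] * "O PEEP")` produces the same label without paste().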
[R] Fit sem: problem of 'Error in solve.default(C[ind, ind]) : Lapack routine dgesv: system is exactly singular.'
Hi - I need help once again. I specified the model below and passed the correlation matrix into the sem() function, but I get the error that the system is exactly singular. Is my model underidentified? My model is below, along with the result from the traceback() function. Any help would be really appreciated. Thank you again!

Average_Daily_Traffic -> Lat_Env, gam1_1, NA
Average_Daily_Truck_Traffic -> Lat_Env, gam1_2, NA
BridgeAges -> Lat_Env, gam1_3, NA
Design_Load -> Lat_Specs, gam2_1, NA
Type_of_Service_On_Bridge -> Lat_Specs, gam2_2, NA
Railroad_Beneath -> Lat_Specs, gam2_3, NA
BridgeAges -> Lat_Specs, gam2_4, NA
Highway_Beneath -> Lat_Specs, gam2_5, NA
Main_Structure_Material -> Lat_Design, gam3_1, NA
Main_Structure_Design -> Lat_Design, gam3_2, NA
Length_of_Maximum_Span -> Lat_Design, gam3_4, NA
Number_of_Spans_In_Main_Unit -> Lat_Design, gam3_5, NA
BridgeAges -> Lat_Design, gam3_6, NA
Lat_Env -> Lat_Specs, beta2_1, NA
Lat_Env -> Lat_Design, beta3_1, NA
Lat_Specs -> Lat_Env, beta1_2, NA
Lat_Specs -> Lat_Design, beta3_2, NA
Lat_Design -> Lat_Env, beta1_3, NA
Lat_Design -> Lat_Specs, beta1_2, NA
Lat_Env -> Operating_Rating, NA, 1
Lat_Env -> Deck_Cond_Rating, lamy2_1, NA
Lat_Env -> Superstructure_Cond_Rating, lamy3_1, NA
Lat_Env -> Substructure_Cond_Rating, lamy4_1, NA
Lat_Specs -> Operating_Rating, NA, 1
Lat_Specs -> Deck_Cond_Rating, lamy2_2, NA
Lat_Specs -> Superstructure_Cond_Rating, lamy3_2, NA
Lat_Specs -> Substructure_Cond_Rating, lamy4_2, NA
Lat_Design -> Operating_Rating, NA, 1
Lat_Design -> Deck_Cond_Rating, lamy2_3, NA
Lat_Design -> Superstructure_Cond_Rating, lamy3_3, NA
Lat_Design -> Substructure_Cond_Rating, lamy4_3, NA
Lat_Env <-> Lat_Specs, psi2_1, NA
Lat_Env <-> Lat_Design, psi3_1, NA
Lat_Specs <-> Lat_Design, psi3_2, NA
Lat_Env <-> Lat_Env, psi1_1, NA
Lat_Specs <-> Lat_Specs, psi2_2, NA
Lat_Design <-> Lat_Design, psi3_3, NA
Operating_Rating <-> Operating_Rating, thesp1, NA
Deck_Cond_Rating <-> Deck_Cond_Rating, thesp2, NA
Superstructure_Cond_Rating <-> Superstructure_Cond_Rating, thesp3, NA
Substructure_Cond_Rating <-> Substructure_Cond_Rating, thesp4, NA
Average_Daily_Traffic <-> Average_Daily_Traffic, thesp5, NA
Average_Daily_Truck_Traffic <-> Average_Daily_Truck_Traffic, thesp6, NA
Design_Load <-> Design_Load, thesp7, NA
Type_of_Service_On_Bridge <-> Type_of_Service_On_Bridge, thesp8, NA
Railroad_Beneath <-> Railroad_Beneath, thesp9, NA
Highway_Beneath <-> Highway_Beneath, thesp10, NA
Main_Structure_Material <-> Main_Structure_Material, thesp11, NA
Main_Structure_Design <-> Main_Structure_Design, thesp12, NA
Number_of_Spans_In_Main_Unit <-> Number_of_Spans_In_Main_Unit, thesp13, NA
BridgeAges <-> BridgeAges, thesp14, NA
Length_of_Maximum_Span <-> Length_of_Maximum_Span, thesp15, NA

=== Result from traceback():
observed variables:
 [1] "1:Type_of_Service_On_Bridge"     "2:BridgeAges"
 [3] "3:Average_Daily_Traffic"         "4:Average_Daily_Truck_Traffic"
 [5] "5:Design_Load"                   "6:Railroad_Beneath"
 [7] "7:Highway_Beneath"               "8:Main_Structure_Material"
 [9] "9:Main_Structure_Design"         "10:Length_of_Maximum_Span"
[11] "11:Number_of_Spans_In_Main_Unit" "12:Operating_Rating"
[13] "13:Deck_Cond_R
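[Editor's note: the "exactly singular" error from solve() often means the moment matrix handed to sem() is not of full rank. A quick base-R diagnostic (a hedged sketch, not specific to the sem package, with made-up data) is to compare the numerical rank of the input correlation matrix against its dimension before fitting:

```r
# A deliberately rank-deficient example: 10 variables but only 5
# observations, so the correlation matrix has rank at most 4 and
# solve() on it fails just like in the sem() error above.
set.seed(42)
X <- matrix(rnorm(5 * 10), nrow = 5, ncol = 10)
S <- cor(X)
r <- qr(S)$rank             # numerical rank of the matrix
full_rank <- (r == ncol(S))
full_rank                   # FALSE: solve(S) would raise "exactly singular"
```

If the matrix passes this check, the singularity is more likely coming from the model itself (e.g. an underidentified specification) than from the data.]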
[R] Expression for subscripts (pedices)
I know that this may be a trivial question. I am not able to plot subscripts (pedices) in graph axis labels, although I am able to plot other math symbols:

XLABEL <- expression(paste(cmH, lim(f(x), x %->% 0), "O PEEP"))   # works well
XLABEL <- expression(paste(cmH, [2], "O PEEP"))                   # is rejected as a wrong expression

Thanks __ [EMAIL PROTECTED] mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
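[Editor's note: plotmath attaches a subscript with the `[ ]` operator applied to a name; a bare `[2]` inside paste() cannot be parsed, which is why the second expression above fails. A minimal sketch of the intended axis label:

```r
# "cmH2O PEEP" with a subscript 2: index the name cmH with [2], then
# juxtapose O with * (no visible space) and PEEP with ~ (a space).
XLABEL <- expression(cmH[2] * O ~ PEEP)
is.expression(XLABEL)      # TRUE; usable as e.g. plot(1:10, xlab = XLABEL)
```
]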
Re: [R] Fit sem model with intercept
Thank you John. I got it now. Regarding the model I specified below, I totally agree with you. I listed only portions of my model to illustrate the point of my question. Therefore, I showed only the arrow part. No correlations are listed here. However, I have those in my complete model. Thank you so much for your help. It was really helpful. - adschai - Original Message - From: John Fox Date: Sunday, April 15, 2007 8:31 am Subject: RE: [R] Fit sem model with intercept To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] > Dear adschai, > > You needn't look too far, since the last example in ?sem is for > a model with > an intercept. One would use the raw-moment matrix as input to > sem(), either > entered directly or calculated with the raw.moments() function > in the sem > package. The row/column of the raw-moment matrix is given a name > just like > the other columns. You could use the name "1"; in the example, > the name is > "UNIT". > > As you say, however, you're using polychoric and polyserial > correlations as > input. Since the origin and scale of the latent continuous variables > underlying the ordinal variables are entirely arbitrary, I can't > imagine what the purpose of a model with an intercept would be, > but it's possible > that I'm missing something. If you think that this makes some > sense, then > you could convert the correlations to raw moments by using the > means and > standard deviations of the observed variables along with the > means and > standard deviations that you assign to the latent variables > derived from the > ordinal variables (the latter on what basis I can't imagine, but > I suppose > you could fix them to 0s and 1s). 
> > Finally, if the sem model that you show is meant to be a complete > specification, I notice that it includes no covariance > components; moreover, > if this is the complete structural part of the model, then I > think it is > underidentified, and the two parts of the model (those involving > eta1 and > eta2) appear entirely separate. > > I hope this helps, > John > > > John Fox > Department of Sociology > McMaster University > Hamilton, Ontario > Canada L8S 4M4 > 905-525-9140x23604 > http://socserv.mcmaster.ca/jfox > > > > -Original Message- > > From: [EMAIL PROTECTED] > > [mailto:[EMAIL PROTECTED] On Behalf Of > > [EMAIL PROTECTED] > > Sent: Sunday, April 15, 2007 4:28 AM > > To: [EMAIL PROTECTED] > > Subject: [R] Fit sem model with intercept > > > > Hi - I am trying to fit an sem model with intercepts. Here is > > what I have in my model. > > > > Exogenous vars: x1 (continuous), x2 (ordinal), x3 (ordinal), > > x4 (continuous). Endogenous vars: y1 (continuous), y2 > > (ordinal), y3 (ordinal) > > > > SEM model: > > x1 -> eta1; x2 -> eta1; x3 -> eta2; x4 -> eta2; eta1 -> > > y1, eta1 -> y2, eta2 -> y2, eta2 -> y3 > > > > However, in these arrow models, I don't know how to add an > > intercept. I am trying to find example code using the > > sem package on how to incorporate an intercept but cannot find > > any documents on the web. Or can we simply add something like > > this: '1 -> eta1'? This is my first question. > > > > Also, note that since my y2 and y3 are ordinal, I used > > 'hetcor' to calculate the correlation of the observed variables. > > However, from the documentation, I would need to use the > > covariance matrix rather than the correlation. And I need an > > additional column for 1. I am not sure what this matrix should > > look like and how I can obtain it. If there is any example > > you could point me to, I would really appreciate it. Thank you. 
> > > > - adschai > > > > [[alternative HTML version deleted]] > > > > __ > > [EMAIL PROTECTED] mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide > > http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > >
[R] Scott-Knott test
From Enio Jelihovschi [EMAIL PROTECTED] Subject: Scott-Knott test. Does anyone happen to know in which R package I can find the Scott-Knott test for multiple comparisons of means? Thank you very much. Enio Jelihovschi, UESC - Bahia, Brasil
[R] software comparison
Dear R Users, You may be interested in an article that compares nine statistical software packages (including R). Any comments are appreciated. Article: Keeling, Kellie B.; Pavur, Robert J. "A comparative study of the reliability of nine statistical software packages", Computational Statistics and Data Analysis, Volume 51, Issue 8, 2007, pp. 3811-3831. Abstract: The reliabilities of nine software packages commonly used in performing statistical analysis are assessed and compared. The (American) National Institute of Standards and Technology (NIST) data sets are used to evaluate the performance of these software packages with regard to univariate summary statistics, one-way ANOVA, linear regression, and nonlinear regression. Previous research has examined various versions of these software packages using the NIST data sets, but typically with fewer software packages than used in this study. This study provides insight into a relative comparison of a wide variety of software packages including two free statistical software packages, basic and advanced statistical software packages, and the popular Excel package. Substantive improvements from previous software reliability assessments are noted. Plots of principal components of a measure of the correct number of significant digits reveal how these packages tend to cluster for ANOVA and nonlinear regression. Best, Rob
Re: [R] Fit sem model with intercept
Dear adschai, You needn't look too far, since the last example in ?sem is for a model with an intercept. One would use the raw-moment matrix as input to sem(), either entered directly or calculated with the raw.moments() function in the sem package. The row/column of the raw-moment matrix is given a name just like the other columns. You could use the name "1"; in the example, the name is "UNIT". As you say, however, you're using polychoric and polyserial correlations as input. Since the origin and scale of the latent continuous variables underlying the ordinal variables are entirely arbitrary, I can't imagine what the purpose of a model with an intercept would be, but it's possible that I'm missing something. If you think that this makes some sense, then you could convert the correlations to raw moments by using the means and standard deviations of the observed variables along with the means and standard deviations that you assign to the latent variables derived from the ordinal variables (the latter on what basis I can't imagine, but I suppose you could fix them to 0s and 1s). Finally, if the sem model that you show is meant to be a complete specification, I notice that it includes no covariance components; moreover, if this is the complete structural part of the model, then I think it is underidentified, and the two parts of the model (those involving eta1 and eta2) appear entirely separate. I hope this helps, John John Fox Department of Sociology McMaster University Hamilton, Ontario Canada L8S 4M4 905-525-9140x23604 http://socserv.mcmaster.ca/jfox > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > [EMAIL PROTECTED] > Sent: Sunday, April 15, 2007 4:28 AM > To: [EMAIL PROTECTED] > Subject: [R] Fit sem model with intercept > > Hi - I am trying to fit sem model with intercepts. Here is > what I have in my model. 
> > Exogenous vars: x1 (continuous), x2 (ordinal), x3 (ordinal), > x4 (continuous). Endogenous vars: y1 (continuous), y2 > (ordinal), y3 (ordinal) > > SEM model: > x1 -> eta1; x2 -> eta1; x3 -> eta2; x4 -> eta2; eta1 -> > y1, eta1 -> y2, eta2 -> y2, eta2 -> y3 > > However, in these arrow models, I don't know how to add an > intercept. I am trying to find example code using the > sem package on how to incorporate an intercept but cannot find > any documents on the web. Or can we simply add something like > this: '1 -> eta1'? This is my first question. > > Also, note that since my y2 and y3 are ordinal, I used > 'hetcor' to calculate the correlation of the observed variables. > However, from the documentation, I would need to use the > covariance matrix rather than the correlation. And I need an > additional column for 1. I am not sure what this matrix should > look like and how I can obtain it. If there is any example > you could point me to, I would really appreciate it. Thank you. > > - adschai > > [[alternative HTML version deleted]] > > __ > [EMAIL PROTECTED] mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code.
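[Editor's note: to make the "additional column for 1" concrete, a raw-moment matrix is simply the average cross-product of the data augmented with a constant column, whose row/column is conventionally named "UNIT". A minimal base-R sketch (the data frame and variable names are hypothetical; the sem package's raw.moments() computes this for you):

```r
# A raw-moment matrix is the average cross-product of the data with a
# constant column bound on; the constant's row/column is named UNIT.
set.seed(1)
dat <- data.frame(x1 = rnorm(50), y1 = rnorm(50))  # hypothetical data
X <- cbind(UNIT = 1, as.matrix(dat))
M <- crossprod(X) / nrow(X)     # M[i, j] = mean(X[, i] * X[, j])
M["UNIT", "UNIT"]               # 1
M["UNIT", "x1"]                 # the mean of x1: this row carries the intercept information
```
]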
[R] collaboration
Dear R users, I am looking for a part-time R programmer to help us with a couple of R-based projects on loss distribution modelling. We are based in Milan (Italy). The ideal candidate should be very familiar with the R/S+ programming environment and have a good background in applied statistics and probability theory. Knowledge of the C programming language and the PostgreSQL database would be a plus. A Ph.D. student or a freelance consultant would be an ideal fit. For further information please do not hesitate to contact me at [EMAIL PROTECTED]. Thanks in advance for your help. Regards, Andrea
[R] Fit sem model with intercept
Hi - I am trying to fit an sem model with intercepts. Here is what I have in my model. Exogenous vars: x1 (continuous), x2 (ordinal), x3 (ordinal), x4 (continuous). Endogenous vars: y1 (continuous), y2 (ordinal), y3 (ordinal). SEM model: x1 -> eta1; x2 -> eta1; x3 -> eta2; x4 -> eta2; eta1 -> y1, eta1 -> y2, eta2 -> y2, eta2 -> y3. However, in these arrow models, I don't know how to add an intercept. I am trying to find example code using the sem package on how to incorporate an intercept but cannot find any documents on the web. Or can we simply add something like this: '1 -> eta1'? This is my first question. Also, note that since my y2 and y3 are ordinal, I used 'hetcor' to calculate the correlation of the observed variables. However, from the documentation, I would need to use the covariance matrix rather than the correlation. And I need an additional column for 1. I am not sure what this matrix should look like and how I can obtain it. If there is any example you could point me to, I would really appreciate it. Thank you. - adschai
Re: [R] Hotelling T-Squared vs Two-Factor Anova
Sean Scanlan wrote: > Hi, > > I am a graduate student at Stanford University and I have a general > statistics question. What exactly is the difference between doing a > two-factor repeated measures ANOVA and a Hotelling T-squared test for > a paired comparison of mean vectors? > > Given: > > Anova: repeated measures on both factors, 1st factor = two different > treatments, 2nd factor = 4 time points, where you are measuring the > blood pressure at each of the time points. > > Hotelling T^2: You look at the difference in the 4x1 vector of blood > pressure measurements for the two different treatments, where the four > rows in the vector are the four time points. > > > I am mainly interested in the main effects of the two treatments. Can > someone please explain if there would be a difference in the two > methods or any advantage in using one over the other? > > In a few words (the full story takes a small book), the difference is in the assumptions, and in the hypothesis being tested. In the most common incarnation, T^2 tests for *any* difference in the means, whereas ANOVA removes the average before comparing the shapes of the time course. If you look at intra-individual differences (e.g. x2-x1, x3-x2, x4-x3, but other choices are equivalent), then T^2 on these three variables will test the same hypothesis about the means. The remaining difference is then that ANOVA assumes a particular pattern of the covariance matrix, whereas T^2 allows a general covariance structure. In particular, T^2 applies even when your response variables are not of the same quantity, say if you had simultaneous measurements of heart rate and blood pressure. The standard assumption for ANOVA is "compound symmetry" (one value on the diagonal, another off-diagonal), which can be weakened to "sphericity" (covariance of differences behave as they would under comp.symm.). 
On closer inspection, sphericity actually means that the covariance matrix of the differences is proportional to a known matrix. Since T^2 has more parameters to estimate, it will have less power when both methods are applicable. Even if the ANOVA assumptions are not quite right, the procedure based on the ANOVA F may still be more powerful, but correction terms then need to be applied (the Greenhouse-Geisser and Huynh-Feldt epsilons). > Thanks, > Sean > > __ > [EMAIL PROTECTED] mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. >
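[Editor's note: the difference-based T^2 described above can be spelled out in a few lines of base R. A hedged sketch on simulated data (n, p, and the effect size are made up):

```r
# One-sample Hotelling T^2 on within-subject treatment differences:
# tests whether the mean difference vector over p = 4 time points is 0,
# using an unrestricted (general symmetric) covariance estimate.
set.seed(7)
n <- 30; p <- 4
D <- matrix(rnorm(n * p, mean = 0.5), n, p)  # simulated paired differences
dbar <- colMeans(D)
S <- cov(D)                                  # p*(p+1)/2 = 10 free parameters
T2 <- n * drop(t(dbar) %*% solve(S) %*% dbar)
# Under H0, T2 * (n - p) / (p * (n - 1)) follows an F(p, n - p) distribution
Fstat <- T2 * (n - p) / (p * (n - 1))
pval  <- pf(Fstat, p, n - p, lower.tail = FALSE)
```

The compound-symmetry ANOVA spends only two covariance parameters where this T^2 spends ten, which is exactly the power/parsimony trade-off discussed above.]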
Re: [R] Hotelling T-Squared vs Two-Factor Anova
I take it all subjects are measured at the same time points, or Hotelling's T^2 becomes rather messy. The essential difference lies in the way the variance matrix is modelled. The usual repeated measures model would model the variance matrix as equal variances and equal covariances, i.e. with two parameters, (though you can vary this using, e.g. lme). Hotelling's T^2 would model the variance matrix as a general symmetric matrix, i.e. for the 4x4 case using 4+3+2+1 = 10 parameters. If it is appropriate, the repeated measures model is much more parsimonious. Bill Venables. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sean Scanlan Sent: Saturday, 14 April 2007 5:38 PM To: [EMAIL PROTECTED] Subject: [R] Hotelling T-Squared vs Two-Factor Anova Hi, I am a graduate student at Stanford University and I have a general statistics question. What exactly is the difference between doing a two-factor repeated measures ANOVA and a Hotelling T-squared test for a paired comparison of mean vectors? Given: Anova: repeated measures on both factors, 1st factor = two different treatments, 2nd factor = 4 time points, where you are measuring the blood pressure at each of the time points. Hotelling T^2: You look at the difference in the 4x1 vector of blood pressure measurements for the two different treatments, where the four rows in the vector are the four time points. I am mainly interested in the main effects of the two treatments. Can someone please explain if there would be a difference in the two methods or any advantage in using one over the other? Thanks, Sean __ [EMAIL PROTECTED] mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. 
[R] How to lower the 'minFactor' when using the nls() function?
Dear Friends, I used the nls() function and encountered the error message "step factor 0.000488281 reduced below 'minFactor' of 0.000976563". I then tried the following: 1) Put "nls.control(minFactor = 1/(4096*128))" inside the brackets of nls(), but the same error message shows up. 2) Put "nls.control(minFactor = 1/(4096*128))" as a separate command before the command that uses nls(); again, the same thing happens, although R responds to the nls.control() call immediately:

$maxiter
[1] 50

$tol
[1] 1e-05

$minFactor
[1] 1.907349e-06

I am wondering how I may change the minFactor to a smaller value. The manual that comes with R about nls() is very sketchy --- the only relevant example I see is a separate command like 2). A more relevant question might be: is lowering the 'minFactor' the only solution to the problem? What are the other options? Best Wishes, Yuchen Luo
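[Editor's note: the control settings reach the fit only through nls()'s own named control argument; calling nls.control() as a separate command merely returns (and prints) a settings list, which is why attempt 2) had no effect, and attempt 1) likely needs the call to be given as control = explicitly. A hedged sketch with made-up data:

```r
# Pass the lowered minFactor via the named control argument of nls().
set.seed(123)
x <- 1:25
y <- 2 * exp(-0.3 * x) + rnorm(25, sd = 0.01)   # hypothetical data
fit <- nls(y ~ a * exp(b * x),
           start   = list(a = 1, b = -0.2),
           control = nls.control(minFactor = 1/(4096 * 128)))
coef(fit)   # close to a = 2, b = -0.3
```

That said, if the step factor still collapses, shrinking minFactor is rarely the real cure: better starting values, rescaling the data, or reparameterising the model are usually what is needed.]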