Re: [R] cluster - identify variables contributions to clusters of cases
I am not sure whether this will help, but you are perhaps looking at variable selection. There is a 2006 JASA paper by Raftery and Dean which may help. Many thanks, Ranjan On Fri, 27 Jul 2007 17:32:02 +1000 <[EMAIL PROTECTED]> wrote: > Hi List, > > How would I go about best identifying the variables contributing most > to the specific clusters? > eg using either agglomerative or partitioning methods, but with mixed > variables (i.e. including categorical), e.g.: > > factor(as.integer(runif(nrow(USArrests), min=1, max=5)))->t1 > as.data.frame(cbind(USArrests,categ=t1))->test > agnes(test,metric="gower", method="ward")->test1 > cutree(test1,k=5)->clust > > > ?where to go from here? > > Any hints appreciated > Thanx > Herry > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html and provide commented, > minimal, self-contained, reproducible code.
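One rough way to follow up on this in base R alone (a sketch, not from the thread: the categorical column is simulated, and the cluster labels here come from kmeans purely for reproducibility rather than from agnes/cutree): rank each variable by how strongly it differs across the given cluster labels, using one-way ANOVA p-values for numeric variables and a chi-squared test for factors.

```r
## Sketch: per-variable "contribution" to given cluster labels.
## Numeric variables: one-way ANOVA p-value against the cluster factor.
## Categorical variables: chi-squared test of association with the clusters.
set.seed(1)
data(USArrests)
clust <- kmeans(scale(USArrests), centers = 5)$cluster   # stand-in labels
categ <- factor(sample(1:4, nrow(USArrests), replace = TRUE))  # hypothetical mixed variable
test  <- cbind(USArrests, categ = categ)

contrib <- sapply(names(test), function(v) {
  x <- test[[v]]
  if (is.numeric(x)) {
    summary(aov(x ~ factor(clust)))[[1]][["Pr(>F)"]][1]
  } else {
    suppressWarnings(chisq.test(table(x, clust))$p.value)
  }
})
sort(contrib)   # variables heading the list separate the clusters most
```

The same loop applies unchanged to the cutree() labels from the mixed-variable agnes/gower fit in the post; smaller p-values indicate variables that discriminate the clusters more sharply.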
Re: [R] using contrasts on matrix regressions (using gmodels, perhaps): 2 Solutions
Dear list, I got two responses to my post. One was from Soren with a follow-up on personal e-mail, and the other I leave anonymous since he contacted me on personal e-mail. Anyway, here we go: The first (Soren): library(doBy) Y <- as.data.frame(Y) lapply(Y,function(y){reg<- lm(y~X); esticon(reg, c(0,0, 0, 1, 0, -1) )}) Confidence interval ( WALD ) level = 0.95 Confidence interval ( WALD ) level = 0.95 Confidence interval ( WALD ) level = 0.95 Confidence interval ( WALD ) level = 0.95 Confidence interval ( WALD ) level = 0.95 $V1 beta0 Estimate Std.Error t.value DF Pr(>|t|) Lower.CI Upper.CI 1 0 0.6701771 0.517921 1.293976 4 0.2653302 -0.767802 2.108156 $V2 beta0 Estimate Std.Error t.value DF Pr(>|t|) Lower.CI Upper.CI 1 0 -0.2789954 0.64481 -0.4326784 4 0.687 -2.069275 1.511284 $V3 beta0 Estimate Std.Error t.value DF Pr(>|t|) Lower.CI Upper.CI 1 0 -0.7677927 0.9219688 -0.8327751 4 0.4518055 -3.327588 1.792003 $V4 beta0 Estimate Std.Error t.value DF Pr(>|t|) Lower.CI Upper.CI 1 0 -0.6026635 0.4960805 -1.214850 4 0.29123 -1.980004 0.7746768 $V5 beta0 Estimate Std.Error t.value DF Pr(>|t|) Lower.CI Upper.CI 1 0 2.001558 1.004574 1.992444 4 0.117123 -0.787587 4.790703 One thing I do not know how to handle is the output "Confidence interval ( WALD ) level = 0.95" which shows up for every regression. When I do millions of regressions, this seriously slows it all down. Any idea how I can suppress that? The second solution uses gmodels, with a lucid explanation which I reproduce. Thanks! The second (anon): For a standard (non-matrix) regression, you could test the hypothesis X3=X5 using estimable(reg, c("(Intercept)"=0, X1=0, X2=0, X3=1, X4=0, X5=-1) ) but this won't currently work with the mlm object created by a matrix regression. The best way to solve this problem is to write an estimable.mlm() function that simply extracts the individual regressions from the mlm object and then calls estimable on each of these, pasting the results back together appropriately. 
Something like this should do the trick: `estimable.mlm` <- function (object, ...) { coef <- coef(object) ny <- ncol(coef) effects <- object$effects resid <- object$residuals fitted <- object$fitted ynames <- colnames(coef) if (is.null(ynames)) { lhs <- object$terms[[2]] if (mode(lhs) == "call" && lhs[[1]] == "cbind") ynames <- as.character(lhs)[-1] else ynames <- paste("Y", seq(ny), sep = "") } value <- vector("list", ny) names(value) <- paste("Response", ynames) cl <- oldClass(object) class(object) <- cl[match("mlm", cl):length(cl)][-1] for (i in seq(ny)) { object$coefficients <- coef[, i] object$residuals <- resid[, i] object$fitted.values <- fitted[, i] object$effects <- effects[, i] object$call$formula[[2]] <- object$terms[[2]] <- as.name(ynames[i]) value[[i]] <- estimable(object, ...) } class(value) <- "listof" value } Now this all works: > X <- matrix(rnorm(50),10,5) > Y <- matrix(rnorm(50),10,5) > reg <- lm(Y~X) > estimable(reg, c("(Intercept)"=0, X1=0, X2=0, X3=1, X4=0, X5=-1) ) Response Y1 : Estimate Std. Error t value DF Pr(>|t|) (0 0 0 1 0 -1) -0.9024065 0.4334235 -2.082043 4 0.1057782 Response Y2 : Estimate Std. Error t value DF Pr(>|t|) (0 0 0 1 0 -1) -0.7017988 0.2199234 -3.191106 4 0.03318115 Response Y3 : Estimate Std. Error t value DF Pr(>|t|) (0 0 0 1 0 -1) 0.5412863 0.2632527 2.056147 4 0.1089276 Response Y4 : Estimate Std. Error t value DF Pr(>|t|) (0 0 0 1 0 -1) -0.1028162 0.5973959 -0.1721073 4 0.87171 Response Y5 : Estimate Std. Error t value DF Pr(>|t|) (0 0 0 1 0 -1) 0.2493330 0.2024061 1.231845 4 0.2854716 On Wed, 25 Jul 2007 18:30:36 -0500 Ranjan Maitra <[EMAIL PROTECTED]> wrote: > Hi, > > I want to test for a contrast from a regression where I am regressing the > columns of a matrix. In short, the following. 
> > X <- matrix(rnorm(50),10,5) > Y <- matrix(rnorm(50),10,5) > lm(Y~X) > > Call: > lm(formula = Y ~ X) > > Coefficients: > [,1] [,2] [,3] [,4] [,5] > (Intercept) 0.3350 -0.1989 -0.1932 0.7528 0.0727 > X1 0.2007 -0.8505 0.0520 0.1501 0.3248 > X2 0.3212 0.7008 -0.0963 -0.2584 0.6711 > X3 0.3781 -0.7321 0.1907 -0.1721 0.3073 > X4 -0.1778 0.2822 -0.0644 -0.2649 -0.4140 > X5 -0.1079 -0.0475 0.6047 -0.8369 -0.5928 > >
[R] using contrasts on matrix regressions (using gmodels, perhaps)
Hi, I want to test for a contrast from a regression where I am regressing the columns of a matrix. In short, the following. X <- matrix(rnorm(50),10,5) Y <- matrix(rnorm(50),10,5) lm(Y~X) Call: lm(formula = Y ~ X) Coefficients: [,1] [,2] [,3] [,4] [,5] (Intercept) 0.3350 -0.1989 -0.1932 0.7528 0.0727 X1 0.2007 -0.8505 0.0520 0.1501 0.3248 X2 0.3212 0.7008 -0.0963 -0.2584 0.6711 X3 0.3781 -0.7321 0.1907 -0.1721 0.3073 X4 -0.1778 0.2822 -0.0644 -0.2649 -0.4140 X5 -0.1079 -0.0475 0.6047 -0.8369 -0.5928 I want to test for c'b = 0 where c is (let's say) the contrast (0, 0, 1, 0, -1). Is it possible to do so, in one shot, using gmodels or something else? Many thanks and best wishes, Ranjan
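For the archives: the quantities that esticon()/estimable() report can also be computed directly in base R. This is a sketch of the standard Wald machinery for a single contrast applied column-by-column to an mlm fit, assuming the same design matrix for every response (the data are simulated):

```r
## Sketch: test c'b = 0 for each response column of a matrix regression,
## without gmodels/doBy, via the Wald t statistic computed by hand.
set.seed(42)
X <- matrix(rnorm(50), 10, 5)
Y <- matrix(rnorm(50), 10, 5)
reg <- lm(Y ~ X)

cc <- c(0, 0, 0, 1, 0, -1)               # contrast incl. intercept: X3 - X5
B  <- coef(reg)                          # 6 x 5 matrix of coefficients
XtXinv <- solve(crossprod(cbind(1, X)))  # (X'X)^{-1} for the common design
s2 <- colSums(residuals(reg)^2) / reg$df.residual  # per-response sigma^2

est  <- drop(crossprod(cc, B))           # c'b, one value per response
se   <- sqrt(drop(crossprod(cc, XtXinv %*% cc)) * s2)
tval <- est / se
pval <- 2 * pt(-abs(tval), reg$df.residual)
cbind(Estimate = est, Std.Error = se, t.value = tval, p.value = pval)
```

Since everything is vectorised over the response columns, this avoids the per-regression printing that slows down the esticon() loop.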
Re: [R] loading package in LINUX
On Fri, 6 Jul 2007 18:34:27 +0530 "Regina Verghis" <[EMAIL PROTECTED]> wrote: > I am comfortable with windows based R. But recently I had shifted to > LINUX (Red Hat Linux Enterprise Guide 4). > 1) I want to load J K Lindsey's repeated library in R. How to install > the package? Hi, Use install.packages(), which is the command-line interface to what that Windows GUI does. > 2) How to create the shared library if I've the fortran codes (I > haven't done creation of shared library in windows also). Isn't it R CMD SHLIB etc., i.e. two separate words with a space in between R and CMD? Please note that case matters in the linux/unix world. HTH, Ranjan > I had run the command Rcmd in bin directory but an error message > "bash: Rcmd: command not found" is produced. > Thank You > With Regards, > Regina > > > -- > [EMAIL PROTECTED] > Regina M. Verghis, > Post Graduate Student > Department of Biostatistics, > Christian Medical College, > Vellore, Tamilnadu - 632002 > India. > > [[alternative HTML version deleted]]
[R] simple plotting question
Hi, I have a very simple question which I can't see how to figure out. Basically, I have a matrix of plots, but I want no margin space in between them. I want a wide right margin (outside of the block of plots) so that I can place a legend there. I tried the following: par("mar"=c(0,0,0,1), mfrow=c(4,4)) for (i in 8:23) { image(xx$ttt[2*(128:1),2*(128:1),i,1],axes=F,col=gray(0:15/15)) } but I get a margin to the right of every plot. Any ideas? This is usually not a problem for me because I make each plot separately, but the journal requires no subfigures. Many thanks and best wishes, Ranjan
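A sketch of one way to get this (the image data here are simulated stand-ins for the poster's array): set all four inner margins to zero with mar, reserve the wide right-hand space with the outer margin oma, and then draw the legend into that outer margin with clipping disabled.

```r
## Sketch: 4x4 block of images with no inner margins and a wide right
## outer margin for a shared legend.  Written to a temporary pdf file.
f <- tempfile(fileext = ".pdf")
pdf(f)
par(mfrow = c(4, 4),
    mar = c(0, 0, 0, 0),      # no margin around individual panels
    oma = c(0, 0, 0, 8))      # wide outer margin on the right of the block
for (i in 1:16) {
  image(matrix(rnorm(64), 8, 8), axes = FALSE, col = gray(0:15/15))
}
par(xpd = NA)                 # allow drawing outside the last panel
legend("right", inset = -0.6, legend = c("low", "high"),
       fill = gray(c(0, 1)))  # lands in the outer margin
dev.off()
```

The inset value here is a guess that will need tuning for real panel sizes; the key point is mar = 0 for the panels plus oma for the block-level margin.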
Re: [R] displaying intensity through opacity on an image (ONE SOLUTION)
On Sat, 19 May 2007 22:05:36 +1000 Jim Lemon <[EMAIL PROTECTED]> wrote: > Ranjan Maitra wrote: > >... > > > > (we are out of R). > > > > And then look at the pdf file created: by default it is Rplots.pdf. > > > > OK, now we can use gimp, simply to convert this to .eps. Alternatively on > > linux, the command pdftops and then psto epsi on it would also work. > > > > Yippee! Isn't R wonderful?? > > > Sure is. You could probably save one step by using postscript() instead > of pdf() and get an eps file directly. The reason I didn't answer the > first time is I couldn't quite figure out how to do what you wanted. > > Jim Thanks, Jim! Not a problem. But will postscript() work? I thought that help file said that only pdf and MacOS X quartz would work (at the time it was written). It certainly does not work for me on the screen. Btw, I made an error in writing the previous e-mail: the command to convert to .eps from .ps is ps2epsi. Many thanks and best wishes, Ranjan
Re: [R] displaying intensity through opacity on an image (ONE SOLUTION)
Dear list, I did not get any response yet, but after looking around R and other things, I came up with something that works. Basically, I use the rgb() function in R [though I could also use the hsv() function] to help me with the colormap. Anyway, doing a help on rgb gives: This function creates "colors" corresponding to the given intensities (between 0 and 'max') of the red, green and blue primaries. An alpha transparency value can also be specified (0 means fully transparent and 'max' means opaque). If 'alpha' is not specified, an opaque colour is generated. The names argument may be used to provide names for the colors. The values returned by these functions can be used with a 'col=' specification in graphics functions or in 'par'. and later on. Semi-transparent colors ('0 < alpha < 1') are supported only on a few devices: at the time of writing only on the 'pdf' and (on MacOS X) 'quartz' devices. The hsv() function has a similar point on semi-transparent colors. Ok, looks promising: I don't use a Mac, and my potential journal does not accept .pdf, only .tiff or .eps, but we are not totally lost here. So, I tried the following silly example in R: > pdf() > image( matrix(rep(1:5,5), nr = 5), col = gray(0:16/16)) > image( matrix(1:25, nr = 5), col = rgb(rep(1, 15), g=0, b=0, alpha = rep(1:15)/15), add = T) # red with different opacities > q() (we are out of R). And then look at the pdf file created: by default it is Rplots.pdf. OK, now we can use gimp, simply to convert this to .eps. Alternatively on linux, the command pdftops and then psto epsi on it would also work. Yippee! Isn't R wonderful?? Hope this helps: though others may have known about this before, I certainly did not know how to do this in R. Best wishes, Ranjan On Thu, 17 May 2007 19:16:18 -0500 Ranjan Maitra <[EMAIL PROTECTED]> wrote: > Dear colleagues, > > I have an image which I can display in the greyscale using image. 
On this > image, for some pixels, which I know, I want to display their activity based > on a third measure. One way to do that would be to color these differently, > and use an opacity measure to display the third measure. An example of what I > am trying to do is at: > > http://www.public.iastate.edu/~maitra/papers/mrm02.pdf > > page 26, for instance. There are two different kinds of voxels, given by > greens and red. At the low end, there is transparency on the red scale and at > the upper end there is opacity in the red and the green. > > A simpler example involving only one kind of voxels is on page 24 of the same > paper. Either way, that figure was done using Matlab, but I was wondering how > do I do this using R. > > Any suggestions, please? > > Many thanks and best wishes, > Ranjan >
[R] displaying intensity through opacity on an image
Dear colleagues, I have an image which I can display in the greyscale using image. On this image, for some pixels, which I know, I want to display their activity based on a third measure. One way to do that would be to color these differently, and use an opacity measure to display the third measure. An example of what I am trying to do is at: http://www.public.iastate.edu/~maitra/papers/mrm02.pdf page 26, for instance. There are two different kinds of voxels, given by greens and red. At the low end, there is transparency on the red scale and at the upper end there is opacity in the red and the green. A simpler example involving only one kind of voxels is on page 24 of the same paper. Either way, that figure was done using Matlab, but I was wondering how do I do this using R. Any suggestions, please? Many thanks and best wishes, Ranjan
[R] slightly OT: constrained least-squares estimation in a deconvolution model
Dear colleagues, This is not strictly an R question, but more a methodology-related question. I have the following linear model: Y = X\beta + e. Pretty standard stuff, but additionally, X is square, symmetric and circulant. So, the LS estimate for \beta is given by just deconvolving Y with the inverse of X, and can be done using 1-d discrete convolution. Now, suppose that I also add in the constraint that some of the \beta's are zero. Is it still possible to use the convolution property (and the fact that the whole X matrix is circulant and symmetric) in some way? This is important in my application, because discrete convolution is what makes the LS estimate of \beta computable, and I have to do it several times. Any ideas or pointers on how to handle this? Has anyone dealt with this, in R or elsewhere? Many thanks and best wishes, Ranjan
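For reference, here is a minimal base-R sketch of the unconstrained part of the problem: for circulant X the LS/deconvolution estimate is an elementwise division in the Fourier domain (note that R's inverse fft is unnormalised, hence the division by n). The zero-constrained fit would still need a further correction in the constrained coordinates, which this sketch does not attempt.

```r
## Sketch: unconstrained LS under a symmetric circulant design via fft.
## X = F^{-1} diag(fft(x1)) F, where x1 is the first column of X, so
## X beta = Y solves by dividing fft(Y) by fft(x1) elementwise.
set.seed(1)
n  <- 8
x1 <- c(3, 1, 0, 0, 0, 0, 0, 1)   # first column; chosen so X is symmetric
                                  # and all eigenvalues 3 + 2cos(2*pi*k/n) > 0
X  <- sapply(0:(n - 1), function(k) x1[((0:(n - 1) - k) %% n) + 1])
beta <- rnorm(n)
Y  <- as.vector(X %*% beta)       # noiseless, just to check the recovery

beta.hat <- Re(fft(fft(Y) / fft(x1), inverse = TRUE)) / n
max(abs(beta.hat - beta))         # numerically zero
```

This costs O(n log n) per solve instead of O(n^3), which is what makes repeating it many times feasible.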
Re: [R] Partitioning a kde2d into equal probability areas
Hi David, This is an interesting question! I was wondering: what is the density function like? What sort of partitions are you looking for? Sorry I am not of much help, but could this question perhaps be better defined? Best, Ranjan On Fri, 4 May 2007 14:48:36 -0500 (CDT) David Forrest <[EMAIL PROTECTED]> wrote: > Hi, > > I'd like to partition a 2d probability density function into regions of > equal probability. It is straightforward in the 1d case, like > qnorm(seq(0,1,length=5)), but for 2d I'd need more constraints. > > Any suggestions for how to approach this? It seems like a spatial > sampling problem but I'm not sure where to look. > > Thanks for your time, > > Dave > -- > Dr. David Forrest > [EMAIL PROTECTED] (804)684-7900w > [EMAIL PROTECTED] (804)642-0662h > http://maplepark.com/~drf5n/
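One way to make the question concrete (a sketch under an assumed definition of "equal probability": nested density bands; a bivariate normal on a grid stands in for what kde2d would return): sort the grid cells by density and cut the cumulative mass into quarters, so each band holds about 25% of the probability.

```r
## Sketch: partition a gridded 2d density into 4 density bands of
## (approximately) equal probability mass.
g <- seq(-4, 4, length = 80)
z <- outer(dnorm(g), dnorm(g))     # density of a standard bivariate normal
p <- z / sum(z)                    # normalise cell masses to sum to 1

ord <- order(p, decreasing = TRUE) # densest cells first
cum <- cumsum(p[ord])
band <- integer(length(p))
band[ord] <- cut(cum, breaks = seq(0, 1, by = 0.25),
                 labels = FALSE, include.lowest = TRUE)
tapply(p, band, sum)               # each band carries ~0.25 of the mass
```

Band 1 is the highest-density 25% region (the analogue of an HPD region); image(g, g, matrix(band, 80, 80)) would display the partition. Other notions of "equal probability" (e.g. a rectangular tiling) need a different construction.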
Re: [R] Calculating Variance-covariance matrix for a multivariate normal distribution.
Dear "stat", Interesting claim to a name! In any case, var(X), where X is the data matrix with n rows of 5 variables, should do the trick. Btw, please read the posting guide: your question is legitimate, hiding your identity ("stat stat") is not. Best wishes, Ranjan On Sat, 28 Apr 2007 16:36:55 +0100 (BST) stat stat <[EMAIL PROTECTED]> wrote: > Dear all R users, > > I wanted to calculate a sample variance-covariance matrix of a five-variate > normal distribution. However I am stuck on calculating each element of that > matrix. My question is: should I calculate ordinary variances and covariances, > taking pairwise variables? Or should I take the partial covariance between any > two variables, keeping the others fixed? In my humble opinion I should go for > the second option. > > Your help will be highly appreciated. > > Thanks and regards, > stat > > > - > > [[alternative HTML version deleted]]
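A quick illustration of the var(X) suggestion on simulated five-variate data:

```r
## var() on a data matrix returns the full sample variance-covariance
## matrix (ordinary, pairwise covariances -- not partial ones).
set.seed(7)
X <- matrix(rnorm(100 * 5), 100, 5)   # n = 100 draws of a 5-variate vector
S <- var(X)                           # 5 x 5 sample variance-covariance matrix
dim(S)
isSymmetric(S)                        # TRUE, as a covariance matrix must be
```

var() and cov() coincide for a matrix argument, and the diagonal of S is just apply(X, 2, var).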
Re: [R] queries
First of all, this is not a "Help Desk". Second, please make sure that you put an informative subject line: I thought this was spam and was going to delete it. Thanks, Ranjan On Sat, 21 Apr 2007 12:03:24 -0700 (PDT) Nima Tehrani <[EMAIL PROTECTED]> wrote: > Dear Help Desk, > > Is there any way to change some of the labels on R diagrams? > > Specifically in histograms, I would like to: > > 1. change the word frequency to count. > 2. Make the font of the title (Histogram of ) smaller. > 3. Have a different word below the histogram than the one > occurring in the title (right now if you choose X for your variable, it comes > both above the histogram (in the phrase Histogram of X) and below it). > > Thanks for your time, > Nima > > > - > > > [[alternative HTML version deleted]]
Re: [R] Partitioning around medoids (PAM)
The question is not reproducible, therefore useless. Perhaps you are forgetting what a medoid is when you expect a mean? HTH, Ranjan On Fri, 20 Apr 2007 17:09:09 +0200 (CEST) nathaniel Grey <[EMAIL PROTECTED]> wrote: > Hi, > > I need some help understanding the output from PAM. When I look at the output it doesn't list the cluster number by the median values on each of the > variables (like it does with k-means). Instead I have the following: > > So I know for instance cluster 1 has a mean for variable1 of 33.33, however > when I run PAM I get: > variable1 variable2 > 29 32 12 > 97 12 9 > 308 106 8 > 217 62 2 > > Does 29 relate to cluster 1, and 97 to cluster 2, etc.? > > Hope this makes sense, > > Nathaniel > > > ___ > > > [[alternative HTML version deleted]]
[R] how to convert the lower triangle of a matrix to a symmetric matrix
Hi, I have a vector of p*(p+1)/2 elements, essentially the lower triangle of a symmetric matrix. I was wondering if there is an easy way to make it fill a symmetric matrix. I have to do it several times, hence some efficient approach would be very useful. Many thanks and best wishes, Ranjan
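One base-R idiom for this (a sketch assuming the vector is in column-major order and includes the diagonal; p = 4 here for illustration):

```r
## Fill a symmetric p x p matrix from its lower triangle (incl. diagonal).
p <- 4
v <- 1:(p * (p + 1) / 2)                 # lower triangle, column-major order
m <- matrix(0, p, p)
m[lower.tri(m, diag = TRUE)] <- v        # fill the lower triangle
m[upper.tri(m)] <- t(m)[upper.tri(m)]    # reflect it into the upper triangle
isSymmetric(m)                           # TRUE
```

Both assignments are vectorised, so repeating this many times stays cheap; if the stored vector excludes the diagonal, drop diag = TRUE and set diag(m) separately.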
Re: [R] is there a function to give significance to correlation?
Hi Jenny, On Thu, 19 Apr 2007 12:48:50 +0100 (BST) Jenny Barnes <[EMAIL PROTECTED]> wrote: > Dear R-Help, > > I am trying to find a function that will give me the significance of the > correlation of 2 variables (in the same dimension arrays) correcting for > serial > autocorrelation. I am not sure what you mean. It appears that you have correlated pairs of observations and you want to find out if the correlation between the pairs is significant. I guess you could fit a linear model, regressing one array's observations against the other's with autocorrelated errors, for each pair of your data below. > How can I view the function cor.test's code? I would like to know a lot more > detail about the function than written in the documentation at > http://finzi.psych.upenn.edu/R/library/stats/html/cor.test.html > to see if this would do the job? You could download the .tgz file, extract it and look at cor.test.R. It is in /usr/local/maitra/Desktop/R-2.4.0/src/library/stats/R/cor.test.R, depending on which .tgz file you downloaded: I have an old one (R-2.4.0) sitting on this computer (not binary). Note that even if you assume each point of your (31, 31) grid to be independent, you will run into the problem of multiple testing, for which you can use false discovery rate methods. However, your (31, 31) grid may not be close to independent. HTH, Best wishes, Ranjan > > I would gratefully appreciate any help you can offer me on these two interlinked > issues, > > Many thanks for your time and consideration, > > Jenny > > PS. If you would like more detail I have two arrays both of dimensions > [31,31,43]. 31x31 is latitude and longitude, 43 is years of rainfall data. I > have produced a spearmans rank correlation map of these 2 arrays over this 43 > year period. 
I now need to find the significance for each of the 31x31 grid > points > > > ~~ > Jennifer Barnes > PhD student: long range drought prediction > Climate Extremes Group > Department of Space and Climate Physics > University College London > Holmbury St Mary > Dorking, Surrey, RH5 6NT > Tel: 01483 204149 > Mob: 07916 139187 > Web: http://climate.mssl.ucl.ac.uk
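A sketch of the false-discovery-rate step mentioned above, with a hypothetical 31 x 31 matrix of p-values standing in for the ones that per-gridpoint cor.test calls would produce:

```r
## Sketch: Benjamini-Hochberg FDR adjustment of a grid of p-values.
## 'pvals' is simulated here; in practice each entry would be
## cor.test(x[i,j,], y[i,j,], method = "spearman")$p.value.
set.seed(3)
pvals <- matrix(runif(31 * 31), 31, 31)
p.bh  <- matrix(p.adjust(pvals, method = "BH"), 31, 31)
sum(p.bh < 0.05)   # gridpoints still significant after FDR control
```

p.adjust treats the matrix as one family of 961 tests; as the reply notes, BH assumes independence (or positive dependence), which a spatial grid may violate, in which case method = "BY" is the conservative fallback.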
Re: [R] Labelling boxplot with fivenumber summary
You may wish to provide brief commented source code: that would be most helpful for people referring to it in the archives. Many thanks and best wishes, Ranjan On Sun, 8 Apr 2007 11:43:47 + Daniel Siddle <[EMAIL PROTECTED]> wrote: > > Just wanted to say thanks very much. I used Chuck's 2nd idea as I found it > the easiest to understand, as I'm still finding my feet with R. Just for > reference for anyone else, fn[1] and fn[5] actually pasted and gave the > values at the whiskers (~1.5 IQR), so I replaced them with max and min > functions which returned the true max and min values and pasted them at the correct > heights. > > Thanks once again for all the help. Regards, > > Daniel Siddle
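For the archives, a brief commented sketch along the lines Daniel describes (simulated data; range = 0 makes the whiskers reach the true extremes, so the fivenum() values line up with the plot):

```r
## Sketch: label a boxplot with its five-number summary.
f <- tempfile(fileext = ".pdf")
pdf(f)
x  <- rnorm(100)
fn <- fivenum(x)              # min, lower hinge, median, upper hinge, max
boxplot(x, range = 0)         # range = 0: whiskers extend to min and max
text(rep(1.3, 5), fn,         # label each summary value beside the box
     labels = round(fn, 2))
dev.off()
```

With the default range = 1.5 the whiskers stop at ~1.5 IQR, which is exactly why fn[1] and fn[5] appeared to sit "at the whiskers" in the earlier attempt.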
Re: [R] transition matrices
On Wed, 4 Apr 2007 07:46:37 -0500 Ranjan Maitra <[EMAIL PROTECTED]> wrote: > Hi, > > It appears that you are trying to separate the states in the transition > matrix such that you have recurrent classes all next to each other. Sorry, I meant non-communicating classes. Btw, thinking about this some more, I think you can do this in a quick and dirty way: convert your transition probability matrix into 0s and 1s, with 0 wherever the entry is positive (so that communicating states sit at distance 0). Then, make this into a distance matrix (using as.dist()), use hclust with single linkage, and cut the tree below 1 to get the equivalence classes. The ordering of the tpm after this is easy. HTH, Ranjan > Here is an idea off the top of my head: make equivalence classes from your > statespace and then use that to create a block diagonal matrix. I did a > search on equivalence classes with RSiteSearch and came up with 17 articles. > You can take a look and see if any of these are useful. > > http://search.r-project.org/cgi-bin/namazu.cgi?query=equivalence+classes&max=20&result=normal&sort=score&idxname=Rhelp02a&idxname=functions&idxname=docs > > HTH, > Ranjan > > > On Wed, 04 Apr 2007 16:34:44 +1000 Richard Rowe <[EMAIL PROTECTED]> wrote: > > > I am working with transition matrices of sequences of animal > > behaviours. What I would like to do is parse the original matrices, > > adjusting row/column order so that the matrix has its main values in blocks > > surrounding the diagonal. This would cause behaviours involved in > > functional groupings (e.g. grooming, resting, foraging etc) to appear as > > blocks. > > This can be done manually by applying subjective 'prior knowledge' of > > sequences, however I would like to have an algorithmic/objective method to > > generate at least a first cut ... 
> > Any suggestions or hints (even just thoughts) would be much appreciated, > > > > Richard Rowe > > > > Dr Richard Rowe > > Zoology & Tropical Ecology > > School of Tropical Biology > > James Cook University > > Townsville 4811 > > AUSTRALIA > > > > ph +61 7 47 81 4851 > > fax +61 7 47 25 1570 > > JCU has CRICOS Provider Code 00117J
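The quick-and-dirty recipe above, sketched on a toy 4-state transition matrix with two non-communicating blocks (the explicit symmetrisation step is an added assumption, so that a positive transition in either direction counts as a link):

```r
## Sketch: recover block structure of a transition matrix via single-linkage.
P <- rbind(c(0.5, 0.5, 0,   0),
           c(0.4, 0.6, 0,   0),
           c(0,   0,   0.3, 0.7),
           c(0,   0,   0.6, 0.4))      # two non-communicating blocks

B  <- (P > 0) | (t(P) > 0)             # linked if either direction is possible
d  <- as.dist(1 - B)                   # distance 0 within a block, 1 across
cl <- cutree(hclust(d, method = "single"), h = 0.5)  # cut below 1

ord <- order(cl)                       # reorder states block by block
P[ord, ord]                            # now block-diagonal
```

For a real behavioural matrix with rare transitions linking everything, thresholding P > eps (rather than P > 0) before this step would give a softer, tunable first cut.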
Re: [R] transition matrices
Hi, It appears that you are trying to separate the states in the transition matrix such that you have recurrent classes all next to each other. Here is an idea off the top of my head: make equivalence classes from your statespace and then use that to create a block diagonal matrix. I did a search on equivalence classes with RSiteSearch and came up with 17 articles. You can take a look and see if any of these are useful. http://search.r-project.org/cgi-bin/namazu.cgi?query=equivalence+classes&max=20&result=normal&sort=score&idxname=Rhelp02a&idxname=functions&idxname=docs HTH, Ranjan On Wed, 04 Apr 2007 16:34:44 +1000 Richard Rowe <[EMAIL PROTECTED]> wrote: > I am working with transition matrices of sequences of animal > behaviours. What I would like to do is parse the original matrices, > adjusting row/column order so that the matrix has its main values in blocks > surrounding the diagonal. This would cause behaviours involved in > functional groupings (e.g. grooming, resting, foraging etc) to appear as > blocks. > This can be done manually by applying subjective 'prior knowledge' of > sequences, however I would like to have an algorithmic/objective method to > generate at least a first cut ... > Any suggestions or hints (even just thoughts) would be much appreciated, > > Richard Rowe > > Dr Richard Rowe > Zoology & Tropical Ecology > School of Tropical Biology > James Cook University > Townsville 4811 > AUSTRALIA > > ph +61 7 47 81 4851 > fax +61 7 47 25 1570 > JCU has CRICOS Provider Code 00117J
Re: [R] Info on SPATSTAT window and maps
And what does this have to do with the lead thread subject line "transition matrices"? I guess the only way we can start enforcing thread discipline is to stop responding to threads that hijack others? Any thoughts? Ranjan On Wed, 4 Apr 2007 11:31:33 +0200 "Giuseppe Brundu" <[EMAIL PROTECTED]> wrote: > I wonder if there is any tutorial explaining, step by step, how to convert a > (georeferenced) map boundary (from esri shape-file) into a Spatstat window, > for performing the analysis of marked point patterns surveyed inside that > map. Any help on the subject will be really appreciated. > > Giuseppe Brundu > > (University of Sassari, Italy)
Re: [R] Genetic programming with R?
On Sun, 01 Apr 2007 09:04:01 +0300 kone <[EMAIL PROTECTED]> wrote: > Hello everybody, > > I'm interested in evolutionary algorithms. I have tested genetic > algorithms with R, but has someone tried genetic programming? Do > you know if there is code somewhere written in R? > Hi Atte, I have no experience with either, but a > RSiteSearch("genetic algorithms") gives me the following: http://search.r-project.org/cgi-bin/namazu.cgi?query=genetic+algorithms&max=20&result=normal&sort=score&idxname=Rhelp02a&idxname=functions&idxname=docs It looks like there are quite a few genetic algorithm packages in R. HTH, Ranjan
Re: [R] warnings on adapt
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. No one can help you. Ranjan On Wed, 28 Mar 2007 11:47:03 -0600 (MDT) [EMAIL PROTECTED] wrote: > Hi all > > I was wondering if someone could help me. > > I have to estimate some parameters, so I am using the function nlm. Inside > this function I have to integrate, hence > I am using the function adapt. > I don't understand why it is giving the following warnings: > > At the beginning: > > Warning: a final empty element has been omitted > the part of the args list of 'c' being evaluated was: >(paste("Ifail=2, lenwrk was too small. -- fix adapt() !\n", "Check the > returned relerr!"), "Ifail=3: ndim > 20 -- rewrite the fortran code ;-) > !", "Ifail=4, minpts > maxpts; should not happen!", "Ifail=5, internal > non-convergence; should not happen!", ) > > When it finishes: > > Warning messages: > 1: Ifail=2, lenwrk was too small. -- fix adapt() ! > Check the returned relerr! in: adapt(ndim = 1 + numcycles, lower = > rep(lowlim.int, (numcycles + > 2: NA/Inf replaced by maximum positive value > > Some people already asked similar questions but couldn't find the answer. > > Thanks in advance > > Luz > > PS: When using the function adapt only (not within nlm) it gives no warnings. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Kmeans
Both answers are correct, and this is the way it should be. Cluster indicators are only nominal: there is no ordering in the ids, so the means simply appear in a different order. More generally, the reason for this (and you could also get totally different answers) is how R initializes kmeans with random starts, which is clearly explained in the help page for kmeans. Ranjan On Wed, 28 Mar 2007 17:48:18 +0200 "Sergio Della Franca" <[EMAIL PROTECTED]> wrote: > Dear R-Helpers, > > I performed kmeans clustering on the following data set(y): > > YEAR PRODUCTS > 1 10 > 2 42 > 3 25 > 4 42 > 5 40 > 6 45 > 7 44 > 8 47 > 9 42 > > > with this code: > > cluster<-kmeans(y[,c("YEAR","PRODUCTS")],3). > > Every time i run this code the components of cluster ("mean" "vector") > changed value,i.e. > > First run: > > Cluster means > > YEAR PRODUCTS > 1 7.50 44.5 > 2 3.67 41.3 > 3 2.00 17.5 > > Clustering vector: > 1 2 3 4 5 6 7 8 9 > 3 2 3 2 2 1 1 1 1 > > Second run: > Cluster means > YEAR PRODUCTS > 1 2.00 17.5 > 2 3.67 41.3 > 3 7.50 44.5 > > Clustering vector: > 1 2 3 4 5 6 7 8 9 > 1 2 1 2 2 3 3 3 3 > > > How can i modify, if it is possible, the code to obtain the same value > ("mean" "vector") every time i'll run the code? > > Thank you in advance. > > Sergio Della Franca. > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
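Since the label switching comes from kmeans's random initialization, fixing the RNG seed before each call makes the result reproducible. A minimal sketch, using the data frame from the post (the seed value is arbitrary):

```r
## Reproducible kmeans: fix the RNG seed so the random starts,
## and hence the cluster labels, are the same on every run.
y <- data.frame(YEAR     = 1:9,
                PRODUCTS = c(10, 42, 25, 42, 40, 45, 44, 47, 42))

set.seed(42)                                 # arbitrary but fixed seed
cl1 <- kmeans(y[, c("YEAR", "PRODUCTS")], 3)

set.seed(42)                                 # same seed ...
cl2 <- kmeans(y[, c("YEAR", "PRODUCTS")], 3) # ... same clustering

identical(cl1$cluster, cl2$cluster)          # TRUE
```

Note that a fixed seed makes the labels reproducible, not canonical: if you need label 1 to always denote the same cluster, relabel by some rule, e.g. by ordering the cluster means.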
Re: [R] catching a console output
see ?sink or ?capture.output -- they may do your task. hth, ranjan On Tue, 27 Mar 2007 14:37:12 +0200 "manuel.martin" <[EMAIL PROTECTED]> wrote: > Hello all, > I cannot figure out how to catch the console output from a function > which does not return anything but a console output, any hints? > > Thank you in advance, Manuel > > -- > > Manuel Martin - Unité InfoSol > INRA > INSTITUT NATIONAL RECHERCHE AGRONOMIQUE > 2163 AVENUE DE LA POMME DE PIN > BP 20619 ARDON > 45166 OLIVET CEDEX > Tél. : 02 38 41 48 21 > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
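A minimal sketch of both suggestions; noisy() is a made-up stand-in for the poster's function, which prints to the console and returns nothing:

```r
## capture.output() returns printed console output as a character
## vector; sink() diverts it to a file instead.
noisy <- function() cat("result: 42\n")   # toy function: prints, returns nothing

out <- capture.output(noisy())            # grab the printed line(s)
out                                       # "result: 42"

tf <- tempfile()
sink(tf)                                  # start diverting console output
noisy()
sink()                                    # stop diverting (always pair sink() calls)
readLines(tf)                             # "result: 42"
```

capture.output() is usually the more convenient of the two when you want the text inside R rather than in a file.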
Re: [R] sampling from the uniform distribution over a convex hull
Thanks, all, and thanks especially to Ted for your investigations in the other thread with the same title! Does spatstat handle higher dimensions than 2? Best wishes, Ranjan On Mon, 26 Mar 2007 10:31:15 - (BST) (Ted Harding) <[EMAIL PROTECTED]> wrote: > I just wrote: > > > Thanks, Adrian! I should have remembered about 'spatstat' after > > the Baddeley et al. paper to the RSS in June 2005, where the > > package was extensively used! (Though neither convexhull() nor > > runifpoint() are in the package I downloaded at the time; but of > > course things have moved on!). > > Apologies -- that last statement is false! > Ted. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Installing R on a machine with 64-bit Opteron processors
Hi Stan, I haven't used SuSE for four years now, but the errors all point to libraries and programs that need to be installed. I used it on a 32-bit machine then. As I recall, SuSE used to have a fabulous installation tool called yast2 (in my opinion, the best of the lot in packaging), though FC seems to have done something similar with pirut. In any case, isn't this option available with SuSE for 64-bit machines? That would be really strange. Anyway, I distinctly recall that R was in the SuSE package set in those days, so perhaps it should still be there (I think the package used to be called r-stat, but my memory is a little unclear on this.) If so, you are better off installing that because the dependencies are installed automatically. The other option is to use yum or synaptic, but you will need to set the repositories up. I am curious: why choose SuSE? It used to be the rage some time ago, especially because it was one of the first with a GUI installation interface, and the above mentioned YaST tool, but fewer people seem to use it nowadays. HTH. Best, Ranjan On Sun, 25 Mar 2007 13:36:47 -0400 Stan Horwitz <[EMAIL PROTECTED]> wrote: > I have been tasked with installing statistical and other data > analysis applications on a new Sun Fire X4600 M2 x64 server that came > equipped with eight AMD dual core Opteron 64-bit processors. It is > running the 64-bit version of Suse Linux 9. > > I have read through the installation docs, and I guess I don't > understand what to do, or even how to identify which version, if any, > of this software is suitable for my computer environment. > > What I want to know is if R has been ported to that environment and > if so, how do I install it? 
I tried downloading it from one of the > mirror web sites, and when I install it, I get > > euler src/R-base-2.4.1# rpm -i R-base-2.4.1-2.1.i586.rpm > warning: R-base-2.4.1-2.1.i586.rpm: V3 DSA signature: NOKEY, key ID > 6b9d6523 > error: Failed dependencies: > xorg-x11-fonts-100dpi is needed by R-base-2.4.1-2.1 > xorg-x11-fonts-75dpi is needed by R-base-2.4.1-2.1 > xorg-x11-libs is needed by R-base-2.4.1-2.1 > blas is needed by R-base-2.4.1-2.1 > libreadline.so.5 is needed by R-base-2.4.1-2.1 > euler src/R-base-2.4.1# > > I also tried the x386 version with the same results. > > The documentation lists some prerequisite items to install such as > blas, but I have searched and I can't see to locate that software. Is > it public domain or a commercial product? I also see that my server > already has x11 fonts installed, so I don't know what those errors > are about. > > Here's what rpm says about fonts ... > > fontconfig-2.2.92.20040221-28.13 > fontconfig-32bit-9-200407011229 > fontconfig-devel-32bit-9-200407011229 > ghostscript-fonts-other-7.07.1rc1-195.8 > XFree86-fonts-75dpi-4.3.99.902-43.71 > fontforge-20060715-7.3 > ghostscript-fonts-std-7.07.1rc1-195.8 > fontconfig-devel-2.2.92.20040221-28.13 > xorg-x11-fonts-scalable-6.9.0-48 > efont-unicode-0.4.0-630.1 > > So isn't "xorg-x11-fonts-scalable-6.9.0-48" what it wants? I guess > not. Is there a list of what this software needs AND where to > download or purchase each item for Suse 9? > > I am also wondering if this list has searchable archives? The list's > web site shows archived postings, but I don't see a way to search them. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. 
> __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] sampling from the uniform distribution over a convex hull
Dear list, Does anyone have a suggestion (or better still) code for sampling from the uniform distribution over the convex hull of a set of points? Many thanks and best wishes, Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
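One standard answer (a sketch, not from the thread itself) is rejection sampling: draw points uniformly from the hull's bounding box and keep those that fall inside the hull. The function names below are made up, and the point-in-polygon test is written out so the example stays in base 2-D R; in higher dimensions the same idea needs a point-in-hull test, e.g. convhulln()/inhulln() from the 'geometry' package (an assumption worth checking).

```r
## Uniform sampling over the convex hull of a 2-D point set by
## rejection from the bounding box.

in.poly <- function(px, py, vx, vy) {
  ## even-odd ray-casting test: is (px, py) inside polygon (vx, vy)?
  n <- length(vx); inside <- FALSE; j <- n
  for (i in seq_len(n)) {
    if (((vy[i] > py) != (vy[j] > py)) &&
        (px < (vx[j] - vx[i]) * (py - vy[i]) / (vy[j] - vy[i]) + vx[i]))
      inside <- !inside
    j <- i
  }
  inside
}

runif.hull <- function(n, pts) {
  h  <- chull(pts)                       # indices of the hull vertices
  vx <- pts[h, 1]; vy <- pts[h, 2]
  out <- matrix(NA_real_, n, 2); got <- 0
  while (got < n) {                      # propose in box, keep if in hull
    x <- runif(1, min(vx), max(vx)); y <- runif(1, min(vy), max(vy))
    if (in.poly(x, y, vx, vy)) { got <- got + 1; out[got, ] <- c(x, y) }
  }
  out
}

set.seed(1)
pts  <- matrix(rnorm(40), ncol = 2)
samp <- runif.hull(100, pts)             # 100 uniform draws over the hull
```

Acceptance probability equals the ratio of hull area to box area, so this degrades in high dimensions; there, decomposing the hull into simplices and sampling each simplex uniformly scales better.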
[R] Is this an appropriate credit (Re: question on suppressing error messages with Rmath library)
Dear list, As a followup to my post here yesterday, I was wondering if this is an appropriate enough credit for the piece of code if experiments show promise and I eventually make this publicly available as part of some other code to interested researchers and practitioners? I am modifying pnt and dnt to serve my purpose of switching off low-precision warnings. /* This program calculates the density and c.d.f. of the non-central t distribution. It is the same as the functions dnt and pnt in the R mathematical library, and indeed is copied with minor modifications from them. The modification is to stop warning messages: the modified functions are called pnoncentralt and dnoncentralt with the same arguments as pnt and dnt. The reason for this separate function is to get around the warnings of low precision that come in when we use the R math functions. This can slow down the program considerably if there is a huge number of calls. Please use it responsibly. The function uses R's standalone mathematical library, hence the renaming to avoid conflicts with pnt and dnt. Modified by Ranjan Maitra, Ames, IA 50014, USA. 2007/03/21. Since this is really a minor modification, all credits should go to the R team below. If you modify this function in order to get around an undiscovered bug or to speed it up/make it more accurate, please let me know. */ I make no changes to any further acknowledgment comments that came with the pnt.c and dnt.c source files. Is this good enough? Am I missing something I should be including? Many thanks and best wishes, Ranjan On Wed, 21 Mar 2007 12:55:51 -0500 Ranjan Maitra <[EMAIL PROTECTED]> wrote: > Hi Luke, > > Thanks! Sorry, my error which I did not realize until after sending out the program. I think I will just extricate the pnt code and compile that separately and that should be fine. > > Thanks very much again, to you and everybody else who replied. 
> > Best wishes, > Ranjan > > > On Wed, 21 Mar 2007 10:04:05 -0500 (CDT) Luke Tierney <[EMAIL PROTECTED]> > wrote: > > > You might get less noise in the replies if you were explicit about > > using Rmath stand-alone and asked on r-devel. > > > > As far as I can see you would need to compile a version of the > > stand-alone library that defines the macros for handling of warning > > messages differently -- the current one just calls printf in the > > stand-alone library. (You might be able to trick the linker into using > > a version of printf for calls from within Rmath that does nothing, but > > I suspect recompiling the sourses is easier.) We will probably be > > rethinking this soon in conjunction with some other changes to > > vectorized math in R. > > > > > > Best, > > > > luke > > > > On Wed, 21 Mar 2007, Ranjan Maitra wrote: > > > > > Dear list, > > > > > > I have been using the Rmath library for quite a while: in the current > > > instance, I am calling dnt (non-central t density function) repeatedly > > > for several million. When the argument is small, I get the warning > > > message: > > > > > > full precision was not achieved in 'pnt' > > > > > > which is nothing unexpected. (The density calls pnt, if you look at the > > > function dnt.) However, to have this happen a huge number of times, when > > > the optimizer is churning through the dataset is bothersome, but more > > > importantly, a bottleneck in terms of speed. Is it possible to switch > > > this off? Is there an setting somewhere that I am missing? > > > > > > Many thanks and best wishes, > > > Ranjan > > > > > > __ > > > R-help@stat.math.ethz.ch mailing list > > > https://stat.ethz.ch/mailman/listinfo/r-help > > > PLEASE do read the posting guide > > > http://www.R-project.org/posting-guide.html > > > and provide commented, minimal, self-contained, reproducible code. > > > > > > > -- > > Luke Tierney > > Chair, Statistics and Actuarial Science > > Ralph E. 
Wareham Professor of Mathematical Sciences > > University of Iowa Phone: 319-335-3386 > > Department of Statistics andFax: 319-335-3017 > > Actuarial Science > > 241 Schaeffer Hall email: [EMAIL PROTECTED] > > Iowa City, IA 52242 WWW: http://www.stat.uiowa.edu > > > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] question on suppressing error messages with Rmath library
Hi Luke, Thanks! Sorry, my error which I did not realize until after sending out the program. I think I will just extricate the pnt code and compile that separately and that should be fine. Thanks very much again, to you and everybody else who replied. Best wishes, Ranjan On Wed, 21 Mar 2007 10:04:05 -0500 (CDT) Luke Tierney <[EMAIL PROTECTED]> wrote: > You might get less noise in the replies if you were explicit about > using Rmath stand-alone and asked on r-devel. > > As far as I can see you would need to compile a version of the > stand-alone library that defines the macros for handling of warning > messages differently -- the current one just calls printf in the > stand-alone library. (You might be able to trick the linker into using > a version of printf for calls from within Rmath that does nothing, but > I suspect recompiling the sourses is easier.) We will probably be > rethinking this soon in conjunction with some other changes to > vectorized math in R. > > > Best, > > luke > > On Wed, 21 Mar 2007, Ranjan Maitra wrote: > > > Dear list, > > > > I have been using the Rmath library for quite a while: in the current > > instance, I am calling dnt (non-central t density function) repeatedly for > > several million. When the argument is small, I get the warning message: > > > > full precision was not achieved in 'pnt' > > > > which is nothing unexpected. (The density calls pnt, if you look at the > > function dnt.) However, to have this happen a huge number of times, when > > the optimizer is churning through the dataset is bothersome, but more > > importantly, a bottleneck in terms of speed. Is it possible to switch this > > off? Is there an setting somewhere that I am missing? 
> > > > Many thanks and best wishes, > > Ranjan > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > -- > Luke Tierney > Chair, Statistics and Actuarial Science > Ralph E. Wareham Professor of Mathematical Sciences > University of Iowa Phone: 319-335-3386 > Department of Statistics andFax: 319-335-3017 > Actuarial Science > 241 Schaeffer Hall email: [EMAIL PROTECTED] > Iowa City, IA 52242 WWW: http://www.stat.uiowa.edu > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] question on suppressing error messages with Rmath library
Thanks, Jim and Ronggui! This would work, of course! But I apologize for not being clear in the first post. I am calling Rmath from a C program. I could not figure out how to set this without going back and recompiling: the message is in pnt.c, which pulls it from the header file nmath.h. Thanks, Ranjan On Wed, 21 Mar 2007 22:08:16 +0800 ronggui <[EMAIL PROTECTED]> wrote: > op <- options(warn=-1) > [main codes here] > options(op) > > > > On 3/21/07, Ranjan Maitra <[EMAIL PROTECTED]> wrote: > > Dear list, > > > > I have been using the Rmath library for quite a while: in the current > > instance, I am calling dnt (non-central t density function) repeatedly for > > several million. When the argument is small, I get the warning message: > > > > full precision was not achieved in 'pnt' > > > > which is nothing unexpected. (The density calls pnt, if you look at the > > function dnt.) However, to have this happen a huge number of times, when > > the optimizer is churning through the dataset is bothersome, but more > > importantly, a bottleneck in terms of speed. Is it possible to switch this > > off? Is there an setting somewhere that I am missing? > > > > Many thanks and best wishes, > > Ranjan > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > > -- > Ronggui Huang > Department of Sociology > Fudan University, Shanghai, China > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] question on suppressing error messages with Rmath library
Dear list, I have been using the Rmath library for quite a while: in the current instance, I am calling dnt (non-central t density function) repeatedly for several million. When the argument is small, I get the warning message: full precision was not achieved in 'pnt' which is nothing unexpected. (The density calls pnt, if you look at the function dnt.) However, to have this happen a huge number of times, when the optimizer is churning through the dataset is bothersome, but more importantly, a bottleneck in terms of speed. Is it possible to switch this off? Is there an setting somewhere that I am missing? Many thanks and best wishes, Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] kmeans
Please read "An Introduction to R" which is available if your type > help.start() at the R-prompt. HTH. Ranjan On Tue, 20 Mar 2007 19:10:17 +0100 "Sergio Della Franca" <[EMAIL PROTECTED]> wrote: > Dear R-helpers, > > I have this dataset(y): > > YEAR PRODUCTS > 1 10 > 2 42 > 3 25 > 4 42 > 5 40 > 6 45 > 7 44 > 8 47 > 9 42 > > I perform kmeans clustering, and the results are the following: > > > Cluster means: > YEAR PRODUCTS > 1 3.67 41.3 > 2 7.50 44.5 > 3 2.00 17.5 > > Clustering vector: > 1 2 3 4 5 6 7 8 9 > 3 1 3 1 1 2 2 2 2 > Now my problem is add acolumn at my dataset(y) whit the information of > clustering vector, i.e.: > >YEAR PRODUCTS *clustering vector* > 1 10*3* > 2 42*1* > 3 25*3* > 4 42*1* > 5 40*1* > 6 45*2* > 7 44*2* > 8 47*2* > 9 42*2* > > > How can I obtain my new dataset with the information of clustering > vector? > > > Thank you in advance. > > Sergio Della Franca. > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
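For the archive, the construct the "Introduction to R" pointer leads to: the assignments live in the $cluster component of the kmeans result, and cbind() (or `$<-`) appends them as a column. A minimal sketch on the posted data:

```r
## Append the clustering vector as a new column of the data frame.
y <- data.frame(YEAR     = 1:9,
                PRODUCTS = c(10, 42, 25, 42, 40, 45, 44, 47, 42))
cl <- kmeans(y[, c("YEAR", "PRODUCTS")], 3)

y.new <- cbind(y, cluster = cl$cluster)   # or: y$cluster <- cl$cluster
str(y.new)                                # 9 obs. of 3 variables
```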
Re: [R] k-means clustering
kmeans(y[,c("AGE", "PRODUCTS")], 3) should do what I think you want. Note that you should try several starting points for good optimality of the partitioning. HTH, Ranjan On Mon, 19 Mar 2007 20:12:10 +0100 "Sergio Della Franca" <[EMAIL PROTECTED]> wrote: > Dear R-helpers, > > I'm trying to perform k-means clustering. > > For example, I have this dataset(y): > > AGE PRODUCTS SEX > 92 3253 M > 43 4144 F > 67 3246 M > 22 4144 F > 56 4087 F > 89 3836 M > 47 4379 M > > My situation is the following: > - If i use this code: cluster<-kmeans(y,3), the program doesn't run because > the variable "SEX" isn't numeric. > - If i use this code: cluster<-kmeans(y[,{"AGE"}],3), the program run > correctly. > - If i use this code: cluster<-kmeans(y[,{"AGE" ; "PRODUCTS"}],3), the > program run correctly, but the k-means clustering is performed only on the > variable "PRODUCTS". > > I would like to perform the k-means clustering on the two numeric variable i > have. > How can i modify the k-means code to develop the clustering on numeric > variable that i decide to use? > > > Thank you in advance. > > Sergio Della Franca. > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
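The "several starting points" advice above is built into kmeans() via the nstart argument; kmeans() keeps the solution with the smallest total within-cluster sum of squares. A minimal sketch on the posted data:

```r
## Cluster on the two numeric columns only, with 25 random starts.
y <- data.frame(AGE      = c(92, 43, 67, 22, 56, 89, 47),
                PRODUCTS = c(3253, 4144, 3246, 4144, 4087, 3836, 4379),
                SEX      = c("M", "F", "M", "F", "F", "M", "M"))

cl <- kmeans(y[, c("AGE", "PRODUCTS")], centers = 3, nstart = 25)
cl$centers                                # 3 rows: one mean vector per cluster
```

Note also that AGE and PRODUCTS are on very different scales, so it is usually worth standardizing the columns (e.g. with scale()) before clustering.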
Re: [R] regarding cointegration
Please do not mess up the thread by posting as a reply to some other topic. Thanks, Ranjan On Thu, 15 Mar 2007 16:51:20 +0530 [EMAIL PROTECTED] wrote: > > Hi All > > Thanks for supporting people like me. > What is cointegration and its connection with granger causality test ? > what is its use and mathematical methodology behind it. Secondly, is > cointegration test like "Phillips-Ouliaris Cointegration Test" of tseries > package or of urca package is the same as cointegration ? Please tell me > how to go about it and interpret the results ? > > Thanks in advance > cheers :-) > -gaurav > > > DISCLAIMER AND CONFIDENTIALITY CAUTION:\ \ This message and ...{{dropped}} > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] reading raw matrix saved with writeBin
On Wed, 14 Mar 2007 18:45:53 -0700 (PDT) Milton Cezar Ribeiro <[EMAIL PROTECTED]> wrote: > Dear Friends, > > I saved a matrix - which contans values 0 and 1 - using following command: > writeBin (as.integer(mymatrix), "myfile.raw", size=1). > > It is working fine and I can see the matrix using photoshop. But now I need > read the matrices again (in fact I have a thousand of them) as matrix into R > but when > I try something like mat.dat<-readBin ("myfile.raw",size=1) I can´t access > the > matrix > > Kind regards, > > Miltinho > > __ > > > [[alternative HTML version deleted]] > Look up the help file. There is an explicit example. Basically, you need to tell the file to read in binary. In fact, I am a little surprised your first command works while writing. HTH! Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
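A round-trip sketch of the advice above: readBin() needs the type ('what'), the byte size, and the number of items to read, and since a raw file stores no dimensions, the matrix shape must be supplied again on the way back in. (A tempfile is used here instead of the poster's "myfile.raw".)

```r
## Write a 0/1 matrix as single-byte integers, then read it back.
f <- tempfile(fileext = ".raw")
m <- matrix(sample(0:1, 12, replace = TRUE), nrow = 3)
writeBin(as.integer(m), f, size = 1)      # 12 bytes on disk

v  <- readBin(f, what = "integer", size = 1, n = length(m))
m2 <- matrix(v, nrow = nrow(m))           # restore the original shape
identical(m, m2)                          # TRUE
```

With a thousand such files, wrap the readBin()/matrix() pair in a function and lapply() it over the file names.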
Re: [R] Connecting R-help and Google Groups?
I agree with Bert on this one! Any commercial entity's future policies will not be decided by some group's past understanding. Everything can be explained in terms of shareholder value. I don't see any advantages with tying up to Google groups. We get enough posts every day here to keep us all busy with even a fraction of them. I also think people should be encouraged to follow the policies such as read the basics "An Intro" etc, before running off and posting. Besides, and most importantly, I prefer having statisticians or those interested in statistics applied to their problems discuss their issues and software, and I learn a lot in this mailing list even in lurk mode. I could do without random posters. Btw, anyone using R should be encouraged to use RSiteSearch to search this mailing list on some topic. Best, Ranjan On Wed, 14 Mar 2007 13:36:33 -0400 "Paul Lynch" <[EMAIL PROTECTED]> wrote: > Well, I don't see what danger could arise from the fact that Google > Groups is owned by a company. Google Groups provides access to all of > usenet, plus many mailing lists (e.g. the ruby-talk mailing list for > Ruby programmers). They don't control any of the newgroups or mailing > lists that they provide access to. It is a free service, supported by > advertising. > > As for the issue of whether there might be future access problems > (e.g. if Google goes bankrupt, which currently seems unlikely) R > users would still have access to the r-help list through the means > that they have now. I am not recommending replacing any of the > current means of access to the r-help list; I am just asking about > adding an additional means of access. > > --Paul > > On 3/14/07, Bert Gunter <[EMAIL PROTECTED]> wrote: > > I know nothing about Google Groups, but FWIW, I think it would be most > > unwise for R/CRAN to hook up to **any** commercially sponsored web portals. 
> > Future changes in their policies, interfaces,or access conditions may make > > them inaccessible or unfreindly to R users. So long as we have folks willing > > and able to host and maintain our lists as part of the CRAN infrastructure, > > CRAN maintains control. I think this is wise and prudent. > > > > I am happy to be educated to the contrary if I misunderstand how this would > > work. > > > > Bert Gunter > > Genentech Nonclinical Statistics > > South San Francisco, CA 94404 > > 650-467-7374 > > > > > > -Original Message- > > From: [EMAIL PROTECTED] > > [mailto:[EMAIL PROTECTED] On Behalf Of Paul Lynch > > Sent: Wednesday, March 14, 2007 8:48 AM > > To: R-help@stat.math.ethz.ch > > Subject: [R] Connecting R-help and Google Groups? > > > > This morning I tried to see if I could find the r-help mailing list on > > Google Groups, which has an interface that I like. I found three > > Google Groups ("The R Project for Statistical Computing", "rproject", > > and "rhelp") but none of them are connected to the r-help list. > > > > Is there perhaps some reason why it wouldn't be a good thing for there > > to be a connected Google Group? I think it should be possible to set > > things up so that a post to the Google Group goes to the r-help > > mailing list, and vice-versa. > > > > Also, does anyone know why the three existing R Google Groups failed > > to get connected to r-help? It might require some action on the part > > of the r-help list administrator. > > > > Thanks, > > --Paul > > > > -- > > Paul Lynch > > Aquilent, Inc. > > National Library of Medicine (Contractor) > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > > > > -- > Paul Lynch > Aquilent, Inc. 
> National Library of Medicine (Contractor) > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] useR! 2007 --- Call for papers and posters
Di, You probably wanted to say first useR! (hosted) in North America, isn't that right? Anyway, I was wondering: what are the conditions for submitting a paper? Are the guidelines the same as JSS (or are they bringing it out)? If I can figure out how to make R packages, I might submit a paper on a package. Ranjan On Fri, 9 Mar 2007 15:26:24 -0600 Dianne Cook <[EMAIL PROTECTED]> wrote: > R Users and Developers, > > The first North American useR! will be held at Iowa State University, > Ames, Iowa, August 8___10, 2007. Information about the meeting can be > found at http://www.user2007.org/. > > We are now ready to accept paper and poster submissions. > > Papers are encouraged in all areas, but particular emphasis is given > to work describing newly created or improved R packages. Papers will > be refereed and a best paper/presentation award is likely. Your full > paper needs to be submitted by April 23, 5:00PM CST, to be considered > for the meeting. > > There will also be the opportunity to present your work as a poster > instead of a paper. Poster submissions will be in the form of an > abstract and needs to be submitted by June 30. > > Submit full papers, and poster abstracts, to [EMAIL PROTECTED] > > useR! Program Committee > [EMAIL PROTECTED] > > ___ > [EMAIL PROTECTED] mailing list > https://stat.ethz.ch/mailman/listinfo/r-announce > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Error in La.svd(X) : error code 1 from Lapack routine 'dgesdd'
On Mon, 05 Mar 2007 09:14:17 +0100 Sophie Richier <[EMAIL PROTECTED]> wrote: > Dear R helpers, > I am working with R 2.4.1 GUI 1.18 (4038) for MacOSX. I have a matrix of > 10 000 genes and try to run the following commands: > > model.mix<-makeModel (data=data, formula=~Dye+Array+Sample+Time, > random=~Array+Sample) > > anova.mix<-fitmaanova (data, model.mix) > > test.mix<-matest (data, model=model.mix, term="Time", n.perm=100, > test.method=c(1,0,1,1)) > > I get the following error message: > Doing F-test on observed data ... > Doing permutation. This may take a long time ... > Error in La.svd(X) : error code 1 from Lapack routine 'dgesdd' > > What does this mean? is my matrix too big? What can I do? > Thanks a lot in adavance > > Sophie from the help file: Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code. from the manpages: man dgesdd INFO(output) INTEGER = 0: successful exit. < 0: if INFO = -i, the i-th argument had an illegal value. > 0: DBDSDC did not converge, updating process failed. I don't know what DBDSDC is, but it appears that there may be some convergence issue for you. Unless someone else has better ideas, look up www.netlib.org/lapack and the routines in there to investigate further. HTH! Best, Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] permutation tests on autocorrelations
Dear list, I have this huge array of numbers, of dimensions 67 x 33 x 51 x 6, the 6 being the replications. I wanted to test for evidence of autocorrelation between the 6 replications, marginally. I can calculate the first-order autocorrelation very easily using an appropriately defined function and apply() over the first three dimensions. But how can I perform a permutation test to assess significance?** I have looked around with RSiteSearch, searching for permutation tests and autocorrelations, but did not come up with anything that I could use. Is there something I am missing? ** I understand that with such a large number of tests there will be the issue of some false positives; I will look into that separately, but right now I need to be able to get a p-value for the marginal tests even before I get there. Many thanks and best wishes, Ranjan
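For what it's worth, the marginal permutation test described above can be sketched as follows. This is only a sketch on a toy array: the array `arr`, its dimensions, and the number of permutations `B` are made up for illustration; the real 67 x 33 x 51 grid would simply be a larger `apply()`.

```r
## Toy stand-in for the real 67 x 33 x 51 x 6 array
set.seed(1)
arr <- array(rnorm(5 * 4 * 3 * 6), dim = c(5, 4, 3, 6))

## first-order (lag-1) autocorrelation of the 6 replications
lag1 <- function(x) cor(x[-length(x)], x[-1])

## two-sided Monte-Carlo permutation p-value: shuffle the replication
## order and recompute the statistic under each shuffle
perm.p <- function(x, B = 999) {
  obs <- lag1(x)
  null <- replicate(B, lag1(sample(x)))
  (1 + sum(abs(null) >= abs(obs))) / (B + 1)
}

## p-values over the first three margins
pvals <- apply(arr, 1:3, perm.p, B = 199)
dim(pvals)   # 5 4 3
```

Note that with only 6 replications there are just 6! = 720 distinct orderings, so the permutation null distribution is fairly coarse.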
Re: [R] R File IO Slow?
I decided to run an experiment: just reading in a file which is 78MB in binary format (of ints). It takes less than 30s using a laptop with 512 MB RAM, 2.3 GHz Intel-4 single processor. At that point, I did not notice that Ramzi was talking about a .RData file. For huge files, I usually do not save my files. I run the R code whenever I need it: the entire exercise usually takes a few minutes, at the most. If something takes very long, I usually save the output into a file and read from there. I have found that this is more efficient (besides helping in reproducing my results). HTH! Ranjan On Thu, 01 Mar 2007 13:04:54 -0500 "Roger D. Peng" <[EMAIL PROTECTED]> wrote: > A 27MB .RData file is relatively big, in my experience. What do you think > is > slow? Maybe it's your computer that is slow? > > -roger > > ramzi abboud wrote: > > Is R file IO slow in general or am I missing > > something? It takes me 5 minutes to do a load(MYFILE) > > where MYFILE is a 27 MB Rdata file. Is there any way > > to speed this up? > > > > The one idea I have is having R call a C or Perl > > routine, reading the file in that language, converting > > the data into R objects, then sending them back into > > R. This is more work than I want to do, however, in > > loading Rdata files. > > > > Any ideas would be appreciated. > > Ramzi Aboud > > University of Rochester > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > -- > Roger D. Peng | http://www.biostat.jhsph.edu/~rpeng/ > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. 
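For readers curious about the timing experiment mentioned above, a rough self-contained version might look like this; the file name and size are made up for illustration.

```r
## write, then time the reading of, a flat binary file of ints
n <- 1e6
writeBin(as.integer(rpois(n, 10)), "ints.bin", size = 4)
system.time(x <- readBin("ints.bin", what = "integer", n = n, size = 4))
unlink("ints.bin")   # clean up the temporary file
```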
Re: [R] Row-wise two sample T-test on subsets of a matrix
Here's one suggestion: convert the matrix into a three-dimensional array and use apply on it. Ranjan On Thu, 1 Mar 2007 11:51:29 -0600 (CST) Nameeta Lobo <[EMAIL PROTECTED]> wrote: > Hello all, > > I am trying to run a two sample t-test on a matrix which is a > 196002*22 matrix. I want to run the t-test, row-wise, with the > first 11 columns being a part of the first group and columns > 12-22 being a part of the second group. > > I tried running something like (temp.matrix being my 196002*22 > matrix) > > t.test(temp.matrix[,1:11],temp.matrix[,12:22],paired=TRUE) > > or somthing like > > as.numeric(t.test(temp.matrix[,1:11],temp.matrix[,12:22],paired=TRUE)[[1]]) > so as to only capture the t-value alone and > > and I get a result for the whole matrix instead of a row-wise > result. > > I want to avoid using a "for" loop to increment the number of > rows as it would take a huge amount of time. > > > Any suggestions would be really appreciated. > > thanks > nameeta > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
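The apply() suggestion works, but for 196002 rows a fully vectorized paired t-test may be much faster, since the paired statistic depends only on the row-wise differences between the two groups. A sketch, with toy matrix dimensions made up for illustration:

```r
set.seed(1)
temp.matrix <- matrix(rnorm(1000 * 22), 1000, 22)  # stand-in for 196002 x 22

d <- temp.matrix[, 1:11] - temp.matrix[, 12:22]    # paired differences
n <- ncol(d)
tvals <- rowMeans(d) / (apply(d, 1, sd) / sqrt(n)) # row-wise paired t statistics
pvals <- 2 * pt(-abs(tvals), df = n - 1)           # two-sided p-values

## agrees with t.test() on a single row
all.equal(unname(t.test(temp.matrix[1, 1:11], temp.matrix[1, 12:22],
                        paired = TRUE)$statistic), tvals[1])
```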
Re: [R] count the # of appearances...
Hello Bunny/Lautioscrew, sum(your vector == your chosen element) should do what you want... HTH, Ranjan On Thu, 1 Mar 2007 15:20:19 +0100 "bunny , lautloscrew.com" <[EMAIL PROTECTED]> wrote: > Hi there, > > is there a possibility to count the number of appearances of an > element in a vector ? > i mean of any given element.. deliver all elements which are exactly > xtimes in this vector ? > > thx in advance !! > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
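To spell that out, and to answer the second part of the question (all elements appearing exactly x times), something along these lines should work; the toy vector is made up for illustration:

```r
x <- c(2, 7, 1, 7, 3, 2, 7)

sum(x == 7)    # number of appearances of a chosen element: 3

## elements that occur exactly k times in the vector
k <- 2
tab <- table(x)
as.numeric(names(tab)[tab == k])   # here: 2
```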
Re: [R] legend question
On Wed, 28 Feb 2007 17:06:18 +0100 Emili Tortosa-Ausina <[EMAIL PROTECTED]> wrote: > y<-c(1960, 1965, 1970, 1975) > z<-c(1, 2, 3, 4) > plot(y, z, type="l", col = 2) > legend(x = -3, y = .9, "legend text", pch = 1, xjust = 0.5) your x and y are outside the plotting area. try using a different set, or better still use locator() to specify x, y interactively. hth, ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
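Concretely, reusing the poster's data: either give legend() coordinates inside the data range, or use one of its keyword positions.

```r
y <- c(1960, 1965, 1970, 1975)
z <- c(1, 2, 3, 4)
plot(y, z, type = "l", col = 2)

legend("topleft", legend = "legend text", pch = 1)   # keyword placement
## or interactively:
## legend(locator(1), legend = "legend text", pch = 1)
```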
Re: [R] Combining Dataframes
Hi, This question has no connection with the original thread. Please do not post like this: it messes up the threading and makes searching the archives by thread topic useless. Thank you, Ranjan On Sun, 25 Feb 2007 17:27:25 +0100 "Bert Jacobs" <[EMAIL PROTECTED]> wrote: > Hi, > > What is the best way to combine several dataframes (approx a dozen, all > having one column) into one? All dataframes have a different rowlength, and > do not contain numbers. > As this new dataframe should have the length of the dataframe with the most > rows, the difference in rows with the other dataframes can be filled with > the value NA. > > I've tried merge (only possible with 2 df) and cbind (gives error) > > Thx for helping me out. > > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Alberto Vieira > Ferreira Monteiro > Sent: 25 February 2007 16:52 > To: r-help@stat.math.ethz.ch > Subject: Re: [R] Random Integers > > Charles Annis, P.E. wrote: > > > > rpois(n, lambda) > > > > ... will do it. But you should tell us something about how you want your > > numbers to be distributed, since rpois() produces integers having a > Poisson > > distribution. > > > > rpois does not generate random _integers_, it generates random > _natural numbers_. > > > The question should be more descriptive. "Random" is half of the things > we need to know, the other half is how deterministic you want your integers. > > For example, if you want to generate random integers in such a way that > all integers have the same probability, then this can't be done. OTOH, if > you want to simulate random integers that distribute "like integers appear > in Nature", then it's still not precise, but there are serious attempts > to reproduce this behaviour. Check in the wikipedia (www.wikipedia.org) > those distributions: Zipf's law, Zeta distribution, Benford's law, > Zipf-Mandelbrot law. 
> The problem is that all of them generate positive > random integers, but it's not difficult to extrapolate them to integers. > > Alberto Monteiro > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. >
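Coming back to the data-frame question itself, one sketch that pads the shorter columns with NA; the data frame names and contents are made up for illustration:

```r
df1 <- data.frame(a = letters[1:5])
df2 <- data.frame(b = letters[1:3])
df3 <- data.frame(c = letters[1:4])

dfs <- list(df1, df2, df3)
n <- max(sapply(dfs, nrow))

## pad each (single) column with NA up to the longest length;
## `length<-` extends a vector, filling the new slots with NA
padded <- lapply(dfs, function(d) { v <- as.character(d[[1]]); length(v) <- n; v })
names(padded) <- sapply(dfs, names)
combined <- as.data.frame(padded)
combined   # 5 rows; shorter columns filled with NA
```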
Re: [R] random uniform sample of points on an ellipsoid (e.g. WG
"My" method is for the surface, not for the interior. The constraint d*X/|| \Gamma^{-1/2}X ||ensures the constraint, no? The uniformity is ensured by the density restricted to satisfy the constraint which makes it a constant. Ranjan On Sat, 24 Feb 2007 22:49:25 - (GMT) (Ted Harding) <[EMAIL PROTECTED]> wrote: > On 24-Feb-07 Ranjan Maitra wrote: > > Hi, > > Sorry for being a late entrant to this thread, but let me see if I > > understand the problem. > > > > The poster wants to sample from an ellipsoid. Let us call this > > ellipsoid X'\Gamma X - d^2= 0. > > > > There is no loss in assuming that the center is zero, otherwise the > > same can be done. > > > > Let us consider the case when Gamma = I first. > > > > Then, let X \sim N_p(0, I) (any radially symmetric distribution will > > do here), then d*X/||X|| is uniform on the sphere of radius d. > > > > How about imitating the same? > > > > Let X \sim N_p(0, \Sigma), where \Sigma = \Gamma^{-1} then X restricted > > to X'\Gamma X = d^2 gives the required uniform density on the > > ellipsoid. > > > > How do we get this easily? I don't think rejection works or is even > > necessary. > > > > Isn't d*X / ||\Gamma^{1/2}X|| enough? Here \Gamma^{1/2} is the square > > root matrix of \Gamma. > > > > Note that any distribution of the kind f(X'\Gamma X) would work, but > > the multivariate Gaussian is a convenient tool, for which two-lines of > > R code should be enough. > > > > > > Many thanks and best wishes, > > Ranjan > > As I understand Rusell Senior's original query, he wants to > generate a uniform distribution on the surface of the ellipsoid, > not over its interior: > > "I am interested in making a random sample from a uniform >distribution of points over the surface of the earth, >using the WGS84 ellipsoid as a model for the earth." 
> > Your solution, and the solution given in Roger Bivand's reference > > Section "UNIFORM_IN_ELLIPSOID_MAP maps uniform points into an ellipsoid" > in: > http://www.csit.fsu.edu/~burkardt/f_src/random_data/random_data.f90 > > is valid for uniform points in the interior of an ellipsoid, I think. > > But, since it is a linear transformation, it is not valid for the > points on the surface, as I explain for the case of an ellipse > (in particular, it results in higher linear density along the > circumference of an ellipse near the ends of the major axis > than near the ends of the minor axis). > > It is also the method adopted for points on the surface in the > Section "UNIFORM_ON_ELLIPSOID_MAP maps uniform points onto an ellipsoid", > and I think this is wrong, as I explained. > > Ted. > > > On Sat, 24 Feb 2007 13:45:56 - (GMT) (Ted Harding) > > <[EMAIL PROTECTED]> wrote: > > > >> [Apologies if this is a repeated posting for you. Something seems > >> to have gone amiss with my previous attempts to post this reply, > >> as seen from my end] > >> > >> On 22-Feb-07 Roger Bivand wrote: > >> > On 21 Feb 2007, Russell Senior wrote: > >> > > >> >> > >> >> I am interested in making a random sample from a uniform > >> >> distribution > >> >> of points over the surface of the earth, using the WGS84 ellipsoid > >> >> as > >> >> a model for the earth. I know how to do this for a sphere, but > >> >> would > >> >> like to do better. I can supply random numbers, want latitude > >> >> longitude pairs out. > >> >> > >> >> Can anyone point me at a solution? Thanks very much. > >> >> > >> > > >> > http://www.csit.fsu.edu/~burkardt/f_src/random_data/random_data.html > >> > > >> > looks promising, untried. > >> > >> Hmmm ... That page didn't seem to be directly useful, since > >> on my understanding of the code (and comments) listed under > >> "subroutine uniform_on_ellipsoid_map(dim_num, n, a, r, seed, x)" > >> "UNIFORM_ON_ELLIPSOID_MAP maps uniform points onto an ellipsoid." 
> >> in > >> > >> http://www.csit.fsu.edu/~burkardt/f_src/random_data/random_data.f90 > >> > >> it takes points uniformly distributed on a sphere and then > >> linearly transforms these onto an ellipsoid. This will not > >> give unform density over the surface of the ellipsoid: indeed > >> the
Re: [R] random uniform sample of points on an ellipsoid (e.g. WG
Hi, Sorry for being a late entrant to this thread, but let me see if I understand the problem. The poster wants to sample from an ellipsoid. Let us call this ellipsoid X'\Gamma X - d^2= 0. There is no loss in assuming that the center is zero, otherwise the same can be done. Let us consider the case when Gamma = I first. Then, let X \sim N_p(0, I) (any radially symmetric distribution will do here), then d*X/||X|| is uniform on the sphere of radius d. How about imitating the same? Let X \sim N_p(0, \Sigma), where \Sigma = \Gamma^{-1} then X restricted to X'\Gamma X = d^2 gives the required uniform density on the ellipsoid. How do we get this easily? I don't think rejection works or is even necessary. Isn't d*X / ||\Gamma^{1/2}X|| enough? Here \Gamma^{1/2} is the square root matrix of \Gamma. Note that any distribution of the kind f(X'\Gamma X) would work, but the multivariate Gaussian is a convenient tool, for which two-lines of R code should be enough. Many thanks and best wishes, Ranjan On Sat, 24 Feb 2007 13:45:56 - (GMT) (Ted Harding) <[EMAIL PROTECTED]> wrote: > [Apologies if this is a repeated posting for you. Something seems > to have gone amiss with my previous attempts to post this reply, > as seen from my end] > > On 22-Feb-07 Roger Bivand wrote: > > On 21 Feb 2007, Russell Senior wrote: > > > >> > >> I am interested in making a random sample from a uniform distribution > >> of points over the surface of the earth, using the WGS84 ellipsoid as > >> a model for the earth. I know how to do this for a sphere, but would > >> like to do better. I can supply random numbers, want latitude > >> longitude pairs out. > >> > >> Can anyone point me at a solution? Thanks very much. > >> > > > > http://www.csit.fsu.edu/~burkardt/f_src/random_data/random_data.html > > > > looks promising, untried. > > Hmmm ... 
That page didn't seem to be directly useful, since > on my understanding of the code (and comments) listed under > "subroutine uniform_on_ellipsoid_map(dim_num, n, a, r, seed, x)" > "UNIFORM_ON_ELLIPSOID_MAP maps uniform points onto an ellipsoid." > in > > http://www.csit.fsu.edu/~burkardt/f_src/random_data/random_data.f90 > > it takes points uniformly distributed on a sphere and then > linearly transforms these onto an ellipsoid. This will not > give uniform density over the surface of the ellipsoid: indeed > the example graph they show of points on an ellipse generated > in this way clearly appears to be more dense at the "ends" of > the ellipse, and less dense on its "sides". See: > > http://www.csit.fsu.edu/~burkardt/f_src/random_data/ > uniform_on_ellipsoid_map.png > [all one line] > > Indeed, if I understand their method correctly, in the case > of a horizontal ellipse it is equivalent (modulo rotating > the result) to distributing the points uniformly over a circle, > and then stretching the circle sideways. This will preserve > the vertical distribution (so at the two ends of the major axis > it has the same density as on the circle) but diluting the > horizontal distribution (so that at the two ends of the minor > axis the density is less than on the circle). > > I did have a notion about this, but sat on it expecting that > someone would come up with a slick solution -- which hasn't > happened yet. > > For the application you have in hand, uniform distribution > over a sphere is a fairly close approximation to uniform > distribution over the ellipsoid -- but not quite. > > But a rejection method, applied to points uniform on the sphere, > can give you points uniform on the ellipsoid and, because of > the close approximation of the sphere to the ellipsoid, you > would not be rejecting many points. > > The outline strategy I had in mind (I haven't worked out details) > is based on the following. 
> > Consider a point X0 on the sphere, at radial distance r0 from > the centre of the sphere (same as the centre of the ellipsoid). > Let the radius through that point meet the ellipsoid at a point > X1, at radial distance r1. > > Let dS0 be an element of area at X0 on the sphere, which projects > radially onto an element of area dS1 on the ellipsoid. You want > all elements dS1 of equal size to be equally likely to receive > a random point. > > Let the angle between the tangent plane to the ellipsoid at X1, > and the tangent plane to the sphere at X0, be phi. > > Then the ratio of areas dS1/dS0 is R(X0), say, where > > R(X0) = dS1/dS0 = r1^2/(r0^2 * cos(phi)) > > and the smaller this ratio, the less likely you want a point > u.d. on the sphere to give rise to a point on the ellipsoid. > > Now define an acceptance probability P(X0) by > > P(X0) = R(X0)/sup[R(X)] > > taking the supremum over X on the sphere. Then sample points X0 > uniformly on the sphere, accepting each one with probability > P(X0), and continue sampling until you have the number of > points that you need. > > Maybe someone has a better idea ... (or code for the above!) > > Ted.
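As a small illustration of the constructions discussed in this thread: this is only a sketch (the diagonal Gamma and the dimensions are made up), and, per the caveat above, a radial construction satisfies the ellipsoid constraint without necessarily being uniform over its surface.

```r
set.seed(1)
d <- 1
Gamma <- diag(c(1, 1, 1/4))        # ellipsoid x^2 + y^2 + z^2/4 = d^2
G.half <- diag(sqrt(diag(Gamma)))  # square-root matrix (diagonal case)

X <- matrix(rnorm(3 * 1000), ncol = 3)   # radially symmetric directions

## d * X / ||X|| is exactly uniform on the sphere of radius d;
## d * X / ||Gamma^{1/2} X|| lands on the ellipsoid X' Gamma X = d^2
Y <- d * X / sqrt(rowSums((X %*% G.half)^2))

## every row satisfies the constraint, up to rounding error
range(rowSums((Y %*% G.half)^2))
```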
Re: [R] TRUE/FALSE as numeric values
On Fri, 23 Feb 2007 14:38:56 +0100 Thomas Preuth <[EMAIL PROTECTED]> wrote: > Hello, > > I want to select in a column of a dataframe all numbers smaller than a > value x > but when I type in test<-(RSF_EU$AREA<=x) I receiv as answer: > > test > [1] TRUE FALSE FALSE TRUE TRUE TRUE FALSE FALSE TRUE TRUE TRUE > FALSE TRUE TRUE TRUE TRUE TRUE > [18] TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE > TRUE TRUE FALSE TRUE TRUE TRUE > [35] FALSE TRUE TRUE TRUE TRUE FALSE FALSE FALSE TRUE TRUE FALSE > TRUE TRUE FALSE FALSE TRUE FALSE > [52] TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE FALSE TRUE > > How can i get the values smaller than x and not the TRUE/FALSE reply? if the dataframe is called RSF_EU, and you want the entire dataframe for those rows, then RSF_EU [ (RSF_EU$AREA <= x ), ] if you want to get only that column vector and nothing else RSF_EU$AREA [ ( RSF_EU$AREA <= x ) ] Such concepts are very well-explained in "An Introduction to R" which you would benefit by reading at the earliest. Ranjan > Thanks in advance, > Thomas > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
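A runnable version of the advice above, on a made-up toy data frame:

```r
RSF_EU <- data.frame(AREA = c(3, 7, 2, 9), NAME = c("a", "b", "c", "d"))
x <- 5

RSF_EU[RSF_EU$AREA <= x, ]     # whole rows with AREA <= x
RSF_EU$AREA[RSF_EU$AREA <= x]  # just the AREA values: 3 2
```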
Re: [R] applying lm on an array of observations with common design matrix
On Thu, 22 Feb 2007 08:17:38 + (GMT) Prof Brian Ripley <[EMAIL PROTECTED]> wrote: > On Thu, 22 Feb 2007, Petr Klasterecky wrote: > > > Ranjan Maitra napsal(a): > >> On Sun, 18 Feb 2007 07:46:56 + (GMT) Prof Brian Ripley <[EMAIL > >> PROTECTED]> wrote: > >> > >>> On Sat, 17 Feb 2007, Ranjan Maitra wrote: > >>> > >>>> Dear list, > >>>> > >>>> I have a 4-dimensional array Y of dimension 330 x 67 x 35 x 51. I have a > >>>> design matrix X of dimension 330 x 4. I want to fit a linear regression > >>>> of each > >>>> > >>>> lm( Y[, i, j, k] ~ X). for each i, j, k. > >>>> > >>>> Can I do it in one shot without a loop? > >>> Yes. > >>> > >>> YY <- YY > >>> dim(YY) <- c(330, 67*35*51) > >>> fit <- lm(YY ~ X) > >>> > >>>> Actually, I am also interested in getting the p-values of some of the > >>>> coefficients -- lets say the coefficient corresponding to the second > >>>> column of the design matrix. Can the same be done using array-based > >>>> operations? > >>> Use lapply(summary(fit), function(x) coef(x)[3,4]) (since there is a > >>> intercept, you want the third coefficient). > >> > >> In this context, can one also get the variance-covariance matrix of the > >> coefficients? > > > > Sure: > > > > lapply(summary(fit), function(x) {"$"(x,cov.unscaled)}) > > But that is not the variance-covariance matrix (and it is an unusual way > to write x$cov.unscaled)! > > > Add indexing if you do not want the whole matrix. You can extract > > whatever you want, just take a look at ?summary.lm, section Value. > > It is unclear to me what the questioner expects: the estimated > coefficients for different responses are independent. For a list of > matrices applying to each response one could mimic vcov.lm and do > > lapply(summary(fit, corr=FALSE), > function(so) so$sigma^2 * so$cov.unscaled) Thanks! Actually, I am really looking to compare the coefficients (let us say second and the third) beta2 - beta4 = 0 for each regression. 
Basically, get the two-sided p-value for the test statistic for each regression. One way of doing that is to get the dispersion matrix of each regression and then to compute the t-statistic and the p-value. That is the genesis of the question above. Is there a better way? Many thanks and best wishes, Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
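One way to carry out that contrast test for a single regression, sketched on toy data (the design matrix and the choice of contrasted coefficients are made up for illustration):

```r
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
y <- X %*% c(1, 2, 2) + rnorm(100)
fit <- lm(y ~ X)

## H0: beta2 - beta3 = 0, i.e. contrast over (intercept, b1, b2, b3)
ct <- c(0, 0, 1, -1)
est <- sum(ct * coef(fit))
se <- sqrt(drop(t(ct) %*% vcov(fit) %*% ct))
tstat <- est / se
2 * pt(-abs(tstat), df = fit$df.residual)   # two-sided p-value
```

For the array of regressions, the same arithmetic can be applied to each per-response covariance matrix extracted as in the quoted reply.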
Re: [R] How to print a double quote
try > cat("Open fnd \"test\"") which is the same as for C. HTH. Ranjan On Thu, 22 Feb 2007 14:09:26 -0500 "Bos, Roger" <[EMAIL PROTECTED]> wrote: > Can anyone tell me how to get R to include a double quote in the middle > of a character string? > > For example, the following code is close: > > > fnd<-"Open fnd 'test'" > >cat(fnd) > Open fnd 'test'> > > > > But instead of Open fnd 'test' I need: Open fnd "test". Difference > seems minor, but I am writing batch files for another program to read in > and it has to have the double quotes to work. > > Thanks in advance for any help or ideas, > > Roger > > ** * > This message is for the named person's use only. It may > contain confidential, proprietary or legally privileged > information. No right to confidential or privileged treatment > of this message is waived or lost by any error in > transmission. If you have received this message in error, > please immediately notify the sender by e-mail, > delete the message and all copies from your system and destroy > any hard copies. You must not, directly or indirectly, use, > disclose, distribute, print or copy any part of this message > if you are not the intended recipient. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
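For completeness, two equivalent ways to get the embedded double quotes:

```r
fnd <- "Open fnd \"test\""   # escape the inner quotes, as in C
fnd2 <- 'Open fnd "test"'    # or single-quote the whole string
identical(fnd, fnd2)         # TRUE

cat(fnd, "\n")               # prints: Open fnd "test"
writeLines(fnd)              # handy when writing lines to a batch file
```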
Re: [R] applying lm on an array of observations with common design matrix
On Sun, 18 Feb 2007 07:46:56 + (GMT) Prof Brian Ripley <[EMAIL PROTECTED]> wrote: > On Sat, 17 Feb 2007, Ranjan Maitra wrote: > > > Dear list, > > > > I have a 4-dimensional array Y of dimension 330 x 67 x 35 x 51. I have a > > design matrix X of dimension 330 x 4. I want to fit a linear regression > > of each > > > > lm( Y[, i, j, k] ~ X). for each i, j, k. > > > > Can I do it in one shot without a loop? > > Yes. > > YY <- YY > dim(YY) <- c(330, 67*35*51) > fit <- lm(YY ~ X) > > > Actually, I am also interested in getting the p-values of some of the > > coefficients -- lets say the coefficient corresponding to the second > > column of the design matrix. Can the same be done using array-based > > operations? > > Use lapply(summary(fit), function(x) coef(x)[3,4]) (since there is a > intercept, you want the third coefficient). In this context, can one also get the variance-covariance matrix of the coefficients? Thank you, and best wishes! Ranjan > Note that this will give a vector, so set its dimension to c(67,35,51) to > relate to the original array. > > I have not BTW looked into the memory requirements here, and you might > want to do this on slices of the array for that reason. > > -- > Brian D. Ripley, [EMAIL PROTECTED] > Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ > University of Oxford, Tel: +44 1865 272861 (self) > 1 South Parks Road, +44 1865 272866 (PA) > Oxford OX1 3TG, UKFax: +44 1865 272595 > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
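The recipe above, shrunk to toy dimensions so that it runs quickly (the p-value extraction follows the coef(x)[3,4] idiom for the third coefficient):

```r
set.seed(1)
Y <- array(rnorm(30 * 4 * 3 * 2), dim = c(30, 4, 3, 2))  # toy response array
X <- matrix(rnorm(30 * 4), 30, 4)                        # toy design matrix

YY <- Y
dim(YY) <- c(30, 4 * 3 * 2)
fit <- lm(YY ~ X)                # one multivariate lm, no loop

## two-sided p-value of the second design column (third coefficient)
p <- sapply(summary(fit), function(s) coef(s)[3, 4])
dim(p) <- c(4, 3, 2)             # back to the original array layout
```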
Re: [R] simple question on one-sided p-values for coef() on output of lm()
Yes, of course! Thank you. So, I guess the answer is that R itself can not be made to do so directly. Many thanks for confirming this. Sincerely, Ranjan On Wed, 21 Feb 2007 20:23:55 + (GMT) Prof Brian Ripley <[EMAIL PROTECTED]> wrote: > On Wed, 21 Feb 2007, Ranjan Maitra wrote: > > > I was wondering if it is possible to get the p-values for one-sided > > tests on the parameters of a linear regression. > > > > For instance, I use lm() and store the result in an object. lm() gives > > me a matrix, using summary() and coef() on which gives me a matrix > > containing the coefficients, the standard errors, the t-statistics and > > the two-sided p-values by default. Can I get it to provide me with > > one-sided p-values (something like alternative less than or greater > > than)? > > Not 'it', but you can easily do the calculation yourself from the output. > E.g. > > example(lm) > s <- summary(lm.D90) > pt(coef(s)[, 2], s$df[2], lower=FALSE) # or TRUE > > -- > Brian D. Ripley, [EMAIL PROTECTED] > Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ > University of Oxford, Tel: +44 1865 272861 (self) > 1 South Parks Road, +44 1865 272866 (PA) > Oxford OX1 3TG, UKFax: +44 1865 272595 > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] simple question on one-sided p-values for coef() on output of lm()
Dear list, I was wondering if it is possible to get the p-values for one-sided tests on the parameters of a linear regression. For instance, I use lm() and store the result in an object. lm() gives me a matrix, using summary() and coef() on which gives me a matrix containing the coefficients, the standard errors, the t-statistics and the two-sided p-values by default. Can I get it to provide me with one-sided p-values (something like alternative less than or greater than)? Many thanks and best wishes, Ranjan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Installing Package rgl - Compilation Fails - Summary
Hi Duncan, I don't know whether this will capture all the dependencies for your documentation, since Rick's error messages would not have mentioned libraries and header files that he had perhaps already installed for something else. Just a thought. Ranjan On Tue, 20 Feb 2007 11:59:24 -0500 Rick Bilonick <[EMAIL PROTECTED]> wrote: > Summarizing: > > I'm running R 2.4.1 on a current FC6 32-bit system. In order to have the > rgl R package install, I needed to install both mesa-libGLU-devel (FC6 > version is 6.5.1-9) and libXext-devel (FC6) rpm packages. Thanks to > everyone who commented. > > Rick B. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. >
Re: [R] Installing Package rgl - Compilation Fails
The error is different now: the linker cannot find the Xext library. Do a yum search for it: running yum provides libXext will tell you which package supplies it, and that package needs to be installed. You may also need to install the libXext-devel.i386 package. HTH. Ranjan On Mon, 19 Feb 2007 16:02:09 -0500 Rick Bilonick <[EMAIL PROTECTED]> wrote: > On Mon, 2007-02-19 at 19:56 +, Oleg Sklyar wrote: > > Check again your error message: > > > > opengl.hpp:24:20: error: GL/glu.h: No such file or directory > > > > you need to install > > > > mesa-libGLU-devel FC6 version is 6.5.1-7 > > > > which will provide development files for glut3. Needless to say the > > above will probably pull some dependencies and (-devel) means it will > > install *.h files as well. Start your FC package manager and search for > > "GLU", install the above and try again. > > > > Best, > > Oleg > > > > I installed a slightly newer version (the one that yum found): > > mesa-libGLU-devel-6.5.1-9.fc6 > > but it still fails (I'm installing as root): > > > install.packages("rgl") > --- Please select a CRAN mirror for use in this session --- > Loading Tcl/Tk interface ... 
done > trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/rgl_0.70.tar.gz' > Content type 'application/x-gzip' length 705556 bytes > opened URL > > > Deleted a bunch of lines > > > > Disposable.hpp:13: warning: 'struct IDisposeListener' has virtual > functions but non-virtual destructor > gui.hpp:56: warning: 'class gui::WindowImpl' has virtual functions but > non-virtual destructor > gui.hpp:90: warning: 'class gui::GUIFactory' has virtual functions but > non-virtual destructor > g++ -shared -L/usr/local/lib -o rgl.so api.o Background.o BBoxDeco.o > Color.o device.o devicemanager.o Disposable.o FaceSet.o fps.o geom.o > gl2ps.o glgui.o gui.o Light.o LineSet.o LineStripSet.o Material.o math.o > osxgui.o osxlib.o par3d.o pixmap.o PointSet.o PrimitiveSet.o QuadSet.o > RenderContext.o render.o rglview.o scene.o select.o Shape.o SphereMesh.o > SphereSet.o SpriteSet.o String.o Surface.o TextSet.o Texture.o > TriangleSet.o Viewpoint.o win32gui.o win32lib.o x11gui.o x11lib.o -L > -lX11 -lXext -lGL -lGLU -L/usr/lib -lpng12 -L/usr/lib/R/lib -lR > /usr/bin/ld: cannot find -lXext > collect2: ld returned 1 exit status > make: *** [rgl.so] Error 1 > chmod: cannot access `/usr/lib/R/library/rgl/libs/*': No such file or > directory > ERROR: compilation failed for package 'rgl' > ** Removing '/usr/lib/R/library/rgl' > > The downloaded packages are in > /tmp/RtmpMc94yC/downloaded_packages > Warning message: > installation of package 'rgl' had non-zero exit status in: > install.packages("rgl") > > > Rick B. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. 
Re: [R] Installing Package rgl - Compilation Fails
As the error message indicates, there is no GL/glu.h file installed on the system (or, if it is installed, the path is not properly set). % yum provides GL/glu.h on FC6 should give you some clues about what to install, or whether it is already installed. Ranjan On Mon, 19 Feb 2007 14:35:29 -0500 Rick Bilonick <[EMAIL PROTECTED]> wrote: > I'm running R 2.4.1 (with the latest versions of all packages) on an > FC6 32-bit system. When I try to install the rgl package, compilation > fails: > > > install.packages("rgl") > --- Please select a CRAN mirror for use in this session --- > Loading Tcl/Tk interface ... done > trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/rgl_0.70.tar.gz' > Content type 'application/x-gzip' length 705556 bytes > opened URL > == > downloaded 689Kb > > * Installing *source* package 'rgl' ... > checking for gcc... gcc > checking for C compiler default output file name... a.out > checking whether the C compiler works... yes > checking whether we are cross compiling... no > checking for suffix of executables... > checking for suffix of object files... o > checking whether we are using the GNU C compiler... yes > checking whether gcc accepts -g... yes > checking for gcc option to accept ANSI C... none needed > checking how to run the C preprocessor... gcc -E > checking for X... libraries , headers > checking for libpng-config... 
yes > configure: using libpng-config > configure: using libpng dynamic linkage > configure: creating ./config.status > config.status: creating src/Makevars > ** libs > g++ -I/usr/lib/R/include -I/usr/lib/R/include -I -DHAVE_PNG_H > -I/usr/include/libpng12 -Iext -I/usr/local/include-fpic -O2 -g > -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic > -fasynchronous-unwind-tables -c api.cpp -o api.o > In file included from glgui.hpp:9, > from gui.hpp:11, > from rglview.h:10, > from Device.hpp:11, > from DeviceManager.hpp:9, > from api.cpp:14: > opengl.hpp:24:20: error: GL/glu.h: No such file or directory > Disposable.hpp:13: warning: 'struct IDisposeListener' has virtual > functions but non-virtual destructor > types.h:77: warning: 'class DestroyHandler' has virtual functions but > non-virtual destructor > gui.hpp:56: warning: 'class gui::WindowImpl' has virtual functions but > non-virtual destructor > gui.hpp:90: warning: 'class gui::GUIFactory' has virtual functions but > non-virtual destructor > pixmap.h:39: warning: 'class PixmapFormat' has virtual functions but > non-virtual destructor > api.cpp: In function 'void rgl_user2window(int*, int*, double*, double*, > double*, double*, int*)': > api.cpp:764: error: 'gluProject' was not declared in this scope > api.cpp: In function 'void rgl_window2user(int*, int*, double*, double*, > double*, double*, int*)': > api.cpp:792: error: 'gluUnProject' was not declared in this scope > make: *** [api.o] Error 1 > chmod: cannot access `/usr/lib/R/library/rgl/libs/*': No such file or > directory > ERROR: compilation failed for package 'rgl' > ** Removing '/usr/lib/R/library/rgl' > > The downloaded packages are in > /tmp/RtmpJY8uNp/downloaded_packages > Warning message: > installation of package 'rgl' had non-zero exit status in: > install.packages("rgl") > > I was able to install this on a 64-bit system running 
R 2.4.1. > > Any ideas on why it fails on FC6? > > Rick B. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code.
[R] applying lm on an array of observations with common design matrix
Dear list, I have a 4-dimensional array Y of dimension 330 x 67 x 35 x 51, and a design matrix X of dimension 330 x 4. I want to fit the linear regression lm(Y[, i, j, k] ~ X) for each i, j, k. Can I do it in one shot, without a loop? I am also interested in getting the p-values of some of the coefficients -- let's say the coefficient corresponding to the second column of the design matrix. Can the same be done using array-based operations? I would be happy to clarify if anything is unclear. Many thanks and best wishes, Ranjan
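[Editor's note: a sketch of one loop-free approach, using small stand-in dimensions (10 x 2 x 3 x 4 instead of 330 x 67 x 35 x 51); all object names are illustrative. lm() accepts a matrix response, so flattening the last three dimensions of Y into columns lets a single call fit every regression against the common design matrix.]

```r
## Small stand-in data mirroring the structure of the question.
set.seed(1)
n <- 10
Y <- array(rnorm(n * 2 * 3 * 4), dim = c(n, 2, 3, 4))  # responses
X <- matrix(rnorm(n * 4), n, 4)                        # common design matrix

## Flatten the last three dimensions of Y (column-major), so each column
## is one (i, j, k) series, then fit everything in one "mlm" call.
Ymat <- matrix(Y, nrow = n)        # n x (2*3*4)
fit  <- lm(Ymat ~ X)               # one fit for all responses at once

## p-values for the coefficient of the second design column ("X2"),
## reshaped back to the trailing dimensions of Y.
pval <- sapply(summary(fit), function(s) coef(s)["X2", "Pr(>|t|)"])
parr <- array(pval, dim = dim(Y)[-1])  # 2 x 3 x 4
```

With the full data there are roughly 120,000 responses; the lm() call itself is fast, but summary() on an mlm of that size is the bottleneck, so for very large problems it may be worth computing the t-statistics directly from qr(X) instead.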
Re: [R] errors when installing new packages
Hello Ying, This is really a Fedora Core 6 question, but anyway: it appears that gfortran, which comes in the appropriate development package, is not installed. Assuming you use yum, you can find the RPM that provides it with yum provides gfortran which will report the RPM containing the gfortran binary: gcc-gfortran 4.1.1-51.fc6 If yum says this is installed, then your path is not set right. Otherwise, go ahead and install it with yum install gcc-gfortran as root or with sudo privileges. HTH. Many thanks and best wishes, Ranjan On Tue, 13 Feb 2007 14:47:51 -0800 (PST) YI ZHANG <[EMAIL PROTECTED]> wrote: > Dear all, > I met a problem when installing new packages in R; my system is Linux > Fedora 6.0. The following is the output. Please help me. Thanks. > > > install.packages('lars') > --- Please select a CRAN mirror for use in this session --- > Loading Tcl/Tk interface ... done > trying URL 'http://www.stathy.com/cran/src/contrib/lars_0.9-5.tar.gz' > Content type 'application/x-tar' length 188248 bytes > opened URL > == > downloaded 183Kb > > * Installing *source* package 'lars' ... > ** libs > gfortran -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 > -march=i386 -mtune=generic -fasynchronous-unwind-tables -c delcol.f > -o delcol.o > make: gfortran: Command not found > make: *** [delcol.o] Error 127 > ERROR: compilation failed for package 'lars' > ** Removing '/usr/lib/R/library/lars' > > The downloaded packages are in > /tmp/RtmpxYqqFS/downloaded_packages > Warning message: > installation of package 'lars' had non-zero exit status in: > install.packages("lars") > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. 
Re: [R] simulating from Langevin distributions
Thanks to Ravi Varadhan for providing the solution. I guess the answer is that if X is MVN with mean mu and dispersion matrix given by I/K, then X/norm(X) is Langevin with the required parameters. A reference for this is Watson's Statistics on Spheres. Many thanks again and best wishes. Ranjan On Tue, 13 Feb 2007 14:44:08 -0500 "Ravi Varadhan" <[EMAIL PROTECTED]> wrote: > Hi Ranjan, > > I think that the following would work: > > library(MASS) > > rlangevin <- function(n, mu, K) { > q <- length(mu) > norm.sim <- mvrnorm(n, mu=mu, Sigma=diag(1/K, q)) > cp <- apply(norm.sim, 1, function(x) sqrt(crossprod(x))) > sweep(norm.sim, 1, cp, FUN="/") > } > > > mu <- runif(7) > > mu <- mu / sqrt(crossprod(mu)) > > K <- 1.2 > > ylang <- rlangevin(n=10, mu=mu, K=K) > > apply(ylang,1,crossprod) > [1] 1 1 1 1 1 1 1 1 1 1 > > > > I hope that this helps. > > Ravi. > > --- > > Ravi Varadhan, Ph.D. > > Assistant Professor, The Center on Aging and Health > > Division of Geriatric Medicine and Gerontology > > Johns Hopkins University > > Ph: (410) 502-2619 > > Fax: (410) 614-9625 > > Email: [EMAIL PROTECTED] > > Webpage: http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html > > > > > > > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Ranjan Maitra > Sent: Tuesday, February 13, 2007 1:04 PM > To: r-help@stat.math.ethz.ch > Subject: [R] simulating from Langevin distributions > > Dear all, > > I have been looking for a while for ways to simulate from Langevin > distributions and I thought I would ask here. I am ok with finding an > algorithmic reference, though of course, a R package would be stupendous! > > Btw, just to clarify, the Langevin distribution with (mu, K), where mu is a > vector and K>0 the concentration parameter is defined to be: > > f(x) = exp(K*mu'x) / const where both mu and x are p-variate vectors with > norm 1. 
> > For p=2, this corresponds to von-Mises (for which algorithms exist, > including in R/Splus) while for p=3, I believe it is called the Fisher > distribution. I am looking for general p. > > Can anyone please help in this? > > Many thanks and best wishes, > Ranjan > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code.
[R] simulating from Langevin distributions
Dear all, I have been looking for a while for ways to simulate from Langevin distributions, and I thought I would ask here. I am OK with an algorithmic reference, though of course an R package would be stupendous! Btw, just to clarify: the Langevin distribution with parameters (mu, K), where mu is a vector and K>0 is the concentration parameter, is defined by f(x) = exp(K*mu'x) / const, where both mu and x are p-variate vectors with norm 1. For p=2, this corresponds to the von Mises distribution (for which algorithms exist, including in R/Splus), while for p=3, I believe it is called the Fisher distribution. I am looking for general p. Can anyone please help with this? Many thanks and best wishes, Ranjan
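[Editor's note: the "const" above has a closed form for general p; this is the von Mises-Fisher normalizing constant from the directional-statistics literature, quoted here from memory, so worth checking against Watson's Statistics on Spheres or Mardia and Jupp's Directional Statistics.]

```latex
% Density on the unit sphere S^{p-1}, with I_{\nu} the modified Bessel
% function of the first kind (constant as quoted in the vMF literature):
f(x) = C_p(K)\,\exp\!\left(K\,\mu' x\right),
\qquad
C_p(K) = \frac{K^{p/2-1}}{(2\pi)^{p/2}\, I_{p/2-1}(K)} .
```

As a check against the special cases mentioned above: p = 2 gives C_2(K) = 1/(2\pi I_0(K)), the von Mises constant, and p = 3 (using I_{1/2}(K) = \sqrt{2/(\pi K)}\sinh K) gives C_3(K) = K/(4\pi\sinh K), the Fisher distribution.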