Re: [R] Lattice: key with expression function
Thanks to Deepayan...again, for suggesting "parse". Here is how I added the degree symbol to a vector of text for my xyplot legend: auto.key =list(points = FALSE,text=parse(text = paste(levels(as.factor(divertSST2$temp)), "*degree", sep = ""))), For me the tricky part was learning about adding the '*'. I found that in this suggestion: http://finzi.psych.upenn.edu/R/Rhelp02a/archive/78961.html Michael Folkes -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: September 5, 2007 11:27 AM To: Folkes, Michael Cc: r-help@stat.math.ethz.ch Subject: Re: [R] Lattice: key with expression function On 9/5/07, Folkes, Michael <[EMAIL PROTECTED]> wrote: > HI all, > I'm trying (unsuccessfully) to add the degree symbol to each line of > text in my legend (within xyplot). Here is the line of code, which > fails to interpret the expression > function: > > auto.key =list(points = > FALSE,text=paste(levels(as.factor(divertSST2$temp)),expression(degree) > ). > ..), > > I just get: > 7 degree > 8 degree > 9 degree That's because > paste("foo", expression(degree)) [1] "foo degree" > If I place 'expression' outside or just after the paste function it > also doesn't work. auto.key = list(text = expression(paste("foo", degree))) should work. I think the problem is that you want a vector of expressions, and that's a bit harder to get. I'm not sure what the best solution is, but if everything else fails, you could try using parse(text=) -Deepayan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
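A self-contained sketch of the working approach above. The temperatures 7, 8, 9 and the random data are made-up stand-ins for levels(as.factor(divertSST2$temp)) and the real plot; only the auto.key part matters here.

library(lattice)
temps <- c(7, 8, 9)
labs  <- parse(text = paste(temps, "*degree", sep = ""))  # expression vector: 7 deg, 8 deg, 9 deg
xyplot(rnorm(30) ~ rnorm(30), groups = rep(temps, 10),
       auto.key = list(points = FALSE, text = labs))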
[R] Lattice: key with expression function
Hi all, I'm trying (unsuccessfully) to add the degree symbol to each line of text in my legend (within xyplot). Here is the line of code, which fails to interpret the expression function: auto.key = list(points = FALSE, text = paste(levels(as.factor(divertSST2$temp)), expression(degree))...), I just get: 7 degree 8 degree 9 degree If I place 'expression' outside or just after the paste function it also doesn't work. Any suggestions are well received! Thanks Michael Folkes [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Lattice: panel superpose with groups
Thank you again Deepayan. I was failing to grasp that I could use panel.groups as a function. But additionally it's still not intuitive to me where and when I should use "..." to pass arguments on. Additionally, as to why the panel.group function needs to pass the 'lty' argument isn't terribly clear to me! Perhaps it will become clear with time. I greatly appreciate your patience and assistance. Thanks all, Michael Folkes -Original Message- From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] Sent: September 4, 2007 5:11 PM To: Folkes, Michael Cc: r-help@stat.math.ethz.ch Subject: Re: [R] Lattice: panel superpose with groups On 9/4/07, Folkes, Michael <[EMAIL PROTECTED]> wrote: > The example code below allows the plotting of three different groups > per panel. I can't fathom how to write the panel function to add an > additional line for each group, which in this case is just the mean Y > value for each group within each panel. (i.e. there'd be six lines > per panel.) Spent all day working on it and searching the archives to > no avail! Yikes. Any help would be greatly appreciated! xyplot(yvar~year|week2,data=df,layout = c(4, 5), as.table=TRUE, type='l', groups = temp , panel = panel.superpose, panel.groups = function(x, y, ..., lty) { panel.xyplot(x, y, ..., lty = lty) #panel.lines(x, rep(mean(y),length(x)), lty=3, ...) # or panel.abline(h = mean(y), lty=3, ...) }) (see ?panel.superpose for explanation) -Deepayan > Michael Folkes > > # > #This builds fake dataset > > years<-2000:2006 > weeks<-1:20 > yr<-rep(years,rep(length(weeks)*6,length(years))) > wk<-rep(weeks,rep(6,length(weeks))) > temp<-rep(4:9,length(years)*length(weeks)) > yvar<-round(rnorm(length(years)*length(weeks)*6,mean=30,sd=4),0) > xvar<-(rnorm(length(years)*length(weeks)*6)+5)/10 > > df<-data.frame(year=yr,week=wk,temp=temp, yvar=yvar, xvar=xvar) > # > > library(lattice) > df$year2<-as.factor(df$year) > df$week2<-as.factor(df$week) > df<-df[df$temp %in% c(5,7,9),] xyplot(yvar~year|week2,data=df,layout = > c(4, 5), as.table=TRUE, > type='l', > groups=temp , > panel = function(x, y,groups, ...) { > panel.superpose(x,y,groups,...) > panel.xyplot(x,rep(mean(y),length(x)),type='l',lty=3) #<- only generates the panel mean > } > ) > > ___ > Michael Folkes > Salmon Stock Assessment > Canadian Dept. of Fisheries & Oceans > Pacific Biological Station > 3190 Hammond Bay Rd. > Nanaimo, B.C., Canada > V9T-6N7 > Ph (250) 756-7264 Fax (250) 756-7053 [EMAIL PROTECTED] > > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
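Deepayan's suggestion, assembled into one runnable call (a sketch that reuses the fake df built in the quoted code below). The point of naming lty in the panel.groups signature seems to be that panel.superpose passes each group's settings (col, lty, ...) on to panel.groups; capturing lty by name keeps it out of '...', so the group's own line can use it while the added mean line can set lty = 3 without a duplicate-argument clash.

library(lattice)
xyplot(yvar ~ year | week2, data = df, layout = c(4, 5), as.table = TRUE,
       type = 'l', groups = temp,
       panel = panel.superpose,
       panel.groups = function(x, y, ..., lty) {
         panel.xyplot(x, y, ..., lty = lty)       # this group's own line
         panel.abline(h = mean(y), lty = 3, ...)  # dotted group mean, same colour
       })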
[R] Lattice: panel superpose with groups
The example code below allows the plotting of three different groups per panel. I can't fathom how to write the panel function to add an additional line for each group, which in this case is just the mean Y value for each group within each panel. (i.e. there'd be six lines per panel.) Spent all day working on it and searching the archives to no avail! Yikes. Any help would be greatly appreciated! Michael Folkes # #This builds fake dataset years<-2000:2006 weeks<-1:20 yr<-rep(years,rep(length(weeks)*6,length(years))) wk<-rep(weeks,rep(6,length(weeks))) temp<-rep(4:9,length(years)*length(weeks)) yvar<-round(rnorm(length(years)*length(weeks)*6,mean=30,sd=4),0) xvar<-(rnorm(length(years)*length(weeks)*6)+5)/10 df<-data.frame(year=yr,week=wk,temp=temp, yvar=yvar, xvar=xvar) # library(lattice) df$year2<-as.factor(df$year) df$week2<-as.factor(df$week) df<-df[df$temp %in% c(5,7,9),] xyplot(yvar~year|week2,data=df,layout = c(4, 5), as.table=TRUE, type='l', groups=temp , panel = function(x, y,groups, ...) { panel.superpose(x,y,groups,...) panel.xyplot(x,rep(mean(y),length(x)),type='l',lty=3) #<- only generates the panel mean } ) ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station 3190 Hammond Bay Rd. Nanaimo, B.C., Canada V9T-6N7 Ph (250) 756-7264 Fax (250) 756-7053 [EMAIL PROTECTED] [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Embedding Audio Files in Interactive Graphs
On 9/4/07, Sam Ferguson <[EMAIL PROTECTED] > wrote: > > Thanks for your reply Bruno. > > No - as I said, I know how to do that - the movie15 and the > multimedia package are basically the same, and it is relatively > straightforward to get an audio file into a pdf with them. However, > real interactivity is not easily achieved in latex IMO (as it's not > its purpose). At least I'm hoping for a bit more flexibility. > > R seems like a better place to do interactivity, and with the field > of information visualisation pointing out that interactivity is a > very useful element for investigation of data it seems that clicking > around graphical displays may become more and more popular in time. > In my field I'm interested in audio data, and so simple interactive > visual and auditory displays would be great. A (very useful) start > would be 5 separate waveform plots that would play their appropriate > sounds when clicked. More complex figures could plot in a 2d space > and allow selection of data points or ranges perhaps. > > I love R for graphics and for Sweave though, and would like to use it > if possible - ideally it would be to produce a figure that included > the appropriate audiofiles and interactive scripts, which could then > be incorporated into a latex document \includegraphics. However, from > the deafening silence on this list it seems like I may be attempting > to push a square block through a round hole unfortunately. Seems I am > back to Matlab and handle graphics - but it won't do this properly > either. Lots of things can be embedded into PDF documents, like javascript, flash and svg. Maybe it would be feasible to use the gridSVG package to output some graphics as svg with javascript to play the sounds and embed that into a pdf? Cheers > Sam > > > On 03/09/2007, at 5:39 PM, Bruno C.. wrote: > > > Are you asking on how to include an audio file into a pdf? > > This is already feasible via latex and the movie 15 package ;) > > > > Ciao > > > >> Hi R-ers, > >> > >> I'm wondering if anyone has investigated a method for embedding audio > >> files in R graphs (pdf format), and allowing their playback to be > >> triggered interactively (by clicking on a graph element for > >> instance). > >> > >> I know how to do this in latex pdfs with the multimedia package, but > >> it seems that R would provide a more appropriate platform for many > >> reasons. > >> > >> Thanks for any help you can provide. > >> Sam Ferguson > >> Faculty of Architecture, Design and Planning > >> The University of Sydney > >> > >> __ > >> R-help@stat.math.ethz.ch mailing list > >> https://stat.ethz.ch/mailman/listinfo/r-help > >> PLEASE do read the posting guide http://www.R-project.org/posting- > >> guide.html > >> and provide commented, minimal, self-contained, reproducible code. > >> > > > > > > -- > > Leggi GRATIS le tue mail con il telefonino i-mode di Wind > > http://i-mode.wind.it/ > > > > -- > Sam Ferguson > Faculty of Architecture > The University of Sydney > [EMAIL PROTECTED] > +61 2 93515910 > 0410 719535 > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. 
Re: [R] Lattice:can't subset in panel function using other variables
Thankyou very much Deepayan for pointing me in the correct direction. Your examples work perfectly for me. Much appreciated. Michael From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] Sent: Fri 31/08/2007 5:18 PM To: Folkes, Michael Cc: r-help@stat.math.ethz.ch Subject: Re: [R] Lattice:can't subset in panel function using other variables On 8/31/07, Folkes, Michael <[EMAIL PROTECTED]> wrote: > Thanks Deepayan for your response. > The first subset you suggest was just a test for me and not what I > wanted. > I can't do your second suggested subset action as I wish to plot all the > panel data, but then add a coloured datapoint for just one year (see > example code). > I think I have found my problem but don't know how to solve it. > The subscripts of data going into each panel are almost always the same > length, except maybe one or two panels have 1 less datapoint. > I've attached a script that builds a quick dataset and plots what I was > aiming for. It works great. If you then remove one line of data from > the DF (using "df<-df[-1,]" in the script), the plotting goes awry. > > Any suggestions about dealing with unequal data lengths for panel > function subsetting? If your goal is to highlight one particular year, why not use something like xyplot(yvar~xvar|week2,data=df,layout = c(4, 5), as.table=TRUE, groups = (year == 2005), col = 1, pch = c(1, 16)) ? Your code doesn't work because you don't seem to understand what 'subscripts' is supposed to be (either that, or you are confusing yourself with multiple indices). Here's a version with the correct usage: xyplot(yvar~xvar|week2,data=df,layout = c(4, 5), as.table=TRUE, panel = function(x, y, subscripts, ...) { highlight <- (df$year == 2005) highlight.panel <- highlight[subscripts] panel.xyplot(x, y, type='p', col=1, cex=.5) panel.xyplot(x[highlight.panel], y[highlight.panel], type='p', pch=1, col=3, cex=1.5) }) -Deepayan [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] why doesn't as.character of this factor create a vector of characters?
This message seems to have been somewhat forgotten. Here is a reply. When you constructed the data.frame, all strings were converted to factors. If you didn't want that, it would have been possible to specify it: df<-data.frame(a=a,b=b,c=c,stringsAsFactors=F) Then everything would work as intended: one.line<-as.character(df[df$a=="Abraham",]) Actually, the real problem is that df[df$a=="Abraham",] returns a list. There is no need to use as.character here, just unlist: one.line<-unlist(df[df$a=="Abraham",]) The returned list is also the problem with your code. If you have a data.frame with factors, then df[df$a=="Abraham",] returns a list. Each element of this list is a factor, and has a different set of levels. Thus, look at the following output: > c(df[df$a=="Abraham",]) $a [1] Abraham Levels: Abraham Jonah Moses $b [1] Sarah Levels: Hannah Mary Sarah $c [1] Billy Levels: Billy Bob Joe It is quite obvious why it is so complicated to untangle these. I think the best way would be: one.line<- sapply(df[df$a=="Abraham",],as.character) Michael -Original Message- From: r-help-bounces_at_stat.math.ethz.ch [mailto:r-help-bounces_at_stat.math.ethz.ch] On Behalf Of Andrew Yee Sent: Tuesday, July 10, 2007 8:57 AM To: r-help_at_stat.math.ethz.ch Subject: [R] why doesn't as.character of this factor create a vector of characters? I'm trying to figure out why when I use as.character() on one row of a data.frame, I get factor numbers instead of a character vector. Any suggestions? See the following code: a<-c("Abraham","Jonah","Moses") b<-c("Sarah","Hannah","Mary") c<-c("Billy","Joe","Bob") df<-data.frame(a=a,b=b,c=c) #Suppose I'm interested in one line of this data frame but as a vector one.line <- df[df$a=="Abraham",] #However the following illustrates the problem I'm having one.line <- as.vector(df[df$a=="Abraham",]) #Creates a one row data.frame instead of a vector! #compare above to one.line <- as.character(df[df$a=="Abraham",]) #Creates a vector of 1, 3, 1! #In the end, this creates the output that I'd like: one.line <-as.vector(t(df[df$a=="Abraham",])) #but it seems like a lot of work! __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] iplots Java
I run R 2.5.1 on a Mac, and use JGR as the front-end. When I performed an update.packages(), I believe it unpdated a component of JGR. I then quit and tried to relaunch JGR. It wouldn't launch. Instead it opened a panel that says: "Cannot find iplot Java classes. Please make sure that the latest iplots R package is correctly installed." I would appreeciate hearing of strategies for solving the problem. _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] memory.size help
At 13:27 31/08/2007, dxc13 wrote: >I keep getting the 'memory.size' error message when I run a program I have >been writing. It always says it cannot allocate a vector of a certain size. I >believe the error comes in the code fragment below where I have multiple >arrays that could be taking up space. Does anyone know a good way around >this? It is a bit hard without knowing the dimensions of the arrays but see below. >w1 <- outer(xk$xk1, data[,x1], function(y,z) abs(z-y)) >w2 <- outer(xk$xk2, data[,x2], function(y,z) abs(z-y)) >w1[w1 > d1] <- NA >w2[w2 > d2] <- NA >i1 <- ifelse(!is.na(w1),yvals[col(w1)],NA) >i2 <- ifelse(!is.na(w2),yvals[col(w2)],NA) If I read this correctly, after this point you no longer need w1 and w2, so what happens if you remove them? >zk <- numeric(nrow(xk)) #DEFINING AN EMPTY VECTOR TO HOLD ZK VALUES >for(x in 1:nrow(xk)) { > k <- intersect(i1[x,], i2[x,]) > zk[x] <- mean(unlist(k), na.rm = TRUE) >} >xk$zk <- zk >data <- na.omit(xk) Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
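A minimal sketch of that suggestion, assuming w1 and w2 really are no longer needed once i1 and i2 exist:

rm(w1, w2)  # drop the two large intermediate matrices
gc()        # return their memory to the pool before the loop over the rows of xk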
Re: [R] Lattice:can't subset in panel function using other variables
Thanks Deepayan for your response. The first subset you suggest was just a test for me and not what I wanted. I can't do your second suggested subset action as I wish to plot all the panel data, but then add a coloured datapoint for just one year (see example code). I think I have found my problem but don't know how to solve it. The subscripts of data going into each panel are almost always the same length, except maybe one or two panels have 1 less datapoint. I've attached a script that builds a quick dataset and plots what I was aiming for. It works great. If you then remove one line of data from the DF (using "df<-df[-1,]" in the script), the plotting goes awry. Any suggestions about dealing with unequal data lengths for panel function subsetting? Thanks so much Michael ***Start of code # #This builds fake dataset years<-2000:2006 weeks<-1:20 yr<-rep(years,rep(length(weeks)*6,length(years))) wk<-rep(weeks,rep(6,length(weeks))) temp<-rep(4:9,length(years)*length(weeks)) yvar<-round(rnorm(length(years)*length(weeks)*6,mean=30,sd=4),0) xvar<-(rnorm(length(years)*length(weeks)*6)+5)/10 df<-data.frame(year=yr,week=wk,temp=temp,yvar=yvar,xvar=xvar) # library(lattice) df<-df[df$temp==4 ,] df$year2<-as.factor(df$year) df$week2<-as.factor(df$week) #! #df<-df[-1,] #<-run this to see problem if panel data are of unequal length #! print(xyplot(yvar~xvar|week2,data=df,layout = c(4, 5), scales=list(cex=0.7,x=list(rot=45)), par.strip=list(cex=.7), as.table=T, strip = strip.custom(strip.names = F, strip.levels = TRUE), panel=function(x,y,subscripts){ panel.xyplot(x,y,type='p',col=1,cex=.5) panel.xyplot(df$xvar[df$year==2005][subscripts],df$yvar[df$year==2005][s ubscripts],type='p',pch=1,col=3,cex=1.5) }, )) ***End of code -Original Message- From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] Sent: August 31, 2007 2:04 PM To: Folkes, Michael Cc: r-help@stat.math.ethz.ch Subject: Re: [R] Lattice:can't subset in panel function using other variables On 8/30/07, Folkes, Michael <[EMAIL PROTECTED]> wrote: > I've succeeded doing a subset within the panel function of xyplot - if I'm subsetting based on either the value of 'x' or 'y' (e.g. below). However, I wish to subset based on the value of another variable and colour that one plotted point. It's not working. Either it doesn't plot the coloured data point, or if I sort the data differently it colours one datapoint, but the wrong one. I assume this means it's not getting the right subscripts?Finally I can sort of see the light as if I remove the conditioning variable (week) and subset before the xyplot (e.g. week==1) to get just one panel, it plots the correct data including the correct single red point. > Where am I erring? > ___ > print(xyplot(yval~xval|week,data=mydata, > panel=function(x,y,subscripts){ > #panel.xyplot(x,y,type='p',col=1,cex=.5) > panel.xyplot(x[y<=40],y[y<=40],type='p',col=2,cex=.5) # <-this works > > panel.xyplot(x[mydata$yr==2005],y[mydata$yr==2005],type='p',pch=16,col=2 ,cex=.5) # <-sometimes this won't work or it colours wrong datapoint > })) > ___ Why not xyplot(yval~xval|week,data=mydata, subset = yval < 40) or xyplot(yval~xval|week,data=mydata, subset = yr==2005) -Deepayan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Lattice:can't subset in panel function using other variables
I've succeeded doing a subset within the panel function of xyplot - if I'm subsetting based on either the value of 'x' or 'y' (e.g. below). However, I wish to subset based on the value of another variable and colour that one plotted point. It's not working. Either it doesn't plot the coloured data point, or if I sort the data differently it colours one datapoint, but the wrong one. I assume this means it's not getting the right subscripts? Finally I can sort of see the light as if I remove the conditioning variable (week) and subset before the xyplot (e.g. week==1) to get just one panel, it plots the correct data including the correct single red point. Where am I erring? ___ print(xyplot(yval~xval|week,data=mydata, panel=function(x,y,subscripts){ #panel.xyplot(x,y,type='p',col=1,cex=.5) panel.xyplot(x[y<=40],y[y<=40],type='p',col=2,cex=.5) # <-this works panel.xyplot(x[mydata$yr==2005],y[mydata$yr==2005],type='p',pch=16,col=2,cex=.5) # <-sometimes this won't work or it colours wrong datapoint })) ___ Thanks very much! Michael Folkes ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station 3190 Hammond Bay Rd. Nanaimo, B.C., Canada V9T-6N7 Ph (250) 756-7264 Fax (250) 756-7053 [EMAIL PROTECTED] [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] help with aggregate(): tables of means for terms in an mlm
I'm trying to extend some work in the car and heplots packages that requires getting a table of multivariate means for one (or later, more) terms in an mlm object. I can do this for concrete examples, using aggregate(), but can't figure out how to generalize it. I want to return a result that has the factor-level combinations as rownames, and the means as the body of the table (aggregate returns the factors as initial columns). # Examples: m1 & m2 are desired results > library(car) > soils.mod <- lm(cbind(pH,N,Dens,P,Ca,Mg,K,Na,Conduc) ~ Block + Contour*Depth, data=Soils) > term.names(soils.mod) [1] "(Intercept)" "Block" "Contour" "Depth" [5] "Contour:Depth" > > # response variables > resp<- model.response(model.frame(soils.mod)) > # 1-factor means: term="Contour" > m1<-aggregate(resp, list(Soils$Contour), mean) > rownames(m1) <- m1[,1] > ( m1 <- m1[,-1] ) pH N Dens PCaMg KNa Conduc Depression 4.692 0.08731 1.343 188.2 7.101 8.986 0.3781 5.823 6.946 Slope 4.746 0.10594 1.333 159.4 8.109 8.320 0.4156 6.103 6.964 Top4.570 0.11256 1.272 150.9 8.877 8.088 0.6050 4.872 5.856 > > # 2-factor means: term="Contour:Depth" > m2<-aggregate(resp, list(Soils$Contour, Soils$Depth), mean) > rownames(m2) <- paste(m2[,1], m2[,2],sep=":") > ( m2 <- m2[,-(1:2)] ) pH N Dens P CaMg K Na Conduc Depression:0-10 5.353 0.17825 0.9775 333.0 10.685 7.235 0.6250 1.5125 1.473 Slope:0-10 5.508 0.21900 1.0500 258.0 12.248 7.232 0.6350 1.9900 2.050 Top:0-10 5.332 0.19550 1.0025 242.8 13.385 6.590 0.8000 0.9225 1.373 Depression:10-30 4.880 0.08025 1.3575 187.5 7.548 9.635 0.4500 4.6400 5.480 Slope:10-30 5.283 0.10100 1.3475 160.2 9.515 8.980 0.4800 4.9350 4.910 Top:10-304.850 0.11750 1.3325 147.5 10.238 8.090 0.6500 2.9800 3.583 Depression:30-60 4.362 0.05050 1.5350 124.2 5.402 9.918 0.2400 7.5875 9.393 Slope:30-60 4.268 0.06075 1.5100 114.5 5.877 8.968 0.3000 7.6300 8.925 Top:30-604.205 0.07950 1.3225 116.2 6.620 8.742 0.5450 6.2975 7.440 Depression:60-90 4.173 0.04025 1.5025 108.0 4.770 9.157 0.1975 9.5525 11.438 Slope:60-90 3.927 0.04300 1.4225 105.0 4.798 8.100 0.2475 9.8575 11.970 Top:60-903.893 0.05775 1.4300 97.0 5.268 8.928 0.4250 9.2900 11.030 > Here is the current version of a function that doesn't work, because I can't supply the factor names to aggregate in the proper way. Can someone help me make it work? termMeans.mlm <- function( object, term ) { resp<- model.response(model.frame(object)) terms <- term.names(soils.mod) terms <- terms[terms != "(Intercept)"] factors <- strsplit(term, ":") # browser() means <- aggregate(resp, factors, mean) # rownames(means) <- ... # means <- means[, -(1:length(factors)] } > termMeans.mlm(soils.mod, "Contour") Error in FUN(X[[1L]], ...) : arguments must have same length thanks, -Michael -- Michael Friendly Email: friendly AT yorku DOT ca Professor, Psychology Dept. York University Voice: 416 736-5115 x66249 Fax: 416 736-5814 4700 Keele Streethttp://www.math.yorku.ca/SCS/friendly.html Toronto, ONT M3J 1P3 CANADA __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
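One possible fix (a sketch, not tested against the car/heplots code of the time): aggregate() needs the factor columns themselves as its 'by' list, not the character vector of names that strsplit() returns, and the model frame already holds those columns. The helper object 'fnames' below is illustrative only.

termMeans.mlm <- function(object, term) {
  mf     <- model.frame(object)
  resp   <- model.response(mf)
  fnames <- strsplit(term, ":")[[1]]          # e.g. c("Contour", "Depth")
  means  <- aggregate(resp, by = mf[fnames], FUN = mean)
  rownames(means) <- do.call(paste, c(means[fnames], sep = ":"))
  means[, -seq_along(fnames), drop = FALSE]   # drop the grouping columns
}
termMeans.mlm(soils.mod, "Contour")         # should reproduce m1
termMeans.mlm(soils.mod, "Contour:Depth")   # should reproduce m2 (row order may differ)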
Re: [R] Exact Confidence Intervals for the Ration of Two Binomial
At 18:28 23/08/2007, [EMAIL PROTECTED] wrote: >Hello everyone, > >I will like to know if there is an R package to compute exact confidence >intervals for the ratio of two binomial proportions. If you go to https://www.stat.math.ethz.ch/pipermail/r-help//2006-November/thread.html and search that part of the archive for "relative risk" you will find a number of suggestions. Unfortunately the responses are not all threaded so you need to search the whole thing. >Tony. > [[alternative HTML version deleted]] Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] FAQ 7.x when 7 does not exist. Useability question
> "John" == John Kane <[EMAIL PROTECTED]> writes: > Apologies for the poor quality of the screen capture. I think the first one is a screen cap of http://cran.r-project.org/doc/FAQ/R-FAQ.html. Is that correct? The faq that is part of the r-doc-html package from Debian also has the same "bulleted" table of contents. Mike __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] How to fit an linear model withou intercept
At 10:56 23/08/2007, Michal Kneifl wrote: >Please could anyone help me? >How can I fit a linear model where an intercept has no sense? Well the line has to have an intercept somewhere I suppose. If you use the site search facility and look for "known intercept" you will get some clues. >Thanks in advance.. > >Michael > > Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
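For the common case where 'no intercept' just means forcing the fit through the origin, the formula interface does it directly; a generic example with made-up data, not the poster's:

y <- rnorm(10); x <- rnorm(10)
lm(y ~ x - 1)         # suppress the intercept
lm(y ~ 0 + x)         # equivalent spelling
lm(I(y - 5) ~ x - 1)  # the 'known intercept' trick: fix the intercept at 5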
[R] nls() and numerical integration (e.g. integrate()) working together?
Dear List-Members, for 3 weeks I have been working heavily on reproducing the results of an economic paper. The method there uses the numerical solution of an integral within nonlinear least squares. Within the integrand there is also some parameter to estimate. Is that possible to implement in R, in the end? [Originally it was done in GAUSS.] I'm nearly at the point of giving up. I constructed an example showing the problems I face. I have three questions - related to three errors shown below: 1) How do I make clear that in the integrand z is the integration variable and b1 is a parameter, while x1 is a data variable? 2) and 3) How do I set up a correct estimation of the integral? library(stats) y <- c(2,15,24,21,5,6) x1 <- c(2.21,5,3,5,2,1) x2 <- c(4.51,6,2,11,0.4,3) f <- function(z) {z + b1*x1} vf <- Vectorize(f) g <- function(z) {z + x1} vg <- Vectorize(g) Error 1: > nls(y ~ integrate(vf,0,1)+b2*x2,start=list(b1=0.5,b2=2)) Error in function (z) : object "b1" not found Error 2: > nls(y ~ integrate(vg,0,1)+b2*x2,start=list(b1=0.5,b2=2)) Error in integrate(vg, 0, 1) : REAL() can only be applied to a 'numeric', not a 'list' Error 3: > nls(y ~ integrate(g,0,1)+b2*x2,start=list(b1=0.5,b2=2)) Error in integrate(g, 0, 1) + b2 * x2 : non-numeric argument to binary operator In addition: Warning messages: 1: longer object length is not a multiple of shorter object length in: z + x1 With a lot of thanks in advance, Michael __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
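A sketch of one way to get the toy example to run (this fits the toy integrand above, not the economic model from the paper; intf is a made-up wrapper name): make b1 an explicit argument of the integrand, evaluate the integral once per observation with sapply(), and let nls() call that wrapper.

y  <- c(2, 15, 24, 21, 5, 6)
x1 <- c(2.21, 5, 3, 5, 2, 1)
x2 <- c(4.51, 6, 2, 11, 0.4, 3)
# integral of (z + b1 * x1[i]) dz over [0, 1], one value per observation
intf <- function(b1, x1)
  sapply(x1, function(xi)
    integrate(function(z) z + b1 * xi, lower = 0, upper = 1)$value)
fit <- nls(y ~ intf(b1, x1) + b2 * x2, start = list(b1 = 0.5, b2 = 2))
summary(fit)

With these made-up data the model happens to be linear in the parameters, so nls() converges almost immediately; the same pattern carries over to integrands that are genuinely nonlinear in b1.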
[R] clusterCall with replicate function
I am trying to run a Monte Carlo process using snow with an MPI cluster. I have ~thirty processors to run the algorithm on and I want to run it 5000 times and take the average of the output. A very simple way to do this is to divide 5000 by the number of processors to get a number n and tell each processor to run the algorithm n times. I realize there are more efficient ways to manage the parallelization. To implement this I used the clusterCall command with the replicate function along the lines of clusterCall(cl, replicate, n, function(args)). Because my function is a Monte Carlo process it relies on drawing from random distributions to generate output. When I do this, all of my processors generate the same random numbers. I copied the following from the command space for a simple example: cl<-makeCluster(cl, replicate,1,runif(2)) clusterCall(cl, replicate, 2, runif(2)) [[1]] 0.6533959 0.6533959 0.1071051 0.1071051 [[2]] 0.6533959 0.6533959 0.1071051 0.1071051 This is not alleviated by using clusterApply to set a random seed for each processor and seems to be related to the use of the replicate function within clusterCall. I have rearranged the function so that replicate is used to call the clusterCall function (i.e. replicate(2, clusterCall(cl, runif,2),simplify=F) ) and resolved the random number issue. However, this also involves much more communication between master and slaves and results in slower computation time. Will rsprng fix this problem? Is there a better way to do this without using replicate? I hope this is somewhat clear. Thanks, Mike __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
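For the archive, a sketch of the usual workaround (the cluster size, type and Monte Carlo step below are stand-ins, not the poster's setup): arguments to clusterCall() are evaluated on the master, so replicate(n, runif(2)) ships one already-drawn constant to every worker. Wrapping the whole simulation in a function defers the drawing to the workers, and snow's clusterSetupRNG() (which needs the rsprng or rlecuyer package) gives each worker an independent stream.

library(snow)
cl <- makeCluster(4, type = "SOCK")      # illustrative; the poster uses MPI
clusterSetupRNG(cl)                      # independent random-number streams per node
mc.rep <- function(n) replicate(n, mean(runif(1000)))  # hypothetical MC experiment
res <- clusterCall(cl, mc.rep, 5000 %/% length(cl))    # each worker does its share
ans <- mean(unlist(res))
stopCluster(cl)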
Re: [R] R on a flash drive
John Kane yahoo.ca> writes: > > > > I usually have R, Tinn-R and portable versions of > > OpenOoffice.org, and Firefox installed on the USB. Tinn-R works well as a portable editor on a USB flash drive. Likewise, by following the instructions here: http://at-aka.blogspot.com/2006/06/portable- emacs-22050-on-usb.html you can run emacs for windows from your USB flash drive. Michael __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] makeSOCKcluster
Hi, I am attempting to implement a mixed (windows/linux) snow sockets parallelism in R, but am running into difficulties similar to a post made Aug 31, 2006 under the same subject heading. I feel like I may be one or two non-obvious steps away from getting it all working, but I'm stuck. If anyone can shed some light on this (I believe Prof. Tierney stated that he has successfully run a snow master on Windows with slave nodes on Linux using ssh.exe through Cygwin, which is exactly what I am attempting), I'd be most grateful. SNOW master on Windows information: WinXP 32bit R-2.5.1 I built a windows package (.zip) version of snow-0.2-3 for R using Rtools and installed it without trouble. Cygwin including ssh as an exe and in the path Linux slave nodes information: ROCKS compute nodes, each 64bit (I mention this difference between the win and linux platforms out of desperation but I don't think it's an issue since snow should be agnostic). (btw, the SOCK version of snow here is just for me to get used to parallelism in R before attempting MPI). R-2.5.1, local directory install (no su, global R install is older). Snow installed to R-2.5.1 instance Unattended ssh authentication through public/private key pairs. Each node allows since they are all NFS. Rgui commands: >library(snow) >cl<-makeCluster(c("ip.path.to.node1","ip.path.to.node2"),type="SOCK") Result: R becomes unresponsive and has to be forcibly closed. Sometimes before that happens message stating env: C:\pathToR\library\snow\RSOCKnode.sh no file found appears (if I make R redraw the interface screen) Troubleshooting step1: Interspersed print commands within C:\localRinstall\library\snow\R\snow file. Result of troubleshooting step1: It appears that this snow file hangs on line: system(paste(rshcmd, "-l", user, machine, "env", env, script)) Resolution of troubleshooting step1: Manually attempted paste commands in R: >system("ssh -l me ip.path.to.node1 ls") Result: gets ls of remote machine; so the script path must be incorrect and the offending command Located RSOCKnode.sh on remote machine (for all remote machines, since it's NFS) Inserted line into C:\localRinstall\library\snow\R\snow file just before the offending line system(paste(rshcmd, "-l", user, machine, "env", env, script)). That inserted line hard-codes the path of the script and the script name like so: script<-"/rootpath/home/cluster/me/R-2.5.1/library/snow/RSOCKnode.sh". This worked, and the next print statement following the system call in the snow file now prints to the screen. But again R hangs, this time at the line: con <- socketConnection(port = port, server=TRUE, blocking=TRUE, open="a+b"). Troubleshooting step2: Attempted manual socketConnection via R commands: > con<-socketConnection("ip.path.to.node1",port=22) > showConnections() description classmode text isopen can read can write 3 "->ip.path.to.node1" "socket" "a+" "text" "opened" "yes""yes" *** so the socketConnection does work, but in SNOW it hangs at this last command, and I'm completely stuck. As a parallel test, I have installed SNOW on the R-2.5.1 instance on the linux cluster. If I access the head node and launch R, I can use SNOW with sockets successfully (from the linux head node as master to the linux compute nodes as slaves). However, running R more or less continually on the head node of a cluster is bad form. Ideally I would like to run a windows snow master to the linux node slaves, but I can't seem to get past this point. For reference, I include the original post below. 
I am stuck at one command in the snow file past where this last post had problems. I hope this is clear. I am intent on solving this problem, so feel free to ask questions if you have feedback or my description is not clear. I really appreciate any help! Yours, Michael Janis UCLA Bioinformatics *transcript from original post entitled "makeSOCKcluster" follows* makeSOCKcluster, by Hrishikesh Rajpathak, Aug 31, 2006; 10:39pm Hi, I am a newbie to R and trying to implement parallelism in R. I am currently using R-2.3.1, and Cygwin to run R on Windows XP. ssh and all are working fine. When I try to create a socket connection as makeSOCKcluster(c("localhost","localhost")), it just waits for the other process on localhost to get created and respond. But this other process is not created. To debug, I put
Re: [R] Artifacts in pdf() of image() (w/o comments)
Hi Duncan, I was trying to learn to remove the artifacts by setting the parameters of the image so that anti-aliasing wouldn't produce them (analogous to making sure that the screens of two halftone screen process images are in register before combining them so as to avoid Moiré patterns). So, I wasn't complaining; I was looking for instruction. Since the image was computed with R, it's reasonable for me to ask R experts to help me out of what I thought was caused by my ineptitude. (Regarding your question, "If it wasn't a bug, why did it bother you so much?"---there are many things that bother me that are not bugs.) Moreover, I'm not opposed to complaining to Apple, once I have been assured that I'm not reporting a bug where there's none. On Aug 13, 2007, at 9:56 AM, Duncan Murdoch wrote: > On 8/13/2007 11:43 AM, Michael Kubovy wrote: >> But is it a bug? Can a program anti-alias text and line drawings >> and not bitmaps? > > Anti-aliasing is the removal of artifacts caused by displaying an > image on a low-resolution bitmapped display. Introducing artifacts > is a bug. > > If it wasn't a bug, why did it bother you so much? And why do you > think it's reasonable to complain about it on R-help, but not to > complain about it to Apple, who are clearly responsible for it? > > Duncan Murdoch > >> On Aug 13, 2007, at 9:30 AM, Duncan Murdoch wrote: >>> On 8/13/2007 11:07 AM, Michael Kubovy wrote: >>>> Dear Friends, >>>> Thanks for your input. >>>> FYI: Preview doesn't show PDF aliasing in the image I produced >>>> if I uncheck the "Anti-alias text and line art" box under the >>>> PDF tab in Preferences. So I'm not yet ready to drop Preview >>>> from my toolbox. >>> >>> An alternative to dropping Preview is to report the bug in it to >>> Apple. Apple has an online bug reporting web page somewhere; I >>> haven't found them as helpful as R-help, but your mileage may vary. >>> >>> Duncan Murdoch >> _ >> Professor Michael Kubovy >> University of Virginia >> Department of Psychology >> USPS: P.O.Box 400400Charlottesville, VA 22904-4400 >> Parcels:Room 102Gilmer Hall >> McCormick RoadCharlottesville, VA 22903 >> Office:B011+1-434-982-4729 >> Lab:B019+1-434-982-4751 >> Fax:+1-434-982-4766 >> WWW:http://www.people.virginia.edu/~mk9y/ _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Artifacts in pdf() of image() (w/o comments)
But is it a bug? Can a program anti-alias text and line drawings and not bitmaps? On Aug 13, 2007, at 9:30 AM, Duncan Murdoch wrote: > On 8/13/2007 11:07 AM, Michael Kubovy wrote: >> Dear Friends, >> Thanks for your input. >> FYI: Preview doesn't show PDF aliasing in the image I produced if >> I uncheck the "Anti-alias text and line art" box under the PDF >> tab in Preferences. So I'm not yet ready to drop Preview from my >> toolbox. > > An alternative to dropping Preview is to report the bug in it to > Apple. Apple has an online bug reporting web page somewhere; I > haven't found them as helpful as R-help, but your mileage may vary. > > Duncan Murdoch _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Artifacts in pdf() of image() (w/o comments)
Dear Friends, Thanks for your input. FYI: Preview doesn't show PDF aliasing in the image I produced if I uncheck the "Anti-alias text and line art" box under the PDF tab in Preferences. So I'm not yet ready to drop Preview from my toolbox. MK On Aug 13, 2007, at 12:02 AM, Mark Wardle wrote: > It may be worth outputting postscript and converting to PDF from > there. Although Preview can do this, it may be worth looking at > Ghostscript which *may* not have simila problems. > > I have also had PDFs which have displayed well in Preview and open > source tools and have been garbled in Adobe Acrobat, so the problems > aren't limited to Preview. > > Best wishes, > > Mark > > On 13/08/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote: > >> Michael Kubovy wrote: >>> On Aug 12, 2007, at 6:24 AM, Duncan Murdoch wrote: >>> >>>> Michael Kubovy wrote: >>>> >>>>> Dear r-helpers, >>>>> >>>>> In my previous message there were comments in the code that may >>>>> have made cutting and pasting awkward. Here it is w/o them. >>>>> >>>>> I have two questions: >>>>> >>>>> (1) The following produces a pdf with artifacts. How do I prevent >>>>> them? >>>>> >>>>> >>>> What artifacts do you see? It looks like a smoothly varying field >>>> when produced by R 2.5.1 and viewed in Acrobat Reader 6.0 on >>>> Windows. >>>> >>>> Duncan Murdoch >>>> >>>>> require(grDevices) >>>>> imSize <- 200 >>>>> lambda <- 10 >>>>> theta <- 15 >>>>> sigma <- 40 >>>>> x <- 1:imSize >>>>> x0 <- x / imSize -.5 >>>>> freq = imSize/lambda >>>>> xf = x0 * freq * 2 * pi >>>>> f <- function(x, y){r <- -((x^2 + y^2)/(sigma ^2)); exp(r)} >>>>> z <- outer(xf, xf, f) >>>>> f1 <- function(x, y){cos(.1 * x)} >>>>> z1 <- outer(xf, xf, f1) >>>>> pdf('gabor.pdf') >>>>> image(xf, xf, z * z1, col = gray(250:1000/1000), >>>>> xlab = '', ylab = '', bty = 'n', axes = FALSE, asp = 1) >>>>> dev.off() >>>>> >>> >>> I'm working on a Mac. You're right, Acrobat 6.05 renders the figure >>> nicely, but when it's included in a LaTeX-produced pdf or viewed >>> with >>> the Mac Preview program, a grid of fine white lines is superimposed >>> on the figure. So I believe that it's a matter of aliasing, which I >>> might be able to prevent by adjusting the parameters of the figures. >>> I just don't know enough to figure this out, and would appreciate >>> guidance. >> I see the artifacts in Preview on a Mac too. So it looks to me >> like a >> Mac bug. >> >> Preview is actually pretty poor at graphics display; see >> <http://www.geuz.org/pipermail/gl2ps/2007/000223.html>. >> >> My only suggestion is not to use Preview. >> >> Duncan Murdoch > -- > Dr. Mark Wardle > Clinical research fellow and specialist registrar, Neurology > Cardiff, UK _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Artifacts in pdf() of image() (w/o comments)
On Aug 12, 2007, at 6:24 AM, Duncan Murdoch wrote: > Michael Kubovy wrote: >> Dear r-helpers, >> >> In my previous message there were comments in the code that may >> have made cutting and pasting awkward. Here it is w/o them. >> >> I have two questions: >> >> (1) The following produces a pdf with artifacts. How do I prevent >> them? >> > > What artifacts do you see? It looks like a smoothly varying field > when produced by R 2.5.1 and viewed in Acrobat Reader 6.0 on Windows. > > Duncan Murdoch >> require(grDevices) >> imSize <- 200 >> lambda <- 10 >> theta <- 15 >> sigma <- 40 >> x <- 1:imSize >> x0 <- x / imSize -.5 >> freq = imSize/lambda >> xf = x0 * freq * 2 * pi >> f <- function(x, y){r <- -((x^2 + y^2)/(sigma ^2)); exp(r)} >> z <- outer(xf, xf, f) >> f1 <- function(x, y){cos(.1 * x)} >> z1 <- outer(xf, xf, f1) >> pdf('gabor.pdf') >> image(xf, xf, z * z1, col = gray(250:1000/1000), >> xlab = '', ylab = '', bty = 'n', axes = FALSE, asp = 1) >> dev.off() I'm working on a Mac. You're right, Acrobat 6.05 renders the figure nicely, but when it's included in a LaTeX-produced pdf or viewed with the Mac Preview program, a grid of fine white lines is superimposed on the figure. So I believe that it's a matter of aliasing, which I might be able to prevent by adjusting the parameters of the figures. I just don't know enough to figure this out, and would appreciate guidance. _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Artifacts in pdf() of image() (w/o comments)
Dear r-helpers, In my previous message there were comments in the code that may have made cutting and pasting awkward. Here it is w/o them. I have two questions: (1) The following produces a pdf with artifacts. How do I prevent them? require(grDevices) imSize <- 200 lambda <- 10 theta <- 15 sigma <- 40 x <- 1:imSize x0 <- x / imSize -.5 freq = imSize/lambda xf = x0 * freq * 2 * pi f <- function(x, y){r <- -((x^2 + y^2)/(sigma ^2)); exp(r)} z <- outer(xf, xf, f) f1 <- function(x, y){cos(.1 * x)} z1 <- outer(xf, xf, f1) pdf('gabor.pdf') image(xf, xf, z * z1, col = gray(250:1000/1000), xlab = '', ylab = '', bty = 'n', axes = FALSE, asp = 1) dev.off() (2) I would like the output to be clipped to a circle, i.e., anything outside the circle tangent to the sides of the square should be transparent. How can I do that? _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting- guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
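On question (2), one simple route (a sketch continuing from the code above; 'gabor-circle.pdf' is just an illustrative file name): set every cell outside the inscribed circle to NA before calling image(). NA cells are left unpainted, so they show the device background; use bg = 'transparent' in pdf() if real transparency is wanted.

r <- max(abs(xf))                                   # radius of the inscribed circle
outside <- outer(xf, xf, function(x, y) x^2 + y^2 > r^2)
zc <- z * z1
zc[outside] <- NA                                   # these cells are not drawn
pdf('gabor-circle.pdf', bg = 'transparent')
image(xf, xf, zc, col = gray(250:1000/1000),
      xlab = '', ylab = '', bty = 'n', axes = FALSE, asp = 1)
dev.off()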
[R] Artifacts in pdf() of image()
Dear r-helpers, I have two questions: (1) The following produces a pdf with artifacts. How do I prevent them? require(grDevices) imSize <- 200 lambda <- 10 theta <- 15 sigma <- 40 x <- 1:imSize x0 <- x / imSize -.5 freq = imSize/lambda# compute frequency from wavelength xf = x0 * freq * 2 * pi # convert X to radians: 0 -> ( 2*pi * frequency) f <- function(x, y){r <- -((x^2 + y^2)/(sigma ^2)); exp(r)} z <- outer(xf, xf, f) f1 <- function(x, y){cos(.1 * x)} z1 <- outer(xf, xf, f1) pdf('gabor.pdf') image(xf, xf, z * z1, col = gray(250:1000/1000), xlab = '', ylab = '', bty = 'n', axes = FALSE, asp = 1) dev.off() (2) I would like the output to be clipped to a circle, i.e., anything outside the circle tangent to the sides of the square should be transparent. How can I do that? _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] GLMM: MEEM error due to dichotomous variables
At 14:31 07/08/2007, Elva Robinson wrote: >I am trying to run a GLMM on some binomial data. My fixed factors >include 2 dichotomous variables, day, and distance. When I run the model: > >modelA<-glmmPQL(Leaving~Trial*Day*Dist,random=~1|Indiv,family="binomial") > >I get the error: > >iteration 1 >Error in MEEM(object, conLin, control$niterEM) : >Singularity in backsolve at level 0, block 1 > > From looking at previous help topics,( > http://tolstoy.newcastle.edu.au/R/help/02a/4473.html) >I gather this is because of the dichotomous predictor variables - >what approach should I take to avoid this problem? I seem to remember a similar error message (although possibly not from glmmPQL). Does every combination of Trial * Day * Dist occur in your dataset? You would find it easier to read your code if you used your space bar. Computer storage is cheap. >Thanks, Elva. > >_ >Got a favourite clothes shop, bar or restaurant? Share your local knowledge > > Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
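A quick check along those lines (a sketch; 'dat' stands for the poster's data frame): look for empty cells in the three-way cross of the fixed factors, since an empty combination makes the full Trial*Day*Dist interaction unestimable.

with(dat, table(Trial, Day, Dist))       # any zero cells?
xtabs(~ Trial + Day + Dist, data = dat)  # same information, formula interface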
Re: [R] Memory Experimentation: Rule of Thumb = 10-15 Times the Memory
Thanks for all the comments, The artificial dataset is as representative of my 440MB file as I could design. I did my best to reduce the complexity of my problem to minimal reproducible code as suggested in the posting guidelines. Having searched the archives, I was happy to find that the topic had been covered, where Prof Ripley suggested that the I/O manuals gave some advice. However, I was unable to get anywhere with the I/O manuals advice. I spent 6 hours preparing my post to R-help. Sorry not to have read the 'R-Internals' manual. I just wanted to know if I could use scan() more efficiently. My hurdle seems nothing to do with efficiently calling scan() . I suspect the same is true for the originator of this memory experiment thread. It is the overhead of storing short strings, as Charles identified and Brian explained. I appreciate the investigation and clarification you both have made. 56B overhead for a 2 character string seems extreme to me, but I'm not complaining. I really like R, and being free, accept that it-is-what-it-is. In my case pre-processing is not an option, it is not a one off problem with a particular file. In my application, R is run in batch mode as part of a tool chain for arbitrary csv files. Having found cases where memory usage was as high as 20x file size, and allowing for a copy of the the loaded dataset, I'll just need to document that it is possible that files as small as 1/40th of system memory may consume it all. That rules out some important datasets (US Census, UK Office of National Statistics files, etc) for 2GB servers. Regards, Mike On 8/9/07, Prof Brian Ripley <[EMAIL PROTECTED]> wrote: > On Thu, 9 Aug 2007, Charles C. Berry wrote: > > > On Thu, 9 Aug 2007, Michael Cassin wrote: > > > >> I really appreciate the advice and this database solution will be useful to > >> me for other problems, but in this case I need to address the specific > >> problem of scan and read.* using so much memory. > >> > >> Is this expected behaviour? > > Yes, and documented in the 'R Internals' manual. That is basic reading > for people wishing to comment on efficiency issues in R. > > >> Can the memory usage be explained, and can it be > >> made more efficient? For what it's worth, I'd be glad to try to help if > >> the > >> code for scan is considered to be worth reviewing. > > > > Mike, > > > > This does not seem to be an issue with scan() per se. > > > > Notice the difference in size of big2, big3, and bigThree here: > > > >> big2 <- rep(letters,length=1e6) > >> object.size(big2)/1e6 > > [1] 4.000856 > >> big3 <- paste(big2,big2,sep='') > >> object.size(big3)/1e6 > > [1] 36.2 > > On a 32-bit computer every R object has an overhead of 24 or 28 bytes. > Character strings are R objects, but in some functions such as rep (and > scan for up to 10,000 distinct strings) the objects can be shared. More > string objects will be shared in 2.6.0 (but factors are designed to be > efficient at storing character vectors with few values). > > On a 64-bit computer the overhead is usually double. So I would expect > just over 56 bytes/string for distinct short strings (and that is what > big3 gives). > > But 56Mb is really not very much (tiny on a 64-bit computer), and 1 > million items is a lot. > > [...] > > > -- > Brian D. 
Ripley, [EMAIL PROTECTED] > Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ > University of Oxford, Tel: +44 1865 272861 (self) > 1 South Parks Road, +44 1865 272866 (PA) > Oxford OX1 3TG, UKFax: +44 1865 272595 > __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
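A small illustration of the overhead being discussed (made-up data; exact sizes depend on platform, word size and R version, and later versions share identical strings via a cache): a long vector of short strings pays the per-object cost for every element, while a factor stores integer codes plus one copy of each level.

x <- paste(sample(letters, 1e5, replace = TRUE),
           sample(letters, 1e5, replace = TRUE), sep = "")
object.size(x) / 1e6          # roughly tens of bytes per element
object.size(factor(x)) / 1e6  # integer codes plus at most 26 * 26 level strings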
Re: [R] Memory Experimentation: Rule of Thumb = 10-15 Times the Memory
I really appreciate the advice and this database solution will be useful to me for other problems, but in this case I need to address the specific problem of scan and read.* using so much memory. Is this expected behaviour? Can the memory usage be explained, and can it be made more efficient? For what it's worth, I'd be glad to try to help if the code for scan is considered to be worth reviewing. Regards, Mike On 8/9/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote: > > Just one other thing. > > The command in my prior post reads the data into an in-memory database. > If you find that is a problem then you can read it into a disk-based > database by adding the dbname argument to the sqldf call > naming the database. The database need not exist. It will > be created by sqldf and then deleted when its through: > > DF <- sqldf("select * from f", dbname = tempfile(), > file.format = list(header = TRUE, row.names = FALSE)) > > > On 8/9/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote: > > Another thing you could try would be reading it into a data base and > then > > from there into R. > > > > The devel version of sqldf has this capability. That is it will use > RSQLite > > to read the file directly into the database without going through R at > all > > and then read it from there into R so its a completely different > process. > > The RSQLite software has no capability of dealing with quotes (they will > > be regarded as ordinary characters) but a single gsub can remove them > > afterwards. This won't work if there are commas within the quotes but > > in that case you could read each row as a single record and then > > split it yourself in R. > > > > Try this > > > > library(sqldf) > > # next statement grabs the devel version software that does this > > source("http://sqldf.googlecode.com/svn/trunk/R/sqldf.R";) > > > > gc() > > f <- file("big.csv") > > DF <- sqldf("select * from f", file.format = list(header = TRUE, > > row.names = FALSE)) > > gc() > > > > For more info see the man page from the devel version and the home page: > > > > http://sqldf.googlecode.com/svn/trunk/man/sqldf.Rd > > http://code.google.com/p/sqldf/ > > > > > > On 8/9/07, Michael Cassin <[EMAIL PROTECTED]> wrote: > > > Thanks for looking, but my file has quotes. It's also 400MB, and I > don't > > > mind waiting, but don't have 6x the memory to read it in. > > > > > > > > > On 8/9/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote: > > > > If we add quote = FALSE to the write.csv statement its twice as fast > > > > reading it in. > > > > > > > > On 8/9/07, Michael Cassin <[EMAIL PROTECTED]> wrote: > > > > > Hi, > > > > > > > > > > I've been having similar experiences and haven't been able to > > > > > substantially improve the efficiency using the guidance in the I/O > > > > > Manual. > > > > > > > > > > Could anyone advise on how to improve the following scan()? It is > not > > > > > based on my real file, please assume that I do need to read in > > > > > characters, and can't do any pre-processing of the file, etc. > > > > > > > > > > ## Create Sample File > > > > > > > > write.csv(matrix(as.character(1:1e6),ncol=10,byrow=TRUE),"big.csv", > row.names=FALSE) > > > > > q() > > > > > > > > > > **New Session** > > > > > #R > > > > > system("ls -l big.csv") > > > > > system("free -m") > > > > > > > > big1<-matrix(scan("big.csv > ",sep=",",what=character(0),skip=1,n=1e6),ncol=10,byrow=TRUE) > > > > > system("free -m") > > > > > > > > > > The file is approximately 9MB, but approximately 50-60MB is used > to > > > > > read it in. 
> > > > > > > > > > object.size(big1) is 56MB, or 56 bytes per string, which seems > > > excessive. > > > > > > > > > > Regards, Mike > > > > > > > > > > Configuration info: > > > > > > sessionInfo() > > > > > R version 2.5.1 (2007-06-27) > > > > > x86_64-redhat-linux-gnu > > > > > locale: > > > > > C > > > > > attached base packa
Re: [R] Memory Experimentation: Rule of Thumb = 10-15 Times the Memory
Thanks for looking, but my file has quotes. It's also 400MB, and I don't mind waiting, but don't have 6x the memory to read it in. On 8/9/07, Gabor Grothendieck <[EMAIL PROTECTED]> wrote: > > If we add quote = FALSE to the write.csv statement its twice as fast > reading it in. > > On 8/9/07, Michael Cassin <[EMAIL PROTECTED]> wrote: > > Hi, > > > > I've been having similar experiences and haven't been able to > > substantially improve the efficiency using the guidance in the I/O > > Manual. > > > > Could anyone advise on how to improve the following scan()? It is not > > based on my real file, please assume that I do need to read in > > characters, and can't do any pre-processing of the file, etc. > > > > ## Create Sample File > > write.csv(matrix(as.character(1:1e6),ncol=10,byrow=TRUE),"big.csv", > row.names=FALSE) > > q() > > > > **New Session** > > #R > > system("ls -l big.csv") > > system("free -m") > > big1<-matrix(scan("big.csv > ",sep=",",what=character(0),skip=1,n=1e6),ncol=10,byrow=TRUE) > > system("free -m") > > > > The file is approximately 9MB, but approximately 50-60MB is used to > > read it in. > > > > object.size(big1) is 56MB, or 56 bytes per string, which seems > excessive. > > > > Regards, Mike > > > > Configuration info: > > > sessionInfo() > > R version 2.5.1 (2007-06-27) > > x86_64-redhat-linux-gnu > > locale: > > C > > attached base packages: > > [1] "stats" "graphics" "grDevices" "utils" > "datasets" "methods" > > [7] "base" > > > > # uname -a > > Linux ***.com 2.6.9-023stab044.4-smp #1 SMP Thu May 24 17:20:37 MSD > > 2007 x86_64 x86_64 x86_64 GNU/Linux > > > > > > > > == Quoted Text > > From: Prof Brian Ripley > > Date: Tue, 26 Jun 2007 17:53:28 +0100 (BST) > > > > > > > > > > The R Data Import/Export Manual points out several ways in which you > > can use read.csv more efficiently. > > > > On Tue, 26 Jun 2007, ivo welch wrote: > > > > > dear R experts: > > > > > > I am of course no R experts, but use it regularly. I thought I would > > > share some experimentation with memory use. I run a linux machine > > > with about 4GB of memory, and R 2.5.0. > > > > > > upon startup, gc() reports > > > > > > used (Mb) gc trigger (Mb) max used (Mb) > > > Ncells 268755 14.4 407500 21.8 35 18.7 > > > Vcells 139137 1.1 786432 6.0 444750 3.4 > > > > > > This is my baseline. linux 'top' reports 48MB as baseline. This > > > includes some of my own routines that are always loaded. Good.. > > > > > > > > > Next, I created a s.csv file with 22 variables and 500,000 > > > observations, taking up an uncompressed disk space of 115MB. The > > > resulting object.size() after a read.csv() is 84,002,712 bytes (80MB). > > > > > >> s= read.csv("s.csv"); > > >> object.size(s); > > > > > > [1] 84002712 > > > > > > > > > here is where things get more interesting. after the read.csv() is > > > finished, gc() reports > > > > > > used (Mb) gc trigger (Mb) max used (Mb) > > > Ncells 270505 14.58349948 446.0 11268682 601.9 > > > Vcells 10639515 81.2 34345544 262.1 42834692 326.9 > > > > > > I was a big surprised by this---R had 928MB intermittent memory in > > > use. More interestingly, this is also similar to what linux 'top' > > > reports as memory use of the R process (919MB, probably 1024 vs. 1000 > > > B/MB), even after the read.csv() is finished and gc() has been run. > > > Nothing seems to have been released back to the OS. 
> > > > > > Now, > > > > > >> rm(s) > > >> gc() > > > used (Mb) gc trigger (Mb) max used (Mb) > > > Ncells 270541 14.56679958 356.8 11268755 601.9 > > > Vcells 139481 1.1 27476536 209.7 42807620 326.6 > > > > > > linux 'top' now reports 650MB of memory use (though R itself uses only > > > 15.6Mb). My guess is that It leaves the trigger memory of 567MB plus > > > the base 48MB. > > > > > > > > > There are two interesting observations for me here:
Re: [R] Memory Experimentation: Rule of Thumb = 10-15 Times the Memory
Hi, I've been having similar experiences and haven't been able to substantially improve the efficiency using the guidance in the I/O Manual. Could anyone advise on how to improve the following scan()? It is not based on my real file, please assume that I do need to read in characters, and can't do any pre-processing of the file, etc. ## Create Sample File write.csv(matrix(as.character(1:1e6),ncol=10,byrow=TRUE),"big.csv",row.names=FALSE) q() **New Session** #R system("ls -l big.csv") system("free -m") big1<-matrix(scan("big.csv",sep=",",what=character(0),skip=1,n=1e6),ncol=10,byrow=TRUE) system("free -m") The file is approximately 9MB, but approximately 50-60MB is used to read it in. object.size(big1) is 56MB, or 56 bytes per string, which seems excessive. Regards, Mike Configuration info: > sessionInfo() R version 2.5.1 (2007-06-27) x86_64-redhat-linux-gnu locale: C attached base packages: [1] "stats" "graphics" "grDevices" "utils" "datasets" "methods" [7] "base" # uname -a Linux ***.com 2.6.9-023stab044.4-smp #1 SMP Thu May 24 17:20:37 MSD 2007 x86_64 x86_64 x86_64 GNU/Linux == Quoted Text From: Prof Brian Ripley Date: Tue, 26 Jun 2007 17:53:28 +0100 (BST) The R Data Import/Export Manual points out several ways in which you can use read.csv more efficiently. On Tue, 26 Jun 2007, ivo welch wrote: > dear R experts: > > I am of course no R experts, but use it regularly. I thought I would > share some experimentation with memory use. I run a linux machine > with about 4GB of memory, and R 2.5.0. > > upon startup, gc() reports > > used (Mb) gc trigger (Mb) max used (Mb) > Ncells 268755 14.4 407500 21.8 35 18.7 > Vcells 139137 1.1 786432 6.0 444750 3.4 > > This is my baseline. linux 'top' reports 48MB as baseline. This > includes some of my own routines that are always loaded. Good.. > > > Next, I created a s.csv file with 22 variables and 500,000 > observations, taking up an uncompressed disk space of 115MB. The > resulting object.size() after a read.csv() is 84,002,712 bytes (80MB). > >> s= read.csv("s.csv"); >> object.size(s); > > [1] 84002712 > > > here is where things get more interesting. after the read.csv() is > finished, gc() reports > > used (Mb) gc trigger (Mb) max used (Mb) > Ncells 270505 14.58349948 446.0 11268682 601.9 > Vcells 10639515 81.2 34345544 262.1 42834692 326.9 > > I was a big surprised by this---R had 928MB intermittent memory in > use. More interestingly, this is also similar to what linux 'top' > reports as memory use of the R process (919MB, probably 1024 vs. 1000 > B/MB), even after the read.csv() is finished and gc() has been run. > Nothing seems to have been released back to the OS. > > Now, > >> rm(s) >> gc() > used (Mb) gc trigger (Mb) max used (Mb) > Ncells 270541 14.56679958 356.8 11268755 601.9 > Vcells 139481 1.1 27476536 209.7 42807620 326.6 > > linux 'top' now reports 650MB of memory use (though R itself uses only > 15.6Mb). My guess is that It leaves the trigger memory of 567MB plus > the base 48MB. > > > There are two interesting observations for me here: first, to read a > .csv file, I need to have at least 10-15 times as much memory as the > file that I want to read---a lot more than the factor of 3-4 that I > had expected. The moral is that IF R can read a .csv file, one need > not worry too much about running into memory constraints lateron. {R > Developers---reducing read.csv's memory requirement a little would be > nice. of course, you have more than enough on your plate, already.} > > Second, memory is not returned fully to the OS. 
This is not > necessarily a bad thing, but good to know. > > Hope this helps... > > Sincerely, > > /iaw > > __ > R-help_at_stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Brian D. Ripley, ripley_at_stats.ox.ac.uk Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UKFax: +44 1865 272595 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
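Following up on the factor point made elsewhere in this thread, a sketch of declaring column types when reading with read.csv (note that the artificial big.csv has all-distinct values, so the saving only shows up on files whose columns repeat values):

# "factor" stores each distinct string once plus one integer code per row
big.df <- read.csv("big.csv", colClasses = rep("factor", 10))
object.size(big.df) / 1e6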
[R] Problems with nls-function
Dear all, I have some problems with a least-squares regression using the function nls. I want to estimate h, k and X in the following formula using nls: exp(2*200*(q^2-4*h/k-0.25+(2/k-0.5+4*h^2/k^2)*log(abs((k*q^2+2*h*q-1)/(0.25*k-h-1)/((-k*q^2-2*h*q+1)*X) with y as defined by c(0.009747, 0.001949, 0.00, 0.003899, 0.00, 0.00, 0.005848, 0.001949) and q as defined by c(-0.7500, -0.6875, -0.5625, -0.4875, -0.4625, -0.4375, -0.4125, -0.3875) (the length of the real q and y is 46; too long to post them here). I thought the correct use of nls would be: Mic<-nls(y~"function", start = list(k=1.0,h=0.1,X=exp(10)) But it doesn't work. I tried an easier formula like: Mic<-nls(y~h*exp(2*k*200*(q^2)), start=list(h=0.1,k=1,X=10)) The result was the same. Isn't nls the function I should use to solve this regression problem? What did I do wrong? Thank you very much in advance, Michael __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
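A runnable sketch of nls() usage on made-up data of the same shape (the data and starting values below are invented; the model formula is passed as an R expression, not a quoted string, and every parameter in start must actually appear in the formula):

set.seed(1)
q <- seq(-0.75, -0.35, length.out = 46)
y <- 0.05 * exp(2 * 0.02 * 200 * q^2) + rnorm(46, sd = 0.001)  # simulated, true h = 0.05, k = 0.02
Mic <- nls(y ~ h * exp(2 * k * 200 * q^2), start = list(h = 0.1, k = 0.01))
summary(Mic)

In the original calls the formula was given as the string "function", and start contained X even though X does not occur in the simplified formula; either of those will make nls() fail.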
Re: [R] installing RGtk2
On 8/2/07, steve <[EMAIL PROTECTED]> wrote: > > I am using ubuntu. When I tried install.packages("RGtk2") it downloaded > and seemed to compile successfully: > > ** building package indices ... > * DONE (RGtk2) > > The downloaded packages are in > /tmp/Rtmp57id87/downloaded_packages > > However, when I tried library(RGtk2) I found > > Error in dyn.load(x, as.logical(local), as.logical(now)) : > unable to load shared library > '/usr/local/lib/R/site-library/RGtk2/libs/RGtk2.so': >/usr/local/lib/R/site-library/RGtk2/libs/RGtk2.so: undefined symbol: > cairo_path_data_type_get_type > Error in fun(...) : Failed to load RGtk2 dynamic library: Error in > dyn.load(x, as.logical(local), as.logical(now)) : > unable to load shared library > '/usr/local/lib/R/site-library/RGtk2/libs/RGtk2.so': >/usr/local/lib/R/site-library/RGtk2/libs/RGtk2.so: undefined symbol: > cairo_path_data_type_get_type > > Error : .onLoad failed in 'loadNamespace' for 'RGtk2' > Error: package/namespace load failed for 'RGtk2' > > Any suggestions? It looks like you found a bug in RGtk2 >= 2.10.9 that affects systems with older versions of cairo (pre 1.2.0). This was just a typo. I will upload a fixed version soon. Sorry, Michael Steve > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Estimate parameters of copulas
Dear list, Does anyone have experience with estimating the parameters of a specific copula in R or S+? For example, one can calculate the conditional d.f. F(x2|x1) by using the partial derivative of the Clayton copula, C(u1|u2) = ((u2^(-alpha) + u1^(-alpha) - 1) / u2^(-alpha))^(-1/alpha - 1). Here, how do I estimate the value of alpha based on the observations of (x1, x2)? Is the fitCopula function the right tool? I could not find a reference on this function. Also, does anyone have an example of fitCopula? I appreciate your kind help. With regards, Michael [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
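A minimal sketch assuming the copula package; the data here are simulated, and with real observations (x1, x2) the pseudo-observations pobs(cbind(x1, x2)) would take the place of u:

library(copula)
set.seed(1)
u <- rCopula(500, claytonCopula(param = 2, dim = 2))   # stand-in for pobs(cbind(x1, x2))
fit <- fitCopula(claytonCopula(1, dim = 2), data = u, method = "mpl")
fit
coef(fit)   # estimate of alpha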
Re: [R] proportional odds model in R
At 08:51 02/08/2007, Ramon Martínez Coscollà wrote: >Hi all!! There is no need to post twice, nor to also post on allstat. Pages 204-205 of MASS for which this software is a support tool provides ample information on how to compare models. >I am using a proportinal odds model to study some ordered categorical >data. I am trying to predict one ordered categorical variable taking >into account only another categorical variable. > >I am using polr from the R MASS library. It seems to work ok, but I'm >still getting familiar and I don't know how to assess goodness of fit. >I have this output, when using response ~ independent variable: > >Residual Deviance: 327.0956 >AIC: 333.0956 > > polr.out$df.residual >[1] 278 > > polr.out$edf >[1] 3 > >When taking out every variable... (i.e., making >formula: response ~ 1), I have: > >Residual Deviance: 368.2387 >AIC: 372.2387 > >How can I test if the model fits well? How can I check that the >independent variable effectively explains the model? Is there any >test? > >Moreover, sendig summary(polr.out) I get this error: > > >Error in optim(start, fmin, gmin, method = "BFGS", hessian = Hess, ...) : >initial value in 'vmmin' is not finite > >Something to do with the optimitation procedure... but, how can I fix >it? Any help would be greatly appreciated. > >Thanks. Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
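A short sketch of the model comparison suggested above, using the housing data shipped with MASS in place of the poster's data:

library(MASS)
fit1 <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing, Hess = TRUE)
fit2 <- polr(Sat ~ Infl,               weights = Freq, data = housing, Hess = TRUE)
anova(fit2, fit1)   # likelihood-ratio test: do the extra terms explain anything?
AIC(fit2, fit1)     # the same comparison on the AIC scale

Hess = TRUE stores the Hessian at fit time; without it, summary() has to refit to obtain one, which may be where errors such as the 'vmmin' message surface.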
Re: [R] generating symmetric matrices
At 16:29 30/07/2007, Gregory Gentlemen wrote: >Douglas Bates <[EMAIL PROTECTED]> wrote: On 7/27/07, Gregory >Gentlemen wrote: > > Greetings, > > > I have a seemingly simple task which I have not been able to > solve today. I want to construct a symmetric matrix of arbtriray > size w/o using loops. The following I thought would do it: > > > p <- 6 > > Rmat <- diag(p) > > dat.cor <- rnorm(p*(p-1)/2) > > Rmat[outer(1:p, 1:p, "<")] <- Rmat[outer(1:p, 1:p, ">")] <- dat.cor > > > However, the problem is that the matrix is filled by column and > so the resulting matrix is not symmetric. > >Could you provide more detail on the properties of the symmetric >matrices that you would like to generate? It seems that you are >trying to generate correlation matrices. Is that the case? Do you >wish the matrices to be a random sample from a specific distribution. >If so, what distribution? > >Yes, my goal is to generate correlation matrices whose entries have >been sampled independently from a normal with a specified mean and variance. Would it sufficient to use one of the results of RSiteSearch("random multivariate normal", restrict = "functions") or have I completely misunderstood what you want? (I appreciate this is not exactly what you say you want.) >Thanks for the help. > >Greg > > >- > > [[alternative HTML version deleted]] Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
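A sketch of one loop-free way to get a symmetric fill (with the caveat that a matrix built this way is not guaranteed to be positive definite, so it is not automatically a valid correlation matrix):

p <- 6
dat.cor <- rnorm(p * (p - 1) / 2)
Rmat <- diag(p)
Rmat[upper.tri(Rmat)] <- dat.cor                    # fill the upper triangle, column-wise
Rmat[lower.tri(Rmat)] <- t(Rmat)[lower.tri(Rmat)]   # mirror it into the lower triangle
isSymmetric(Rmat)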
Re: [R] 2nd R Console
Oh now that *is* something interesting! Thank you Greg, I'll have to give this a try. -Original Message- From: Greg Snow [mailto:[EMAIL PROTECTED] Sent: Monday, July 30, 2007 3:44 PM To: [EMAIL PROTECTED]; r-help@stat.math.ethz.ch Subject: RE: [R] 2nd R Console Have you looked at the nws package? It allows for a common workspace that multiple R sessions can all access. Hope this helps, -Original Message- From: "Michael Janis" <[EMAIL PROTECTED]> To: "r-help@stat.math.ethz.ch" Sent: 7/30/07 7:49 AM Subject: [R] 2nd R Console Hi, I was reading a thread: [R] "2nd R console" and had a similar question regarding having more than one R console open at a time. However, my question differs from that of the thread: Is it possible, or is there a wrapper that will allow one, to open an arbitrary number of R consoles which access the same R session (all objects in that session, etc.). This would be R on linux accessed through a shell - kind of like using GNU screen multi-user such that people could work collaboratively on a given session. The problem with screen is that all commands are interleaved in the same terminal, which is confusing and does not allow access to the command prompt at the same time, rather it would be sequential. I know there will be "why" questions but it is useful in an academic environment. Basically we have a memory machine for large genomic analysis - and we could set that up as an Rserver, but this placing R into a multi-user engine is better suited for our immediate needs. Does anybody have thoughts on this? Thanks for considering, Michael Janis UCLA Bioinformatics __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] 2nd R Console
Hi, I was reading a thread: [R] "2nd R console" and had a similar question regarding having more than one R console open at a time. However, my question differs from that of the thread: Is it possible, or is there a wrapper that will allow one, to open an arbitrary number of R consoles which access the same R session (all objects in that session, etc.). This would be R on linux accessed through a shell - kind of like using GNU screen multi-user such that people could work collaboratively on a given session. The problem with screen is that all commands are interleaved in the same terminal, which is confusing and does not allow access to the command prompt at the same time, rather it would be sequential. I know there will be "why" questions but it is useful in an academic environment. Basically we have a memory machine for large genomic analysis - and we could set that up as an Rserver, but this placing R into a multi-user engine is better suited for our immediate needs. Does anybody have thoughts on this? Thanks for considering, Michael Janis UCLA Bioinformatics __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Problem installing tseries package
Thanks, you solved it. For posterity, here's the extra info: R Session.info(): R version 2.4.1 (2006-12-18) i686-redhat-linux-gnu locale:C attached base packages: [1] "stats" "graphics" "grDevices" "utils" "datasets" "methods" [7] "base" and #uname -a Linux stikir.com 2.6.9-023stab043.1-smp #1 SMP Mon Mar 5 16:35:19 MSK 2007 x86_64 x86_64 x86_64 GNU/Linux So, yes, it was strange that R.i386 was installed. I removed both and reinstalled R.x86_64 (2.5.1) using yum. Now tseries seems to have installed sucessfully, although it still threw many warnings. Thanks again, Mike On 7/26/07, Prof Brian Ripley <[EMAIL PROTECTED]> wrote: > > On Thu, 26 Jul 2007, Michael Cassin wrote: > > > Hi, > > > > I'm running R 2.4.1 on Fedora Core 6 and am unable to install the > tseries > > package. I've resolved a few problems getting to this point, by running > a > > yum update, installing the gcc-gfortran dependency, but now I'm stuck. > > Could someone please point me in the right direction? > > Please read the posting guide and provide the information you were asked > for: only then we may be able to help you. > > You seem to have a system which installed R in /usr/lib/R but has x86_64 > components on it. So what architecture is it that you are trying to run? > > My guess is that you installed a i386 RPM on a x86_64 OS. That will > install and R will run *but* you will not be able to use it to install > packages. If you installed the i386 RPM after the x86_64 one, it will > have overwritten some crucial files including /usr/bin/R. > > It is possible to have i386 and x86_64 R coexisting on x86_64 Linux, but > not by installing RPMs for different architectures. > > > > > > R install.packages output === > > == > > > >> install.packages("tseries") > > > > trying URL ' > > http://www.sourcekeg.co.uk/cran/src/contrib/tseries_0.10-11.tar.gz' > > Content type 'application/x-tar' length 182043 bytes > > opened URL > > == > > downloaded 177Kb > > > > * Installing *source* package 'tseries' ... 
> > ** libs > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c arma.c -o arma.o > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c bdstest.c -o bdstest.o > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c boot.c -o boot.o > > gfortran -fpic -O2 -g -c dsumsl.f -o dsumsl.o > > In file dsumsl.f:450 > > > > IF (IV(1) - 2) 30, 40, 50 > > 1 > > Warning: Obsolete: arithmetic IF statement at (1) > > In file dsumsl.f:3702 > > > > 10 ASSIGN 30 TO NEXT > > 1 > > Warning: Obsolete: ASSIGN statement at (1) > > In file dsumsl.f:3707 > > > > 20GO TO NEXT,(30, 50, 70, 110) > > 1 > > Warning: Obsolete: Assigned GOTO statement at (1) > > In file dsumsl.f:3709 > > > > ASSIGN 50 TO NEXT > > 1 > > Warning: Obsolete: ASSIGN statement at (1) > > In file dsumsl.f:3718 > > > > ASSIGN 70 TO NEXT > > 1 > > Warning: Obsolete: ASSIGN statement at (1) > > In file dsumsl.f:3724 > > > > ASSIGN 110 TO NEXT > > 1 > > Warning: Obsolete: ASSIGN statement at (1) > > In file dsumsl.f:4552 > > > > IF (IV(1) - 2) 999, 30, 70 > > 1 > > Warning: Obsolete: arithmetic IF statement at (1) > > In file dsumsl.f:4714 > > > > IF (IRC) 140, 100, 210 > > 1 > > Warning: Obsolete: arithmetic IF statement at (1) > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c garch.c -o garch.o > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c ppsum.c -o ppsum.o > > gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include > > -fpic -O3 -g -std=gnu99 -c
[R] offset in coxph
The offset argument used in glm and other functions seems to have been removed from the argument list for coxph. I am wondering if there is a reason for this and if there is a possible work-around in order to produce a cox-ph object without fitting coefficients? Thanks, Mike __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
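A sketch using the veteran data from the survival package: an offset() term inside the formula plays the role of glm's offset argument, and dropping all other covariates should give a model with nothing left to estimate (whether a formula containing only an offset is accepted may depend on the survival version):

library(survival)
# fixed, known contribution to the linear predictor; no coefficient is estimated for it
fit <- coxph(Surv(time, status) ~ age + offset(0.02 * karno), data = veteran)
fit
# depending on the survival version, a purely-offset formula may also be accepted:
fit0 <- coxph(Surv(time, status) ~ offset(0.02 * karno), data = veteran)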
[R] Problem installing tseries package
Hi, I'm running R 2.4.1 on Fedora Core 6 and am unable to install the tseries package. I've resolved a few problems getting to this point, by running a yum update, installing the gcc-gfortran dependency, but now I'm stuck. Could someone please point me in the right direction? R install.packages output === == >install.packages("tseries") trying URL ' http://www.sourcekeg.co.uk/cran/src/contrib/tseries_0.10-11.tar.gz' Content type 'application/x-tar' length 182043 bytes opened URL == downloaded 177Kb * Installing *source* package 'tseries' ... ** libs gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c arma.c -o arma.o gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c bdstest.c -o bdstest.o gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c boot.c -o boot.o gfortran -fpic -O2 -g -c dsumsl.f -o dsumsl.o In file dsumsl.f:450 IF (IV(1) - 2) 30, 40, 50 1 Warning: Obsolete: arithmetic IF statement at (1) In file dsumsl.f:3702 10 ASSIGN 30 TO NEXT 1 Warning: Obsolete: ASSIGN statement at (1) In file dsumsl.f:3707 20GO TO NEXT,(30, 50, 70, 110) 1 Warning: Obsolete: Assigned GOTO statement at (1) In file dsumsl.f:3709 ASSIGN 50 TO NEXT 1 Warning: Obsolete: ASSIGN statement at (1) In file dsumsl.f:3718 ASSIGN 70 TO NEXT 1 Warning: Obsolete: ASSIGN statement at (1) In file dsumsl.f:3724 ASSIGN 110 TO NEXT 1 Warning: Obsolete: ASSIGN statement at (1) In file dsumsl.f:4552 IF (IV(1) - 2) 999, 30, 70 1 Warning: Obsolete: arithmetic IF statement at (1) In file dsumsl.f:4714 IF (IRC) 140, 100, 210 1 Warning: Obsolete: arithmetic IF statement at (1) gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c garch.c -o garch.o gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c ppsum.c -o ppsum.o gcc -I/usr/lib/R/include -I/usr/lib/R/include -I/usr/local/include -fpic -O3 -g -std=gnu99 -c tsutils.c -o tsutils.o gcc -shared -Bdirect,--hash-stype=both,-Wl,-O1 -o tseries.so arma.o bdstest.o boot.o dsumsl.o garch.o ppsum.o tsutils.o -L/usr/lib/R/lib -lRblas -lgfortran -lm -lgcc_s -lgfortran -lm -lgcc_s -L/usr/lib/R/lib -lR /usr/bin/ld: skipping incompatible /usr/lib/R/lib/libRblas.so when searching for -lRblas /usr/bin/ld: skipping incompatible /usr/lib/R/lib/libRblas.so when searching for -lRblas /usr/bin/ld: cannot find -lRblas collect2: ld returned 1 exit status make: *** [tseries.so] Error 1 ERROR: compilation failed for package 'tseries' ** Removing '/usr/lib/R/library/tseries' = = I presume the priority is addressing the error: "/usr/bin/ld: cannot find -lRblas" I have the libRblas.so file with R 2.4. Do I need to upgrade to R 2.5 - In which case I'll be asking how to fix the problems I'm having doing that ;) [~]# yum provides libRblas.so R.x86_64 2.5.1-2.fc6extras Matched from: /usr/lib64/R/lib/libRblas.so libRblas.so()(64bit) R.x86_64 2.5.1-2.fc6extras Matched from: /usr/lib64/R/lib/libRblas.so libRblas.so()(64bit) R.i386 2:2.4.1-1.fc6 installed Matched from: /usr/lib/R/lib/libRblas.so libRblas.so R.x86_64 2.4.1-4.fc6installed Matched from: /usr/lib64/R/lib/libRblas.so libRblas.so()(64bit) Regards, Mike [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] confidence intervals for multinomial
Hi All, I want to test an H0 hypothesis about the proportions of observed counts in k classes. I know that I can do this with the chisq.test. However, besides of the overall acceptance or rejection of the H0, I would like to know which of the k classes cause(s) rejection and I would like to know the observation-based confidence envelopes for the proportions for the k classes. My quick-and-dirty approach thus far is to do an initial chisq.test on the original k classes and then to lump data into two classes (=one of the original classes and all other original classes lumped into one new class) and do a binom.test. I interpret the result of the binom.test as indicating whether the current class might be the reason for the rejection of the overall H0. Additionally, it gives me a confidence envelope for this class. This approach seems fairly straightforward, but I just do not feel totally comfortable with it. I would feel so much better if there was something like a multinom.test, but to my knowledge there is none. Do you have any suggestions what I could rather do? For instance, I might follow a Monte Carlo-like approach: I simulate proportions for the k classes based on the proportions of observed counts with rmultinom. After exclusion of the most extreme values I construct my confidence envelope based on the remaining simulated proportions. Based on whether the hypothesized proportions fall into the observation-based confidence envelopes, I accept or reject. Do you think that either of these approaches is better or would you suggest doing something totally different? All comments and suggestions are highly appreciated. Kind regards, Michael PS: I guess my request parallels that of Matthias Schmidt from Apr 5, 2004, that was answered by Brian Ripley ... Michael Drescher Ontario Forest Research Institute Ontario Ministry of Natural Resources 1235 Queen St East Sault Ste Marie, ON, P6A 2E3 Tel: (705) 946-7406 Fax: (705) 946-2030 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
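A bare-bones sketch of the simulation idea described above (the counts and H0 proportions below are made up; the per-class envelopes are unadjusted, so a Bonferroni-type correction of the level would be needed for simultaneous coverage):

obs <- c(18, 30, 22, 10)              # observed counts in k = 4 classes
p0  <- c(0.25, 0.25, 0.25, 0.25)      # H0 proportions
n   <- sum(obs)
sim <- rmultinom(10000, size = n, prob = obs / n) / n      # simulated proportions, k x 10000
env <- t(apply(sim, 1, quantile, probs = c(0.025, 0.975))) # percentile envelope per class
cbind(env, H0 = p0, outside = p0 < env[, 1] | p0 > env[, 2])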
Re: [R] How to open an URL using RGtk2
On 7/18/07, d. sarthi maheshwari <[EMAIL PROTECTED]> wrote: > > Hi > > I am working on R 2.5.0 on window. I am trying to provide a Hyper-link to > the user as a result, I have tried using gtkLinkButton to exercise the > facility, however, i am not able to perform the required task, i.e. when I > clicked on the LinkButton actually nothing happened. > > I have gone through the documentation for the same and found that > GtkLinkButtonUriFunc is a function which is require to do something with > the > opening of the given URL. Further, I didn't find any other information > regarding this. You can find documentation on this by typing help(GtkLinkButtonUriFunc). You might have to scroll down a little since the so-called "user functions" are described in the overview file for a type. This is the signature for that function: GtkLinkButtonUriFunc(button, link, user.data) So you can define such a function like: uri_hook <- function(button, link, data) browseURL(link) and set it with gtkLinkButtonSetUriHook(uri_hook, NULL). A case could be made that this should be set as the default by RGtk2. Michael Following is my code: > > messlab <- gtkLabelNew(str = "Please wait!", show = TRUE) > messwin <- gtkWindowNew(type = NULL, show = TRUE) > messwin$Add(messlab) > gtkWindowResize(messwin, 250, 60) > gtkWindowSetTitle(messwin, "Graph Analysis") > > > > > > fihor <- gtkHPanedNew(show = TRUE) > fn <- gtkLinkButtonNewWithLabel("http://cran.r-project.org/";, "Result > Link!") > messwin$Remove(messlab) > gtkLabelSetText(messlab, "Result link is ::") > gtkPanedAdd1(fihor, messlab) > gtkPanedAdd2(fihor, fn) > gtkPanedSetPosition(fihor, 100) > gtkWindowSetTitle(messwin, "Result Link") > gtkWindowResize(messwin, 380, 60) > messwin$Add(fihor) > > I am confused how to make this link workable on click? > > Your replies/suggestions are important to me. Please suggest solution. > > Thanks in advance. > Divya Sarthi > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] SLLOOOWWW function ...
At 12:32 17/07/2007, Johannes Graumann wrote: >Does anybody have any insight into how to make this faster? I am not an expert on R programming by any means but I notice you are growing your new data frame row by row. I believe it is normally recommended to allocate enough space to start with. >I suspect, that the rounding going on may be an issue, as is the stepping >through data frame rows using integers ... > >If you have the patience to teach a noob, he will highly appreciate it ;0) > >Joh > >digit <- 4 >for (minute in seq(from=25,to=lrange[2])){ > # Extract all data associtaed with the current time (minute) > frame <- subset(mylist,mylist[["Time"]] == minute) > # Sort by Intensity > frame <- frame[order(frame[["Intensity"]],decreasing = TRUE),] > # Establish output frame using the most intense candidate > newframe <- frame[1,] > # Establish overlap-checking vector using the most intense candidate > lowppm <- round(newframe[1,][["Mass"]]-newframe[1, >[["Mass"]]/1E6*ppmrange,digits=digit) > highppm <- round(newframe[1,][["Mass"]]+newframe[1, >[["Mass"]]/1E6*ppmrange,digits=digit) > presence <- seq(from=lowppm,to=highppm,by=10^(-digit)) > # Walk through the entire original frame and check whether peaks are >overlap-free ... do so until max of 2000 entries > for (int in seq(from=2,to=nrow(frame))) { > if(nrow(newframe) < 2000) { > lowppm <- round(frame[int,][["Mass"]]-frame[int, >[["Mass"]]/1E6*ppmrange,digits=digit) > highppm <- round(frame[int,][["Mass"]]+frame[int, >[["Mass"]]/1E6*ppmrange,digits=digit) > windowrange <- seq(from=lowppm,to=highppm,by=10^(-digit)) > if (sum(round(windowrange,digits=digit) %in% >round(presence,digits=digit)) < 1) { > newframe <- rbind(newframe,frame[int,]) > presence <- c(presence,windowrange) > } > } else { > break() > } > } Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
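To illustrate the preallocation point, a small self-contained comparison (the size is made up and the exact timings will differ, but the growing version scales much worse):

n <- 5000
grow <- function() {                       # rbind() copies everything accumulated so far, every iteration
  out <- NULL
  for (i in 1:n) out <- rbind(out, c(Mass = i, Intensity = 2 * i))
  out
}
prealloc <- function() {                   # allocate once, fill in place
  out <- matrix(0, nrow = n, ncol = 2, dimnames = list(NULL, c("Mass", "Intensity")))
  for (i in 1:n) out[i, ] <- c(i, 2 * i)
  out
}
system.time(grow())
system.time(prealloc())

In the quoted function the same idea applies: create newframe once at its maximum of 2000 rows, keep a counter of how many rows have been accepted, and drop the unused rows at the end.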
[R] Forall symbol with plotmath/grid
I am trying to get the forall symbol (upside down "A") as part of the label of a lattice plot. Is there an easy way to do this? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
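One possibility, assuming the graphics device can draw the Adobe symbol font (the standard screen and postscript devices can): plotmath's symbol() switches to that font, in which octal code \042 is the universal quantifier.

library(lattice)
xyplot(1 ~ 1, xlab = expression(symbol("\042") * x))   # label reads 'for all x'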
Re: [R] Partial Proportional Odds model using vglm
At 13:01 16/07/2007, Rizwan Younis wrote: >Hi: >I am trying to fit a PPO model using vglm from VGAM, and get an error while >executing the code. You seem to keep posting the same problem. Since the only person who can tell you what is happening inside VGAM is probably the maintainer a more efficient strategy would be to email him as the instructions ask you to do. However if that fails then try simplifying your problem to see if the error goes away 1) try merging ages 1 and 2, and ages 4 and 5 2) try merging column 3 "2" with 2 "1" or 4 "3" >Here is the data, code, and error: > >Data = rc13, first row is the column names. a = age, and 1,2,3, 4 and 5 are >condition grades. > > a 1 2 3 4 5 > 1 1 0 0 0 0 > 2 84 2 7 10 2 > 3 16 0 6 6 2 > 4 13 0 3 4 0 > 5 0 0 0 1 0 > >Library(VGAM) > >rc13<-read.table("icg_rcPivot_group2.txt",header=F) >names(rc13)<-c("a","1","2","3","4","5") > >ppo<-vglm(cbind(rc13[,2],rc13[,3],rc13[,4],rc13[,5],rc13[,6])~a,family = >cumulative(link = logit, parallel = F , reverse = F),na.action=na.pass, >data=rc13) >summary(ppo) > >I get the following error: > >Error in "[<-"(`*tmp*`, , index, value = c(1.13512932539841, >0.533057528200189, : > number of items to replace is not a multiple of replacement length >In addition: Warning messages: >1: NaNs produced in: log(x) >2: fitted values close to 0 or 1 in: tfun(mu = mu, y = y, w = w, res = >FALSE, eta = eta, extra) >3: 19 elements replaced by 1.819e-12 in: checkwz(wz, M = M, trace = trace, >wzeps = control$wzepsilon) > >I will appreciate any help to fix this problem. >Thanks > >Reez You >Grad Student >University of Waterloo >Waterloo, ON Canada Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
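A sketch of that pooling, continuing from the poster's rc13 data frame (the pooled-column names are hypothetical, and whether this removes the error can only be checked against the real data and VGAM version):

library(VGAM)
rc13$g12 <- rc13[["1"]] + rc13[["2"]]   # pool grades 1 and 2
rc13$g3  <- rc13[["3"]]
rc13$g45 <- rc13[["4"]] + rc13[["5"]]   # pool grades 4 and 5
ppo2 <- vglm(cbind(g12, g3, g45) ~ a,
             family = cumulative(link = logit, parallel = FALSE), data = rc13)
summary(ppo2)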
Re: [R] Difference in linear regression results for Stata and R
At 17:17 12/07/2007, kdestler wrote: >Hi >I recently imported data from r into Stata. I then ran the linear >regression model I've been working on, only to discover that the results are >somewhat (though not dramatically different). the standard errors vary more >between the two programs than do the coefficients themselves. Any >suggestions on what I've done that causes this mismatch? You really need to find a small example which exhibits the problem and then post that. >Thanks, >Kate >-- >View this message in context: >http://www.nabble.com/Difference-in-linear-regression-results-for-Stata-and-R-tf4069072.html#a11563283 >Sent from the R help mailing list archive at Nabble.com. Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] automatically generating and accessing data frames of varying dimensions
Hi All, I want to automatically generate a number of data frames, each with an automatically generated name and an automatically generated number of rows. The number of rows has been calculated before and is different for all data frames (e.g. c(4,5,2)). The number of columns is known a priori and the same for all data frames (e.g. c(3,3,3)). The resulting data frames could look something like this: > auto.data.1 X1 X2 X3 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 > auto.data.2 X1 X2 X3 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 > auto.data.3 X1 X2 X3 1 0 0 0 2 0 0 0 Later, I want to fill the elements of the data frames with values read from somewhere else, automatically looping through the previously generated data frames. I know that I can automatically generate variables with the right number of elements with something like this: > auto.length <- c(12,15,6) > for(i in 1:3) { + nam <- paste("auto.data",i, sep=".") + assign(nam, 1:auto.length[i]) + } > auto.data.1 [1] 1 2 3 4 5 6 7 8 9 10 11 12 > auto.data.2 [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 > auto.data.3 [1] 1 2 3 4 5 6 But how do I turn these variables into data frames or give them any dimensions? Any commands such as 'as.matrix', 'data.frame', or 'dim' do not seem to work. I also seem not to be able to access the variables with something like "auto.data.i" since: > auto.data.i Error: object "auto.data.i" not found Thus, how would I be able to automatically write to the elements of the data frames later in a loop such as ... > for(i in 1:3) { + for(j in 1:nrow(auto.data.i)) { ### this obviously does not work since 'Error in nrow(auto.data.i) : object "auto.data.i" not found' + for(k in 1:ncol(auto.data.i)) { + auto.data.i[j,k] <- 'some value' + }}} Thanks a bunch for all your help. Best, Michael Michael Drescher Ontario Forest Research Institute Ontario Ministry of Natural Resources 1235 Queen St East Sault Ste Marie, ON, P6A 2E3 Tel: (705) 946-7406 Fax: (705) 946-2030 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
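A sketch of a list-based alternative that avoids assign()/get() gymnastics: keep the generated data frames in one named list, which can be indexed by position or by the constructed name.

auto.nrow <- c(4, 5, 2)   # rows wanted in each data frame
auto.data <- lapply(auto.nrow, function(n) as.data.frame(matrix(0, nrow = n, ncol = 3)))
names(auto.data) <- paste("auto.data", seq_along(auto.nrow), sep = ".")

for (i in seq_along(auto.data)) {
  for (j in seq_len(nrow(auto.data[[i]]))) {
    for (k in seq_len(ncol(auto.data[[i]]))) {
      auto.data[[i]][j, k] <- i * 100 + j * 10 + k   # 'some value'
    }
  }
}
auto.data$auto.data.2

If separate top-level objects really are required, get(paste("auto.data", i, sep = ".")) retrieves the object whose name was built for assign(), but modifying it then needs another assign().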
[R] Plot SpatialLinesDataFrame with xlim & ylim
I'm running windows xp, R 2.3.1 with maptools 0.6-6, I guess. When plotting from a large SpatialLinesDataFrame and using xlim & ylim to reduce the area, the plot axes automatically have the same scale size, even if xlim and ylim ranges differ. E.g.: tmp <- readShapeLines(filepath) plot(tmp,xlim=c(-126,-119),ylim=c(50,51)) The y-axis range is actually 47-54, same range as the x-axis. What am I doing wrong? Should I be using a different object for simple coastline & river data? Thanks in advance! Michael ___ Michael Folkes Salmon Stock Assessment Canadian Dept. of Fisheries & Oceans Pacific Biological Station 3190 Hammond Bay Rd. Nanaimo, B.C., Canada V9T-6N7 Ph (250) 756-7264 Fax (250) 756-7053 [EMAIL PROTECTED] [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Lattice: vertical barchart
Sundar Dorai-Raj wrote: > It seems that barchart.table doesn't allow the horizontal = FALSE > argument. With a slight modification to barchart.table this can be > accomplished. Thanks for supplying that. > Also, I don't get a warning with your original code using > R-2.5.1 and lattice 0.16-1. Thanks. I should have specified I am using R-2.4.0. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Lattice: vertical barchart
barchart(Titanic, stack=F) produces a very nice horizontal barchart. Each panel has four groups of two bars. barchart(Titanic, stack=F, horizontal=F) doesn't produce the results I would have expected, as it produces this warning message: Warning message: y should be numeric in: bwplot.formula(x = as.formula(form), data = list(Class = c(1, And it results in each panel having 22 groups of 0-2 bars. How can I produce something just like the original except with the orientation changed? Thanks in advance. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
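One workaround that avoids the table method entirely: convert the table to a data frame and use the formula interface, with the numeric Freq on the left-hand side so the bars are drawn vertically (a sketch; the panel arrangement differs slightly from barchart(Titanic)).

library(lattice)
barchart(Freq ~ Class | Sex + Age, data = as.data.frame(Titanic),
         groups = Survived, stack = FALSE, auto.key = list(columns = 2))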
Re: [R] Help in installing rggobi in ubuntu linux
Looks like rggobi can't find GGobi. Make sure that PKG_CONFIG_PATH contains the path to your ggobi.pc file. For example: export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig I would have assumed, however, that the ggobi package would have installed to the /usr prefix, in which case pkg-config should have no problem finding GGobi. On 7/8/07, Kenneth Cabrera <[EMAIL PROTECTED]> wrote: > > Hi R users. > > I am experimenting with ubuntu 7.04 Feisty. > > I install the ggobi package with apt-get. > > I got almost all the packages, but > when I try to obtain rggobi, I got > this message: > > > - > install.packages("rggobi") > Warning in install.packages("rggobi") : argument 'lib' is missing: using > '/usr/local/lib/R/site-library' > --- Please select a CRAN mirror for use in this session --- > Loading Tcl/Tk interface ... done > trying URL > 'http://cran.at.r-project.org/src/contrib/rggobi_2.1.4-4.tar.gz' > Content type 'application/x-gzip' length 401451 bytes > opened URL > == > downloaded 392Kb > > * Installing *source* package 'rggobi' ... > checking for pkg-config... /usr/bin/pkg-config > checking pkg-config is at least version 0.9.0... yes > checking for GGOBI... configure: creating ./config.status > config.status: creating src/Makevars > ** libs > gcc -std=gnu99 -I/usr/share/R/include -I/usr/share/R/include -g > -DUSE_EXT_PTR=1 -D_R_=1 -fpic -g -O2 -c brush.c -o brush.o > In file included from brush.c:1: > RSGGobi.h:5:22: error: GGobiAPI.h: No such file or directory > In file included from RSGGobi.h:6, > from brush.c:1: > conversion.h:174: error: expected '=', ',', ';', 'asm' or > '__attribute__' before 'asCLogical' > conversion.h:176: error: expected '=', ',', ';', 'asm' or > '__attribute__' before 'asCRaw' > > --- snip --- > > brush.c:124: error: 't' undeclared (first use in this > function) > brush.c:124: error: 's' undeclared (first use in this > function) > brush.c:124: error: called object 'GGOBI()' > is not a function > brush.c: At top level: > brush.c:135: error: expected ')' before 'cid' > make: *** [brush.o] Error 1 > chmod: cannot access > `/usr/local/lib/R/site-library/rggobi/libs/*': No such file or > directory > ERROR: compilation failed for package 'rggobi' > ** Removing '/usr/local/lib/R/site-library/rggobi' > > The downloaded packages are in > /tmp/RtmpVCacJd/downloaded_packages > Warning message: > installation of package 'rggobi' had non-zero exit status in: > install.packages("rggobi") > > --- > > What am I doing wrong? > > Thank you for your help. > -- > Kenneth Roy Cabrera Torres > Cel 315 504 9339 > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] parsing strings
Hi All, I have strings made up of an unknown number of letters, digits, and spaces. Strings always start with one or two letters, and always end with one or two digits. A set of letters (one or two letters) is always followed by a set of digits (one or two digits), possibly with one or more spaces between the sets of letters and digits. A set of letters always belongs to the following set of digits and I want to parse the strings into these groups. As an example, the strings and the desired parsing results could look like this: A10B10, desired parsing result: A10 and B10 A10 B5, desired parsing result: A10 and B5 AB 10 CD 12, desired parsing result: AB10 and CD12 A10CD2EF3, desired parsing result: A10, CD2, and EF3 I assume that it is possible to search a string for letters and digits and then break the string where letters are followed by digits, however I am a bit clueless about how I could use, e.g., the 'charmatch' or 'parse' commands to achieve this. Thanks a lot in advance for your help. Best, Michael Michael Drescher Ontario Forest Research Institute Ontario Ministry of Natural Resources 1235 Queen St East Sault Ste Marie, ON, P6A 2E3 Tel: (705) 946-7406 Fax: (705) 946-2030 [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
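A sketch using only gsub() and strsplit(): drop the spaces, insert a separator at every digit-to-letter boundary, and split on it.

x <- c("A10B10", "A10 B5", "AB 10 CD 12", "A10CD2EF3")
y <- gsub(" ", "", x)                          # remove spaces: "AB10CD12", ...
y <- gsub("([0-9])([A-Za-z])", "\\1,\\2", y)   # mark the end of each letters+digits group
strsplit(y, ",")                               # list of c("A10","B10"), c("A10","B5"), ...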
[R] patch to enhance sound module for 96 kHz/24 bit sample sizes
<-" <- function(s,value){ if (is.null(class(s)) || class(s)!="Sample") stop("Argument 's' must be of class 'Sample'.") - if (mode(value)!="numeric" || (value!=8 && value!=16)) -stop("Number of bits must be 8 or 16.") + if (mode(value)!="numeric" || (value!=8 && value!=16 && value!=24)) +stop("Number of bits must be 8, 16, or 24.") else s$bits <- value return(s) } @@ -375,8 +397,8 @@ "rate<-" <- function(s,value){ if (is.null(class(s)) || class(s)!="Sample") stop("Argument 's' must be of class 'Sample'.") - if (mode(value)!="numeric" || value<1000 || value>48000) -stop("Rate must be an number between 1000 and 48000.") + if (mode(value)!="numeric" || value<1000 || value>96000) +stop("Rate must be an number between 1000 and 96000.") if (rate(s)==value) return(s) ch <- channels(s) sound(s) <- sound(s)[,as.integer(seq(1,sampleLength(s)+.,by=rate(s)/value))] @@ -433,8 +455,8 @@ setBits <- function(s,value){ sampletest <- is.Sample(s) if (!sampletest$test) stop(sampletest$error) - if (mode(value)!="numeric" || (value!=8 && value!=16)) -stop("Number of bits must be 8 or 16.") + if (mode(value)!="numeric" || (value!=8 && value!=16 && value!=24)) +stop("Number of bits must be 8, 16, or 24.") if (is.null(class(s))) s <- loadSample(s,filecheck=FALSE) bits(s) <- value return(s) @@ -443,8 +465,8 @@ setRate <- function(s,value){ sampletest <- is.Sample(s) if (!sampletest$test) stop(sampletest$error) - if (mode(value)!="numeric" || value<1000 || value>48000) -stop("Rate must be a number between 1000 and 48000.") + if (mode(value)!="numeric" || value<1000 || value>96000) +stop("Rate must be a number between 1000 and 96000.") if (is.null(class(s))) s <- loadSample(s,filecheck=FALSE) rate(s) <- value return(s) Only in sound/R: sound.R~ [EMAIL PROTECTED] Desktop]$ I did this for a personal project I'm doing for fun. Let me know whether you need a more formal copyright disclaimer than "I hereby offer this patch to be included in any software licensed under the GNU General Public Lincese (version 2 or later)". Michael Tiemann __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] problem assigning to indexed data frame element
Hi All, Sorry if I ask an obvious thing, I am still new to R ... I created a data frame of given dimensions to which I gave strings as column names. I want to write to elements of the data frame by indexing them with the row number and column name (string). The problem is that I can read elements from the data frame in this way, but I cannot assign to elements in this way. Instead, I get the following error message: Error in Summary.factor(..., na.rm = na.rm) : min not meaningful for factors Please find the code I used farther below. It would be great if someone could help me. Best regards, Michael PS: Coincidentally, I found this same error message mentioned in another context (levelplot) as indicating a bug (original bug report PR# 6005 on Mon, 22 Dec 2003) Michael Drescher Ontario Forest Research Institute Ontario Ministry of Natural Resources 1235 Queen St East Sault Ste Marie, ON, P6A 2E3 Tel: (705) 946-7406 Fax: (705) 946-2030 Code: > sfalls.plot.comp <- matrix(nrow=plot.count, ncol=spec.count, byrow=T) > colnames(sfalls.plot.comp) <- levels(SPECIES) ### SPECIES, SPP_VOL, & PLOT are columns/variables in a previously read data file > sfalls.plot.comp <- data.frame(sfalls.plot.comp) > attach(sfalls.plot.comp) > sfalls.plot.comp[is.na(sfalls.plot.comp)] <- 0 > sfalls.plot.comp Bf Bw Pj Po Sb 1 0 0 0 0 0 2 0 0 0 0 0 > hh <- 1 > current.spec <- SPECIES[hh]; current.vol <- SPP_VOL[hh]; current.plot <- PLOT[hh] > current.spec [1] Bf Levels: Bf Bw Pj Po Sb > current.vol [1] 2 > current.plot [1] 1 > sfalls.plot.comp[current.plot,current.spec] ### thus, reading from the data frame in this way (using the column name/string) works fine [1] 0 > sfalls.plot.comp[current.plot,current.spec] <- current.vol### but assigning in this way does not work Error in Summary.factor(..., na.rm = na.rm) : min not meaningful for factors > sfalls.plot.comp[current.plot,1] <- current.vol ### assigning by using the column number instead of the column name of course does work > sfalls.plot.comp[current.plot,current.spec] [1] 2 > sfalls.plot.comp[current.plot,"Bw"] <- current.vol ### as does assigning when replacing 'current.spec' for its assigned value in quotes, e.g., "Bw" > sfalls.plot.comp[current.plot,"Bw"] [1] 2 > sfalls.plot.comp Bf Bw Pj Po Sb 1 2 2 0 0 0 2 0 0 0 0 0 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
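A self-contained sketch of the workaround: convert the factor to character before using it as a column index, so the column is selected by name and the factor itself (which is what trips up the replacement method here) never reaches the index.

spec.levels <- c("Bf", "Bw", "Pj", "Po", "Sb")
comp <- data.frame(matrix(0, nrow = 2, ncol = length(spec.levels)))
names(comp) <- spec.levels

current.spec <- factor("Bf", levels = spec.levels)
current.plot <- 1
current.vol  <- 2

comp[current.plot, as.character(current.spec)] <- current.vol
comp

Dropping the attach() call also avoids confusion between the attached copy and the data frame actually being modified.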
Re: [R] Lookups in R
the problem I have is that userid's are not just sequential from 1:n_users. if they were, of course I'd have made a big matrix that was n_users x n_fields and that would be that. but, I think what I cando is just use the hash to store the index into the result matrix, nothing more. then the rest of it will be easy. but please tell me more about eliminating loops. In many cases in R I have used lapply and derivatives to avoid loops, but in this case they seem to give me extra overhead simply by the generation of their result lists: > system.time(lapply(1:10^4, mean)) user system elapsed 1.310.001.31 > system.time(for(i in 1:10^4) mean(i)) user system elapsed 0.330.000.32 thanks, mike > I don't think that's a fair comparison--- much of the overhead comes > from the use of data frames and the creation of the indexing vector. I > get > > > n_accts <- 10^3 > > n_trans <- 10^4 > > t <- list() > > t$amt <- runif(n_trans) > > t$acct <- as.character(round(runif(n_trans, 1, n_accts))) > > uhash <- new.env(hash=TRUE, parent=emptyenv(), size=n_accts) > > for (acct in as.character(1:n_accts)) uhash[[acct]] <- list(amt=0, n=0) > > system.time(for (i in seq_along(t$amt)) { > + acct <- t$acct[i] > + x <- uhash[[acct]] > + uhash[[acct]] <- list(amt=x$amt + t$amt[i], n=x$n + 1) > + }, gcFirst = TRUE) >user system elapsed > 0.508 0.008 0.517 > > udf <- matrix(0, nrow = n_accts, ncol = 2) > > rownames(udf) <- as.character(1:n_accts) > > colnames(udf) <- c("amt", "n") > > system.time(for (i in seq_along(t$amt)) { > + idx <- t$acct[i] > + udf[idx, ] <- udf[idx, ] + c(t$amt[i], 1) > + }, gcFirst = TRUE) >user system elapsed > 1.872 0.008 1.883 > > The loop is still going to be the problem for realistic examples. > > -Deepayan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Lookups in R
i wish it were that simple. unfortunately the logic i have to do on each transaction is substantially more complicated, and involves referencing the existing values of the user table through a number of conditions. any other thoughts on how to get better-than-linear performance time? is there a recommended binary searching/sorting (i.e. BTree) module that I could use to maintain my own index? thanks, mike Peter Dalgaard wrote: > mfrumin wrote: >> Hey all; I'm a beginner++ user of R, trying to use it to do some >> processing >> of data sets of over 1M rows, and running into a snafu. imagine that my >> input is a huge table of transactions, each linked to a specif user >> id. as >> I run through the transactions, I need to update a separate table for >> the >> users, but I am finding that the traditional ways of doing a table >> lookup >> are way too slow to support this kind of operation. >> >> i.e: >> >> for(i in 1:100) { >>userid = transactions$userid[i]; >>amt = transactions$amounts[i]; >>users[users$id == userid,'amt'] += amt; >> } >> >> I assume this is a linear lookup through the users table (in which >> there are >> 10's of thousands of rows), when really what I need is O(constant >> time), or >> at worst O(log(# users)). >> >> is there any way to manage a list of ID's (be they numeric, string, >> etc) and >> have them efficiently mapped to some other table index? >> >> I see the CRAN package for SQLite hashes, but that seems to be going >> a bit >> too far. >> > Sometimes you need a bit of lateral thinking. I suspect that you could > do it like this: > > tbl <- with(transactions, tapply(amount, userid, sum)) > users$amt <- users$amt + tbl[users$id] > > one catch is that there could be users with no transactions, in which > case you may need to replace userid by factor(userid, > levels=users$id). None of this is tested, of course. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
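One option that keeps the flexible per-transaction logic but avoids the linear scan of users$id is to build the id-to-row map once with match(), which hashes internally; after that every lookup is a plain positional access. A sketch with invented data shaped like the objects discussed above:

# Invented data in the shape described in the thread
users <- data.frame(id = c(103, 55, 999, 7), amt = 0)
transactions <- data.frame(userid = sample(users$id, 20, replace = TRUE),
                           amounts = runif(20))

# Build the userid -> row index map once (hashed lookup inside match)
row.of <- match(transactions$userid, users$id)

for (i in seq_along(row.of)) {
    j <- row.of[i]
    # arbitrary per-transaction logic can reference users[j, ] directly
    users$amt[j] <- users$amt[j] + transactions$amounts[i]
}

The loop itself is still interpreted R, so where the logic allows it, a vectorised summary (tapply, rowsum) over the same index will remain much faster.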
Re: [R] Lattice: shifting strips to left of axes
[EMAIL PROTECTED] wrote: > myYlabGrob <- > function(..., main.ylab = "") ## ...is lab1, lab2, etc > { > ## you can add arguments to textGrob for more control > ## in the next line > labs <- lapply(list(...), textGrob, rot=90) > main.ylab <- textGrob(main.ylab, rot = 90) > nlabs <- length(labs) > lab.heights <- > lapply(labs, >function(lab) unit(1, "grobheight", > data=list(lab))) > unit1 <- unit(1.2, "grobheight", data = list(main.ylab)) > unit2 <- do.call(max, lab.heights) > lab.layout <- > grid.layout(ncol = 2, nrow = nlabs, > heights = unit(1, "null"), > widths = unit.c(unit1, unit2), > respect = TRUE) > lab.gf <- frameGrob(layout=lab.layout) > for (i in seq_len(nlabs)) > { > lab.gf <- placeGrob(lab.gf, labs[[i]], row = i, col = 2) > } > lab.gf <- placeGrob(lab.gf, main.ylab, col = 1) > lab.gf > } Wow. I don't think I would have been able to come up with that on my own. Thank you! -- Michael Hoffman __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
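For anyone wanting to try the function above, a quick way to see what it produces, before wiring it into a lattice plot (for example via xyplot's legend argument), is to draw the grob on its own page; the labels here are invented:

library(grid)

g <- myYlabGrob("label one", "label two", "label three",
                main.ylab = "overall y label")
grid.newpage()
grid.draw(g)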
Re: [R] Lattice: shifting strips to left of axes
[EMAIL PROTECTED] wrote: > On 7/2/07, Michael Hoffman <[EMAIL PROTECTED]> wrote: >> Consider this plot: >> >> xyplot(mpg ~ disp | cyl, mtcars, strip=F, strip.left=T, layout=c(1, 3), >> scales=list(relation="free"), >> par.settings=list(strip.background=list(col="transparent"))) >> >> I want to have the "cyl" strip labels on the left side of the axis. Is >> this possible? > > No. (It's possible to have a legend there, which could be used to put > row-specific ylab-s, for example, but it will be hard to make it look > like strips) Thanks for the response. Not looking like a real strip is fine. What I want is essentially a secondary ylab for each row, and don't care about niceties such as shingle markings (I should have made the conditional factor(cyl) in the above plot). But it looks like the legend goes to the left of the plot's ylab, and what I really want is for the secondary ylab to be between the primary ylab and the panel. So looks like I would have to eliminate the primary ylab from being drawn automatically and draw it myself in the legend? And I think I would have to manually calculate the panel heights as well, right? I don't see a way for the legend to get this out of the trellis object. > xyplot(mpg ~ disp | cyl, mtcars, strip=F, strip.left=T, layout=c(1, 3), >scales=list(relation="free", y = list(draw = FALSE)), >axis = function(side, ...) { >if (side == "right") >panel.axis(side = "right", outside = TRUE) >else axis.default(side = side, ...) >}, >par.settings= >list(strip.background=list(col="transparent"), > layout.widths = list(axis.key.padding = 5))) This seems a lot easier. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Lattice: shifting strips to left of axes
Consider this plot: xyplot(mpg ~ disp | cyl, mtcars, strip=F, strip.left=T, layout=c(1, 3), scales=list(relation="free"), par.settings=list(strip.background=list(col="transparent"))) I want to have the "cyl" strip labels on the left side of the axis. Is this possible? Failing that, is it possible to remove the left axis and display it on the right instead, despite relation="free"? -- Michael Hoffman __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Meta-Analysis of proportions
At 09:58 28/06/2007, Chung-hong Chan wrote: >OpenBUGS should be something related to Bayesian statistics. > >You may refer to Chapter 12 of Handbook >http://cran.r-project.org/doc/vignettes/HSAUR/Ch_meta_analysis.pdf >It talks about meta-regression. > > > >On 6/28/07, Monica Malta <[EMAIL PROTECTED]> wrote: >>Dear colleagues, >> >>I'm conducting a meta-analysis of studies evaluating adherence of >>HIV-positive drug users into AIDS treatment, therefore I'm looking >>for some advice and syntax suggestion for running the >>meta-regression using proportions, not the usual OR/RR frequently >>used on RCT studies. Monica, you have a number of options. 1 - weight each study equally 2 - weight each individual equally 3 - use the usual inverse variance procedure, possibly transforming the proportions first 4 - something else I have not though of You could do 3 using rmeta which is available from CRAN. Programming 1 or 2 is straightforward. Of course, you do need to decide which corresponds to your scientific question. >>Have already searched already several handbooks, R-manuals, mailing >>lists, professors, but... not clue at all... >> >>Does anyone have already tried this? A colleague of mine recently >>published a similar study on JAMA, but he used OpenBUGS - a >>software I'm not familiar with... >> >>If there is any tip/suggestion for a possible syntax, could someone >>send me? I need to finish this paper before my PhD qualify, but I'm >>completely stuck... >> >>So, any tip will be more than welcome...I will really appreciate it!!! >> >>Thanks in advance and congrats on the amazing mailing-list. >> >> >> >>Bests from Rio de Janeiro, Brazil. >> >>Monica >> >> >> >> >> >>Monica Malta >>Researcher >>Oswaldo Cruz Foundation - FIOCRUZ >>Social Science Department - DCS/ENSP >>Rua Leopoldo Bulhoes, 1480 - room 905 >>Manguinhos >>Rio de Janeiro - RJ 21041-210 >>Brazil >>phone +55.21.2598-2715 >>fax +55.21.2598-2779 >> [[alternative HTML version deleted]] >> >>__ >>R-help@stat.math.ethz.ch mailing list >>https://stat.ethz.ch/mailman/listinfo/r-help >>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html >>and provide commented, minimal, self-contained, reproducible code. > > >-- >"The scientists of today think deeply instead of clearly. One must be >sane to think clearly, but one can think deeply and be quite insane." >Nikola Tesla >http://www.macgrass.com > > Michael Dewey [EMAIL PROTECTED] http://www.aghmed.fsnet.co.uk/home.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
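For option 3, a small sketch of how the inverse-variance route might look with rmeta, pooling logit-transformed proportions (the counts below are invented: adherent subjects x out of n per study):

library(rmeta)

x <- c(30, 45, 12, 60)     # adherent subjects per study (invented)
n <- c(50, 80, 25, 100)    # subjects per study (invented)

p <- x / n
logit.p <- log(p / (1 - p))
se.logit <- sqrt(1 / x + 1 / (n - x))   # large-sample SE of the logit

fit <- meta.summaries(logit.p, se.logit, method = "random",
                      names = paste("study", seq_along(x)))
summary(fit)

plogis(fit$summary)   # pooled estimate back on the proportion scale

This is only one of the options listed above, and the transform (logit, arcsine, or none) is itself a modelling choice.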
[R] error message survreg.fit
Dear All, I am doing a parametric survival analysis with: fit <- survreg(Surv(xyz$start, xyz$stop, xyz$event, type="interval") ~ 1, dist='loglogistic') At this point I do not want to look into covariates, hence the '~1' as model formulation. As event types I have exact, interval, and right censored lifetime data. Everything works fine. For reasons that are not important here now, I wanted to look for the effects of 'randomly' re-allocating event types to the lifetime data. However, when I did this (keeping everything else the same), R returned this error message: Error in survreg.fit(X, Y, weights, offset, init = init, controlvals = control, : NA/NaN/Inf in foreign function call (arg 9) I am puzzled by this error message and can't find anything in the archives etc. Your input would be greatly appreciated. Best regards, Michael Michael Drescher Ontario Forest Research Institute Ontario Ministry of Natural Resources 1235 Queen St East Sault Ste Marie, ON, P6A 2E3 Tel: (705) 946-7406 Fax: (705) 946-2030 [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Lattice: hiding only some strips
[EMAIL PROTECTED] wrote: > On 6/22/07, Michael Hoffman <[EMAIL PROTECTED]> wrote: >> What I want is to draw strips at the very top of the plot and not to >> draw strips that are between panels. > > xyplot(mpg ~ disp | factor(cyl) * HP, mtcars, >par.strip.text = list(lines = 0.5), >strip = function(which.given, which.panel, ...) { >if (which.given == 1) >strip.default(which.given = 1, > which.panel = which.panel[which.given], > ...) >}, >par.settings = >list(layout.heights = > list(strip = rep(c(0, 1), c(5, 1))))) Thanks, this is just what I was looking for. -- Michael __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Lattice: hiding only some strips
Deepayan Sarkar wrote: > On 6/22/07, Michael Hoffman <[EMAIL PROTECTED]> wrote: >> I am using R 2.4.0 and lattice to produce some xyplots conditioned on a >> factor and a shingle. The shingle merely chops up the data along the >> x-axis, so it is easy to identify which part of the shingle a panel is >> in by looking at the x-axis markings. I only want to have a strip at the >> top for the factor. >> >> Is this possible? I looked into calculateGridLayout() and it seems to me >> that there isn't an easy way to do it without rewriting that function >> (and others). > > It's nowhere near that complicated, you just need to write your own > strip function. E.g., > > mtcars$HP <- equal.count(mtcars$hp) > > xyplot(mpg ~ disp | HP + factor(cyl), mtcars, >par.strip.text = list(lines = 0.5), >strip = function(which.given, which.panel, ...) { >if (which.given == 2) >strip.default(which.given = 1, > which.panel = which.panel[which.given], > ...) >}) Thank you for this response. But it looks like I poorly specified the problem. I only want to have a strip at the very top of the plot, not at the top of each panel. You can probably understand why I want this better if we take your example and swap the givens around: xyplot(mpg ~ disp | factor(cyl) * HP, mtcars, par.strip.text = list(lines = 0.5), strip = function(which.given, which.panel, ...) { if (which.given == 1) strip.default(which.given = 1, which.panel = which.panel[which.given], ...) }) So now I have 4, 6, and 8 at the top of every row of panels as a label for cyl. But I don't need that--really I only need 4, 6, and 8 at the very top (or bottom) of the plot, just like with default settings I only get the axis labels at the top and bottom of the plot, not duplicated for every panel. What I want is to draw strips at the very top of the plot and not to draw strips that are between panels. Can this be done? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Lattice: hiding only some strips
I am using R 2.4.0 and lattice to produce some xyplots conditioned on a factor and a shingle. The shingle merely chops up the data along the x-axis, so it is easy to identify which part of the shingle a panel is in by looking at the x-axis markings. I only want to have a strip at the top for the factor. Is this possible? I looked into calculateGridLayout() and it seems to me that there isn't an easy way to do it without rewriting that function (and others). Many thanks -- Michael Hoffman __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Speed up R
Matthew Keller wrote: > So Mike, let me ask you a question. If R runs out of RAM, does it > begin to use virtual RAM, and hence begin to swap from the hard drive? > If so, I could see how a faster hard drive would speed R up when you > don't have enough RAM... Yes. Virtual memory management is done by any modern operating system. The slowdown will be extreme. (Therefore, a minimum of 2Gb is a good idea for serious crunching -- I'd recommend 3 or 4 if possible. Don't forget that any programming language may have two copies of some arrays in memory during certain operations.) But even when R itself is not using VM, any significant I/O load on a Windows CPU (when (S)ATA disks are used) slows down *at least* all other I/O, and it seems to me that it slows down other interrupt servicing (e.g., responding to mouse clicks) as well. Even if the latter is not strictly true, it may be that the mouse click requires paging something in, like the stupid animation that plays when files are copied. (Aside: On an old PC, copying files from the command line was fine, but if I forgot & did it from the Windows Explorer, the stupid animation swapped in from disk and the machine froze for ~30 seconds.) Windows Vista can take advantage of a new gizmo Intel has introduced with a 1 Gb solid-state disk cache. That might reduce such problems. Mike Mike Prager Southeast Fisheries Science Center, NOAA Beaufort, North Carolina USA __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] [R-pkgs] RGtk2 2.10.x series available
The new 2.10.x series of the RGtk2 package has recently become available on CRAN. RGtk2 is a package for creating graphical user interfaces (GUI's) in R and is similar in purpose to the tcltk package. RGtk2 binds to and enables the extension of the GTK+ user interface library, as well as several other libraries that are integrated with GTK+. The gWidgetsRGtk2 package provides an RGtk2 implementation of the elegant toolkit-independent gWidgets API. The cairoDevice package allows embedding of R graphics inside RGtk2 interfaces. RGtk2 2.10.x (currently at 2.10.9-1) brings several major improvements: * Updated bindings to the latest stable versions of all bound libraries, which include: GTK+, GDK, GdkPixbuf, Cairo, Pango and libglade. * The ability to create new GObject classes, including new types of widgets, entirely from within R. * The compilation of RGtk2 now conditions on the versions of the libraries installed on the system. This means that RGtk2 has the same dependencies as the original 2.8.x series, but if newer versions of libraries (ie GTK+ 2.10.x) are available, it will bind to the new API. * Much of the C-level API has been registered to be callable from the C code of other packages (allowing packages binding other GObject-based libraries to borrow from RGtk2). * Many, many bugfixes and minor improvements. RGtk2 offers several advantages over tcltk: * Many more features (too many more to list) * Arguably cleaner API * Integration with gWidgets (via gWidgetsRGtk2); see the 'pmg' package for an example of this in action. * The ability to create new types of widgets from scratch. * Support for building GUI's using the point-and-click interface design tool Glade (via libglade); see the 'rattle' package for example. * Extras: Cairo vector graphics, GdkPixbuf image manipulation, etc. RGtk2 as well as all other packages mentioned here are available on CRAN. Michael Lawrence [[alternative HTML version deleted]] ___ R-packages mailing list [EMAIL PROTECTED] https://stat.ethz.ch/mailman/listinfo/r-packages __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] merging dataframes with diffent rownumbers
At 09:09 18/06/2007, Thomas Hoffmann wrote: >Dear R-Helpers, > >I have following problem: > >I do have two data frames dat1 and dat2 with a commen column BNUM >(long integer). dat1 has a larger number of BNUM than dat2 and >different rows of dat2 have equal BNUM. The numbers of rows in dat1 >and dat2 is not equal. I applied the tapply-function to dat2 with >BNUM as index. I would like to add the columns from dat1 to the results of > >b.sum <- tapply(dat2, BNUM, sum). > >However the BNUM of b.sum are only a subset of the dat1. > >Does anybody knows a elegant way to solve the problem? If I understand you correctly ?merge should help you here >Thanks in advance > >Thomas H. > > Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
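A small sketch of the ?merge route, with invented data shaped like the description above (the column name 'value' in dat2 is made up):

dat1 <- data.frame(BNUM = 1:5, info = letters[1:5])
dat2 <- data.frame(BNUM = c(1, 1, 2, 4), value = c(10, 5, 3, 7))

b.sum <- tapply(dat2$value, dat2$BNUM, sum)
b.sum.df <- data.frame(BNUM = as.numeric(names(b.sum)),
                       value.sum = as.vector(b.sum))

# all.x = TRUE keeps every BNUM of dat1; BNUMs absent from dat2 get NA
merge(dat1, b.sum.df, by = "BNUM", all.x = TRUE)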
Re: [R] Responding to a posting in the digest
Moshe Olshansky yahoo.com> writes: > > Is there a convenient way to respond to a particular > posting which is a part of the digest? > I mean something that will automatically quote the > original message, subject, etc. > > Thank you! > > Moshe Olshansky > m_olshansky yahoo.com > > __ > R-help stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > > A simple solution is to respond via the Gmane (web) list. There is a link to the Gmane R lists from the 'search' page of any CRAN mirror. Hope this helps, Michael Bibo [EMAIL PROTECTED] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Difference between prcomp and cmdscale
Hi Mark I think Brian Ripley answered this most effectively and succinctly. I did actually do quite a bit of googling and searching of the R help before posting, and whilst there is quite a lot on each topic individually, I failed to find articles that compare and contrast PCA and MDS. If you know of any, of course I would be happy to read them. Many thanks Mick -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mark Difford Sent: 14 June 2007 12:49 To: r-help@stat.math.ethz.ch Subject: Re: [R] Difference between prcomp and cmdscale Michael, Why should that confuse you? Have you tried reading some of the literature on these methods? There's plenty about them on the Net (Wiki's often a goodish place to start)---and even in R, if you're prepared to look ;). BestR, Mark. michael watson (IAH-C) wrote: > > I'm looking for someone to explain the difference between these > procedures. The function prcomp() does principal components anaylsis, > and the function cmdscale() does classical multi-dimensional scaling > (also called principal coordinates analysis). > > My confusion stems from the fact that they give very similar results: > > my.d <- matrix(rnorm(50), ncol=5) > rownames(my.d) <- paste("c", 1:10, sep="") > # prcomp > prc <- prcomp(my.d) > # cmdscale > mds <- cmdscale(dist(my.d)) > cor(prc$x[,1], mds[,1]) # produces 1 or -1 > cor(prc$x[,2], mds[,2]) # produces 1 or -1 > > Presumably, under the defaults for these commands in R, they carry out > the same (or very similar) procedures? > > Thanks > Mick > > The information contained in this message may be\ confiden...{{dropped}} __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Responding to a posting in the digest
At 08:26 14/06/2007, Moshe Olshansky wrote: >Is there a convenient way to respond to a particular >posting which is a part of the digest? >I mean something that will automatically quote the >original message, subject, etc. Yes, if you use appropriate mailing software. I use Eudora, receive the emails in MIME format and the responses do get properly threaded as far as I can see from the mailing list archives. >Thank you! > >Moshe Olshansky >[EMAIL PROTECTED] Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Difference between prcomp and cmdscale
I'm looking for someone to explain the difference between these procedures. The function prcomp() does principal components analysis, and the function cmdscale() does classical multi-dimensional scaling (also called principal coordinates analysis). My confusion stems from the fact that they give very similar results: my.d <- matrix(rnorm(50), ncol=5) rownames(my.d) <- paste("c", 1:10, sep="") # prcomp prc <- prcomp(my.d) # cmdscale mds <- cmdscale(dist(my.d)) cor(prc$x[,1], mds[,1]) # produces 1 or -1 cor(prc$x[,2], mds[,2]) # produces 1 or -1 Presumably, under the defaults for these commands in R, they carry out the same (or very similar) procedures? Thanks Mick The information contained in this message may be confidentia...{{dropped}} __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
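The agreement is no coincidence: classical scaling of Euclidean distances recovers the principal component scores up to a sign flip per axis, which can be checked directly:

set.seed(1)
my.d <- matrix(rnorm(50), ncol = 5)
prc <- prcomp(my.d)
mds <- cmdscale(dist(my.d), k = 2)

# Align the arbitrary signs column by column, then compare
signs <- sign(diag(cor(prc$x[, 1:2], mds)))
all.equal(unname(prc$x[, 1:2] %*% diag(signs)), unname(mds))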
Re: [R] installing Rgraphviz under fedora 5
On 6/13/07, marco.R.help marco.R.help <[EMAIL PROTECTED]> wrote: > > Dear list, > > I have a lot of troubles installing Rgraphviz. > I installed graphviz 2.13 from "graphviz-2.13.20061222.0540.tar" > I installed the library Rgraphviz > > > getBioC("Rgraphviz") > Running biocinstall version 2.0.8 with R version 2.5.0 > Your version of R requires version 2.0 of Bioconductor. > trying URL ' > > http://bioconductor.org/packages/2.0/bioc/src/contrib/Rgraphviz_1.14.0.tar.gz > ' > Content type 'application/x-gzip' length 1522949 bytes > > etc etc > > but when I do: library(Rgraphviz) > > > library(Rgraphviz) > Error in dyn.load(x, as.logical(local), as.logical(now)) : > unable to load shared library '/home/jke/mazu/SOFTWARE_INST/R- > 2.5.0 > /library/Rgraphviz/libs/Rgraphviz.so': > libgvc.so.3: cannot open shared object file: No such file or directory > Error : .onLoad failed in 'loadNamespace' for 'Rgraphviz' > Error: package/namespace load failed for 'Rgraphviz' > > The path to Rgraphviz.so is correct ! But it's libgvc.so.3 that's missing. Make sure your LD_LIBRARY_PATH environment variable points to the directory containing it. Can someone help with this? > > this is the session info > > > regards > > Marco > > > > sessioninfo() > Error: could not find function "sessioninfo" > > sessionInfo() > R version 2.5.0 (2007-04-23) > x86_64-unknown-linux-gnu > locale: > > LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=en_US.UTF-8;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C > > attached base packages: > [1] "tools" "stats" "graphics" "grDevices" "datasets" "utils" > [7] "methods" "base" > > other attached packages: > geneplotter latticeannotate Biobase graph >"1.14.0""0.15-4""1.14.1""1.14.0""1.14.2" > > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] export to a dat file that SAS can read
At 09:05 13/06/2007, Rina Miehs wrote: >Hello > >i have a data frame in R that some SAS users need to use in their >programs, and they want it in a dat file, is that possible? >and which functions to use for that? Does library(foreign) ?write.foreign get you any further forward? > >my data frame is like this: > > > out13[1:100,] > faridniveau1 niveau3 >p1p3 antal1 >210007995 0.0184891394 4.211306e-10 5.106471e-02 2.594580e-02 > 3 >910076495 0.0140812953 3.858757e-10 1.065804e-01 3.743271e-02 > 3 >10 10081892 0.0241760590 7.429612e-10 1.628295e-02 3.021538e-04 > 6 >13 10101395 0.0319517576 3.257375e-10 2.365204e-03 6.633232e-02 >19 >16 10104692 0.0114040787 3.661169e-10 1.566721e-01 4.550663e-02 > 4 >17 10113592 0.0167586526 4.229634e-10 6.922003e-02 2.543987e-02 > 2 >18 10113697 0.0259205504 2.888646e-10 1.096366e-02 9.118995e-02 > 6 >22 10121697 -0.0135341273 -5.507914e-10 1.157417e-01 5.501947e-03 >16 >28 10146495 0.0093514076 3.493487e-10 2.041883e-01 5.340801e-02 > 4 >29 10150497 0.0091611504 3.455925e-10 2.089893e-01 5.531904e-02 > 4 >36 10171895 0.0089116669 2.956742e-10 2.153844e-01 8.614259e-02 > 4 >42 10198295 0.0078515166 3.147140e-10 2.437943e-01 7.314111e-02 > 5 > > >Thanks > >Rina > > > > > [[alternative HTML version deleted]] Michael Dewey http://www.aghmed.fsnet.co.uk __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
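A minimal sketch of the write.foreign() route, with out13 being the data frame shown above and the file names chosen arbitrarily:

library(foreign)

# Writes a plain-text data file plus a SAS program that reads it back in
write.foreign(out13, datafile = "out13.dat", codefile = "out13.sas",
              package = "SAS")

The SAS users then run out13.sas, which holds the code SAS needs to read out13.dat.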
Re: [R] open .r files with double-click
Hmmm. Possibly your best bet is to create a batch file, runr.bat or something, and associate .r files with that. The batch file would be something like: "C:/Program Files/R/R-2.5.0/bin/Rgui.exe" --no-save < %1 (I think thats how you reference arguments in dos...) -Original Message- From: [EMAIL PROTECTED] on behalf of [EMAIL PROTECTED] Sent: Fri 08/06/2007 7:52 PM To: r-help@stat.math.ethz.ch Subject: [R] open .r files with double-click Hi Folks, On Windows XP, R 2.5.0. After reading the Installation for Windows and Windows FAQs, I cannot resolve this. I set file types so that Rgui.exe will open .r files. When I try to open a .r file by double-clicking, R begins to launch, but I get an error message saying "Argument 'C:\Documents and Settings\Zoology\My Documents\trial.r' _ignored_" I click OK, and then R GUI opens, but not the script file. Is there a way to change this? thanks, Hank __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] Escobar&Meeker example survreg
Dear all, I am new to R and may make beginner mistakes. Sorry. I am learning using R to do survival analysis. As a start I used the example script code provided in the documentation of predict.survreg of the survival package: # Draw figure 1 from Escobar and Meeker fit <- survreg(Surv(time,status) ~ age + age^2, data=stanford2, dist='lognormal') plot(stanford2$age, stanford2$time, xlab='Age', ylab='Days', xlim=c(0,65), ylim=c(.01, 10^6), log='y') pred <- predict(fit, newdata=list(age=1:65), type='quantile', p=c(.1, .5, .9)) matlines(1:65, pred, lty=c(2,1,2), col=1) When I compare the graphical output with Fig. 1 of Escobar and Meeker (1992), I find that my output produces quantiles that are sloping down linearly with age. The quantiles in Fig. 1 of Escobar and Meeker (1992) however are obviously non-linear. I compared this with the corresponding section in the S-Plus manual and found that the R and S-Plus are virtually identical (as they should) and that the predicted quantiles in S-Plus (Fig. 31.3) are also non-linear. I checked the obvious help files and R-archive and found nothing on this. I must be making a very basic mistake but can't find it. Your feedback would be highly appreciated. Best, Michael Ref: Escobar and Meeker (1992). Assessing influence in regression analysis with censored data. Biometrics, 48, 507-528. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
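One likely culprit (not confirmed in this thread) is the formula: inside a formula, age + age^2 expands to just age, because ^ is a formula operator rather than arithmetic, so the fitted quantiles are necessarily linear in age. Protecting the quadratic term with I() should give curved quantile lines closer to the published figure; a sketch:

library(survival)

# I(age^2) keeps the arithmetic meaning of the square
fit2 <- survreg(Surv(time, status) ~ age + I(age^2), data = stanford2,
                dist = "lognormal")

plot(stanford2$age, stanford2$time, xlab = "Age", ylab = "Days",
     xlim = c(0, 65), ylim = c(.01, 10^6), log = "y")
pred2 <- predict(fit2, newdata = list(age = 1:65), type = "quantile",
                 p = c(.1, .5, .9))
matlines(1:65, pred2, lty = c(2, 1, 2), col = 1)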
Re: [R] Averaging across rows & columns
Check out rowMeans to average over replicate columns first, ie: means <- data.frame(t1=rowMeans(a[,1:3]), t2=rowMeans(a[,4:6]), etc) Then, if you want to aggregate every 14 rows: aggregate(means, by=list(rows=rep(1:(nrow(means)/14), each=14)), mean) Or something... -Original Message- From: [EMAIL PROTECTED] on behalf of Silvia Lomascolo Sent: Thu 07/06/2007 8:26 PM To: r-help@stat.math.ethz.ch Subject: [R] Averaging across rows & columns I use Windows, R version 2.4.1. I have a dataset in which columns 1-3 are replicates, 4-6, are replicates, etc. I need to calculate an average for every set of replicates (columns 1-3, 4-6, 7-9, etc.) AND each set of replicates should be averaged every 14 rows (for more detail, to measure fruit color using a spectrometer, I recorded three readings per fruit -replicates- that I need to average to get one reading per fruit; each row is a point in the light spectrum and I need to calculate an average reading every 5nm -14 rows- for each fruit). Someone proposed to another user who wanted an avg across columns to do a <- matrix(rnorm(360),nr=10) b <- rep(1:12,each=3) avgmat <- aggregate(a,by=list(b)) I tried doing this to get started with the columns first but it asks for an argument FUN that has no default. The help for aggregate isn't helping me much (a new R user) to discover what value to give to FUN -'average' doesn't seem to exist, and 'sum' (whatever it is supposed to sum) gives an error saying that arguments should have the same length- Any help will be much appreciated! Silvia. -- View this message in context: http://www.nabble.com/Averaging-across-rows---columns-tf3885900.html#a11014649 Sent from the R help mailing list archive at Nabble.com. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] reading BMP or TIFF files
On 6/7/07, Bob Meglen <[EMAIL PROTECTED]> wrote: > > I realize that this question has been asked before (2003); > > From: Yi-Xiong Zhou > Date: Sat 22 Nov 2003 - 10:57:35 EST > > but I am hoping that the answer has changed. Namely, I would > rather read the BMP (or TIFF) files directly instead of putting > them though a separate utility for conversion as suggested by, What would you like to do with the images? The GdkPixbuf bindings provided by the RGtk2 package allow you to read both types of images. In conjunction with the cairoDevice package, you could mix the image with R graphics. Another way might be to use some Java library via rJava and use the Java graphics device. Michael From: Prof Brian Ripley > Date: Sat 22 Nov 2003 - 15:23:33 EST > > "Even easier is to convert .bmp to .pnm by an external utility. For > example, `convert' from the ImageMagick suite (www.imagemagick.org) can do > this. " > > > > Thanks, > Robert Meglen > [EMAIL PROTECTED] > [[alternative HTML version deleted]] > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] How to load a big txt file
Erm... Is that a typo? Are we really talking 23800 rows and 49 columns? Because that doesn't seem that many -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of ssls sddd Sent: 07 June 2007 10:48 To: r-help@stat.math.ethz.ch Subject: Re: [R] How to load a big txt file Dear Chung-hong Chan, Thanks! Can you recommend a text editor for splitting? I used UltraEdit and TextPad but did not find they can split files. Sincerely, Alex On 6/6/07, Chung-hong Chan <[EMAIL PROTECTED]> wrote: > > Easy solution will be split your big txt files by text editor. > > e.g. 5000 rows each. > > and then combine the dataframes together into one. > > > > On 6/7/07, ssls sddd <[EMAIL PROTECTED]> wrote: > > Dear list, > > > > I need to read a big txt file (around 130Mb; 23800 rows and 49 columns) > > for downstream clustering analysis. > > > > I first used "Tumor <- read.table("Tumor.txt",header = TRUE,sep = "\t")" > > but it took a long time and failed. However, it had no problem if I just > put > > data of 3 columns. > > > > Is there any way which can load this big file? > > > > Thanks for any suggestions! > > > > Sincerely, > > Alex > > > > [[alternative HTML version deleted]] > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > > -- > "The scientists of today think deeply instead of clearly. One must be > sane to think clearly, but one can think deeply and be quite insane." > Nikola Tesla > http://www.macgrass.com > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
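Independently of splitting, read.table() usually copes much better with files this size when it is told the column types and row count up front; a general-purpose sketch (the mix of one character column and 48 numeric columns is only a guess):

Tumor <- read.table("Tumor.txt", header = TRUE, sep = "\t",
                    colClasses = c("character", rep("numeric", 48)),
                    nrows = 23800, comment.char = "")

Pre-specifying colClasses avoids the type-guessing pass, while nrows allows preallocation and comment.char = "" skips comment scanning.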
[R] Suppressing the large amount of white space in heatmap.2 in gplots
Hi OK, quick question - I can suppress the calculation and drawing of the column dendrogram by using Colv=FALSE and dendrogram="row", but that leaves me with a large amount of white space at the top of the plot where the dendrogram would have been drawn... Is there a way of getting rid of that? Thanks Mick The information contained in this message may be confidentia...{{dropped}} __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
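heatmap.2 builds its display with layout(), and the top row of that layout (color key plus the space where the column dendrogram would sit) is what is left empty; its relative height is controlled by the lhei argument (and, for bigger rearrangements, lmat and lwid). A hedged sketch on made-up data:

library(gplots)

x <- matrix(rnorm(200), nrow = 20)

# Shrink the first (top) layout row; note this also squeezes the color key
heatmap.2(x, Colv = FALSE, dendrogram = "row", trace = "none",
          lhei = c(0.5, 4))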
Re: [R] Mandriva Spring 2007 and R
Roland Rau gmail.com> writes: > > Hi Jonathan, > > Jonathan Morse wrote: > > I am new to Linux (not to R) and recently installed Mandriva Spring 2007 on my partitioned hard drive. My > next objective is to install R in the Linux environment, unfortunately Mandriva is not one of the Linux > distributions available for download... Could someone please let me know which distribution I should > use? > > > One possibility is, of course, that you compile it yourself for your > computer. Compiling R was my first shot at compiling programs when I was > new to Linux, and it was not very difficult. It is described nicely in > the R Installation Administration Manual. > http://cran.r-project.org/doc/manuals/R-admin.html > > Basically, you only need to take care of the following steps to get you > started: > - did you download and unpack the source distribution (see section 1.1 > of the manual)? > - do you have the required tools installed (see section A.1 of the > manual)? (C compiler, Fortran compiler, libreadline, libjpeg, libpng, > tex/latex, Perl5, xorg-x11-dev) > - compilation (see section 2.1 in the manual) > > I hope this helps? > > Best, > Roland > > __ > R-help stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > > Just to add to Roland's comments - remember to install the "-dev" packages for all of the tools he mentions. I learned this from the R installation manual, and it has been valuable in installing other software from source. The output from "./configure" will usually tell you if something is missing. Personally, I no longer use Mandriva, but comments I made re v10.1 may or may not be relevant: http://finzi.psych.upenn.edu/R/Rhelp02a/archive/54320.html. Hope this helps, Michael __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] [R-pkgs] New Package: rateratio.test
Hi, I just uploaded a new package rateratio.test. The package contains one function of the same name and it performs an exact rate ratio test for Poisson counts. Unlike binom.test and fisher.test the p-values and confidence intervals are internally consistent. In other words, if the p-value implies that the null hypothesis should be rejected, then the confidence interval also implies that the null should be rejected. There is a vignette discussing this issue. Mike ** Michael P. Fay, PhD Mathematical Statistician National Institute of Allergy and Infectious Diseases Tel: 301-451-5124 Fax:301-480-0912 (U.S. postal mail address) 6700B Rockledge Drive MSC 7609 Bethesda, MD 20892-7609 (Overnight mail address) 6700-A Rockledge Drive, Room 5133 Bethesda, MD 20817 ** Disclaimer: The information in this e-mail and any of its attachments is confidential and may contain sensitive information. It should not be used by anyone who is not the original intended recipient. If you have received this e-mail in error please inform the sender and delete it from your mailbox or any other storage devices. National Institute of Allergy and Infectious Diseases shall not accept liability for any statements made that are sender's own and not expressly made on behalf of the NIAID by one of its representatives [[alternative HTML version deleted]] ___ R-packages mailing list [EMAIL PROTECTED] https://stat.ethz.ch/mailman/listinfo/r-packages __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
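A hedged usage sketch (the numbers are invented, and the argument layout is assumed from the usual count/person-time convention; check ?rateratio.test for the definitive interface):

library(rateratio.test)

# 15 events in 100 person-years versus 30 events in 120 person-years (invented)
rateratio.test(x = c(15, 30), n = c(100, 120))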
[R] [R-pkgs] R Package: ssanv - Sample size adjustments for nonadherence and variability of input parameters
Hi, This is a late announcement for a package that has been up on CRAN since Feb 2006. The package is called ssanv and it calculates sample sizes for two group tests with adjustments for nonadherence and variability of the input parameters. As an example, suppose you want a sample size for a two group difference in normal means but you get your estimate of your variance from a previous study. Then with this package, you can account for the variability of that variance estimate in your sample size calculations. The details are described in: Fay, MP, Halloran, ME, Follmann, DA. Accounting for Variability in Sample Size Estimation with Applications to Nonadherence and Estimation of Variance and Effect Size. Biometrics 2007, 63: 465-474. http://www.blackwell-synergy.com/doi/abs/10./j.1541-0420.2006.00703.x Please let me know of any problems with the package or have any comments. Thanks, Mike ** Michael P. Fay, PhD Mathematical Statistician National Institute of Allergy and Infectious Diseases Tel: 301-451-5124 Fax:301-480-0912 (U.S. postal mail address) 6700B Rockledge Drive MSC 7609 Bethesda, MD 20892-7609 (Overnight mail address) 6700-A Rockledge Drive, Room 5133 Bethesda, MD 20817 ** Disclaimer: The information in this e-mail and any of its attachments is confidential and may contain sensitive information. It should not be used by anyone who is not the original intended recipient. If you have received this e-mail in error please inform the sender and delete it from your mailbox or any other storage devices. National Institute of Allergy and Infectious Diseases shall not accept liability for any statements made that are sender's own and not expressly made on behalf of the NIAID by one of its representatives [[alternative HTML version deleted]] ___ R-packages mailing list [EMAIL PROTECTED] https://stat.ethz.ch/mailman/listinfo/r-packages __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
[R] [R-pkgs] R package: Mchtest - Monte Carlo hypothesis testing allowing Sequential Stopping
Hi, This is an announcement for a package that has been up on CRAN since March 2006 but was never announced. The package is Mchtest - for Monte Carlo hypothesis tests allowing sequential stopping. The idea is to use the sequential probability ratio test boundaries to stop resampling for a Monte Carlo hypothesis test such as a bootstrap or permutation test. This means that you will take many samples when the p-value is close to the significance level (e.g., 0.05), but you may stop after a much fewer samples if it becomes clear that the p-value is either very likely to be less than or more than 0.05. The details are described in: Fay, MP, Kim, H-J, and Hachey, M. (2007). "On using Truncated Sequential Probability Ratio Test Boundaries for Monte Carlo Implementation of Hypothesis Tests." (to appear in Journal of Graphical and Computational Statistics). http://www3.niaid.nih.gov/about/organization/dcr/BRB/staff/michael Please let me know of any problems or if you have any comments. Mike ****** Michael P. Fay, PhD Mathematical Statistician National Institute of Allergy and Infectious Diseases Tel: 301-451-5124 Fax:301-480-0912 (U.S. postal mail address) 6700B Rockledge Drive MSC 7609 Bethesda, MD 20892-7609 (Overnight mail address) 6700-A Rockledge Drive, Room 5133 Bethesda, MD 20817 ** Disclaimer: The information in this e-mail and any of its attachments is confidential and may contain sensitive information. It should not be used by anyone who is not the original intended recipient. If you have received this e-mail in error please inform the sender and delete it from your mailbox or any other storage devices. National Institute of Allergy and Infectious Diseases shall not accept liability for any statements made that are sender's own and not expressly made on behalf of the NIAID by one of its representatives [[alternative HTML version deleted]] ___ R-packages mailing list [EMAIL PROTECTED] https://stat.ethz.ch/mailman/listinfo/r-packages __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Linear Discriminant Analysis
Region and Name are effectively the same variable cor(olive[,4:11]) will also show you that there are strong correlations between some of the variables - this is something you might want to avoid From: [EMAIL PROTECTED] on behalf of Soare Marcian-Alin Sent: Wed 06/06/2007 4:45 PM To: Uwe Ligges; R-help@stat.math.ethz.ch Subject: Re: [R] Linear Discriminant Analysis Thanks for explaining... Im just sitting at the homework for 6 hours after taking for one week antibiotica, because i had an amygdalitis... I just wanted some tipps for solving this homework, but thanks, I will try to get help on another way :) I think i solved it, but I still get this Error :( ## Loading Data library(MASS) olive <- url(" http://www.statistik.tuwien.ac.at/public/filz/students/multi/ss07/olive.R";) print(load(olive)) dim(olive) summary(olive) index <- sample(nrow(olive), 286) train <- olive[index,-11] test <- olive[-index,-11] summary(train) summary(test) table(train$Region) table(test$Region) # Linear Discriminant Analysis z <- lda(Region ~ . , train) zn <- predict(z, newdata=test)$class mean(zn != test$Region) 2007/6/6, Uwe Ligges <[EMAIL PROTECTED]>: > > > So what about asking your teacher (who seems to be Peter Filzmoser) and > try to find out your homework yourself? > You might want to think about some assumptions that must hold for LDA > and look at the class of your explaining variables ... > > Uwe Ligges > > > > Soare Marcian-Alin wrote: > > Hello, > > > > I want to make a linear discriminant analysis for the dataset olive, and > I > > get always this error:# > > Warning message: > > variables are collinear in: lda.default(x, grouping, ...) > > > > ## Loading Data > > library(MASS) > > olive <- url(" > > > http://www.statistik.tuwien.ac.at/public/filz/students/multi/ss07/olive.R > ") > > print(load(olive)) > > > > y <- 1:572 > > x <- sample(y) > > y1 <- x[1:286] > > > > train <- olive[y1,-11] > > test <- olive[-y1,-11] > > > > summary(train) > > summary(test) > > > > table(train$Region) > > table(test$Region) > > > > # Linear Discriminant Analysis > > z <- lda(Region ~ . , train) > > predict(z, train) > > > > z <- lda(Region ~ . , test) > > predict(z, test) > > > > Thanks in advance! > > > > > > > > > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > -- Mit freundlichen Grüssen / Best Regards Soare Marcian-Alin [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] R help
Yes, but you need to be a bit more specific... When it comes to graphs and drawing lines, there isn't much R can't do... -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of scott flemming Sent: 06 June 2007 04:49 To: r-help@stat.math.ethz.ch Subject: [R] R help Hi, I wonder whether R can finish the following project: I want to make a chart to represent 10 genes. Each gene has orientation and length. Therefore, a gene can be represented by arrows. Can R be used to draw 10 arrows in one line ? scott - [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
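For what it is worth, base graphics can already do this with arrows(); a sketch with made-up gene coordinates, one arrow per gene on a single horizontal line, with the head pointing in the direction of transcription:

set.seed(42)
start <- sort(runif(10, 0, 100))   # made-up gene starts
len <- runif(10, 2, 8)             # made-up gene lengths
strand <- sample(c(1, -1), 10, replace = TRUE)

plot(0, 0, type = "n", xlim = c(0, 110), ylim = c(0, 2),
     xlab = "Position", ylab = "", yaxt = "n")

x0 <- ifelse(strand == 1, start, start + len)
x1 <- ifelse(strand == 1, start + len, start)
arrows(x0, 1, x1, 1, length = 0.08, lwd = 2)

Dedicated genome-plotting packages exist as well, but for ten arrows on one line the base functions are enough.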
Re: [R] Standard errors of the predicted values from a lme (or lmer)-object
On Jun 1, 2007, at 6:08 AM, Dieter Menne wrote: > Fränzi Korner oikostat.ch> writes: > >> how do I obtain standard errors of the predicted values from a lme >> (or >> lmer)-object? > > Not totally clear what you mean. intervals(lmeresult) gives the > confidence > intervals for the coefficients. Otherwise, you can do some > calculations with > residuals(lmeresult). Most useful for diagnostic purposes is plot > (lmeresult). Perhaps ?estimable in the gmodels package _ Professor Michael Kubovy University of Virginia Department of Psychology USPS: P.O.Box 400400Charlottesville, VA 22904-4400 Parcels:Room 102Gilmer Hall McCormick RoadCharlottesville, VA 22903 Office:B011+1-434-982-4729 Lab:B019+1-434-982-4751 Fax:+1-434-982-4766 WWW:http://www.people.virginia.edu/~mk9y/ __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
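For population-level predictions (fixed effects only), the standard errors can also be assembled by hand from the design matrix and the fixed-effect covariance; a sketch using the Orthodont data that ships with nlme (the formula inside model.matrix must match the fixed part of the fitted model):

library(nlme)

fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
newdat <- data.frame(age = c(8, 10, 12, 14))

X <- model.matrix(~ age, data = newdat)      # same RHS as the fixed formula
pred <- X %*% fixef(fit)                     # level-0 predictions
se <- sqrt(diag(X %*% vcov(fit) %*% t(X)))   # SEs of those predictions
data.frame(age = newdat$age, pred = as.vector(pred),
           lower = as.vector(pred) - 2 * se, upper = as.vector(pred) + 2 * se)

These intervals ignore the uncertainty in the random effects, which is one reason estimable() or intervals() may be preferable depending on the question.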
Re: [R] mahalanobis
Yianni You probably would have gotten more helpful replies if you indicated the substantive problem you were trying to solve. From your description, it seems like you want to calculate leverage of predictors, (X1, X2) in the lm( y ~ X1+X2). My crystal ball says you may be an SPSS user, for whom mahalanobis D^2 of the predictors is what you have to beg for to get leverages. In R, you will get the most happiness from ?leverage.plot in the car package. mahalanobis D^2 are proportional to leverage. -Michael [EMAIL PROTECTED] wrote: > Hi, I am not sure I am using correctly the mahalanobis distnace method... > Suppose I have a response variable Y and predictor variables X1 and X2 > > all <- cbind(Y, X1, X2) > mahalanobis(all, colMeans(all), cov(all)); > > However, my results from this are different from the ones I am getting > using another statistical software. > > I was reading that the comparison is with the means of the predictor > variables which led me to think that the above should be transformed > into: > > predictors <- cbind(X1, X2) > mahalanobis(all, colMeans(predictors), cov(all)) > > But still the results are different > > Am I doing something wrong or have I misunderstood something in the > use of the function mahalanobis? Thanks. > -- Michael Friendly Email: friendly AT yorku DOT ca Professor, Psychology Dept. York University Voice: 416 736-5115 x66249 Fax: 416 736-5814 4700 Keele Street http://www.math.yorku.ca/SCS/friendly.html Toronto, ONT M3J 1P3 CANADA __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
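The proportionality is easy to verify: with the sample covariance used in mahalanobis(), the hat values of the regression satisfy h = 1/n + D^2/(n - 1). A small simulated check:

set.seed(1)
n <- 40
X1 <- rnorm(n); X2 <- rnorm(n); Y <- X1 + X2 + rnorm(n)

Xp <- cbind(X1, X2)                            # predictors only
D2 <- mahalanobis(Xp, colMeans(Xp), cov(Xp))
h <- hatvalues(lm(Y ~ X1 + X2))

# TRUE: leverage is an affine function of the squared Mahalanobis distance
all.equal(unname(h), unname(1/n + D2/(n - 1)))

This also shows why the distances should be computed from the predictors alone (means and covariance of X1 and X2), not from a matrix that includes Y.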
Re: [R] A matrix with mixed character and numerical columns
OK, where is the best place to post these to to get them incorporated in R? -Original Message- From: Petr PIKAL [mailto:[EMAIL PROTECTED] Sent: 31 May 2007 14:23 To: michael watson (IAH-C) Cc: r-help@stat.math.ethz.ch Subject: Re: [R] A matrix with mixed character and numerical columns [EMAIL PROTECTED] napsal dne 31.05.2007 14:32:01: > What I am trying to do is create an x-y plot from the numerical values, > and the output of row() or col() gives me an excellent way of > calculating an x- or y- co-ordinate, with the value in the data.frame > being the other half of the pair. > > Thanks for the code, Petr - I'm sure you would agree, however, that it's > a bit 'clumsy' (no fault of yours). > > Can we just adjust row() and col() for data.frames? > > col <- function (x, as.factor = FALSE) > { > if (is.data.frame(x)) { > x <- as.matrix(x) > } > if (as.factor) > factor(.Internal(col(x)), labels = colnames(x)) > else .Internal(col(x)) > } > > row <- function (x, as.factor = FALSE) > { > if (is.data.frame(x)) { > x <- as.matrix(x) > } > if (as.factor) > factor(.Internal(row(x)), labels = rownames(x)) > else .Internal(row(x)) > } > > Is there any reason why these won't work? Am I oversimplifying it? Seems to me that both works. At least on data.frame I tried it. Regards Petr > > Mick > -Original Message- > From: Petr PIKAL [mailto:[EMAIL PROTECTED] > Sent: 31 May 2007 12:57 > To: michael watson (IAH-C) > Cc: r-help@stat.math.ethz.ch > Subject: Odp: [R] A matrix with mixed character and numerical columns > > Hi > [EMAIL PROTECTED] napsal dne 31.05.2007 12:48:11: > > > Is it possible to have one? > > > > I have a data.frame with two character columns and 6 numerical > columns. > > > > I converted to a matrix as I needed to use the col() and row() > > functions. > > However, if I convert the data.frame to a matrix, using as.matrix, the > > numerical columns get converted to characters, and that messes up some > > of the calculations. > > > > Do I really have to split it up into two matrices, one character and > the > > other numerical, just so I can use the col() and row() functions? Are > > there equivalent functions for data.frames? > > AFAIK I do not remember equivalent functions for data frame. If you just > > want column or row index you can use > > 1:dim(DF)[1] or 1:dim(DF)[2] for rows and columns > > if you want repeat these indexes row or columnwise use > > rrr<-rep(1:dim(DF)[1], dim(DF)[2]) > matrix(rrr,dim(DF)[1], dim(DF)[2]) > > rrr<-rep(1:dim(DF)[2], dim(DF)[1]) > matrix(rrr,dim(DF)[1], dim(DF)[2], byrow=T) > > Regards > Petr > > > > > > > __ > > R-help@stat.math.ethz.ch mailing list > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] A matrix with mixed character and numerical columns
What I am trying to do is create an x-y plot from the numerical values, and the output of row() or col() gives me an excellent way of calculating an x- or y- co-ordinate, with the value in the data.frame being the other half of the pair. Thanks for the code, Petr - I'm sure you would agree, however, that it's a bit 'clumsy' (no fault of yours). Can we just adjust row() and col() for data.frames? col <- function (x, as.factor = FALSE) { if (is.data.frame(x)) { x <- as.matrix(x) } if (as.factor) factor(.Internal(col(x)), labels = colnames(x)) else .Internal(col(x)) } row <- function (x, as.factor = FALSE) { if (is.data.frame(x)) { x <- as.matrix(x) } if (as.factor) factor(.Internal(row(x)), labels = rownames(x)) else .Internal(row(x)) } Is there any reason why these won't work? Am I oversimplifying it? Mick -Original Message- From: Petr PIKAL [mailto:[EMAIL PROTECTED] Sent: 31 May 2007 12:57 To: michael watson (IAH-C) Cc: r-help@stat.math.ethz.ch Subject: Odp: [R] A matrix with mixed character and numerical columns Hi [EMAIL PROTECTED] napsal dne 31.05.2007 12:48:11: > Is it possible to have one? > > I have a data.frame with two character columns and 6 numerical columns. > > I converted to a matrix as I needed to use the col() and row() > functions. > However, if I convert the data.frame to a matrix, using as.matrix, the > numerical columns get converted to characters, and that messes up some > of the calculations. > > Do I really have to split it up into two matrices, one character and the > other numerical, just so I can use the col() and row() functions? Are > there equivalent functions for data.frames? AFAIK I do not remember equivalent functions for data frames. If you just want column or row index you can use 1:dim(DF)[1] or 1:dim(DF)[2] for rows and columns if you want to repeat these indexes row- or column-wise, use rrr<-rep(1:dim(DF)[1], dim(DF)[2]) matrix(rrr,dim(DF)[1], dim(DF)[2]) rrr<-rep(1:dim(DF)[2], dim(DF)[1]) matrix(rrr,dim(DF)[1], dim(DF)[2], byrow=T) Regards Petr > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] A matrix with mixed character and numerical columns
Hi Michael, I don't think it is possible. Please first see the definitions of a data frame and a matrix. - Original Message From: michael watson (IAH-C) <[EMAIL PROTECTED]> To: r-help@stat.math.ethz.ch Sent: Thursday, May 31, 2007 4:18:11 PM Subject: [R] A matrix with mixed character and numerical columns Is it possible to have one? I have a data.frame with two character columns and 6 numerical columns. I converted to a matrix as I needed to use the col() and row() functions. However, if I convert the data.frame to a matrix, using as.matrix, the numerical columns get converted to characters, and that messes up some of the calculations. Do I really have to split it up into two matrices, one character and the other numerical, just so I can use the col() and row() functions? Are there equivalent functions for data.frames? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
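The coercion rule behind that answer can be seen in a two-line toy example: a matrix has a single storage mode, so as.matrix() on a mixed data.frame falls back to character.

d <- data.frame(name = c("a", "b"), x = 1:2)
m <- as.matrix(d)
mode(m)      # "character" -- every cell, including the numbers, is now a string
m[, "x"]     # "1" "2", which is why arithmetic on the converted matrix breaks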
Re: [R] Where is CRAN mirror address stored?
chooseCRANmirror() -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Vladimir Eremeev Sent: 31 May 2007 12:14 To: r-help@stat.math.ethz.ch Subject: [R] Where is CRAN mirror address stored? When I update.packages(), R shows the dialog window, listing CRAN mirrors and asks to choose the CRAN mirror to use in this session. Then, R uses this address and never asks again until quit. Is there any way to make R ask for the CRAN mirror again, except restarting it? I am just trying to save typing, because sometimes my internet connection with CRAN becomes too slow, and mirrors disappear. -- View this message in context: http://www.nabble.com/Where-is-CRAN-mirror-address-stored--tf3845953.html#a10891857 Sent from the R help mailing list archive at Nabble.com. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
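In other words, the chosen mirror lives in the "repos" option for the session; a quick sketch (the URL is just an example mirror):

getOption("repos")        # where the currently selected mirror is stored
chooseCRANmirror()        # pops up the mirror list again, any time
options(repos = c(CRAN = "http://cran.r-project.org"))   # or set it directly, no dialog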
[R] A matrix with mixed character and numerical columns
Is it possible to have one? I have a data.frame with two character columns and 6 numerical columns. I converted to a matrix as I needed to use the col() and row() functions. However, if I convert the data.frame to a matrix, using as.matrix, the numerical columns get converted to characters, and that messes up some of the calculations. Do I really have to split it up into two matrices, one character and the other numerical, just so I can use the col() and row() functions? Are there equivalent functions for data.frames? __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] sizing and saving graphics in R
There are also the functions pdf(), jpeg(), bmp() and png(). -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Murray Pung Sent: 31 May 2007 01:22 To: Felicity Jones Cc: r-help@stat.math.ethz.ch Subject: Re: [R] sizing and saving graphics in R I use the savePlot function for saving graphics. The following will save the active graphics panel in your working directory, in wmf format, which I find has a high resolution. Check out other possible formats in help. savePlot(filename = "myfilename",type = c("wmf")) Murray On 31/05/07, Felicity Jones <[EMAIL PROTECTED]> wrote: > > > Dear R wizards, > > I am seeking advice on graphics in R. Specifically, how to manipulate > the size and save a plot I have produced using the LDheatmap library. > I confess I am relatively new to graphics in R, but I would greatly > appreciate any suggestions you may have. > > LDheatmap produces a coloured triangular matrix of pairwise > associations between 600 genetic markers in my dataset. Initially the > graphical output was confined to the computer screen, such that each > pairwise marker association was displayed as approximately 1 pixel > (too small for me to interpret). > > I have successfully managed to play with the LDheatmap function to > enlarge the size of viewport by changing the following code in > LDheatmap > > #From > > heatmapVP <- viewport(width = unit(0.8, "snpc"), height = unit(0.8, > "snpc"), > name=vp.name) > > #To > heatmapVP <- viewport(width = unit(25, "inches"), height = unit(25, > "inches"), name=vp.name) > > This produces a much larger plot (so big that the majority is not seen > on the screen). I would like to save the entire thing so that I can > import it into photoshop or some other image software. > > My problem is that when I save using the R graphics console > (File->Save As->bmp), it only saves the section I can see on the > screen. Any suggestions on how to save the whole plot or manipulate > the plot so I get higher resolution would be much appreciated. > > Thanks for your help in advance, > > Felicity. > > > > > > > > > ___ > > Dr Felicity Jones > Department of Developmental Biology > Stanford University School of Medicine > Beckman Center > 279 Campus Drive > Stanford CA 94305-5329 > USA > > __ > R-help@stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Murray Pung Statistician, Datapharm Australia Pty Ltd 0404 273 283 [[alternative HTML version deleted]] __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
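An alternative to saving from the screen device is to open a file device of the required size before plotting, so the whole 25-inch heatmap is captured regardless of what fits in the window. A sketch (file names and sizes are only examples):

pdf("LDheatmap.pdf", width = 25, height = 25)   # inches; vector output, so no resolution limit
# LDheatmap(...)                                # the actual plotting call goes here
dev.off()

png("LDheatmap.png", width = 3000, height = 3000)   # pixels; make it as large as needed
# LDheatmap(...)
dev.off()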
Re: [R] opinions please: text editors and reporting/Sweave?
Have you tried R2HTML, or is HTML not what you're looking for? -Original Message- From: [EMAIL PROTECTED] on behalf of Tim Howard Sent: Wed 30/05/2007 9:43 PM To: r-help@stat.math.ethz.ch Subject: [R] opinions please: text editors and reporting/Sweave? dear all - I currently use Tinn-R as my text editor to work with code that I submit to R, with some output dumped to text files, some images dumped to pdf. (system: Windows 2K and XP, R 2.4.1 and R 2.5). We are using R for overnight runs to create large output data files for GIS, but then I need simple output reports for analysis results for each separate data set. Thus, I create many reports of the same style, but just based on different input data. I am recognizing that I need a better reporting system, so that I can create clean reports for each separate R run. This obviously means using Sweave and some implementation of LaTex, both of which are new to me. I've installed MikTex and successfully completed a demo or two for creating pdfs from raw LaTeX. It appears that if I want to ease my entry into the world of LaTeX, I might need to switch editors to something like Emacs (I read somewhere that Emacs helps with the TeX markup?). After quite a while wallowing at the Emacs site, I am finding that ESS is well integrated with R and might be the way to go. Aaaagh... I'm in way over my head! My questions: What, in your opinion, is the simplest way to integrate text and graphics reports into a single report such as a pdf file. If the answer to this is LaTeX and Sweave, is it difficult to use a text editor such as Tinn-R or would you strongly recommend I leave behind Tinn and move over to an editor that has more LaTeX help? In reading over Friedrich Leisch's "Sweave User Manual" (v 1.6.0) I am beginning to think I can do everything I need with my simple editor. Before spending many hours going down that path, I thought it prudent to ask the R community. It is likely I am misunderstanding some of the process here and any clarifications are welcome. Thank you in advance for any thoughts. Tim Howard __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
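For what it is worth, a minimal Sweave file needs nothing editor-specific, so Tinn-R is perfectly usable for it. A bare-bones sketch (file and object names are made up): run Sweave("report.Rnw") in R, then pdflatex on the resulting report.tex.

% report.Rnw -- minimal Sweave skeleton
\documentclass{article}
\begin{document}

<<echo=TRUE>>=
x <- rnorm(100)   # in a real report, read this run's output file instead
summary(x)
@

The mean for this run was \Sexpr{round(mean(x), 2)}.

<<fig=TRUE, echo=FALSE>>=
hist(x)
@

\end{document}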
Re: [R] matrix in data.frame
Have you thought of using a list? > a <- matrix(1:10, nrow=2) > b <- 1:5 > x <- list(a=a, b=b) > x $a [,1] [,2] [,3] [,4] [,5] [1,] 1 3 5 7 9 [2,] 2 4 6 8 10 $b [1] 1 2 3 4 5 > x$a [,1] [,2] [,3] [,4] [,5] [1,] 1 3 5 7 9 [2,] 2 4 6 8 10 > x$b [1] 1 2 3 4 5 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Lina Hultin-Rosenberg Sent: 30 May 2007 10:26 To: r-help@stat.math.ethz.ch Subject: [R] matrix in data.frame Dear list! I have run into a problem that seems very simple but I can't find any solution to it (have searched the internet, help-files and "An introduction to R" etc without any luck). The problem is the following: I would like to create a data.frame with two components (columns), the first component being a matrix and the second component a vector. Whatever I have tried so far, I end up with a data.frame containing all the columns from the matrix plus the vector which is not what I am after. I have seen this kind of data.frame among R example datasets (oliveoil and yarn). I would greatly appreciate some help with this problem! Kind regards, Lina Hultin Rosenberg Karolinska Biomics Center Karolinska Institute Sweden __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code. __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
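As an aside, a data.frame really can hold a matrix as a single column (which is how oliveoil and yarn are built); wrapping the matrix in I() keeps it from being split into separate columns. A small sketch, with b shortened so it has one element per data.frame row:

a <- matrix(1:10, nrow = 2)
b <- 1:2                          # one element per row of the data frame
d <- data.frame(a = I(a), b = b)
ncol(d)    # 2: the whole matrix counts as one column
d$a        # the intact 2 x 5 matrix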
[R] Help me understand colours on linux
Hi, here is my sessionInfo(): Version 2.3.1 (2006-06-01) i686-redhat-linux-gnu attached base packages: [1] "methods" "stats" "graphics" "grDevices" "utils" "datasets" [7] "base" I have a function that is trying to draw rectangles using 136 different colours, and I get the following error: Error in rect(xstart, my.min, xstart + fcount[i, 2], my.max, col = fcolors[i], : Error: X11 cannot allocate additional graphics colours. Consider using X11 with colortype="pseudo.cube" or "gray". However, if I use "pseudo.cube" I don't get anywhere near enough distinct colours. I could use gray, but I would prefer colour. So, questions: 1) Is there any set of options I can use which will actually let me create that many colours? 2) If not, how do I test if there is not, and implement gray instead of colours? This function and package work on Windows, and they work with fewer colours. I guess I could try and trap the error, and if I can, go back and re-run the function with "options(X11colortype="gray")" but I'd prefer something more elegant... What I'm looking for is some test where I can say "If you're going to fail, X11colortype='gray', else X11colortype='true'". Possible? Many thanks Mick The information contained in this message may be confidentia...{{dropped}} __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
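On question 2, one possible shape for the error-trapping fallback (an untested sketch; draw.rects() is a made-up stand-in for the real plotting code):

ok <- tryCatch({ draw.rects(); TRUE },          # try the full-colour version first
               error = function(e) FALSE)       # X11 colour-allocation failure lands here
if (!ok) {
  dev.off()                                     # close the device that ran out of colours
  X11(colortype = "gray")                       # reopen as the error message suggests
  draw.rects()                                  # redraw in gray
}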