Re: [R] convert RData to txt

2009-10-05 Thread Ilyas .
Thank you for your reply.
I tried your commands but it's not working. I have attached the RData
file, which I want to convert into txt.

Ilyas
On Mon, Oct 5, 2009 at 9:27 PM, Henrique Dallazuanna wrote:

> You can try something like this:
>
> lapply(ls(), function(obj)cat("\n", obj, "<-",
> paste(deparse(get(obj)), collapse = "\n"), file = 'RData.txt', append
> = TRUE))
> source('RData.txt')
>
>
> On Mon, Oct 5, 2009 at 4:00 AM,   wrote:
> > hello all,
> > will you please tell me how I can convert RData files to txt?
> >
> >
>
>
>
> --
> Henrique Dallazuanna
> Curitiba-Paraná-Brasil
> 25° 25' 40" S 49° 16' 22" O
>
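For anyone trying the same thing, a minimal sketch of the deparse()/source() round trip Henrique describes, assuming the saved objects are ordinary vectors, data frames or lists (the file name "attached.RData" is hypothetical):

e <- new.env()
load("attached.RData", envir = e)   # load the .RData into its own environment

out <- "RData.txt"
file.create(out)                    # start with an empty output file
for (obj in ls(e)) {
  cat("\n", obj, " <- ",
      paste(deparse(get(obj, envir = e)), collapse = "\n"),
      "\n", sep = "", file = out, append = TRUE)
}

# round-trip check, e.g. in a fresh session:
# source("RData.txt")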
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with na.omit when using length()

2009-10-05 Thread Viju Moses
Thanks for that. It works as expected now. A case of GIGO (garbage in- 
garbage out) on my part, with some head-banging. :-)


Regards
Viju Moses

Yihui Xie wrote:

just put na.omit() inside length() if you intend to omit the NA
elements of the vector (otherwise you are trying to omit the NA's of
the returned value of length() which is a scalar 2):

length(na.omit(sno[a==1 & b==0]))

Regards,
Yihui
--
Yihui Xie 
Phone: 515-294-6609 Web: http://yihui.name
Department of Statistics, Iowa State University
3211 Snedecor Hall, Ames, IA



On Mon, Oct 5, 2009 at 11:51 PM, Viju Moses  wrote:

I'm seeing what looks to me like odd behaviour when I use na.omit on a
simple "length" function, as follows.


sno

 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34

a

 [1] 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0

b

 [1]  1  1  0  1  1  1  0  0 NA  0  0  0 NA  0  1 NA  0  1  0  0  0  0 NA  0
 0  0  0 NA  0 NA  0  1  0  0

#NA refers to no data available.


df=data.frame(sno,a,b)

# I'm pasting the sorted data frame below:

sortdf=df[order(a,b),]
sortdf

  sno a  b
3    3 0  0
7    7 0  0
8    8 0  0
10  10 0  0
11  11 0  0
12  12 0  0
14  14 0  0
17  17 0  0
20  20 0  0
21  21 0  0
22  22 0  0
24  24 0  0
25  25 0  0
26  26 0  0
27  27 0  0
29  29 0  0
31  31 0  0
33  33 0  0
34  34 0  0
1    1 0  1
4    4 0  1
9    9 0 NA
13  13 0 NA
23  23 0 NA
28  28 0 NA
30  30 0 NA
19  19 1  0
2    2 1  1
5    5 1  1
6    6 1  1
15  15 1  1
18  18 1  1
32  32 1  1
16  16 1 NA

#Now I wish to count how many records have a=1 AND b=0. From the lower
section of that sorted data frame we see the answer is 1 (record # 19). But
instead I'm seeing 2. Probably counting record # 16 also.


na.omit(length(sno[a==1 & b==0]))

[1] 2

I'd be grateful to anyone who can point out what I'm doing wrong.

Regards.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with na.omit when using length()

2009-10-05 Thread Yihui Xie
just put na.omit() inside length() if you intend to omit the NA
elements of the vector (otherwise you are trying to omit the NA's of
the returned value of length() which is a scalar 2):

length(na.omit(sno[a==1 & b==0]))

Regards,
Yihui
--
Yihui Xie 
Phone: 515-294-6609 Web: http://yihui.name
Department of Statistics, Iowa State University
3211 Snedecor Hall, Ames, IA
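Two equivalent sketches that count directly without indexing (assuming the a, b and sno vectors from the quoted post below). The point is that a==1 & b==0 is NA wherever b is NA, and an NA subscript produces an NA element, so plain length() still counts it:

sum(a == 1 & b == 0, na.rm = TRUE)    # count the TRUE values, ignoring NAs
length(which(a == 1 & b == 0))        # which() drops the NAs before indexing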



On Mon, Oct 5, 2009 at 11:51 PM, Viju Moses  wrote:
> I'm seeing what looks to me like odd behaviour when I use na.omit on a
> simple "length" function, as follows.
>
>> sno
>  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> 25 26 27 28 29 30 31 32 33 34
>> a
>  [1] 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
>> b
>  [1]  1  1  0  1  1  1  0  0 NA  0  0  0 NA  0  1 NA  0  1  0  0  0  0 NA  0
>  0  0  0 NA  0 NA  0  1  0  0
>
> #NA refers to no data available.
>
>> df=data.frame(sno,a,b)
> # I'm pasting the sorted data frame below:
>> sortdf=df[order(a,b),]
>> sortdf
>   sno a  b
> 3    3 0  0
> 7    7 0  0
> 8    8 0  0
> 10  10 0  0
> 11  11 0  0
> 12  12 0  0
> 14  14 0  0
> 17  17 0  0
> 20  20 0  0
> 21  21 0  0
> 22  22 0  0
> 24  24 0  0
> 25  25 0  0
> 26  26 0  0
> 27  27 0  0
> 29  29 0  0
> 31  31 0  0
> 33  33 0  0
> 34  34 0  0
> 1    1 0  1
> 4    4 0  1
> 9    9 0 NA
> 13  13 0 NA
> 23  23 0 NA
> 28  28 0 NA
> 30  30 0 NA
> 19  19 1  0
> 2    2 1  1
> 5    5 1  1
> 6    6 1  1
> 15  15 1  1
> 18  18 1  1
> 32  32 1  1
> 16  16 1 NA
>
> #Now I wish to count how many records have a=1 AND b=0. From the lower
> section of that sorted data frame we see the answer is 1 (record # 19). But
> instead I'm seeing 2. Probably counting record # 16 also.
>
>> na.omit(length(sno[a==1 & b==0]))
> [1] 2
>
> I'd be grateful to anyone who can point out what I'm doing wrong.
>
> Regards.
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with na.omit when using length()

2009-10-05 Thread Daniel Malter
This looks buggy to me (or at least non-intuitive), but I am almost
sure there is an explanation for why the b==0 condition includes the NAs.
You will find a way to circumvent it in the last two lines of the example below.

a=c(1,1,1,0,0,0)
b=c(1,NA,0,1,NA,0)
sno=rnorm(6)

na.omit(length(sno[a==1 & b==0]))
sno[a==1 & b==0]
length(sno[a==1 & b==0])

which(a==1&b==0)
sno[which(a==1&b==0)] 

Daniel

-
cuncta stricte discussurus
-

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
behalf of Viju Moses
Sent: Tuesday, October 06, 2009 12:52 AM
To: r-help@r-project.org
Subject: [R] Problem with na.omit when using length()

I'm seeing what looks to me like odd behaviour when I use na.omit on a
simple "length" function, as follows.

 > sno
  [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22
23 24 25 26 27 28 29 30 31 32 33 34
 > a
  [1] 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0  >
b
  [1]  1  1  0  1  1  1  0  0 NA  0  0  0 NA  0  1 NA  0  1  0  0  0  0 NA
0  0  0  0 NA  0 NA  0  1  0  0

#NA refers to no data available.

 > df=data.frame(sno,a,b)
# I'm pasting the sorted data frame below:
 > sortdf=df[order(a,b),]
 > sortdf
   sno a  b
3    3 0  0
7    7 0  0
8    8 0  0
10  10 0  0
11  11 0  0
12  12 0  0
14  14 0  0
17  17 0  0
20  20 0  0
21  21 0  0
22  22 0  0
24  24 0  0
25  25 0  0
26  26 0  0
27  27 0  0
29  29 0  0
31  31 0  0
33  33 0  0
34  34 0  0
1    1 0  1
4    4 0  1
9    9 0 NA
13  13 0 NA
23  23 0 NA
28  28 0 NA
30  30 0 NA
19  19 1  0
2    2 1  1
5    5 1  1
6    6 1  1
15  15 1  1
18  18 1  1
32  32 1  1
16  16 1 NA

#Now I wish to count how many records have a=1 AND b=0. From the lower
section of that sorted data frame we see the answer is 1 (record # 19).
But instead I'm seeing 2. Probably counting record # 16 also.

 > na.omit(length(sno[a==1 & b==0]))
[1] 2

I'd be grateful to anyone who can point out what I'm doing wrong.

Regards.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problem with na.omit when using length()

2009-10-05 Thread Viju Moses
I'm seeing what looks to me like odd behaviour when I use na.omit on a 
simple "length" function, as follows.


> sno
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 
23 24 25 26 27 28 29 30 31 32 33 34

> a
 [1] 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
> b
 [1]  1  1  0  1  1  1  0  0 NA  0  0  0 NA  0  1 NA  0  1  0  0  0  0 
NA  0  0  0  0 NA  0 NA  0  1  0  0


#NA refers to no data available.

> df=data.frame(sno,a,b)
# I'm pasting the sorted data frame below:
> sortdf=df[order(a,b),]
> sortdf
   sno a  b
3    3 0  0
7    7 0  0
8    8 0  0
10  10 0  0
11  11 0  0
12  12 0  0
14  14 0  0
17  17 0  0
20  20 0  0
21  21 0  0
22  22 0  0
24  24 0  0
25  25 0  0
26  26 0  0
27  27 0  0
29  29 0  0
31  31 0  0
33  33 0  0
34  34 0  0
1    1 0  1
4    4 0  1
9    9 0 NA
13  13 0 NA
23  23 0 NA
28  28 0 NA
30  30 0 NA
19  19 1  0
2    2 1  1
5    5 1  1
6    6 1  1
15  15 1  1
18  18 1  1
32  32 1  1
16  16 1 NA

#Now I wish to count how many records have a=1 AND b=0. From the lower
section of that sorted data frame we see the answer is 1 (record # 19).
But instead I'm seeing 2. Probably counting record # 16 also.


> na.omit(length(sno[a==1 & b==0]))
[1] 2

I'd be grateful to anyone who can point out what I'm doing wrong.

Regards.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] taking limit

2009-10-05 Thread David Winsemius
This is a rather vague question, ... I hope you will agree? Are you  
asking for some approach that does symbolic calculus? Perhaps package  
Ryacas?



On Oct 5, 2009, at 9:07 PM, dahuang wrote:



Hey, I wonder how to compute a limit in R.

In R help, I found the function "lim", but it seems I need to load a
package first; can anyone tell me which one?

Or ANY method of taking a limit; I searched for 2 days but failed,
thanks~~
--


David Winsemius, MD
Heritage Laboratories
West Hartford, CT
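If a symbolic answer is not essential, a purely numeric sketch may be enough; the sin(x)/x limit here is only an illustration, and Ryacas (whose exact call depends on the installed version) would be the route for genuinely symbolic limits:

# numerically approach lim_{x -> 0+} sin(x)/x by evaluating ever closer to 0
f <- function(x) sin(x) / x
x <- 10^-(1:8)
cbind(x, f(x))    # the second column converges toward 1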

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2 applying a function based on facet

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 10:45 PM, stephen sefick wrote:


Sorry, I want the cumsum of precipitation by gauge name.
That can then be used with the appropriate datetime stamp to plot a
cumulative rainfall plot.



I dunno. Here is what I get when I extract the data-gathering part of  
that extended function.


> str(both)
'data.frame':   3973 obs. of  8 variables:
 $ gauge: int  2102908 2102908 2102908 2102908 2102908  
2102908 2102908 2102908 2102908 2102908 ...

 $ agency   : Factor w/ 1 level "USGS": 1 1 1 1 1 1 1 1 1 1 ...
 $ date : Factor w/ 8 levels "2009-09-28","2009-09-29",..: 1  
1 1 1 1 1 1 1 1 1 ...
 $ time : Factor w/ 96 levels "00:00","00:15",..: 1 2 3 4 5 6  
7 8 9 10 ...
 $ gauge_height : num  0.89 0.89 0.89 0.89 0.89 0.89 0.89 0.89 0.89  
0.88 ...

 $ discharge: num  8.4 8.4 8.4 8.4 8.4 8.4 8.4 8.4 8.4 8.1 ...
 $ precipitation: num  0 0 0 0 0 0 0 0 0 0 ...
 $ gauge_name   : Factor w/ 6 levels "CANOOCHEE RIVER NEAR CLAXTON,  
GA",..: 3 3 3 3 3 3 3 3 3 3 ...

> df$c_sum_precip <- ave(DF$precipitation, DF$guage_name, cumsum)
Error in as.vector(x, mode) : invalid 'mode' argument
> describe(DF$precipitation)  # describe from Hmisc package
DF$precipitation
       n missing  unique    Mean     .05     .10     .25     .50     .75     .90     .95
    2255    1718      15 0.00161       0       0       0       0       0       0       0

              0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.14 0.21 0.22 0.37 0.46
Frequency  2146   60   16   10    6    3    4    3    1    1    1    1    1    1    1
%            95    3    1    0    0    0    0    0    0    0    0    0    0    0    0

>-

Seems that a cumsum on a variable that has >60% missing values is  
going to present certain problems. I'm also not sure how this approach  
will carry across the time variable which at the moment appears to be  
a factor rather than any of the date or datetime classes.


--
David.


On Mon, Oct 5, 2009 at 9:03 PM, David Winsemius > wrote:


On Oct 5, 2009, at 9:39 PM, stephen sefick wrote:


Look at the bottom of the message for my question
#here is a little function that I wrote
USGS <- function(input="discharge", days=7){
library(chron)
library(gsubfn)
#021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS)
#02102908 is the Flat Creek Gauge (ftbrfcms)
#02133500 is the Drowning Creek (ftbrbmcm)
#02341800 is the Upatoi Creek Near Columbus (ftbn)
#02342500 is the Uchee Creek Near Fort Mitchell (ftbn)
#02203000 is the Canoochee River Near Claxton (ftst)

a <- "http://waterdata.usgs.gov/nwis/uv?format=rdb&period=";
b <-  
"&site_no=021973269,02102908,02133500,02341800,02342500,02203000"

z <- paste(a, days, b, sep="")
L <- readLines(z)


#trimmed long comment that broke function


L.USGS <- grep("^USGS", L, value = TRUE)
DF <- read.table(textConnection(L.USGS), fill = TRUE)
colnames(DF) <- c("agency", "gauge", "date", "time", "gauge_height",
"discharge", "precipitation")
pat <- "^# +USGS +([0-9]+) +(.*)"
L.DD <- grep(pat, L, value = TRUE)
library(gsubfn)
DD <- strapply(L.DD, pat, c, simplify = rbind)
DDdf <- data.frame(gauge = as.numeric(DD[,1]), gauge_name = DD[,2])
both <- merge(DF, DDdf, by = "gauge", all.x = TRUE)

dts <- as.character(both[,"date"])
tms <- as.character(both[,"time"])
date_time <- as.chron(paste(dts, tms), "%Y-%m-%d %H:%M")
DF <- data.frame(date_time, both)
library(ggplot2)
#discharge
if(input=="discharge"){
qplot(as.POSIXct(date_time), discharge, data=DF,
geom="line")+facet_wrap(~gauge_name,
scales="free_y")+coord_trans(y="log10")
}else{
#precipitation

qplot(as.POSIXct(date_time),
precipitation, data=subset(DF, precipitation!="NA"),
geom="line")+facet_wrap(~gauge_name, scales="free_y")
}
}

USGS("precip")

I would like to have the cumsum based on the facet gauge_name - in
other words a cumulative rainfall amount for each gauge_name


You want "the cumsum" of  but you have wrapped so much  
up in that

function (inlcuding library calls???)  that I cannot see what that
 would be. Not all of us read ggplot calls. The  
canonical route

to getting cumsums by a factor is with ave. Something like:

 DF$cum_x <- ave(DF$x, DF$fac, cumsum)

--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT






--
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] taking limit

2009-10-05 Thread dahuang

Hey, I wonder how to compute a limit in R.

In R help, I found the function "lim", but it seems I need to load a
package first; can anyone tell me which one?

Or ANY method of taking a limit; I searched for 2 days but failed,
thanks~~


-- 
View this message in context: 
http://www.nabble.com/taking-limit-tp25760269p25760269.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R on Linux, and R on Windows , any difference in maturity+stability?

2009-10-05 Thread Robert Wilkins
Will R have more glitches on one operating system as opposed to
another, or is it pretty much the same?

robert

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2 applying a function based on facet

2009-10-05 Thread stephen sefick
Sorry, I want the cumsum of precipitation by gauge name.
That can then be used with the appropriate datetime stamp to plot a
cumulative rainfall plot.

On Mon, Oct 5, 2009 at 9:03 PM, David Winsemius  wrote:
>
> On Oct 5, 2009, at 9:39 PM, stephen sefick wrote:
>
>> Look at the bottom of the message for my question
>> #here is a little function that I wrote
>> USGS <- function(input="discharge", days=7){
>> library(chron)
>> library(gsubfn)
>> #021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS)
>> #02102908 is the Flat Creek Gauge (ftbrfcms)
>> #02133500 is the Drowning Creek (ftbrbmcm)
>> #02341800 is the Upatoi Creek Near Columbus (ftbn)
>> #02342500 is the Uchee Creek Near Fort Mitchell (ftbn)
>> #02203000 is the Canoochee River Near Claxton (ftst)
>>
>> a <- "http://waterdata.usgs.gov/nwis/uv?format=rdb&period=";
>> b <- "&site_no=021973269,02102908,02133500,02341800,02342500,02203000"
>> z <- paste(a, days, b, sep="")
>> L <- readLines(z)
>
> #trimmed long comment that broke function
>
>> L.USGS <- grep("^USGS", L, value = TRUE)
>> DF <- read.table(textConnection(L.USGS), fill = TRUE)
>> colnames(DF) <- c("agency", "gauge", "date", "time", "gauge_height",
>> "discharge", "precipitation")
>> pat <- "^# +USGS +([0-9]+) +(.*)"
>> L.DD <- grep(pat, L, value = TRUE)
>> library(gsubfn)
>> DD <- strapply(L.DD, pat, c, simplify = rbind)
>> DDdf <- data.frame(gauge = as.numeric(DD[,1]), gauge_name = DD[,2])
>> both <- merge(DF, DDdf, by = "gauge", all.x = TRUE)
>>
>> dts <- as.character(both[,"date"])
>> tms <- as.character(both[,"time"])
>> date_time <- as.chron(paste(dts, tms), "%Y-%m-%d %H:%M")
>> DF <- data.frame(date_time, both)
>> library(ggplot2)
>> #discharge
>> if(input=="discharge"){
>> qplot(as.POSIXct(date_time), discharge, data=DF,
>> geom="line")+facet_wrap(~gauge_name,
>> scales="free_y")+coord_trans(y="log10")
>> }else{
>> #precipitation
>>
>> qplot(as.POSIXct(date_time),
>> precipitation, data=subset(DF, precipitation!="NA"),
>> geom="line")+facet_wrap(~gauge_name, scales="free_y")
>> }
>> }
>>
>> USGS("precip")
>>
>> I would like to have the cumsum based on the facet gauge_name - in
>> other words a cumulative rainfall amount for each gauge_name
>
> You want "the cumsum" of  but you have wrapped so much up in that
> function (inlcuding library calls???)  that I cannot see what that
>  would be. Not all of us read ggplot calls. The canonical route
> to getting cumsums by a factor is with ave. Something like:
>
>  DF$cum_x <- ave(DF$x, DF$fac, cumsum)
>
> --
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
>
>



-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to fit time varying coefficient regression model?

2009-10-05 Thread R_help Help
Hi - I read through the dse package manual a bit. I'm not quite certain
how I can use it to estimate a time-varying coefficient regression
model. I might have picked an inappropriate package. Any suggestion would
be greatly appreciated. Thank you.

rh

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Vim-R-plugin (new version)

2009-10-05 Thread Jakson A. Aquino
On Mon, Oct 05, 2009 at 08:03:23PM -0400, Gabor Grothendieck wrote:
> Looks interesting.   Could you make a vimball out of it to facilitate
> installation?

It seems that VimBall is capable of creating vimballs of simple
plugins which have a file in ftplugin and another in doc.  The
Vim-R-plugin has 18 files and two symbolic links in 7
directories.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2 applying a function based on facet

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 9:39 PM, stephen sefick wrote:


Look at the bottom of the message for my question
#here is a little function that I wrote
USGS <- function(input="discharge", days=7){
library(chron)
library(gsubfn)
#021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS)
#02102908 is the Flat Creek Gauge (ftbrfcms)
#02133500 is the Drowning Creek (ftbrbmcm)
#02341800 is the Upatoi Creek Near Columbus (ftbn)
#02342500 is the Uchee Creek Near Fort Mitchell (ftbn)
#02203000 is the Canoochee River Near Claxton (ftst)

a <- "http://waterdata.usgs.gov/nwis/uv?format=rdb&period=";
b <- "&site_no=021973269,02102908,02133500,02341800,02342500,02203000"
z <- paste(a, days, b, sep="")
L <- readLines(z)


#trimmed long comment that broke function


L.USGS <- grep("^USGS", L, value = TRUE)
DF <- read.table(textConnection(L.USGS), fill = TRUE)
colnames(DF) <- c("agency", "gauge", "date", "time", "gauge_height",
"discharge", "precipitation")
pat <- "^# +USGS +([0-9]+) +(.*)"
L.DD <- grep(pat, L, value = TRUE)
library(gsubfn)
DD <- strapply(L.DD, pat, c, simplify = rbind)
DDdf <- data.frame(gauge = as.numeric(DD[,1]), gauge_name = DD[,2])
both <- merge(DF, DDdf, by = "gauge", all.x = TRUE)

dts <- as.character(both[,"date"])
tms <- as.character(both[,"time"])
date_time <- as.chron(paste(dts, tms), "%Y-%m-%d %H:%M")
DF <- data.frame(date_time, both)
library(ggplot2)
#discharge
if(input=="discharge"){
qplot(as.POSIXct(date_time), discharge, data=DF,
geom="line")+facet_wrap(~gauge_name,
scales="free_y")+coord_trans(y="log10")
}else{
#precipitation

qplot(as.POSIXct(date_time),
precipitation, data=subset(DF, precipitation!="NA"),
geom="line")+facet_wrap(~gauge_name, scales="free_y")
}
}

USGS("precip")

I would like to have the cumsum based on the facet gauge_name - in
other words a cumulative rainfall amount for each gauge_name


You want "the cumsum" of  but you have wrapped so much up  
in that function (inlcuding library calls???)  that I cannot see what  
that  would be. Not all of us read ggplot calls. The  
canonical route to getting cumsums by a factor is with ave. Something  
like:


 DF$cum_x <- ave(DF$x, DF$fac, cumsum)

--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT
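A sketch of how that might slot into the function above, assuming DF has the precipitation, gauge_name and date_time columns built there; NAs are treated as zero rainfall so cumsum stays defined (whether that is the right reading of a missing gauge value is a judgement call):

precip0 <- ifelse(is.na(DF$precipitation), 0, DF$precipitation)
DF$cum_precip <- ave(precip0, DF$gauge_name, FUN = cumsum)

library(ggplot2)
qplot(as.POSIXct(date_time), cum_precip, data = DF,
  geom = "line") + facet_wrap(~gauge_name, scales = "free_y")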

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ggplot2 applying a function based on facet

2009-10-05 Thread stephen sefick
Look at the bottom of the message for my question
#here is a little function that I wrote
USGS <- function(input="discharge", days=7){
library(chron)
library(gsubfn)
#021973269 is the Waynesboro Gauge on the Savannah River Proper (SRS)
#02102908 is the Flat Creek Gauge (ftbrfcms)
#02133500 is the Drowning Creek (ftbrbmcm)
#02341800 is the Upatoi Creek Near Columbus (ftbn)
#02342500 is the Uchee Creek Near Fort Mitchell (ftbn)
#02203000 is the Canoochee River Near Claxton (ftst)

a <- "http://waterdata.usgs.gov/nwis/uv?format=rdb&period=";
b <- "&site_no=021973269,02102908,02133500,02341800,02342500,02203000"
z <- paste(a, days, b, sep="")
L <- readLines(z)

#look for the data with USGS in front of it (this take advantage of
the agency column)
L.USGS <- grep("^USGS", L, value = TRUE)
DF <- read.table(textConnection(L.USGS), fill = TRUE)
colnames(DF) <- c("agency", "gauge", "date", "time", "gauge_height",
"discharge", "precipitation")
pat <- "^# +USGS +([0-9]+) +(.*)"
L.DD <- grep(pat, L, value = TRUE)
library(gsubfn)
DD <- strapply(L.DD, pat, c, simplify = rbind)
DDdf <- data.frame(gauge = as.numeric(DD[,1]), gauge_name = DD[,2])
both <- merge(DF, DDdf, by = "gauge", all.x = TRUE)

dts <- as.character(both[,"date"])
tms <- as.character(both[,"time"])
date_time <- as.chron(paste(dts, tms), "%Y-%m-%d %H:%M")
DF <- data.frame(date_time, both)
library(ggplot2)
#discharge
if(input=="discharge"){
qplot(as.POSIXct(date_time), discharge, data=DF,
geom="line")+facet_wrap(~gauge_name,
scales="free_y")+coord_trans(y="log10")
}else{
#precipitation

qplot(as.POSIXct(date_time),
precipitation, data=subset(DF, precipitation!="NA"),
geom="line")+facet_wrap(~gauge_name, scales="free_y")
}
}

USGS("precip")

I would like to have the cumsum based on the facet gauge_name - in
other words a cumulative rainfall amount for each gauge_name

-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Loop function/comparison operator problem

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 7:56 PM, Duncan Murdoch wrote:


On 05/10/2009 7:47 PM, jimdare wrote:

Hi There,
I have created the following function
format<- function(){
repeat {
form<-readline(paste("\nIn what format do you want to save these
plots?\nChoose from: wmf, emf, png, jpg, jpeg, bmp, tif, tiff, ps, eps, or
pdf.\nNote: eps is the suggested format for publication quality plots.\nPlot
format --> "));
cat("\nI'm sorry, I don't know what that format is.\nPlease try
again\nPress ENTER...");readline()}
if (form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
"ps", "eps", "pdf")) {break}
}
How do I get the program to recognise that the user has entered one of the
formats in the last line?  Even entering "png" instead of png at the prompt
doesn't seem to work.  Will the loop only break if I enter all of the
possible formats?


To debug stuff like this, assign a value to form, then evaluate the  
condition.  For example,


> form <- "png"
> form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
+  "ps", "eps", "pdf")
[1] FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE

So you could put any() around the test, or use the %in% operator  
instead of ==, or use the menu() or select.list() functions.


The ?Devices help page suggests that there is a core of valid formats
and a group of installation-specific formats. The capabilities()
function gives logical values for the list of both graphics and
non-graphics optional capabilities. This is a hack to return a list of core
features with additional system-specific options (illustrated on my
particular Mac):


> val.dev <- c("ps" ,"pdf", "pictex", "xfig", "bitmap",  
names(capabilities()[names(capabilities()) %in% c("X11", "bmp",  
"jpeg", "png", "tiff", "cairo_pdf", "quartz")]) )

> val.dev
[1] "ps" "pdf""pictex" "xfig"   "bitmap" "jpeg"   "png" 
"tiff"   "X11"


The fact that the valid types are in the names makes it all a bit
baroque. (And I think that a correct Mac version would further check
to see if aqua is true and if so return "jpeg", "png" and "tiff" even
if those were not listed as part of the X11 capabilities.)




Duncan Murdoch



David Winsemius, MD
Heritage Laboratories
West Hartford, CT
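A sketch of how the two suggestions might be combined, assuming the val.dev vector computed above; select.list() offers only the devices that look available locally, so no free-text validation loop is needed:

val.dev <- c("ps", "pdf", "pictex", "xfig", "bitmap",
  names(capabilities())[names(capabilities()) %in%
    c("X11", "bmp", "jpeg", "png", "tiff", "cairo_pdf", "quartz")])
form <- select.list(val.dev, title = "Plot format")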

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to have 'match' ignore no-matches

2009-10-05 Thread Tony Plate

x <- data.frame(d=letters[1:3], e=letters[3:5])
lookuptable <- c(a="aa", c="cc", e="ee")
match.or.keep <- function(x, lookuptable) {if (is.factor(x)) x <- as.character(x); 
m <- match(x, names(lookuptable)); ifelse(is.na(m), x, lookuptable[m])}
# to return a matrix
apply(x, 2, match.or.keep, lookuptable=lookuptable)
     d    e
[1,] "aa" "cc"
[2,] "b"  "d"
[3,] "cc" "ee"

# to return a data frame
as.data.frame(lapply(x, match.or.keep, lookuptable=lookuptable))

  d  e
1 aa cc
2  b  d
3 cc ee





Jill Hollenbach wrote:

Let me clarify:
I'm using this--

dfnew<- sapply(df, function(df) lookuptable[match(df, lookuptable [ ,1]),
2])


lookup

0101   01:01
0201   02:01
0301   03:01
0401   04:01


df

0101   0301
0201   0401
0101   0502


dfnew

01:01   03:01
02:01   04:01
01:01   NA

but what I want is:

dfnew2

01:01   03:01
02:01   04:01
01:01   0502

thanks again,
Jill




Jill Hollenbach wrote:

Hi all,
I think this is a very basic question, but I'm new to this so please bear
with me.

I'm using match to translate elements of a data frame using a lookup
table. If the content of a particular cell is not found in the lookup
table, the function returns NA. I'm wondering how I can just ignore those
cells, and return the original contents if no match is found in the lookup
table.

Many thanks in advance, this site has been extremely helpful for me so
far,
Jill

Jill Hollenbach, PhD, MPH
Assistant Staff Scientist
Center for Genetics
Children's Hospital Oakland Research Institute
jhollenb...@chori.org





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Vim-R-plugin (new version)

2009-10-05 Thread Gabor Grothendieck
Looks interesting.   Could you make a vimball out of it to facilitate
installation?

On Mon, Oct 5, 2009 at 10:12 AM, Jakson A. Aquino
 wrote:
> Dear R users,
>
> The author of Tinn-R (Jose Claudio Faria) now is co-author of
> Vim-R-plugin2, a plugin that makes it possible to send commands
> from the Vim text editor to R. We added many new key bindings,
> restructured the menu and created new Tool Bar buttons. The new
> version is available at:
>
>  http://www.vim.org/scripts/script.php?script_id=2628
>
>  NOTES:
>   (1) Some old key binding changed, including the shortcuts
>       to start R.
>   (2) The plugin doesn't work on Microsoft Windows yet.
>
> Below is the plugin's menu structure, and the corresponding
> default keyboard shortcuts:
>
> Start/Close
>  . Start R (default)                      \rf
>  . Start R --vanilla                      \rv
>  . Start R (custom)                       \rc
>  
>  . Close R (no save)                      \rq
>  . Close R (save workspace)               \rw
> ---
>
> Send
>  . File                                    f5
>  . File (echo)                             F5
>  
>  . Block (cur)                             f6
>  . Block (cur, echo)                       F6
>  . Block (cur, echo and down)             ^F6
>  
>  . Function (cur)                          f7
>  . Function (cur and down)                 F7
>  
>  . Selection                               f8
>  . Selection (echo)                        F8
>  . Selection (and down)                    f9
>  . Selection (echo and down)               F9
>  
>  . Line                                    f8
>  . Line (and down)                         f9
>  . Line (and new one)                      \q
> ---
>
> Control
>  . List space                             \rl
>  . Clear console                          \rr
>  . Clear all                              \rm
>  
>  . Object (print)                         \rp
>  . Object (names)                         \rn
>  . Object (str)                           \rt
>  
>  . Arguments (cur)                        \ra
>  . Example (cur)                          \re
>  . Help (cur)                             \rh
>  
>  . Summary (cur)                          \rs
>  . Plot (cur)                             \rg
>  . Plot and summary (cur)                 \rb
>  
>  . Set working directory (cur file path)  \rd
>  
>  . Sweave (cur file)                      \sw
>  . Sweave and PDF (cur file)              \sp
>  
>  . Rebuild (list of objects)              \ro
>
> Best regards,
>
> Jakson Aquino
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Loop function/comparison operator problem

2009-10-05 Thread Duncan Murdoch

On 05/10/2009 7:47 PM, jimdare wrote:

Hi There,

I have created the following function

format<- function(){
repeat {
form<-readline(paste("\nIn what format do you want to save these
plots?\nChoose from: wmf, emf, png, jpg, jpeg, bmp, tif, tiff, ps, eps, or
pdf.\nNote: eps is the suggested format for publication quality plots.\nPlot
format --> "));
cat("\nI'm sorry, I don't know what that format is.\nPlease try
again\nPress ENTER...");readline()}
if (form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
"ps", "eps", "pdf")) {break}
}

How do I get the program to recognise that the user has entered one of the
formats in the last line?  Even entering "png" instead of png at the prompt
doesn't seem to work.  Will the loop only break if I enter all of the
possible formats?


To debug stuff like this, assign a value to form, then evaluate the 
condition.  For example,


> form <- "png"
> form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
+  "ps", "eps", "pdf")
 [1] FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE

So you could put any() around the test, or use the %in% operator instead 
of ==, or use the menu() or select.list() functions.


Duncan Murdoch
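A sketch of the loop rewritten along those lines (renamed ask.format so it does not mask base::format, and with the prompt shortened; purely illustrative):

ask.format <- function() {
  valid <- c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
             "ps", "eps", "pdf")
  repeat {
    form <- readline("Plot format --> ")
    if (form %in% valid) break      # %in% returns a single TRUE/FALSE
    cat("I'm sorry, I don't know what that format is. Please try again.\n")
  }
  form
}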

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Loop function/comparison operator problem

2009-10-05 Thread jim holtman
use %in% instead of '=='

On Mon, Oct 5, 2009 at 7:47 PM, jimdare  wrote:
>
> Hi There,
>
> I have created the following function
>
> format<- function(){
> repeat {
> form<-readline(paste("\nIn what format do you want to save these
> plots?\nChoose from: wmf, emf, png, jpg, jpeg, bmp, tif, tiff, ps, eps, or
> pdf.\nNote: eps is the suggested format for publication quality plots.\nPlot
> format --> "));
>        cat("\nI'm sorry, I don't know what that format is.\nPlease try
> again\nPress ENTER...");readline()}
>        if (form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
> "ps", "eps", "pdf")) {break}
> }
>
> How do I get the program to recognise that the user has entered one of the
> formats in the last line?  Even entering "png" instead of png at the prompt
> doesn't seem to work.  Will the loop only break if I enter all of the
> possible formats?
>
> Thanks in advance,
>
> James
> --
> View this message in context: 
> http://www.nabble.com/Loop-function-comparison-operator-problem-tp25757203p25757203.html
> Sent from the R help mailing list archive at Nabble.com.
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] gsub - replace multiple occurences with different strings

2009-10-05 Thread Gabor Grothendieck
Here are two approaches using Bill's sample data:

1. gsubfn supports proto objects whose methods have access to a count
variable that is built into gsubfn and automatically reset to zero at
the start of each string so you can do this (gsubfn uses proto
internally so you don't have to explicitly load it):

> x <- c("xx y e d xx e t f xx e f xx",
+   "xx y e d xx e t f xx",
+   "xx y e d xx e t f xx e f  y e d xx e t f xx e f xx")
>
> library(gsubfn)
> p <- proto(fun = function(this, x) {
+if (count > 4) x
+else c("x1", "x2", "x3", "x4")[count]
+ })
> gsubfn("xx", p, x)
[1] "x1 y e d x2 e t f x3 e f x4"
[2] "x1 y e d x2 e t f x3"
[3] "x1 y e d x2 e t f x3 e f x4xx y e d xx e t f xx e f xx"

See the gsubfn vignette for more examples.

2. A simple approach is just to use a for loop:

> X <- x
> for(xn in c("x1", "x2", "x3", "x4")) X <- sub("xx", xn, X)
> X
[1] "x1 y e d x2 e t f x3 e f x4"
[2] "x1 y e d x2 e t f x3"
[3] "x1 y e d x2 e t f x3 e f x4xx y e d xx e t f xx e f xx"
>
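And a sketch of the per-string counter Bill mentions at the end of his message quoted below: wrapping his gsubfn() call in sapply() restarts the count for each element of x (same sample data as above):

sapply(x, function(s)
  gsubfn("xx", local({n <- 0; function(m) {n <<- n + 1; paste(m, n, sep = "")}}), s),
  USE.NAMES = FALSE)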


On Mon, Oct 5, 2009 at 11:19 AM, William Dunlap  wrote:
>> -Original Message-
>> From: r-help-boun...@r-project.org
>> [mailto:r-help-boun...@r-project.org] On Behalf Of Martin Batholdy
>> Sent: Monday, October 05, 2009 7:34 AM
>> To: r help
>> Subject: [R] gsub - replace multiple occurences with different strings
>>
>> Hi,
>>
>> I'm looking for a way to replace multiple occurrences of a string with
>> different strings,
>> depending on the place where it occurs.
>>
>>
>> I tried the following;
>>
>> x <- c("xx y e d xx e t f xx e f xx")
>> x <- gsub("xx", c("x1", "x2", "x3", "x4"), x)
>>
>>
>> what I want to get is;
>>
>> x =
>> x1 y y e d x2 e t f x3 e f x4
>
> You have a doubled y in the output but not the input,
> I'll assume the input is correct.  I extended x to three similar
> strings:
>
>  x <- c("xx y e d xx e t f xx e f xx",
>           "xx y e d xx e t f xx",
>           "xx y e d xx e t f xx e f  y e d xx e t f xx e f xx")
>
> If you know you always have 4 xx's you can use sub (or gsub),
> but it doesn't work properly if there are not exactly 4 xx's:
>  > sub("xx(.*)xx(.*)xx(.*)xx", "x1\\1x2\\2x3\\3x4", x)
>  [1] "x1 y e d x2 e t f x3 e f x4"
>  [2] "xx y e d xx e t f xx"
>  [3] "x1 y e d xx e t f xx e f  y e d x2 e t f x3 e f x4"
>
> You can use gsubfn() from package gsubfn along with a function that
> maintains
> a count of how many times it has been called, as in
>  > gsubfn("xx", local({n<-0;function(x){n<<-n+1;paste(x,n,sep="")}}),
> x)
>  [1] "xx1 y e d xx2 e t f xx3 e f xx4"
>
>  [2] "xx5 y e d xx6 e t f xx7"
>
>  [3] "xx8 y e d xx9 e t f xx10 e f xx11xx12 y e d xx13 e t f xx14 e f
> xx15"
>
> If you want the count to start anew with each string in the vector you
> can use sapply.
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>
>
>
>>
>>
>> but what I get is;
>>
>> x =
>> x1 y y e d x1 e t f x1 e f x1
>>
>>
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Loop function/comparison operator problem

2009-10-05 Thread jimdare

Hi There,

I have created the following function

format<- function(){
repeat {
form<-readline(paste("\nIn what format do you want to save these
plots?\nChoose from: wmf, emf, png, jpg, jpeg, bmp, tif, tiff, ps, eps, or
pdf.\nNote: eps is the suggested format for publication quality plots.\nPlot
format --> "));
cat("\nI'm sorry, I don't know what that format is.\nPlease try
again\nPress ENTER...");readline()}
if (form == c("wmf", "emf", "png", "jpg", "jpeg", "bmp", "tif", "tiff",
"ps", "eps", "pdf")) {break}
}

How do I get the program to recognise that the user has entered one of the
formats in the last line?  Even entering "png" instead of png at the prompt
doesn't seem to work.  Will the loop only break if I enter all of the
possible formats?

Thanks in advance,

James 
-- 
View this message in context: 
http://www.nabble.com/Loop-function-comparison-operator-problem-tp25757203p25757203.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Date-Time-Stamp input method for user-specific formats

2009-10-05 Thread Gabor Grothendieck
Try this.  First we read a line at a time into L except for the
header.  Then we use strapply to match on the given pattern.  It
passes the backreferences (the portions within parentheses in the
pattern) to the function (defined via a formula) whose implicit
arguments are x, y and z.  That function returns two columns which are
in the required form so that in the next statement we convert one to
chron and the other to numeric.  See R News 4/1 for more about dates
and times.

library(gsubfn) # strapply
library(chron) # as.chron
Lines <- "DATETIMEFREQ
01/09/2009  59.036
01/09/2009 00:00:01 58.035
01/09/2009 00:00:02 53.035
01/09/2009 00:00:03 47.033
01/09/2009 00:00:04 52.03
01/09/2009 00:00:05 55.025"
L <- readLines(textConnection(Lines))[-1]

pat <- "(../../) (..:..:..){0,1} *([0-9.]+)"
s <- strapply(L, pat, ~ c(paste(x, y, "00:00:00"), z), simplify = rbind)

fmt <- "%m/%d/%Y %H:%M:%S"
DF <- data.frame(Time = as.chron(s[,1], fmt), Freq = as.numeric(s[,2]))

DF

The final output looks like this:

> DF
 Time   Freq
1 (01/09/09 00:00:00) 59.036
2 (01/09/09 00:00:01) 58.035
3 (01/09/09 00:00:02) 53.035
4 (01/09/09 00:00:03) 47.033
5 (01/09/09 00:00:04) 52.030
6 (01/09/09 00:00:05) 55.025

If the times are unique you could consider making a zoo object out of
it by replacing the DF<- statement with:

library(zoo)
z <- zoo(as.numeric(s[,2]), as.chron(s[,1], fmt))

See the three vignettes in the zoo package.


On Mon, Oct 5, 2009 at 5:14 PM, esp  wrote:
>
> Date-Time-Stamp input method to correctly interpret user-specific
> formats:coding is  90% there - based on exmple at
> http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html
> ...anyone got the last 10% please?
>
> CONTEXT:
>
> Data is received where one of the columns is a datetimestamp.  At midnight,
> the value represented as text in this column consists of just the date part,
> e.g. "01/09/2009".  At other times, the value in the column contains both
> date and time e.g. "01/09/2009 00:00:01".  The goal is to read it into R as
> an appropriate data type, where for example date arithmetic can be
> performed.  As far as I can tell, the most appropriate such data type is
> POSIXct.  The trick then is to read in the datetimestamps in the data as
> this type.
>
> PROBLEM:
>
> POSIXct defaults to a text representation almost but not quite like my
> received data.  The main difference is that the POSIXct date part is in
> reverse order, e.g. "2009-09-01".  It is possible to define a different
> format where date and time parts look like my data but when encountering
> datetimestamps where only the date part is present (as in the case of my
> midnight data) then this is interpreted as NA i.e. undefined.
>
> SOLUTION (ALMOST):
>
> There is a workaround (based on example at
> http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html).  It is possible to
> define a class then read the data in as this class.  For such a class it is
> possible to define a class method, in terms of a function, for translating a
> text (character string) representation into a value. In that function, one
> can use a conditional expression to treat midnight datetimestamps
> differently from those at other times of day.  The example below does that.
> In order to apply this function over all of the datetimestamp values in the
> column, it is necessary to use something like R's 'sapply' function.
>
> SNAG:
>
> The function below implements this approach.  A datetimestamp with only the
> date part, including leading zeroes, is always length 10 (characters).   It
> correctly interprets the datetimestamp values, but unfortunately translates
> them into what appear to be numeric type.  I am actually uncertain precisely
> what is happening, as I am very new to R and have most certainly stretched
> myself in writing this code.  I think perhaps it returns a list and
> something associated with this aspect makes it "forget" the data type is
> POSIXct or at least how such a type should be displayed as text or what to
> do about it.
>
> PLEA:
>
> Please, can anyone give any help whatsoever, however tenuous?
>
> CODE, DATA & RESULTS:
>
> Function to Read required data, intended to make the datetime column of the
> data (example given further below) into POSIXct values:
> <<<
> spot_frequency_readin <- function(file,nrows=-1) {
>
> # create temp class
> setClass("t_class2_", representation("character"))
> setAs("character", "t_class2_", function(from) {sapply(from, function(x) {
>  if (nchar(x)==10) {
> as.POSIXct(strptime(x,format="%d/%m/%Y"))
> }
> else {
> as.POSIXct(strptime(x,format="%d/%m/%Y %H:%M:%S"))
> }
> }
> )
> }
> )
>
> #(for format symbols, see "R Reference Card")
>
> # read the file (TSV)
> file <- read.delim(file, header=TRUE, comment.char = "", nrows=nrows,
> as.is=FALSE, col.names=c("DATETIME", "FREQ"), colClasses=c("t_class2_",
> "numeric") )
>
> # remove it now that we are done with it
> removeClass("t_class2_")
>
> return(file)
> }

> This

Re: [R] Date-Time-Stamp input method for user-specific formats

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 5:14 PM, esp wrote:



Date-Time-Stamp input method to correctly interpret user-specific
formats: coding is 90% there - based on example at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html
...anyone got the last 10% please?

CONTEXT:

Data is received where one of the columns is a datetimestamp.  At  
midnight,
the value represented as text in this column consists of just the  
date part,
e.g. "01/09/2009".  At other times, the value in the column contains  
both
date and time e.g. "01/09/2009 00:00:01".  The goal is to read it  
into R as

an appropriate data type, where for example date arithmetic can be
performed.  As far as I can tell, the most appropriate such data  
type is
POSIXct.  The trick then is to read in the datetimestamps in the  
data as

this type.

PROBLEM:

POSIXct defaults to a text representation almost but not quite like my
received data.  The main difference is that the POSIXct date part is  
in
reverse order, e.g. "2009-09-01".  It is possible to define a  
different
format where date and time parts look like my data but when  
encountering
datetimestamps where only the date part is present (as in the
case of my

midnight data) then this is interpreted as NA i.e. undefined.

SOLUTION (ALMOST):

There is a workaround (based on example at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html).  It is  
possible to
define a class then read the data in as this class.  For such a  
class it is
possible to define a class method, in terms of a function, for  
translating a
text (character string) representation into a value. In that  
function, one

can use a conditional expression to treat midnight datetimestamps
differently from those at other times of day.  The example below  
does that.
In order to apply this function over all of the datetimestamp values  
in the

column, it is necessary to use something like R's 'sapply' function.

SNAG:

The function below implements this approach.  A datetimestamp with  
only the
date part, including leading zeroes, is always length 10  
(characters).   It
correctly interprets the datetimestamp values, but unfortunately  
translates
them into what appear to be numeric type.  I am actually uncertain  
precisely
what is happening, as I am very new to R and have most certainly  
stretched

myself in writing this code.  I think perhaps it returns a list and
something associated with this aspect makes it "forget" the data  
type is
POSIXct or at least how such a type should be displayed as text or  
what to

do about it.

PLEA:

Please, can anyone give any help whatsoever, however tenuous?

CODE, DATA & RESULTS:

Function to Read required data, intended to make the datetime column  
of the

data (example given further below) into POSIXct values:
<<<
spot_frequency_readin <- function(file,nrows=-1) {

# create temp class
setClass("t_class2_", representation("character"))
setAs("character", "t_class2_", function(from) {sapply(from,  
function(x) {

 if (nchar(x)==10) {
as.POSIXct(strptime(x,format="%d/%m/%Y"))
}
else {
as.POSIXct(strptime(x,format="%d/%m/%Y %H:%M:%S"))
}
}
)
}
)

#(for format symbols, see "R Reference Card")

# read the file (TSV)
file <- read.delim(file, header=TRUE, comment.char = "", nrows=nrows,
as.is=FALSE, col.names=c("DATETIME", "FREQ"),  
colClasses=c("t_class2_",

"numeric") )

# remove it now that we are done with it
removeClass("t_class2_")

return(file)
}


This appears to process each row of data correctly,
but the values returned look like numeric equivalents of POSIXct, as
opposed to the expected character-based (string) equivalents:


Example Data:
<<<
DATETIMEFREQ
01/09/2009  59.036
01/09/2009 00:00:01 58.035
01/09/2009 00:00:02 53.035
01/09/2009 00:00:03 47.033
01/09/2009 00:00:04 52.03
01/09/2009 00:00:05 55.025





Example Function Call:
<<<

spot = spot_frequency_readin("mydatafile.txt",4)





Result of Example Function Call:
<<<

spot[1]

   DATETIME

1 1251759600
2 1251759601
3 1251759602
4 1251759603





What I ideally wanted to see (whether or not the time part of the
datetimestamp at midnight was displayed):
<<<

spot[1]

   DATETIME

01/09/2009 00:00:00
01/09/2009 00:00:01
01/09/2009 00:00:02
01/09/2009 00:00:03
01/09/2009 00:00:04





For the function as defined above using 'sapply'

spot[,1]

01/09/2009 01/09/2009 00:00:01 01/09/2009 00:00:02 01/09/2009
00:00:03
1251759600  1251759601  1251759602
1251759603

This was unexpected - it seems to have displayed the datetimestamp  
values

both as per my defined character-string representation and as numeric
values.


as.POSIXct(spot$DATETIME,  origin="1970-01-01")
   01/09/2009   01/09/2009 00:00:01   01/09/2009  
00:00:02
"2009-09-01 05:00:00 EDT" "2009-09-01 05:00:01 EDT" "2009-09-01  
05:00:02 EDT"

  01/09/2009 00:00:03
"2009-09-01 05:00:03 EDT"

If you want to get rid of the somewhat extranous names:

> unn

Re: [R] Date-Time-Stamp input method for user-specific formats

2009-10-05 Thread Don MacQueen

Off the top of my head, I think you're working too hard at this.

I would read in the timestamp  column as a character string. Then, 
find those where the string length is too short [using nchar()], 
append "00:00:00" to those [using paste()], and then convert to 
POSIXt [using as.POSIXct()].


No need to define new classes. Simple and easy to understand.

-Don
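A minimal sketch of that recipe, assuming a tab-separated file with DATETIME and FREQ columns like the example data quoted below (the file name is only illustrative):

spot <- read.delim("mydatafile.txt",
                   col.names = c("DATETIME", "FREQ"),
                   colClasses = c("character", "numeric"))
short <- nchar(spot$DATETIME) == 10                      # date-only (midnight) rows
spot$DATETIME[short] <- paste(spot$DATETIME[short], "00:00:00")
spot$DATETIME <- as.POSIXct(strptime(spot$DATETIME, "%d/%m/%Y %H:%M:%S"))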

At 2:14 PM -0700 10/5/09, esp wrote:

Date-Time-Stamp input method to correctly interpret user-specific
formats: coding is 90% there - based on example at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html
...anyone got the last 10% please? 


CONTEXT:

Data is received where one of the columns is a datetimestamp.  At midnight,
the value represented as text in this column consists of just the date part,
e.g. "01/09/2009".  At other times, the value in the column contains both
date and time e.g. "01/09/2009 00:00:01".  The goal is to read it into R as
an appropriate data type, where for example date arithmetic can be
performed.  As far as I can tell, the most appropriate such data type is
POSIXct.  The trick then is to read in the datetimestamps in the data as
this type.

PROBLEM:

POSIXct defaults to a text representation almost but not quite like my
received data.  The main difference is that the POSIXct date part is in
reverse order, e.g. "2009-09-01".  It is possible to define a different
format where date and time parts look like my data but when encountering
datetimestamps where only the date part is present (as in the case of my
midnight data) then this is interpreted as NA i.e. undefined.

SOLUTION (ALMOST):

There is a workaround (based on example at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html).  It is possible to
define a class then read the data in as this class.  For such a class it is
possible to define a class method, in terms of a function, for translating a
text (character string) representation into a value. In that function, one
can use a conditional expression to treat midnight datetimestamps
differently from those at other times of day.  The example below does that.
In order to apply this function over all of the datetimestamp values in the
column, it is necessary to use something like R's 'sapply' function.

SNAG:

The function below implements this approach.  A datetimestamp with only the
date part, including leading zeroes, is always length 10 (characters).   It
correctly interprets the datetimestamp values, but unfortunately translates
them into what appear to be numeric type.  I am actually uncertain precisely
what is happening, as I am very new to R and have most certainly stretched
myself in writing this code.  I think perhaps it returns a list and
something associated with this aspect makes it "forget" the data type is
POSIXct or at least how such a type should be displayed as text or what to
do about it.

PLEA:

Please, can anyone give any help whatsoever, however tenuous?

CODE, DATA & RESULTS:

Function to Read required data, intended to make the datetime column of the
data (example given further below) into POSIXct values:
<<<
spot_frequency_readin <- function(file,nrows=-1) {

# create temp class
setClass("t_class2_", representation("character"))
setAs("character", "t_class2_", function(from) {sapply(from, function(x) {
  if (nchar(x)==10) {
as.POSIXct(strptime(x,format="%d/%m/%Y"))
}
else {
as.POSIXct(strptime(x,format="%d/%m/%Y %H:%M:%S"))
}
}
)
}
)

#(for format symbols, see "R Reference Card")

# read the file (TSV)
file <- read.delim(file, header=TRUE, comment.char = "", nrows=nrows,
as.is=FALSE, col.names=c("DATETIME", "FREQ"), colClasses=c("t_class2_",
"numeric") )

# remove it now that we are done with it
removeClass("t_class2_")

return(file)
}



This appears to process each row of data correctly,
but the values returned look like numeric equivalents of POSIXct, as opposed
to the expected character-based (string) equivalents:


Example Data:
<<<
DATETIMEFREQ
01/09/2009  59.036
01/09/2009 00:00:01 58.035
01/09/2009 00:00:02 53.035
01/09/2009 00:00:03 47.033
01/09/2009 00:00:04 52.03
01/09/2009 00:00:05 55.025





Example Function Call:
<<<

 spot = spot_frequency_readin("mydatafile.txt",4)





Result of Example Function Call:
<<<

 spot[1]

DATETIME

1 1251759600
2 1251759601
3 1251759602
4 1251759603





What I ideally wanted to see (whether or not the time part of the
datetimestamp at midnight was displayed):
<<<

 spot[1]

DATETIME

01/09/2009 00:00:00
01/09/2009 00:00:01
01/09/2009 00:00:02
01/09/2009 00:00:03
01/09/2009 00:00:04





For the function as defined above using 'sapply'

 spot[,1]

 01/09/2009 01/09/2009 00:00:01 01/09/2009 00:00:02 01/09/2009
00:00:03
 1251759600  1251759601  1251759602
1251759603


This was unexpected - it seems to have displayed the datetimestamp values
both as per my defined character-string representation and as numeric
values. 


Alternative

[R] Date-Time-Stamp input method for user-specific formats

2009-10-05 Thread esp

Date-Time-Stamp input method to correctly interpret user-specific
formats:coding is  90% there - based on exmple at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html
...anyone got the last 10% please?  

CONTEXT:

Data is received where one of the columns is a datetimestamp.  At midnight,
the value represented as text in this column consists of just the date part,
e.g. "01/09/2009".  At other times, the value in the column contains both
date and time e.g. "01/09/2009 00:00:01".  The goal is to read it into R as
an appropriate data type, where for example date arithmetic can be
performed.  As far as I can tell, the most appropriate such data type is
POSIXct.  The trick then is to read in the datetimestamps in the data as
this type.

PROBLEM:

POSIXct defaults to a text representation almost but not quite like my
received data.  The main difference is that the POSIXct date part is in
reverse order, e.g. "2009-09-01".  It is possible to define a different
format where date and time parts look like my data but when encountering
datetimestamps where only the date part is present (as in the case of my
midnight data) then this is interpreted as NA i.e. undefined.

SOLUTION (ALMOST):

There is a workaround (based on example at
http://tolstoy.newcastle.edu.au/R/help/05/02/12003.html).  It is possible to
define a class then read the data in as this class.  For such a class it is
possible to define a class method, in terms of a function, for translating a
text (character string) representation into a value. In that function, one
can use a conditional expression to treat midnight datetimestamps
differently from those at other times of day.  The example below does that. 
In order to apply this function over all of the datetimestamp values in the
column, it is necessary to use something like R's 'sapply' function.

SNAG:

The function below implements this approach.  A datetimestamp with only the
date part, including leading zeroes, is always length 10 (characters).   It
correctly interprets the datetimestamp values, but unfortunately translates
them into what appear to be numeric type.  I am actually uncertain precisely
what is happening, as I am very new to R and have most certainly stretched
myself in writing this code.  I think perhaps it returns a list and
something associated with this aspect makes it "forget" the data type is
POSIXct or at least how such a type should be displayed as text or what to
do about it.

PLEA:

Please, can anyone give any help whatsoever, however tenuous?

CODE, DATA & RESULTS:

Function to Read required data, intended to make the datetime column of the
data (example given further below) into POSIXct values:
<<<
spot_frequency_readin <- function(file,nrows=-1) {

# create temp class
setClass("t_class2_", representation("character"))
setAs("character", "t_class2_", function(from) {sapply(from, function(x) {
  if (nchar(x)==10) {
as.POSIXct(strptime(x,format="%d/%m/%Y"))
}
else {
as.POSIXct(strptime(x,format="%d/%m/%Y %H:%M:%S"))
}
}
)
}
)

#(for format symbols, see "R Reference Card")

# read the file (TSV)
file <- read.delim(file, header=TRUE, comment.char = "", nrows=nrows,
as.is=FALSE, col.names=c("DATETIME", "FREQ"), colClasses=c("t_class2_",
"numeric") )

# remove it now that we are done with it
removeClass("t_class2_")

return(file)
}
>>>
This appears to process each row of data correctly,
but the values returned look like numeric equivalents of POSIXct, as opposed
to the expected character-based (string) equivalents:


Example Data:
<<<
DATETIME            FREQ
01/09/2009  59.036
01/09/2009 00:00:01 58.035
01/09/2009 00:00:02 53.035
01/09/2009 00:00:03 47.033
01/09/2009 00:00:04 52.03
01/09/2009 00:00:05 55.025
>>>


Example Function Call:
<<<
> spot = spot_frequency_readin("mydatafile.txt",4)
>>>


Result of Example Function Call:
<<<
> spot[1]
DATETIME

1 1251759600
2 1251759601
3 1251759602
4 1251759603
>>>


What I ideally wanted to see (whether or not the time part of the
datetimestamp at midnight was displayed):
<<<
> spot[1]
DATETIME

01/09/2009 00:00:00
01/09/2009 00:00:01
01/09/2009 00:00:02
01/09/2009 00:00:03
01/09/2009 00:00:04
>>>


For the function as defined above using 'sapply'
> spot[,1]
 01/09/2009 01/09/2009 00:00:01 01/09/2009 00:00:02 01/09/2009
00:00:03 
 1251759600  1251759601  1251759602 
1251759603

This was unexpected - it seems to have displayed the datetimestamp values
both as per my defined character-string representation and as numeric
values.  

Alternatively, if I replace the 'sapply' by a 'lapply' then I get something
closer to what I expect.  It is at least what looks like R's default text
representation for POSIXct datetimes, even if it is not in my preferred
format.
<<<
> spot[,1]

[[1]]
[1] "2009-09-01 BST"

[[2]]
[1] "2009-09-01 00:00:01 BST"

[[3]]
[1] "2009-09-01 00:00:02 BST"

[[4]]
[1] "2009-09-01 00:00:03 BST"
>>>
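
A hedged aside, not from the original post: sapply() simplifies the
per-element POSIXct results into a plain numeric vector, which is why the
class is lost.  One workaround is to skip the per-element apply entirely and
convert the whole character column in a single vectorized step, padding the
date-only entries with a midnight time.  A minimal sketch, assuming the same
file layout as the example above:
<<<
## Hedged sketch: read DATETIME as plain character, pad the 10-character
## date-only entries with "00:00:00", then convert the whole column at once.
spot <- read.delim("mydatafile.txt", header = TRUE,
                   col.names = c("DATETIME", "FREQ"),
                   colClasses = c("character", "numeric"))
d <- spot$DATETIME
d[nchar(d) == 10] <- paste(d[nchar(d) == 10], "00:00:00")
spot$DATETIME <- as.POSIXct(strptime(d, format = "%d/%m/%Y %H:%M:%S"))
str(spot)  # DATETIME should now print as POSIXct rather than numeric
>>>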

Re: [R] how to have 'match' ignore no-matches

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 4:47 PM, Jill Hollenbach wrote:



Let me clarify:
I'm using this--

dfnew <- sapply(df, function(df) lookuptable[match(df, lookuptable[, 1]), 2])


It seems a very bad idea to use the same name in your functions as  
that of the dataframe argument you might be passing. You end up with  
two different objects (in different environments) both with the name  
"df". The R interpreter of course can handle keeping those two objects  
separate, but my concern is for the poor "wetware" interpreters  
including you out here in R-help-land.





lookup

0101   01:01
0201   02:01
0301   03:01
0401   04:01


These are not cut-and-pastable. (And I cannot figure out what data  
type you expect them to be. They are not displayed in a form that I  
would expect to see at the console from either a matrix or a  
dataframe. Use the dput function to show an ASCII interpretable form  
that can be unambiguously assigned to a variable.)


lookup <- read.table(textConnection("
 0101   01:01
 0201   02:01
 0301   03:01
 0401   04:01") )


> lookup   #as a dataframe would be displayed
   V1    V2
1 101 01:01
2 201 02:01
3 301 03:01
4 401 04:01
> str(lookup)
'data.frame':   4 obs. of  2 variables:
 $ V1: int  101 201 301 401

(Impossible to tell if you have your first column as an integer or  
character (or even whether you are thinking of them as columns at all  
given how you later indicate you want your output.)


 $ V2: Factor w/ 4 levels "01:01","02:01",..: 1 2 3 4
> dput(lookup)
structure(list(V1 = c(101L, 201L, 301L, 401L), V2 =  
structure(1:4, .Label = c("01:01",

"02:01", "03:01", "04:01"), class = "factor")), .Names = c("V1",
"V2"), class = "data.frame", row.names = c(NA, -4L))

Easy and completely unambiguous to type "lookup <-" and then paste in  
the output of dput.





df

0101   0301
0201   0401
0101   0502


dfnew

01:01   03:01
02:01   04:01
01:01   NA

but what I want is:

dfnew2

01:01   03:01
02:01   04:01
01:01   0502

thanks again,
Jill




Jill Hollenbach wrote:


Hi all,
I think this is a very basic question, but I'm new to this so  
please bear

with me.

I'm using match to translate elements of a data frame using a lookup
table. If the content of a particular cell is not found in the lookup
table, the function returns NA. I'm wondering how I can just ignore  
those
cells, and return the original contents if no match is found in the  
lookup

table.

Many thanks in advance, this site has been extremely helpful for me  
so

far,
Jill

Jill Hollenbach, PhD, MPH
   Assistant Staff Scientist
   Center for Genetics
   Children's Hospital Oakland Research Institute
   jhollenb...@chori.org



--
View this message in context: 
http://www.nabble.com/how-to-have-%27match%27-ignore-no-matches-tp25756601p25757009.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Visualizing some data

2009-10-05 Thread Bert Gunter
See http://addictedtor.free.fr/graphiques/

for many examples with code.

Bert Gunter
Genentech Nonclinical Biostatistics
 
 

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of sverre
Sent: Monday, October 05, 2009 1:13 PM
To: r-help@r-project.org
Subject: [R] Visualizing some data


Hi all,

I have an easy data set. It has three columns: Subject, Condition, dprime. A
small excerpt follows, in order to illustrate:

Subject Condition   dprime
HY  s   3.725846
CM  s   2.877658
EH  s   5
HY  st  2.783553
CM  st  2.633955
EH  st  5

I want to visualize this. What I thought of was having dprime on the y-axis
(scale 0-5), Subject on the x-axis, and then two lines plotted to show the
dprime value for condition s and condition st. I would like s to be plotted
with a solid line and st plotted as a dashed line.

I assume this is very easy. I just have no idea how to do it. I would
greatly appreciate any help.

Thanks.
-- 
View this message in context:
http://www.nabble.com/Visualizing-some-data-tp25756996p25756996.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] 3D polar plots

2009-10-05 Thread David Winsemius
Searching on "spherical coordinates" brings up quite a few hits in
r-search:


Try:
http://search.r-project.org/cgi-bin/namazu.cgi?query=%22spherical+coordinates%22&max=100&result=normal&sort=score&idxname=functions&idxname=Rhelp08&idxname=views&idxname=Rhelp02

The most relevant appears to be:

http://finzi.psych.upenn.edu/R/library/RFOC/html/00Index.html

A similar question was posed a couple of months ago and the answer was  
that the responder was not aware of any but imagined that you would  
end up using rgl.
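
As a rough, hedged sketch of that idea (made-up data; it assumes the rgl
package is installed), one could convert the spherical coordinates to
Cartesian and pass them to plot3d():

library(rgl)
set.seed(1)
ra   <- runif(200, 0, 2 * pi)      # right ascension, radians
dec  <- runif(200, -pi/2, pi/2)    # declination, radians
dist <- runif(200, 1, 10)          # some "distance" measure (e.g. comoving)
x <- dist * cos(dec) * cos(ra)
y <- dist * cos(dec) * sin(ra)
z <- dist * sin(dec)
plot3d(x, y, z, xlab = "x", ylab = "y", zlab = "z")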



On Oct 5, 2009, at 4:31 PM, Shane B. wrote:


Hello,

I am very new to R. I would like to plot astronomy data by right
ascension, declination, and various "distance" values, such as
redshift and comoving distance, in 3D. Are there any 3D polar plotting
functions? I can't seem to locate any information on whether they exist
or not.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] GLM quasipoisson error

2009-10-05 Thread Ben Bolker



atorso wrote:
> 
> Hello,
> 
> I'm having an error when trying to fit the next GLM:
> 
>>>model<-glm(response ~ CLONE_M + CLONE_F + HATCHING
> +(CLONE_M*CLONE_F) + (CLONE_M*HATCHING) + (CLONE_F*HATCHING) +
> (CLONE_M*CLONE_F*HATCHING), family=quasipoisson)
>>> anova(model, test="Chi")
> 
>>Error in if (dispersion == 1) Inf else object$df.residual : 
>   missing value where TRUE/FALSE needed
> 
> If I fit the same model by using the Poisson distribution, it works.
> 
> I have not a clue about where the problem could be. Do you have any
> idea or suggestion I could try?
> 
> 

It would help if you gave a reproducible example.

The following simple example seems to work.

> x = runif(100)
> y = rpois(100,x)
> mq = glm(y~x,family="quasipoisson")
> anova(mq,test="Chi")

Other points: (1) I think you're a little bit confused about
R model notation.  * means "main effects and all interactions",
: means "interaction only".  You could rewrite your model
more correctly as

model<-glm(response ~ CLONE_M + CLONE_F + HATCHING
+(CLONE_M:CLONE_F) + (CLONE_M:HATCHING) + (CLONE_F:HATCHING) +
(CLONE_M:CLONE_F:HATCHING), family=quasipoisson)

or even better (compactly) as 

model<-glm(response ~  CLONE_M*CLONE_F*HATCHING, 
family=quasipoisson)

although all three ways give equivalent answers since the extra
main-effect terms get dropped silently.

(2) you should probably use test="F" rather than test="Chisq"
for a quasi- model: see Crawley 2002 and/or Venables and Ripley.
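
A small hedged illustration of point (2), re-creating the simulated example
above:

# With a quasipoisson fit, ask anova() for an F test rather than a
# chi-squared test.
x <- runif(100)
y <- rpois(100, x)
mq <- glm(y ~ x, family = "quasipoisson")
anova(mq, test = "F")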


-- 
View this message in context: 
http://www.nabble.com/GLM-quasipoisson-error-tp25754404p25757025.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] 3D polar plots

2009-10-05 Thread Shane B.
Hello,

I am very new to R. I would like to plot astronomy data by right
ascension, declination, and various "distance" values, such as
redshift and comoving distance, in 3D. Are there any 3D polar plotting
functions? I can't seem to locate any information on whether they exist
or not.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to have 'match' ignore no-matches

2009-10-05 Thread Jill Hollenbach

Let me clarify:
I'm using this--

dfnew<- sapply(df, function(df) lookuptable[match(df, lookuptable [ ,1]),
2])

>lookup
0101   01:01
0201   02:01
0301   03:01
0401   04:01

>df
0101   0301
0201   0401
0101   0502

>dfnew
01:01   03:01
02:01   04:01
01:01   NA

but what I want is:
>dfnew2
01:01   03:01
02:01   04:01
01:01   0502

thanks again,
Jill
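
A hedged sketch, not from the thread, of one way to get dfnew2 with the
objects shown above: match() supplies the row index into the lookup table,
and the original value is kept wherever that index is NA.  It assumes 'df'
and 'lookup' hold character data as displayed.

translate <- function(v, lookup) {
  i <- match(v, lookup[, 1])                       # NA where no match
  out <- as.character(v)                           # keep originals by default
  out[!is.na(i)] <- as.character(lookup[i[!is.na(i)], 2])
  out
}
dfnew2 <- as.data.frame(lapply(df, translate, lookup = lookup),
                        stringsAsFactors = FALSE)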




Jill Hollenbach wrote:
> 
> Hi all,
> I think this is a very basic question, but I'm new to this so please bear
> with me.
> 
> I'm using match to translate elements of a data frame using a lookup
> table. If the content of a particular cell is not found in the lookup
> table, the function returns NA. I'm wondering how I can just ignore those
> cells, and return the original contents if no match is found in the lookup
> table.
> 
> Many thanks in advance, this site has been extremely helpful for me so
> far,
> Jill
> 
> Jill Hollenbach, PhD, MPH
> Assistant Staff Scientist
> Center for Genetics
> Children's Hospital Oakland Research Institute
> jhollenb...@chori.org
> 

-- 
View this message in context: 
http://www.nabble.com/how-to-have-%27match%27-ignore-no-matches-tp25756601p25757009.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to have 'match' ignore no-matches

2009-10-05 Thread Rolf Turner


On 6/10/2009, at 9:01 AM, Jill Hollenbach wrote:



Hi all,
I think this is a very basic question, but I'm new to this so  
please bear

with me.

I'm using match to translate elements of a data frame using a  
lookup table.
If the content of a particular cell is not found in the lookup  
table, the
function returns NA. I'm wondering how I can just ignore those  
cells, and

return the original contents if no match is found in the lookup table.

Many thanks in advance, this site has been extremely helpful for me  
so far,

Jill


It's not clear to me just what you want to accomplish, but it's possible
that setting nomatch=0 in your call to match() might get you somewhere.

Note that xxx[c(0,1,2,3)] gives xxx[c(1,2,3)] whereas
xxx[c(NA,1,2,3)] gives c(NA,xxx[c(1,2,3)]).

cheers,

Rolf Turner

##
Attention:\ This e-mail message is privileged and confid...{{dropped:9}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to have 'match' ignore no-matches

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 4:01 PM, Jill Hollenbach wrote:



Hi all,
I think this is a very basic question, but I'm new to this so please  
bear

with me.

I'm using match to translate elements of a data frame using a lookup  
table.
If the content of a particular cell is not found in the lookup  
table, the
function returns NA. I'm wondering how I can just ignore those  
cells, and

return the original contents if no match is found in the lookup table.


Can you construct a simple example (with sufficient complexity and  
lack of matches)  that can be cut-and-pasted into the console along  
with specification of how you want your results?


I cannot understand what you are asking for with a "translat[ion] of a  
dataframe", nor what you mean when you request the "return [of] the original  
contents if no match is found in the lookup table."


--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Visualizing some data

2009-10-05 Thread sverre

Hi all,

I have an easy data set. It has three columns: Subject, Condition, dprime. A
small excerpt follows, in order to illustrate:

Subject Condition   dprime
HY  s   3.725846
CM  s   2.877658
EH  s   5
HY  st  2.783553
CM  st  2.633955
EH  st  5

I want to visualize this. What I thought of was having dprime on the y-axis
(scale 0-5), Subject on the x-axis, and then two lines plotted to show the
dprime value for condition s and condition st. I would like s to be plotted
with a solid line and st plotted as a dashed line.

I assume this is very easy. I just have no idea how to do it. I would
greatly appreciate any help.

Thanks.
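
A hedged sketch, not part of the original post, of one way to draw this with
base graphics using the excerpt above (the data frame name 'dat' is an
assumption):

# dprime on the y-axis (0-5), Subject on the x-axis,
# solid line for condition "s", dashed line for condition "st".
dat <- data.frame(
  Subject   = rep(c("HY", "CM", "EH"), 2),
  Condition = rep(c("s", "st"), each = 3),
  dprime    = c(3.725846, 2.877658, 5, 2.783553, 2.633955, 5)
)
s  <- dat[dat$Condition == "s",  ]
st <- dat[dat$Condition == "st", ]
plot(seq_len(nrow(s)), s$dprime, type = "b", lty = 1, ylim = c(0, 5),
     xaxt = "n", xlab = "Subject", ylab = "dprime")
lines(seq_len(nrow(st)), st$dprime, type = "b", lty = 2)
axis(1, at = seq_len(nrow(s)), labels = s$Subject)
legend("bottomright", legend = c("s", "st"), lty = 1:2)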
-- 
View this message in context: 
http://www.nabble.com/Visualizing-some-data-tp25756996p25756996.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to have 'match' ignore no-matches

2009-10-05 Thread Jill Hollenbach

Hi all,
I think this is a very basic question, but I'm new to this so please bear
with me.

I'm using match to translate elements of a data frame using a lookup table.
If the content of a particular cell is not found in the lookup table, the
function returns NA. I'm wondering how I can just ignore those cells, and
return the original contents if no match is found in the lookup table.

Many thanks in advance, this site has been extremely helpful for me so far,
Jill

Jill Hollenbach, PhD, MPH
Assistant Staff Scientist
Center for Genetics
Children's Hospital Oakland Research Institute
jhollenb...@chori.org
-- 
View this message in context: 
http://www.nabble.com/how-to-have-%27match%27-ignore-no-matches-tp25756601p25756601.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] interpreting glmer results

2009-10-05 Thread Kingsford Jones
On Mon, Oct 5, 2009 at 11:57 AM, Bert Gunter  wrote:
[snip]
> -- ... and if the correlations are "high" it tells you that your model may
> be near unidentifiable = the model parameters may not be effectively
> estimated from the data. To understand what "high", "near" and "effectively"
> may mean for your data, CYLS ("Consult your local statistician")
>
> (If you really wish to use sophisticated tools like glmer, you really need
> to understand what you're doing. There is no guarantee of immunity from the
> consequences of ignorance.)

Indeed.  I was simply trying to answer the 'basic' linear models
questions and staying away from judgements.  However, since you bring
it up, I'll go ahead and climb up on a soapbox ;-)


\begin{rant}

I think it is a problem in ecology (and I'm sure other fields) that
there is huge demand for tools allowing inferences about complex
systems, yet very few have the skills necessary to safely use the
tools provided by statisticians.  For example, a common situation for
an ecologist is to be faced with analyzing observational data with
temporal and spatial non-independence of observations, lack of
balance, lack of normality, and often zero inflation and/or
under/over-dispersion.  Reviewers know enough to understand the
problems this presents for classical techniques, and therefore use of
complex tools (such as mixed models or hierarchical Bayesian models)
can become a prerequisite to getting published.  In other words,
careers depend on using tools that ecologists who spend their time
focused on ecology rather than mathematical statistics have little
hope of truly understanding.  This is certainly no jab at the
intelligence of ecologists -- it's just that when you get into areas
such as drawing inferences from a GLMM, the proportion of
statisticians, even, who understand the subtleties and pitfalls is
small, and when you throw in say zero inflation and spatially
structured covariance matrices that small proportion dwindles
drastically.

\end{rant}


So, I suppose what I should have done after mentioning the LRT was to
provide this list I sent to r-sig-ecology awhile back (with a LMM in
mind):

- LRTs aren't valid to compare REML fits with
different fixed effects because REML essentially
maximizes A'Y where E[A'Y] = 0, so changing the
fixed effects changes A' which changes the data
making the likelihoods non-comparable.
- Pinheiro and Bates (2000, pg 87-88) recommend
LRTs with the standard X^2 distribution not be
used to compare ML fits with different fixed effects
because the tests can be very "anticonservative",
particularly as the number of parameters being
removed becomes large relative to the number of
observations.
- LRTs for differences in the random part of the
model when the fixed effects are the same can be
conservative due to the null value of 0 being on
the edge of the variance parameter space.
- It seems the issue of counting the number of
parameters being estimated will be an issue when
comparing models that differ in their random
effects.


best,

Kingsford Jones




>
> -- Bert
>
> hth,
>
> Kingsford
>
>
>
>> Many thanks for any help.
>>
>> Cheers,
>> Umesh Srinivasan,
>> Bangalore, India
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



[R] boundary situation that caught me by surprise

2009-10-05 Thread RICHARD M. HEIBERGER
> tmp2 <- matrix(1:8,4,2)
> dimnames(tmp2)
NULL
> tmp2
     [,1] [,2]
[1,]    1    5
[2,]    2    6
[3,]    3    7
[4,]    4    8
> dimnames(tmp2)[[2]] <- c("a","b")
> tmp2
 a b
[1,] 1 5
[2,] 2 6
[3,] 3 7
[4,] 4 8
> tmp1 <- matrix(1:4,4,1)
> dimnames(tmp1)
NULL
> tmp1
     [,1]
[1,]    1
[2,]    2
[3,]    3
[4,]    4
> dimnames(tmp1)[[2]] <- "a"
Error in dimnames(tmp1)[[2]] <- "a" : 'dimnames' must be a list
## this error caught me by surprise.  Since this is a replacement, I
## think it should work parallel
## to the two column case.  My reading of the ?Extract description of drop
   drop: For matrices and arrays.  If ‘TRUE’ the result is coerced to
the lowest possible dimension (see
  the examples).  This only works for extracting elements, not
for the replacement.  See ‘drop’
  for further details.
suggests that this should work for the one column case.

Two alternate methods that do work are
> dimnames(tmp1) <- list(1:4,"a")
> colnames(tmp1) <- "a"

Rich

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ubuntu, Revolutions, R

2009-10-05 Thread Andy Choens
On Monday 05 October 2009 09:38:21 am David M Smith wrote:
> Andrew is correct: the upcoming release of Ubuntu (Karmic Koala) will
> feature the REvolution R distribution. (I am a REvolution Computing
> employee.) Our developers have been working with Canonical's
> representatives over the past several months to upgrade R in Ubuntu to
> 2.9.2 and to include the REvolution R extensions.
>
> > My question(s) for the community is this (pick any question(s) you like
> > to answer:
> >Should I install the REvolution Computing packages?
> >Do these packages really make R faster?
> >Are these packages stable?
> >What are your experiences with REvolution Computing software?
>
> Whether you install the REvolution Computing packages is up to you.
> When you upgrade to KK, the only change made to stock R is the
> .Rprofile.site file, adding the message about how to install the
> extensions. (You can edit the .Rprofile.site file if you prefer.)
>
> If you do install the extensions, no changes are made to the core R
> language (it is 100% compatible with stock R). R will be linked to
> multi-threaded math libraries, which will improve performance for some
> mathematical operations (particularly on a multi-core system, where
> more than 1 processor will be used). So you should expect it to make R
> faster.
>
> Installing the extensions also installs some additional packages from
> REvolution Computing, including foreach and iterators, and Simon
> Urbanek's multicore package from CRAN. The REvolution packages have
> been in use for over a year, and are very stable. In any case they are
> not attached by default. But if you do load these packages, you can
> use the "foreach" function to parallelize loops, making R run faster
> on multicore systems.
>
> I'll leave others to speak of their experiences of REvolution
> Computing software (our contributions to the community include the
> packages nws, foreach, iterators, doSNOW and doMC and REvolution R
> itself). But from my personal perspective, I'm proud to have been able
> to extend awareness and use of R to new domains, and to improve the
> performance of R for many users.
>
> # David Smith
> Director of Community, REvolution Computing
>

David,

Thank you for this informative response, and for identifying yourself clearly 
as an employee of REvolution Computing. Being able to use more than a single 
processor for some R projects sounds tantalizing and I will admit that I need 
to learn more about the new functions such as foreach, iterators, etc. I have 
an odfWeave project that would benefit greatly from a parallel loop statement. 
I am also glad to hear that the r-core package has not been affected directly, 
giving users the option whether or not to use these extensions.

I do not want to discourage companies from monetizing open-source projects. 
Since these packages are also open-source, I suspect I will install them and 
learn a few new tricks. But, I think companies like REvolution Computing need 
to be careful in how they integrate with a project like Ubuntu. While it is 
entirely possible that I missed an obvious announcement about this addition to 
Karmic, I would have appreciated knowing more about this new collaboration up-
front, rather than discovering it after upgrading. If I've missed something  
obvious please feel free to point out my error.

An obvious problem with my request is that this is the sort of change / 
improvement that is unlikely to make it into the "New Features" publications 
produced by Canonical, since R users are obviously a tiny minority of Ubuntu 
users. I think it's going to be important for Canonical and its partners (not 
just your company) to be more aggressive in communicating these sorts of 
changes to the affected public.

Thanks
--andy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] interpreting glmer results

2009-10-05 Thread Bert Gunter
Comment at end Below.

Bert Gunter
Genentech Nonclinical Biostatistics
 
 -Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Kingsford Jones
Sent: Monday, October 05, 2009 10:30 AM
To: Umesh Srinivasan
Cc: r-help
Subject: Re: [R] interpreting glmer results

On Mon, Oct 5, 2009 at 8:52 AM, Umesh Srinivasan
 wrote:
> Hi all,
[snip]
>
> Fixed effects:
>             Estimate Std. Error z value Pr(>|z|)
> (Intercept) -138.8423     0.4704  -295.1  < 2e-16 ***
> SpeciesCr     -0.9977     0.6259    -1.6  0.11091
> SpeciesDb     -1.2140     0.6945    -1.7  0.08046 .
> SpeciesHk     -2.0864     1.2134    -1.7  0.08553 .
> SpeciesPa     -2.6245     1.2063    -2.2  0.02958 *
> SpeciesPs      1.3056     0.4027     3.2  0.00119 **
> distancen    121.7170     0.3609   337.3  < 2e-16 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> Correlation of Fixed Effects:
>          (Intr) SpcsCr SpcsDb SpcsHk SpecsP SpcsPs
> SpeciesCr -0.423
> SpeciesDb -0.391  0.295
> SpeciesHk -0.223  0.169  0.152
> SpeciesPa -0.222  0.170  0.153  0.088
> SpeciesPs -0.732  0.507  0.458  0.262  0.263
> distancen -0.648 -0.020 -0.002 -0.003 -0.006  0.085
>
> Here, clearly, distance from the tree has an effect, but I want to
> know whether the identity of the species influences seedling numbers
> in general. I am unable, however, to make much sense of the output.


As with other linear model type functions in R the summary method
returns tests based on a factor's contrasts (treatment by default,
comparing other levels to a baseline level).  To get an omnibus test
of a factor, one option is to create a model with and without the
factor and perform an LRT:

library(lme4)
example(glmer)
gm0 <- glmer(cbind(incidence, size - incidence) ~ 1 + (1 | herd),
family = binomial, data = cbpp)
anova(gm0, gm1)

> Also, what does correlation of fixed effects really tell me?
>

These can be of interest for inference (e.g. a confidence region for
two of the coefficients is an ellipse with eccentricity defined by
their correlation).

-- ... and if the correlations are "high" it tells you that your model may
be near unidentifiable = the model parameters may not be effectively
estimated from the data. To understand what "high", "near" and "effectively"
may mean for your data, CYLS ("Consult your local statistician")

(If you really wish to use sophisticated tools like glmer, you really need
to understand what you're doing. There is no guarantee of immunity from the
consequences of ignorance.)

-- Bert

hth,

Kingsford



> Many thanks for any help.
>
> Cheers,
> Umesh Srinivasan,
> Bangalore, India
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ubuntu, Revolutions, R

2009-10-05 Thread Ista Zahn
I'm a fellow (K)Ubuntu user, although I'm waiting for KK to be
released before upgrading. I just wanted to point out that presumably
this advertisement can be avoided by installing R as instructed at
http://cran.r-project.org/bin/linux/ubuntu/ rather than using Ubuntu's
version. I usually do this anyway because it's usually more
up-to-date.

-Ista

On Mon, Oct 5, 2009 at 9:38 AM, David M Smith
 wrote:
> Andrew is correct: the upcoming release of Ubuntu (Karmic Koala) will
> feature the REvolution R distribution. (I am a REvolution Computing
> employee.) Our developers have been working with Canonical's
> representatives over the past several months to upgrade R in Ubuntu to
> 2.9.2 and to include the REvolution R extensions.
>
>> My question(s) for the community is this (pick any question(s) you like to
>> answer:
>>        Should I install the REvolution Computing packages?
>>        Do these packages really make R faster?
>>        Are these packages stable?
>>        What are your experiences with REvolution Computing software?
>
> Whether you install the REvolution Computing packages is up to you.
> When you upgrade to KK, the only change made to stock R is the
> .Rprofile.site file, adding the message about how to install the
> extensions. (You can edit the .Rprofile.site file if you prefer.)
>
> If you do install the extensions, no changes are made to the core R
> language (it is 100% compatible with stock R). R will be linked to
> multi-threaded math libraries, which will improve performance for some
> mathematical operations (particularly on a multi-core system, where
> more than 1 processor will be used). So you should expect it to make R
> faster.
>
> Installing the extensions also installs some additional packages from
> REvolution Computing, including foreach and iterators, and Simon
> Urbanek's multicore package from CRAN. The REvolution packages have
> been in use for over a year, and are very stable. In any case they are
> not attached by default. But if you do load these packages, you can
> use the "foreach" function to parallelize loops, making R run faster
> on multicore systems.
>
> I'll leave others to speak of their experiences of REvolution
> Computing software (our contributions to the community include the
> packages nws, foreach, iterators, doSNOW and doMC and REvolution R
> itself). But from my personal perspective, I'm proud to have been able
> to extend awareness and use of R to new domains, and to improve the
> performance of R for many users.
>
> # David Smith
> Director of Community, REvolution Computing
>
> On Sun, Oct 4, 2009 at 8:40 PM, Andrew Choens  wrote:
>> For those who don't follow Ubuntu development carefully, the first Beta for 
>> the
>> next Ubuntu was recently released, so I took my home system and upgraded to
>> help out with filing bugs, etc.
>>
>> Just to be clear, I am not looking for help with the upgrade process. I've 
>> had
>> R, and a few miscellaneous CRAN packages installed on this computer for 
>> years.
>> Today, when I loaded an R session I had developed before the upgrade, I saw
>> something new in my R "welcome message".
>>>
>>>R version 2.9.2 (2009-08-24)
>>>Copyright (C) 2009 The R Foundation for Statistical Computing
>>>ISBN 3-900051-07-0
>>>
>>>R is free software and comes with ABSOLUTELY NO WARRANTY.
>>>You are welcome to redistribute it under certain conditions.
>>>Type 'license()' or 'licence()' for distribution details.
>>>
>>>R is a collaborative project with many contributors.
>>>Type 'contributors()' for more information and
>>>'citation()' on how to cite R or R packages in publications.
>>>
>>>
>>>This is REvolution R version 3.0.0:
>>>the optimized distribution of R from REvolution Computing.
>>>REvolution R enhancements Copyright (C) REvolution Computing, Inc.
>>>
>>>Checking for REvolution MKL:
>>> - REvolution R enhancements not installed.
>>>For improved performance and other extensions: apt-get install revolution-r
>>
>> The last part, about this being the "enhanced" version of R was . . .
>> unexpected.  I have heard of this company before and now I've spent some time
>> on their website. Looking at my installation, Ubuntu did not install any of
>> the REvolution Computing components, although R now basically throws a warning
>> every time I start it.
>>
>> My question(s) for the community is this (pick any question(s) you like to
>> answer:
>>        Should I install the REvolution Computing packages?
>>        Do these packages really make R faster?
>>        Are these packages stable?
>>        What are your experiences with REvolution Computing software?
>>
>> I am interested in hearing from members of the community, REvolution 
>> Computing
>> employees/supporters (although please ID yourself as such) and most anyone
>> else. I can see what they say on their website, but I'm interested in getting
>> other opinions too.
>>
>> Thanks!
>
> --
> David M Smith 
> Director of Community, REvolution Computing www.revolution-comput

Re: [R] interpreting glmer results

2009-10-05 Thread Kingsford Jones
On Mon, Oct 5, 2009 at 8:52 AM, Umesh Srinivasan
 wrote:
> Hi all,
[snip]
>
> Fixed effects:
>             Estimate Std. Error z value Pr(>|z|)
> (Intercept) -138.8423     0.4704  -295.1  < 2e-16 ***
> SpeciesCr     -0.9977     0.6259    -1.6  0.11091
> SpeciesDb     -1.2140     0.6945    -1.7  0.08046 .
> SpeciesHk     -2.0864     1.2134    -1.7  0.08553 .
> SpeciesPa     -2.6245     1.2063    -2.2  0.02958 *
> SpeciesPs      1.3056     0.4027     3.2  0.00119 **
> distancen    121.7170     0.3609   337.3  < 2e-16 ***
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> Correlation of Fixed Effects:
>          (Intr) SpcsCr SpcsDb SpcsHk SpecsP SpcsPs
> SpeciesCr -0.423
> SpeciesDb -0.391  0.295
> SpeciesHk -0.223  0.169  0.152
> SpeciesPa -0.222  0.170  0.153  0.088
> SpeciesPs -0.732  0.507  0.458  0.262  0.263
> distancen -0.648 -0.020 -0.002 -0.003 -0.006  0.085
>
> Here, clearly, distance from the tree has an effect, but I want to
> know whether the identity of the species influences seedling numbers
> in general. I am unable, however, to make much sense of the output.


As with other linear model type functions in R the summary method
returns tests based on a factor's contrasts (treatment by default,
comparing other levels to a baseline level).  To get an omnibus test
of a factor, one option is to create a model with and without the
factor and perform an LRT:

library(lme4)
example(glmer)
gm0 <- glmer(cbind(incidence, size - incidence) ~ 1 + (1 | herd),
family = binomial, data = cbpp)
anova(gm0, gm1)

> Also, what does correlation of fixed effects really tell me?
>

These can be of interest for inference (e.g. a confidence region for
two of the coefficients is an ellipse with eccentricity defined by
their correlation).


hth,

Kingsford



> Many thanks for any help.
>
> Cheers,
> Umesh Srinivasan,
> Bangalore, India
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Apply R in Excel through RExcel

2009-10-05 Thread Felipe Carrillo
ryusuke:
It sounds like you need the (D)COM server to be able to work with the background 
server. As for the foreground server, rcom is what you need and it appears to be 
working OK. I am out of my office right now, but I'll be back at work next week and 
will be able to explain in more detail how the applications I tied together work 
with RExcel and R. You can download the latest statconnDCOM.latest.exe from 
this site:
http://rcom.univie.ac.at/

--- On Sun, 10/4/09, Ryusuke Kenji  wrote:

> From: Ryusuke Kenji 
> Subject: Apply R in Excel through RExcel
> To: elaine.mc...@gmail.com, mazatlanmex...@yahoo.com
> Date: Sunday, October 4, 2009, 9:55 AM
> 
> 
> 
> 
>  
> Hi Elaine and Felipe,
> 
> I know about you guys from google groups ggplot
> and RExcel, I am learning R and would like apply it in
> Excel through RExcel,  I installed RAndFriends and its
> cover all relevant software, there is workable with front
> ground server but cant connect to R Server if I choose
> background server.
> 
> I will appreciate if you are willing to share precious
> suggestion and knowledge with me. Thank you.
> 
> 
> 
> Thanks warm and Regards
> Ryusuke
> 
> 
> Free! Get Hotmail alerts on your mobile phone.
> Try it now.
> 
> 




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls not accepting control parameter?

2009-10-05 Thread Peter Ehlers

Hi Rainer,

Sorry, I hadn't read your post quite carefully enough.
The problem appears to be with SSlogis. It seems that
control parameters are not being passed through SSlogis.
If you specify a start vector, minFactor can be set.

  nls( y ~ Asym/(1+exp((xmid-x)/scal)), data = dat,
  start=list(Asym=2e6, xmid=2005, scal=44),
  control=list(minFactor=1e-12), trace=TRUE)

or even

  nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
  start=list(Asym=2e6, xmid=2005, scal=44),
  control=list(minFactor=1e-12), trace=TRUE)

or perhaps (but it still won't converge, of course)

  nls( y ~ 1/(1+exp((xmid-x)/scal)), data = dat,
  start=list(xmid=2005, scal=44), algorithm = "plinear",
  control=list(minFactor=1e-12), trace=TRUE)

(I used start values obtained from a fit of dat[-c(1,2),].)

Try it with minFactor=1/4 and with 1/2^20.

 -Peter Ehlers

Rainer M Krug wrote:

On Fri, Oct 2, 2009 at 7:23 PM, Peter Ehlers  wrote:


Hello Rainer,

I think that your problem is with trying to fit a logistic model to
data that don't support that model. Removing the first two points
from your data will work (but of course it may not represent reality).
The logistic function does not exhibit the kind of minimum that
your data suggest.



Hi Peter

partly - when I do as you suggest, it definitely works, but this does not
change the behaviour: the error message always says:

" step factor 0.000488281 reduced below 'minFactor' of 0.000976562"

and it does not change to whichever value I try to set minFactor.
So either I am misunderstanding what the control argument for nls is doing,
or there is a bug in nls or in the error message.

Rainer





 -Peter Ehlers


Rainer M Krug wrote:


Hi

I want to change a control parameter for an nls() call, as I am getting the
error message "step factor 0.000488281 reduced below 'minFactor' of
0.000976562".
Despite all tries, the control parameter of nls does not seem to get handed
down to the function itself, or the error message is using a different one.

Below system info and an example highlighting the problem.

Thanks,

Rainer


 version   _
platform   i486-pc-linux-gnu
arch   i486
os linux-gnu
system i486, linux-gnu
status
major  2
minor  9.2
year   2009
month  08
day24
svn rev49384
language   R
version.string R version 2.9.2 (2009-08-24)

 sessionInfo()
R version 2.9.2 (2009-08-24)
i486-pc-linux-gnu

locale:

LC_CTYPE=en_ZA.UTF-8;LC_NUMERIC=C;LC_TIME=en_ZA.UTF-8;LC_COLLATE=en_ZA.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_ZA.UTF-8;LC_PAPER=en_ZA.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_ZA.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] R.utils_1.2.0 R.oo_1.5.0R.methodsS3_1.0.3 maptools_0.7-26
[5] sp_0.9-44 foreign_0.8-37

loaded via a namespace (and not attached):
[1] grid_2.9.2  lattice_0.17-25


#

EXAMPLE:

dat <- data.frame(
 x = 2006:2037,
 y = c(143088, 140218, 137964,
   138313, 140005, 141483, 142365,
   144114, 145335, 146958, 148584,
   149398, 151074, 152241, 153919,
   155580, 157258, 158981, 160591,
   162126, 163743, 165213, 166695,
   168023, 169522, 170746, 172057,
   173287, 173977, 175232, 176308,
   177484)
 )

nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat, trace=TRUE)

(newMinFactor <- 1/(4*1024))
nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
control=nls.control(minFactor=newMinFactor), trace=TRUE)
nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
control=c(minFactor=newMinFactor), trace=TRUE)


(newMinFactor <- 4/1024)
nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
control=nls.control(minFactor=newMinFactor), trace=TRUE)
nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
control=c(minFactor=newMinFactor), trace=TRUE)









__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to plot a Quadratic Model?

2009-10-05 Thread Kingsford Jones
see ?curve

e.g.

qftn <- function(x) 1 + 2*x - .1*x^2
curve(qftn, 0, 10)
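
As a further hedged sketch (made-up data, not part of this reply), the fitted
quadratic can also be overlaid on the original scatterplot with predict()
over a grid of x values plus lines():

set.seed(1)
X <- runif(50, 0, 10)
Y <- 1 + 2 * X - 0.1 * X^2 + rnorm(50)
fit <- lm(Y ~ X + I(X^2))          # quadratic fit
plot(Y ~ X)
xs <- seq(min(X), max(X), length.out = 200)
lines(xs, predict(fit, newdata = data.frame(X = xs)))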

hth,

Kingsford

On Mon, Oct 5, 2009 at 9:42 AM, Juliano van Melis  wrote:
> Good day for all,
>
> I'm a beginner aRgonaut, thus I'm having a problem to plot a quadratic model
> of regression in a plot.
> First I wrote:
>
>>plot(Y~X)
>
> and then I tried:
>
>>abline(lm(Y~X+I(X^2)))
>
> but "abline" only uses the first two of three regression coefficients, thus
> I tried:
>
>>line(lm(Y~X+I(X^2)))
>
> but a message error is showed ("insufficient observations").
>
> Therefore, I want to know: how could I plot a quadratic line in my plot
> graph?
>
> thanks!
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] box plot

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 12:47 PM, Henrique Dallazuanna wrote:


See family argument in ?par:

par(family = 'serif')
boxplot(rnorm(100), main = 'Title of Boxplot')

On Mon, Oct 5, 2009 at 11:48 AM, kayj  wrote:


Hi All,

I was wondering if if it is possible to change the font style and  
the font

size for x labels, y labels and the main title for a box plot?


And while you are at that help page for par don't forget to read this  
set of parameters:


cex.axis:
The magnification to be used for axis annotation relative to the  
current setting of cex.

cex.lab:
The magnification to be used for x and y labels relative to the  
current setting of cex.

cex.main:
The magnification to be used for main titles relative to the current  
setting of cex.

cex.sub:
The magnification to be used for sub-titles relative to the current  
setting of cex.
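
Putting the par(family = ...) suggestion and the cex.* parameters together, a
small hedged illustration (the values are arbitrary):

par(family = "serif", cex.main = 1.5, cex.lab = 1.2, cex.axis = 0.9)
boxplot(rnorm(100), main = "Title of Boxplot",
        xlab = "Group", ylab = "Value")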


--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

___



David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] GLM quasipoisson error

2009-10-05 Thread atorso
Hello,

I'm having an error when trying to fit the next GLM:

>>model<-glm(response ~ CLONE_M + CLONE_F + HATCHING
+(CLONE_M*CLONE_F) + (CLONE_M*HATCHING) + (CLONE_F*HATCHING) +
(CLONE_M*CLONE_F*HATCHING), family=quasipoisson)
>> anova(model, test="Chi")

>Error in if (dispersion == 1) Inf else object$df.residual : 
  missing value where TRUE/FALSE needed

If I fit the same model by using the Poisson distribution, it works.

I have not a clue about where the problem could be. Do you have any
idea or suggestion I could try?

Thank you in advance, 

Ana 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to plot a Quadratic Model?

2009-10-05 Thread Ista Zahn
I hardly use base graphics so I'm no help there. You can do this
easily with ggplot2 though:

library(ggplot2)
X <- rnorm(100)
Y <- rnorm(100) - X^2
qplot(x=X, y=Y, geom=c("point", "smooth"), method="lm", formula = y ~
poly(x, 2))

Note that X is not x and Y is not y in the sense that "formula = Y ~
poly(X, 2)" will not work (this tripped me up at first). qplot is
taking x to mean "the first argument" (X in this case) and y to mean
"the second argument" (Y in this case).

-Ista

On Mon, Oct 5, 2009 at 11:42 AM, Juliano van Melis  wrote:
> Good day for all,
>
> I'm a beginner aRgonaut, thus I'm having a problem to plot a quadratic model
> of regression in a plot.
> First I wrote:
>
>>plot(Y~X)
>
> and then I tried:
>
>>abline(lm(Y~X+I(X^2)))
>
> but "abline" only uses the first two of three regression coefficients, thus
> I tried:
>
>>line(lm(Y~X+I(X^2)))
>
> but a message error is showed ("insufficient observations").
>
> Therefore, I want to know: how could I plot a quadratic line in my plot
> graph?
>
> thanks!
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] box plot

2009-10-05 Thread Henrique Dallazuanna
See family argument in ?par:

par(family = 'serif')
boxplot(rnorm(100), main = 'Title of Boxplot')

On Mon, Oct 5, 2009 at 11:48 AM, kayj  wrote:
>
> Hi All,
>
> I was wondering if if it is possible to change the font style and the font
> size for x labels, y labels and the main title for a box plot?
>
> Thanks
> --
> View this message in context: 
> http://www.nabble.com/box-plot-tp25752195p25752195.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] COM component/dll of R functions

2009-10-05 Thread Abbas R. Ali
Hi
 
I want to integrate my R function with C++/C# application and want to pass 
parameters from C++/C#. Can any body guide me in this regard?
 Thanks


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] box plot

2009-10-05 Thread kayj

Hi All,

I was wondering if if it is possible to change the font style and the font
size for x labels, y labels and the main title for a box plot?

Thanks
-- 
View this message in context: 
http://www.nabble.com/box-plot-tp25752195p25752195.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to plot a Quadratic Model?

2009-10-05 Thread Juliano van Melis
Good day for all,

I'm a beginner aRgonaut, thus I'm having a problem to plot a quadratic model
of regression in a plot.
First I wrote:

>plot(Y~X)

and then I tried:

>abline(lm(Y~X+I(X^2)))

but "abline" only uses the first two of three regression coefficients, thus
I tried:

>line(lm(Y~X+I(X^2)))

but a message error is showed ("insufficient observations").

Therefore, I want to know: how could I plot a quadratic line in my plot
graph?

thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] else if statement error

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 5:38 AM, Martin Maechler wrote:


"DW" == David Winsemius 
   on Sat, 3 Oct 2009 12:56:51 -0400 writes:


   DW> On Oct 3, 2009, at 11:54 AM, Chen Gu wrote:


Hello,

I am doing a simple if else statement in R. But it always comes out
error
such as 'unexpected error'
There are two variables. ini and b. when ini=1, a=3; when ini>1 and

   b> 2,

a=5; all other situations, a=6. I don't know where it is wrong.
Here is my code

ini=3
b=4
   DW> Your basic problem is that you are confusing if and else  
which are
   DW> program control functions with ifelse which is designed for  
assignment

   DW> purposes;

David, not quite:  in R, "everything"(*) is a function,
and in the example we have here
(and innumerous similar examples)   ifelse(.) is not efficient
to use:

ini=3
b=4
a <-   ifelse( ini==1, 3, ifelse( ini>1 & b>2 , 5, 6))

a

[1] 5


More efficient -- and also nicer in my eyes ---
is
a <-  if(ini == 1) 3 else if(ini > 1 && b > 2) 5  else  6


No argument that is a bit more readable than my version above. I  
guessed that:


a <- 6 - 3*(ini == 1) - (ini > 1 && b > 2)

 ... might be even more efficient, ... but 10^6 instances took almost  
twice as long as the if(){}else{} construction. The if else  
construction took 2e-06 seconds, the ifelse(,,) took 4.2e-05 seconds  
and the Boolean math version 3.45e-06 seconds.





As I say on many occasions:
  ifelse() is useful and nice, but it is used in many more
  places than it should {BTW, not only because of examples like  
the above}.


On the r-help list I see many more examples where (I thought) ifelse  
should have been used. I would be interested in seeing the other  
examples that you have in mind where it should not be used. On the  
principle that there are no new questions on r-help anymore, perhaps a  
link to a discussion from the archives could be  more efficient?  
(I did try searching and have found some interesting material, but not  
on this particular point. A search for:


"if else" ifelse efficien*

came up empty when restricted to R-help.)




Martin Maechler, ETH Zurich


(*) "almost"

--


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] gsub - replace multiple occurences with different strings

2009-10-05 Thread William Dunlap
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of Martin Batholdy
> Sent: Monday, October 05, 2009 7:34 AM
> To: r help
> Subject: [R] gsub - replace multiple occurences with different strings
> 
> Hi,
> 
> I search a way to replace multiple occurrences of a string with  
> different strings
> depending on the place where it occurs.
> 
> 
> I tried the following;
> 
> x <- c("xx y e d xx e t f xx e f xx")
> x <- gsub("xx", c("x1", "x2", "x3", "x4"), x)
> 
> 
> what I want to get is;
> 
> x =
> x1 y y e d x2 e t f x3 e f x4

You have a doubled y in the output but not the input,
I'll assume the input is correct.  I extended x to three similar
strings:

  x <- c("xx y e d xx e t f xx e f xx",
   "xx y e d xx e t f xx",
   "xx y e d xx e t f xx e f  y e d xx e t f xx e f xx")

If you know you always have 4 xx's you can use sub (or gsub),
but it doesn't work properly if there are not exactly 4 xx's:
  > sub("xx(.*)xx(.*)xx(.*)xx", "x1\\1x2\\2x3\\3x4", x)
  [1] "x1 y e d x2 e t f x3 e f x4"   
  [2] "xx y e d xx e t f xx"  
  [3] "x1 y e d xx e t f xx e f  y e d x2 e t f x3 e f x4"
  
You can use gsubfn() from package gsubfn along with a function that
maintains
a count of how many times it has been called, as in
  > gsubfn("xx", local({n<-0;function(x){n<<-n+1;paste(x,n,sep="")}}),
x)
  [1] "xx1 y e d xx2 e t f xx3 e f xx4"

  [2] "xx5 y e d xx6 e t f xx7"

  [3] "xx8 y e d xx9 e t f xx10 e f xx11xx12 y e d xx13 e t f xx14 e f
xx15"

If you want the count to start anew with each string in the vector you
can use sapply.
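
A hedged variant of the same idea, not from the original reply, that keeps the
counter but drops the matched "xx" so the result is exactly x1 ... x4:

library(gsubfn)
s <- "xx y e d xx e t f xx e f xx"
gsubfn("xx", local({n <- 0; function(m) {n <<- n + 1; paste("x", n, sep = "")}}), s)
# should give: "x1 y e d x2 e t f x3 e f x4"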

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com  




> 
> 
> but what I get is;
> 
> x =
> x1 y y e d x1 e t f x1 e f x1
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] interpreting glmer results

2009-10-05 Thread Umesh Srinivasan
Hi all,

I am trying to run a glm with mixed effects. My response variable is
number of seedlings emerging; my fixed effects are the tree species
and distance from the tree (in two classes - near and far).; my random
effect is the individual tree itself (here called Plot). The command
I've used is:

mod <- glmer(number ~ Species + distance + offset(area) + (1|Plot),
family = poisson)


There is an area offset because the plot in which seedlings were
counted was a wedge with its point at the tree base, and therefore the
area of the part of plot far from the tree was greater than the area
of the plot closer to the tree.

The results I'm getting are:

Generalized linear mixed model fit by the Laplace approximation
Formula: number ~ Species + distance + offset(area) + (1 | Plot)
   AIC   BIC  logLik deviance
 145.6 168.7   -64.8    129.6
Random effects:
 Groups Name        Variance Std.Dev.
 Plot   (Intercept) 0.60205  0.77592 
Number of obs: 132, groups: Plot, 132

Fixed effects:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -138.8423     0.4704  -295.1  < 2e-16 ***
SpeciesCr     -0.9977     0.6259    -1.6  0.11091    
SpeciesDb     -1.2140     0.6945    -1.7  0.08046 .  
SpeciesHk     -2.0864     1.2134    -1.7  0.08553 .  
SpeciesPa     -2.6245     1.2063    -2.2  0.02958 *  
SpeciesPs      1.3056     0.4027     3.2  0.00119 ** 
distancen    121.7170     0.3609   337.3  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
  (Intr) SpcsCr SpcsDb SpcsHk SpecsP SpcsPs
SpeciesCr -0.423
SpeciesDb -0.391  0.295
SpeciesHk -0.223  0.169  0.152
SpeciesPa -0.222  0.170  0.153  0.088
SpeciesPs -0.732  0.507  0.458  0.262  0.263
distancen -0.648 -0.020 -0.002 -0.003 -0.006  0.085

Here, clearly, distance from the tree has an effect, but I want to
know whether the identity of the species influences seedling numbers
in general. I am unable, however, to make much sense of the output.
Also, what does correlation of fixed effects really tell me?
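
(A hedged sketch of one common approach, not a definitive answer: to test the
overall effect of Species, refit without it and compare the two fits with a
likelihood-ratio test via anova(); mod is the model fitted above.)

mod0 <- glmer(number ~ distance + offset(area) + (1 | Plot), family = poisson)
anova(mod0, mod)   # chi-square test on the degrees of freedom used by Species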

Many thanks for any help.

Cheers,
Umesh Srinivasan,
Bangalore, India

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Saving each output of a loop into something that can be graphed

2009-10-05 Thread John Kane
Define a matrix to hold the data and insert it into the loop?

Something like 

mymat <- matrix(rep(NA, 20), nrow=10)
for(i in 1:10){
a <- i
b <- i+1
mymat[i,] <- c(a,b)
}

matplot(mymat)
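
Adapted to the quoted problem it might look like this (a sketch only;
MatrixExp is assumed to come from the msm package, and Q and T are as defined
in the quoted message below):

library(msm)                 # assumed source of MatrixExp()
T <- Q * 0.01
p19 <- numeric(10)           # pre-allocate storage for the [1,9] entries
for (i in 1:10) {
  R <- MatrixExp(i * T)
  p19[i] <- R[1, 9]          # keep the value from each iteration
}
plot(1:10, p19, type = "b")  # how the [1,9] entry changes with i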



--- On Mon, 10/5/09, RR99  wrote:

> From: RR99 
> Subject: [R] Saving each output of a loop into something that can be graphed
> To: r-help@r-project.org
> Received: Monday, October 5, 2009, 2:27 AM
> 
> Hi, 
> 
> from the code below; whats happening is i want to plot how
> the [1,9] entry
> changes as the matrx changes, print[] gives the values one
> after the other,
> but only the last value is saved so my graph has only one
> point. does anyone
> know how i can merge all the outputs into one dataframe or
> something?
> 
> Q<-matrix(c(-0.59,5.44,0,0,0,0,0,0,0,0.59,-8.16,2.51,0,0,0,0,0,0,0,2.72,-5.64,9.45,0,0,0,0,0,0,0,3.13,-13.23,6.91,0,0,0,0,0,0,0,3.15,-14.88,10.22,0,0,0,0,0,0,0.63,7.97,-19.81,30.19,0,96.77,0,0,0,0,0,7.67,-98.12,0,19.35,0,0,0,0,0,0,37.74,-315.79,0,0,0,0,0,0,1.92,30.19,315.79,-116.12),nrow=9,ncol=9)
> T<-Q*.01
> for (i in (1:10)){
> R<-MatrixExp(i*T)
> S<-print(R[1,9])
> }
> 
> 
> any help would be greatly appreciated.
> -- 
> View this message in context: 
> http://www.nabble.com/Saving-each-output-of-a-loop-into-something-that-can-be-graphed-tp25745700p25745700.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org
> mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.
> 


  __
[[elided Yahoo spam]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] gsub - replace multiple occurences with different strings

2009-10-05 Thread Martin Batholdy
Hi,

I search a way to replace multiple occurrences of a string with  
different strings
depending on the place where it occurs.


I tried the following;

x <- c("xx y e d xx e t f xx e f xx")
x <- gsub("xx", c("x1", "x2", "x3", "x4"), x)


what I want to get is;

x =
x1 y y e d x2 e t f x3 e f x4


but what I get is;

x =
x1 y y e d x1 e t f x1 e f x1
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2: proper use of facet_grid inside a function

2009-10-05 Thread baptiste auguie
Now why do I always come up with a twisted bquote() where a simple
paste() would do!

Thanks,

baptiste


2009/10/5 hadley wickham :
>> Whether or not what follows is to be recommended I don't know, but it
>> seems to work,
>>
>> p <- ggplot(diamonds, aes(carat, ..density..)) +
>>  geom_histogram(binwidth = 0.2)
>>
>> x = quote(cut)
>> facets = facet_grid(as.formula(bquote(.~.(x))))
>> p + facets
>
> That's what I'd recommend.  You can also just do
>
> facets <- facet_grid(paste(". ~ ", var))
>
> Hadley
> --
> http://had.co.nz/
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Vim-R-plugin (new version)

2009-10-05 Thread Jakson A. Aquino
Dear R users,

The author of Tinn-R (Jose Claudio Faria) now is co-author of
Vim-R-plugin2, a plugin that makes it possible to send commands
from the Vim text editor to R. We added many new key bindings,
restructured the menu and created new Tool Bar buttons. The new
version is available at:

 http://www.vim.org/scripts/script.php?script_id=2628

 NOTES:
   (1) Some old key binding changed, including the shortcuts
   to start R.
   (2) The plugin doesn't work on Microsoft Windows yet.

Below is the plugin's menu structure, and the corresponding
default keyboard shortcuts:

Start/Close
  . Start R (default)  \rf
  . Start R --vanilla  \rv
  . Start R (custom)   \rc
  
  . Close R (no save)  \rq
  . Close R (save workspace)   \rw
---

Send
  . File                        f5
  . File (echo) F5
  
  . Block (cur) f6
  . Block (cur, echo)   F6
  . Block (cur, echo and down) ^F6
  
  . Function (cur)  f7
  . Function (cur and down) F7
  
  . Selection   f8
  . Selection (echo)F8
  . Selection (and down)        f9
  . Selection (echo and down)   F9
  
  . Line                        f8
  . Line (and down) f9
  . Line (and new one)  \q
---

Control
  . List space \rl
  . Clear console  \rr
  . Clear all  \rm
  
  . Object (print) \rp
  . Object (names) \rn
  . Object (str)   \rt
  
  . Arguments (cur)\ra
  . Example (cur)  \re
  . Help (cur) \rh
  
  . Summary (cur)  \rs
  . Plot (cur) \rg
  . Plot and summary (cur) \rb
  
  . Set working directory (cur file path)  \rd
  
  . Sweave (cur file)  \sw
  . Sweave and PDF (cur file)  \sp
  
  . Rebuild (list of objects)  \ro

Best regards,

Jakson Aquino

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2: proper use of facet_grid inside a function

2009-10-05 Thread hadley wickham
> Whether or not what follows is to be recommended I don't know, but it
> seems to work,
>
> p <- ggplot(diamonds, aes(carat, ..density..)) +
>  geom_histogram(binwidth = 0.2)
>
> x = quote(cut)
> facets = facet_grid(as.formula(bquote(.~.(x))))
> p + facets

That's what I'd recommend.  You can also just do

facets <- facet_grid(paste(". ~ ", var))

Hadley
-- 
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] convert RData to txt

2009-10-05 Thread David Winsemius


On Oct 5, 2009, at 3:00 AM, mykh...@gmail.com wrote:


hello all,
will you plz tell me how can i convert RData files to txt,,,


?load
?write.csv
?cat



--


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Long for Loop- calling C from R - Parallel Computing

2009-10-05 Thread Karl Ove Hufthammer
In article <6f6f0fd60910050629p28c99209jcd7836353fd2d754
@mail.gmail.com>, antonioparede...@gmail.com says...
> I'm running the following for loop to generate random variables in chunks of
> 60 at a time (l), here h is of order in millions (could be 5 to 6 millions),
> note that generating all the variables at once could have an impact on the
> final results

No, it will not. See this example code for an illustration:

set.seed(1)
rnorm(3)
rnorm(3)
set.seed(1)
rnorm(6)

So whether you generate the six numbers three at a time or all at once, you get 
exactly the same result.

So my suggestion is to generate all the numbers at once. That takes next 
to no time. Or, if it takes too much memory, generate for example a 
million at once, and repeat a few times.
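
A minimal sketch of the all-at-once version of the original loop (dat, mu and
g.var as in the original post; this assumes g.var holds the standard
deviations that rnorm() expects):

n <- nrow(dat)
dat$t.o <- dat$mu + rnorm(n, mean = 0, sd = dat$g.var)   # rnorm() is vectorised over sd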

-- 
Karl Ove Hufthammer

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ubuntu, Revolutions, R

2009-10-05 Thread David M Smith
Andrew is correct: the upcoming release of Ubuntu (Karmic Koala) will
feature the REvolution R distribution. (I am a REvolution Computing
employee.) Our developers have been working with Canonical's
representatives over the past several months to upgrade R in Ubuntu to
2.9.2 and to include the REvolution R extensions.

> My question(s) for the community is this (pick any question(s) you like to
> answer:
>Should I install the REvolution Computing packages?
>Do these packages really make R faster?
>Are these packages stable?
>What are your experiences with REvolution Computing software?

Whether you install the REvolution Computing packages is up to you.
When you upgrade to KK, the only change made to stock R is the
.Rprofile.site file, adding the message about how to install the
extensions. (You can edit the .Rprofile.site file if you prefer.)

If you do install the extensions, no changes are made to the core R
language (it is 100% compatible with stock R). R will be linked to
multi-threaded math libraries, which will improve performance for some
mathematical operations (particularly on a multi-core system, where
more than 1 processor will be used). So you should expect it to make R
faster.

Installing the extensions also installs some additional packages from
REvolution Computing, including foreach and iterators, and Simon
Urbanek's multicore package from CRAN. The REvolution packages have
been in use for over a year, and are very stable. In any case they are
not attached by default. But if you do load these packages, you can
use the "foreach" function to parallelize loops, making R run faster
on multicore systems.

I'll leave others to speak of their experiences of REvolution
Computing software (our contributions to the community include the
packages nws, foreach, iterators, doSNOW and doMC and REvolution R
itself). But from my personal perspective, I'm proud to have been able
to extend awareness and use of R to new domains, and to improve the
performance of R for many users.

# David Smith
Director of Community, REvolution Computing

On Sun, Oct 4, 2009 at 8:40 PM, Andrew Choens  wrote:
> For those who don't follow Ubuntu development carefully, the first Beta for 
> the
> next Ubuntu was recently released, so I took my home system and upgraded to
> help out with filing bugs, etc.
>
> Just to be clear, I am not looking for help with the upgrade process. I've had
> R, and a few miscellaneous CRAN packages installed on this computer for years.
> Today, when I loaded an R session I had developed before the upgrade, I saw
> something new in my R "welcome message".
>>
>>R version 2.9.2 (2009-08-24)
>>Copyright (C) 2009 The R Foundation for Statistical Computing
>>ISBN 3-900051-07-0
>>
>>R is free software and comes with ABSOLUTELY NO WARRANTY.
>>You are welcome to redistribute it under certain conditions.
>>Type 'license()' or 'licence()' for distribution details.
>>
>>R is a collaborative project with many contributors.
>>Type 'contributors()' for more information and
>>'citation()' on how to cite R or R packages in publications.
>>
>>
>>This is REvolution R version 3.0.0:
>>the optimized distribution of R from REvolution Computing.
>>REvolution R enhancements Copyright (C) REvolution Computing, Inc.
>>
>>Checking for REvolution MKL:
>  >- REvolution R enhancements not installed.
>>For improved performance and other extensions: apt-get install revolution-r
>
> The last part, about this being the "enhanced" version of R was . . .
> unexpected.  I have heard of this company before and now I've spent some time
> on their website. Looking at my installation, Ubuntu did not install any of
> the REvolution Computing components, although R now basically throws a warning
> every time I start it.
>
> My question(s) for the community is this (pick any question(s) you like to
> answer:
>        Should I install the REvolution Computing packages?
>        Do these packages really make R faster?
>        Are these packages stable?
>        What are your experiences with REvolution Computing software?
>
> I am interested in hearing from members of the community, REvolution Computing
> employees/supporters (although please ID yourself as such) and most anyone
> else. I can see what they say on their website, but I'm interested in getting
> other opinions too.
>
> Thanks!

-- 
David M Smith 
Director of Community, REvolution Computing www.revolution-computing.com
Tel: +1 (206) 577-4778 x3203 (San Francisco, USA)

Check out our upcoming events schedule at www.revolution-computing.com/events

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Long for Loop- calling C from R - Parallel Computing

2009-10-05 Thread Antonio Paredes
Hello everyone,

I'm running the following for loop to generate random variables in chunks of
60 at a time (l), here h is of order in millions (could be 5 to 6 millions),
note that generating all the variables at once could have an impact on the
final results

for(j in 1:h){
dat$t.o[seq(0,g1,l)[j]+1:l]<-dat$mu[seq(0,g1,l)[j]+1:l] +
rnorm(l,0,dat$g.var[seq(0,g1,l)[j]+1:l])
}

Is there any way that I can improve on this loop and preserve my objective
of generating variable 60 (l) at a time. What about calling C from R, will
that be a lot faster. Is this a typical situation designed for parallel
computing?

My knowledge of looping in R is very weak; but please note, that my interest
is not just on a piece of code that solve the problem; if you have a link to
a reference that discussed some of these issues let me know. I'm currently
reading the R Inferno; but any other reference will be appreciated.

Thanks

-- 
-Tony

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2: proper use of facet_grid inside a function

2009-10-05 Thread baptiste auguie
Hi,

Whether or not what follows is to be recommended I don't know, but it
seems to work,

p <- ggplot(diamonds, aes(carat, ..density..)) +
  geom_histogram(binwidth = 0.2)

x = quote(cut)
facets = facet_grid(as.formula(bquote(.~.(x))))
p + facets

HTH,

baptiste

2009/10/5 Bryan Hanson :
> Thanks Thierry for the work-around.  I was out of ideas.
>
> I had looked around for the facet_grid() analog of aes_string(), and
> concluded there wasn't one.  The only thing I found was the notion of
>
> facet_grid("...") but apparently it is intended for some other use, as it
> doesn't work as I thought it would (like a hypothetical
> facet_grid_string()).
>
> Thanks so much.  Bryan
>
>
> On 10/5/09 4:12 AM, "ONKELINX, Thierry"  wrote:
>
>> Dear Bryan,
>>
>> In the ggplot() function you can choose between aes() and aes_string().
>> In the first you need to hardwire the variable names, in the latter you
>> can use objects which contain the variable names. So in your case you
>> need aes_string().
>>
>> Unfortunately, facet_grid() works like aes() and not like aes_string().
>> That is why you are getting errors.
>>
>> A workaround would be to add a dummy column to your data.
>>
>> library(ggplot2)
>> data <- mpg
>> fac1 <- "cty"
>> fac2 <- "drv"
>> res <- "displ"
>> data$dummy <- data[, fac2]
>> ggplot(data, aes_string(x = fac1, y = res)) + geom_point() +
>> facet_grid(.~dummy)
>>
>> HTH,
>>
>> Thierry
>>
>>
>> 
>> 
>> ir. Thierry Onkelinx
>> Instituut voor natuur- en bosonderzoek / Research Institute for Nature
>> and Forest
>> Cel biometrie, methodologie en kwaliteitszorg / Section biometrics,
>> methodology and quality assurance
>> Gaverstraat 4
>> 9500 Geraardsbergen
>> Belgium
>> tel. + 32 54/436 185
>> thierry.onkel...@inbo.be
>> www.inbo.be
>>
>> To call in the statistician after the experiment is done may be no more
>> than asking him to perform a post-mortem examination: he may be able to
>> say what the experiment died of.
>> ~ Sir Ronald Aylmer Fisher
>>
>> The plural of anecdote is not data.
>> ~ Roger Brinner
>>
>> The combination of some data and an aching desire for an answer does not
>> ensure that a reasonable answer can be extracted from a given body of
>> data.
>> ~ John Tukey
>>
>> -Oorspronkelijk bericht-
>> Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
>> Namens Bryan Hanson
>> Verzonden: vrijdag 2 oktober 2009 17:21
>> Aan: > Onderwerp: [R] ggplot2: proper use of facet_grid inside a function
>>
>> Hello Again R Folk:
>>
>> I have found items about this in the archives, but I'm still not getting
>> it right.  I want to use ggplot2 with facet_grid inside a function with
>> user specified variables, for instance:
>>
>>     p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(. ~
>> fac2)
>>
>> Where data, fac1, fac2 and res are arguments to the function.  I have
>> tried
>>
>>     p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(. ~
>> as.name(fac2))
>>
>> and
>>
>>     p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(". ~
>> fac2")
>>
>> But all of these produce the same error:
>>
>> Error in `[.data.frame`(plot$data, , setdiff(cond, names(df)), drop =
>> FALSE) :
>>   undefined columns selected
>>
>> If I hardwire the true identity of fac2 into the function, it works as
>> desired, so I know this is a problem of connecting the name with the
>> proper value.
>>
>> I'm up to date on everything:
>>
>> R version 2.9.2 (2009-08-24)
>> i386-apple-darwin8.11.1
>>
>> locale:
>> en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8
>>
>> attached base packages:
>> [1] grid      datasets  tools     utils     stats     graphics
>> grDevices methods
>> [9] base
>>
>> other attached packages:
>>  [1] Hmisc_3.6-0        ggplot2_0.8.3      reshape_0.8.3
>> proto_0.3-8
>>  [5] mvbutils_2.2.0     ChemoSpec_1.1      lattice_0.17-25
>> mvoutlier_1.4
>>  [9] plyr_0.1.8         RColorBrewer_1.0-2 chemometrics_0.4   som_0.3-4
>>
>> [13] robustbase_0.4-5   rpart_3.1-45       pls_2.1-0          pcaPP_1.7
>>
>> [17] mvtnorm_0.9-7      nnet_7.2-48        mclust_3.2
>> MASS_7.2-48
>> [21] lars_0.9-7         e1071_1.5-19       class_7.2-48
>>
>> loaded via a namespace (and not attached):
>> [1] cluster_1.12.0
>>
>> Thanks for any help!  Bryan
>> *
>> Bryan Hanson
>> Professor of Chemistry & Biochemistry
>> DePauw University, Greencastle IN USA
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>> Druk dit bericht a.u.b. niet onnodig af.
>> Please do not print this message unnecessarily.
>>
>> Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer
>> en binden het INBO onder geen enkel beding, zolang dit bericht ni

Re: [R] ggplot2: proper use of facet_grid inside a function

2009-10-05 Thread Bryan Hanson
Thanks Thierry for the work-around.  I was out of ideas.

I had looked around for the facet_grid() analog of aes_string(), and
concluded there wasn't one.  The only thing I found was the notion of

facet_grid("...") but apparently it is intended for some other use, as it
doesn't work as I thought it would (like a hypothetical
facet_grid_string()).

Thanks so much.  Bryan


On 10/5/09 4:12 AM, "ONKELINX, Thierry"  wrote:

> Dear Bryan,
> 
> In the ggplot() function you can choose between aes() and aes_string().
> In the first you need to hardwire the variable names, in the latter you
> can use objects which contain the variable names. So in your case you
> need aes_string().
> 
> Unfortunately, facet_grid() works like aes() and not like aes_string().
> That is why you are getting errors.
> 
> A workaround would be to add a dummy column to your data.
> 
> library(ggplot2)
> data <- mpg
> fac1 <- "cty"
> fac2 <- "drv"
> res <- "displ"
> data$dummy <- data[, fac2]
> ggplot(data, aes_string(x = fac1, y = res)) + geom_point() +
> facet_grid(.~dummy)
> 
> HTH,
> 
> Thierry
> 
> 
> 
> 
> ir. Thierry Onkelinx
> Instituut voor natuur- en bosonderzoek / Research Institute for Nature
> and Forest
> Cel biometrie, methodologie en kwaliteitszorg / Section biometrics,
> methodology and quality assurance
> Gaverstraat 4
> 9500 Geraardsbergen
> Belgium
> tel. + 32 54/436 185
> thierry.onkel...@inbo.be
> www.inbo.be
> 
> To call in the statistician after the experiment is done may be no more
> than asking him to perform a post-mortem examination: he may be able to
> say what the experiment died of.
> ~ Sir Ronald Aylmer Fisher
> 
> The plural of anecdote is not data.
> ~ Roger Brinner
> 
> The combination of some data and an aching desire for an answer does not
> ensure that a reasonable answer can be extracted from a given body of
> data.
> ~ John Tukey
>  
> -Oorspronkelijk bericht-
> Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> Namens Bryan Hanson
> Verzonden: vrijdag 2 oktober 2009 17:21
> Aan:  Onderwerp: [R] ggplot2: proper use of facet_grid inside a function
> 
> Hello Again R Folk:
> 
> I have found items about this in the archives, but I'm still not getting
> it right.  I want to use ggplot2 with facet_grid inside a function with
> user specified variables, for instance:
> 
> p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(. ~
> fac2)
> 
> Where data, fac1, fac2 and res are arguments to the function.  I have
> tried
> 
> p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(. ~
> as.name(fac2))
> 
> and 
> 
> p <- ggplot(data, aes_string(x = fac1, y = res)) + facet_grid(". ~
> fac2")
> 
> But all of these produce the same error:
> 
> Error in `[.data.frame`(plot$data, , setdiff(cond, names(df)), drop =
> FALSE) : 
>   undefined columns selected
> 
> If I hardwire the true identity of fac2 into the function, it works as
> desired, so I know this is a problem of connecting the name with the
> proper value.
> 
> I'm up to date on everything:
> 
> R version 2.9.2 (2009-08-24)
> i386-apple-darwin8.11.1
> 
> locale:
> en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8
> 
> attached base packages:
> [1] grid  datasets  tools utils stats graphics
> grDevices methods
> [9] base 
> 
> other attached packages:
>  [1] Hmisc_3.6-0        ggplot2_0.8.3      reshape_0.8.3      proto_0.3-8       
>  [5] mvbutils_2.2.0     ChemoSpec_1.1      lattice_0.17-25    mvoutlier_1.4     
>  [9] plyr_0.1.8         RColorBrewer_1.0-2 chemometrics_0.4   som_0.3-4         
> [13] robustbase_0.4-5   rpart_3.1-45       pls_2.1-0          pcaPP_1.7         
> [17] mvtnorm_0.9-7      nnet_7.2-48        mclust_3.2         MASS_7.2-48       
> [21] lars_0.9-7         e1071_1.5-19       class_7.2-48
> 
> loaded via a namespace (and not attached):
> [1] cluster_1.12.0
> 
> Thanks for any help!  Bryan
> *
> Bryan Hanson
> Professor of Chemistry & Biochemistry
> DePauw University, Greencastle IN USA
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> Druk dit bericht a.u.b. niet onnodig af.
> Please do not print this message unnecessarily.
> 
> Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer
> en binden het INBO onder geen enkel beding, zolang dit bericht niet bevestigd
> is
> door een geldig ondertekend document. The views expressed in  this message
> and any annex are purely those of the writer and may not be regarded as
> stating 
> an official position of INBO, as long as the message is not confirmed by a
> duly 
> signed document.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] convert RData to txt

2009-10-05 Thread Henrique Dallazuanna
You can try something about like this:

lapply(ls(), function(obj)cat("\n", obj, "<-",
paste(deparse(get(obj)), collapse = "\n"), file = 'RData.txt', append
= TRUE))
source('RData.txt')


On Mon, Oct 5, 2009 at 4:00 AM,   wrote:
> hello all,
> will you plz tell me how can i convert RData files to txt,,,
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Ubuntu, Revolutions, R

2009-10-05 Thread Hans W. Borchers

I updated to Ubuntu 9.10 Beta yesterday, and yes I do see the same message
and I am a bit irritated.  I don't want to read these 'marketing' lines any
time I start up R.
I simply deleted the lines from "/etc/R/Rprofile.site" for now, but I am 
still wondering who put that in. Is there a deeper reason that I'm missing?

Hans Werner


gunksta wrote:
> 
> For those who don't follow Ubuntu development carefully, the first Beta
> for the 
> next Ubuntu was recently released, so I took my home system and upgraded
> to 
> help out with filing bugs, etc. 
> 
> Just to be clear, I am not looking for help with the upgrade process. I've
> had 
> R, and a few miscellaneous CRAN packages installed on this computer for
> years. 
> Today, when I loaded an R session I had developed before the upgrade, I
> saw 
> something new in my R "welcome message".
>>
>>R version 2.9.2 (2009-08-24)
>>Copyright (C) 2009 The R Foundation for Statistical Computing
>>ISBN 3-900051-07-0
>>
>>R is free software and comes with ABSOLUTELY NO WARRANTY.
>>You are welcome to redistribute it under certain conditions.
>>Type 'license()' or 'licence()' for distribution details.
>>
>>R is a collaborative project with many contributors.
>>Type 'contributors()' for more information and
>>'citation()' on how to cite R or R packages in publications.
>>
>>
>>This is REvolution R version 3.0.0:
>>the optimized distribution of R from REvolution Computing.
>>REvolution R enhancements Copyright (C) REvolution Computing, Inc.
>>
>>Checking for REvolution MKL:
>  >- REvolution R enhancements not installed.
>>For improved performance and other extensions: apt-get install
revolution-r
> 
> The last part, about this being the "enhanced" version of R was . . . 
> unexpected.  I have heard of this company before and now I've spent some
> time 
> on their website. Looking at my installation, Ubuntu did not install any
> of 
> the REvolution Computing components, although R now basically throws a
> warning 
> every time I start it.
> 
> My question(s) for the community is this (pick any question(s) you like to 
> answer: 
> Should I install the REvolution Computing packages? 
> Do these packages really make R faster? 
> Are these packages stable? 
> What are your experiences with REvolution Computing software?
> 
> I am interested in hearing from members of the community, REvolution
> Computing 
> employees/supporters (although please ID yourself as such) and most anyone 
> else. I can see what they say on their website, but I'm interested in
> getting 
> other opinions too.
> 
> Thanks!
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Ubuntu%2C-Revolutions%2C-R-tp25744817p25749786.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] convert RData to txt

2009-10-05 Thread Paul Hiemstra

mykh...@gmail.com wrote:
hello all, 
will you plz tell me how can i convert RData files to txt,,,


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
  

Hi,

That depends on what is in the RData file, which can hold any R objects, so 
there is no single answer without more information; please see the posting 
guide and provide some details about your data.


In general you can use the load() command to load the RData file, and if the 
object is, for example, a data frame you can use write.csv to write it to a 
plain-text file.
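
For example (a minimal sketch; the file and object names are only
placeholders):

load("myfile.RData")                                  # restores the saved objects
ls()                                                  # see what was loaded
write.csv(mydata, "mydata.txt", row.names = FALSE)    # if 'mydata' is a data frame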


cheers
Paul

--
Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone:  +3130 274 3113 Mon-Tue
Phone:  +3130 253 5773 Wed-Fri
http://intamap.geo.uu.nl/~paul

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get NA's into the output of xtabs?

2009-10-05 Thread jim holtman
Try the reshape package:

> my.df
                   Show  Size       Date
1             Babylon 5 0.000 2007-08-03
2                Dr Who 0.701 2007-08-03
3                Dr Who 0.850 2007-08-04
4                Dr Who 0.850 2007-08-05
5             Star Trek 0.700 2007-08-03
6             Star Trek 0.800 2007-08-04
7             Torchwood 0.800 2007-08-04
8             Torchwood 0.900 2007-08-05
9 Sarah Jane Adventures 0.200 2007-08-05
> require(reshape)
Loading required package: reshape
Loading required package: plyr
> str(my.df)
'data.frame':   9 obs. of  3 variables:
 $ Show: Factor w/ 5 levels "Babylon 5","Dr Who",..: 1 2 2 2 3 3 4 4 5
 $ Size: num  0 0.701 0.85 0.85 0.7 0.8 0.8 0.9 0.2
 $ Date:Class 'Date'  num [1:9] 13728 13728 13729 13730 13728 ...
> x <- melt(my.df, id=c("Show", "Date"))
> cast(x, Show ~ Date)
                   Show 2007-08-03 2007-08-04 2007-08-05
1             Babylon 5      0.000         NA         NA
2                Dr Who      0.701       0.85       0.85
3             Star Trek      0.700       0.80         NA
4             Torchwood         NA       0.80       0.90
5 Sarah Jane Adventures         NA         NA       0.20
>
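
A hedged base-R alternative (my.df as built in the original post below);
tapply() leaves the missing Show/Date combinations as NA rather than 0:

with(my.df, tapply(Size, list(Show, as.character(Date)), sum))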


On Mon, Oct 5, 2009 at 7:03 AM, Tony Breyal  wrote:
> Dear all,
>
> Lets say I have the following data frame:
>
>> df1 <- data.frame(Show=c('Star Trek', 'Babylon 5', 'Dr Who'), Size=c(0.7, 
>> 0.0, 0.701),  Date=as.Date(c('2007-08-03', '2007-08-03', '2007-08-03'), 
>> format='%Y-%m-%d'))
>> df2 <- data.frame(Show=c('Star Trek', 'Dr Who', 'Torchwood'), Size=c(0.8, 
>> 0.85, 0.8), Date=as.Date(c('2007-08-04', '2007-08-04', '2007-08-04'), 
>> format='%Y-%m-%d'))
>> df3 <- data.frame(Show=c('Sarah Jane Adventures', 'Torchwood', 'Dr Who'), 
>> Size=c(0.2, 0.9, 0.85), Date=as.Date(c('2007-08-05', '2007-08-05', 
>> '2007-08-05'), format='%Y-%m-%d'))
>> df.list <- list(df1, df2, df3)
>> my.df <- Reduce(function(x, y) merge(x, y, all=TRUE), df.list, accumulate=F)
>> my.df
>                   Show  Size       Date
> 1             Babylon 5 0.000 2007-08-03
> 2                Dr Who 0.701 2007-08-03
> 3                Dr Who 0.850 2007-08-04
> 4                Dr Who 0.850 2007-08-05
> 5             Star Trek 0.700 2007-08-03
> 6             Star Trek 0.800 2007-08-04
> 7             Torchwood 0.800 2007-08-04
> 8             Torchwood 0.900 2007-08-05
> 9 Sarah Jane Adventures 0.200 2007-08-05
>>
>
> I would like to come up with something like this:
>
>
> Show                          2007-08-03 2007-08-04 2007-08-05
> Babylon 5                    0.000         NA            NA
> Dr Who                       0.701         0.850         0.850
> Star Trek                     0.700         0.800         NA
> Torchwood                   NA            0.800         0.900
> Sarah Jane Adventures NA            NA             0.200
>
> The best i can do so far is:
>
>> xtabs(as.numeric(Size) ~ Show + Date, data = my.df)
>                       Date
> Show                    2007-08-03 2007-08-04 2007-08-05
>  Babylon 5                  0.000      0.000      0.000
>  Dr Who                     0.701      0.850      0.850
>  Star Trek                  0.700      0.800      0.000
>  Torchwood                  0.000      0.800      0.900
>  Sarah Jane Adventures      0.000      0.000      0.200
>
> Many thanks in advance,
> Tony
>
>
> # Win Vista Ultimate
>> sessionInfo()
> R version 2.9.2 (2009-08-24)
> i386-pc-mingw32
>
> locale:
> LC_COLLATE=English_United Kingdom.1252;LC_CTYPE=English_United Kingdom.
> 1252;LC_MONETARY=English_United Kingdom.
> 1252;LC_NUMERIC=C;LC_TIME=English_United Kingdom.1252
>
> attached base packages:
> [1] stats     graphics  grDevices utils     datasets  methods
> base
>>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to get NA's into the output of xtabs?

2009-10-05 Thread Tony Breyal
Dear all,

Lets say I have the following data frame:

> df1 <- data.frame(Show=c('Star Trek', 'Babylon 5', 'Dr Who'), Size=c(0.7, 
> 0.0, 0.701),  Date=as.Date(c('2007-08-03', '2007-08-03', '2007-08-03'), 
> format='%Y-%m-%d'))
> df2 <- data.frame(Show=c('Star Trek', 'Dr Who', 'Torchwood'), Size=c(0.8, 
> 0.85, 0.8), Date=as.Date(c('2007-08-04', '2007-08-04', '2007-08-04'), 
> format='%Y-%m-%d'))
> df3 <- data.frame(Show=c('Sarah Jane Adventures', 'Torchwood', 'Dr Who'), 
> Size=c(0.2, 0.9, 0.85), Date=as.Date(c('2007-08-05', '2007-08-05', 
> '2007-08-05'), format='%Y-%m-%d'))
> df.list <- list(df1, df2, df3)
> my.df <- Reduce(function(x, y) merge(x, y, all=TRUE), df.list, accumulate=F)
> my.df
                   Show  Size       Date
1             Babylon 5 0.000 2007-08-03
2                Dr Who 0.701 2007-08-03
3                Dr Who 0.850 2007-08-04
4                Dr Who 0.850 2007-08-05
5             Star Trek 0.700 2007-08-03
6             Star Trek 0.800 2007-08-04
7             Torchwood 0.800 2007-08-04
8             Torchwood 0.900 2007-08-05
9 Sarah Jane Adventures 0.200 2007-08-05
>

I would like to come up with something like this:


Show                   2007-08-03 2007-08-04 2007-08-05
Babylon 5                   0.000         NA         NA
Dr Who                      0.701      0.850      0.850
Star Trek                   0.700      0.800         NA
Torchwood                      NA      0.800      0.900
Sarah Jane Adventures          NA         NA      0.200

The best i can do so far is:

> xtabs(as.numeric(Size) ~ Show + Date, data = my.df)
   Date
Show                    2007-08-03 2007-08-04 2007-08-05
  Babylon 5                  0.000      0.000      0.000
  Dr Who                     0.701      0.850      0.850
  Star Trek                  0.700      0.800      0.000
  Torchwood                  0.000      0.800      0.900
  Sarah Jane Adventures      0.000      0.000      0.200

Many thanks in advance,
Tony


# Win Vista Ultimate
> sessionInfo()
R version 2.9.2 (2009-08-24)
i386-pc-mingw32

locale:
LC_COLLATE=English_United Kingdom.1252;LC_CTYPE=English_United Kingdom.
1252;LC_MONETARY=English_United Kingdom.
1252;LC_NUMERIC=C;LC_TIME=English_United Kingdom.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods
base
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] COURSE: Introduction to Metabolomics and biomarker research

2009-10-05 Thread Matthias Kohl

Introduction to Metabolomics and biomarker research.

The course will take place in Innsbruck, 16th to 17th November 2009, and
will be held in German.

For more details see:
http://www.gdch.de/vas/fortbildung/kurse/fortbildung2009.htm#1175

--
Dr. Matthias Kohl
www.stamats.de

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Parsing Files in R (USGS StreamFlow data)

2009-10-05 Thread Gabor Grothendieck
Repeat the DD calculation but with this pattern instead:

 pat <- "^# +USGS +([0-9]+) +(.*)"

and then merge DD with DF:

DDdf <- data.frame(gauge = as.numeric(DD[,1]), gauage_name = DD[,2])
both <- merge(DF, DDdf, by = "gauge", all.x = TRUE)


On Sun, Oct 4, 2009 at 10:14 PM, Gabor Grothendieck
 wrote:
> Its not completely clear what you want to preserve and what you want
> to eliminate but try this:
>
>> L <- 
>> readLines("http://waterdata.usgs.gov/nwis/uv?format=rdb&period=7&site_no=021973269,06018500";)
>
>> L.USGS <- grep("^USGS", L, value = TRUE)
>> DF <- read.table(textConnection(L.USGS), fill = TRUE)
>> head(DF)
>    V1       V2         V3    V4   V5   V6   V7
> 1 USGS 21973269 2009-09-27 00:00 6.96 4990 0.00
> 2 USGS 21973269 2009-09-27 00:15 6.96 4990 0.00
> 3 USGS 21973269 2009-09-27 00:30 6.97 5000 0.01
> 4 USGS 21973269 2009-09-27 00:45 6.97 5000 0.00
> 5 USGS 21973269 2009-09-27 01:00 6.98 5010 0.00
> 6 USGS 21973269 2009-09-27 01:15 6.98 5010 0.00
>
>> pat <- "^# +([0-9]+) +([0-9]+) +(.*)"
>> L.DD <- grep(pat, L, value = TRUE)
>> library(gsubfn)
>> DD <- strapply(L.DD, pat, c, simplify = rbind)
>> head(DD)
>     [,1] [,2]    [,3]
> [1,] "01" "00065" "Gage height, feet"
> [2,] "02" "00060" "Discharge, cubic feet per second"
> [3,] "03" "00045" "Precipitation, total, inches"
> [4,] "02" "00065" "Gage height, feet"
> [5,] "05" "00060" "Discharge, cubic feet per second"
>
>
> On Sun, Oct 4, 2009 at 9:49 PM, stephen sefick  wrote:
>> http://waterdata.usgs.gov/nwis/uv?format=rdb&period=7&site_no=021973269
>>
>> I would like to be able to parse this file up:
>>
>> I can do this
>> x <- 
>> read.table("http://waterdata.usgs.gov/nwis/uv?format=rdb&period=7&site_no=021973269";,
>> skip=26)
>>
>> but If I add another gauge to this
>>
>> x <- 
>> read.table("http://waterdata.usgs.gov/nwis/uv?format=rdb&period=7&site_no=021973269,06018500";,
>> skip=26)
>> It does not work because there are two files appended to each other.
>>
>> It would be easy enough to write the code so that each individual
>> gauge would be read in as a different file, but is there a way to get
>> this information in using the commented part of the file to give the
>> headers?  This is probably a job for some other programing language
>> like perl, but I don't know perl.
>>
>> any help would be very helpful.
>> regards,
>>
>> --
>> Stephen Sefick
>>
>> Let's not spend our time and resources thinking about things that are
>> so little or so large that all they really do for us is puff us up and
>> make us feel like gods.  We are mammals, and have not exhausted the
>> annoying little problems of being mammals.
>>
>>                                                                -K. Mullis
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] else if statement error

2009-10-05 Thread Martin Maechler
> "DW" == David Winsemius 
> on Sat, 3 Oct 2009 12:56:51 -0400 writes:

DW> On Oct 3, 2009, at 11:54 AM, Chen Gu wrote:

>> Hello,
>> 
>> I am doing a simple if else statement in R. But it always comes out  
>> error
>> such as 'unexpected error'
>> There are two variables. ini and b. when ini=1, a=3; when ini>1 and b>2,
>> a=5; all other situations, a=6. I don't know where it is wrong.
>> Here is my code
>> 
>> ini=3
>> b=4
DW> Your basic problem is that you are confusing if and else which are  
DW> program control functions with ifelse which is designed for assignment  
DW> purposes;

David, not quite:  in R, "everything"(*) is a function,
and in the example we have here 
(and innumerous similar examples)   ifelse(.) is not efficient
to use:
>> ini=3
>> b=4
>> a <-   ifelse( ini==1, 3, ifelse( ini>1 & b>2 , 5, 6))
>> 
>> a
 > [1] 5

More efficient -- and also nicer in my eyes ---
is
a <-  if(ini == 1) 3 else if(ini > 1 && b > 2) 5  else  6

As I say on many occasions:  
   ifelse() is useful and nice, but it is used in many more
   places than it should {BTW, not only because of examples like the above}.

Martin Maechler, ETH Zurich


(*) "almost"


>> if (ini==1) {
>> a=3
>> }
>> else if (ini>1 and b>2 ) {
DW> The error is probably being thrown because "and" is not a valid  
DW> conjunction operator in R.
>> 1 and 1
DW> Error: syntax error
>> 1 & 1
DW> [1] TRUE


>> a=5
>> }
>> else {a=6}
>> 
>> e.

DW> David Winsemius, MD
DW> Heritage Laboratories
DW> West Hartford, CT

DW> __
DW> R-help@r-project.org mailing list
DW> https://stat.ethz.ch/mailman/listinfo/r-help
DW> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
DW> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] convert RData to txt

2009-10-05 Thread mykhwab
hello all, 
will you plz tell me how can i convert RData files to txt,,,

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] GAM question

2009-10-05 Thread Simon Wood
If you mean the values of the smooth function for different values of its 
argument, then take a look at setting `type="terms"' for `predict.gam'... this 
returns the predictions split up by individual smooth terms (and any parametric 
terms).
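
A minimal sketch using mgcv's simulated example data rather than the poster's
own model:

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200)                        # built-in simulated example data
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
tm <- predict(b, type = "terms")                 # matrix with one column per model term
head(tm[, "s(x1)"])                              # values of the s(x1) smooth at the data points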


On Thursday 01 October 2009 12:49, Daniel Rabczenko wrote:
> Hello evyrone,
> I would be grateful if you could help me in (I hope) simple problem.
> I fit a gam model (from mgcv package) with several smooth functions .
> I don't know how to extract values of just one smooth function. Can you
> please help me in this?
> Kind regards,
> Daniel Rabczenko
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html and provide commented, minimal,
> self-contained, reproducible code.

-- 
> Simon Wood, Mathematical Sciences, University of Bath, Bath, BA2 7AY UK
> +44 1225 386603  www.maths.bath.ac.uk/~sw283

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Unusual error while using coxph

2009-10-05 Thread Laura Bonnett
Hi all,

I'm very confused!  I've been using the same code for many weeks without any
bother for various covariates.  I'm now looking at another covaraite and
whenever I run the code you can see below I get an error message: "Error in
rep(0, nrow(data)) : invalid 'times' argument"

This code works:
# remove 'missing' cases from data #
snearma <- function(data)
{
for(i in 1:nrow(data)){
if(is.na(data$all.sp)[i])
data$all.sp[i]<-0
if(is.na(data$all.cp)[i])
data$all.cp[i]<-0
if(is.na(data$all.scgtc)[i])
data$all.scgtc[i]<-0
if(is.na(data$all.tc)[i])
data$all.tc[i] <- 0
if(is.na(data$all.ta)[i])
data$all.ta[i] <- 0
if(is.na(data$all.aa)[i])
data$all.aa[i] <- 0
if(is.na(data$all.m)[i])
data$all.m[i] <- 0
if(is.na(data$all.otc)[i])
data$all.otc[i] <- 0
if(is.na(data$all.o)[i])
data$all.o[i] <- 0
}
dummy <- rep(0,nrow(data))
for(i in 1:nrow(data)){
if(data$all.sp[i]==0 && data$all.cp[i]==0 && data$all.scgtc[i]==0 &&
data$all.tc[i]==0 && data$all.ta[i]==0 && data$all.aa[i]==0 &&
data$all.m[i]==0 & data$all.otc[i]==0 && data$all.o[i]==0)
dummy[i] <- i
}
return(data[-dummy,])
}
# create smaller dataset with missing cases removed #
nmarma <- snearma(nearma)
# create short stratification variable #
nmrpa <- randp(nmarma)
# create censoring variable for the covariate #
stypea <- function(data)
{
for(i in 1:nrow(data)){
if(is.na(data$all.sp)[i])
data$all.sp[i]<-0
if(is.na(data$all.cp)[i])
data$all.cp[i]<-0
if(is.na(data$all.scgtc)[i])
data$all.scgtc[i]<-0
if(is.na(data$all.tc)[i])
data$all.tc[i] <- 0
if(is.na(data$all.ta)[i])
data$all.ta[i] <- 0
if(is.na(data$all.aa)[i])
data$all.aa[i] <- 0
if(is.na(data$all.m)[i])
data$all.m[i] <- 0
if(is.na(data$all.otc)[i])
data$all.otc[i] <- 0
if(is.na(data$all.o)[i])
data$all.o[i] <- 0
}
stype <- rep(0,nrow(data))
for(i in 1:nrow(data)){
if(data$all.type[i]=="P" && data$all.sp[i]>=1 && data$all.scgtc[i] == 0)
stype[i] <- 1
if(data$all.type[i]=="P" && data$all.cp[i]>=1 && data$all.scgtc[i] == 0)
stype[i] <- 1
if(data$all.type[i]=="P" && data$all.scgtc[i]>=1)
stype[i] <- 2
if(data$all.type[i]=="P" && data$all.sp[i]==0 && data$all.cp[i]==0 &&
data$all.scgtc[i]==0 && data$all.otc[i]==0 && data$all.o[i]==0)
stype[i] <- 2
if(data$all.type[i]=="G" && data$all.tc[i]>=1 && data$all.m[i]==0 &&
data$all.ta[i]==0 & data$all.aa[i]==0)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.m[i]>=1 && data$all.tc[i]==0)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.ta[i]>=1 && data$all.tc[i]==0)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.aa[i]>=1 && data$all.tc[i]==0)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.m[i]>=1 && data$all.tc[i]>=1)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.ta[i]>=1 && data$all.tc[i]>=1)
stype[i] <- 3
if(data$all.type[i]=="G" && data$all.aa[i]>=1 && data$all.tc[i]>=1)
stype[i] <- 3
if(data$all.otc[i]>=1)
stype[i] <- 4
if(data$all.o[i]>=1)
stype[i] <- 4
}
return(stype)
}
fita <-
survdiff(Surv(rem.Remtime,rem.Rcens)~stypea(nmarma)+strata(nmrpa),data=nmarma)
fita
lrpvalue=1-pchisq(fita$chisq,3)
xx <-
cuminc(nmarma$rem.Remtime/365,nmarma$rem.Rcens,stypea(nmarma),strata=nmrpa)
plot(xx,curvlab=c("Simple/Complex","SC+2gentc or 2gentc","TC or My/Ab or
My/Ab+gentc","Other"),lty=1,color=c(2:5),xlab="Time from randomisation
(years)",ylab="Probability of 12-month remission",main="Time to 12-month
remission",wh=c(2.0,0.4))
text(4,0.5,cex=0.85,paste("Log-rank
test=",round(fita$chisq,3),"p-value=",round(lrpvalue,3)))

whereas this doesn't:
par <- function(data)
{
dummy <- rep(0,nrow(data))
for(i in 1:nrow(data)){
if(is.na(data$all.frontlob)[i] && is.na(data$all.templob)[i] &&
is.na(data$all.parlob)[i]
&& is.na(data$all.occlob)[i] && is.na(data$all.notspec)[i])
dummy[i]<- i
}
for(i in 1:nrow(data)){
if(is.na(data$all.frontlob)[i])
data$all.frontlob[i] <- "N"
if(is.na(data$all.templob)[i])
data$all.templob[i] <- "N"
if(is.na(data$all.parlob)[i])
data$all.parlob[i] <- "N"
if(is.na(data$all.occlob)[i])
data$all.occlob[i] <- "N"
if(is.na(data$all.notspec)[i])
data$all.notspec[i] <- "N"
}
for(i in 1:nrow(data)){
if(data$all.frontlob[i]=="N" && data$all.templob[i]=="N" &&
data$all.parlob[i]=="N" && data$all.occlob[i]=="N" &&
data$all.notspec[i]=="N")
dummy[i] <- i
if(data$all.frontlob[i]=="Y" && data$all.parlob[i]=="Y")
dummy[i] <- i
}
return(data[-dummy,])
}
shortpar <- par(nearma)
shortrpa <- randp(shortpar)
lobe <- function(data)
{
for(i in 1:nrow(data)){
if(is.na(data$all.frontlob)[i])
data$all.frontlob[i] <- "N"
if(is.na(data$all.templob)[i])
data$all.templob[i] <- "N"
if(is.na(data$all.occlob)[i])
data$all.occlob[i]

Re: [R] .Rprofile file

2009-10-05 Thread Petr PIKAL
Hi

I modify the Rprofile.site file in the etc directory of the installed version 
of R, and I load all the packages and data files I use through it. You should 
also have a look at the Rconsole file in the same directory.
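
A minimal sketch of the kind of entries that can go into Rprofile.site (or a
per-project .Rprofile); the editor and package named here are only examples:

options(editor = "notepad")            # default text editor
.First <- function() {
  ## attach frequently used packages at start-up
  suppressMessages(require(lattice))
}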

Regards
Petr

r-help-boun...@r-project.org napsal dne 02.10.2009 13:03:25:

> > I want to use the .RProfile to set defaults such as text editor.  Is 
> > this a file I need to create?  Also, where should I put it?  I tend to 
 
> > create .RData files for different projects, putting each in a 
different 
> > Windows (Vista) folder.  Is one .Rprofile file created  that any 
> > instance of R can access  (I would imagine so)?
> 
> I'm just trying to find that out myself for XP -- on Linux it is very
> easy, you just create ~/.Rprofile in your home directory.
> 
> I think this thread might help, but I haven't had time to pursue this
> yet:
> 
> http://tolstoy.newcastle.edu.au/R/devel/05/12/3454.html
> 
> Marianne
> 
> 
> -- 
> Marianne Promberger PhD
> King's College London
> London SE1 9RT
> Phone: 020 7188 2590
> GnuPG/PGP public key ID 80AD9916
> .tex .bib .R .Rnw files welcome
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ggplot2: proper use of facet_grid inside a function

2009-10-05 Thread ONKELINX, Thierry
Dear Bryan,

In the ggplot() function you can choose between aes() and aes_string().
In the first you need to hardwire the variable names, in the latter you
can use objects which contain the variable names. So in your case you
need aes_string().

Unfortunately, facet_grid() works like aes() and not like aes_string().
That is why you are getting errors.

A workaround would be to add a dummy column to your data.

library(ggplot2)
data <- mpg
fac1 <- "cty"
fac2 <- "drv"
res <- "displ"
data$dummy <- data[, fac2]
ggplot(data, aes_string(x = fac1, y = res)) + geom_point() +
facet_grid(.~dummy)

HTH,

Thierry




ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
Cel biometrie, methodologie en kwaliteitszorg / Section biometrics,
methodology and quality assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium
tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to
say what the experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of
data.
~ John Tukey

-Oorspronkelijk bericht-
Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
Namens Bryan Hanson
Verzonden: vrijdag 2 oktober 2009 17:21
Aan: https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Druk dit bericht a.u.b. niet onnodig af.
Please do not print this message unnecessarily.

Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer 
en binden het INBO onder geen enkel beding, zolang dit bericht niet bevestigd is
door een geldig ondertekend document. The views expressed in  this message 
and any annex are purely those of the writer and may not be regarded as stating 
an official position of INBO, as long as the message is not confirmed by a duly 
signed document.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] The nature of evidence (was Re: Urgently needed Exercisesolutions related to PracticalData)

2009-10-05 Thread Daniel Malter
You are right, Patrick, there is no evidence that he is - as much as there
is no evidence that he is not. There is no evidence that he is in
Switzerland; there is no evidence that he is in Computer Science; and there
is no evidence that he is in fact "Mahesh." It seems to me we are now
redeclaring zero evidence for A to likely evidence for B by engaging in a
mixture of unqualified psychological profiling, speculation, and still,
discipline-based contempt.  And from all this we conclude that it is about
50 % likely that he is an MBA student. - Really? I still find this strange
for a group that is "obsessed" with valid inference and small alpha errors.
I prefer to stick with Descartes: "It is even useful to assume the dubious
as false to find the more clearly that which is very certain and easily
recognizable." And as such your story - even though possible - is still not
more than one out of many possible stories. But since I am probably biased
in the other direction, may be we can get covername "Mahesh" to send us a
private email what he is really doing.

Daniel

-
cuncta stricte discussurus
-

-Ursprüngliche Nachricht-
Von: Patrick Connolly [mailto:p_conno...@slingshot.co.nz] 
Gesendet: Monday, October 05, 2009 1:56 AM
An: Daniel Malter
Cc: 'David Winsemius'; r-help@r-project.org
Betreff: [R] The nature of evidence (was Re: [R] Urgently needed
Exercisesolutions related to PracticalData)

On Sat, 03-Oct-2009 at 11:35PM -0400, Daniel Malter wrote:

|> This person has probably mistaken "expelsior" for "excelsior." 
|> 
|> However, be that as it may, I am personally annoyed by your 
|> statement, David, in which you indicate that you believe this to be 
|> an MBA/business school problem, especially since the delinquent 
|> clearly indicates in his post - for which YOU have provided the link 
|> - that he is a computer science student, and also because it is 
|> presented without any evidence (which is particularly ironic because 
|> this list is for helping with a program that analyzes "evidence").

In fairness to David, what is the evidence that the OP really *is* a
computer science student?  Something tells me not to believe a word coming
from the "evidence" Daniel uses.  The idea of an MBA student showing how
diverse his skills are by getting enough credits to pass a computer science
qualification is not that far-fetched.  Being able to wing an MBA-type job
with such a qualification would be somewhat easier than winging a computer
science job.

On balance, I'd say the evidence for David's postulate is at least as good
as it is for the null.


|> I say that with the full consciousness and great respect that I have 
|> for your numerous, very helpful contributions to this list and, 
|> personally, to problems I had, but I don't think that this uppity 
|> disciplinary attitude is appropriate.
|> 
|> Daniel
|> 
|> -
|> cuncta stricte discussurus
|> -
|> 
|> -Ursprüngliche Nachricht-
|> Von: r-help-boun...@r-project.org 
|> [mailto:r-help-boun...@r-project.org] Im Auftrag von David Winsemius
|> Gesendet: Saturday, October 03, 2009 9:48 PM
|> An: Mahesh Krishnan
|> Cc: r-help@r-project.org
|> Betreff: Re: [R] Urgently needed Exercise solutions related to 
|> PracticalData Analysis Using R Statisctial Software
|> 
|> 
|> On Oct 3, 2009, at 8:31 PM, Mahesh Krishnan wrote:
|> 
|> > Hello,
|> >
|> > Can anybody help me in solving these exercises on regular basis 
|> > paid or unpaid basis.
|> > If he/she want to pay by PayPal then it should be noticed that 
|> > mention at this email kindly your paypal email account where one 
|> > can transfer money in all cases.
|> >
|> > But please respect the deadline in al cases.. *Dealine for this 
|> > assignment is 07.10.09.* *If it is paid based then kindly let me 
|> > know a decent price for this ..*
|> >
|> > Regard,
|> > Mahesh
|> 
|> Mahesh has also posted this at:
|> http://www.odesk.com/jobs/Practical-Data-Analysis-Statistical-softwar
|> e-data-
|> analysis-needed_
|> ~~ee9e750d03fe4303?source=rss
|> 
|> And is offering payment for a longer term for "Solving Exercises for 
|> a student on regular basis!
|> Estimated Workload:As needed - Less than 10 hrs/week Estimated
|> Duration:3 to 6 months Last Buyer Activity:September 15, 2009"
|> 
|> Who could resist with the average fee of $11.64?
|> 
|> It's probably a life strategy. Pay your way through your business 
|> school homework exercises and then hire someone to do all your 
|> technical work once you get a management position.
|> 
|> --
|> 
|> David Winsemius, MD
|> Heritage Laboratories
|> West Hartford, CT
|> 
|> __
|> R-help@r-project.org mailing list
|> https://stat.ethz.ch/mailman/listinfo/r-help
|> PLEASE do read the posting guide 
|> http://www.R-project.org/posting-guide.html
|> and provide commented, minimal, self-contained, reproducible code.
|> 
|> ___

[R] try function

2009-10-05 Thread christophe dutang
Hi all,

I'm using the try() function for data import with read.csv(). I would
like to know whether there is a double allocation of memory when using this code
test.t <- try(input1 <- read.csv("myfile.csv") )
compared to this one
test.t <- try( read.csv("myfile.csv") )

I think that with the first version both test.t and input1 hold the input data
(when no error is raised), so I would be using twice the memory for the same
dataset. But when I look at the RAM used by R for these two versions (on
Windows XP with Process Explorer), the first version seems to use less
memory... which is counter-intuitive.
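
For what it is worth, one way to check this empirically is tracemem()
(available when R is compiled with memory profiling support); a minimal
sketch, assuming "myfile.csv" exists:

test.t <- try(input1 <- read.csv("myfile.csv"))
tracemem(input1)   # prints an address such as "<0x...>"
tracemem(test.t)   # if the same address is printed, both names bind the same
                   # object and no copy is made until one of them is modified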

what is the truth?

Thanks in advance

Christophe

PS : I'm using R 2.9.2 on win XP

-- 
Christophe DUTANG
Ph. D. student at ISFA

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] DEoptim 2.0-0

2009-10-05 Thread Katharine Mullen

Dear All,

We are happy to announce the release of the new version of DEoptim
(version 2.0-0), which is now available from CRAN.


The DEoptim package [3] performs Differential Evolution (DE) minimization,
a genetic-algorithm-based optimization technique [2,3]. This allows robust
minimization over a continuous (bounded or unbounded) domain.


The new DEoptim function calls a C implementation of the DE algorithm 
similar to the MS Visual C++ v5.0 implementation distributed with [2].


More details on DE optimization can be found on the DE homepage [1].
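
A minimal usage sketch (not part of the announcement; the two-dimensional
Rosenbrock test function is used for illustration, the fn/lower/upper
interface is taken from the modification notes below, and the
out$optim$bestmem / out$optim$bestval element names are assumed from the
package documentation):

library(DEoptim)
## Rosenbrock "banana" function; its global minimum is 0 at c(1, 1)
rosenbrock <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
set.seed(1)
out <- DEoptim(fn = rosenbrock, lower = c(-10, -10), upper = c(10, 10))
out$optim$bestmem   # should be close to c(1, 1)
out$optim$bestval   # should be close to 0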

We believe that the DE approach may be applicable in many fields of 
research and hope that the package DEoptim will be fruitful for many 
researchers.


Kate Mullen and David Ardia

MODIFICATIONS
  o The R-based implementation of Differential Evolution has been
replaced with a C-based implementation similar to the MS Visual C++
v5.0 implementation accompanying the book `Differential Evolution -
A Practical Approach to Global Optimization', downloaded from
http://www.icsi.berkeley.edu/~storn/DeWin.zip.

The new C implementation is significantly faster.

  o The S3 method for plotting has been enhanced. It now allows plotting
the intermediate populations, if they are provided.

  o The package maintainer has been changed to Katharine Mullen,
.

  o A NAMESPACE has been added.

  o Argument FUN for DEoptim is now called fn for compatibility with
optim.

  o The demo file has been removed.

  o The CITATION file has been modified.

REFERENCES

[1] Differential Evolution homepage: 
http://www.icsi.berkeley.edu/~storn/code.html


[2] Price, K.V., Storn, R.M., Lampinen J.A. (2005). Differential Evolution
- A Practical Approach to Global Optimization. Springer-Verlag.  ISBN
3540209506.

[3] Ardia, D., Mullen, K. (2009). DEoptim: Differential
Evolution Optimization in R. R package version 2.0-0. URL
http://CRAN.R-project.org/package=DEoptim

___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R: help with regexp mass substitution

2009-10-05 Thread Luca Braglia
Mmm, just Friday ... :)

Thank you jim & gabor

> -Messaggio originale-
> Da: jim holtman
> [mailto:jholt...@gmail.com]
> Inviato: venerdì 2 ottobre 2009
> 14.12
> A: Luca Braglia
> Cc: r-help@r-project.org
> Oggetto: Re: [R] help with
> regexp mass substitution
> 
> You need perl=TRUE:
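
A purely illustrative substitution (the original pattern was not quoted here,
so the data and pattern below are made up) showing what perl = TRUE enables,
in this case a Perl-style look-behind:

x <- c("var_1", "var_2", "var_10")
gsub("(?<=_)([0-9]+)", "X\\1", x, perl = TRUE)
## [1] "var_X1"  "var_X2"  "var_X10"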

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls not accepting control parameter?

2009-10-05 Thread Rainer M Krug
On Fri, Oct 2, 2009 at 7:23 PM, Peter Ehlers  wrote:

> Hello Rainer,
>
> I think that your problem is with trying to fit a logistic model to
> data that don't support that model. Removing the first two points
> from your data will work (but of course it may not represent reality).
> The logistic function does not exhibit the kind of minimum that
> your data suggest.
>
>
Hi Peter

Partly - when I do as you suggest, it definitely works, but this does not
change the behaviour - the error message always says:

" step factor 0.000488281 reduced below 'minFactor' of 0.000976562"

and it does not change, whichever value I try to set minFactor to.
So either I am misunderstanding what the control argument for nls is doing,
or there is a bug in nls or in the error message.
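
A hedged way to narrow this down, using only the 'dat' defined in the quoted
message below, is to supply explicit starting values so that the automatic
self-start (getInitial) step is skipped and the control settings go straight
to the main fit; the starting values here are rough guesses for illustration:

st  <- c(Asym = 180000, xmid = 2010, scal = 10)
fit <- try(nls(y ~ SSlogis(x, Asym, xmid, scal), data = dat, start = st,
               control = nls.control(minFactor = 1/4096), trace = TRUE))
## If any error now quotes a 'minFactor' of 1/4096 (0.000244...), the control
## argument is honoured here, and the fixed 0.000976562 seen above most likely
## comes from the nls() call inside the automatic self-start step.
## Peter's suggestion of dropping the first two points can be tried the same
## way:  nls(y ~ SSlogis(x, Asym, xmid, scal), data = dat[-(1:2), ])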

Rainer




>  -Peter Ehlers
>
>
> Rainer M Krug wrote:
>
>> Hi
>>
>> I want to change a control parameter for an nls () as I am getting an
>> error
>> message  "step factor 0.000488281 reduced below 'minFactor' of
>> 0.000976562".
>> Despite all tries, it seems that the control parameter of the nls, does
>> not
>> seem to get handed down to the function itself, or the error message is
>> using a different one.
>>
>> Below system info and an example highlighting the problem.
>>
>> Thanks,
>>
>> Rainer
>>
>>
>> > version
>>                _
>> platform   i486-pc-linux-gnu
>> arch   i486
>> os linux-gnu
>> system i486, linux-gnu
>> status
>> major  2
>> minor  9.2
>> year   2009
>> month  08
>> day24
>> svn rev49384
>> language   R
>> version.string R version 2.9.2 (2009-08-24)
>>
>> > sessionInfo()
>> R version 2.9.2 (2009-08-24)
>> i486-pc-linux-gnu
>>
>> locale:
>>
>> LC_CTYPE=en_ZA.UTF-8;LC_NUMERIC=C;LC_TIME=en_ZA.UTF-8;LC_COLLATE=en_ZA.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_ZA.UTF-8;LC_PAPER=en_ZA.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_ZA.UTF-8;LC_IDENTIFICATION=C
>>
>> attached base packages:
>> [1] stats graphics  grDevices utils datasets  methods   base
>>
>> other attached packages:
>> [1] R.utils_1.2.0 R.oo_1.5.0R.methodsS3_1.0.3 maptools_0.7-26
>> [5] sp_0.9-44 foreign_0.8-37
>>
>> loaded via a namespace (and not attached):
>> [1] grid_2.9.2  lattice_0.17-25
>>
>>
>> #
>>
>> EXAMPLE:
>>
>> dat <- data.frame(
>>  x = 2006:2037,
>>  y = c(143088, 140218, 137964,
>>138313, 140005, 141483, 142365,
>>144114, 145335, 146958, 148584,
>>149398, 151074, 152241, 153919,
>>155580, 157258, 158981, 160591,
>>162126, 163743, 165213, 166695,
>>168023, 169522, 170746, 172057,
>>173287, 173977, 175232, 176308,
>>177484)
>>  )
>>
>> nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat, trace=TRUE)
>>
>> (newMinFactor <- 1/(4*1024))
>> nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
>> control=nls.control(minFactor=newMinFactor), trace=TRUE)
>> nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
>> control=c(minFactor=newMinFactor), trace=TRUE)
>>
>>
>> (newMinFactor <- 4/1024)
>> nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
>> control=nls.control(minFactor=newMinFactor), trace=TRUE)
>> nls( y ~ SSlogis(x, Asym, xmid, scal), data = dat,
>> control=c(minFactor=newMinFactor), trace=TRUE)
>>
>>
>>
>>


-- 
Rainer M. Krug, PhD (Conservation Ecology, SUN), MSc (Conservation Biology,
UCT), Dipl. Phys. (Germany)

Centre of Excellence for Invasion Biology
Natural Sciences Building
Office Suite 2039
Stellenbosch University
Main Campus, Merriman Avenue
Stellenbosch
South Africa

Cell:   +27 - (0)83 9479 042
Fax:+27 - (0)86 516 2782
Fax:+49 - (0)721 151 334 888
email:  rai...@krugs.de

Skype:  RMkrug
Google: r.m.k...@gmail.com

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.