[R] help need on working in subset within a dataframe

2011-03-21 Thread Umesh Rosyara
Dear R-experts
 
Excuse me for an easy question, but I need help.
 
For days I have been working with a large dataset, where operations are
needed within a component of the dataset. Here is my question:
 
I have a big dataset with variables x1 ... x1000 or so. What I need to do
is work on 4 consecutive variables at a time to calculate a statistic and
output it. So far so good. There are more vector operations inside the
function to do this. My question this time is that I want to do this
separately for each level of a factor (in the following example it is ped;
thus if there are 20 ped, I want an output with 20 statistics, so that I
can work further on them). 
 
#data generation 
ped <- c(1,1,1,1,1, 1,1,1,1,1, 2,2,2,2,2, 2,2,2,2,2)# I have 20 ped 
fd <- c(1,1,1,1,1, 2,2,2,2,2,  3,3,3,3,3, 4,4,4,4,4) # I have ~100 fd 
iid <- c(1:20) # number can go up to 2000  
mid <- c(0,0,1,1,1, 0,0,6,6,6, 0,0, 11,11,11, 0,0,16,16,16) 
fid <- c(0,0,2,2,2, 0,0,7,7,7, 0,0, 12,12,12, 0,0,17, 17, 17) 
y <- c(3,4,5,6,7,  3,4,8,9, 8,  2,3,3,6,7,  9,12,10,8,12)
x1 <- c(1,1,1,0,0, 1,0,1,1,0,   0, 1,1,0,1,1, 1,1,0,0)
x2 <- c(1,1,1,0,0, 1,0,1,1,0,   0, 1,1, 1,0,   1,1,0,1,0)
x3 <- c(1,0,0,1,1, 1,1,1,1,1,   1, 1,1, 1,0,   1,1,0,1,0)
x4 <- c(1,1,1,1,0, 0,1,1, 0,0,  0, 1,0,0, 0,   0,0,1, 1,1)
# I have more X variables, potentially >1000, but I need to work four at a time
dataframe <- data.frame(ped, fd, iid, mid, fid, y, x1, x2, x3, x4)  
 
myfun <- function(dataframe)  {
namemat <- matrix(c(1:4), nrow = 1)
smyfun <- function(x)  {
 x <- as.vector(x)
 K1 <- dataframe$x1 * 0.23
 K2 <- dataframe$x2 * 0.98
 # just an example; there are long vector calculations in the real dataset 
 kt1 <- K1 * K2
 kt2 <- K1 / K2
 Qni <- (K1*(kt1-0.25)+ K2 *(kt2-0.25))
 y <- dataframe$y
 yg <- mean(y, na.rm = TRUE) # mean of trait Y
 dvm <- (y-yg ) # deviation of phenotypic value from mean 
 sumdvm <-abs(sum(dvm, na.rm= TRUE))
  yQni <- y* Qni
  sumyQni <-abs(sum(yQni, na.rm= TRUE)) 
  npt = ( sumdvm/ sumyQni) 
  return(npt)
 }
 npt1 <- apply(namemat,1, smyfun)
 return(npt1)
}
 
  myfun (dataframe)
 
My question is how can I automate the process so that the above function can
calculate different values for n levels (>20 in my real data) of factor ped.
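One possible approach (my own sketch, not from the thread) is to split the
data frame by ped and apply the function to each piece; this assumes myfun
is rewritten to compute its statistic from the sub-data-frame it is given:

```r
# Sketch: one statistic per level of ped.  Assumes myfun() takes a
# sub-data-frame and returns the statistic for it.
per.ped <- lapply(split(dataframe, dataframe$ped), myfun)
# or equivalently:
# per.ped <- by(dataframe, dataframe$ped, myfun)
unlist(per.ped)   # one named value per ped level
```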

 
Thanks in advance for the help. R-community is always helpful. 
 
Umesh R


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] why the survival function estimate using package 'mstate' & package 'cmprsk' vary from sas and LTA (From WHO).

2011-03-21 Thread chaijian

hello, everyone:
  I am now confused about multistate survival. When I want to perform a 
multistate survival analysis, I turn to R and the packages 'mstate' and 
'cmprsk'. When it comes to publishing the article, following the 
requirements of the journal, the statistics are carried out in the LTA 
package (which was said to be an authority on intrauterine data, from WHO). 
There we found differences in the results (incidence ratio or survival 
function) computed by the two pieces of software. I also computed the 
result in SAS, carrying out the survival analysis for each cause separately 
using the product-limit method; that result is close to LTA rather than to R.
Why? Is the result from R better than LTA, or than carrying out the 
survival analysis for each cause separately using the product-limit method?
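A sketch (mine, not from the thread) of why the numbers can differ: with
competing risks, 1 minus the per-cause Kaplan-Meier (product-limit) estimate
generally overestimates the cumulative incidence, while cmprsk's cuminc()
accounts for the competing events. The data below are made up for
illustration:

```r
library(survival)
library(cmprsk)

# Toy data: status 0 = censored, 1 = event of interest, 2 = competing event
time   <- c(2, 3, 5, 7, 8, 10, 12, 15)
status <- c(1, 2, 1, 0, 2, 1, 0, 1)

# Cumulative incidence that accounts for the competing risk
ci <- cuminc(ftime = time, fstatus = status)

# Naive 1 - KM, treating competing events as censored
km <- survfit(Surv(time, status == 1) ~ 1)
# Compare 1 - km$surv with ci: the naive estimate is at least as large,
# which may explain part of the discrepancy between packages.
```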
thanks a lot!
sincerely
chai jian
from the Population and Family-Planning Institute of Henan Province, China
  



Re: [R] Using the mahalanobis( ) function

2011-03-21 Thread Tyler Rinker

This is what I've tried so far, and I just can't get it.  I know I want a value of 
3.93 (for Age = y and m) using Mahalanobis D as an effect size for a follow-up 
to a MANOVA:
 
age.frame<-data.frame(Age, Friend.Agression, Parent.Agression, 
Stranger.Agression)
> age.frame
   Age Friend.Agression Parent.Agression Stranger.Agression
1    y                8                7                  8
2    y                5                6                  8
3    y                6                3                  7
4    y                5                5                  7
5    m               15               13                 10
6    m               13               11                  9
7    m               12               12                  9
8    m               18               10                  7
9    o               11               11                 10
10   o               10                4                 12
11   o               12                9                 12
12   o                9                8                 14
13   y               13                7                  7
14   y                9                5                 10
15   y               11                4                  4
16   y               15                3                  4
17   m               14               12                  8
18   m               10               15                 11
19   m               12               11                  8
20   m               10                9                  9
21   o               10                8                 11
22   o               13               11                 13
23   o                9                8                 12
24   o                7                9                 16
 
ay <- subset(nd, Age == "y")
> ay
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
1       f   y                8                7                  8
2       f   y                5                6                  8
3       f   y                6                3                  7
4       f   y                5                5                  7
13      m   y               13                7                  7
14      m   y                9                5                 10
15      m   y               11                4                  4
16      m   y               15                3                  4
> am <- subset(nd, Age == "m")
> am
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
5       f   m               15               13                 10
6       f   m               13               11                  9
7       f   m               12               12                  9
8       f   m               18               10                  7
17      m   m               14               12                  8
18      m   m               10               15                 11
19      m   m               12               11                  8
20      m   m               10                9                  9
> ao <- subset(nd, Age == "o")
> ao
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
9       f   o               11               11                 10
10      f   o               10                4                 12
11      f   o               12                9                 12
12      f   o                9                8                 14
21      m   o               10                8                 11
22      m   o               13               11                 13
23      m   o                9                8                 12
24      m   o                7                9                 16
> amm <- cbind(am$Friend.Agression, am$Parent.Agression, am$Stranger.Agression)
> amm
     [,1] [,2] [,3]
[1,]   15   13   10
[2,]   13   11    9
[3,]   12   12    9
[4,]   18   10    7
[5,]   14   12    8
[6,]   10   15   11
[7,]   12   11    8
[8,]   10    9    9
> aym <- cbind(ay$Friend.Agression, ay$Parent.Agression, ay$Stranger.Agression)
> aym
     [,1] [,2] [,3]
[1,]    8    7    8
[2,]    5    6    8
[3,]    6    3    7
[4,]    5    5    7
[5,]   13    7    7
[6,]    9    5   10
[7,]   11    4    4
[8,]   15    3    4
> aom <- cbind(ao$Friend.Agression, ao$Parent.Agression, ao$Stranger.Agression)
> aom
     [,1] [,2] [,3]
[1,]   11   11   10
[2,]   10    4   12
[3,]   12    9   12
[4,]    9    8   14
[5,]   10    8   11
[6,]   13   11   13
[7,]    9    8   12
[8,]    7    9   16
 
 
> mean(aym)
[1] 6.958333
> mean(amm)
[1] 11.16667
> mean(aom)
[1] 10.375
 
> ascores <- cbind(Friend.Agression, Parent.Agression, Stranger.Agression)
> ascores
      Friend.Agression Parent.Agression Stranger.Agression
 [1,]                8                7                  8
 [2,]                5                6                  8
 [3,]                6                3                  7
 [4,]                5                5                  7
 [5,]               15               13                 10
 [6,]  

Re: [R] Rmark and scientific notation issue

2011-03-21 Thread Ben Bolker
odaniel  utas.edu.au> writes:

> Just wondering if anyone might know a way to stop R reading my survival
> history that has zeros at the beginning in as scientific notation.

  Gmane doesn't want me to quote much, but maybe

read.table("myfile.dat", colClasses = c("character", rep("numeric", 2))) ?



[R] Rmark and scientific notation issue

2011-03-21 Thread odaniel
Hi all, 
Just wondering if anyone might know a way to stop R from reading in my
survival histories, which have zeros at the beginning, as scientific
notation. The data are being read in from a .txt file via read.table, to
be used with RMark. I have the same issue at the moment when reading in
.csv and .xls files. The number was formatted as text in the .xls file;
otherwise Excel puts it into scientific notation automatically as well.

Example of current data 
110000   0   0
1.101e+09 0   0
110110   0   0
110011   0   0
110100   0   0
1.11e+09   0   0

What it should be
110000  0   0
001101  0   0
110110  0   0
110011  0   0
110100  0   0
000111  0   0


I am using Microsoft Office Excel 2003 or Notepad++ to view or change my
data file, just in case this is an issue.
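A minimal sketch of the usual fix (the file name and column layout below are
assumptions for illustration): read the capture-history column in as
character so the leading zeros survive:

```r
# First column is the capture history; force it to character so leading
# zeros are kept.  "histories.txt" and the column count are assumptions.
dat <- read.table("histories.txt",
                  colClasses = c("character", "numeric", "numeric"))
head(dat$V1)   # strings such as "110000", "001101", with zeros intact
```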

Cheers


--
View this message in context: 
http://r.789695.n4.nabble.com/Rmark-and-scientific-notation-issue-tp3395435p3395435.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] How to substract a valur from dataframe with condition

2011-03-21 Thread Bill.Venables
dat <- within(dat, {
X2 <- ifelse(X2 > 50, 100-X2, X2)
X3 <- ifelse(X3 > 50, 100-X3, X3)
})
 

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of joe82
Sent: Tuesday, 22 March 2011 7:40 AM
To: r-help@r-project.org
Subject: [R] How to substract a valur from dataframe with condition

Hello All,

I need help with my dataframe, it is big but here I am using a small table
as an example.

My dataframe df looks like:
         X1   X2    X3
1   2011-02 0.00 96.00
2   2011-02 0.00  2.11
3   2011-02 2.00  3.08
4   2011-02 0.06  2.79
5   2011-02 0.00 96.00
6   2011-02 0.00 97.00
7   2011-02 0.08  2.23

I want to check whether the values in columns X2 and X3 are greater than
50; if so, subtract them from 100.

df should look like:

         X1   X2   X3
1   2011-02 0.00 4.00
2   2011-02 0.00 2.11
3   2011-02 2.00 3.08
4   2011-02 0.06 2.79
5   2011-02 0.00 4.00
6   2011-02 0.00 3.00
7   2011-02 0.08 2.23


Please help, I will really appreciate that.

Thanks,

Joe




--
View this message in context: 
http://r.789695.n4.nabble.com/How-to-substract-a-valur-from-dataframe-with-condition-tp3394907p3394907.html
Sent from the R help mailing list archive at Nabble.com.



[R] Using the mahalanobis( ) function

2011-03-21 Thread Tyler Rinker

Hello all,
 
I am a two-month newbie to R and am stumped.  I have a data set that I've run 
multivariate stats on using the manova function (I included the data set).  Now 
it comes time for a table of effect sizes with significance.  The univariate 
tests are easy.  Where I run into trouble filling in the table of effect sizes 
is the Mahalanobis D as an effect size.  I've included the table so you can see 
which groups I'm comparing.  I know there's a great function for filling in D1 
and D2: mahalanobis(x, center, cov, inverted = FALSE, ...).  I need to turn the 
subgroup scores for y (young), m (middle) and o (old) into clusters.

The problem is I lack the knowledge of cluster analysis needed to know what 
goes into the function for x, center, and cov.  I have only a basic 
understanding of this topic (a picture of a measured distance between two 
clusters on a graph).  I think I have to turn the data into a matrix but lack 
direction.  Could someone please use my data set or a similar one (a 
multivariate set with at least 3 outcome variables) and actually run this 
function (mahalanobis)?  Then please send me your output from R, starting from 
the data set all the way to the statistic.  
PS: I know the Mahalanobis D should be D1 = 3.93 and D2 = 3.04.
 
I've read and reread the manual page for mahalanobis() and have searched the 
mailing list archives for information.  The info there is for people who 
already have a grasp of how to implement this concept.
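For what it's worth, here is a sketch (my own, not a confirmed answer) of one
common way to get a Mahalanobis distance between two group centroids, using a
pooled within-group covariance matrix; aym and amm are the 8 x 3 score
matrices for the young and middle groups:

```r
# Mahalanobis distance between two group centroids with pooled covariance.
# g1, g2: numeric matrices, one row per subject, one column per variable.
D.between <- function(g1, g2) {
  n1 <- nrow(g1); n2 <- nrow(g2)
  pooled <- ((n1 - 1) * cov(g1) + (n2 - 1) * cov(g2)) / (n1 + n2 - 2)
  sqrt(mahalanobis(colMeans(g1), center = colMeans(g2), cov = pooled))
}
# e.g. D.between(aym, amm) for the young vs. middle contrast
```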
 
I am running the latest version of R on a windows 7 machine.  
 
Effect Sizes 
Contrasts     |           Dependent Variables
              Friends      Parents      Strangers     All

Young-Middle  -1.8768797*  -3.2842941*  -1.1094004*   D1

Middle-Old    1.34900725*  1.54919532*  -2.0107882*   D2

 
(sorry the column names and values don’t line up)
Age  Friend Agression  Parent Agression  Stranger Agression
y                   8                 7                   8
y                   5                 6                   8
y                   6                 3                   7
y                   5                 5                   7
m                  15                13                  10
m                  13                11                   9
m                  12                12                   9
m                  18                10                   7
o                  11                11                  10
o                  10                 4                  12
o                  12                 9                  12
o                   9                 8                  14
y                  13                 7                   7
y                   9                 5                  10
y                  11                 4                   4
y                  15                 3                   4
m                  14                12                   8
m                  10                15                  11
m                  12                11                   8
m                  10                 9                   9
o                  10                 8                  11
o                  13                11                  13
o                   9                 8                  12
o                   7                 9                  16



Re: [R] How to substract a valur from dataframe with condition

2011-03-21 Thread David Winsemius


On Mar 21, 2011, at 5:39 PM, joe82 wrote:


Hello All,

I need help with my dataframe, it is big but here I am using a small
table as an example.

My dataframe df looks like:
         X1   X2    X3
1   2011-02 0.00 96.00
2   2011-02 0.00  2.11
3   2011-02 2.00  3.08
4   2011-02 0.06  2.79
5   2011-02 0.00 96.00
6   2011-02 0.00 97.00
7   2011-02 0.08  2.23

I want to check whether the values in columns X2 and X3 are greater than
50; if so, subtract them from 100.

df should look like:

         X1   X2   X3
1   2011-02 0.00 4.00
2   2011-02 0.00 2.11
3   2011-02 2.00 3.08
4   2011-02 0.06 2.79
5   2011-02 0.00 4.00
6   2011-02 0.00 3.00
7   2011-02 0.08 2.23



df[ , 2:3] <- apply(df[ , 2:3], 2,
 function(x) ifelse(x >50, 100-x, x) )

OR:

df[ , 2:3] <- sapply(df[ , 2:3],
 function(x) ifelse(x >50, 100-x, x) )

(or lapply would work as well)

OR;

df$X2[df$X2 > 50] <- 100 - df$X2[df$X2 > 50]
df$X3[df$X3 > 50] <- 100 - df$X3[df$X3 > 50]

--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT



[R] How to substract a valur from dataframe with condition

2011-03-21 Thread joe82
Hello All,

I need help with my dataframe, it is big but here I am using a small table
as an example.

My dataframe df looks like:
         X1   X2    X3
1   2011-02 0.00 96.00
2   2011-02 0.00  2.11
3   2011-02 2.00  3.08
4   2011-02 0.06  2.79
5   2011-02 0.00 96.00
6   2011-02 0.00 97.00
7   2011-02 0.08  2.23

I want to check whether the values in columns X2 and X3 are greater than
50; if so, subtract them from 100.

df should look like:

         X1   X2   X3
1   2011-02 0.00 4.00
2   2011-02 0.00 2.11
3   2011-02 2.00 3.08
4   2011-02 0.06 2.79
5   2011-02 0.00 4.00
6   2011-02 0.00 3.00
7   2011-02 0.08 2.23


Please help, I will really appreciate that.

Thanks,

Joe




--
View this message in context: 
http://r.789695.n4.nabble.com/How-to-substract-a-valur-from-dataframe-with-condition-tp3394907p3394907.html
Sent from the R help mailing list archive at Nabble.com.



[R] Basic Looping Trouble

2011-03-21 Thread armstrwa
Hi all,

Forgive me for this basic question.  I've been doing some research and
haven't been able to figure out how to best do this yet.

I have 75 variables defined as vector time series.  I am trying to create a
script to automate calculations on each of these variables, but I wasn't
sure how to go about calling variables in a script.

I am hoping to write a loop that calls a list of variable names and runs
several tests using each of the datasets I have entered.

For example, say I have a defined 5 variables: var1, var2,...var5.  How
could I create a script that would run, say, a MannKendall correlation test
on each variable?
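One common pattern (a sketch, assuming the variables live in the global
environment and the Kendall package is installed) is to fetch the variables
by name with mget() and loop over them with lapply():

```r
library(Kendall)   # provides MannKendall()

vars <- paste0("var", 1:5)            # "var1" ... "var5"
results <- lapply(mget(vars), MannKendall)
results[["var1"]]                     # tau and two-sided p-value for var1
```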

Thank you very much, and again, sorry for the rudimentary question.

Billy

--
View this message in context: 
http://r.789695.n4.nabble.com/Basic-Looping-Trouble-tp3394860p3394860.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Exponential distribution

2011-03-21 Thread danielepippo
The code is like this:

plot(0, 0, type = "n", xlim = c(1, 15), ylim = c(0, 1), xlab = "...",
     ylab = "...", main = "Example")
lines(1:15, line1, col = 4)
lines(1:15, line2, col = 2)
legend("topright", c("alpha = 0.2", "alpha = 0.4"), col = c(4, 2), lty = 1)

How can I build the vectors line1 and line2 ?
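One possibility, assuming the two curves are meant to be exponential
densities with the rates shown in the legend (this is a guess at the intent):

```r
# Density curves evaluated at x = 1..15; use pexp() instead for the CDF.
line1 <- dexp(1:15, rate = 0.2)
line2 <- dexp(1:15, rate = 0.4)
```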

--
View this message in context: 
http://r.789695.n4.nabble.com/Exponential-distribution-tp3394476p3395057.html
Sent from the R help mailing list archive at Nabble.com.



[R] Kendall v MannKendall Functions

2011-03-21 Thread armstrwa
Hi,

I am running a correlation analysis on a temporal dataset.  I was wondering
whether you would get the same tau and p values running:

MannKendall(x), where x is the dependent variable that changes with time,

as you would running:

Kendall(d, x), where x is exactly the same data as the x entered into
MannKendall and d is the date on which each observation was made (assuming
the order is the same as the data entered into the Kendall function).

Can anyone elucidate this for me?
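A quick empirical check (a sketch; MannKendall() uses the observation order
as its time index, so with strictly increasing, untied dates the two calls
should agree):

```r
library(Kendall)

set.seed(1)
x <- cumsum(rnorm(30))    # toy temporal series
d <- seq_along(x)         # strictly increasing "dates"

MannKendall(x)            # tau and p-value against the time index
Kendall(d, x)             # should match when d is untied and increasing
```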

Thanks.

Billy

--
View this message in context: 
http://r.789695.n4.nabble.com/Kendall-v-MannKendall-Functions-tp3394821p3394821.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Merging by() results back into original data frame?

2011-03-21 Thread baptiste auguie
I find it quite neat with plyr,

library(plyr)
ddply(d, .(group), transform, max=max(val))

HTH,

baptiste

On 22 March 2011 12:09, William Dunlap  wrote:
>> -Original Message-
>> From: r-help-boun...@r-project.org
>> [mailto:r-help-boun...@r-project.org] On Behalf Of ivo welch
>> Sent: Monday, March 21, 2011 3:43 PM
>> To: r-help
>> Subject: [R] Merging by() results back into original data frame?
>>
>> dear R experts---I am trying to figure out what the recommended way is
>> to merge by() results back into the original data frame.  for example,
>> I want to have a flag that tells me whether a particular row contains
>> the maximum for its group.
>>
>>   d <- data.frame(group=as.factor(rep(1:3,each=3)), val=rnorm(9))
>
> ave() could do what you want without using by().  E.g.,
>
>  > d$isGroupMax <- with(d, ave(val, group, FUN=max) == val)
>  > d
>   group         val isGroupMax
>  1     1  0.21496662      FALSE
>  2     1 -1.44767939      FALSE
>  3     1  0.39635971       TRUE
>  4     2  0.60235172      FALSE
>  5     2  0.94581401       TRUE
>  6     2  0.01665084      FALSE
>  7     3 -0.58277312      FALSE
>  8     3  0.82930370      FALSE
>  9     3  1.02906920       TRUE
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>>   highestvals <- by( d, d$group, function(d)(max(d$val)) )
>>
>>   ## and now?  iterate over levels( d$group ) ?  how do I merge
>> highestvals back into d?
>>
>> advice appreciated.
>>
>> sincerely,
>>
>> /iaw
>> 
>> Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
>>
>
>



Re: [R] Merging by() results back into original data frame?

2011-03-21 Thread ivo welch
thank you, william and bill.  wow, this was fast.  I have been tearing
my hair out over this one, trying to make the wrong tool work.  (this would
make a good "see also" in the help page for by().)

Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)




On Mon, Mar 21, 2011 at 7:09 PM, William Dunlap  wrote:
>> -Original Message-
>> From: r-help-boun...@r-project.org
>> [mailto:r-help-boun...@r-project.org] On Behalf Of ivo welch
>> Sent: Monday, March 21, 2011 3:43 PM
>> To: r-help
>> Subject: [R] Merging by() results back into original data frame?
>>
>> dear R experts---I am trying to figure out what the recommended way is
>> to merge by() results back into the original data frame.  for example,
>> I want to have a flag that tells me whether a particular row contains
>> the maximum for its group.
>>
>>   d <- data.frame(group=as.factor(rep(1:3,each=3)), val=rnorm(9))
>
> ave() could do what you want without using by().  E.g.,
>
>  > d$isGroupMax <- with(d, ave(val, group, FUN=max) == val)
>  > d
>   group         val isGroupMax
>  1     1  0.21496662      FALSE
>  2     1 -1.44767939      FALSE
>  3     1  0.39635971       TRUE
>  4     2  0.60235172      FALSE
>  5     2  0.94581401       TRUE
>  6     2  0.01665084      FALSE
>  7     3 -0.58277312      FALSE
>  8     3  0.82930370      FALSE
>  9     3  1.02906920       TRUE
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>>   highestvals <- by( d, d$group, function(d)(max(d$val)) )
>>
>>   ## and now?  iterate over levels( d$group ) ?  how do I merge
>> highestvals back into d?
>>
>> advice appreciated.
>>
>> sincerely,
>>
>> /iaw
>> 
>> Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
>>
>



Re: [R] Merging by() results back into original data frame?

2011-03-21 Thread William Dunlap
> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On Behalf Of ivo welch
> Sent: Monday, March 21, 2011 3:43 PM
> To: r-help
> Subject: [R] Merging by() results back into original data frame?
> 
> dear R experts---I am trying to figure out what the recommended way is
> to merge by() results back into the original data frame.  for example,
> I want to have a flag that tells me whether a particular row contains
> the maximum for its group.
> 
>   d <- data.frame(group=as.factor(rep(1:3,each=3)), val=rnorm(9))

ave() could do what you want without using by().  E.g.,

 > d$isGroupMax <- with(d, ave(val, group, FUN=max) == val)
 > d
   group         val isGroupMax
 1     1  0.21496662      FALSE
 2     1 -1.44767939      FALSE
 3     1  0.39635971       TRUE
 4     2  0.60235172      FALSE
 5     2  0.94581401       TRUE
 6     2  0.01665084      FALSE
 7     3 -0.58277312      FALSE
 8     3  0.82930370      FALSE
 9     3  1.02906920       TRUE

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com  

>   highestvals <- by( d, d$group, function(d)(max(d$val)) )
> 
>   ## and now?  iterate over levels( d$group ) ?  how do I merge
> highestvals back into d?
> 
> advice appreciated.
> 
> sincerely,
> 
> /iaw
> 
> Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
> 



Re: [R] Merging by() results back into original data frame?

2011-03-21 Thread Thomas Lumley
On Tue, Mar 22, 2011 at 11:42 AM, ivo welch  wrote:
> dear R experts---I am trying to figure out what the recommended way is
> to merge by() results back into the original data frame.  for example,
> I want to have a flag that tells me whether a particular row contains
> the maximum for its group.
>
>  d <- data.frame(group=as.factor(rep(1:3,each=3)), val=rnorm(9))
>  highestvals <- by( d, d$group, function(d)(max(d$val)) )
>
>  ## and now?  iterate over levels( d$group ) ?  how do I merge
> highestvals back into d?

The easiest approach is probably to use ave() instead of by().
Otherwise match() will tell you where to put each number.
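Sketching the match() route (my own illustration of the suggestion above,
continuing from the poster's d and highestvals):

```r
highestvals <- by(d, d$group, function(p) max(p$val))
# match() lines the per-group maxima back up with the rows of d
d$isGroupMax <-
  d$val == as.vector(highestvals)[match(d$group, names(highestvals))]
```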

   -thomas

-- 
Thomas Lumley
Professor of Biostatistics
University of Auckland



Re: [R] odfWeave Error unzipping file in Win 7

2011-03-21 Thread Duncan Murdoch

On 11-03-21 6:28 PM, rmail...@justemail.net wrote:

I tried with no spaces in either file name and got the same error.


Can you put together an example that produces the error without using 
odfWeave?  To do that, figure out what odfWeave is doing, and do it 
outside of that package.


If not, then we'll assume it's an odfWeave bug.  If you can duplicate 
this without odfWeave, obviously the error lies elsewhere.


One possible odfWeave bug: according to the messages below, it looks 
as though it is assuming that

unzip -o "Report input template.odt"

is a valid command.  Perhaps that is not true with the PATH in effect at 
the time you're trying to run it, and with the quote handling in effect 
then.
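One way to test that outside odfWeave (a sketch; the file name is the one
from the error message) is to issue the same command through R's own shell,
since its PATH and quote handling can differ from an interactive cmd window:

```r
# Returns the command's exit status: 0 means unzip ran; a nonzero status
# (or an error) suggests unzip is not found on the PATH R's shell uses.
status <- system('unzip -o "Report input template.odt"')
status
```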


Duncan Murdoch





- Original message -
From: "Max Kuhn"
To: rmail...@justemail.net
Cc: r-help@r-project.org
Date: Mon, 21 Mar 2011 17:04:40 -0400
Subject: Re: [R] odfWeave Error unzipping file in Win 7

I don't think that this is the issue, but test it on a file without spaces.

On Mon, Mar 21, 2011 at 2:25 PM,  wrote:


I have a very similar error that cropped up when I upgraded to R 2.12 and 
persists at R 2.12.1. I am running R on Windows XP and OO is at version 3.2. I 
did not make any changes to my R code or ODF code or configuration to produce 
this error. Only upgraded R.

Many Thanks,

Eric

R session:



odfWeave ( 'Report input template.odt' , 'August 2011.odt')

  Copying  Report input template.odt
  Setting wd to  
C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483
  Unzipping ODF file using unzip -o "Report input template.odt"
Error in odfWeave("Report input template.odt", "August 2011.odt") :
  Error unzipping file




When I start a shell and go to the temp directory in question and copy the 
exact command that the error message says produced an error the command runs 
fine. Here is that session:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

H:\>c:

C:\>cd C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>dir
  Volume in drive C has no label.
  Volume Serial Number is 7464-62CA

  Directory of C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483

03/21/2011  11:11 AM    <DIR>          .
03/21/2011  11:11 AM    <DIR>          ..
03/21/2011  11:11 AM            13,780 Report input template.odt
               1 File(s)         13,780 bytes
               2 Dir(s)  7,987,343,360 bytes free

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>unzip -o "Report 
input template.odt"
Archive:  Report input template.odt
  extracting: mimetype
   creating: Configurations2/statusbar/
  inflating: Configurations2/accelerator/current.xml
   creating: Configurations2/floater/
   creating: Configurations2/popupmenu/
   creating: Configurations2/progressbar/
   creating: Configurations2/menubar/
   creating: Configurations2/toolbar/
   creating: Configurations2/images/Bitmaps/
  inflating: content.xml
  inflating: manifest.rdf
  inflating: styles.xml
  extracting: meta.xml
  inflating: Thumbnails/thumbnail.png
  inflating: settings.xml
  inflating: META-INF/manifest.xml

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>







- Original message -
From: "psycho-ld"
To: r-help@r-project.org
Date: Sun, 23 Jan 2011 01:47:44 -0800 (PST)
Subject: [R] odfWeave Error unzipping file in Win 7


Hey guys,

I'm just getting started with R (version 2.12.0) and odfWeave, and kinda
stumble from one problem to the next; the current one is the following:

trying to use odfWeave:


odfctrl<- odfWeaveControl(

+ zipCmd = c("C:/Program Files/unz552dN/VBunzip.exe $$file$$ .",
+  "C:/Program Files/unz552dN/VBunzip.exe $$file$$"))


odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl)

  Copying  C:/testat.odt
  Setting wd to
D:\Users\egf\AppData\Local\Temp\Rtmpmp4E1J/odfWeave23103351832
  Unzipping ODF file using C:/Program Files/unz552dN/VBunzip.exe
"testat.odt"
Fehler in odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl) :
  Error unzipping file

so I tried a few other unzipping programs like jar and 7-Zip, but the same
problem still occurs. I also tried to install zip and unzip, but then I get
an error message that registration failed (Error 1904).

If there are any more questions, just ask; it would be great if someone
could help me out.

cheers
psycho-ld

--
View this message in context: 
http://r.789695.n4.nabble.com/odfWeave-Error-unzipping-file-in-Win-7-tp3232359p3232359.html
Sent from the R help mailing list archive at Nabble.com.


Re: [R] Help with POSIXct

2011-03-21 Thread Bill.Venables
You might try

dat$F1 <- format(as.Date(dat$F1), format = "%b-%y")

although it rather depends on the class of F1 as it has been read.  

Bill Venables.

(It would be courteous of you to give us your name, by the way.) 

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of i...@mathewanalytics.com
Sent: Tuesday, 22 March 2011 6:31 AM
To: r-help@r-project.org
Subject: [R] Help with POSIXct


I rarely work with dates in R, so I know very little about the
POSIXct and POSIXlt classes. I'm importing an excel file into R using
the RODBC package, and am having issues reformatting the dates.


1. The important info:
> sessionInfo()
R version 2.12.2 (2011-02-25)
Platform: i386-pc-mingw32/i386 (32-bit)

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C   
 
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base 

other attached packages:
[1] RODBC_1.3-2

loaded via a namespace (and not attached):
[1] tools_2.12.2


2. My question:

My data looks like the following once it is imported into R.

one <- odbcConnectExcel("marketshare.xls")
dat <- sqlFetch(one, "Sheet1")
close(one)

> dat
   F1 Marvel DC
1  2010-01-01 42 34
2  2010-02-01 45 34
3  2010-03-01 47 29
4  2010-04-01 45 32
5  2010-05-01 45 35
6  2010-06-01 42 34

Variable F1 is supposed to be Jan-10, Feb-10, Mar-10, etc.
However, in the process of importing the .xls file, R is reformatting
the dates. How can I retrieve the original month-year format
that I have in my Excel file?


3. While the R help list discourages asking multiple questions in one
inquiry, it'd be really helpful if someone could point me to good online
resources on the POSIXct and POSIXlt classes.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Merging by() results back into original data frame?

2011-03-21 Thread ivo welch
dear R experts---I am trying to figure out what the recommended way is
to merge by() results back into the original data frame.  for example,
I want to have a flag that tells me whether a particular row contains
the maximum for its group.

  d <- data.frame(group=as.factor(rep(1:3,each=3)), val=rnorm(9))
  highestvals <- by( d, d$group, function(d)(max(d$val)) )

  ## and now?  iterate over levels( d$group ) ?  how do I merge
highestvals back into d?

advice appreciated.

sincerely,

/iaw

Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)
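One base-R approach (a sketch, not tested against the poster's data) uses ave() to broadcast each group's maximum back to the individual rows, which yields the per-row flag directly and avoids merging by() results at all:

```r
# example data as in the question
d <- data.frame(group = as.factor(rep(1:3, each = 3)), val = rnorm(9))

# ave() repeats each group's maximum for every row of that group,
# so comparing against val flags the row(s) holding the group maximum
d$is.max <- d$val == ave(d$val, d$group, FUN = max)
```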

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] odfWeave Error unzipping file in Win 7

2011-03-21 Thread rmailbox
I tried with no spaces in either file name and got the same error.



- Original message -
From: "Max Kuhn" 
To: rmail...@justemail.net
Cc: r-help@r-project.org
Date: Mon, 21 Mar 2011 17:04:40 -0400
Subject: Re: [R] odfWeave Error unzipping file in Win 7

I don't think that this is the issue, but test it on a file without spaces.

On Mon, Mar 21, 2011 at 2:25 PM,   wrote:
>
> I have a very similar error that cropped up when I upgraded to R 2.12 and 
> persists at R 2.12.1. I am running R on Windows XP and OO is at version 3.2. 
> I did not make any changes to my R code or ODF code or configuration to 
> produce this error. Only upgraded R.
>
> Many Thanks,
>
> Eric
>
> R session:
>
>
>> odfWeave ( 'Report input template.odt' , 'August 2011.odt')
>  Copying  Report input template.odt
>  Setting wd to  
> C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483
>  Unzipping ODF file using unzip -o "Report input template.odt"
> Error in odfWeave("Report input template.odt", "August 2011.odt") :
>  Error unzipping file
>
> 
>
>
> When I start a shell and go to the temp directory in question and copy the 
> exact command that the error message says produced an error the command runs 
> fine. Here is that session:
>
> Microsoft Windows XP [Version 5.1.2600]
> (C) Copyright 1985-2001 Microsoft Corp.
>
> H:\>c:
>
> C:\>cd C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483
>
> C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>dir
>  Volume in drive C has no label.
>  Volume Serial Number is 7464-62CA
>
>  Directory of 
> C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483
>
> 03/21/2011  11:11 AM              .
> 03/21/2011  11:11 AM              ..
> 03/21/2011  11:11 AM            13,780 Report input template.odt
>               1 File(s)         13,780 bytes
>               2 Dir(s)   7,987,343,360 bytes free
>
> C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>unzip -o 
> "Report input template.odt"
> Archive:  Report input template.odt
>  extracting: mimetype
>   creating: Configurations2/statusbar/
>  inflating: Configurations2/accelerator/current.xml
>   creating: Configurations2/floater/
>   creating: Configurations2/popupmenu/
>   creating: Configurations2/progressbar/
>   creating: Configurations2/menubar/
>   creating: Configurations2/toolbar/
>   creating: Configurations2/images/Bitmaps/
>  inflating: content.xml
>  inflating: manifest.rdf
>  inflating: styles.xml
>  extracting: meta.xml
>  inflating: Thumbnails/thumbnail.png
>  inflating: settings.xml
>  inflating: META-INF/manifest.xml
>
> C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>
>
>
>
>
>
>
>
> - Original message -
> From: "psycho-ld" 
> To: r-help@r-project.org
> Date: Sun, 23 Jan 2011 01:47:44 -0800 (PST)
> Subject: [R] odfWeave Error unzipping file in Win 7
>
>
> Hey guys,
>
> I'm just getting started with R (version 2.12.0) and odfWeave and kinda
> stumble from one problem to the next, the current one is the following:
>
> trying to use odfWeave:
>
>> odfctrl <- odfWeaveControl(
> +             zipCmd = c("C:/Program Files/unz552dN/VBunzip.exe $$file$$ .",
> +              "C:/Program Files/unz552dN/VBunzip.exe $$file$$"))
>>
>> odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl)
>  Copying  C:/testat.odt
>  Setting wd to
> D:\Users\egf\AppData\Local\Temp\Rtmpmp4E1J/odfWeave23103351832
>  Unzipping ODF file using C:/Program Files/unz552dN/VBunzip.exe
> "testat.odt"
> Fehler in odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl) :
>  Error unzipping file
>
> so I tried a few other unzipping programs like jar and 7-zip, but still the
> same problem occurs, I also tried to install zip and unzip, but then I get
> some error message that registration failed (Error 1904 )
>
> so if there are anymore questions, just ask, would be great if someone could
> help me though
>
> cheers
> psycho-ld
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/odfWeave-Error-unzipping-file-in-Win-7-tp3232359p3232359.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 

Max

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] R as a non-functional language

2011-03-21 Thread Bill.Venables
That's not the point.  The point is that R has functions which have 
side-effects and hence does not meet the strict requirements for a functional 
language. 

-Original Message-
From: ONKELINX, Thierry [mailto:thierry.onkel...@inbo.be] 
Sent: Monday, 21 March 2011 7:20 PM
To: russ.abb...@gmail.com; Venables, Bill (CMIS, Dutton Park)
Cc: r-help@r-project.org
Subject: RE: [R] R as a non-functional language

Dear Russ,

Why not use simply

pH <- c(area1 = 4.5, area2 = 7, mud = 7.3, dam = 8.2, middle = 6.3)

That notation is IMHO the most readable for students.

Best regards,

Thierry


ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey
  

> -Oorspronkelijk bericht-
> Van: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] Namens Russ Abbott
> Verzonden: zondag 20 maart 2011 6:46
> Aan: bill.venab...@csiro.au
> CC: r-help@r-project.org
> Onderwerp: Re: [R] R as a non-functional language
> 
> I'm afraid I disagree.  As a number of people have shown, 
> it's certainly possible to get the end result
> 
> > pH <- c(4.5,7,7.3,8.2,6.3)
> > names(pH) <- c('area1','area2','mud','dam','middle')
> > pH
>  area1  area2    mud    dam middle
>    4.5    7.0    7.3    8.2    6.3
> 
> using a single expression. But what makes this non-functional 
> is that the
> names() function operates on a reference to the pH 
> object/entity/element. In other words, the names() function 
> has a side effect, which is not permitted in strictly 
> functional programming.
> 
> I don't know if R has threads. But imagine it did and that one ran
> 
> > names(pH) <- c('area1','area2','mud','dam','middle')
> 
> and
> 
>  > names(pH) <- c('areaA','areaB','dirt','blockage','center')
> 
> in simultaneous threads. Since they are both operating on the 
> same pH element, it is uncertain what the result would be. 
> That's one of the things that functional programming prevents.
> 
> *-- Russ *
> 
> 
> 
> On Sat, Mar 19, 2011 at 10:22 PM,  wrote:
> 
> > PS the form
> >
> > names(p) <- c(...)
> >
> > is still functional, of course.  It is just a bit of 
> syntactic sugar 
> > for the clumsier
> >
> > p <- `names<-`(p, c(...))
> >
> > e.g.
> > > pH <- `names<-`(pH, letters[1:5])
> > > pH
> >  a   b   c   d   e
> > 4.5 7.0 7.3 8.2 6.3
> > >
> >
> >
> >
> > -Original Message-
> > From: Venables, Bill (CMIS, Dutton Park)
> > Sent: Sunday, 20 March 2011 3:09 PM
> > To: 'Gabor Grothendieck'; 'russ.abb...@gmail.com'
> > Cc: 'r-help@r-project.org'
> > Subject: RE: [R] R as a non-functional language
> >
> > The idiom I prefer is
> >
> > pH <- structure(c(4.5,7,7.3,8.2,6.3),
> >names = c('area1','area2','mud','dam','middle'))
> >
> > -Original Message-
> > From: r-help-boun...@r-project.org 
> > [mailto:r-help-boun...@r-project.org]
> > On Behalf Of Gabor Grothendieck
> > Sent: Sunday, 20 March 2011 2:33 PM
> > To: russ.abb...@gmail.com
> > Cc: r-help@r-project.org
> > Subject: Re: [R] R as a non-functional language
> >
> > On Sun, Mar 20, 2011 at 12:20 AM, Russ Abbott 
> 
> > wrote:
> > > I'm reading Torgo (2010) *Data Mining with 
> > > R*in
> > > preparation for a class I'll be teaching next quarter.  Here's an 
> > > example that is very non-functional.
> > >
> > >> pH <- c(4.5,7,7.3,8.2,6.3)
> > >> names(pH) <- c('area1','area2','mud','dam','middle')
> > >> pH
> > >  area1  area2    mud    dam middle
> > >    4.5    7.0    7.3    8.2    6.3
> > >
> > >
> > > This sort of thing seems to be quite common in R.
> >
> > Try this:
> >
> > pH <- setNames(c(4.5,7,7.3,8.2,6.3),
> > c('area1','area2','mud','dam','middle'))
> >
> >
> >
> >
> > --
> > Statistics & Software Consulting
> > GKX Group, GKX Associates Inc.
> > tel: 1-877-GKX-GROUP
> > email: ggrothendieck at gmail.com
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
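To illustrate the point made in this thread that `names<-` is an ordinary function returning a modified copy (a small sketch):

```r
pH <- c(4.5, 7, 7.3, 8.2, 6.3)

# calling the replacement function directly returns a renamed copy
pH2 <- `names<-`(pH, letters[1:5])

names(pH)   # NULL: the original vector is untouched
names(pH2)  # "a" "b" "c" "d" "e"
```

The sugared form `names(pH) <- ...` rebinds pH to that returned copy, which is what makes it look like a side effect.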





Re: [R] Part of density plot not showing up

2011-03-21 Thread Jim Silverton
I am doing a histogram with 2 superimposed densities. However, one of the
two density curves is not showing up; it looks as if it is being erased.
Any ideas on how to fix this problem?

-- 
Thanks,
Jim.
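The question gives no code, but a common cause is that the second curve lies outside the y-range fixed by the first plot, or is drawn before the histogram and painted over. A sketch with made-up data that computes both densities first, sets ylim from both, and draws the curves last:

```r
set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(200, mean = 1)
d1 <- density(x1)
d2 <- density(x2)

# pick ylim from both curves so neither is clipped by the histogram's range
hist(x1, freq = FALSE, ylim = range(0, d1$y, d2$y), col = "grey90")
lines(d1, col = "blue", lwd = 2)
lines(d2, col = "red", lwd = 2)
```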

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Stucked with as.numeric function

2011-03-21 Thread Tóth Dénes


>
> On Mar 21, 2011, at 2:59 PM, Tóth Dénes wrote:
>
>>
>> Hi,
>>
>> I guess you have commas as decimals in your data. Replace it to
>> decimal
>> points.
>
> If that is true then the easiest fix would be to set the proper
> decimal argument in read.table

In this particular case, sure. But if patsko happens to work with
international colleagues using different software, the easiest way is to
keep your OS language settings in English and to maintain your data in the
same (English-style) format throughout, avoiding unicode characters, etc.

Denes



>
> ?read.table  # with ... , dec = "," ,
>
> --
> david.
>
>
>
>>
>> Best,
>>  Denes
>>
>>
>>>
>>> On Mar 21, 2011, at 1:57 PM, pat...@gmx.de wrote:
>>>
 Hi list,

 I have problems with the as.numeric function. I have imported
 probabilities from external data, but they are classified as factors
 as str() shows. Therefore my goal is to convert the column from
 factor to numeric while keeping the decimals.

 I have googled the problem for a while now and followed several
 pieces of advice, such as
 http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f
 and history from the list but it is impossible for me to convert
 the data to numeric without rounding or ranking the values.
>>>
>>> Are you sure about rounding? The console display of numbers is
>>> determined by options that can be changed:
>>>
>>> ?options
>>>
>>> The "ranking" may represent an error on your part. You offer nothing
>>> that can be used to check your interpretations. Why not offer:
>>>
>>> dput(head(data_object))
>>>
>>>

 E.g.:
 Simply using as.numeric() puts the values into ranked classes as
 explained in the manual, and both as.numeric(as.character(probas))
 and as.numeric(levels(probas$forecast_probs))[as.integer(probas$forecast_probs)]
 return NA for every row.
>>>
>>> Then maybe they were NA to begin with?
>>>
>>> Have you tried importing with colClasses set to "numeric" for the
>>> columns you knew to be such?

>>>
>>> --
>>>
>>> David Winsemius, MD
>>> Heritage Laboratories
>>> West Hartford, CT
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
>
>
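For the dec = "," suggestion above, a minimal sketch with made-up comma-decimal data (using textConnection so it is self-contained):

```r
# made-up data with commas as decimal separators, as in the poster's file;
# dec = "," makes the column arrive as numeric instead of factor
txt <- textConnection("forecast_probs
0,25
0,50
0,75")
d <- read.table(txt, header = TRUE, dec = ",")
close(txt)

str(d)  # forecast_probs is numeric: 0.25 0.50 0.75
```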

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Computing row differences in new columns

2011-03-21 Thread Henrique Dallazuanna
Try this:

dat$DATE <- as.Date(dat$DATE, "%d-%b-%y")
dat <- cbind(dat, lapply(mapply(tapply,
                                MoreArgs = list(INDEX = c(dat$SUBJECT, unique(dat$SUBJECT)),
                                                FUN = diff),
                                lapply(dat[, 2:3], c, unique(dat$SUBJECT) / NA),
                                SIMPLIFY = FALSE),
                         unlist))


On Mon, Mar 21, 2011 at 4:38 PM, Roberto Lodeiro Muller
 wrote:
>
> -Original Message-
> From: Roberto Lodeiro Muller 
> To: roberto.mul...@doctor.com
> Sent: Mon, Mar 21, 2011 3:37 pm
> Subject: Re: [R] Computing row differences in new columns
>
>
> Sorry, my data appeared badly formatted to me, so I converted it to plain 
> text:
>
> And just to clarify, for each subject in the first row it should appear the 
> difference to the next row, so that the last entry on each subject would be a 
> NA.
>
> Thanks again for your help
>
> Roberto
>
> SUBJECT    DATE       RESULT     DateDiff   ResultDiff
> 10751      22-Jul-03  3.5
> 10751      13-Feb-04  1.3
> 10751      20-Aug-04  1.6
> 10751      08-Mar-05  1.7
> 10751      30-Aug-05  1.6
> 10751      21-Feb-06  1.3
> 10751      31-Aug-06  1.2
> 10751      27-Feb-07  1.5
> 10751      29-Aug-07  1
> 10752      29-Jul-03  5.9
> 10752      24-Feb-04  5
> 10752      25-Aug-04  3.6
> 10752      11-Mar-05  5.1
> 10752      18-Sep-05  2.2
> 10752      23-Feb-06  3.1
> 10752      24-Aug-06  3.7
> 10752      27-Feb-07  6
>
>
>
>
>
>
> -Original Message-
> From: Roberto Lodeiro Muller 
> To: r-help@r-project.org
> Sent: Mon, Mar 21, 2011 3:23 pm
> Subject: [R] Computing row differences in new columns
>
>
>
> Hi,
> I have the following columns with dates and results, sorted by subject and
> date. I'd like to compute the differences in dates and results for each
> patient, based on the previous row. Obviously the last entry for each
> subject should be NA. Which would be the best way to accomplish that?
> I guess questions like this have already been answered a thousand times,
> so I apologize for asking one more time.
> Thanks
> Roberto
>
>
>
>
>
> SUBJECT    DATE       RESULT     DateDiff   ResultDiff
> 10751      22-Jul-03  3.5
> 10751      13-Feb-04  1.3
> 10751      20-Aug-04  1.6
> 10751      08-Mar-05  1.7
> 10751      30-Aug-05  1.6
> 10751      21-Feb-06  1.3
> 10751      31-Aug-06  1.2
> 10751      27-Feb-07  1.5
> 10751      29-Aug-07  1
> 10752      29-Jul-03  5.9
> 10752      24-Feb-04  5
> 10752      25-Aug-04  3.6
> 10752      11-Mar-05  5.1
> 10752      18-Sep-05  2.2
> 10752      23-Feb-06  3.1
> 10752      24-Aug-06  3.7
> 10752      27-Feb-07  6
>
>
>    [[alternative HTML version deleted]]
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Computing row differences in new columns

2011-03-21 Thread David Winsemius


On Mar 21, 2011, at 3:38 PM, Roberto Lodeiro Muller wrote:



-Original Message-
From: Roberto Lodeiro Muller 
To: roberto.mul...@doctor.com
Sent: Mon, Mar 21, 2011 3:37 pm
Subject: Re: [R] Computing row differences in new columns


Sorry, my data appeared badly formatted to me, so I converted it to  
plain text:


And just to clarify, for each subject in the first row it should  
appear the difference to the next row, so that the last entry on  
each subject would be a NA.


Thanks again for your help

Roberto

SUBJECTDATE   RESULT DateDiff   ResultDiff
10751  22-Jul-03  3.5
10751  13-Feb-04  1.3
10751  20-Aug-04  1.6
10751  08-Mar-05  1.7
10751  30-Aug-05  1.6
10751  21-Feb-06  1.3
10751  31-Aug-06  1.2
10751  27-Feb-07  1.5
10751  29-Aug-07  1
10752  29-Jul-03  5.9
10752  24-Feb-04  5
10752  25-Aug-04  3.6
10752  11-Mar-05  5.1
10752  18-Sep-05  2.2
10752  23-Feb-06  3.1
10752  24-Aug-06  3.7
10752  27-Feb-07  6


dat$dt <- as.Date(dat$DATE, format = "%d-%b-%y")
dat$diffdt <- c(diff(dat$dt), NA)
dat$diffRES <- c(diff(dat$RESULT), NA)
# note: plain diff() runs across SUBJECT boundaries, so the last row of each
# subject (except the final one) gets a cross-subject difference rather than NA


--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT
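A base-R variant that respects the SUBJECT boundaries (a sketch, assuming the data frame is called dat and is sorted by SUBJECT and DATE as shown):

```r
dat$dt <- as.Date(dat$DATE, format = "%d-%b-%y")

# per-group consecutive differences, with NA on each subject's last row
diff.na <- function(x) c(diff(x), NA)

dat$diffdt  <- ave(as.numeric(dat$dt), dat$SUBJECT, FUN = diff.na)
dat$diffRES <- ave(dat$RESULT, dat$SUBJECT, FUN = diff.na)
```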

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Computing row differences in new columns

2011-03-21 Thread Dennis Murphy
Hi:

Here's one way. Calling your data frame below d,

# Function to compute consecutive differences
dif <- function(x) c(diff(x), NA)

# Make sure that DATE is a Date variable
d$DATE <- as.Date(d$DATE, format = '%d-%b-%y')

# Apply the function to each individual (ddply() is from the plyr package)
library(plyr)
ddply(d, 'SUBJECT', transform, diffDate = dif(DATE), diffResult = dif(RESULT))

   SUBJECT       DATE RESULT diffDate diffResult
1    10751 2003-07-22    3.5      206       -2.2
2    10751 2004-02-13    1.3      189        0.3
3    10751 2004-08-20    1.6      200        0.1
4    10751 2005-03-08    1.7      175       -0.1
5    10751 2005-08-30    1.6      175       -0.3
6    10751 2006-02-21    1.3      191       -0.1
7    10751 2006-08-31    1.2      180        0.3
8    10751 2007-02-27    1.5      183       -0.5
9    10751 2007-08-29    1.0       NA         NA
10   10752 2003-07-29    5.9      210       -0.9
11   10752 2004-02-24    5.0      183       -1.4
12   10752 2004-08-25    3.6      198        1.5
13   10752 2005-03-11    5.1      191       -2.9
14   10752 2005-09-18    2.2      158        0.9
15   10752 2006-02-23    3.1      182        0.6
16   10752 2006-08-24    3.7      187        2.3
17   10752 2007-02-27    6.0       NA         NA

If you want the NA first (which is reasonable), flip the NA and diff(x) in
the function definition.

HTH,
Dennis



On Mon, Mar 21, 2011 at 12:38 PM, Roberto Lodeiro Muller <
roberto.mul...@doctor.com> wrote:

>
> -Original Message-
> From: Roberto Lodeiro Muller 
> To: roberto.mul...@doctor.com
> Sent: Mon, Mar 21, 2011 3:37 pm
> Subject: Re: [R] Computing row differences in new columns
>
>
> Sorry, my data appeared badly formatted to me, so I converted it to plain
> text:
>
> And just to clarify, for each subject in the first row it should appear the
> difference to the next row, so that the last entry on each subject would be
> a NA.
>
> Thanks again for your help
>
> Roberto
>
> SUBJECTDATE   RESULT DateDiff   ResultDiff
> 10751  22-Jul-03  3.5
> 10751  13-Feb-04  1.3
> 10751  20-Aug-04  1.6
> 10751  08-Mar-05  1.7
> 10751  30-Aug-05  1.6
> 10751  21-Feb-06  1.3
> 10751  31-Aug-06  1.2
> 10751  27-Feb-07  1.5
> 10751  29-Aug-07  1
> 10752  29-Jul-03  5.9
> 10752  24-Feb-04  5
> 10752  25-Aug-04  3.6
> 10752  11-Mar-05  5.1
> 10752  18-Sep-05  2.2
> 10752  23-Feb-06  3.1
> 10752  24-Aug-06  3.7
> 10752  27-Feb-07  6
>
>
>
>
>
>
> -Original Message-
> From: Roberto Lodeiro Muller 
> To: r-help@r-project.org
> Sent: Mon, Mar 21, 2011 3:23 pm
> Subject: [R] Computing row differences in new columns
>
>
>
> Hi,
> I have the following columns with dates and results, sorted by subject and
> date. I'd like to compute the differences in dates and results for each
> patient, based on the previous row. Obviously the last entry for each
> subject should be NA. Which would be the best way to accomplish that?
> I guess questions like this have already been answered a thousand times,
> so I apologize for asking one more time.
> Thanks
> Roberto
>
>
>
>
>
> SUBJECT    DATE       RESULT     DateDiff   ResultDiff
> 10751      22-Jul-03  3.5
> 10751      13-Feb-04  1.3
> 10751      20-Aug-04  1.6
> 10751      08-Mar-05  1.7
> 10751      30-Aug-05  1.6
> 10751      21-Feb-06  1.3
> 10751      31-Aug-06  1.2
> 10751      27-Feb-07  1.5
> 10751      29-Aug-07  1
> 10752      29-Jul-03  5.9
> 10752      24-Feb-04  5
> 10752      25-Aug-04  3.6
> 10752      11-Mar-05  5.1
> 10752      18-Sep-05  2.2
> 10752      23-Feb-06  3.1
> 10752      24-Aug-06  3.7
> 10752      27-Feb-07  6
>
>
>
>[[alternative HTML version deleted]]
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
>[[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Computing row differences in new columns

2011-03-21 Thread Alexander Engelhardt

(mods: can you un-flag me if I am still flagged?)

Hi,

First, I would convert the DATE column to an integer. I found this by 
quick googling, perhaps as.Date is your weapon of choice:

https://stat.ethz.ch/pipermail/r-help/2008-August/169870.html

For the column generation I think you will have to use a for-loop here:

yourdata$ResultDiff <- 0
yourdata$DateDiff <- 0

for (i in 1:nrow(yourdata)) {
  # indexing one past the last row returns NA, so the last cell is NA automatically
  yourdata$ResultDiff[i] <- yourdata$RESULT[i] - yourdata$RESULT[i + 1]
  yourdata$DateDiff[i] <- yourdata$datenumber[i] - yourdata$datenumber[i + 1]
}

where datenumber is your integer column of the date.

-- Alex


Am 21.03.2011 20:38, schrieb Roberto Lodeiro Muller:


-Original Message-
From: Roberto Lodeiro Muller
To: roberto.mul...@doctor.com
Sent: Mon, Mar 21, 2011 3:37 pm
Subject: Re: [R] Computing row differences in new columns


Sorry, my data appeared badly formatted to me, so I converted it to plain text:

And just to clarify, for each subject in the first row it should appear the 
difference to the next row, so that the last entry on each subject would be a 
NA.

Thanks again for your help

Roberto

SUBJECTDATE   RESULT DateDiff   ResultDiff
10751  22-Jul-03  3.5
10751  13-Feb-04  1.3
10751  20-Aug-04  1.6
10751  08-Mar-05  1.7
10751  30-Aug-05  1.6
10751  21-Feb-06  1.3
10751  31-Aug-06  1.2
10751  27-Feb-07  1.5
10751  29-Aug-07  1
10752  29-Jul-03  5.9
10752  24-Feb-04  5
10752  25-Aug-04  3.6
10752  11-Mar-05  5.1
10752  18-Sep-05  2.2
10752  23-Feb-06  3.1
10752  24-Aug-06  3.7
10752  27-Feb-07  6


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How do I delete multiple blank variables from a data frame?

2011-03-21 Thread Rita Carreira

Allan and Josh,
Thanks for the help. I did  

d[, sapply(d, function(x) !all(is.na(x)))] 

and it worked great.
Thanks so much again!


Rita


"If you think education is expensive, try ignorance"--Derek Bok




> Date: Sat, 19 Mar 2011 08:36:43 +
> From: all...@cybaea.com
> To: jwiley.ps...@gmail.com
> CC: ritacarre...@hotmail.com; r-help@r-project.org
> Subject: Re: [R] How do I delete multiple blank variables from a data frame?
> 
> 
> 
> On 19/03/11 01:35, Joshua Wiley wrote:
> > Hi Rita,
> >
> > This is far from the most efficient or elegant way, but:
> >
> > ## two column data frame, one all NAs
> > d<- data.frame(1:10, NA)
> > ## use apply to create logical vector and subset d
> > d[, apply(d, 2, function(x) !all(is.na(x)))]
> 
> This works, but apply converts d to a matrix which is not needed, so try
> 
> d[, sapply(d, function(x) !all(is.na(x)))]
> 
> 
> if performance is an issue (apply is about 3x slower on your test data 
> frame d, more for larger data frames).
> 
> For the related problem of removing columns of constant-or-na values, 
> the best I could come up with is
> 
> zv.1 <- function(x) {
>  ## The literal approach
>  y <- var(x, na.rm = TRUE)
>  return(is.na(y) || y == 0)
> }
> sapply(train, zv.1)
> 
> See 
> http://www.cybaea.net/Blogs/Data/R-Eliminating-observed-values-with-zero-variance.html
>  
> for the benchmarks.
> 
> Allan
> 
> 
> > I am just apply()ing to each column (the 2) of d, the function
> > !all(is.na(x)) which will return FALSE if all of x is missing and TRUE
> > otherwise.  The result is a logical vector the same length as the
> > number of columns in d that is used to subset only the d columns with
> > at least some non-missing values.  For documentation see:
> >
> > ?apply
> > ?is.na
> > ?all
> > ?"["
> > ?Logic
> >
> > HTH,
> >
> > Josh
> >
> > On Fri, Mar 18, 2011 at 3:35 PM, Rita Carreira  
> > wrote:
> >> Dear List Members,
> >> I have 55 data frames, each of which with 272 variables and 267
> >> observations. Some of these variables are blank but the blanks are not
> >> the same for every data frame. I would like to write a procedure in
> >> which I import a data frame, see which variables are blank, and delete
> >> those variables. My data frames have variables named P1 to P136 and Q1
> >> to Q136.
> >> I have a couple of questions regarding this issue:
> >> 1) Is a loop an efficient way to address this problem? If not, what are
> >> my alternatives and how do I implement them?
> >> 2) I have been playing with a single data frame to try to figure out a
> >> way of having R go through the columns and see which ones it should
> >> delete. I have figured out how to delete rows with missing data
> >> (newdata <- na.omit(olddata)) but how do I do it for columns?
> >> Rita  "If you think education is 
> >> expensive, try ignorance"--Derek Bok
> >>
> >>
> >>
> >>
> >>
> >
> >
  



[R] Exponential distribution

2011-03-21 Thread danielepippo
Dear R-users,

 I have to plot a exponential distribution like the plot in the pdf 
attached. I've write this code but I don't know how to draw the two lines..
Can anyone help me please?

Thank you very much

Pippo http://r.789695.n4.nabble.com/file/n3394476/exponential_smoothing.pdf
exponential_smoothing.pdf 

--
View this message in context: 
http://r.789695.n4.nabble.com/Exponential-distribution-tp3394476p3394476.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Computing row differences in new columns

2011-03-21 Thread Roberto Lodeiro Muller

-Original Message-
From: Roberto Lodeiro Muller 
To: roberto.mul...@doctor.com
Sent: Mon, Mar 21, 2011 3:37 pm
Subject: Re: [R] Computing row differences in new columns


Sorry, my data appeared badly formatted to me, so I converted it to plain text:
 
And just to clarify, for each subject in the first row it should appear the 
difference to the next row, so that the last entry on each subject would be a 
NA.
 
Thanks again for your help
 
Roberto
 
SUBJECT  DATE       RESULT  DateDiff  ResultDiff
10751  22-Jul-03  3.5
10751  13-Feb-04  1.3
10751  20-Aug-04  1.6
10751  08-Mar-05  1.7
10751  30-Aug-05  1.6
10751  21-Feb-06  1.3
10751  31-Aug-06  1.2
10751  27-Feb-07  1.5
10751  29-Aug-07  1  
10752  29-Jul-03  5.9   
10752  24-Feb-04  5  
10752  25-Aug-04  3.6
10752  11-Mar-05  5.1
10752  18-Sep-05  2.2
10752  23-Feb-06  3.1
10752  24-Aug-06  3.7
10752  27-Feb-07  6  

 


 

-Original Message-
From: Roberto Lodeiro Muller 
To: r-help@r-project.org
Sent: Mon, Mar 21, 2011 3:23 pm
Subject: [R] Computing row differences in new columns



Hi

I have the following columns with dates and results, sorted by subject and 
date. I'd like to compute the differences in dates and results for each 
patient, based on the previous row. Obviously the last entry for each subject 
should be a NA.

Which would be the best way to accomplish that?

I guess questions like that have been already answered a thousand times, so I 
apologize for asking one more time.

Thanks

Roberto

SUBJECT  DATE       RESULT  DateDiff  ResultDiff
10751    22-Jul-03  3.5
10751    13-Feb-04  1.3
10751    20-Aug-04  1.6
10751    08-Mar-05  1.7
10751    30-Aug-05  1.6
10751    21-Feb-06  1.3
10751    31-Aug-06  1.2
10751    27-Feb-07  1.5
10751    29-Aug-07  1
10752    29-Jul-03  5.9
10752    24-Feb-04  5
10752    25-Aug-04  3.6
10752    11-Mar-05  5.1
10752    18-Sep-05  2.2
10752    23-Feb-06  3.1
10752    24-Aug-06  3.7
10752    27-Feb-07  6



Re: [R] Randomly generating data

2011-03-21 Thread Dennis Murphy
Hi:

It's unclear what you want for your vector of probabilities, but here's one
approach:

u$p <- runif(1000)  # generate a random vector of success probabilities
samp <- function(x) {
  # x will be character as constructed; pick out the numeric value of p
  p <- as.numeric(x[3])
  # randomly sample the first or second element
  x[sample(c(1, 2), 1, prob = c(p, 1 - p))]
}
# Apply function samp to each row of the data frame
u$V3 <- apply(u, 1, samp)
u$p <- NULL  # get rid of the probability vector

head(u)

You weren't clear about how you wanted to select the success probabilities
of each element in each row, so I just generated them randomly. Adapt to
your situation.

HTH,
Dennis

On Mon, Mar 21, 2011 at 10:43 AM, Lisa  wrote:

> Hi, everybody,
>
> I have a problem and need your help.
> There are two columns that look like this:
>
>  [1,] "t"  "f"
>  [2,] "f"  "t"
>  [3,] "t"  "f"
>  [4,] "t"  "t"
>  [5,] "f"  "f"
>
> I just want to generate the third column based on these two columns. First,
> I randomly choose one of the two columns, say, the first column. So, the
> first character of the third column is “t” that looks like this:
>
>  [1,] "t"  "f"  "t"
>  [2,] "f"  "t"
>  [3,] "t"  "f"
>  [4,] "t"  "t"
>  [5,] "f"  "f"
>
> Second, I determine the second character of the third column with an
> additional Bernoulli trial with a probability, for example, rbinom(1, 1,
> 0.3). If the random generation in R is 0, we keep "f", the second character
> in the first column because the first column has been chosen in the first
> step, while if the random generation in R is 1, we choose "t" instead, the
> second character in the second column.
>
> Third, I determine the third character of the third column with a Bernoulli
> trial but a different probability, for example, rbinom(1, 1, 0.7).
>
> Repeat these processes…
>
> How can I do this efficiently if there are more than thousand records
> (rows)?  Can anybody please help how to get this done? Thanks a lot in
> advance
>
> Lisa
>
>
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Randomly-generating-data-tp3394250p3394250.html
> Sent from the R help mailing list archive at Nabble.com.
>
>




Re: [R] Correlation for no of variables

2011-03-21 Thread Patrick Burns

Getting the correlation of a 1000 by 1500
matrix takes about 3.5 seconds on my
unimpressive Windows machine.  Is that
really a tremendous amount of time?

You don't say what you are using the
correlation matrix for.  It is common for
a semi-definite matrix (as you will be getting)
to cause problems for applications.

Some ways of getting a positive definite
matrix are explained in the blog post:
http://www.portfolioprobe.com/2011/03/07/factor-models-of-variance-in-finance/
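The timing claim is easy to check yourself. A sketch (exact times depend on the machine and the BLAS R is linked against):

```r
set.seed(42)
# 1000 observations of 1500 simulated return series
returns <- matrix(rnorm(1000 * 1500), nrow = 1000, ncol = 1500)
system.time(cc <- cor(returns))  # a few seconds on typical hardware
dim(cc)                          # 1500 x 1500
```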


On 21/03/2011 15:34, Vincy Pyne wrote:

Dear R helpers,

Suppose I have stock returns data of say 1500 companies each for say last 4 
years. Thus I have a matrix of dimension say 1000 * 1500 i.e. 1500 columns 
representing companies and 1000 rows of their returns.

I need to find the correlation matrix of these 1500 companies.

So I can find out the correlation as

cor(returns) and expect to get a 1500 * 1500 matrix. However, the process takes 
a tremendous amount of time. Is there any way of expediting such a process? In 
reality, I may be dealing with as many as 5000 stocks and may simulate even 10 
stock returns.



Kindly guide.

Vincy









--
Patrick Burns
pbu...@pburns.seanet.com
twitter: @portfolioprobe
http://www.portfolioprobe.com/blog
http://www.burns-stat.com
(home of 'Some hints for the R beginner'
and 'The R Inferno')



[R] Computing row differences in new columns

2011-03-21 Thread Roberto Lodeiro Muller

Hi 

I have the following columns with dates and results, sorted by subject and 
date. I'd like to compute the differences in dates and results for each 
patient, based on the previous row. Obviously the last entry for each subject 
should be a NA.

Which would be the best way to accomplish that?

I guess questions like that have been already answered a thousand times, so I 
apologize for asking one more time.

Thanks

Roberto











SUBJECT  Date       Result  DateDiff  ResultDiff
10751    22-Jul-03  3.5
10751    13-Feb-04  1.3
10751    20-Aug-04  1.6
10751    08-Mar-05  1.7
10751    30-Aug-05  1.6
10751    21-Feb-06  1.3
10751    31-Aug-06  1.2
10751    27-Feb-07  1.5
10751    29-Aug-07  1
10752    29-Jul-03  5.9
10752    24-Feb-04  5
10752    25-Aug-04  3.6
10752    11-Mar-05  5.1
10752    18-Sep-05  2.2
10752    23-Feb-06  3.1
10752    24-Aug-06  3.7
10752    27-Feb-07  6




Re: [R] Stucked with as.numeric function

2011-03-21 Thread David Winsemius


On Mar 21, 2011, at 2:59 PM, Tóth Dénes wrote:



Hi,

I guess you have commas as decimals in your data. Replace them with decimal 
points.


If that is true then the easiest fix would be to set the proper  
decimal argument in read.table


?read.table  # with ... , dec = "," ,

--
david.





Best,
 Denes




On Mar 21, 2011, at 1:57 PM, pat...@gmx.de wrote:


Hi list,

I have problems with the as.numeric function. I have imported
probabilities from external data, but they are classified as factors
as str() shows. Therefore my goal is to convert the colum from
factor to numeric level with keeping the decimals.

I have googled the problem for a while now and kept to several
advices like
http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f
and history from the list but it is impossible for me to convert
the data to numeric without rounding or ranking the values.


Are you sure about rounding? The console display of numbers is
determined by options that can be changed:

?options

The "ranking" may represent an error on your part. You offer nothing
that can be used to check your interpretations. Why not offer:

dput(head(data_object))




E.g.:
Simply using as.numeric puts the values into ranked classes as
explained in the manual,
As.numeric(as.character(probas)) as well as as.numeric(levels(probas
$forecast_probs))[as.integer(probas$forecast_probs)]
return “NA” for every row.


Then maybe they were NA to begin with?

Have you tried importing with colClasses set to "numeric" for the
columns you knew to be such?




--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT







David Winsemius, MD
Heritage Laboratories
West Hartford, CT



Re: [R] string interpolation

2011-03-21 Thread Henrique Dallazuanna
Try this:

sprintf("%s_%s", rep(1:58, each = 2), c("input", "output"))


On Mon, Mar 21, 2011 at 4:03 PM, Justin Haynes  wrote:
> Is there a way to do this in R? I have data in the form:
>
> 57_input  57_output  58_input  58_output  etc.
>
> can i use a for loop (i in 57:n)  that plots only the outputs?  I want
> this to be robust so im not specifying a column id but rather
> something like c++ code,
>
> %s_input, i
>
> is that doable in R?
>
> Thanks,
> justin
>
>



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O



Re: [R] string interpolation

2011-03-21 Thread Douglas Bates
On Mon, Mar 21, 2011 at 2:03 PM, Justin Haynes  wrote:
> Is there a way to do this in R? I have data in the form:
>
> 57_input  57_output  58_input  58_output  etc.
>
> can i use a for loop (i in 57:n)  that plots only the outputs?  I want
> this to be robust so im not specifying a column id but rather
> something like c++ code,
>
> %s_input, i

It's not entirely clear what you want here but it may help to look at
the output of

n <- 59
(onms <- paste(57:n, "output", sep="_"))





>
> is that doable in R?
>
> Thanks,
> justin
>
>



[R] string interpolation

2011-03-21 Thread Justin Haynes
Is there a way to do this in R? I have data in the form:

57_input  57_output  58_input  58_output  etc.

can i use a for loop (i in 57:n)  that plots only the outputs?  I want
this to be robust so im not specifying a column id but rather
something like c++ code,

%s_input, i

is that doable in R?

Thanks,
justin



Re: [R] Stucked with as.numeric function

2011-03-21 Thread Kenn Konstabel
On Mon, Mar 21, 2011 at 5:57 PM,  wrote:

> Hi list,
>
> I have problems with the as.numeric function. I have imported probabilities
> from external data, but they are classified as factors as str() shows.
> Therefore my goal is to convert the colum from factor to numeric level with
> keeping the decimals.
>
> I have googled the problem for a while now and kept to several  advices
> like
> http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003fand
>  history from the list but it is impossible for me to convert the data to
> numeric without rounding or ranking the values.
>
> E.g.:
> Simply using as.numeric puts the values into ranked classes as explained in
> the manual,
> As.numeric(as.character(probas)) as well as
> as.numeric(levels(probas$forecast_probs))[as.integer(probas$forecast_probs)]
>  return “NA” for every row.
>
> Anyone any idea?
>

An example would help a lot ( + how did you read in the external data?) but
2 suggestions:

1. it may be easier to deal with the problem when reading in the data
(read.csv ant the like have an option "as.is") rather than dealing with the
factors
2. are you by any chance using comma as decimal separator? then use dec=","
with read.table or read.csv2 instead of read.csv
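To see why the as.numeric(as.character(...)) route produces NA with comma decimals, a small self-contained sketch:

```r
# a factor holding comma-decimal strings, as suspected in the thread
probas <- factor(c("0,25", "0,50"))
as.numeric(as.character(probas))                  # NA NA, with a warning
as.numeric(sub(",", ".", as.character(probas)))   # 0.25 0.50
```

Fixing the separator at read time (dec = "," or read.csv2) is still the cleaner solution; the sub() call is a fallback for data already loaded.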



> --
>
>




Re: [R] Stucked with as.numeric function

2011-03-21 Thread Tóth Dénes

Hi,

I guess you have commas as decimals in your data. Replace them with decimal
points.

Best,
  Denes


>
> On Mar 21, 2011, at 1:57 PM, pat...@gmx.de wrote:
>
>> Hi list,
>>
>> I have problems with the as.numeric function. I have imported
>> probabilities from external data, but they are classified as factors
>> as str() shows. Therefore my goal is to convert the colum from
>> factor to numeric level with keeping the decimals.
>>
>> I have googled the problem for a while now and kept to several
>> advices like
>> http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f
>>  and history from the list but it is impossible for me to convert
>> the data to numeric without rounding or ranking the values.
>
> Are you sure about rounding? The console display of numbers is
> determined by options that can be changed:
>
> ?options
>
> The "ranking" may represent an error on your part. You offer nothing
> that can be used to check your interpretations. Why not offer:
>
> dput(head(data_object))
>
>
>>
>> E.g.:
>> Simply using as.numeric puts the values into ranked classes as
>> explained in the manual,
>> As.numeric(as.character(probas)) as well as as.numeric(levels(probas
>> $forecast_probs))[as.integer(probas$forecast_probs)]
>> return “NA” for every row.
>
> Then maybe they were NA to begin with?
>
> Have you tried importing with colClasses set to "numeric" for the
> columns you knew to be such?
>>
>
> --
>
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
>
>



Re: [R] Stucked with as.numeric function

2011-03-21 Thread David Winsemius


On Mar 21, 2011, at 1:57 PM, pat...@gmx.de wrote:


Hi list,

I have problems with the as.numeric function. I have imported  
probabilities from external data, but they are classified as factors  
as str() shows. Therefore my goal is to convert the colum from  
factor to numeric level with keeping the decimals.


I have googled the problem for a while now and kept to several   
advices like http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f 
 and history from the list but it is impossible for me to convert  
the data to numeric without rounding or ranking the values.


Are you sure about rounding? The console display of numbers is  
determined by options that can be changed:


?options

The "ranking" may represent an error on your part. You offer nothing  
that can be used to check your interpretations. Why not offer:


dput(head(data_object))




E.g.:
Simply using as.numeric puts the values into ranked classes as  
explained in the manual,
as.numeric(as.character(probas)) as well as
as.numeric(levels(probas$forecast_probs))[as.integer(probas$forecast_probs)]
return “NA” for every row.


Then maybe they were NA to begin with?

Have you tried importing with colClasses set to "numeric" for the  
columns you knew to be such?




--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT



Re: [R] Lat Lon NetCDF subset

2011-03-21 Thread David Pierce
hgreatrex wrote:
> Hi,
>
> ...
> I've managed to extract the lat / lon variables, so I know that I could
> work
> out manually which pixels correspond to which location. However, it's easy
> to make a mistake in such a calculation, so I thought I would check here
> before reinventing the wheel.

Hi hgreatrex,

this is an example of how I approach this problem, which works for me but
may or may not work for you. :)

CCD20 = open.ncdf("ccd1983_01-dk1_20.nc")
lat = CCD20$dim$lat$vals   # NOTE: dim values are CACHED, don't read them
lon = CCD20$dim$lon$vals

lower_left_lon_lat = c(5,30)
upper_right_lon_lat = c(10,40)

ix0 = wherenearest( lower_left_lon_lat[1],  lon )
ix1 = wherenearest( upper_right_lon_lat[1], lon )
iy0 = wherenearest( lower_left_lon_lat[2],  lat )
iy1 = wherenearest( upper_right_lon_lat[2], lat )

countx = ix1 - ix0 + 1
county = iy1 - iy0 + 1

z = get.var.ncdf( CCD20, "data", start=c(ix0,iy0), count=c(countx,county) )

Obviously, you can easily wrap into a separate function an extended
version of 'get.var.ncdf' that takes lat/lon values rather than indices. 
The 'wherenearest(val,matrix)' function is probably obvious, but is
something like this:

wherenearest = function( val, matrix ) {
	dist = abs(matrix-val)
	index = which.min(dist)
	return( index )
}

Hope that helps,

--Dave

---
David W. Pierce
Division of Climate, Atmospheric Science, and Physical Oceanography
Scripps Institution of Oceanography
(858) 534-8276 (voice)  /  (858) 534-8561 (fax)dpie...@ucsd.edu



Re: [R] odfWeave Error unzipping file in Win 7

2011-03-21 Thread rmailbox

I have a very similar error that cropped up when I upgraded to R 2.12 and 
persists at R 2.12.1. I am running R on Windows XP and OO is at version 3.2. I 
did not make any changes to my R code or ODF code or configuration to produce 
this error. Only upgraded R.

Many Thanks,

Eric

R session:


> odfWeave ( 'Report input template.odt' , 'August 2011.odt')
  Copying  Report input template.odt 
  Setting wd to  
C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483 
  Unzipping ODF file using unzip -o "Report input template.odt" 
Error in odfWeave("Report input template.odt", "August 2011.odt") : 
  Error unzipping file




When I start a shell and go to the temp directory in question and copy the 
exact command that the error message says produced an error the command runs 
fine. Here is that session:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

H:\>c:

C:\>cd C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2/odfWeave2153483

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>dir
 Volume in drive C has no label.
 Volume Serial Number is 7464-62CA

 Directory of C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483

03/21/2011  11:11 AM  .
03/21/2011  11:11 AM  ..
03/21/2011  11:11 AM13,780 Report input template.odt
   1 File(s) 13,780 bytes
   2 Dir(s)   7,987,343,360 bytes free

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>unzip -o 
"Report input template.odt"
Archive:  Report input template.odt
 extracting: mimetype
   creating: Configurations2/statusbar/
  inflating: Configurations2/accelerator/current.xml
   creating: Configurations2/floater/
   creating: Configurations2/popupmenu/
   creating: Configurations2/progressbar/
   creating: Configurations2/menubar/
   creating: Configurations2/toolbar/
   creating: Configurations2/images/Bitmaps/
  inflating: content.xml
  inflating: manifest.rdf
  inflating: styles.xml
 extracting: meta.xml
  inflating: Thumbnails/thumbnail.png
  inflating: settings.xml
  inflating: META-INF/manifest.xml

C:\DOCUME~1\Koster01\LOCALS~1\Temp\Rtmp4uCcY2\odfWeave2153483>







- Original message -
From: "psycho-ld" 
To: r-help@r-project.org
Date: Sun, 23 Jan 2011 01:47:44 -0800 (PST)
Subject: [R] odfWeave Error unzipping file in Win 7


Hey guys,

I'm just getting started with R (version 2.12.0) and odfWeave, and keep
stumbling from one problem to the next. The current one is the following:

trying to use odfWeave:

> odfctrl <- odfWeaveControl(
+ zipCmd = c("C:/Program Files/unz552dN/VBunzip.exe $$file$$ .",
+  "C:/Program Files/unz552dN/VBunzip.exe $$file$$"))
> 
> odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl)
  Copying  C:/testat.odt 
  Setting wd to 
D:\Users\egf\AppData\Local\Temp\Rtmpmp4E1J/odfWeave23103351832 
  Unzipping ODF file using C:/Program Files/unz552dN/VBunzip.exe
"testat.odt" 
Fehler in odfWeave("C:/testat.odt", "C:/iris.odt", control = odfctrl) : 
  Error unzipping file

so I tried a few other unzipping programs like jar and 7-zip, but still the
same problem occurs, I also tried to install zip and unzip, but then I get
some error message that registration failed (Error 1904 )

so if there are anymore questions, just ask, would be great if someone could
help me though

cheers
psycho-ld

-- 
View this message in context: 
http://r.789695.n4.nabble.com/odfWeave-Error-unzipping-file-in-Win-7-tp3232359p3232359.html
Sent from the R help mailing list archive at Nabble.com.




[R] Stucked with as.numeric function

2011-03-21 Thread patsko
Hi list,

I have problems with the as.numeric function. I have imported probabilities 
from external data, but they are classified as factors as str() shows. 
Therefore my goal is to convert the colum from factor to numeric level with 
keeping the decimals.

I have googled the problem for a while now and kept to several  advices like 
http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f
 and history from the list but it is impossible for me to convert the data to 
numeric without rounding or ranking the values. 

E.g.:
Simply using as.numeric puts the values into ranked classes as explained in the 
manual, 
As.numeric(as.character(probas)) as well as 
as.numeric(levels(probas$forecast_probs))[as.integer(probas$forecast_probs)]
 return “NA” for every row.

Anyone any idea?

--



[R] Lat Lon NetCDF subset

2011-03-21 Thread hgreatrex
Hi,

I'm trying to read a subset of a netcdf file into R, but although I'm
relatively experienced using R, I'm still new to netCDF files, so this may
be a very simple/stupid question!  

I've included an example of the type of file I'm looking at here.  

www.met.reading.ac.uk/~swp06hg/ccd1983_01-dk1_20.nc   (~7Mb)

It's a 2D array of the variable CCD along with its lat and long coordinates.
This is its R summary
#[1] "file ccd1983_01-dk1_20.nc has 2 dimensions:"
#[1] "lat   Size: 1974"
#[1] "lon   Size: 1894"
#[1] ""
#[1] "file ccd1983_01-dk1_20.nc has 1 variables:"
#[1] "short data[lon,lat]  Longname:data Missval:NA"


I want to be able to do 2 things:
1) Extract a lat-long defined box
2) Extract a selection of pixels whose lat-long coordinates are defined in a
separate data.frame (basically I want the value of CCD at specified
rain-gauge locations)

So far my code is as follows

##
library(ncdf)
 CCD20 = open.ncdf("ccd1983_01-dk1_20.nc")
 lat = get.var.ncdf(CCD20, "lat") # coordinate variable
 lon = get.var.ncdf(CCD20, "lon") # coordinate variable
 z = get.var.ncdf(CCD20, "data") 
##

I'm then manually searching for the long/lat I want.  I know the ncdf
package will let me define a user defined box in terms of pixels e.g.

 z = get.var.ncdf(CCD20, "data", start=c(11,1), count=c(5,-1))
 
However, as netCDF files are often used to store gridded spatial data, if
anyone knows of a package or function which would let me do the equivalent
of

 z = get.var.ncdf(CCD20,"data",lat=c(30,40),lon=c(5,10))

or let me extract individual pixels directly from lat-long coordinates.

I've managed to extract the lat / lon variables, so I know that I could work
out manually which pixels correspond to which location. However, it's easy
to make a mistake in such a calculation, so I thought I would check here
before reinventing the wheel.  

Sorry if I'm asking something which isn't possible or doesn't make sense,
but I very much look forward to reading any replies!  I'm also not aware of
an RnetCDF mailing list, but sorry if this is the wrong list to ask this
question.

Best wishes
Helen Greatrex

PS Also, thank you to the writers of 
http://www.image.ucar.edu/GSP/Software/Netcdf/ for getting me started with
the package.



--
View this message in context: 
http://r.789695.n4.nabble.com/Lat-Lon-NetCDF-subset-tp3394326p3394326.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] BFGS and Nelder-Mead

2011-03-21 Thread LC-Bea
Ravi, please look at my last mail.
I tried again and described the results there: I reach
convergence, but the values are still very different.

thank you so much!

Lucía

--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3394334.html
Sent from the R help mailing list archive at Nabble.com.



[R] Clustering problem

2011-03-21 Thread Abhishek Pratap
Hi Guys

I want to apply a clustering algorithm to my dataset in order to find
regions (points (X, Y)) that have similar values (percent_GC and
mean_phred_quality). Details below.

I have sampled 1% of the points from my main data set of 85 million
points.  The result is still somewhat large (~800K points) and looks
like the following.


      X   Y percent_GC mean_phred_quality
1  4286 930       0.50               0.13
2  4825 947       0.50              20.33
3  8207 932       0.32              26.50
4  8451 940       0.48              24.81
5  9331 931       0.38              16.93
6 11501 949       0.49              31.28

What I want to do is find local regions in which these 4 values are
associated, i.e. points (X, Y) where percent_GC and
mean_phred_quality are closely correlated.

PS: I did calculate the overall Pearson correlation coefficient between
percent_GC and mean_phred_quality; it is not statistically
significant, which got me interested in finding local regions where
it may be.
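[For what it's worth, one possible starting point, my own suggestion rather than anything from the post: cluster on the scaled (X, Y) coordinates with k-means, then inspect the correlation of the two quality measures within each cluster. The toy data are just the six rows shown above; `centers = 2` is arbitrary.]

```r
# Toy data: the six rows printed in the post.
d <- data.frame(X = c(4286, 4825, 8207, 8451, 9331, 11501),
                Y = c(930, 947, 932, 940, 931, 949),
                percent_GC = c(0.50, 0.50, 0.32, 0.48, 0.38, 0.49),
                mean_phred_quality = c(0.13, 20.33, 26.50, 24.81, 16.93, 31.28))
set.seed(1)
# Cluster spatially on scaled coordinates so X and Y get equal weight.
km <- kmeans(scale(d[, c("X", "Y")]), centers = 2, nstart = 10)
# Pearson correlation of the two quality measures within each spatial cluster:
by(d, km$cluster, function(g) cor(g$percent_GC, g$mean_phred_quality))
```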

I would really appreciate your help as I am still a rookie in applying
clustering algorithms.

Thanks!
-Abhi



Re: [R] Need help with error

2011-03-21 Thread Savitri N Appana
Thank you for your suggestion, Allan.  I should have paid attention to
the posting instructions.

Please find below the sample code from ?splsda in the caret package.
Note: it used to work fine in R v2.8.1, but this error shows up now
that I am running R v2.12.1, where I have modified how the splsda and
predict.splsda functions are called, i.e. as caret:::splsda and
caret:::predict.splsda.


 sample code below..
library(caret)


data(mdrr)
set.seed(1)
inTrain <- sample(seq(along = mdrrClass), 450)
 
nzv <- nearZeroVar(mdrrDescr)
filteredDescr <- mdrrDescr[, -nzv]


training <- filteredDescr[inTrain,]
test <- filteredDescr[-inTrain,]
trainMDRR <- mdrrClass[inTrain]
testMDRR <- mdrrClass[-inTrain]
 
preProcValues <- preProcess(training)


trainDescr <- predict(preProcValues, training)
testDescr <- predict(preProcValues, test)


splsFit <- caret:::splsda(trainDescr, trainMDRR, 
  K = 5, eta = .9,
  probMethod = "Bayes")
> splsFit   ### ERROR is HERE
Error in switch(classifier, logistic = { : EXPR must be a length 1
vector



confusionMatrix(
caret:::predict.splsda(splsFit, testDescr),
testMDRR)




Again, thank you in advance for any explanation re the error.


Best,
Savi

>>> Allan Engelhardt  03/19/11 4:24 AM >>>
As it says at the bottom of every post:

> PLEASE do read the posting
guidehttp://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

Without an example that fails, it is hard to help.

Allan

On 18/03/11 16:26, Savitri N Appana wrote:
> Hi R users,
>
> I am getting the following error when using the splsda function in R
> v2.12.1:
>
> "Error in switch(classifier, logistic = { : EXPR must be a length 1
> vector"
>
> What does this mean and how do I fix this?
>
> Thank you in advance!
>
> Best,
> Savi
>



[R] Randomly generating data

2011-03-21 Thread Lisa
Hi, everybody,

I have a problem and need your help.
There are two columns that look like this:

 [1,] "t"  "f" 
 [2,] "f"  "t" 
 [3,] "t"  "f" 
 [4,] "t"  "t" 
 [5,] "f"  "f" 
 
I just want to generate the third column based on these two columns. First,
I randomly choose one of the two columns, say, the first column. So, the
first character of the third column is “t” that looks like this:

 [1,] "t"  "f"  "t"
 [2,] "f"  "t" 
 [3,] "t"  "f" 
 [4,] "t"  "t" 
 [5,] "f"  "f" 
 
Second, I determine the second character of the third column with an
additional Bernoulli trial with some probability, for example rbinom(1, 1,
0.3). If the draw is 0, I keep "f", the second character of the first
column (because the first column was chosen in the first step), while if
the draw is 1, I choose "t" instead, the second character of the second
column.

Third, I determine the third character of the third column with a Bernoulli
trial but a different probability, for example, rbinom(1, 1, 0.7). 

Repeat these processes…

How can I do this efficiently if there are more than a thousand records
(rows)?  Can anybody please help me get this done? Thanks a lot in
advance.
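[A vectorized sketch of the procedure as I read it, added for the archive. The probability vector `p` is hypothetical (the 0.3, 0.7, ... values from the post); its first entry is 0 so row 1 keeps the initially chosen column. rbinom() is vectorized over the probability argument, so this scales directly to thousands of rows.]

```r
set.seed(1)
col1 <- c("t", "f", "t", "t", "f")
col2 <- c("f", "t", "f", "t", "f")
# Suppose the first column was chosen at step 1; p[i] is the probability
# of switching to the *other* column for row i.
p <- c(0, 0.3, 0.7, 0.3, 0.7)
switch.other <- rbinom(length(col1), 1, p)     # one Bernoulli draw per row
col3 <- ifelse(switch.other == 1, col2, col1)  # a 1 takes the other column
cbind(col1, col2, col3)
```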

Lisa



--
View this message in context: 
http://r.789695.n4.nabble.com/Randomly-generating-data-tp3394250p3394250.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Correlation for no of variables

2011-03-21 Thread Vincy Pyne
Thanks Mr Langfelder,

I will definitely go through the packages you have suggested. Actually, I will 
be multiplying three matrices of the order (1 x 1500) %*% (1500 x 1500) %*% 
(1500 x 1), giving me one value at the end.

I will be starting my process in a couple of days and in the meantime will 
refer to the packages you have suggested.

Thanks again

Vincy
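[Editorial aside: the product Vincy describes is a quadratic form. A small sketch, with an identity matrix standing in for the real 1500 x 1500 matrix:]

```r
set.seed(1)
w <- matrix(rnorm(1500), nrow = 1)   # 1 x 1500
V <- diag(1500)                      # stand-in for the 1500 x 1500 matrix
res <- w %*% V %*% t(w)              # 1 x 1 matrix: the single value
# tcrossprod(w %*% V, w) computes the same quantity without an explicit t()
```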

--- On Mon, 3/21/11, Peter Langfelder  wrote:

From: Peter Langfelder 
Subject: Re: [R] Correlation for no of variables
To: "Vincy Pyne" 
Cc: r-help@r-project.org
Received: Monday, March 21, 2011, 4:50 PM

On Mon, Mar 21, 2011 at 8:34 AM, Vincy Pyne  wrote:
> Dear R helpers,
>
> Suppose I have stock returns data of say 1500 companies each for say last 4 
> years. Thus I have a matrix of dimension say 1000 * 1500 i.e. 1500 columns 
> representing companies and 1000 rows of their returns.
>
> I need to find the correlation matrix of these 1500 companies.
>
> So I can find out the correlation as
>
> cor(returns) and expect to get a 1500 * 1500 matrix. However, the process 
> takes a tremendous time. Is there any way of expediting such a process? In 
> reality, I may be dealing with as many as 5000 stocks and may simulate even 10 
> stock returns.


How long is "tremendous time"?

What platform are you on? If you can compile R against a tuned BLAS
library, stats::cor will run faster IF you do not have any missing
data.

If you do have missing data, you may want to try the package WGCNA
(where we work with bigger correlation matrices) that implements a
correlation calculation that is faster particularly if there are few
missing data. This will also run faster if you do have a tuned BLAS
installed.

HTH,

Peter

>
>
>
> Kindly guide.
>
> Vincy
>
>
>
>
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>





Re: [R] Convert Sweave document to a function

2011-03-21 Thread Brian Diggs

On 3/20/2011 12:19 PM, David.Epstein wrote:

I like Sweave, which I consider to be a great contribution. I have just
written a .Rnw document that comes to about 6 pages of mixed code and
mathematical explanation. Now I want to turn the R code into a function. My
R code currently contains statements like N<-1000 and theta<- pi/10. In the
next version of the document, I want N and theta to be parameters of a
function, so that they can be easily varied. My explanation of the code is
still valid, and it seems to me that, if I only knew how to manage the
trick, I would need to change almost nothing in the latex.

The document contains about 6 different code chunks, and 7 different chunks
of latex.

I tried putting
functionname<- function(N,theta) {
into the first code chunk and
}
into the last code chunk, but Sweave said this was poor grammar and rejected
it.

Is there a reasonable way to make my .Rnw source into a function definition?
I would like maintainability of the code to be a criterion for "reasonable",
and I would like to keep latex explanations of what the code is doing
adjacent to the code being explained.

One other point is that I will want to export some of the variables computed
in the function to outside the function, so that they are not variables
local to the function body. I mention this only because it may affect the
solution, if any, to my problem.

Thanks for any help
David


The problem you ran into is that an R function can only contain R code 
(and that each Sweave chunk must be parseable on its own). The best 
solution I know of (though it may not be a good one) is to put all the 
TeX code inside of cat() calls, drop all the noweb notation (which may 
mean doing yourself what Sweave does itself in, for example, fig=TRUE or 
echo=TRUE chunks), and then wrap all of that in a function.


For example, an Sweave set (pulled from Sweave-test-1.Rnw):


Now we look at Gaussian data:

<<>>=
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
@
Note that we can easily integrate some numbers into standard text: The
third element of vector \texttt{x} is \Sexpr{x[3]}, the
$p$-value of the test is \Sexpr{format.pval(t1$p.value)}. % $

Now we look at a summary of the famous iris data set, and we want to
see the commands in the code chunks:


Would turn into (untested):

cat("
Now we look at Gaussian data:
")
cat("
\\begin{Schunk}
\\begin{Sinput}
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
\\end{Sinput}
\\begin{Soutput}
")
library(stats)
x <- rnorm(20)
print(x)
print(t1 <- t.test(x))
cat("
\\end{Soutput}
\\end{Schunk}

Note that we can easily integrate some numbers into standard text: The
third element of vector \texttt{x} is ",x[3],", the
$p$-value of the test is ",format.pval(t1$p.value),"."
,sep="")
cat("
Now we look at a summary of the famous iris data set, and we want to
see the commands in the code chunks:
")


This has several drawbacks.  First, having to put all the TeX inside of 
a cat() is ugly (and you lose any editor support for it actually being 
TeX). Second, you have to do all of the Sweave work manually, including 
duplicating the input and output (if both are wanted) and creating and 
including figures, which makes it easy for things to get out of sync.


A different approach which might work better is the brew package.  It is 
not Sweave, but it can be used to create a file which can then be passed 
to Sweave (I think); I've not used it, but from what I've seen others 
say about it, it may be an approach to this sort of meta-templating in 
multiple languages (TeX and R).
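[Untested editorial sketch of the brew idea, hedged: it assumes brew's `<%= %>` expression tags and its `text=` argument; the template string and `make.report()` are hypothetical names. Since brew evaluates in the calling frame by default, N and theta become ordinary function parameters, which is exactly what the original poster wanted.]

```r
library(brew)

# A brew template mixing TeX and R expressions.
template <- "
With $N = <%= N %>$ and $\\theta = <%= theta %>$, the sample mean is
<%= round(mean(rnorm(N, theta)), 3) %>.
"

# N and theta are looked up in brew's calling frame, i.e. this function.
make.report <- function(N, theta, out = "report.tex")
    brew(text = template, output = out)

make.report(N = 1000, theta = pi/10)
```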



--
View this message in context: 
http://r.789695.n4.nabble.com/Convert-Sweave-document-to-a-function-tp3391654p3391654.html
Sent from the R help mailing list archive at Nabble.com.



--
Brian S. Diggs, PhD
Senior Research Associate, Department of Surgery
Oregon Health & Science University



Re: [R] help regarding RPostgreSQL and R 2.12.2

2011-03-21 Thread Gabor Grothendieck
On Mon, Mar 21, 2011 at 3:50 AM, Shaunak Bangale
 wrote:
> Prof Ripley,
> Thanks for the first reply. Yes, I am using windows xp.
> The problem that I am facing is:
> I want to use RPostgreSQL and xtable packages together.
> R 2.11.1 has RPostgreSQL but not xtable.  R 2.12.2 has xtable but not 
> RPostgreSQL.
> I suppose there should be some way to use RPostgreSQL with R 2.12.2.
> Sorry if I am sounding naïve, but How do I do the compilation that you just 
> mentioned?
> With your suggestion I went through RW-FAQs , posting guide.
> I ended up at the this page: 
> http://cran.r-project.org/bin/windows/contrib/2.12/ReadMe
> That says: ' Packages related to many database system must be linked to the 
> exact
> version of the database system the user has installed, hence it does
> not make sense to provide binaries for packages '
> PostgreSQL 9.0 is installed on my system which I am trying to integrate with 
> R 2.12.2.
> Please throw some light on what can be done.
>

RpgSQL is available for both those versions of R.
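[Hedged aside for the archive, not part of Gabor's reply: the two usual resolutions. The source build assumes Rtools and the PostgreSQL client headers/libraries are installed; RpgSQL talks to PostgreSQL over JDBC, so it needs Java but no compilation.]

```r
# Build RPostgreSQL from source against the locally installed PostgreSQL:
install.packages("RPostgreSQL", type = "source")

# Or use RpgSQL, the package Gabor mentions (JDBC-based, no compilation):
install.packages("RpgSQL")
```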

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com



[R] lattice histogram and grouping

2011-03-21 Thread Evans, David G (DFG)
Hi,
From the following pseudo-code (tweaked from another user): 
library(lattice)
variable <- sample(rep(1:2, 100))
# (fixed: use length.out so individual matches variable in length, and
# convert to factor *before* building the data frame)
individual <- factor(rep(1:3, length.out = length(variable)))
group <- factor(rep(LETTERS[1:2], length(variable)/2))
mydata <- data.frame(variable, individual, group)
histogram(~variable | individual + group, data = mydata)

I get six panels: one for each of individuals 1-3 in group A and one
for each of individuals 1-3 in group B.
What I want is three panels, one for each individual, but with A and B
superposed in the same panel.  I think the "groups=" argument does this
sort of superposition for other lattice functions, but it is apparently
unavailable for histogram().
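[One possible workaround, my untested suggestion rather than anything from the thread: supply a custom panel that delegates to panel.superpose, with panel.histogram as the per-group panel function, so the two groups' histograms overlay within each individual's panel.]

```r
library(lattice)
histogram(~ variable | factor(individual), data = mydata, groups = group,
          type = "count",
          panel = function(x, ...)
              panel.superpose(x, ..., panel.groups = panel.histogram,
                              alpha = 0.4))  # translucency so both show
```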

Thanks very much for any help.  



Re: [R] Correlation for no of variables

2011-03-21 Thread Peter Langfelder
On Mon, Mar 21, 2011 at 8:34 AM, Vincy Pyne  wrote:
> Dear R helpers,
>
> Suppose I have stock returns data of say 1500 companies each for say last 4 
> years. Thus I have a matrix of dimension say 1000 * 1500 i.e. 1500 columns 
> representing companies and 1000 rows of their returns.
>
> I need to find the correlation matrix of these 1500 companies.
>
> So I can find out the correlation as
>
> cor(returns) and expect to get a 1500 * 1500 matrix. However, the process 
> takes a tremendous time. Is there any way of expediting such a process? In 
> reality, I may be dealing with as many as 5000 stocks and may simulate even 10 
> stock returns.


How long is "tremendous time"?

What platform are you on? If you can compile R against a tuned BLAS
library, stats::cor will run faster IF you do not have any missing
data.

If you do have missing data, you may want to try the package WGCNA
(where we work with bigger correlation matrices) that implements a
correlation calculation that is faster particularly if there are few
missing data. This will also run faster if you do have a tuned BLAS
installed.
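[Editorial sketch: a quick timing check at the size the original post mentions (1000 x 1500 random returns), which makes "tremendous time" concrete on a given machine.]

```r
set.seed(1)
returns <- matrix(rnorm(1000 * 1500), nrow = 1000)
system.time(cc <- cor(returns))   # builds the 1500 x 1500 correlation matrix
dim(cc)                            # 1500 1500
```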

HTH,

Peter

>
>
>
> Kindly guide.
>
> Vincy
>
>
>
>
>



Re: [R] Sweave, white space and code blocks

2011-03-21 Thread Dennis Murphy
Hi:


I have a template preamble file for Sweave; near the bottom, I put in this
commented code some time ago so I would never forget it:

%% --
%% IMPORTANT NOTE
%%  All Sweave tokens MUST begin in column 1 or they will be ignored
%%  by Sweave.
%% --

Dennis

On Mon, Mar 21, 2011 at 8:08 AM, David.Epstein
wrote:

> Sweave is very useful, and I'm gradually getting used to it.
>
> I've just been battling Sweave over the re-use of code chunks. As I am
> pretty ignorant in the byways of both Sweave and R, this took a chunk of
> time to sort out. Here is what I learned:
>
> If one re-uses a code chunk, then Sweave (but not Stangle) will insist that
> <>
> start in column 1. In particular, white space to its left is not allowed.
>
> I can't find any mention of this in the Sweave manual, nor in the Sweave
> FAQ. I'm planning to ask for this to be included in the Sweave FAQ, as it
> has certainly been frequently asked by me today.
>
> David
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Sweave-white-space-and-code-blocks-tp3393811p3393811.html
> Sent from the R help mailing list archive at Nabble.com.
>



Re: [R] Curry with `[.function` ?

2011-03-21 Thread Gabor Grothendieck
On Mon, Mar 21, 2011 at 12:20 PM, Kenn Konstabel  wrote:
> On Mon, Mar 21, 2011 at 2:53 PM, Gabor Grothendieck
>  wrote:
>>
>> On Mon, Mar 21, 2011 at 8:46 AM, Kenn Konstabel 
>> wrote:
>> > Dear all,
>> >
>> > I sometimes use the following function:
>> >
>> > Curry <- function(FUN,...) {
>> >   # by Byron Ellis,
>> > https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
>> >   .orig <- list(...)
>> >   function(...) do.call(FUN,c(.orig,list(...)))
>> >   }
>> >
>> > ... and have thought it might be convenient to have a method for [ doing
>> > this. As a simple example,
>> >
>> >> apply(M, 1, mean[trim=0.1])  # hypothetical equivalent to apply(M, 1,
>> > Curry(mean, trim=0.1))
>> >
>> >  would be easier to understand  than passing arguments by ...
>> >
>> >> apply(M, 1, mean, trim=0.1)
>> >
>> > and much shorter than using an anonymous function
>> >
>> >> apply(M, 1, function(x) mean(x, trim=0.1))
>> >
>> > This would be much more useful for complicated functions that may take
>> > several functions as arguments. For example (my real examples are too
>> > long
>> > but this seems general enough),
>> >
>> > foo <- function(x, ...) {
>> >     dots <- list(...)
>> >     mapply(function(f) f(x), dots)
>> >     }
>> >
>> > foo(1:10, mean, sd)
>> > foo(c(1:10, NA), mean, mean[trim=0.1, na.rm=TRUE], sd[na.rm=TRUE])
>> >
>> > Defining `[.function` <- Curry won't help:
>> >
>> >> mean[trim=0.1]
>> > Error in mean[trim = 0.1] : object of type 'closure' is not subsettable
>> >
>> > One can write summary and other methods for class "function" without
>> > such
>> > problems, so this has something to do with [ being a primitive function
>> > and
>> > not using UseMethod, it would be foolish to re-define it as an
>> > "ordinary"
>> > generic function.
>> >
>> > Redefining mean as structure(mean, class="function") will make it work
>> > but
>> > then one would have to do it for all functions which is not feasible.
>> >
>> >> class(mean) <- class(mean)
>> >> class(sd)<-class(sd)
>> >> foo(c(1:10, NA), mean, mean[na.rm=TRUE], mean[trim=0.1, na.rm=TRUE],
>> > sd[na.rm=TRUE])
>> > [1]       NA 5.50 5.50 3.027650
>> >
>> > Or one could define a short-named function (say, .) doing this:
>> >
>> >> rm(mean, sd) ## removing the modified copies from global environment
>> >> .<-function(x) structure(x, class=class(x))
>> >> foo(c(1:10, NA), mean, .(mean)[na.rm=TRUE], .(mean)[trim=0.1,
>> >> na.rm=TRUE],
>> > .(sd)[na.rm=TRUE])
>> >
>> > But this is not as nice. (And neither is replacing "[" with "Curry" by
>> > using
>> > substitute et al. inside `foo`, - this would make it usable only within
>> > functions that one could be bothered to redefine this way - probably
>> > none.)
>> >
>> > Thanks in advance for any ideas and comments (including the ones saying
>> > that
>> > this is an awful idea)
>>
>> If the aim is to find some short form to express functions then
>> gsubfn's fn$ construct provides a way. It allows you to specify
>> functions using formula notation.  The left hand side of the formula
>> is the arguments and the right hand side is the body.  If no args are
>> specified then it uses the free variables in the body in the order
>> encountered to form the args.  Thus one could write the following.
>> (Since no args were specified and the right hand side uses the free
>> variable x it assumes that there is a single arg x.)
>>
>> library(gsubfn)
>> fn$apply(longley, 2, ~ mean(x, trim = 0.1))
>> fn$lapply(longley, ~ mean(x, trim = 0.1))
>> fn$sapply(longley, ~ mean(x, trim = 0.1))
>>
>> fn$ can preface just about any function.  It does not have to be one
>> of the above.  See ?fn and http://gsubfn.googlecode.com for more info.
>>
>
> Thanks a lot! This is not exactly what I meant but it's a very useful hint.
> The idea was to find a concise way of using function with different default
> values for some of the arguments or to "fix" some of the arguments so that
> the result is a unary function.
>
> I couldn't find a way to do it with gsubfn package directly (although maybe
> I should have another look) but by stealing some of your ideas:
>
> . <- structure(NA, class="rice")
> `$.rice` <- function(x,y) structure(match.fun(y), class="function")
> `[.function` <- function(FUN,...) {
>    # https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
>    .orig <- list(...)
>    function(...) do.call(FUN,c(.orig,list(...)))
>    }
>
> .$mean[trim=0.1](c(0:99,1e+20))
> # 50
>

Using gsubfn's fn$ the trick is to  use the identity function:

  library(gsubfn)
  mean.trim.1 <- fn$identity(~ mean(x, trim = 0.1))

Now
  mean.trim.1(x)
is the same as
  mean(x, trim = 0.1)

or in one line:

fn$identity(~ mean(x, trim = 0.1))(c(0:99, 1e20))

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com


Re: [R] recalling different data frames (the way you do in Excel VB but now) in

2011-03-21 Thread Ista Zahn
Hi Bodnar,
The "R way" is to put the data frames in a list:

id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,4)
dat <-data.frame(id,a,b,c,d,e)

and operate recursively on the list:

dat.list <- split(dat, dat$id)
for(i in names(dat.list)){
  dat.list[[i]]  # note [[ : this extracts the data frame itself; do this and that
}

or

lapply(dat.list, function.that.does.this.and.that)
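[Hedged editorial aside, not in the original reply: if the df.1, df.2, ... objects created with assign() really must be looped over by name, get() retrieves an object from its name, mirroring the Excel VB loop in the question.]

```r
for (i in 1:3) {
    d <- get(paste("df", i, sep = "."))
    # ... perform this and that on d ...
    print(dim(d))
}
```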

Best,
Ista
On Mon, Mar 21, 2011 at 12:05 PM, Bodnar Laszlo EB_HU
 wrote:
> Hello everyone,
>
> I'd like to ask you a question again, basically focusing on referring to 
> different objects.
>
> Let's suppose we create the following databases this way:
>
> id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
> a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
> b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
> c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
> d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
> e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,4)
> df <-data.frame(id,a,b,c,d,e)
> df
>
> for (i in 1:3) assign(paste("df", i, sep="."), split(df,df$id)[[i]])
> df.1
> df.2
> df.3
>
> Now in the next step I'd like to get or "recall" these databases, but not just 
> by simply sending 'df.1', 'df.2', 'df.3' to R (because my real df database is 
> much bigger and much more complicated than this simplified one, as usual). I 
> would like to do this in a similar way to how you recall these things in Excel 
> Visual Basic.
>
> You know, if we were in an Excel VB world I would do something like:
> sub exercise ()
>     for i = 1 to 3
>         df.i
>         // perform this and that etc. //
>     next i
> end sub
>
> Returning to R: first I wanted to do it this way:
> for (i in 1:3)
>    df.i (perform this and that etc...)
>
> But of course it is wrong. Is there a proper way to handle this one? What do 
> I miss? I do not know if my question is clear...
> Thank you very much and thanks for the previous answers as well!
> Happy R-exploring!
> Laszlo
>
> 



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org



Re: [R] value changed after paste() function

2011-03-21 Thread Bert Gunter
Inline below.

On Mon, Mar 21, 2011 at 8:18 AM, kellysong  wrote:
> Hi all,
>
I am a new user in R. I am trying to use the paste function to concatenate
strings which have been fetched from a database table (a Postgres database);
the following is my code:
>
> when i do:
>  > sqlFetch(channel,'transactions')
>
> outputs are following(only parts, there are around 200 transactions in
> total)
>
>    orderid productname price quantity discount
> 1    T10248      potato  1.80       10     0.00
> 2    T10248      sweets  2.00        5     0.00
> 3    T10249        milk  1.99        9     0.00
> 4    T10249       apple  2.35       40     0.00
>
>
> when i do:
>> paste(sqlFetch(channel,'transactions'))
>
> outputs are:
> [1] "c(1, 1, 2, 2)"
> [2] "c(7, 8, 6, 1)"
> [3] "c(1.8, 2, 1.99, 2.35) "
> [4] "c(10, 5, 9, 40)"
> [5] "c(0, 0, 0, 0) "
>
>
> any idea why the orderid and the productname have been changed?? and what

The result of sqlFetch is a data frame whose alphanumeric columns are
factors. You need to spend some (more) time with "An Introduction to
R" learning about R's data structures and basic procedures BEFORE you
start using it. After you have done so,

?do.call

and

?factor

will tell you how to deal with this sort of thing. But it is doubtful
that you will understand until you have first spent some time with the
basic docs.
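[To spell the two points out for the archive, my own illustration with a small hypothetical data frame `tx` standing in for the fetched table:]

```r
# In R of this era, character columns become factors by default.
tx <- data.frame(orderid = c("T10248", "T10248", "T10249"),
                 price = c(1.80, 2.00, 1.99))
# paste(tx) coerces the whole data frame, deparsing each column into a
# single string; factor columns show their underlying integer codes:
paste(tx)
# do.call() passes the columns as separate arguments, so paste() works
# element-wise across rows as intended:
do.call(paste, tx)
```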

-- Bert

> should i do to solve this??
>
> Any help would be very appreciated!
>
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/value-changed-after-paste-function-tp3393845p3393845.html
> Sent from the R help mailing list archive at Nabble.com.
>



-- 
Bert Gunter
Genentech Nonclinical Biostatistics
467-7374
http://devo.gene.com/groups/devo/depts/ncb/home.shtml



Re: [R] Curry with `[.function` ?

2011-03-21 Thread Kenn Konstabel
On Mon, Mar 21, 2011 at 2:53 PM, Gabor Grothendieck  wrote:

> On Mon, Mar 21, 2011 at 8:46 AM, Kenn Konstabel 
> wrote:
> > Dear all,
> >
> > I sometimes use the following function:
> >
> > Curry <- function(FUN,...) {
> >   # by Byron Ellis,
> > https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
> >   .orig <- list(...)
> >   function(...) do.call(FUN,c(.orig,list(...)))
> >   }
> >
> > ... and have thought it might be convenient to have a method for [ doing
> > this. As a simple example,
> >
> >> apply(M, 1, mean[trim=0.1])  # hypothetical equivalent to apply(M, 1,
> > Curry(mean, trim=0.1))
> >
> >  would be easier to understand  than passing arguments by ...
> >
> >> apply(M, 1, mean, trim=0.1)
> >
> > and much shorter than using an anonymous function
> >
> >> apply(M, 1, function(x) mean(x, trim=0.1))
> >
> > This would be much more useful for complicated functions that may take
> > several functions as arguments. For example (my real examples are too
> long
> > but this seems general enough),
> >
> > foo <- function(x, ...) {
> > dots <- list(...)
> > mapply(function(f) f(x), dots)
> > }
> >
> > foo(1:10, mean, sd)
> > foo(c(1:10, NA), mean, mean[trim=0.1, na.rm=TRUE], sd[na.rm=TRUE])
> >
> > Defining `[.function` <- Curry won't help:
> >
> >> mean[trim=0.1]
> > Error in mean[trim = 0.1] : object of type 'closure' is not subsettable
> >
> > One can write summary and other methods for class "function" without such
> > problems, so this has something to do with [ being a primitive function
> and
> > not using UseMethod, it would be foolish to re-define it as an "ordinary"
> > generic function.
> >
> > Redefining mean as structure(mean, class="function") will make it work
> but
> > then one would have to do it for all functions which is not feasible.
> >
> >> class(mean) <- class(mean)
> >> class(sd)<-class(sd)
> >> foo(c(1:10, NA), mean, mean[na.rm=TRUE], mean[trim=0.1, na.rm=TRUE],
> > sd[na.rm=TRUE])
> > [1]   NA 5.50 5.50 3.027650
> >
> > Or one could define a short-named function (say, .) doing this:
> >
> >> rm(mean, sd) ## removing the modified copies from global environment
> >> .<-function(x) structure(x, class=class(x))
> >> foo(c(1:10, NA), mean, .(mean)[na.rm=TRUE], .(mean)[trim=0.1,
> na.rm=TRUE],
> > .(sd)[na.rm=TRUE])
> >
> > But this is not as nice. (And neither is replacing "[" with "Curry" by
> using
> > substitute et al. inside `foo`, - this would make it usable only within
> > functions that one could be bothered to redefine this way - probably
> none.)
> >
> > Thanks in advance for any ideas and comments (including the ones saying
> that
> > this is an awful idea)
>
> If the aim is to find some short form to express functions then
> gsubfn's fn$ construct provides a way. It allows you to specify
> functions using formula notation.  The left hand side of the formula
> is the arguments and the right hand side is the body.  If no args are
> specified then it uses the free variables in the body in the order
> encountered to form the args.  Thus one could write the following.
> (Since no args were specified and the right hand side uses the free
> variable x it assumes that there is a single arg x.)
>
> library(gsubfn)
> fn$apply(longley, 2, ~ mean(x, trim = 0.1))
> fn$lapply(longley, ~ mean(x, trim = 0.1))
> fn$sapply(longley, ~ mean(x, trim = 0.1))
>
> fn$ can preface just about any function.  It does not have to be one
> of the above.  See ?fn and http://gsubfn.googlecode.com for more info.
>
>
Thanks a lot! This is not exactly what I meant but it's a very useful hint.
The idea was to find a concise way of using a function with different default
values for some of its arguments, or of "fixing" some of the arguments so that
the result is a unary function.

I couldn't find a way to do it with the gsubfn package directly (although maybe
I should have another look), but by stealing some of your ideas:

. <- structure(NA, class="rice")
`$.rice` <- function(x,y) structure(match.fun(y), class="function")
`[.function` <- function(FUN,...) {
   # https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
   .orig <- list(...)
   function(...) do.call(FUN,c(.orig,list(...)))
   }

.$mean[trim=0.1](c(0:99,1e+20))
# 50
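For readers skimming the archive: the plain Curry() from the top of the thread can be exercised directly, without the class trick. A minimal sketch (the name trimmed_mean is my own, not from the thread):

```r
# Curry() as posted by Byron Ellis: pre-fill some arguments of FUN.
Curry <- function(FUN, ...) {
  .orig <- list(...)
  function(...) do.call(FUN, c(.orig, list(...)))
}

trimmed_mean <- Curry(mean, trim = 0.1)  # unary function with trim fixed
trimmed_mean(c(0:99, 1e20))              # the huge outlier is trimmed away: 50
```

The pre-filled arguments end up first in the do.call() argument list, so positional arguments supplied later fill the remaining slots.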

Thanks again,





> --
> Statistics & Software Consulting
> GKX Group, GKX Associates Inc.
> tel: 1-877-GKX-GROUP
> email: ggrothendieck at gmail.com
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] recalling different data frames (the way you do in Excel VB)

2011-03-21 Thread Bodnar Laszlo EB_HU
Hello everyone,

I'd like to ask you a question again, basically focusing on referring to 
different objects.

Let's suppose we create the following databases this way:

id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,4)
df <-data.frame(id,a,b,c,d,e)
df

for (i in 1:3) assign(paste("df", i, sep="."), split(df,df$id)[[i]])
df.1
df.2
df.3

Now in the next step I'd like to get or "recall" these databases, but not just 
by simply sending 'df.1', 'df.2', 'df.3' to R (because my real df database is much 
bigger and much more complicated than this simplified one, as usual). I would 
like to do this in a way similar to how you refer to these things in Excel Visual 
Basic.

You know, if we were in an Excel VB world I would do something like:
sub exercise ()
for i = 1 to 3
df.i
// perform this and that etc.//
next i
end sub

Returning to R: first I wanted to do it this way:
for (i in 1:3)
df.i (perform this and that etc...)

But of course that is wrong. Is there a proper way to handle this? What am I 
missing? I do not know if my question is clear...
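For the archive, the usual R idiom for this (a sketch with toy data; none of it is from the original post) is to keep the per-id pieces in a named list via split() and loop with lapply(), rather than assign()-ing df.1, df.2, ... and rebuilding their names:

```r
# Toy data standing in for the poster's df.
df <- data.frame(id = rep(1:3, each = 10), a = 1:30)

pieces  <- split(df, df$id)            # named list of sub-data-frames
results <- lapply(pieces, function(d) {
  # "perform this and that" on each piece d
  mean(d$a)
})

# If the df.1, df.2, ... copies already exist, get() retrieves one by name:
for (i in 1:3) assign(paste("df", i, sep = "."), pieces[[i]])
d2 <- get("df.2")
```

Working on the list directly keeps the pieces out of the global environment and gives the results back as one object.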
Thank you very much and thanks for the previous answers as well!
Happy R-exploring!
Laszlo




This e-mail and any attached files are confidential and/...{{dropped:19}}





[R] binary data with correlation

2011-03-21 Thread yvonne fabian
Dear posters,
I have a question concerning binary data analysis. I have presence/absence
data from 5 sampling sessions within 3 years, across 12 fields. Each field had 12
traps. I would like to analyse the data with a Generalized Estimating
Equations (GEE) model in R. For the abundance data I used a gls with the
correlation structure:
correlation = corARMA(form = ~ session|trapfield, p = 1, q = 0)

But now I want to use presence/absence data. I know about the problem with
correlated binary data, but maybe somebody has already created a solution?
Something like the following, which is not working:

rgee1 <- geeglm(Alus01 ~ treatment + scale(LAI) + scale(log(Tot_Sp)) +
scale(AveH) , data = plates, family = binomial, waves = session, id =
trapfield, corstr = "ar1")

Any help is appreciated a lot.
Thanks
Yvonne


--
View this message in context: 
http://r.789695.n4.nabble.com/binary-data-with-correlation-tp3393884p3393884.html
Sent from the R help mailing list archive at Nabble.com.



[R] value changed after paste() function

2011-03-21 Thread kellysong
Hi all,

I am a new R user. I am trying to use the paste function to concatenate
strings fetched from a database table (a Postgres database). The
following is my code.

When I do:
 > sqlFetch(channel,'transactions')

the output is the following (only part of it; there are around 200 transactions
in total):

  orderid productname price quantity discount
1  T10248      potato  1.80       10     0.00
2  T10248      sweets  2.00        5     0.00
3  T10249        milk  1.99        9     0.00
4  T10249       apple  2.35       40     0.00


When I do:
> paste(sqlFetch(channel,'transactions'))

the output is:
[1] "c(1, 1, 2, 2)"
[2] "c(7, 8, 6, 1)"
[3] "c(1.8, 2, 1.99, 2.35)"
[4] "c(10, 5, 9, 40)"
[5] "c(0, 0, 0, 0)"


Any idea why the orderid and the productname have been changed, and what
should I do to solve this?
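For reference, a sketch of what is happening (toy data, not the poster's table): paste() coerces each column of a data frame with as.character(), which deparses the whole vector into one string such as "c(1, 1, 2, 2)". To paste row-wise, pass the columns as separate arguments, e.g. via do.call():

```r
tx <- data.frame(orderid  = c("T10248", "T10248", "T10249", "T10249"),
                 price    = c(1.80, 2.00, 1.99, 2.35),
                 quantity = c(10, 5, 9, 40))

paste(tx)                                  # one deparsed string per column
rows <- do.call(paste, c(tx, sep = ","))   # one string per row: "T10248,1.8,10"
```

In the R version the poster used, string columns defaulted to factors, and factors were deparsed as their underlying integer codes; that is presumably why orderid and productname appeared as c(1, 1, 2, 2) and c(7, 8, 6, 1).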

Any help would be very appreciated!



--
View this message in context: 
http://r.789695.n4.nabble.com/value-changed-after-paste-function-tp3393845p3393845.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] appending collums in for loop

2011-03-21 Thread Scott Chamberlain
I can't reproduce your work, but I think you just need 

> regionMatchABCDE[,i] <- cbind(regionMatch[,10:18])

instead of 

regionMatchABCDE <- cbind(regionMatch[,10:18])

within the for loop
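A self-contained sketch of the pattern (made-up 5-row data, since the original .xls files aren't available): collect each iteration's columns in a list and bind once at the end, which sidesteps the empty-data-frame indexing problem and avoids repeated cbind() copies:

```r
out <- vector("list", 3)
for (i in 1:3) {
  d <- data.frame(1:5)            # stands in for regionMatch[, 10:18]
  names(d) <- paste0("Array", i)
  out[[i]] <- d
}
combined <- do.call(cbind, out)   # 5 rows, one new column per iteration
```

Binding once at the end also scales better than growing the data frame inside the loop, since each cbind() on a growing object copies everything accumulated so far.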

Scott 
On Monday, March 21, 2011 at 7:36 AM, Who Am I? wrote:
Hi All,
> 
> I am trying to append columns to a data frame in a for loop. I read in
> tables, do some processing and then write the result to a data.frame. But
> what I want is that the results are appended to the data frame instead
> of overwriting the results of the previous table.
> It has to look something like this:
> 
> After going through the loop once:
> Array 1 
> 1 
> 2 
> 3 
> 4 
> 5 
> 
> After going through the loop twice: 
> Array 1 Array 2 
> 1 1 
> 2 2 
> 3 3 
> 4 4 
> 5 5 
> 
> After going through the loop three times: 
> Array 1 Array 2 Array 3
> 1 1 1
> 2 2 2
> 3 3 3
> 4 4 4
> 5 5 5
> 
> This is my code:
> 
> setwd("J:/Stage/Datasets2/Datasets/outData")
> 
> masterTable<-read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls",sep="\t", dec=",", fill=T, header=T)
> masterTable<-data.frame(masterTable)
> 
> fileNames<-list.files(getwd(), pattern='_0,5 -0,51.xls')
> regionMatchABCDE<-data.frame()
> 
> for(i in 1:5) {
>  fileName <- fileNames[i]
>  newFile <- file.path(getwd(), paste(fileNames[i], "samen_0,5
> -0,51.xls"))
>  snpidFile<-read.table(fileNames[i],sep="\t", dec=",", fill=T, header=T)
>  snpidFile<-data.frame(snpidFile)
>  regionMatch<-cbind(masterTable, masterTable[match(masterTable$Pos,
> snpidFile$Pos),])
>  regionMatchABCDE<-cbind(regionMatch[,10:18])
> }
> 
> write.table(regionMatchABCDE, file= "Array 0-1-2-3-4-5.xls", col.names=T,
> row.names=F, quote=F, sep = "\t")
> 
> Thanks!
> 
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/appending-collums-in-for-loop-tp3393445p3393445.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> 




Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread LC-Bea
I'm trying to give you more information.

- Another function (the problem is the same with both):

pen.wls <- function(x) {
  ax <- x[32:40]
  indice <- x[1:31]
  bx <- x[41:49]
  awlsbind1 <- matrix(rep(ax, 31), nrow = 9, ncol = 31)
  indice <- as.vector(indice - mean(indice))
  bx <- as.vector(bx / sum(bx))
  me <- awlsbind1 + bx %*% t(indice)
  result <- sum(defunciones * (lts - me)^2) + 10 * sum(((t(u) %*% x) - c)^2)
  result
}

- I ran it with optimx, and I reach convergence with Nelder-Mead,
but the values are still different.

BFGS and NM both reach convergence (code = 0), but they still produce
different results.

- The KKT conditions are 1 = TRUE and 2 = FALSE.
But are these conditions useful when we have constraints?

I included one penalty term, 10*sum(((t(u)%*%x)-c)^2), which enforces
sum(kt) = 0 and sum(bx) = 1.

I hope this information helps to show where the problem is...
Lucía


--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393731.html
Sent from the R help mailing list archive at Nabble.com.



[R] Sweave, white space and code blocks

2011-03-21 Thread David.Epstein
Sweave is very useful, and I'm gradually getting used to it.

I've just been battling Sweave over the re-use of code chunks. As I am
pretty ignorant in the byways of both Sweave and R, this took a chunk of
time to sort out. Here is what I learned:

If one re-uses a code chunk, then Sweave (but not Stangle) will insist that the
chunk reference, <<label>>, start in column 1. In particular, white space to its
left is not allowed.

I can't find any mention of this in the Sweave manual, nor in the Sweave
FAQ. I'm planning to ask for this to be included in the Sweave FAQ, as it
has certainly been frequently asked by me today.
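To illustrate (a made-up fragment with hypothetical chunk labels): in the reusing chunk, the reference line must begin in column 1:

```
<<setup>>=
x <- 1:10
@

<<analysis>>=
<<setup>>
mean(x)
@
```

Indenting the <<setup>> reference inside the analysis chunk makes Sweave fail to recognise it, while Stangle still expands it.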

David

--
View this message in context: 
http://r.789695.n4.nabble.com/Sweave-white-space-and-code-blocks-tp3393811p3393811.html
Sent from the R help mailing list archive at Nabble.com.



[R] Correlation for no of variables

2011-03-21 Thread Vincy Pyne
Dear R helpers,

Suppose I have stock returns data for say 1500 companies, each for say the last 4 
years. Thus I have a matrix of dimension say 1000 * 1500, i.e. 1500 columns 
representing companies and 1000 rows of their returns.

I need to find the correlation matrix of these 1500 companies. 

So I can compute the correlation as 

cor(returns) and expect to get a 1500 * 1500 matrix. However, the process takes a 
tremendous amount of time. Is there any way to expedite this process? In reality, I 
may be dealing with as many as 5000 stocks and may simulate even 10 stock 
returns.
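For the archive, one common speed-up (a sketch with a small matrix; whether it beats cor() depends on the BLAS in use) is to standardize the columns once and use a matrix product, since the correlation matrix equals crossprod(scale(X))/(n-1):

```r
set.seed(1)
X  <- matrix(rnorm(1000 * 50), nrow = 1000)  # stands in for 1000 x 1500 returns
n  <- nrow(X)
Xs <- scale(X)                               # center and scale each column
C  <- crossprod(Xs) / (n - 1)                # 50 x 50 correlation matrix
max(abs(C - cor(X)))                         # agrees with cor() to rounding error
```

With a tuned multithreaded BLAS (e.g. OpenBLAS), the crossprod() call is where the time savings come from.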



Kindly guide. 

Vincy 








Re: [R] Curry with `[.function` ?

2011-03-21 Thread Gabor Grothendieck
On Mon, Mar 21, 2011 at 8:46 AM, Kenn Konstabel  wrote:
> Dear all,
>
> I sometimes use the following function:
>
> Curry <- function(FUN,...) {
>   # by Byron Ellis,
> https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
>   .orig <- list(...)
>   function(...) do.call(FUN,c(.orig,list(...)))
>   }
>
> ... and have thought it might be convenient to have a method for [ doing
> this. As a simple example,
>
>> apply(M, 1, mean[trim=0.1])  # hypothetical equivalent to apply(M, 1,
> Curry(mean, trim=0.1))
>
>  would be easier to understand  than passing arguments by ...
>
>> apply(M, 1, mean, trim=0.1)
>
> and much shorter than using an anonymous function
>
>> apply(M, 1, function(x) mean(x, trim=0.1))
>
> This would be much more useful for complicated functions that may take
> several functions as arguments. For example (my real examples are too long
> but this seems general enough),
>
> foo <- function(x, ...) {
>     dots <- list(...)
>     mapply(function(f) f(x), dots)
>     }
>
> foo(1:10, mean, sd)
> foo(c(1:10, NA), mean, mean[trim=0.1, na.rm=TRUE], sd[na.rm=TRUE])
>
> Defining `[.function` <- Curry won't help:
>
>> mean[trim=0.1]
> Error in mean[trim = 0.1] : object of type 'closure' is not subsettable
>
> One can write summary and other methods for class "function" without such
> problems, so this has something to do with [ being a primitive function and
> not using UseMethod; it would be foolish to re-define it as an "ordinary"
> generic function.
>
> Redefining mean as structure(mean, class="function") will make it work but
> then one would have to do it for all functions which is not feasible.
>
>> class(mean) <- class(mean)
>> class(sd)<-class(sd)
>> foo(c(1:10, NA), mean, mean[na.rm=TRUE], mean[trim=0.1, na.rm=TRUE],
> sd[na.rm=TRUE])
> [1]       NA 5.50 5.50 3.027650
>
> Or one could define a short-named function (say, .) doing this:
>
>> rm(mean, sd) ## removing the modified copies from global environment
>> .<-function(x) structure(x, class=class(x))
>> foo(c(1:10, NA), mean, .(mean)[na.rm=TRUE], .(mean)[trim=0.1, na.rm=TRUE],
> .(sd)[na.rm=TRUE])
>
> But this is not as nice. (And neither is replacing "[" with "Curry" by using
> substitute et al. inside `foo`, - this would make it usable only within
> functions that one could be bothered to redefine this way - probably none.)
>
> Thanks in advance for any ideas and comments (including the ones saying that
> this is an awful idea)

If the aim is to find some short form to express functions then
gsubfn's fn$ construct provides a way. It allows you to specify
functions using formula notation.  The left hand side of the formula
is the arguments and the right hand side is the body.  If no args are
specified then it uses the free variables in the body in the order
encountered to form the args.  Thus one could write the following.
(Since no args were specified and the right hand side uses the free
variable x it assumes that there is a single arg x.)

library(gsubfn)
fn$apply(longley, 2, ~ mean(x, trim = 0.1))
fn$lapply(longley, ~ mean(x, trim = 0.1))
fn$sapply(longley, ~ mean(x, trim = 0.1))

fn$ can preface just about any function.  It does not have to be one
of the above.  See ?fn and http://gsubfn.googlecode.com for more info.

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com



Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread Ravi Varadhan
Yes, your "optimx" result tells me that Nelder-Mead did not converge.  It was 
terminated because it exceeded the maximum number of iterations allowed.  In 
"optimx" you can increase the `itnmax' argument from its default value to 
accomplish this.
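As an illustration with base optim() (a sketch on a toy quadratic; optimx's `itnmax` plays the role of optim's `control$maxit`): convergence code 1 means the iteration limit was reached, and raising the limit turns it into 0:

```r
fn <- function(p) sum((p - 1:5)^2)  # toy objective, minimum at 1:5

few  <- optim(rep(0, 5), fn, method = "Nelder-Mead", control = list(maxit = 5))
many <- optim(rep(0, 5), fn, method = "Nelder-Mead", control = list(maxit = 5000))

few$convergence   # 1: stopped at the iteration limit
many$convergence  # 0: converged
```

Always check the convergence code (and, where available, the KKT diagnostics) before comparing parameter estimates from different optimizers.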

Ravi.

---
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns Hopkins 
University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of LC-Bea
Sent: Monday, March 21, 2011 9:12 AM
To: r-help@r-project.org
Subject: Re: [R] BFGS and Neldear-Mead

Sorry for posting too much!

I found that in Nelder-Mead the convergence result is "1".
I have run this algorithm many times with several data sets;
maybe the problem is that I have to allow a bigger number of iterations.

Lucía

--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393527.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] strange PREDICTIONS from a PIECEWISE LINEAR (mixed) MODEL

2011-03-21 Thread Kevin Wright
1. Try using set.seed for better reproducibility.
2. Does this really work for you?  I get


mydf <- data.frame(x, y,id)
mod2<-lme(y ~ x + x*(x>-1), random=~x|id,
+   data=mydf)
Error in lme.formula(y ~ x + x * (x > -1), random = ~x | id, data = mydf) :
  nlminb problem, convergence error code = 1
  message = iteration limit reached without convergence (9)

Enter a frame number, or 0 to exit

1: lme(y ~ x + x * (x > -1), random = ~x | id, data = mydf)
2: lme.formula(y ~ x + x * (x > -1), random = ~x | id, data = mydf)



Kevin



On Sat, Mar 19, 2011 at 6:16 AM, Federico Bonofiglio wrote:

> Hi Dears,
>
> When I introduce an interaction in a piecewise model I obtain some quite
> unusual results.
>
> If it's not too much trouble, I'd really appreciate your advice.
>
> I've reproduced an example below...
>
> Many thanks
>
>
>
>
> x<-rnorm(1000)
>
> y<-exp(-x)+rnorm(1000)
>
> plot(x,y)
> abline(v=-1,col=2,lty=2)
>
>
> mod<-lm(y~x+x*(x>-1))
>
> summary(mod)
>
> yy<-predict(mod)
>
> lines(x[order(x)],yy[order(x)],col=2,lwd=2)
>
>
> #--lme
>
> #grouping factor, unbalanced
>
> g<-as.character(c(1:200))
> id<-sample(g,size=1000,replace=T,
> prob=sample(0:1,200,rep=T))
>
> table(id)   #unbalanced
>
>
>
> mod2<-lme(y~x+x*(x>-1),random=~x|id,
> data=data.frame(x,y,id))
>
> summary(mod2)
>
>
> newframe<-data.frame(  #fictious id
> id="fictious",
> x)
>
> newframe[1:5,]
>
> #predictions
>
> yy2<-predict(mod2,level=0, newdata=newframe)
>
>
> lines(x[order(x)],yy2[order(x)],col="blue",lwd=2)
>
>
>
> # add variable in the model
>
> z<-rgamma(1000,4,6)
>
> mod3<-lme(y~x+x*(x>-1)+z
> ,random=~x|id,
> data=data.frame(x,y,z,id))
>
> summary(mod3)
>
>
> #new id
>
> newframe2<-data.frame(  #fictious id
> id="fictious",
> x,
> z)
>
>
> #predict
>
> yy3<-predict(mod3,level=0, newdata=newframe2)
>
>
> lines(x[order(x)],yy3[order(x)],col="green",lwd=2)
>
>
>
> # ADD INTERACTION  z:x
>
>
>
> mod4<-lme(y~x+x*(x>-1)+
>
> z+
>   z:x+
> z:x*(x>-1)
>
>   ,random=~x|id,
> data=data.frame(x,y,z,id))
>
>
>
> #predict
>
> yy4<-predict(mod4,level=0, newdata=newframe2)
>
>
> lines(x[order(x)],yy4[order(x)],col="violet",lwd=2)  #something bizarre
>#starts to happen
>#in the predicted values
>
> # they begin to jiggle around the straight line
>
>
>
>
>
>
>
>
> --
> *Little u can do against ignorance,it will always disarm u:
> is the 2nd principle of thermodinamics made manifest, ...entropy in
> expansion.**But setting order is the real quest 4 truth, ..and the
> mission of a (temporally) wise dude.
> *
>


Re: [R] appending collums in for loop

2011-03-21 Thread stephen sefick
I don't think I understand.  Can you work up a dummy example that will
run independent of your actual data, and produce the problem?  This
will help everyone diagnose the problem.  Naively this sounds like an
indexing problem.

Stephen

On Mon, Mar 21, 2011 at 7:56 AM, Who Am I?  wrote:
> I forgot to say I need it to work in a for loop because it will be used for
> over 35 files. I previously programmed it in the most unorthodox way
> possible:
>
> setwd("J:/Stage/Datasets2/Datasets/outData")
> data1<-read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls",sep="\t", dec=",", fill=T, header=T)
> bukuA<-read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
> bukuB<-read.table("AR1000901A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
> bukuC<-read.table("AR1000902A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
> bukuD<-read.table("AR1000903A_N_291110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
> bukuE<-read.table("AR1000904A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
> _SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
> bukuA<-data.frame(bukuA)
> bukuB<-data.frame(bukuB)
> bukuC<-data.frame(bukuC)
> bukuD<-data.frame(bukuD)
> bukuE<-data.frame(bukuE)
> regionMatchA<-cbind(data1, bukuA[match(data1$Pos, bukuA$Pos),])
> regionMatchB<-cbind(data1, bukuB[match(data1$Pos, bukuB$Pos),])
> regionMatchC<-cbind(data1, bukuC[match(data1$Pos, bukuC$Pos),])
> regionMatchD<-cbind(data1, bukuD[match(data1$Pos, bukuD$Pos),])
> regionMatchE<-cbind(data1, bukuE[match(data1$Pos, bukuE$Pos),])
>
>
> regionMatchABCDE<-cbind(data1[,7],data1[,3],
> regionMatchA[,10:18],regionMatchB[,10:18],regionMatchC[,10:18],regionMatchD[,10:18],regionMatchE[,10:18])
> write.table(regionMatchABCDE, file= "Array0-1-2-3-4-5_0,5 -0,5.xls",
> append=F, col.names=T, row.names=F, quote=F, sep = "\t", dec=",")
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/appending-collums-in-for-loop-tp3393446p3393481.html
> Sent from the R help mailing list archive at Nabble.com.
>
>



-- 
Stephen Sefick

| Auburn University                                         |
| Biological Sciences                                      |
| 331 Funchess Hall                                       |
| Auburn, Alabama                                         |
| 36849                                                           |
|___|
| sas0...@auburn.edu                                  |
| http://www.auburn.edu/~sas0025                 |
|___|

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

                                -K. Mullis

"A big computer, a complex algorithm and a long time does not equal science."

                              -Robert Gentleman



Re: [R] appending collums in for loop

2011-03-21 Thread Who Am I?
I forgot to say I need it to work in a for loop because it will be used for
over 35 files. I previously programmed it in the most unorthodox way
possible:

setwd("J:/Stage/Datasets2/Datasets/outData")
data1<-read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls",sep="\t", dec=",", fill=T, header=T)
bukuA<-read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
bukuB<-read.table("AR1000901A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
bukuC<-read.table("AR1000902A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
bukuD<-read.table("AR1000903A_N_291110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
bukuE<-read.table("AR1000904A_N_241110_(Mapping250K_Nsp),Mapping250K_Nsp,CNprobes.tab
_SNP_IDs.xls _0,5 -0,51.xls", sep="\t", dec=",", fill=T, header=T)
bukuA<-data.frame(bukuA)
bukuB<-data.frame(bukuB)
bukuC<-data.frame(bukuC)
bukuD<-data.frame(bukuD)
bukuE<-data.frame(bukuE)
regionMatchA<-cbind(data1, bukuA[match(data1$Pos, bukuA$Pos),])
regionMatchB<-cbind(data1, bukuB[match(data1$Pos, bukuB$Pos),])
regionMatchC<-cbind(data1, bukuC[match(data1$Pos, bukuC$Pos),])
regionMatchD<-cbind(data1, bukuD[match(data1$Pos, bukuD$Pos),])
regionMatchE<-cbind(data1, bukuE[match(data1$Pos, bukuE$Pos),])


regionMatchABCDE<-cbind(data1[,7],data1[,3],
regionMatchA[,10:18],regionMatchB[,10:18],regionMatchC[,10:18],regionMatchD[,10:18],regionMatchE[,10:18])
write.table(regionMatchABCDE, file= "Array0-1-2-3-4-5_0,5 -0,5.xls",
append=F, col.names=T, row.names=F, quote=F, sep = "\t", dec=",")


--
View this message in context: 
http://r.789695.n4.nabble.com/appending-collums-in-for-loop-tp3393446p3393481.html
Sent from the R help mailing list archive at Nabble.com.



[R] appending collums in for loop

2011-03-21 Thread Who Am I?
Hi All,

I am trying to append columns to a data frame in a for loop. I read in
tables, do some processing and then write the result to a data.frame. But
what I want is that the results are appended to the data frame instead
of overwriting the results of the previous table.
It has to look something like this:

After going through the loop once:
Array 1 
1   
2   
3   
4   
5   

After going through the loop twice:  
Array 1 Array 2 
1 1 
2 2 
3 3 
4 4 
5 5 

After going through the loop three times:
Array 1 Array 2 Array 3
1  1  1
2  2  2
3  3  3
4  4  4
5  5  5

This is my code:

setwd("J:/Stage/Datasets2/Datasets/outData")

masterTable <- read.table("AR1000900A_N_241110_(Mapping250K_Nsp)_2,Mapping250K_Nsp,CNprobes.tab_SNP_IDs.xls",
    sep = "\t", dec = ",", fill = TRUE, header = TRUE)
masterTable <- data.frame(masterTable)

fileNames <- list.files(getwd(), pattern = '_0,5 -0,51.xls')
regionMatchABCDE <- data.frame()

for (i in 1:5) {
  fileName <- fileNames[i]
  newFile <- file.path(getwd(), paste(fileNames[i], "samen_0,5 -0,51.xls"))
  snpidFile <- read.table(fileNames[i], sep = "\t", dec = ",", fill = TRUE, header = TRUE)
  snpidFile <- data.frame(snpidFile)
  regionMatch <- cbind(masterTable, masterTable[match(masterTable$Pos, snpidFile$Pos), ])
  regionMatchABCDE <- cbind(regionMatch[, 10:18])
}

write.table(regionMatchABCDE, file = "Array 0-1-2-3-4-5.xls", col.names = TRUE,
    row.names = FALSE, quote = FALSE, sep = "\t")

Thanks!
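A hedged sketch of the likely fix: inside the loop, `regionMatchABCDE` is assigned from scratch on every pass, so only the last table's columns survive. Appending onto the accumulator instead produces the "Array 1 .. Array n" layout shown above. File reading is replaced here by toy data frames (an assumption for illustration):

```r
# Toy stand-in for the per-file processing: three "tables", one column each.
tables <- list(data.frame(v = 1:5), data.frame(v = 6:10), data.frame(v = 11:15))

result <- NULL
for (i in seq_along(tables)) {
  newcols <- tables[[i]]$v          # stands in for regionMatch[, 10:18]
  result <- cbind(result, newcols)  # append to the accumulator, don't overwrite
}
colnames(result) <- paste("Array", seq_along(tables))
result  # 5 rows, one column per pass through the loop
```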

--
View this message in context: 
http://r.789695.n4.nabble.com/appending-collums-in-for-loop-tp3393446p3393446.html


Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread LC-Bea
Sorry for posting too much!


I found that with Nelder-Mead the convergence code is "1".
I have run this algorithm many times with several data sets;
maybe the problem is that

I have to allow a larger number of iterations

Lucía

--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393527.html




Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread LC-Bea
I've tried with optimx; these are the results.
Please help me with this... I'm trying to understand what they mean...


> lcmleQN

par
1 2.32805062, 1.94101989, 1.96853015, 1.12342948, 1.40725816, 1.30226031,
0.55687649, 0.67490698, 0.69883229, 0.72925173, 0.54030535, 0.54461379,
0.36190589, 0.31421293, 0.11386490, -0.28615571, -0.14935098, -0.30644983,
-0.56747037, -0.38677021, -0.41294107, -1.00118828, -0.94659793,
-0.76224978, -0.62148013, -1.18925770, -1.47986739, -1.72204331,
-1.16313012, -1.74013570, -1.87021566, -5.29621049, -8.00726862,
-7.02957636, -6.67849924, -6.04102731, -5.16783472, -4.34209917,
-3.54995556, -2.34885117, 0.29181040, 0.15986090, 0.02910809, 0.07948031,
0.13340674, 0.11592655, 0.07209577, 0.07753882, 0.04075539

    fvalues method fns grs itns conv KKT1  KKT2 xtimes
1 -81140057   BFGS 527  28 NULL    0 TRUE FALSE   0.67

> lcmleS

par
1 2.17772594, 2.01672660, 2.08178576, 0.70717754, 1.36665299, 1.52691701,
0.61881301, 0.66814873, 0.91881338, 0.74934458, 0.43494487, 0.46521162,
0.22710505, 0.29096933, 0.13061327, -0.48398756, -0.14132034, -0.40960459,
-0.66999565, -0.34176956, -0.21534550, -1.07791017, -0.89938104,
-0.54921569, -0.51838385, -1.17441612, -1.46214041, -1.76872051,
-0.92208257, -1.79888615, -1.95263053, -5.20704215, -7.90426743,
-6.92991136, -6.57552982, -5.95087991, -5.08538490, -4.29595406,
-3.52006025, -2.31470204, 0.28528187, 0.15976461, 0.03450613, 0.08503689,
0.14405771, 0.10326205, 0.07218856, 0.07254385, 0.04414397

    fvalues      method fns grs itns conv  KKT1  KKT2 xtimes
1 -81101764 Nelder-Mead 502  NA NULL    1 FALSE FALSE   0.11


thanks again!!!

Lucía
(heh! that's my name)


--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393510.html


Re: [R] Package Installation

2011-03-21 Thread Joshua Ulrich
On Sun, Mar 20, 2011 at 2:42 PM, Bogaso Christofer
 wrote:
> Dear all, can somebody guide me how to install the package RQuantLib, for
> which windows binary is not available. I have tried installing it with R CMD
> INSTALL in my windows vista machine (I have Rtools installed), however it
> stopped due to an error saying: "compilation failed for package
> 'RQuantLib'".
>
>
>
> What should I do to install this package?
>

I wrote about how to build RQuantLib on Windows on my blog:
http://blog.fosstrading.com/2010/12/build-rquantlib-on-32-bit-windows.html

>
>
> Thanks and regards,
>
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


--
Joshua Ulrich  |  FOSS Trading: www.fosstrading.com



[R] Curry with `[.function` ?

2011-03-21 Thread Kenn Konstabel
Dear all,

I sometimes use the following function:

Curry <- function(FUN,...) {
   # by Byron Ellis,
https://stat.ethz.ch/pipermail/r-devel/2007-November/047318.html
   .orig <- list(...)
   function(...) do.call(FUN,c(.orig,list(...)))
   }
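A quick usage sketch of currying with this helper (`Curry` is repeated here only so the snippet is self-contained):

```r
# Curry pre-binds some arguments of FUN and returns a new function.
Curry <- function(FUN, ...) {
  .orig <- list(...)
  function(...) do.call(FUN, c(.orig, list(...)))
}

trimmed_mean <- Curry(mean, trim = 0.1)   # pre-binds trim = 0.1
trimmed_mean(c(1:10, 100))                # same as mean(c(1:10, 100), trim = 0.1)
```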

... and have thought it might be convenient to have a method for [ doing
this. As a simple example,

> apply(M, 1, mean[trim=0.1])  # hypothetical equivalent to apply(M, 1,
Curry(mean, trim=0.1))

 would be easier to understand  than passing arguments by ...

> apply(M, 1, mean, trim=0.1)

and much shorter than using an anonymous function

> apply(M, 1, function(x) mean(x, trim=0.1))

This would be much more useful for complicated functions that may take
several functions as arguments. For example (my real examples are too long
but this seems general enough),

foo <- function(x, ...) {
 dots <- list(...)
 mapply(function(f) f(x), dots)
 }

foo(1:10, mean, sd)
foo(c(1:10, NA), mean, mean[trim=0.1, na.rm=TRUE], sd[na.rm=TRUE])

Defining `[.function` <- Curry won't help:

> mean[trim=0.1]
Error in mean[trim = 0.1] : object of type 'closure' is not subsettable

One can write summary and other methods for class "function" without such
problems, so this has something to do with [ being a primitive function
that does not use UseMethod; it would be foolish to re-define it as an
"ordinary" generic function.

Redefining mean as structure(mean, class="function") will make it work, but
then one would have to do it for all functions, which is not feasible.

> class(mean) <- class(mean)
> class(sd)<-class(sd)
> foo(c(1:10, NA), mean, mean[na.rm=TRUE], mean[trim=0.1, na.rm=TRUE],
sd[na.rm=TRUE])
[1]   NA 5.50 5.50 3.027650

Or one could define a short-named function (say, .) doing this:

> rm(mean, sd) ## removing the modified copies from global environment
> .<-function(x) structure(x, class=class(x))
> foo(c(1:10, NA), mean, .(mean)[na.rm=TRUE], .(mean)[trim=0.1, na.rm=TRUE],
.(sd)[na.rm=TRUE])

But this is not as nice. (And neither is replacing "[" with "Curry" by using
substitute et al. inside `foo`, - this would make it usable only within
functions that one could be bothered to redefine this way - probably none.)

Thanks in advance for any ideas and comments (including the ones saying that
this is an awful idea)

Best regards,

Kenn

Kenn Konstabel
Department of Chronic Diseases
National Institute for Health Development
Hiiu 42
Tallinn, Estonia



Re: [R] Combination that adds a value

2011-03-21 Thread Petr Savicky
On Mon, Mar 21, 2011 at 04:08:50AM -0700, Julio Rojas wrote:
> Dear all, I have three vectors "x<-1:n", "y<-1:m" and "z<-2:(n+m)". I need to 
> find all combinations of "x" and "y" that add up a given value of "z", e.g.,  
> for "z==3" the combinations will be "list(c(1,2),c(2,1))".

Dear Julio:

Try the following.

  n <- 3
  m <- 4
  x <- 1:n
  y <- 1:m
  a <- expand.grid(x=x, y=y)
  b <- subset(a, x + y == 3)
  b

x y
  2 2 1
  4 1 2

In order to create a list of the required pairs, try
the following

  out <- vector("list", nrow(b))
  for (i in seq.int(along=out)) {
  out[[i]] <- c(b[i, 1], b[i, 2])
  }
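The same list of pairs can also be built without an explicit loop; a sketch using the same n, m, and target sum as above:

```r
n <- 3; m <- 4
a <- expand.grid(x = 1:n, y = 1:m)
b <- subset(a, x + y == 3)

# One numeric pair per matching row of b.
out <- lapply(seq_len(nrow(b)), function(i) as.numeric(b[i, ]))
out  # list(c(2, 1), c(1, 2))
```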

Hope this helps.

Petr Savicky.



Re: [R] linear regression in a ragged array

2011-03-21 Thread David Winsemius


On Mar 21, 2011, at 5:25 AM, Marcel Curlin wrote:


Hello,
I have a large dataset of the form

subj  var1  var2
001   100   200
001   120   226
001   130   238
001   140   245
001   150   300
002   110   205
002   125   209
003   101   233
003   115   254

I would like to perform linear regression of var2 on var1 for each subject
separately. It seems like I should be able to use the tapply function as
you do for simple operations (like finding a mean of var1 for each
subject), but I am not sure of the correct syntax for this. Is there a way
to do this?




tapply works on vectors, split works on data.frames.

lapply(split(dat, dat$subj), function(x) lm(var2 ~ var1, data=x))
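Building on the `split()`/`lapply()` idiom above, per-subject coefficients can then be pulled out with `coef()`. Toy data (an assumption) stands in for the real dataset, in the subj/var1/var2 layout from the question:

```r
# Toy data with a known slope of 2 and intercept 10 * subj per subject.
dat <- data.frame(subj = rep(1:3, each = 4),
                  var1 = rep(c(100, 120, 130, 140), times = 3))
dat$var2 <- 2 * dat$var1 + 10 * dat$subj

fits <- lapply(split(dat, dat$subj), function(x) lm(var2 ~ var1, data = x))
sapply(fits, coef)  # one column of (intercept, slope) per subject
```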

--

David Winsemius, MD
Heritage Laboratories
West Hartford, CT



Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread LC-Bea
Thank you for your answer!


the message that it gives as a result is 0, "successful completion",

so that point wouldn't be the problem


sorry if my English is not good (I speak Spanish!)

thanks!




--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393173.html


Re: [R] rqss help in Quantreg

2011-03-21 Thread Roger Koenker

On Mar 20, 2011, at 10:24 PM, Man Zhang wrote:

> Dear All,
> 
> I'm trying to construct confidence interval for an additive quantile 
> regression 
> model.
> 
> In the quantreg package, vignettes section: Additive Models for Conditional 
> Quantiles
> http://cran.r-project.org/web/packages/quantreg/index.html
> 
> It describes how to construct the intervals, it gives the covariance matrix 
> for 
> the full set of parameters, \theta is given by the sandwich formula
> V = \tau (1-\tau) (t(X) \phi X)^(-1) (t(X) X)^(-1) (t(X) \phi X)^(-1)
> but gives no derivation of this result. 
> 
> Does anyone know how to obtain the above results? I need to construct these 
> intervals with several adjustments, so I need to understand how "V" is 
> constructed. Any proofs?

As the author of the aforementioned vignette I suppose that I am meant to feel
chastised by this request.  I do not.  There is a reasonable expectation in
mathematics and most other fields that readers have some responsibility
to familiarize themselves with the related literature, especially that cited in
the work that they are currently reading.

There seems to be a growing, deplorable expectation that learning a new
subject is like being spoon-fed some sort of pablum -- it isn't; sometimes
you need to chew.

I would remind would be R-posters of the admonition of Edward Davenant
(quoted by Patrick Billingsley in his magisterial P&M)

I would have a man knockt on the head that should write anything in 
Mathematiques that had been written of before. 



> 
> Thanks, any help is greatly appreciated
> Man Zhang
> 
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.




[R] linear regression in a ragged array

2011-03-21 Thread Marcel Curlin
Hello,
I have a large dataset of the form

subj  var1  var2
001   100   200
001   120   226
001   130   238
001   140   245
001   150   300
002   110   205
002   125   209
003   101   233
003   115   254

I would like to perform linear regression of var2 on var1 for each subject
separately. It seems like I should be able to use the tapply function as you
do for simple operations (like finding a mean of var1 for each subject), but
I am not sure of the correct syntax for this. Is there a way to do this?

Many thanks, Marcel

--
View this message in context: 
http://r.789695.n4.nabble.com/linear-regression-in-a-ragged-array-tp3393033p3393033.html


Re: [R] BFGS and Neldear-Mead

2011-03-21 Thread LC-Bea
hello! this is one of the two functions i use



pen.mle <- function(x) {
  indice <- x[1:31]
  ax <- x[32:40]
  bx <- x[41:49]
  amlebind1 <- matrix(rep(ax, 31), nrow = 9, ncol = 31)
  mv <- -(defunciones * log((exp(amlebind1 + bx %*% t(indice))) * poblacion) -
            ((exp(amlebind1 + bx %*% t(indice))) * poblacion))
  result <- sum(mv) + 10 * sum(((t(u) %*% x) - c)^2)
  result
}

thanks!
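For context, a hedged sketch of how an objective like this is typically handed to `optim()`. The real pen.mle depends on external objects (defunciones, poblacion, u, c) that are not shown in the post, so a trivial quadratic stands in for it here:

```r
# Stand-in objective: minimized at x = c(1, 2, 3).
obj <- function(x) sum((x - 1:3)^2)

fit_bfgs <- optim(par = rep(0, 3), fn = obj, method = "BFGS")
fit_nm   <- optim(par = rep(0, 3), fn = obj, method = "Nelder-Mead",
                  control = list(maxit = 2000))  # raise maxit if conv == 1

fit_bfgs$convergence  # 0 signals successful completion
```

A convergence code of 1 from Nelder-Mead means the iteration limit was reached, which is why raising `maxit` is worth trying.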



--
View this message in context: 
http://r.789695.n4.nabble.com/BFGS-and-Neldear-Mead-tp3388017p3393167.html


Re: [R] exploring dist()

2011-03-21 Thread bra86
dear all, thanks

It helped a lot. The problem was that I should treat my data as a set of
species, not samples, so as output I get a 4087 x 4087 matrix, which will
afterwards be used for PCO.

The thing was that, because of the big number of values, the program does
not show all computed distances, and even the lower triangle is not visible.

Finally I applied

> distm <- dist(hell.dframe, diag = TRUE, upper = TRUE)

and it worked.

Thanks for the kind explanation!
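A related sketch: `as.matrix()` on a "dist" object gives the full square matrix, which makes every pairwise distance visible (both triangles and the zero diagonal). Toy data is assumed here:

```r
# Three points with easy Euclidean distances: 5, 10, and 5.
m <- rbind(c(0, 0), c(3, 4), c(6, 8))
d <- dist(m)          # lower triangle only by default
full <- as.matrix(d)  # 3 x 3 symmetric matrix with zero diagonal
full
```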

--
View this message in context: 
http://r.789695.n4.nabble.com/exploring-dist-tp3387187p3393170.html


Re: [R] R as a non-functional language

2011-03-21 Thread ONKELINX, Thierry
Dear Russ,

Why not use simply

pH <- c(area1 = 4.5, area2 = 7, mud = 7.3, dam = 8.2, middle = 6.3)

That notation is IMHO the most readable for students.

Best regards,

Thierry


ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey
  

> -Original Message-
> From: r-help-boun...@r-project.org 
> [mailto:r-help-boun...@r-project.org] On behalf of Russ Abbott
> Sent: Sunday, 20 March 2011 6:46
> To: bill.venab...@csiro.au
> CC: r-help@r-project.org
> Subject: Re: [R] R as a non-functional language
> 
> I'm afraid I disagree.  As a number of people have shown, 
> it's certainly possible to get the end result
> 
> > pH <- c(4.5,7,7.3,8.2,6.3)
> > names(pH) <- c('area1','area2','mud','dam','middle')
> > pH
>  area1  area2muddam middle
>4.57.07.38.26.3
> 
> using a single expression. But what makes this non-functional 
> is that the
> names() function operates on a reference to the pH 
> object/entity/element. In other words, the names() function 
> has a side effect, which is not permitted in strictly 
> functional programming.
> 
> I don't know if R has threads. But imagine it did and that one ran
> 
> > names(pH) <- c('area1','area2','mud','dam','middle')
> 
> and
> 
>  > names(pH) <- c('areaA','areaB','dirt','blockage','center')
> 
> in simultaneous threads. Since they are both operating on the 
> same pH element, it is uncertain what the result would be. 
> That's one of the things that functional programming prevents.
> 
> *-- Russ *
> 
> 
> 
> On Sat, Mar 19, 2011 at 10:22 PM,  wrote:
> 
> > PS the form
> >
> > names(p) <- c(...)
> >
> > is still functional, of course.  It is just a bit of 
> syntactic sugar 
> > for the clumsier
> >
> > p <- `names<-`(p, c(...))
> >
> > e.g.
> > > pH <- `names<-`(pH, letters[1:5])
> > > pH
> >  a   b   c   d   e
> > 4.5 7.0 7.3 8.2 6.3
> > >
> >
> >
> >
> > -Original Message-
> > From: Venables, Bill (CMIS, Dutton Park)
> > Sent: Sunday, 20 March 2011 3:09 PM
> > To: 'Gabor Grothendieck'; 'russ.abb...@gmail.com'
> > Cc: 'r-help@r-project.org'
> > Subject: RE: [R] R as a non-functional language
> >
> > The idiom I prefer is
> >
> > pH <- structure(c(4.5,7,7.3,8.2,6.3),
> >names = c('area1','area2','mud','dam','middle'))
> >
> > -Original Message-
> > From: r-help-boun...@r-project.org 
> > [mailto:r-help-boun...@r-project.org]
> > On Behalf Of Gabor Grothendieck
> > Sent: Sunday, 20 March 2011 2:33 PM
> > To: russ.abb...@gmail.com
> > Cc: r-help@r-project.org
> > Subject: Re: [R] R as a non-functional language
> >
> > On Sun, Mar 20, 2011 at 12:20 AM, Russ Abbott 
> 
> > wrote:
> > > I'm reading Torgo (2010) *Data Mining with 
> > > R*in
> > > preparation for a class I'll be teaching next quarter.  Here's an 
> > > example that is very non-functional.
> > >
> > >> pH <- c(4.5,7,7.3,8.2,6.3)
> > >> names(pH) <- c('area1','area2','mud','dam','middle')
> > >> pH
> > >  area1  area2muddam middle
> > >   4.57.07.38.26.3
> > >
> > >
> > > This sort of thing seems to be quite common in R.
> >
> > Try this:
> >
> > pH <- setNames(c(4.5,7,7.3,8.2,6.3),
> > c('area1','area2','mud','dam','middle'))
> >
> >
> >
> >
> > --
> > Statistics & Software Consulting
> > GKX Group, GKX Associates Inc.
> > tel: 1-877-GKX-GROUP
> > email: ggrothendieck at gmail.com
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 

[R] Combination that adds a value

2011-03-21 Thread Julio Rojas
Dear all, I have three vectors "x<-1:n", "y<-1:m" and "z<-2:(n+m)". I need to
find all combinations of "x" and "y" that add up to a given value of "z", e.g.,
for "z==3" the combinations will be "list(c(1,2),c(2,1))".

Thanks in advance for your help.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Mixed modelling course in Halifax

2011-03-21 Thread Highland Statistics

Apologies for cross-posting

We would like to announce a mixed effects modelling course in Halifax, 
Canada. June 2011.


For details, see: http://www.highstat.com/statscourse.htm


Kind regards,

Alain Zuur

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] predicting values from multiple regression

2011-03-21 Thread Anna Lee
Dennis: thank you so much! I got it now and it works just perfectly.
Thanks a lot to the others too!
Anna

2011/3/21 Dennis Murphy :
> Hi:
>
> To amplify Ista's and David's comments:
>
> (1) You should not be inputting separate vectors into lm(), especially if
> you intend to do prediction. They should be combined into a data frame
> instead. This is not a requirement, but it's a much safer strategy for
> modeling in R.
> (2) Your covariate st does not have a linear component. It should,
> particularly if this is an empirical model rather than a theoretical one.
> (3) You should be using poly(var, 2) to create orthogonal columns in the
> model matrix for the variables that are to contain quadratic terms.
> (4) The newdata =  argument of predict.lm() [whose help page you should read
> carefully] requires a data frame with columns having precisely the same
> variable names as exist in the RHS of the model formula in lm().
>
> Example:
> dd <- data.frame(y = rnorm(50), x1 = rnorm(50), x2 = runif(50, -2, 2), x3 =
> rpois(50, 10))
>
> #  fit yhat = b0 + b1 * x1 + b2 * x1^2 + b3 * x2 + b4 * x3 + b5 * x3^2
> mod <- lm(y ~ poly(x1, 2) + x2 + poly(x3, 2), data = dd)
>
> # Note that the names of the variables in newd are the same as those on the
> RHS of the formula in mod
> newd <- data.frame(x1 = rnorm(5), x2 = runif(5, -2, 2), x3 = rpois(5,
> 10))  # new data points
> # Append predictions to newd
> cbind(newd, predict(mod, newdata = newd)) # predictions at new
> data points
>
> # To just get predictions at the observed points, all you need is
> predict(mod)
>
> HTH,
> Dennis
>
> On Sun, Mar 20, 2011 at 11:54 AM, Anna Lee  wrote:
>>
>> Hey List,
>>
>> I did a multiple regression and my final model looks as follows:
>>
>> model9<-lm(calP ~ nsP + I(st^2) + distPr + I(distPr^2))
>>
>> Now I tried to predict the values for calP from this model using the
>> following function:
>>
>> xv<-seq(0,89,by=1)
>> yv<-predict(model9,list(distPr=xv,st=xv,nsP=xv))
>>
>> The predicted values are however strange. Now I do not know whether
>> the model just does not fit the data (actually all coefficients are
>> significant and plot(model) shows a good shape) or whether I did
>> something wrong with my prediction command. Does anyone have an
>> idea???
>>
>> --
>>
>>
>> Thanks a lot, Anna
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 




This e-mail is confidential. If you have received it in error, please
notify me immediately and delete it from your system.





Re: [R] Feature request: rating/review system for R packages

2011-03-21 Thread Jim Lemon

On 03/21/2011 04:33 AM, Janko Thyson wrote:
...
Hi Janko,
As Dieter said, Crantastic is an opportunity for R users to give both 
quickie ratings and reviews of packages. I have to say that doing a 
review isn't trivial. I feel that I should use a package for a while 
before I can review it, and the big packages would take quite some time 
to work through even the majority of functions, especially if you didn't 
normally use them. Nonetheless, I try to keep a running tally on the 
packages that I use, and when I've got a feeling for the capability, 
reliability and ease of use, I try to sit down and write one. I have an 
idea that many packages are downloaded and one or two useful functions 
are used a lot by any given user.


Ben's idea has been floated before, but either no one has put it 
together or I haven't heard of it. That would probably produce a lot 
more information, and the sum of a package's usage is meaningful.


Jim



Re: [R] Number of edges in a graph

2011-03-21 Thread Peter Ehlers

?ecount
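A short usage sketch, assuming the igraph package (which provides watts.strogatz.game) is installed:

```r
library(igraph)

g <- watts.strogatz.game(1, 100, 5, 0.05)
ecount(g)  # prints only the number of edges
vcount(g)  # the vertex count, for completeness
```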

Peter Ehlers

On 2011-03-20 22:02, kparamas wrote:

Hi,

I have an igraph graph object.
g<- watts.strogatz.game(1, 100, 5, 0.05)

If I have the summary of g, it prints

summary(g)

Vertices: 100
Edges: 500
Directed: FALSE
No graph attributes.
No vertex attributes.
No edge attributes.

How to print only the number of edges in g?


--
View this message in context: 
http://r.789695.n4.nabble.com/Number-of-edges-in-a-graph-tp3392633p3392633.html




Re: [R] Replacing Period in String

2011-03-21 Thread Petr Savicky
On Sun, Mar 20, 2011 at 11:49:05PM -0500, Sparks, John James wrote:
> Dear R Users,
> 
> I am working with gsub for the first time.  I am trying to remove some
> characters from a string.  I have hit the problem where the period is the
> shorthand for 'everything' in the R language when what I want to remove is
> the actual periods.  In the example below, I simply want to remove the
> periods as I have removed the comma, but instead the complete string is
> wiped out.  I would appreciate it if someone could let me know how I
> communicate that I want to remove the period verbatim to R.
> 
> Many thanks.
> --John Sparks
> 
> > txt="This is a test. However, it is only a test."
> > txt2<-gsub(",","",txt)
> > txt2
> [1] "This is a test. However it is only a test."
> > txt3<-gsub(".","",txt)
> > txt3
> [1] ""

In order to force a "." to be interpreted literally, it is
possible to use "\\." as follows

  gsub("\\.","",txt)
  [1] "This is a test However, it is only a test"

Petr Savicky.
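An equivalent approach that avoids regular expressions altogether is gsub()'s fixed = TRUE argument:

```r
txt <- "This is a test. However, it is only a test."

# fixed = TRUE treats the pattern as a literal string, so "." matches
# only an actual period rather than "any character"
gsub(".", "", txt, fixed = TRUE)
# [1] "This is a test However, it is only a test"
```

fixed = TRUE is often the safer choice when the pattern is user-supplied text, since no character needs escaping.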

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Number of edges in a graph

2011-03-21 Thread Petr Savicky
On Sun, Mar 20, 2011 at 10:02:10PM -0700, kparamas wrote:
> Hi,
> 
> I have an igraph graph object.
> g <- watts.strogatz.game(1, 100, 5, 0.05)
> 
> If I have the summary of g, it prints
> > summary(g)
> Vertices: 100 
> Edges: 500 
> Directed: FALSE 
> No graph attributes.
> No vertex attributes.
> No edge attributes.
> 
> How to print only the number of edges in g?

Hi.

The function watts.strogatz.game() is probably from an extension
package. Can you send the name of the package and a reproducible
code to generate a simple example of the graph?

Try str(g). This shows the names of the components of object g,
which can then be used to extract only some of them.

Hope this helps.

Petr Savicky.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help regarding RPostgreSQL and R 2.12.2

2011-03-21 Thread Shaunak Bangale
Prof Ripley,
Thanks for the first reply. Yes, I am using windows xp.
The problem that I am facing is:
I want to use RPostgreSQL and xtable packages together.
R 2.11.1 has RPostgreSQL but not xtable.  R 2.12.2 has xtable but not 
RPostgreSQL.
I suppose there should be some way to use RPostgreSQL with R 2.12.2.
Sorry if I am sounding naïve, but how do I do the compilation that you just 
mentioned?
Following your suggestion, I went through the rw-FAQ and the posting guide.
I ended up at this page: 
http://cran.r-project.org/bin/windows/contrib/2.12/ReadMe
That says: ' Packages related to many database system must be linked to the 
exact
version of the database system the user has installed, hence it does
not make sense to provide binaries for packages '
PostgreSQL 9.0 is installed on my system which I am trying to integrate with R 
2.12.2.
Please throw some light on what can be done.

Thanks
Shaunak




-Original Message-
From: Prof Brian Ripley [mailto:rip...@stats.ox.ac.uk]
Sent: Friday, March 18, 2011 2:54 PM
To: Shaunak Bangale
Cc: r-help@R-project.org
Subject: Re: [R] help regarding RPostgreSQL and R 2.12.2

You haven't even told us your OS, but it sounds like it might be
Windows.

In any case, RPostgreSQL is (like RMySQL) not distributed in binary
form, and you need to compile it from the sources against your
PostgreSQL version (which does matter, including the compiler used to
build it).

This is all covered in the rw-FAQ, if this is Windows (see the posting
guide).

On Fri, 18 Mar 2011, Shaunak Bangale wrote:

> Hi R-team,
> While using R 2.12.2, I came across a problem that it doesn't have 
> RPostgreSQL package in the list of "install packages".
> As my original code was written in R 2.11.1, I could use the RPostgreSQL 
> package.
> I am moving to R 2.12.2 to use the newly added package "xtable" in newer 
> version R 2.12.2.
> Please help me to solve this problem and tell me if the RPostgreSQL package 
> can be added to R 2.12.2.
> Sooner reply will be appreciated.
>
> Thanks in advance.
>
> Regards,
> Shaunak
>
> Shaunak Bangale | Innovation Analyst| +91 9986215550| 
> www.mu-sigma.com |
>
>
>
> 
> This email message may contain proprietary, private and confidential 
> information. The information transmitted is intended only for the person(s) 
> or entities to which it is addressed. Any review, retransmission, 
> dissemination or other use of, or taking of any action in reliance upon, this 
> information by persons or entities other than the intended recipient is 
> prohibited and may be illegal. If you received this in error, please contact 
> the sender and delete the message from your system.
>
> Mu Sigma takes all reasonable steps to ensure that its electronic 
> communications are free from viruses. However, given Internet accessibility, 
> the Company cannot accept liability for any virus introduced by this e-mail 
> or any attachment and you are advised to use up-to-date virus checking 
> software.
>
>   [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R part of Google Summer of Code 2011

2011-03-21 Thread Prof. John C Nash
Last Friday we learned that R is accepted again for the Google Summer of Code.

R's "ideas" are at 
http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2011

On that page is a link to our google groups list for mentors and prospective 
students.
See http://www.google-melange.com/ for the official Google site. Note that 
successful
student applicants are paid a stipend that is generally considered to be quite 
attractive.

Students interested should:
1) look at the ideas and get in touch with mentors of projects of interest;
2) join the Google group for GSoC-R, http://groups.google.com/group/gsoc-r, 
which we use to administer the program for R.

Right now Melange (the system Google uses to run the program) is being reset. 
Should be
back real soon.

We hope for a number of strong applications. Last year we did not take all the 
slots
offered by Google as we did not feel all the student proposals were strong 
enough.

John Nash (co-admin with Claudia Beleites of GSoC-R)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Package Installation

2011-03-21 Thread Prof Brian Ripley

On Mon, 21 Mar 2011, Bogaso Christofer wrote:


Dear all, can somebody guide me how to install the package RQuantLib, for
which windows binary is not available. I have tried installing it with R CMD
INSTALL in my windows vista machine (I have Rtools installed), however it
stopped due to an error saying: "compilation failed for package
'RQuantLib'".


And what did the package maintainer say when you asked (see the 
posting guide)?


You'll need to work on the C++ code in the package and its dependent
library QuantLib.

There is a good reason why a Windows binary is not available.
Actually, I did at some point in 2010 make an x64 binary package by
cross-compiling, but as this is C++ you need an exact match between
the cross-compiler and the native libraries, and in the 32-bit case
the cross-compilers available were not compatible enough.


What should I do to install this package?


Use a more capable OS, i.e. one with POSIX 1003 tools and C libraries.


[[alternative HTML version deleted]]


PLEASE do follow the posting guide:
- No HTML in postings
- provide the 'at a minimum' information
- consult the package maintainer
- R-devel is the list for discussion of compiled code

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help regarding RPostgreSQL and R 2.12.2

2011-03-21 Thread Prof Brian Ripley

On Mon, 21 Mar 2011, Shaunak Bangale wrote:


Prof Ripley,
Thanks for the first reply. Yes, I am using windows xp.
The problem that I am facing is:
I want to use RPostgreSQL and xtable packages together.
R 2.11.1 has RPostgreSQL but not xtable.  R 2.12.2 has xtable but not 
RPostgreSQL.
I suppose there should be some way to use RPostgreSQL with R 2.12.2.
Sorry if I am sounding naïve, but how do I do the compilation that you just 
mentioned?
Following your suggestion, I went through the rw-FAQ and the posting guide.


You still haven't followed them!  Where is the 'at a minimum' 
information asked for in the posting guide?



I ended up at this page: 
http://cran.r-project.org/bin/windows/contrib/2.12/ReadMe
That says: ' Packages related to many database system must be linked to the 
exact
version of the database system the user has installed, hence it does
not make sense to provide binaries for packages '
PostgreSQL 9.0 is installed on my system which I am trying to integrate with R 
2.12.2.
Please throw some light on what can be done.


You compile from the sources.  See
http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-install-packages-into-libraries-in-this-version_003f
and the references therein.
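A rough sketch of the source-install route described in the rw-FAQ (the tarball name is a placeholder — use the version actually on CRAN, and adjust any paths to your own PostgreSQL installation):

```shell
# Assumes Rtools is installed and on PATH, and that R itself is on PATH
# (both covered in the rw-FAQ). Download the RPostgreSQL source tarball
# from CRAN, then, in a Windows command shell in the download directory:

R CMD INSTALL RPostgreSQL_<version>.tar.gz
```

The build must link against the headers and libraries of the installed PostgreSQL version, which is exactly why CRAN does not ship a binary.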

It seems unreasonable (and 'naive') to expect the people who gave you 
the free gift of R *and* wrote the FAQs to search and read them for 
you.


In any case, you are simply wrong in your premise: xtable does exist 
for R 2.11.1 and you could very easily install that from the sources 
or use 
http://cran.r-project.org/bin/windows/contrib/2.11/xtable_1.5-6.zip




Thanks
Shaunak




-Original Message-
From: Prof Brian Ripley [mailto:rip...@stats.ox.ac.uk]
Sent: Friday, March 18, 2011 2:54 PM
To: Shaunak Bangale
Cc: r-help@R-project.org
Subject: Re: [R] help regarding RPostgreSQL and R 2.12.2

You haven't even told us your OS, but it sounds like it might be
Windows.

In any case, RPostgreSQL is (like RMySQL) not distributed in binary
form, and you need to compile it from the sources against your
PostgreSQL version (which does matter, including the compiler used to
build it).

This is all covered in the rw-FAQ, if this is Windows (see the posting
guide).

On Fri, 18 Mar 2011, Shaunak Bangale wrote:


Hi R-team,
While using R 2.12.2, I came across a problem that it doesn't have RPostgreSQL package in 
the list of "install packages".
As my original code was written in R 2.11.1, I could use the RPostgreSQL 
package.
I am moving to R 2.12.2 to use the newly added package "xtable" in newer 
version R 2.12.2.
Please help me to solve this problem and tell me if the RPostgreSQL package can 
be added to R 2.12.2.
Sooner reply will be appreciated.

Thanks in advance.

Regards,
Shaunak

Shaunak Bangale | Innovation Analyst| +91 9986215550| 
www.mu-sigma.com |


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sample size of longitudinal and skewed data

2011-03-21 Thread Lao Meng
Hi all:
I have a question about the sample size calculation.

It's a pilot study, which includes 2 groups (low, high) and 3 time points
(3, 6, 9 months). Each person has 3 results corresponding to the 3 time
points, so it's a longitudinal study.

I want to calculate the minimum sample size from the pilot study, but I
can't find a solution: the data are highly skewed and the design is
longitudinal (multi-level), so the common sample-size formulas don't apply.

Any suggestions from you are welcome.


The demo data is as follow:

id  group  time  result
a   low    3     0
a   low    6     0
a   low    9     3
b   low    3     0
b   low    6     0
b   low    9     5
c   high   3     0
c   high   6     10
c   high   9     80
d   high   3     50
d   high   6     65
d   high   9     100
... ...
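Since no closed-form formula fits a skewed longitudinal outcome, one pragmatic route is to estimate power by simulation: generate data resembling the pilot study for a range of sample sizes and count how often the planned test rejects. A minimal base-R sketch — the log-normal outcome and the two-group Wilcoxon comparison at a single time point are illustrative assumptions, not the poster's actual model:

```r
# Power by simulation for a skewed outcome (illustrative assumptions:
# log-normal data, two groups compared at one time point by Wilcoxon test)
power_sim <- function(n_per_group, effect = 1, sdlog = 1,
                      n_rep = 1000, alpha = 0.05) {
  rejections <- replicate(n_rep, {
    low  <- rlnorm(n_per_group, meanlog = 0,      sdlog = sdlog)
    high <- rlnorm(n_per_group, meanlog = effect, sdlog = sdlog)
    wilcox.test(low, high)$p.value < alpha
  })
  mean(rejections)  # proportion of rejections = estimated power
}

set.seed(1)
# Increase n until the estimated power reaches the target (e.g. 0.80)
sapply(c(10, 20, 40), power_sim)
```

For the full longitudinal structure one would instead simulate correlated measurements per subject and fit a mixed model (e.g. lme4::lmer) inside the loop, with parameters taken from the pilot data.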


Thanks for your help.

My best.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

