[R] Add constraints to rbinom

2013-06-28 Thread Anamika Chaudhuri
Hi:

I am trying to generate Beta-Binomial random variables and then
calculate exact Binomial confidence intervals for the rate. I was
wondering whether I need to add a constraint such that x <= n, and how
I would go about it. The reason is that I am getting data where x > n,
which gives a rate > 1. Here's my code:

set.seed(111)
k <- 63
x <- NULL
p <- rbeta(k, 3, 3)  # so that the mean nausea rate is alpha/(alpha+beta)
min <- 10
max <- 60
n <- as.integer(runif(k, min, max))
for (i in 1:k)
  x <- cbind(x, rbinom(300, n, p[i]))
x <- t(x)
rate <- t(t(x)/n)
se_rate <- sqrt(rate*(1-rate)/n)



# Exact Confidence Interval

l_cl_exact <- qbeta(.025, x, n-x+1)
u_cl_exact <- qbeta(.975, x+1, n-x)

for (i in 1:63){
  for (j in 1:300)
  {
    if (x[i,j]==0)
    {
      l_cl_exact[i,j] <- 0
      u_cl_exact[i,j] <- u_cl_exact[i,j]
    }
    else if (x[i,j]==n[i])
    {
      l_cl_exact[i,j] <- l_cl_exact[i,j]
      u_cl_exact[i,j] <- 1
    }
    else
    {
      l_cl_exact[i,j] <- l_cl_exact[i,j]
      u_cl_exact[i,j] <- u_cl_exact[i,j]
    }
    #print(c(i,j))
  }
}
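[Editor's note] The rates above 1 most likely come from the call `rbinom(300, n, p[i])`: `n` there is the whole length-63 vector, so the 300 draws in row i recycle over all the sample sizes instead of using n[i]. Indexing `n` as well makes x <= n hold by construction, and the edge cases of the exact interval can then be handled without the double loop. A minimal sketch of that idea (the vectorised `ifelse()` lines are a suggested replacement, not code from the original post):

```r
set.seed(111)
k <- 63
p <- rbeta(k, 3, 3)                  # mean nausea rate alpha/(alpha+beta)
n <- as.integer(runif(k, 10, 60))

# One row per arm: row i uses its own size n[i], so every x[i, ] <= n[i].
x <- t(sapply(1:k, function(i) rbinom(300, n[i], p[i])))
rate <- x / n                        # n recycles column-major, pairing n[i] with row i

# Clopper-Pearson exact limits; qbeta() returns NaN (with a warning) at
# x == 0 or x == n, which ifelse() then replaces with the usual 0 and 1.
l_cl_exact <- ifelse(x == 0, 0, qbeta(.025, x, n - x + 1))
u_cl_exact <- ifelse(x == n, 1, qbeta(.975, x + 1, n - x))
```

With the per-row sample sizes, `all(x <= n)` should now be TRUE and every interval should lie inside [0, 1].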


Really appreciate any help.
Thanks
Anamika

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scatter plot with error bars

2013-06-28 Thread Blaser Nello
Are you sure you want to calculate 68% confidence intervals? 

Use the add argument (see ?errbar) to add to the previous plot:
errbar(x2, y2, y2+1.96*SD2, y2-1.96*SD2, col="green", pch=19, add=TRUE)

Best, 
Nello
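[Editor's note] If loading Hmisc is not an option, the same overlay can be sketched in base R with arrows(); `add_errbars` is a made-up helper name for illustration, and the y-limits must be chosen up front to cover every series:

```r
# Base-R stand-in for Hmisc::errbar that can overlay several series.
add_errbars <- function(x, y, sd, col, add = FALSE,
                        ylim = range(y - sd, y + sd)) {
  if (!add) plot(x, y, col = col, pch = 19, ylim = ylim)
  else points(x, y, col = col, pch = 19)
  arrows(x, y - sd, x, y + sd, angle = 90, code = 3, length = 0.05, col = col)
  lines(x, y, col = col, lty = 3)
}

x1 <- 1:5; y1 <- c(10, 11,  9, 12, 10); SD1 <- rep(1, 5)
x2 <- 1:5; y2 <- c(13, 14, 12, 15, 13); SD2 <- rep(1, 5)

pdf(NULL)  # draw off-screen; drop this line for interactive use
add_errbars(x1, y1, SD1, "red", ylim = c(8, 17))   # ylim covers both series
add_errbars(x2, y2, SD2, "green", add = TRUE)
dev.off()
```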

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of beginner
Sent: Friday, 28 June 2013 02:24
To: r-help@r-project.org
Subject: [R] Scatter plot with error bars

Hi

I would like to plot multiple data sets on a scatter plot with error
bars.
To do this I write the following code:

install.packages("Hmisc")
library(Hmisc)

x1 <- data1[,1]
y1 <- data1[,2]
x2 <- data2[,1]
y2 <- data2[,2]
x3 <- data3[,1]
y3 <- data3[,2]

SD1 <- data1[,3]
SD2 <- data2[,3]
SD3 <- data3[,4]

delta <- runif(5)
errbar(x1, y1, y1+SD1, y1-SD1, col="red", pch=19)
lines(x1, y1, col="red", pch=19, lty=3)
errbar(x2, y2, y2+SD2, y2-SD2, col="green", pch=19)
lines(x2, y2, col="green", pch=19, lty=3)
errbar(x3, y3, y3+SD3, y3-SD3, col="blue", pch=19)
lines(x3, y3, col="blue", pch=19, lty=3)

However, with this code I obtain the scatter plot only for x1, y1, and
not for the other data sets. Could you please let me know how I should
modify the code above?

In other situations, when I make a scatter plot of several data sets
without error bars, I usually use the points() function.
However, it does not work in this case...

I would be very grateful for your help. 



--
View this message in context:
http://r.789695.n4.nabble.com/Scatter-plot-with-error-bars-tp4670502.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] multivariate version of aggregate

2013-06-28 Thread Jannis
Yes, I had a look at that function. From the documentation, however, it
did not become clear to me how to split the data frame into subsets of
rows based on an index argument. Like:



testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)


results <- ddply(.data = testframe, INDICES = indices, .fun = function(x) 
corr(x[,1], x[,2]))


Where the last command would yield the correlations between column 1 and 
2 of the first 50 and of the last 50 values.


Any ideas?

Jannis

On 27.06.2013 21:43, Greg Snow wrote:

Look at the plyr package, probably the ddply function in that package.  You
can write your own function to do whatever you want on the pieces of the
split apart object.  Correlation between a specified pair of columns would
be simple.


On Thu, Jun 27, 2013 at 11:26 AM, Jannis bt_jan...@yahoo.de wrote:


Dear List members,


I am seeking a multivariate version of aggregate. I want to compute, for
example, the correlation between subsets of two vectors. In aggregate, I can
only supply one vector of indices for subsets. Is there a ready function
for this or do I need to program my own?


Cheers
Jannis



Re: [R] multivariate version of aggregate

2013-06-28 Thread Rui Barradas

Hello,

You can solve your problem using only base R, with no need for an 
external package. The two instructions below are two ways of doing the same thing.




sapply(split(testframe, indices), function(x) cor(x[, 1], x[, 2]))

as.vector(by(testframe, indices, function(x) cor(x[, 1], x[, 2])))
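[Editor's note] Using the example data from the earlier message, a quick self-contained check that the two base-R forms agree:

```r
set.seed(1)
testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)

# Per-group correlation via split()/sapply() and via by()
r1 <- sapply(split(testframe, indices), function(x) cor(x[, 1], x[, 2]))
r2 <- as.vector(by(testframe, indices, function(x) cor(x[, 1], x[, 2])))

stopifnot(all.equal(unname(r1), r2))  # same numbers; by() just drops the names
```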



Hope this helps,

Rui Barradas

Em 28-06-2013 09:31, Jannis escreveu:

Yes, I had a look at that function. From the documentation, however, it
did not become clear to me how to split the data frame into subsets of
rows based on an index argument. Like:


testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)


results <- ddply(.data = testframe, INDICES = indices, .fun = function(x)
corr(x[,1], x[,2]))

Where the last command would yield the correlations between column 1 and
2 of the first 50 and of the last 50 values.

Any ideas?

Jannis

On 27.06.2013 21:43, Greg Snow wrote:

Look at the plyr package, probably the ddply function in that
package.  You
can write your own function to do whatever you want on the pieces of the
split apart object.  Correlation between a specified pair of columns
would
be simple.


On Thu, Jun 27, 2013 at 11:26 AM, Jannis bt_jan...@yahoo.de wrote:


Dear List members,


I am seeking a multivariate version of aggregate. I want to compute, for
example, the correlation between subsets of two vectors. In aggregate,
I can only supply one vector of indices for subsets. Is there a ready
function for this or do I need to program my own?


Cheers
Jannis



Re: [R] multivariate version of aggregate

2013-06-28 Thread Jannis
Thanks a lot to everybody who responded! My solution now looks similar 
to Rui's and David's suggestions.



Jannis


On 28.06.2013 11:00, Rui Barradas wrote:

Hello,

You can solve your problem using only base R, with no need for an 
external package. The two instructions below are two ways of doing the 
same thing.




sapply(split(testframe, indices), function(x) cor(x[, 1], x[, 2]))

as.vector(by(testframe, indices, function(x) cor(x[, 1], x[, 2])))



Hope this helps,

Rui Barradas

Em 28-06-2013 09:31, Jannis escreveu:

Yes, I had a look at that function. From the documentation, however, it
did not become clear to me how to split the data frame into subsets of
rows based on an index argument. Like:


testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)


results <- ddply(.data = testframe, INDICES = indices, .fun = function(x)
corr(x[,1], x[,2]))

Where the last command would yield the correlations between column 1 and
2 of the first 50 and of the last 50 values.

Any ideas?

Jannis

On 27.06.2013 21:43, Greg Snow wrote:

Look at the plyr package, probably the ddply function in that
package.  You
can write your own function to do whatever you want on the pieces of 
the

split apart object.  Correlation between a specified pair of columns
would
be simple.


On Thu, Jun 27, 2013 at 11:26 AM, Jannis bt_jan...@yahoo.de wrote:


Dear List members,


I am seeking a multivariate version of aggregate. I want to compute,
for example, the correlation between subsets of two vectors. In
aggregate, I can only supply one vector of indices for subsets. Is
there a ready function for this or do I need to program my own?


Cheers
Jannis



Re: [R] Scatter plot with error bars

2013-06-28 Thread beginner
Thank you very much for your help !



--
View this message in context: 
http://r.789695.n4.nabble.com/Scatter-plot-with-error-bars-tp4670502p4670530.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] multivariate version of aggregate

2013-06-28 Thread arun
Hi,
set.seed(45)
testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)
library(plyr)

ddply(testframe, .(indices), summarize, Cor1 = cor(a, b))

#  indices Cor1
#1   1  0.002770524
#2   2 -0.10173


A.K.


- Original Message -
From: Jannis bt_jan...@yahoo.de
To: Greg Snow 538...@gmail.com
Cc: r-help r-help@r-project.org
Sent: Friday, June 28, 2013 4:31 AM
Subject: Re: [R] multivariate version of aggregate

Yes, I had a look at that function. From the documentation, however, it 
did not become clear to me how to split the data frame into subsets of 
rows based on an index argument. Like:


testframe <- data.frame(a = rnorm(100), b = rnorm(100))
indices <- rep(c(1, 2), each = 50)


results <- ddply(.data = testframe, INDICES = indices, .fun = function(x) 
corr(x[,1], x[,2]))

Where the last command would yield the correlations between column 1 and 
2 of the first 50 and of the last 50 values.

Any ideas?

Jannis

On 27.06.2013 21:43, Greg Snow wrote:
 Look at the plyr package, probably the ddply function in that package.  You
 can write your own function to do whatever you want on the pieces of the
 split apart object.  Correlation between a specified pair of columns would
 be simple.


 On Thu, Jun 27, 2013 at 11:26 AM, Jannis bt_jan...@yahoo.de wrote:

 Dear List members,


 I am seeking a multivariate version of aggregate. I want to compute, for
 example, the correlation between subsets of two vectors. In aggregate, I can
 only supply one vector of indices for subsets. Is there a ready function
 for this or do I need to program my own?


 Cheers
 Jannis



Re: [R] multiple csv files for T-test

2013-06-28 Thread arun
HI,
According to ?t.test() documentation
If ‘paired’ is ‘TRUE’ then both ‘x’ and ‘y’ must be specified and
 they must be the same length.  Missing values are silently removed
 (in pairs if ‘paired’ is ‘TRUE’)

#Example with missing values
set.seed(24)
dat1 <- as.data.frame(matrix(sample(c(NA,20:40),40,replace=TRUE),ncol=4))
set.seed(285)
dat2 <- as.data.frame(matrix(sample(c(NA,35:60),40,replace=TRUE),ncol=4))

sapply(colnames(dat1), function(i)
  t.test(dat1[,i], dat2[,i], paired=TRUE)$p.value)
#          V1           V2           V3           V4 
#7.004488e-05 1.374986e-03 6.666004e-04 3.749257e-04 


#Removing missing values first and then doing the test
sapply(colnames(dat1), function(i)
  {x1 <- na.omit(cbind(dat1[,i], dat2[,i])); t.test(x1[,1], x1[,2], paired=TRUE)$p.value})
#          V1           V2           V3           V4 
#7.004488e-05 1.374986e-03 6.666004e-04 3.749257e-04 
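[Editor's note] A small sketch confirming the equivalence described above, using complete.cases() (an alternative choice here) instead of na.omit():

```r
set.seed(24)
dat1 <- as.data.frame(matrix(sample(c(NA, 20:40), 40, replace = TRUE), ncol = 4))
set.seed(285)
dat2 <- as.data.frame(matrix(sample(c(NA, 35:60), 40, replace = TRUE), ncol = 4))

# paired = TRUE silently drops incomplete pairs ...
p_auto <- sapply(colnames(dat1), function(i)
  t.test(dat1[, i], dat2[, i], paired = TRUE)$p.value)

# ... which matches removing those pairs by hand first.
p_manual <- sapply(colnames(dat1), function(i) {
  keep <- complete.cases(dat1[, i], dat2[, i])
  t.test(dat1[keep, i], dat2[keep, i], paired = TRUE)$p.value
})

stopifnot(all.equal(p_auto, p_manual))
```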

A.K.




thanks very much, your help is much appreciated. 

Just another small question: what's the best way to deal with missing data
if I want to do a paired t-test? 


- Original Message -
From: arun smartpink...@yahoo.com
To: R help r-help@r-project.org
Cc: 
Sent: Thursday, June 27, 2013 1:47 PM
Subject: Re: multiple csv files for T-test

Hi,
I used as.data.frame(matrix(...)) just to create an example dataset.  In your 
case, you don't need to do that.  Using the same example:

set.seed(24)
dat1 <- as.data.frame(matrix(sample(20:40,40,replace=TRUE),ncol=4))
set.seed(285)
dat2 <- as.data.frame(matrix(sample(35:60,40,replace=TRUE),ncol=4))

write.csv(dat1,"file1.csv",row.names=FALSE)
write.csv(dat2,"file2.csv",row.names=FALSE)
data1 <- read.csv("file1.csv")
data2 <- read.csv("file2.csv")

###Your code:
dat1New <- as.data.frame(matrix(data1))
dat2New <- as.data.frame(matrix(data2))
###It is always useful to check ?str() 


str(dat1New)
#'data.frame':   4 obs. of  1 variable:
# $ V1:List of 4
#  ..$ : int  26 24 34 30 33 39 25 36 36 25
#  ..$ : int  32 27 34 34 26 38 24 20 30 22
#  ..$ : int  21 31 35 22 24 34 21 32 33 20
#  ..$ : int  26 25 27 23 39 24 35 33 34 40



 dat1New
#  V1
#1 26, 24, 34, 30, 33, 39, 25, 36, 36, 25
#2 32, 27, 34, 34, 26, 38, 24, 20, 30, 22
#3 21, 31, 35, 22, 24, 34, 21, 32, 33, 20
#4 26, 25, 27, 23, 39, 24, 35, 33, 34, 40
 dat2New
#  V1
#1 53, 40, 47, 57, 57, 53, 35, 42, 53, 41
#2 54, 37, 43, 40, 57, 42, 37, 53, 60, 39
#3 54, 60, 46, 50, 35, 41, 58, 45, 36, 53
#4 52, 56, 44, 40, 38, 53, 47, 46, 60, 50
 sapply(colnames(dat1New),function(i) 
t.test(dat1New[,i],dat2New[,i],paired=TRUE)$p.value) 
#Error in x - y : non-numeric argument to binary operator


##Just using data1 and data2

sapply(colnames(data1),function(i) 
t.test(data1[,i],data2[,i],paired=TRUE)$p.value) 
#  V1   V2   V3   V4 
#3.202629e-05 6.510644e-04 6.215225e-04 3.044760e-04 


#or using dat1New and dat2New
sapply(seq_along(dat1New$V1),function(i) 
t.test(dat1New$V1[[i]],dat2New$V1[[i]],paired=TRUE)$p.value)
#[1] 3.202629e-05 6.510644e-04 6.215225e-04 3.044760e-04



A.K.



thanks for the reply, I am getting the following error 
Error in x - y : non-numeric argument to binary operator 

This is what I enter below 

 data1 <- read.csv("file1.csv") 
 data2 <- read.csv("file2.csv") 
 dat1 <- as.data.frame(matrix(data1)) 
 dat2 <- as.data.frame(matrix(data2)) 
 sapply(colnames(dat1),function(i) 
  t.test(dat1[,i],dat2[,i],paired=TRUE)$p.value) 

As far as I can see all my values are numeric...? 


- Original Message -
From: arun smartpink...@yahoo.com
To: R help r-help@r-project.org
Cc: 
Sent: Thursday, June 27, 2013 10:17 AM
Subject: Re: multiple csv files for T-test

Hi,
May be this helps:
#You can use ?read.csv() to read the two files.

set.seed(24)
dat1 <- as.data.frame(matrix(sample(20:40,40,replace=TRUE),ncol=4))
set.seed(285)
dat2 <- as.data.frame(matrix(sample(35:60,40,replace=TRUE),ncol=4))
sapply(colnames(dat1),function(i) t.test(dat1[,i],dat2[,i],paired=TRUE)$p.value)
#  V1   V2   V3   V4 
#3.202629e-05 6.510644e-04 6.215225e-04 3.044760e-04 

A.K.

Hi 
I am fairly new to R so if this is a stupid question please forgive me. 

I have a CSV file with multiple parameters (50).  I have another
CSV file with the same parameters after treatment.  Is there a way I 
can read these two files into R and do multiple paired t-tests, as all the
parameters are in the same columns in each file? 

Thanks in advance



Re: [R] Changing legend to fill colour in ggplot

2013-06-28 Thread John Kane
 
http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example

While the str() data is useful, it is much better to provide real sample data.  
Use dput() to supply it.

If it is a large file, send a sample, dput(head(df, somenumber)), with 
somenumber just enough to provide a representative sample.

We currently have no idea of what the graph looks like, as the R-help list 
strips that out. With the actual data a reader can just copy it and your code 
into R and see exactly what you are getting.

Please don't post in HTML. Plain text is much easier to read; the HTML gets 
dropped and any formatting goes to pot.

I think the actual answer is probably straightforward, but we really should 
have the data.

Thanks

John Kane
Kingston ON Canada


 -Original Message-
 From: suparna.mitra...@gmail.com
 Sent: Fri, 28 Jun 2013 13:25:59 +0800
 To: r-help@r-project.org
 Subject: [R] Changing legend to fill colour in ggplot
 
 Hello R experts,
   I am having a problem to edit legend in ggplot using four variables.
 
 My data structure is :
 str(df)
 'data.frame': 10 obs. of  6 variables:
  $ id: Factor w/ 2 levels "639A","640": 1 1 1 1 1 2 2 2 2 2
  $ species   : Factor w/ 5 levels "acinetobacter_sp",..: 2 5 1 4 3 2 5 1 4 3
  $ genome_coverage_bp: int  8196 3405 8625 22568 2128 6100 1841 3914 8487 1064
  $ genome_length : int  3571237 2541445 3912725 3479613 5460977 3571237 2541445 3912725 3479613 5460977
  $ genome_coverage_percentage: Factor w/ 10 levels "0.02%","0.04%",..: 8 5 7 10 2 6 3 4 9 1
  $ avg_depth_coverage: num  121.96 2.81 19.84 399.63 1.64 ...
 
 
 Now what I did is:
 p <- ggplot(df, aes(genome_coverage_percentage, avg_depth_coverage)) +
   geom_point(aes(colour = species, shape = factor(id)))
 p + scale_shape_discrete(name = "", labels = c("Patient 1", "Patient 2"))
 That creates the plot below.
 But I want the colour legend keys drawn as filled shapes, so that it
 doesn't look as if everything is from Patient 1, which also uses a circle.
 Can anybody help me please?
 
 Thanks a lot in advance :)
  Mitra
 
   [[alternative HTML version deleted]]
 


Re: [R] choicemodelr is misbehaving under R3.0

2013-06-28 Thread Dimitri Liakhovitski
Dear R-studio and R colleagues,

I've interacted with the authors of the package ChoiceModelR and it looks
like there is a problem in the way RStudio interacts with R3.0.

At least the package ChoiceModelR works just fine under both the R GUI and
RStudio when the R version is < 3.0.
When R3.0 is tested - and only the R GUI is used, the package works just fine.
However, when RStudio is used with R3.0 then the computations by
ChoiceModelR take hours instead of minutes. The problem has been replicated
with different Windows 7 PCs (32 and 64 bits) and even with a Linux
computer.

We are not sure if the same kind of problem exists for other R packages -
when one uses RStudio in combination with R3.0.

Could you please look into it? Thank you very much!

Dimitri




On Thu, Jun 27, 2013 at 10:37 AM, Dimitri Liakhovitski 
dimitri.liakhovit...@gmail.com wrote:

 Hi John (and other authors of ChoiceModelR package),

 I am experiencing a weird thing when using the function choicemodelr under
 R3.0.1.
 Before I updated to R3.0, I had used choicemodelr under R2.15. It was
 always as fast or faster than exactly the same task in Sawtooth software.
 Now, I've tried to run it under R3.0.1. It seems to be doing the job. But
 something is slowing it down dramatically. In the beginning, it showed time
 left for estimation as 14 min. But then time passed and the time left
 increased(!) instead of decreasing. My whole run took 1.8 hours. I've run
 the same thing in Sawtooth - it took 13 min.
 I replicated the same result under different conditions (restarted my PC,
 had no other stuff running).

 My PC is a 64-bit PC (Windows).
 Then I asked a colleague who also has R3.0 on his PC to run my code. His
 PC is a 32-bit one (Windows). Same thing happened to him. It showed time
 left as 14 min in the beginning and then the time left started growing.
 Then, I asked a colleague who has R2.15 (on a 32-bit Windows machine) to
 run my code. It took him 12 minutes!

 So, something is going on with ChoiceModelR under R3.0


 Thanks for looking into it.
 --
 Dimitri Liakhovitski




-- 
Dimitri Liakhovitski

[[alternative HTML version deleted]]



Re: [R] How to create a function returning an array ?

2013-06-28 Thread Duncan Murdoch

On 27/06/2013 11:38 PM, David Winsemius wrote:

On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:

 Hi there,

 I would like to know how to change the line img=as.single(rnorm(m)) such 
that, instead of being a vector of length m as it is now, img is an array of 
dimension c(n,m,o), for instance:

 -
 read_ts <- function(n,m,o,img) {
   out <- .Fortran("read_ts",
                   as.integer(n),
                   as.integer(m),
                   as.integer(o),
                   img=as.single(rnorm(n)))
   return(out$img)
 }
 --


Well, assuming that the 'out$img' object has an R-length of n*m*o, wouldn't it 
be simpler to just change the return call to:


In fact, out$img has a length of n, same as on input.  .Fortran won't 
change the length of its arguments.


Duncan Murdoch



return( array( out$img, dim=c(n,m,o) ) )

I don't think you want to start naming your dimension vectors c.
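[Editor's note] Since .Fortran() hands back a vector of exactly the length it was given, the caller has to allocate all n*m*o elements up front and reshape afterwards. A minimal base-R sketch of just the reshape step (standalone, with no Fortran involved):

```r
n <- 2; m <- 3; o <- 4

# Stand-in for the flat result a .Fortran() call would return.
flat <- rnorm(n * m * o)

# array() refills the data in column-major order: the first index varies fastest.
img <- array(flat, dim = c(n, m, o))

stopifnot(identical(dim(img), c(2L, 3L, 4L)))
stopifnot(img[2, 1, 1] == flat[2])
```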





Re: [R] choicemodelr is misbehaving under R3.0

2013-06-28 Thread John Kane
I have encountered what looks to be a problem with ggpairs in GGally. No idea if 
it is from 3.0, as I had never used ggpairs before updating to 3.0, but it 
sounds a bit similar:
http://support.rstudio.org/help/discussions/problems/6796-ggpairs-in-ggally-very-slow-in-rstudio-and-may-cause-a-crash

John Kane
Kingston ON Canada


 -Original Message-
 From: dimitri.liakhovit...@gmail.com
 Sent: Fri, 28 Jun 2013 09:11:40 -0400
 To: r-help@r-project.org, jcol...@decisionanalyst.com, i...@rstudio.com
 Subject: Re: [R] choicemodelr is misbehaving under R3.0
 
 Dear R-studio and R colleagues,
 
 I've interacted with the authors of the package ChoiceModelR and it
 looks
 like there is a problem in the way RStudio interacts with R3.0.
 
 At least the package ChoiceModelR works just fine under both R gui and
 Rstudio when the R version is < 3.0
 When R3.0 is tested - and only R gui is used, the package works just
 fine.
 However, when RStudio is used with R3.0 then the computations by
 ChoiceModelR take hours instead of minutes. The problem has been
 replicated
 with different Windows 7 PCs (32 and 64 bits) and even with a Linux
 computer.
 
 We are not sure if the same kind of problem exists for other R packages -
 when one uses RStudio in combination with R3.0.
 
 Could you please look into it? Thank you very much!
 
 Dimitri
 
 
 
 
 On Thu, Jun 27, 2013 at 10:37 AM, Dimitri Liakhovitski 
 dimitri.liakhovit...@gmail.com wrote:
 
 Hi John (and other authors of ChoiceModelR package),
 
 I am experiencing a weird thing when using the function choicemodelr
 under
 R3.0.1.
 Before I updated to R3.0, I had used choicemodelr under R2.15. It was
 always as fast or faster than exactly the same task in Sawtooth
 software.
 Now, I've tried to run it under R3.0.1. It seems to be doing the job.
 But
 something is slowing it down dramatically. In the beginning, it showed
 time
 left for estimation as 14 min. But then time passed and the time left
 increased(!) instead of decreasing. My whole run took 1.8 hours. I've
 run
 the same thing in Sawtooth - it took 13 min.
 I replicated the same result under different conditions (restarted my
 PC,
 had no other stuff running).
 
 My PC is a 64-bit PC (Windows).
 Then I asked a colleague who also has R3.0 on his PC to run my code. His
 PC is a 32-bit one (Windows). Same thing happened to him. It showed time
 left as 14 min in the beginning and then the time left started growing.
 Then, I asked a colleague who has R2.15 (on a 32-bit Windows machine) to
 run my code. It took him 12 minutes!
 
 So, something is going on with ChoiceModelR under R3.0
 
 
 Thanks for looking into it.
 --
 Dimitri Liakhovitski
 
 
 
 
 --
 Dimitri Liakhovitski
 
   [[alternative HTML version deleted]]
 


Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread S Ellison
 
  I too find R's lexical scoping rules straightforward.
  However, I'd say that if your code relies on lexical 
  scoping to find something, you should probably rewrite your code.
 
 Except of course that almost every function relies on lexical 
 scoping to some extent!

This could get messy, because a) that's true and b) it actually leads to some 
genuine risks when 'globals' get redefined or masked*. 

How about I amend the assertion to "if your code relies on lexical scoping to 
find a variable you defined, you should probably rewrite your code",
and leave it at that, subject to some common sense about whether you know what 
you're doing?

Steve E


*Example
> sin.deg <- function(deg) sin(deg * pi/180)
> sin.deg(45)
[1] 0.7071068
# looks about right 

> pi <- 3.2   # Indiana General Assembly bill #247, 1897. 
> sin.deg(45)
[1] 0.7173561
# oops ...  
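[Editor's note] One defensive pattern (a suggestion, not from the thread): snapshot the constant at definition time, so later masking in the global environment cannot change the function's behaviour:

```r
# Capture the value of pi when sin.deg is defined, not when it is called.
sin.deg <- local({
  pi <- base::pi
  function(deg) sin(deg * pi/180)
})

pi <- 3.2   # mask the global, as in the example above
stopifnot(all.equal(sin.deg(45), sin(base::pi/4)))  # unaffected by the mask
rm(pi)      # tidy up
```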




***
This email and any attachments are confidential. Any use...{{dropped:8}}



Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread Duncan Murdoch

On 28/06/2013 9:28 AM, S Ellison wrote:
  
  I too find R's lexical scoping rules straightforward.

  However, I'd say that if your code relies on lexical
  scoping to find something, you should probably rewrite your code.

 Except of course that almost every function relies on lexical
 scoping to some extent!

This could get messy, because a) that's true and b) it actually leads to some 
genuine risks when 'globals' get redefined or masked*.

How about I amend the assertion to "if your code relies on lexical scoping to find a 
variable you defined, you should probably rewrite your code",
and leave it at that, subject to some common sense about whether you know what 
you're doing?


That still isn't right, because users should feel free to define 
functions and call them from their other functions.


I think who defined it isn't the issue, the issue is whether it might 
change unexpectedly.  The user owns globalenv().  The package author 
owns the package namespace.  So packages should almost never read or 
write things directly from/to globalenv() (the user might change them), 
but they can create their own private environments and write there.


Where it gets a little less clear is when the user writes a function.  I 
would say functions should never write directly to globalenv(), but it's 
perfectly fine to reference constants there (like other functions 
written by the user).  Referencing things there that change is the risky 
thing.
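[Editor's note] The package-private environment described above can be sketched in a few lines; `set_option`/`get_option` are hypothetical names used only for illustration:

```r
# Mutable state lives in the code's own environment, not in globalenv().
.state <- new.env(parent = emptyenv())

set_option <- function(name, value) assign(name, value, envir = .state)
get_option <- function(name) get(name, envir = .state)

set_option("verbose", TRUE)
stopifnot(isTRUE(get_option("verbose")))
# Nothing named "verbose" leaked into the user's workspace:
stopifnot(!exists("verbose", envir = globalenv(), inherits = FALSE))
```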


Duncan Murdoch




Steve E


*Example
> sin.deg <- function(deg) sin(deg * pi/180)
> sin.deg(45)
[1] 0.7071068
# looks about right

> pi <- 3.2   # Indiana General Assembly bill #247, 1897.
> sin.deg(45)
[1] 0.7173561
# oops ...




***
This email and any attachments are confidential. Any use...{{dropped:8}}



Re: [R] Help with tables

2013-06-28 Thread arun
HI,
May be this helps:
dat1 <- read.table(text=
date1 time  date    timeSec topic pupilId correct
02/01/2013 14:58 02/01/2013 140323 fdp.fdp 40 TRUE
02/01/2013 14:59 02/01/2013 140372 fdp.fdp 150 TRUE
03/01/2013 11:23 03/01/2013 213833 fdp.percentage_calc_foundation 15 TRUE
03/01/2013 11:23 03/01/2013 213839 fdp.percentage_calc_foundation 57 TRUE
03/01/2013 11:24 03/01/2013 213845 fdp.percentage_calc_foundation 92 TRUE
03/01/2013 11:24 03/01/2013 213852 fdp.percentage_calc_foundation 65 TRUE
03/01/2013 11:24 03/01/2013 213855 fdp.percentage_calc_foundation 111 TRUE
03/01/2013 11:24 03/01/2013 213860 fdp.percentage_calc_foundation 34 TRUE
03/01/2013 11:24 03/01/2013 213864 fdp.percentage_calc_foundation 109 FALSE
03/01/2013 11:24 03/01/2013 213868 fdp.percentage_calc_foundation 148 FALSE
03/01/2013 11:24 03/01/2013 213877 fdp.percentage_calc_foundation 69 FALSE
03/01/2013 11:24 03/01/2013 213878 fdp.percentage_calc_foundation 61 TRUE
03/01/2013 11:24 03/01/2013 213878 fdp.percentage_calc_foundation 11 TRUE
03/01/2013 11:24 03/01/2013 213879 algebra.core.formulae 134 TRUE
03/01/2013 11:24 03/01/2013 213881 fdp.percentage_calc_foundation 63 TRUE
03/01/2013 11:24 03/01/2013 213886 fdp.percentage_calc_foundation 40 TRUE
03/01/2013 11:24 03/01/2013 213887 algebra.core.formulae 68 TRUE
03/01/2013 11:24 03/01/2013 213898 fdp.percentage_calc_foundation 109 TRUE
03/01/2013 11:24 03/01/2013 213899 algebra.core.formulae 111 TRUE
03/01/2013 11:25 03/01/2013 213901 algebra.core.formulae 101 FALSE
03/01/2013 11:25 03/01/2013 213924 fdp.percentage_calc_foundation 150 TRUE
03/01/2013 11:25 03/01/2013 213958 fdp.percentage_calc_foundation 77 TRUE
03/01/2013 11:25 03/01/2013 213959 fdp.percentage_calc_foundation 134 TRUE
03/01/2013 11:26 03/01/2013 213961 algebra.core.formulae 150 TRUE
03/01/2013 11:26 03/01/2013 214007 algebra.core.formulae 114 TRUE
03/01/2013 11:26 03/01/2013 214008 fdp.percentage_calc_foundation 55 FALSE
03/01/2013 11:26 03/01/2013 214009 fdp.percentage_calc_foundation 67 TRUE
03/01/2013 11:26 03/01/2013 214010 fdp.percentage_calc_foundation 24 TRUE
03/01/2013 11:26 03/01/2013 214014 algebra.core.formulae 114 TRUE
03/01/2013 11:26 03/01/2013 214014 algebra.core.equations 55 TRUE
03/01/2013 11:26 03/01/2013 214015 algebra.core.formulae 97 TRUE
03/01/2013 11:26 03/01/2013 214015 fdp.percentage_calc_foundation 154 FALSE
03/01/2013 11:26 03/01/2013 214017 algebra.core.formulae 21 FALSE
03/01/2013 11:26 03/01/2013 214017 fdp.percentage_calc_foundation 24 TRUE
03/01/2013 11:26 03/01/2013 214019 fdp.percentage_calc_foundation 149 TRUE
03/01/2013 11:26 03/01/2013 214019 fdp.percentage_calc_foundation 119 TRUE
03/01/2013 11:27 03/01/2013 214022 algebra.core.formulae 21 TRUE
03/01/2013 11:27 03/01/2013 214023 algebra.core.formulae 103 TRUE
03/01/2013 11:27 03/01/2013 214023 fdp.percentage_calc_foundation 55 TRUE
03/01/2013 11:27 03/01/2013 214024 fdp.percentage_calc_foundation 24 TRUE
03/01/2013 11:27 03/01/2013 214026 algebra.core.formulae 149 TRUE
03/01/2013 11:27 03/01/2013 214026 fdp.percentage_calc_foundation 154 TRUE
03/01/2013 11:27 03/01/2013 214027 algebra.core.formulae 24 TRUE
03/01/2013 11:27 03/01/2013 214078 algebra.core.equations 67 FALSE
03/01/2013 11:28 03/01/2013 214085 fdp.percentage_calc_foundation 119 TRUE
03/01/2013 11:28 03/01/2013 214085 fdp.percentage_calc_foundation 55 FALSE
03/01/2013 11:28 03/01/2013 214085 fdp.percentage_calc_foundation 149 TRUE
03/01/2013 11:28 03/01/2013 214086 algebra.core.formulae 67 FALSE
03/01/2013 11:29 03/01/2013 214169 algebra.core.formulae 92 TRUE
03/01/2013 11:29 03/01/2013 214172 algebra.core.formulae 15 TRUE
03/01/2013 11:29 03/01/2013 214172 algebra.core.equations 119 TRUE
03/01/2013 11:29 03/01/2013 214173 algebra.core.formulae 46 TRUE
03/01/2013 11:29 03/01/2013 214173 fdp.percentage_calc_foundation 146 TRUE
",sep="",header=TRUE,stringsAsFactors=FALSE)
 dat2 <- data.frame(timestamp=as.POSIXct(paste(dat1[,1],dat1[,2]),
   format="%m/%d/%Y %H:%M"), dat1[,-c(1:2)])

 library(xts)
xt1 <- xts(dat2[,-1],dat2[,1])

library(stringr)

##1st part
 nrow(xt1["2013-03-01 11:15/2013-03-01 11:28"])
#[1] 46

##2nd part
table(xt1["2013-03-01 11:15/2013-03-01 11:28","topic"])
#
#    algebra.core.equations  algebra.core.formulae 
# 2 14 
#fdp.percentage_calc_foundation 
#    30 


###3rd question 

Subxt1 <- xt1["2013-03-01 11:15/2013-03-01 11:28"]
#Based on number of correct responses
vec1 <- sort(with(Subxt1,tapply(as.logical(str_trim(correct)),list(pupilId),sum)))
head(vec1,3)
#101 148  69 
#  0   0   0 
 tail(vec1,3)
# 55 149  24 
#  2   3   4 

#Based on proportion of correct responses
vec2 <- with(Subxt1,tapply(as.logical(str_trim(correct)),list(pupilId),length))
vec2New <- vec2[names(vec1)]
 vec3 <- sort(vec1/vec2New)
 head(vec3,3)
#101 148  69 
#  0   0   0 
 tail(vec3,3)
#150 149  24 
#  1   1   1 

A.K.
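For comparison, the same window selection can be sketched in base R without xts (my addition, not part of A.K.'s reply; it assumes the dat2 data frame built above):

```r
# Keep rows whose timestamp falls in the 11:15-11:28 window on 2013-03-01,
# using plain POSIXct comparisons instead of an xts range string.
win <- dat2$timestamp >= as.POSIXct("2013-03-01 11:15") &
       dat2$timestamp <= as.POSIXct("2013-03-01 11:28")
sum(win)                  # number of responses in the window
table(dat2$topic[win])    # responses per topic within the window
```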



For date 03/01/2013 I need to find how many responses I have between 
11.15 and 

Re: [R] Data Package Query

2013-06-28 Thread Jeff Newmiller
You need to learn to execute one statement at a time in order to debug this 
yourself. Copy and paste is your friend. Hint: I already told you that the data 
function is inappropriate if the data does not come from a package.

You should be learning to use the str(), head(), and ls() functions to explore 
your R in-memory environment, and use the built-in help system with the 
question mark (?str) or the help.search() and RSiteSearch() functions.
---
Jeff NewmillerThe .   .  Go Live...
DCN:jdnew...@dcn.davis.ca.usBasics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
--- 
Sent from my phone. Please excuse my brevity.

Yasmine Refai y_re...@hotmail.com wrote:

hello,
 
please advice what is wrong at the below syntax:
Trial<-read.table("Trial.txt",header=TRUE)
Trial
save.image(file="Trial.RData")
data(Trial)
fit<-logistf(data=Trial, y~x1+x2)

 
and here is the error I get:
Warning message:
In data(Trial) : data set ‘Trial’ not found

 
regards,
yasmine
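For what it's worth, a minimal corrected sketch (my addition; it assumes Trial.txt holds columns y, x1 and x2 and that the logistf package is installed) simply drops the data() call, since data() only loads datasets shipped inside packages:

```r
# Read the data from a local file -- no data() call is needed here,
# because data() only finds datasets bundled with packages.
Trial <- read.table("Trial.txt", header = TRUE)
library(logistf)                       # Firth-penalized logistic regression
fit <- logistf(y ~ x1 + x2, data = Trial)
summary(fit)
```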

 
 Date: Fri, 28 Jun 2013 10:29:21 +1200
 From: rolf.tur...@xtra.co.nz
 To: jdnew...@dcn.davis.ca.us
 CC: y_re...@hotmail.com; r-help@r-project.org
 Subject: Re: [R] Data Package Query
 
 On 28/06/13 04:47, Jeff Newmiller wrote:
 
  SNIP
  A common error by beginners (which may or may not be your problem
in this case) is to create a variable called data. Unfortunately this
hides the function named data and from that time forward that R
session doesn't work when you type example code that uses the data
function.
 
  SNIP
 
 This is simply not true.  I believe it *used* to be true, sometime way back,
 but hasn't been true for years.  The R language is much cleverer now.
 
 If there
 is a function melvin() somewhere on the search path and also a data
object
 melvin (earlier on the search path) then doing
 
  melvin(whatever)
 
 will correctly call the function melvin() with no complaints.  The R 
 language
 can tell by the parentheses that you mean the *function* melvin and

 not the
 data object melvin.
 
 E.g.
 
  data <- 42
  require(akima)
  akima
  Error: object 'akima' not found
  data(akima)  # No error message, nor nothin'!
  akima
  # The data set akima is displayed.
 
 All that being said it is ***BAD PRACTICE***, just in terms of 
 comprehensibility
 and avoiding confusion, to give a data set the same name as a
function
 (either built in, or one of your own).
 
  fortune("dog")
 
 is relevant.
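A minimal illustration of the lookup rule Rolf describes (my sketch, not from the thread):

```r
mean <- 42                     # a data object masking the function base::mean
stopifnot(mean(1:10) == 5.5)   # a call still finds the function:
                               # R skips non-function bindings for calls
stopifnot(mean == 42)          # a plain lookup finds the data object
rm(mean)                       # tidy up: base::mean is unmasked again
```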
 
  cheers,
 
  Rolf Turner
 




Re: [R] How to create a function returning an array ?

2013-06-28 Thread Kaptue Tchuente, Armel
@ Duncan, I have already used the syntax that you proposed before asking for 
help by writing something like 
 read_ts<-function(n,m,img) {
  out<-.Fortran("read_ts",
as.integer(n),
as.integer(m),
img=as.single(rnorm(n*m)))
 return(out$img)
 alpha<-read_ts(n,m)
 dim(alpha)<-c(n*m)
 alpha<-t(alpha)
My worry with this syntax is that (i) the program is not very efficient because 
n and m are very big and these two additional instructions (dim(alpha)<-c(n*m) 
and alpha<-t(alpha)) can be skipped just by directly declaring img as an array 
in Fortran instead of a vector and (ii) the syntax will become more complex 
dealing with a multidimensional array instead of a matrix as in this example.
And this is why I'm looking for the correct instruction to declare img as an 
array instead of a vector.

Armel

-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com] 
Sent: Friday, June 28, 2013 8:16 AM
To: David Winsemius
Cc: Kaptue Tchuente, Armel; r-help@r-project.org
Subject: Re: [R] How to create a function returning an array ?

On 27/06/2013 11:38 PM, David Winsemius wrote:
 On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:

  Hi there,
 
  I would like to know how to change the line 
  img=as.single(rnorm(m))) such that instead of being a vector of 
  length m as it is now, img is an array of dimension c=(n,m,o) for 
  instance
 
  -
  read_ts<-function(n,m,o,img) {
out<-.Fortran("read_ts",
 as.integer(n),
 as.integer(m),
 as.integer(o),
 img=as.single(rnorm(n)))
return(out$img)
  --
 

 Well, assuming that  the 'out$img' object has a R-length of n*m*o , wouldn't 
 it be simpler to just change the return call to:

In fact, out$img has a length of n, same as on input.  .Fortran won't change 
the length of its arguments.

Duncan Murdoch


 return( array( out$img, dim=c(n,m,o) ) )

 I don't think you want to start naming your dimension vectors c.




Re: [R] How to create a function returning an array ?

2013-06-28 Thread Duncan Murdoch

On 28/06/2013 10:18 AM, Kaptue Tchuente, Armel wrote:

@ Duncan, I have already used the syntax that you proposed before asking for 
help by writing something like
 read_ts<-function(n,m,img) {
  out<-.Fortran("read_ts",
as.integer(n),
as.integer(m),
img=as.single(rnorm(n*m)))
 return(out$img)
 alpha<-read_ts(n,m)
 dim(alpha)<-c(n*m)
 alpha<-t(alpha)
My worry with this syntax is that (i) the program is not very efficient because n and 
m are very big and these two additional instructions (dim(alpha)<-c(n*m) and 
alpha<-t(alpha)) can be skipped just by directly declaring img as an array in 
Fortran instead of a vector and (ii) the syntax will become more complex dealing with 
a multidimensional array instead of a matrix as in this example.
And this is why I'm looking for the correct instruction to declare img as an 
array instead of a vector.


There are several typos in your code above, but I think your intention 
is clear.


You can do what you are asking for, but not with .Fortran.  It only 
handles vectors for input and output.  You'll need to use .Call (which 
means writing in C or C++).  If you're familiar with C++, using Rcpp is 
probably the easiest way to do this.  If not, I'd rewrite the Fortran 
code to avoid the need for the transpose at the end, and do


dim(out$img) <- c(n,m)
return(out$img)

within your read_ts function.  I think this is reasonably efficient.

Duncan Murdoch
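The reshaping Duncan suggests can be seen without any Fortran (my sketch; seq_len stands in for out$img):

```r
# dim<- turns a vector into a matrix, reusing the values in column-major
# order, so a Fortran routine that fills the buffer column-by-column
# needs no transpose afterwards.
n <- 3; m <- 4
img <- seq_len(n * m)        # stand-in for out$img
dim(img) <- c(n, m)
stopifnot(img[2, 1] == 2)    # column-major: the 2nd value is row 2, col 1
stopifnot(identical(dim(img), c(3L, 4L)))
```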




Armel

-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
Sent: Friday, June 28, 2013 8:16 AM
To: David Winsemius
Cc: Kaptue Tchuente, Armel; r-help@r-project.org
Subject: Re: [R] How to create a function returning an array ?

On 27/06/2013 11:38 PM, David Winsemius wrote:
 On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:

  Hi there,
 
  I would like to know how to change the line
  img=as.single(rnorm(m))) such that instead of being a vector of
  length m as it is now, img is an array of dimension c=(n,m,o) for
  instance
 
  -
  read_ts-function(n,m,o,img) {
out-.Fortran(read_ts,
 as.integer(n),
 as.integer(m),
 as.integer(o),
 img=as.single(rnorm(n)))
return(out$img)
  --
 

 Well, assuming that  the 'out$img' object has a R-length of n*m*o , wouldn't 
it be simpler to just change the return call to:

In fact, out$img has a length of n, same as on input.  .Fortran won't change 
the length of its arguments.

Duncan Murdoch


  return( array( out$img, dim=c(n,m,o) ) )

  I don't think you want to start naming your dimension vectors c.






Re: [R] How to create a function returning an array ?

2013-06-28 Thread Kaptue Tchuente, Armel
Please could you explain what you mean by several typos in my code?

Anyway, it works very well, and keep in mind that here I just wrote a very 
simple code to illustrate what I really want, since in reality the program 
is more complex than what you see.
Maybe I'm wrong, but I also prefer to call this Fortran function because I 
realized that (i) R is not very efficient for reading and processing large 
data sets and (ii) the speed of execution in Fortran is faster than in C.

Armel

-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com] 
Sent: Friday, June 28, 2013 9:30 AM
To: Kaptue Tchuente, Armel
Cc: r-help@r-project.org
Subject: Re: [R] How to create a function returning an array ?

On 28/06/2013 10:18 AM, Kaptue Tchuente, Armel wrote:
 @ Duncan, I have already used the syntax that you proposed before 
 asking for help by writing something like
  read_ts<-function(n,m,img) {
   out<-.Fortran("read_ts",
 as.integer(n),
 as.integer(m),
 img=as.single(rnorm(n*m)))
  return(out$img)
  alpha<-read_ts(n,m)
  dim(alpha)<-c(n*m)
  alpha<-t(alpha)
 My worry with this syntax is that (i) the program is not very efficient 
 because n and m are very big and these two additional instructions 
 (dim(alpha)<-c(n*m) and alpha<-t(alpha)) can be skipped just by directly 
 declaring img as an array in Fortran instead of a vector and (ii) the syntax 
 will become more complex dealing with a multidimensional array instead of a 
 matrix as in this example.
 And this is why I'm looking for the correct instruction to declare img as an 
 array instead of a vector.

There are several typos in your code above, but I think your intention is clear.

You can do what you are asking for, but not with .Fortran.  It only handles 
vectors for input and output.  You'll need to use .Call (which means writing in 
C or C++).  If you're familiar with C++, using Rcpp is probably the easiest way 
to do this.  If not, I'd rewrite the Fortran code to avoid the need for the 
transpose at the end, and do

dim(out$img) <- c(n,m)
return(out$img)

within your read_ts function.  I think this is reasonably efficient.

Duncan Murdoch



 Armel

 -Original Message-
 From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
 Sent: Friday, June 28, 2013 8:16 AM
 To: David Winsemius
 Cc: Kaptue Tchuente, Armel; r-help@r-project.org
 Subject: Re: [R] How to create a function returning an array ?

 On 27/06/2013 11:38 PM, David Winsemius wrote:
  On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:
 
   Hi there,
  
   I would like to know how to change the line 
   img=as.single(rnorm(m))) such that instead of being a vector of 
   length m as it is now, img is an array of dimension c=(n,m,o) for 
   instance
  
   -
    read_ts<-function(n,m,o,img) {
  out<-.Fortran("read_ts",
  as.integer(n),
  as.integer(m),
  as.integer(o),
  img=as.single(rnorm(n)))
 return(out$img)
   --
  
 
  Well, assuming that  the 'out$img' object has a R-length of n*m*o , 
  wouldn't it be simpler to just change the return call to:

 In fact, out$img has a length of n, same as on input.  .Fortran won't change 
 the length of its arguments.

 Duncan Murdoch

 
   return( array( out$img, dim=c(n,m,o) ) )
  
   I don't think you want to start naming your dimension vectors c.
 




Re: [R] How to create a function returning an array ?

2013-06-28 Thread Duncan Murdoch

On 28/06/2013 10:46 AM, Kaptue Tchuente, Armel wrote:

Please could you explain what you mean by several typos in my code?


1. img is passed as an argument, but never used.
2. There is no closing brace on the function body.
3.  You set the dimension to n*m, when I believe you wanted c(n, m).

Duncan Murdoch
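Putting Duncan's three fixes together, a corrected sketch of the function would look like this (illustrative only: it still needs the compiled Fortran routine read_ts loaded to run):

```r
read_ts <- function(n, m) {            # fix 1: drop the unused img argument
  out <- .Fortran("read_ts",
                  as.integer(n),
                  as.integer(m),
                  img = as.single(rnorm(n * m)))
  dim(out$img) <- c(n, m)              # fix 3: a two-element dim, not n*m
  out$img
}                                      # fix 2: the closing brace
```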



Anyway, it works very well, and keep in mind that here I just wrote a very 
simple code to illustrate what I really want, since in reality the program 
is more complex than what you see.
Maybe I'm wrong, but I also prefer to call this Fortran function because I 
realized that (i) R is not very efficient for reading and processing large 
data sets and (ii) the speed of execution in Fortran is faster than in C.

Armel

-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
Sent: Friday, June 28, 2013 9:30 AM
To: Kaptue Tchuente, Armel
Cc: r-help@r-project.org
Subject: Re: [R] How to create a function returning an array ?

On 28/06/2013 10:18 AM, Kaptue Tchuente, Armel wrote:
 @ Duncan, I have already used the syntax that you proposed before
 asking for help by writing something like
  read_ts<-function(n,m,img) {
   out<-.Fortran("read_ts",
 as.integer(n),
 as.integer(m),
 img=as.single(rnorm(n*m)))
  return(out$img)
  alpha<-read_ts(n,m)
  dim(alpha)<-c(n*m)
  alpha<-t(alpha)
 My worry with this syntax is that (i) the program is not very efficient because n 
and m are very big and these two additional instructions (dim(alpha)<-c(n*m) and 
alpha<-t(alpha)) can be skipped just by directly declaring img as an array in Fortran 
instead of a vector and (ii) the syntax will become more complex dealing with a 
multidimensional array instead of a matrix as in this example.
 And this is why I'm looking for the correct instruction to declare img as an 
array instead of a vector.

There are several typos in your code above, but I think your intention is clear.

You can do what you are asking for, but not with .Fortran.  It only handles 
vectors for input and output.  You'll need to use .Call (which means writing in 
C or C++).  If you're familiar with C++, using Rcpp is probably the easiest way 
to do this.  If not, I'd rewrite the Fortran code to avoid the need for the 
transpose at the end, and do

dim(out$img) <- c(n,m)
return(out$img)

within your read_ts function.  I think this is reasonably efficient.

Duncan Murdoch



 Armel

 -Original Message-
 From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
 Sent: Friday, June 28, 2013 8:16 AM
 To: David Winsemius
 Cc: Kaptue Tchuente, Armel; r-help@r-project.org
 Subject: Re: [R] How to create a function returning an array ?

 On 27/06/2013 11:38 PM, David Winsemius wrote:
  On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:
 
   Hi there,
  
   I would like to know how to change the line
   img=as.single(rnorm(m))) such that instead of being a vector of
   length m as it is now, img is an array of dimension c=(n,m,o) for
   instance
  
   -
    read_ts<-function(n,m,o,img) {
  out<-.Fortran("read_ts",
  as.integer(n),
  as.integer(m),
  as.integer(o),
  img=as.single(rnorm(n)))
 return(out$img)
   --
  
 
  Well, assuming that  the 'out$img' object has a R-length of n*m*o , 
wouldn't it be simpler to just change the return call to:

 In fact, out$img has a length of n, same as on input.  .Fortran won't change 
the length of its arguments.

 Duncan Murdoch

 
   return( array( out$img, dim=c(n,m,o) ) )
  
   I don't think you want to start naming your dimension vectors c.
 






Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread John Fox
Dear Duncan and Steve,

Since Steve's example raises it, I've never understood why it's legal to
change the built-in global constants in R, including T and F. That just
seems to me to set a trap for users. Why not treat these as reserved
symbols, like TRUE, Inf, etc.?

Best,
 John

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
 project.org] On Behalf Of Duncan Murdoch
 Sent: Friday, June 28, 2013 9:40 AM
 To: S Ellison
 Cc: r-help@r-project.org
 Subject: Re: [R] Lexical scoping is not what I expect
 
 On 28/06/2013 9:28 AM, S Ellison wrote:
 
I too find R's lexical scoping rules straightforward.
However, I'd say that if your code relies on lexical
scoping to find something, you should probably rewrite your code.
  
   Except of course that almost every function relies on lexical
   scoping to some extent!
 
  This could get messy, because a) that's true and b) it actually leads
 to some genuine risks when 'globals' get redefined or masked*.
 
  How about I amend the assertion to if your code relies on lexical
 scoping to find a variable you defined, you should probably rewrite
 your code.
  and leave it at that, subject to some common sense about whether you
 know what you're doing?
 
 That still isn't right, because users should feel free to define
 functions and call them from their other functions.
 
 I think who defined it isn't the issue, the issue is whether it might
 change unexpectedly.  The user owns globalenv().  The package author
 owns the package namespace.  So packages should almost never read or
 write things directly from/to globalenv() (the user might change them),
 but they can create their own private environments and write there.
 
 Where it gets a little less clear is when the user writes a function.
 I
 would say functions should never write directly to globalenv(), but
 it's
 perfectly fine to reference constants there (like other functions
 written by the user).  Referencing things there that change is the
 risky
 thing.
 
 Duncan Murdoch
 
 
 
  Steve E
 
 
  *Example
   sin.deg <- function(deg) sin(deg * pi/180)
   sin.deg(45)
  [1] 0.7071068
  #looks about right
 
   pi <- 3.2   #Indiana General Assembly bill #247, 1897.
   sin.deg(45)
  [1] 0.7173561
  #oops ...
 
 
 
 
 



Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread Duncan Murdoch

On 28/06/2013 10:54 AM, John Fox wrote:

Dear Duncan and Steve,

Since Steve's example raises it, I've never understood why it's legal to
change the built-in global constants in R, including T and F. That just
seems to me to set a trap for users. Why not treat these as reserved
symbols, like TRUE, Inf, etc.?


If we weren't allowed to change T and F, would we be allowed to change 
other constants, like functions mean or max? If we couldn't change 
anything defined in the base package, would we be allowed to change 
things defined in other packages like stats or utils or graphics?  I 
think it's simply a matter of setting the line somewhere, and R has 
chosen to set it in the most permissive place that's reasonable.  It 
assumes that its users know what they are doing.


Why not allow changes to TRUE or Inf?   TRUE is a constant, whereas T is 
a variable containing TRUE.  Inf is also a constant, corresponding to a 
length one vector containing that value.  Those are treated by the 
parser like 2 or hello.  It would be really bad if someone could 
change the meaning of 2 (though I hear some old Fortran compilers 
allowed that), but is it really so bad to allow someone to define their 
own plot function, or temperature variable named T?


Duncan Murdoch
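The difference Duncan describes is easy to demonstrate at the prompt (my sketch, not from the thread):

```r
T <- 0                    # legal: T is an ordinary variable whose
                          # default value happens to be TRUE
stopifnot(!isTRUE(T))     # the workspace binding masks base::T
# TRUE <- 0               # would be a parse error: TRUE is reserved
rm(T)                     # removing the mask restores base::T
stopifnot(isTRUE(T))
```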


Best,
  John

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
 project.org] On Behalf Of Duncan Murdoch
 Sent: Friday, June 28, 2013 9:40 AM
 To: S Ellison
 Cc: r-help@r-project.org
 Subject: Re: [R] Lexical scoping is not what I expect

 On 28/06/2013 9:28 AM, S Ellison wrote:
 
I too find R's lexical scoping rules straightforward.
However, I'd say that if your code relies on lexical
scoping to find something, you should probably rewrite your code.
  
   Except of course that almost every function relies on lexical
   scoping to some extent!
 
  This could get messy, because a) that's true and b) it actually leads
 to some genuine risks when 'globals' get redefined or masked*.
 
  How about I amend the assertion to if your code relies on lexical
 scoping to find a variable you defined, you should probably rewrite
 your code.
  and leave it at that, subject to some common sense about whether you
 know what you're doing?

 That still isn't right, because users should feel free to define
 functions and call them from their other functions.

 I think who defined it isn't the issue, the issue is whether it might
 change unexpectedly.  The user owns globalenv().  The package author
 owns the package namespace.  So packages should almost never read or
 write things directly from/to globalenv() (the user might change them),
 but they can create their own private environments and write there.

 Where it gets a little less clear is when the user writes a function.
 I
 would say functions should never write directly to globalenv(), but
 it's
 perfectly fine to reference constants there (like other functions
 written by the user).  Referencing things there that change is the
 risky
 thing.

 Duncan Murdoch


 
  Steve E
 
 
  *Example
   sin.deg <- function(deg) sin(deg * pi/180)
   sin.deg(45)
  [1] 0.7071068
#looks about right
 
   pi <- 3.2   #Indiana General Assembly bill #247, 1897.
   sin.deg(45)
  [1] 0.7173561
#oops ...
 
 
 
 
 





Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread Prof Brian Ripley

On 28/06/2013 15:54, John Fox wrote:

Dear Duncan and Steve,

Since Steve's example raises it, I've never understood why it's legal to
change the built-in global constants in R, including T and F. That just
seems to me to set a trap for users. Why not treat these as reserved
symbols, like TRUE, Inf, etc.?


Because people wanted to use them as names of things.  Maybe a T (not t) 
statistic.


And BTW, you can change the value of T for yourself, but you will not 
change it for any package code, including R itself, since the base 
namespace is ahead of the workspace in namespace scoping.  Because of 
that, it is years since I have seen anyone actually trip themselves up 
over this.




Best,
  John


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
project.org] On Behalf Of Duncan Murdoch
Sent: Friday, June 28, 2013 9:40 AM
To: S Ellison
Cc: r-help@r-project.org
Subject: Re: [R] Lexical scoping is not what I expect

On 28/06/2013 9:28 AM, S Ellison wrote:



I too find R's lexical scoping rules straightforward.
However, I'd say that if your code relies on lexical
scoping to find something, you should probably rewrite your code.


Except of course that almost every function relies on lexical
scoping to some extent!


This could get messy, because a) that's true and b) it actually leads

to some genuine risks when 'globals' get redefined or masked*.


How about I amend the assertion to if your code relies on lexical

scoping to find a variable you defined, you should probably rewrite
your code.

and leave it at that, subject to some common sense about whether you

know what you're doing?

That still isn't right, because users should feel free to define
functions and call them from their other functions.

I think who defined it isn't the issue, the issue is whether it might
change unexpectedly.  The user owns globalenv().  The package author
owns the package namespace.  So packages should almost never read or
write things directly from/to globalenv() (the user might change them),
but they can create their own private environments and write there.

Where it gets a little less clear is when the user writes a function.
I
would say functions should never write directly to globalenv(), but
it's
perfectly fine to reference constants there (like other functions
written by the user).  Referencing things there that change is the
risky
thing.

Duncan Murdoch




Steve E


*Example

sin.deg <- function(deg) sin(deg * pi/180)
sin.deg(45)

[1] 0.7071068
#looks about right


pi <- 3.2   #Indiana General Assembly bill #247, 1897.
sin.deg(45)

[1] 0.7173561
#oops ...












--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] How to create a function returning an array ?

2013-06-28 Thread Kaptue Tchuente, Armel
Your comments #2 and #3 are correct.
The closing brace is missing; it normally goes at the end of the program, after 
the return instruction.
For comment #1, about the declaration of img, I don't know if there is 
another way to import data from Fortran into R.

Armel

-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com] 
Sent: Friday, June 28, 2013 9:53 AM
To: Kaptue Tchuente, Armel
Cc: r-help@r-project.org
Subject: Re: [R] How to create a function returning an array ?

On 28/06/2013 10:46 AM, Kaptue Tchuente, Armel wrote:
 Please could you explain what you mean by several typos in my code?

1. img is passed as an argument, but never used.
2. There is no closing brace on the function body.
3.  You set the dimension to n*m, when I believe you wanted c(n, m).

Duncan Murdoch


 Anyway, it works very well, and keep in mind that here I just wrote a very 
 simple code to illustrate what I really want, since in reality the 
 program is more complex than what you see.
 Maybe I'm wrong, but I also prefer to call this Fortran function because I 
 realized that (i) R is not very efficient for reading and processing large 
 data sets and (ii) the speed of execution in Fortran is faster than in C.

 Armel

 -Original Message-
 From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
 Sent: Friday, June 28, 2013 9:30 AM
 To: Kaptue Tchuente, Armel
 Cc: r-help@r-project.org
 Subject: Re: [R] How to create a function returning an array ?

 On 28/06/2013 10:18 AM, Kaptue Tchuente, Armel wrote:
  @ Duncan, I have already used the syntax that you proposed before 
  asking for help by writing something like
   read_ts<-function(n,m,img) {
out<-.Fortran("read_ts",
  as.integer(n),
  as.integer(m),
  img=as.single(rnorm(n*m)))
   return(out$img)
   alpha<-read_ts(n,m)
   dim(alpha)<-c(n*m)
   alpha<-t(alpha)
  My worry with this syntax is that (i) the program is not very efficient 
  because n and m are very big and these two additional instructions 
  (dim(alpha)<-c(n*m) and alpha<-t(alpha)) can be skipped just by directly 
  declaring img as an array in Fortran instead of a vector and (ii) the 
  syntax will become more complex dealing with a multidimensional array 
  instead of a matrix as in this example.
  And this is why I'm looking for the correct instruction to declare img as 
  an array instead of a vector.

 There are several typos in your code above, but I think your intention is 
 clear.

 You can do what you are asking for, but not with .Fortran.  It only 
 handles vectors for input and output.  You'll need to use .Call (which 
 means writing in C or C++).  If you're familiar with C++, using Rcpp 
 is probably the easiest way to do this.  If not, I'd rewrite the 
 Fortran code to avoid the need for the transpose at the end, and do

 dim(out$img) <- c(n,m)
 return(out$img)

 within your read_ts function.  I think this is reasonably efficient.
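As a quick standalone illustration of why the dim() approach works (toy values, not the poster's Fortran output): R stores matrices column-major, so assigning to dim() reinterprets the vector in place, and no transpose is needed if the Fortran routine writes the data column by column.

```r
out_img <- 1:6            # stand-in for the vector returned by .Fortran
dim(out_img) <- c(2, 3)   # reinterpret as 2x3, filled column-major
out_img
#      [,1] [,2] [,3]
# [1,]    1    3    5
# [2,]    2    4    6
```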

 Duncan Murdoch


 
  Armel
 
  -Original Message-
  From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
  Sent: Friday, June 28, 2013 8:16 AM
  To: David Winsemius
  Cc: Kaptue Tchuente, Armel; r-help@r-project.org
  Subject: Re: [R] How to create a function returning an array ?
 
  On 27/06/2013 11:38 PM, David Winsemius wrote:
   On Jun 27, 2013, at 8:04 PM, Kaptue Tchuente, Armel wrote:
  
Hi there,
   
I would like to know how to change the line 
img=as.single(rnorm(m))) such that instead of being a vector 
of length m as it is now, img is an array of dimension c=(n,m,o) 
for instance
   
-
 read_ts<-function(n,m,o,img) {
   out<-.Fortran("read_ts",
                 as.integer(n),
                 as.integer(m),
                 as.integer(o),
                 img=as.single(rnorm(n)))
   return(out$img)
--
   
  
    Well, assuming that the 'out$img' object has an R-length of n*m*o , 
    wouldn't it be simpler to just change the return call to:
 
  In fact, out$img has a length of n, same as on input.  .Fortran won't 
  change the length of its arguments.
 
  Duncan Murdoch
 
  
    return( array( out$img, dim=c(n,m,o) ) )
  
    I don't think you want to start naming your dimension vectors c.
  
 


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread John Fox
Dear Duncan,

I think that it's reasonable to make a distinction between symbols
representing functions and those representing built-in constants, even if,
as in the case of pi and T (and letters, etc.), the latter are technically
implemented as variables. 

Although it may be natural to use the variable T for temperature or F for an
F-value (or pi for what?), nothing fundamental is gained by the ability to
do so (one could use temp or fvalue or Pi), while in the case of a function,
one can consciously mask, e.g., the standard mean(). I'm not sure that it's
generally wise for users to redefine functions like mean(), but I can see a
use for the ability to do that.

Please don't feel the need to reply if you don't wish to -- this isn't in my
opinion an important issue.

Best,
 John

 -Original Message-
 From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
 Sent: Friday, June 28, 2013 11:07 AM
 To: John Fox
 Cc: 'S Ellison'; r-help@r-project.org
 Subject: Re: [R] Lexical scoping is not what I expect
 
 On 28/06/2013 10:54 AM, John Fox wrote:
  Dear Duncan and Steve,
 
  Since Steve's example raises it, I've never understood why it's legal
 to
  change the built-in global constants in R, including T and F. That
 just
  seems to me to set a trap for users. Why not treat these as reserved
  symbols, like TRUE, Inf, etc.?
 
 If we weren't allowed to change T and F, would we be allowed to change
 other constants, like functions mean or max? If we couldn't change
 anything defined in the base package, would we be allowed to change
 things defined in other packages like stats or utils or graphics?  I
 think it's simply a matter of setting the line somewhere, and R has
 chosen to set it in the most permissive place that's reasonable.  It
 assumes that its users know what they are doing.
 
 Why not allow changes to TRUE or Inf?   TRUE is a constant, whereas T
 is
 a variable containing TRUE.  Inf is also a constant, corresponding to a
 length one vector containing that value.  Those are treated by the
  parser like 2 or "hello".  It would be really bad if someone could
 change the meaning of 2 (though I hear some old Fortran compilers
 allowed that), but is it really so bad to allow someone to define their
 own plot function, or temperature variable named T?
 
 Duncan Murdoch
 
  Best,
John
 
   -Original Message-
   From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
   project.org] On Behalf Of Duncan Murdoch
   Sent: Friday, June 28, 2013 9:40 AM
   To: S Ellison
   Cc: r-help@r-project.org
   Subject: Re: [R] Lexical scoping is not what I expect
  
   On 28/06/2013 9:28 AM, S Ellison wrote:
   
  I too find R's lexical scoping rules straightforward.
  However, I'd say that if your code relies on lexical
  scoping to find something, you should probably rewrite your
 code.

 Except of course that almost every function relies on lexical
 scoping to some extent!
   
This could get messy, because a) that's true and b) it actually
 leads
   to some genuine risks when 'globals' get redefined or masked*.
   
How about I amend the assertion to if your code relies on
 lexical
   scoping to find a variable you defined, you should probably rewrite
   your code.
and leave it at that, subject to some common sense about whether
 you
   know what you're doing?
  
   That still isn't right, because users should feel free to define
   functions and call them from their other functions.
  
   I think who defined it isn't the issue, the issue is whether it
 might
   change unexpectedly.  The user owns globalenv().  The package
 author
   owns the package namespace.  So packages should almost never read
 or
   write things directly from/to globalenv() (the user might change
 them),
   but they can create their own private environments and write there.
  
   Where it gets a little less clear is when the user writes a
 function.
   I
   would say functions should never write directly to globalenv(), but
   it's
   perfectly fine to reference constants there (like other functions
   written by the user).  Referencing things there that change is the
   risky
   thing.
  
   Duncan Murdoch
  
  
   
Steve E
   
   
*Example
 sin.deg  - function(deg) sin(deg * pi/180)
 sin.deg(45)
[1] 0.7071068
#looks about right
   
 pi - 3.2   #Indiana General Assembly bill #247, 1897.
 sin.deg(45)
[1] 0.7173561
#oops ...
   
   
   
   
   
   
  
   

[R] use of formula in survey analysis with replicated weights

2013-06-28 Thread LE TERTRE Alain
Hi there,
I would like to use a formula inside a call to withReplicates in a survey 
analysis.
If the initial call with formula expressed inside the function works as 
expected, defining the formula outside gives an error message.
See example below, adapted from survey:withReplicates help page.

library(survey)
library(quantreg)

data(api)
## one-stage cluster sample
dclus1<-svydesign( id=~dnum, weights=~pw, data=apiclus1, fpc=~fpc)
## convert to bootstrap
bclus1<-as.svrepdesign( dclus1, type="bootstrap", replicates=100)

## median regression
withReplicates( bclus1, quote( coef( rq( api00~api99, tau=0.5, 
weights=.weights))))
   theta SE
(Intercept) 87.78505 18.850
api990.91589  0.028

# Defining formula outside
Myformula <- as.formula("api00~api99")
# Rerun the same analysis
withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
weights=.weights))))
Erreur dans eval(expr, envir, enclos) : objet 'api00' introuvable

# I suspect the evaluation not done in the right environment. 
#If you specify with data option in rq, the initial dataframe, formula is then 
correctly evaluated but .weights are not found.

withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
weights=.weights, data=apiclus1))))
Erreur dans eval(expr, envir, enclos) : objet '.weights' introuvable

Any help greatly appreciated

O__  Alain Le Tertre
 c/ /'_ --- Institut de Veille Sanitaire (InVS)/ Département Santé Environnement
(*) \(*) -- Responsable de l'unité Statistiques & Outils
~~ - 12 rue du val d'Osne
94415 Saint Maurice cedex FRANCE
Voice: 33 1 41 79 67 62 Fax: 33 1 41 79 67 68
email: a.leter...@invs.sante.fr
 



Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread John Fox
Dear Brian,

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
 project.org] On Behalf Of Prof Brian Ripley
 Sent: Friday, June 28, 2013 11:16 AM
 To: r-help@r-project.org
 Subject: Re: [R] Lexical scoping is not what I expect
 
 On 28/06/2013 15:54, John Fox wrote:
  Dear Duncan and Steve,
 
  Since Steve's example raises it, I've never understood why it's legal
 to
  change the built-in global constants in R, including T and F. That
 just
  seems to me to set a trap for users. Why not treat these as reserved
  symbols, like TRUE, Inf, etc.?
 
 Because people wanted to use them as names of things.  Maybe a T (not
 t)
 statistic.
 
 And BTW, you can change the value of T for yourself, but you will not
 change it for any package code, including R itself, since the base
 namespace is ahead of the workspace in namespace scoping.  Because of
 that, it is years since I have seen anyone actually trip themselves up
 over this.

I was unaware that the base namespace is ahead of the workspace in this
context, and that effectively answers my objection.

Thanks,
 John

 
 
  Best,
John
 
  -Original Message-
  From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
  project.org] On Behalf Of Duncan Murdoch
  Sent: Friday, June 28, 2013 9:40 AM
  To: S Ellison
  Cc: r-help@r-project.org
  Subject: Re: [R] Lexical scoping is not what I expect
 
  On 28/06/2013 9:28 AM, S Ellison wrote:
 
  I too find R's lexical scoping rules straightforward.
  However, I'd say that if your code relies on lexical
  scoping to find something, you should probably rewrite your code.
 
  Except of course that almost every function relies on lexical
  scoping to some extent!
 
  This could get messy, because a) that's true and b) it actually
 leads
  to some genuine risks when 'globals' get redefined or masked*.
 
  How about I amend the assertion to if your code relies on lexical
  scoping to find a variable you defined, you should probably rewrite
  your code.
  and leave it at that, subject to some common sense about whether
 you
  know what you're doing?
 
  That still isn't right, because users should feel free to define
  functions and call them from their other functions.
 
  I think who defined it isn't the issue, the issue is whether it
 might
  change unexpectedly.  The user owns globalenv().  The package author
  owns the package namespace.  So packages should almost never read or
  write things directly from/to globalenv() (the user might change
 them),
  but they can create their own private environments and write there.
 
  Where it gets a little less clear is when the user writes a
 function.
  I
  would say functions should never write directly to globalenv(), but
  it's
  perfectly fine to reference constants there (like other functions
  written by the user).  Referencing things there that change is the
  risky
  thing.
 
  Duncan Murdoch
 
 
 
  Steve E
 
 
  *Example
  sin.deg  - function(deg) sin(deg * pi/180)
  sin.deg(45)
  [1] 0.7071068
#looks about right
 
  pi - 3.2   #Indiana General Assembly bill #247, 1897.
  sin.deg(45)
  [1] 0.7173561
#oops ...
 
 
 
 
 
 
 
 
 --
 Brian D. Ripley,  rip...@stats.ox.ac.uk
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595
 

[R] R Benchmarks

2013-06-28 Thread ivo welch
Dear R group:

I just bought a Haswell i7-4770k machine, so I went through the
trouble of creating a small website with some comparative benchmarks.
I also made it easy for others to contribute benchmarks.  if you are
interested, it's all at http://R.ivo-welch.info/ .  enjoy.

/iaw

Ivo Welch (ivo.we...@gmail.com)



Re: [R] use of formula in survey analysis with replicated weights

2013-06-28 Thread Milan Bouchet-Valat
On Friday, June 28, 2013 at 17:44 +0200, LE TERTRE Alain wrote:
 Hi there,
 I would like to use a formula inside a call to withReplicates in a survey 
 analysis.
 If the initial call with formula expressed inside the function works as 
 expected, defining the formula outside gives an error message.
 See example below, adapted from survey:withReplicates help page.
 
 library(survey)
 library(quantreg)
 
 data(api)
 ## one-stage cluster sample
 dclus1<-svydesign( id=~dnum, weights=~pw, data=apiclus1, fpc=~fpc)
 ## convert to bootstrap
 bclus1<-as.svrepdesign( dclus1, type="bootstrap", replicates=100)
 
 ## median regression
 withReplicates( bclus1, quote( coef( rq( api00~api99, tau=0.5, 
 weights=.weights))))
theta SE
 (Intercept) 87.78505 18.850
 api990.91589  0.028
 
 # Defining formula outside
 Myformula <- as.formula("api00~api99")
 # Rerun the same analysis
 withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
 weights=.weights))))
 Erreur dans eval(expr, envir, enclos) : objet 'api00' introuvable
 
 # I suspect the evaluation not done in the right environment. 
 #If you specify with data option in rq, the initial dataframe, formula is 
 then correctly evaluated but .weights are not found.
 
 withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
 weights=.weights, data=apiclus1))))
 Erreur dans eval(expr, envir, enclos) : objet '.weights' introuvable
 
 Any help greatly appreciated
Here is a workaround:
Myformula <- api00 ~ api99

withReplicates(bclus1, quote(coef(rq(formula(Myformula), tau=0.5, 
weights=.weights))))

This solution makes sure the formula uses the environment where the
weights are available. If you call as.formula() from outside the
function, it will use the global environment. If you pass a character
string, it will be converted to a formula object deep in a function and
will thus use an environment where the weights are not available
either.

Note that the same problem happens when using lm().
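The environment capture described above can be seen in a small standalone sketch (hypothetical names, unrelated to the survey example):

```r
# A formula records the environment in which it is created;
# model-fitting functions later look up variables there.
f_global <- as.formula("y ~ x")     # created at top level

make_formula <- function() {
  as.formula("y ~ x")               # created inside a function frame
}
f_local <- make_formula()

identical(environment(f_global), globalenv())  # TRUE (at top level)
identical(environment(f_local),  globalenv())  # FALSE
```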


Regards

 O__  Alain Le Tertre
  c/ /'_ --- Institut de Veille Sanitaire (InVS)/ Département Santé 
 Environnement
 (*) \(*) -- Responsable de l'unité Statistiques & Outils
 ~~ - 12 rue du val d'Osne
 94415 Saint Maurice cedex FRANCE
 Voice: 33 1 41 79 67 62 Fax: 33 1 41 79 67 68
 email: a.leter...@invs.sante.fr
  
 


Re: [R] Changing legend to fill colour in ggplot

2013-06-28 Thread Brian Diggs
On 6/27/2013 12:34 AM, Suparna Mitra wrote:
 Hello R experts,
I am having a problem to edit legend in ggplot using four variables.

 My data structure is :
 str(df)
 'data.frame': 10 obs. of  6 variables:
   $ id: Factor w/ 2 levels "639A","640": 1 1 1 1 1 2
 2 2 2 2
   $ species   : Factor w/ 5 levels "acinetobacter_sp",..: 2
 5 1 4 3 2 5 1 4 3
   $ genome_coverage_bp: int  8196 3405 8625 22568 2128 6100 1841
 3914 8487 1064
   $ genome_length : int  3571237 2541445 3912725 3479613 5460977
 3571237 2541445 3912725 3479613 5460977
   $ genome_coverage_percentage: Factor w/ 10 levels "0.02%","0.04%",..: 8 5
 7 10 2 6 3 4 9 1
   $ avg_depth_coverage: num  121.96 2.81 19.84 399.63 1.64 ...


 Now what I did is
 p=ggplot(df,aes(genome_coverage_percentage,avg_depth_coverage))+geom_point(aes(colour
 = species,shape = factor(id)))
 p+scale_shape_discrete(name = "", labels=c("Patient 1", "Patient 2"))
 That creats the plot below.
 But I want to change the circles in the legend to filled colour, so that it
 doesn't look like everything comes only from Patient 1, as that group also
 has a circle. Can anybody help me please?

You can change the default aesthetics displayed in the legend using the 
override.aes argument to guide_legend.

scale_colour_discrete(guide=guide_legend(override.aes=aes(shape=15)))
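A minimal self-contained sketch with hypothetical toy data (not the poster's df); note that ?guide_legend documents override.aes as taking a list:

```r
library(ggplot2)

# Toy data standing in for the poster's data frame
d <- data.frame(x = 1:4, y = 1:4,
                species = c("a", "a", "b", "b"),
                id = factor(c(1, 2, 1, 2)))

p <- ggplot(d, aes(x, y)) +
  geom_point(aes(colour = species, shape = id)) +
  scale_shape_discrete(name = "", labels = c("Patient 1", "Patient 2")) +
  # draw the colour legend keys as filled squares (shape 15)
  scale_colour_discrete(guide = guide_legend(override.aes = list(shape = 15)))
```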

Also, when giving data, especially a data set this small, give the 
output of dput(df) as that gives the complete data in a format that can 
be used to recreate it exactly in someone else's session. If I had that, 
I would test this to make sure it looks right.

 Thanks a lot in advance :)
 Mitra





-- 
Brian S. Diggs, PhD
Senior Research Associate, Department of Surgery
Oregon Health & Science University



[R] RE : use of formula in survey analysis with replicated weights

2013-06-28 Thread LE TERTRE Alain
Thanks for the workaround and the explanation

Alain

From: Milan Bouchet-Valat [nalimi...@club.fr]
Sent: Friday, June 28, 2013 19:20
To: LE TERTRE Alain
Cc: 'R-help@r-project.org'
Subject: Re: [R] use of formula in survey analysis with replicated weights

On Friday, June 28, 2013 at 17:44 +0200, LE TERTRE Alain wrote:
 Hi there,
 I would like to use a formula inside a call to withReplicates in a survey 
 analysis.
 If the initial call with formula expressed inside the function works as 
 expected, defining the formula outside gives an error message.
 See example below, adapted from survey:withReplicates help page.

 library(survey)
 library(quantreg)

 data(api)
 ## one-stage cluster sample
 dclus1<-svydesign( id=~dnum, weights=~pw, data=apiclus1, fpc=~fpc)
 ## convert to bootstrap
 bclus1<-as.svrepdesign( dclus1, type="bootstrap", replicates=100)

 ## median regression
 withReplicates( bclus1, quote( coef( rq( api00~api99, tau=0.5, 
 weights=.weights))))
theta SE
 (Intercept) 87.78505 18.850
 api990.91589  0.028

 # Defining formula outside
 Myformula <- as.formula("api00~api99")
 # Rerun the same analysis
 withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
 weights=.weights))))
 Erreur dans eval(expr, envir, enclos) : objet 'api00' introuvable

 # I suspect the evaluation not done in the right environment.
 #If you specify with data option in rq, the initial dataframe, formula is 
 then correctly evaluated but .weights are not found.

 withReplicates( bclus1, quote( coef( rq( formula= Myformula, tau=0.5, 
 weights=.weights, data=apiclus1))))
 Erreur dans eval(expr, envir, enclos) : objet '.weights' introuvable

 Any help greatly appreciated
Here is a workaround:
Myformula <- api00 ~ api99

withReplicates(bclus1, quote(coef(rq(formula(Myformula), tau=0.5, 
weights=.weights))))

This solution makes sure the formula uses the environment where the
weights are available. If you call as.formula() from outside the
function, it will use the global environment. If you pass a character
string, it will be converted to a formula object deep in a function and
will thus use an environment where the weights are not available
either.

Note that the same problem happens when using lm().


Regards

 O__  Alain Le Tertre
  c/ /'_ --- Institut de Veille Sanitaire (InVS)/ Département Santé 
 Environnement
 (*) \(*) -- Responsable de l'unité Statistiques & Outils
 ~~ - 12 rue du val d'Osne
 94415 Saint Maurice cedex FRANCE
 Voice: 33 1 41 79 67 62 Fax: 33 1 41 79 67 68
 email: a.leter...@invs.sante.fr




[R] Transforming boolean subsetting into index subsetting

2013-06-28 Thread Julio Sergio
One of the techniques to subset, a vector for instance, is the following:

  V <- c(18, 8, 5, 41, 8, 7)
  V
  ## [1] 18  8  5 41  8  7
  ( I <- abs(V - 9) <= 3 )
  ## [1] FALSE  TRUE FALSE FALSE  TRUE  TRUE
  V[I]
  ## [1] 8 8 7

However, sometimes we are interested in the indexes of the elements where the 
condition holds. Is there an easy way to transform the I vector into an 
indexes vector similar to:  I == c(2,5,6) ?

I know that I can traverse the I vector with a for() loop collecting the 
indexes, I just wonder if such an operation can be avoided.

Thanks,

  -Sergio.



Re: [R] Transforming boolean subsetting into index subsetting

2013-06-28 Thread peter dalgaard

On Jun 28, 2013, at 23:58 , Julio Sergio wrote:

 One of the techniques to subset, a vector for instance, is the following:
 
  V - c(18, 8, 5, 41, 8, 7)
  V
  ## [1] 18  8  5 41  8  7
  ( I - abs(V - 9) = 3 )
  ## [1] FALSE  TRUE FALSE FALSE  TRUE  TRUE
  V[I]
  ## [1] 8 8 7
 
 However, sometimes we are interested in the indexes of the elements where the 
 condition holds. Is there an easy way to transform the I vector into an 
 indexes vector similar to:  I == c(2,5,6) ?
 
 I know that I can traverse the I vector with a for() loop collecting the 
 indexes, I just wonder if such an operation can be avoided.
 

which(I)

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com



Re: [R] Transforming boolean subsetting into index subsetting

2013-06-28 Thread Bert Gunter
?which

-- Bert

On Fri, Jun 28, 2013 at 2:58 PM, Julio Sergio julioser...@gmail.com wrote:
 One of the techniques to subset, a vector for instance, is the following:

   V <- c(18, 8, 5, 41, 8, 7)
   V
   ## [1] 18  8  5 41  8  7
   ( I <- abs(V - 9) <= 3 )
   ## [1] FALSE  TRUE FALSE FALSE  TRUE  TRUE
   V[I]
   ## [1] 8 8 7

 However, sometimes we are interested in the indexes of the elements where the
 condition holds. Is there an easy way to transform the I vector into an
 indexes vector similar to:  I == c(2,5,6) ?

 I know that I can traverse the I vector with a for() loop collecting the
 indexes, I just wonder if such an operation can be avoided.

 Thanks,

   -Sergio.




-- 

Bert Gunter
Genentech Nonclinical Biostatistics

Internal Contact Info:
Phone: 467-7374
Website:
http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm



Re: [R] Transforming boolean subsetting into index subsetting

2013-06-28 Thread William Dunlap
And there is no real magic in which().  You already know that
   V[I]
where 'I' is a logical vector the length of 'V' extracts the values
in 'V' corresponding to the TRUE's in 'I'.  Replace 'V' with the vector
made by seq_along(I), (== 1, 2, ..., length(I)), and you have the
essentials of which(I).

which(I) does add an extra twist - it considers missing values in I to be the
same as FALSE's:
   seq_along(I)[ !is.na(I) & I ]
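That equivalence is easy to check in a standalone sketch (reusing the vector from the original post):

```r
V <- c(18, 8, 5, 41, 8, 7)
I <- abs(V - 9) <= 3
which(I)                       # 2 5 6
seq_along(I)[I]                # 2 5 6 (same here: I has no NAs)

# With an NA, which() drops it, matching the !is.na(I) & I form:
J <- c(TRUE, NA, FALSE, TRUE)
which(J)                       # 1 4
seq_along(J)[!is.na(J) & J]    # 1 4
```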

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com

 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
 Behalf
 Of Bert Gunter
 Sent: Friday, June 28, 2013 5:01 PM
 To: Julio Sergio
 Cc: r-h...@stat.math.ethz.ch
 Subject: Re: [R] Transforming boolean subsetting into index subsetting
 
 ?which
 
 -- Bert
 
 On Fri, Jun 28, 2013 at 2:58 PM, Julio Sergio julioser...@gmail.com wrote:
  One of the techniques to subset, a vector for instance, is the following:
 
 V <- c(18, 8, 5, 41, 8, 7)
 V
 ## [1] 18  8  5 41  8  7
 ( I <- abs(V - 9) <= 3 )
## [1] FALSE  TRUE FALSE FALSE  TRUE  TRUE
V[I]
## [1] 8 8 7
 
  However, sometimes we are interested in the indexes of the elements where 
  the
  condition holds. Is there an easy way to transform the I vector into an
  indexes vector similar to:  I == c(2,5,6) ?
 
  I know that I can traverse the I vector with a for() loop collecting the
  indexes, I just wonder if such an operation can be avoided.
 
  Thanks,
 
-Sergio.
 
 
 
 
 --
 
 Bert Gunter
 Genentech Nonclinical Biostatistics
 
 Internal Contact Info:
 Phone: 467-7374
 Website:
 http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-
 biostatistics/pdb-ncb-home.htm
 


Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread Rolf Turner

On 29/06/13 02:54, John Fox wrote:

Dear Duncan and Steve,

Since Steve's example raises it, I've never understood why it's legal to
change the built-in global constants in R, including T and F. That just
seems to me to set a trap for users. Why not treat these as reserved
symbols, like TRUE, Inf, etc.?


I rather enjoy being able to set

pi <- 3

:-)

cheers,

Rolf Turner



Re: [R] Lexical scoping is not what I expect

2013-06-28 Thread Yihui Xie
I just realized this was also possible:

> assign('TRUE', FALSE)
> TRUE
[1] TRUE
> get('TRUE')
[1] FALSE

but it is probably a different story.

Regards,
Yihui
--
Yihui Xie xieyi...@gmail.com
Phone: 206-667-4385 Web: http://yihui.name
Fred Hutchinson Cancer Research Center, Seattle


On Fri, Jun 28, 2013 at 6:30 PM, Rolf Turner rolf.tur...@xtra.co.nz wrote:
 On 29/06/13 02:54, John Fox wrote:

 Dear Duncan and Steve,

 Since Steve's example raises it, I've never understood why it's legal to
 change the built-in global constants in R, including T and F. That just
 seems to me to set a trap for users. Why not treat these as reserved
 symbols, like TRUE, Inf, etc.?


 I rather enjoy being able to set

 pi <- 3

 :-)

 cheers,

 Rolf Turner



Re: [R] Quantile Regression/(package (quantreg))

2013-06-28 Thread Frank Harrell

Mike,

Do something like:

require(rms)
dd <- datadist(mydataframe); options(datadist='dd')
f <- Rq(y ~ rcs(age,4)*sex, tau=.5)  # uses the rq function from quantreg
summary(f)  # inter-quartile-range differences in medians of y (b/c tau=.5)
plot(Predict(f, age, sex))   # show age effect on median as a continuous 
variable


For more help type ?summary.rms and ?Predict

Frank



When performing quantile regression (the R package I used is quantreg), the 
value of the quantile refers to the quantile value of the dependent 
variable.
Typically when trying to predict, since the information we have is the 
independent variables, I am interested in trying to estimate the 
coefficients based on the quantile values of the independent variables' 
distribution, so that I can understand, for certain ranges of 
the predictor/independent variable values, what level of exposure the 
target/dependent variable has to the predictors/coefficients.

Is there any way I can achieve that?

Just in case, if I am incorrect about my understanding on the way 
quantiles are interpreted when using the package quantreg, please let me 
know.


Thanks
Mike
--
Frank E Harrell Jr Professor and Chairman  School of Medicine
   Department of Biostatistics Vanderbilt University
