[R] Parameters estimation for extreme value models

2013-05-26 Thread assaedi76 assaedi76
Thanks in advance, R users.

I have time series data, and I need to estimate the parameters involved in
three different models for generalized extreme values:

Model 1: a, b, c are constants.

Model 2: a(t) = B0 + B1*t, but b, c are constants.

Model 3: c(t) = exp(B0 + B1*t), but a, b are constants.

where a, b and c are the location, scale and shape parameters respectively,
and t is time.
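A pointer for anyone with the same question: nonstationary GEV models of exactly these three forms can be fit by maximum likelihood with, for example, the ismev package, whose gev.fit() accepts covariate matrices and link functions for each parameter. A minimal, hedged sketch; the data below are placeholders, and in practice x should hold block maxima:

```r
library(ismev)  # assumed installed; provides gev.fit()

x <- rnorm(100, mean = 30, sd = 5)    # placeholder series; use your own data
tt <- matrix(seq_along(x), ncol = 1)  # covariate matrix: time

m1 <- gev.fit(x)                          # Model 1: a, b, c constant
m2 <- gev.fit(x, ydat = tt, mul = 1)      # Model 2: a(t) = B0 + B1*t
m3 <- gev.fit(x, ydat = tt, shl = 1,      # Model 3: c(t) = exp(B0 + B1*t)
              shlink = exp)
m2$mle                                    # maximum-likelihood estimates
```

The extRemes and evd packages offer comparable facilities.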







Regards 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Parameters estimation for extreme value models

2013-05-26 Thread Jeff Newmiller
Well, then, you had better get busy and stop posting here. To learn why, read 
the Posting Guide. Some pointers:

a) No homework help here.
b) No posting in HTML.
c) This list is for questions about R, not statements about your needs.

---
Jeff NewmillerThe .   .  Go Live...
DCN:jdnew...@dcn.davis.ca.usBasics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
--- 
Sent from my phone. Please excuse my brevity.



Re: [R] curiosity: next-gen x86 processors and FP32?

2013-05-26 Thread Jeff Newmiller
I am no HPC expert, but I have been computing for awhile.

There are already many CPU-specific optimizations built into most compilers 
used to compile the R source code. Anyone sincerely interested in getting work 
done today should get on with their work and hope that most of the power of new 
processors gets delivered the same way.

The reason single precision is so uncommon in many computing environments is 
that numerical errors propagate much faster with single precision. I don't 
expect the typical R user to want to perform detailed uncertainty analysis 
every time they set up a computation to decide whether it can be computed with 
sufficient accuracy using SP.
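To make the propagation point concrete: base R has no true 32-bit floats, but rounding to about 7 significant digits after every operation crudely mimics single precision. An illustrative sketch only, not a benchmark:

```r
x <- rep(0.1, 1e5)          # 0.1 is not exactly representable in binary
dp <- 0; sp <- 0
for (v in x) {
  dp <- dp + v              # full double-precision accumulation
  sp <- signif(sp + v, 7)   # round each step, mimicking single precision
}
abs(dp - 1e4)               # error stays tiny in double precision
abs(sp - 1e4)               # the simulated single-precision error is far larger
```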

Most speed problems I have encountered have been related to memory (swapping, 
fragmentation) and algorithm inefficiency, not CPU speed.

ivo welch ivo.we...@anderson.ucla.edu wrote:

dear R experts:

although my question may be better asked on the HPC R mailing list, it
is really about something that average R users who don't plan to write
clever HPC-optimized code would care about: is there a quantum
performance leap on the horizon with CPUs?

like most R average non-HPC users, I want to stick mostly to
mainstream R, often with library parallel but that's it.  I like R to
be fast and effortless.  I don't want to have to rewrite my code
greatly to take advantage of my CPU.  the CUDA back-and-forth over the
memory, which requires code rewrites, makes CUDA not too useful for me.
in fact, I don't even like setting up computer clusters.  I run code
only on my single personal machine.

now, I am looking at the two upcoming processors---intel haswell (next
month) and amd kaveri (end of year).  does either of them have the
potential to be a quantum leap for R without complex code rewrites?
I presume that any quantum leaps would have to come from R using a
different numerical vector engine.   (I tried different compiler
optimizations when compiling R (such as AVX) on the 1-year old i7-27*,
but it did not really make a difference in basic R benchmarks, such as
simple OLS calculations.  I thought AVX would provide a faster vector
engine, but something didn't really compute here.  pun intended.)

I would guess that haswell will be a nice small evolutionary step
forward.  5-20%, perhaps.  but nothing like a factor 2.

[tomshardware details how intel FP32 math is 4 times as fast as double
math on the i7 architecture.  for most of my applications, a 4 times
speedup at a sacrifice in precision would be worth it.  R seems to use
only doubles---even as.single is not even converting to single, much
less inducing calculations to be single-precision.  so I guess this is
a no-go.  correct?? ]

kaveri's hUMA on the other hand could be a quantum leap.  kaveri could
have the GPU transparently offer common standard built-in vector
operations that we use in R, i.e., improve the speed of many programs
without the need for a rewrite, by a factor of 5?  hard to believe,
but it would seem that AMD actually beat Intel for R users.  a big
turnaround, given their recent deemphasis of FP on the CPU.
(interestingly, the amd-built Xbox One and PS4 processors were also
reported to have  hUMA.)

worth waiting for kaveri?   anything I can do to drastically speed up
R on intel i7 by going to FP32?

regards,

/iaw

Ivo Welch (ivo.we...@gmail.com)



[R] load ff object in a different computer

2013-05-26 Thread Djordje Bajic
Hi all,

I am having trouble loading an ff object previously saved on a different
computer. I have both files, .ffData and .RData, and the first of them is
13Mb large, so I know the data is in there. But when I try to ffload it, I
get

checkdir error:  cannot create /home/_myUser_
 Permission denied
 unable to process
home/_myUser_/Rtempdir/ff1a831d500b8d.ff.

and some warnings.  On the original computer, this temporary file is
deleted each time I exit R, and my expectation is that the data is actually
stored in the .ffData and .RData that I have here. But maybe I don't really
understand the underpinnings of ff. On this computer, my username is
different, so user xyz does not exist. I tried changing the option
Rtempdir, but no luck.

In addition, when I open the .ffData file to see what is inside, there is
only a path to the ff1a831... temporary file. As information about the ff
package on the internet is rather scarce, could anyone please help me
understand this, and possibly recover my data if it is possible?

Thank you!

Djordje



[R] avoiding eval parse with indexing

2013-05-26 Thread Martin Ivanov
 Hello,
I would like to get advice on how the notorious eval(parse()) construct
could possibly be avoided in the following example. I have an array x,
which can have a different number of dimensions, but I am only interested
in extracting, say, the first element of the first dimension. Currently I
achieve this in this way:

eval(parse(text=paste0("x[1", paste(rep(", ", length(dim(x)) - 1), collapse=""), "]")))

Is it possible to avoid the eval parse here? How?

Best regards,

Martin



Re: [R] When creating a data frame with data.frame() transforms integers into factors

2013-05-26 Thread Bert Gunter
1. Please always cc. the list; do not reply just to me.

2.  OK, I see. I ERRED. Had you cc'ed the list, someone might have
pointed this out. The correct example reproduces what you saw.

z <- sample(1:10, 30, rep=TRUE)
table(z)
w <- data.frame(table(z))
w

    z Freq
1   1    2
2   2    3
3   3    1
4   4    3
5   5    5
6   6    3
7   7    5
8   8    4
9   9    1
10 10    3

sapply(w, class)
        z      Freq
 "factor" "integer"

This is exactly what is expected and documented.  See ?table. So the
question is: What do you expect?  table() produces an array whose
cross-classifying factors are the dimensions. data.frame converts this
into a data frame. Perhaps the following will help clarify:

 z <- data.frame(fac1 = sample(LETTERS[1:3], 10, rep=TRUE),
                 fac2 = sample(c("j","k"), 10, rep=TRUE))
 z
    fac1 fac2
 1     A    k
 2     B    k
 3     C    k
 4     C    k
 5     B    k
 6     C    k
 7     C    k
 8     A    j
 9     A    j
 10    C    j

 table(z)

     fac2
 fac1 j k
    A 2 1
    B 0 2
    C 1 4

 data.frame(table(z))

   fac1 fac2 Freq
 1    A    j    2
 2    B    j    0
 3    C    j    1
 4    A    k    1
 5    B    k    2
 6    C    k    4

 table(z['fac1'])

 A B C
 3 2 5

 data.frame(table(z['fac1']))
   Var1 Freq
 1    A    3
 2    B    2
 3    C    5
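A common follow-up: if the tabulated z values are needed as integers again, convert the factor's labels rather than its underlying codes. A brief sketch:

```r
w <- data.frame(table(z = sample(1:10, 30, replace = TRUE)))
# Wrong: as.integer(w$z) returns the factor's internal codes 1, 2, 3, ...
# Right: go through the character labels first
w$z <- as.integer(as.character(w$z))
sapply(w, class)   # both columns are now integer
```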

Cheers,
Bert

On Sat, May 25, 2013 at 6:54 PM, António Camacho toin...@gmail.com wrote:
 Hello Bert
 Thanks for your prompt reply.
 I tried your example and it worked without a problem.

 But what I want is to create a data frame from the output of the function
 table(), so in your example I tried sapply(data.frame(tbl), class) and the
 output was z -> factor and Freq -> integer.
 What is happening in the table() function that transforms the integers
 in z into values with labels? Because when I do names(tbl) it returns
 each value of z as a name.

 I read the manual for "[" but I didn't understand it completely. I have to
 read the Introduction to R more carefully.

 I also tried using "[", "[[" and "$" for the extraction of the values from
 the 'posts' column, but the problem persisted.

 Like I said, this code was taken from an example on a webpage. I contacted
 the author and he confirmed that the code worked on his machine, which was
 running R 2.15.1.
 Maybe something changed between versions in data.frame()?

 I really don't understand what I am doing wrong.

 António

 On 2013/05/26, at 01:44, Bert Gunter wrote:

 Huh?

 z <- sample(1:10, 30, rep=TRUE)
 tbl <- table(z)
 tbl

 z
  1  2  3  4  5  6  7  8  9 10
  4  3  2  6  3  3  2  2  2  3

 data.frame(z)

z
 1   5
 2   2
 3   4
 4   1
 5   6
 6   4
 7  10
 8   4
 9   3
 10  8
 11 10
 12  4
 13  3
 14  9
 15  2
 16  2
 17  6
 18  1
 19  4
 20  7
 21  9
 22 10
 23  7
 24  5
 25  5
 26  6
 27  8
 28  1
 29  1
 30  4

 sapply(data.frame(z), class)

         z
 "integer"

 Your error: you used df['posts'].  You should have used df[,'posts'].

 The former is a data frame. The latter is a vector. Read the
 Introduction to R tutorial or ?[ if you don't understand why.

 -- Bert


 On Sat, May 25, 2013 at 12:36 PM, António Camacho toin...@gmail.com
 wrote:

 Hello


 I am a novice to R, and I was learning how to do a scatter plot with R
 using an example from a website.

 My setup is an iMac with Mac OS X 10.8.3 and R 3.0.1, default install,
 without additional packages loaded.

 I created a .csv file in vim with  the following content
 userID,user,posts
 1,user1,581
 2,user2,281
 3,user3,196
 4,user4,150
 5,user5,282
 6,user6,184
 7,user7,90
 8,user8,74
 9,user9,45
 10,user10,20
 11,user11,3
 12,user12,1
 13,user13,345
 14,user14,123

 I imported the file into R using: df <- read.csv('file.csv')
 To confirm the data types I did: sapply(df, class)
 which returns userID -> integer; user -> factor; posts -> integer.
 Then I tried to create another data frame with the number of posts and its
 frequencies, so I did: postFreqCount <- data.frame(table(df['posts']))
 This gives me the postFreqCount data frame with two columns: one called
 'Var1' that has the number of posts each user made, and another column
 'Freq' with the frequency of each number of posts.
 The problem is that if I do sapply(postFreqCount['Var1'], class) it
 returns factor.
 So the data.frame() function transformed a variable that was integer
 (posts) into a variable (Var1) that has the same values but is a factor.
 I want to know how to prevent this from happening. How do I keep the
 values from being transformed from integer to factor?

 Thank you for your help

 António





 --

 Bert Gunter
 Genentech Nonclinical Biostatistics

 Internal Contact Info:
 Phone: 467-7374
 Website:

 

Re: [R] What does this say? Error in rep("(Intercept)", nrow(a0)) : invalid 'times' argument

2013-05-26 Thread Bert Gunter
Mike:

On Sat, May 25, 2013 at 7:24 PM, C W tmrs...@gmail.com wrote:
 Thomas, thanks for the cool trick.  I always thought browser() was the
 only thing that existed; apparently not.

Which you would have known had you read the docs!

See section 9 on Debugging in the R Language Definition Manual
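For reference, the base-R debugging tools that manual covers include the following; none of this is specific to glmnet, and the cv.glmnet calls below assume the package is attached:

```r
options(error = recover)  # on any error, pick a stack frame to browse; 0 quits
options(error = NULL)     # restore the default error behavior

debugonce(cv.glmnet)      # open the browser on the next call only
debug(cv.glmnet)          # open it on every call ...
undebug(cv.glmnet)        # ... until switched off
```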

-- Bert


 Mike

 On Sat, May 25, 2013 at 9:09 PM, Thomas Stewart
 tgs.public.m...@gmail.com wrote:
 Mike-

 You can use the traceback function to see where the error is:

  bob <- matrix(rnorm(100*180), nrow=180)
  yyy <- rnorm(180)
  fit1 <- cv.glmnet(bob, yyy, family="mgaussian")
 Error in rep("(Intercept)", nrow(a0)) : invalid 'times' argument
  traceback()
 6: predict.multnet(object, newx, s, type, exact, offset, ...)
 5: predict.mrelnet(glmnet.object, type = "nonzero")
 4: predict(glmnet.object, type = "nonzero")
 3: lapply(X = X, FUN = FUN, ...)
 2: sapply(predict(glmnet.object, type = "nonzero"), length)
 1: cv.glmnet(bob, yyy, family = "mgaussian")

 So, the error is in the predict.multnet function.  If you peek at that
 function, you see where the function falls apart.  It seems that the
 function wants a0 to be a matrix, but in this example it is a vector.  I'm
 not familiar enough with the package to offer advice on how to fix this.

 -tgs



 On Sat, May 25, 2013 at 4:14 PM, C W tmrs...@gmail.com wrote:

 Dear list,
 I am using glmnet.  I have no idea what this error is telling me.
 Here's my code,

  bob <- matrix(rnorm(100*180), nrow=180)
  yyy <- rnorm(180)
  fit1 <- cv.glmnet(bob, yyy, family="mgaussian")
 Error in rep("(Intercept)", nrow(a0)) : invalid 'times' argument

 In fact, I peeked inside cv.glmnet() using
  glmnet::cv.glmnet

 Can't even find the error message in the code.  I am clueless at the
 moment.
 Thanks in advance,
 Mike




-- 

Bert Gunter
Genentech Nonclinical Biostatistics

Internal Contact Info:
Phone: 467-7374
Website:
http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm



Re: [R] load ff object in a different computer

2013-05-26 Thread Milan Bouchet-Valat
On Sunday, 26 May 2013 at 13:53 +0200, Djordje Bajic wrote:
Please tell us exactly how you saved that ff object. You should try to
reproduce the problem with very simple data you post in your message
using dput(), and provide us with all the code and the errors it
triggers.


Regards



Re: [R] avoiding eval parse with indexing

2013-05-26 Thread Berend Hasselman

On 26-05-2013, at 15:56, Martin Ivanov tra...@abv.bg wrote:



I tried this

x1 - array(runif(9),dim=c(3,3))
x2 - array(runif(8),dim=c(2,2,2))

and then

x1[1] and x2[1] gave me what you wanted.
I don't know if it is the coRRect way to do what you want.

Berend



Re: [R] avoiding eval parse with indexing

2013-05-26 Thread Bert Gunter
Martin:

Well, assuming I understand, one approach would be to first get the
dim attribute of the array and then create the appropriate call using
that:

 z <- array(1:24, dim=2:4)
 d <- dim(z)
 ix <- lapply(d[-c(1,1)], seq_len)
 do.call("[", c(list(z), 1, 1, ix))
[1]  1  7 13 19

Is that what you want?

-- Bert






-- 

Bert Gunter
Genentech Nonclinical Biostatistics

Internal Contact Info:
Phone: 467-7374
Website:
http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm



Re: [R] avoiding eval parse with indexing

2013-05-26 Thread Florent D.
library(abind)
asub(x, 1, 1)



[R] Setting default graphics device options

2013-05-26 Thread Vishal Belsare
Hi,

Is it possible to :

[1] set a default location to plot graphs in png format with specific
dimensions and resolution. I want to plot to a directory which is shared on
the network (samba share), so as to view the plots from a different machine.

[2] call dev.off() 'automagically' after a call to the plot function, by
(somehow) setting it as a default behavior in .Rprofile.site? This would be
nice to have, so as to update an image viewer running on a local machine
that keeps displaying the image(s) in the shared plot folder on the remote
machine (which runs R).

I was thinking on the lines of adding the following to .Rprofile.site :
__

prior2plot <- function() {
  plotfile <- paste('/srv/samba/share/Rplot-',
                    as.character(format(Sys.time(), "%Y%m%d-%H%M%S")),
                    '.png', sep='')
  png(filename=plotfile, width=1280, height=800)
}

setHook("before.plot.new", prior2plot())

__

However, the above does not seem to work beyond a first plot.
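One likely culprit, offered as a guess rather than a confirmed diagnosis: setHook() should be given the function object itself, not the result of calling it, and the previous png device needs closing before a new one is opened. A sketch of that rework:

```r
prior2plot <- function(...) {
  if (dev.cur() > 1) dev.off()    # close the previous png device, if any
  plotfile <- paste0('/srv/samba/share/Rplot-',
                     format(Sys.time(), "%Y%m%d-%H%M%S"), '.png')
  png(filename = plotfile, width = 1280, height = 800)
}
setHook("before.plot.new", prior2plot)   # note: prior2plot, not prior2plot()
```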

Best wishes,

Vishal



Re: [R] avoiding eval parse with indexing

2013-05-26 Thread arun


The code returns an error:
 do.call("[", c(list(z), 1, 1, ix))
#Error in 1:24[1, 1, 1:3, 1:4] : incorrect number of dimensions
Maybe something is missing.
A.K.





Re: [R] avoiding eval parse with indexing

2013-05-26 Thread arun
Hi,
You could use:

library(abind)
# using Berend's and Bert's examples
x1 <- array(runif(9), dim=c(3,3))
x2 <- array(runif(8), dim=c(2,2,2))
z <- array(1:24, dim=2:4)

# applying your code:
eval(parse(text=paste0("x1[1", paste(rep(", ", length(dim(x1))-1), collapse=""), "]")))
#[1] 0.6439062 0.7139397 0.6017418
eval(parse(text=paste0("x2[1", paste(rep(", ", length(dim(x2))-1), collapse=""), "]")))
#            [,1]      [,2]
#[1,] 0.026671344 0.2116831
#[2,] 0.003903368 0.1551140
eval(parse(text=paste0("z[1", paste(rep(", ", length(dim(z))-1), collapse=""), "]")))
#     [,1] [,2] [,3] [,4]
#[1,]    1    7   13   19
#[2,]    3    9   15   21
#[3,]    5   11   17   23

asub(x1, 1, 1, drop=TRUE)
#[1] 0.6439062 0.7139397 0.6017418
asub(x2, 1, 1, drop=TRUE)
#            [,1]      [,2]
#[1,] 0.026671344 0.2116831
#[2,] 0.003903368 0.1551140
asub(z, 1, 1, drop=TRUE)
#     [,1] [,2] [,3] [,4]
#[1,]    1    7   13   19
#[2,]    3    9   15   21
#[3,]    5   11   17   23
A.K.





[R] [Fw: Re: avoiding eval parse with indexing ]

2013-05-26 Thread Martin Ivanov
 ---BeginMessage---
 Dear Mr Gunter,

with a slight correction:
z <- array(1:24, dim=2:4)
d <- dim(z)
ix <- lapply(d[-1], seq_len)
do.call("[", c(list(z), 1, ix))

     [,1] [,2] [,3] [,4]
[1,]    1    7   13   19
[2,]    3    9   15   21
[3,]    5   11   17   23

your suggestion worked and is exactly what I wanted! 
Thank You very much indeed!

I want to get rid of all eval(parse) constructs in my code,
and this was the greatest obstacle I had. Now I will be able to do this.
As they say, eval parse in the code is just lack of knowledge.

Best regards,

Martin


 ---End Message---


Re: [R] curiosity: next-gen x86 processors and FP32?

2013-05-26 Thread ivo welch
I think this is mostly but not fully correct.

most users are better off with double precision most of the time...but
not all of the time, if the speedup and memory savings are 4x and 2x,
respectively.

algorithm inefficiency may well be true, too---but if I spend one week
of my time (or even 3 days) to tune my program for a one time job that
then saves me one week, it's a net loss.  let's put a value on the
time to tune algorithms...$100/hour?  often, it is worth more maxing
memory and CPU instead.   my question is thus whether the tradeoffs
are becoming even more stark.  if a future vector-GPU can speed up my
FP by a factor of 5, I really shouldn't spend much time tuning
algorithms and write my programs in a simple straightforward way
instead.  YMMV.

memory swapping is death, speedwise.  anyone who doesn't max out RAM
and uses R is myopic IMHO.  unfortunately, standard i7 haswells are
limited to 32GB.  this makes R suitable for the analysis of data sets
that are about 4-6GB in size.  R is prolific in making copies of
structures in memory, even if a little bit of cleverness could avoid
it, e.g. x$bigstruct[bignum] <- 1.  R often errs on the side of
not tuning its algorithms, too.  that's why data.table exists (though
I don't like it for some of its semantic oddities).  if it makes sense
to tune algorithms, it would be as low a level as possible on behalf
of software that is used by as many people as possible.  then again, I
am grateful that we have volunteers who develop R for free.

/iaw


On Sun, May 26, 2013 at 1:01 AM, Jeff Newmiller
jdnew...@dcn.davis.ca.us wrote:
 I am no HPC expert, but I have been computing for awhile.

 There are already many CPU-specific optimizations built into most compilers 
 used to compile the R source code. Anyone sincerely interested in getting 
 work done today should get on with their work and hope that most of the power 
 of new processors gets delivered the same way.

 The reason single precision is so uncommon in many computing environments is 
 that numerical errors propagate much faster with single precision. I don't 
 expect the typical R user to want to perform detailed uncertainty analysis 
 every time they set up a computation to decide whether it can be computed 
 with sufficient accuracy using SP.

 Most speed problems I have encountered have been related to memory (swapping, 
 fragmentation) and algorithm inefficiency, not CPU speed.

 ivo welch ivo.we...@anderson.ucla.edu wrote:

dear R experts:

although my question may be better asked on the HPC R mailing list, it
is really about something that average R users who don't plan to write
clever HPC-optimized code would care about: is there a quantum
performance leap on the horizon with CPUs?

like most R average non-HPC users, I want to stick mostly to
mainstream R, often with library parallel but that's it.  I like R to
be fast and effortless.  I don't want to have to rewrite my code
greatly to take advantage of my CPU.  the CUDA forth-and-back on the
memory which requires code rewrites makes CUDA not too useful for me.
in fact, I don't even like setting up computer clusters.  I run code
only on my single personal machine.

now, I am looking at the two upcoming processors---intel haswell (next
month) and amd kaveri (end of year).  does either of them have the
potential to be a quantum leap for R without complex code rewrites?
I presume that any quantum leaps would have to come from R using a
different numerical vector engine.   (I tried different compiler
optimizations when compiling R (such as AVX) on the 1-year old i7-27*,
but it did not really make a difference in basic R benchmarks, such as
simple OLS calculations.  I thought AVX would provide a faster vector
engine, but something didn't really compute here.  pun intended.)

I would guess that haswell will be a nice small evolutionary step
forward.  5-20%, perhaps.  but nothing like a factor 2.

[tomshardware details how intel FP32 math is 4 times as fast as double
math on the i7 architecture.  for most of my applications, a 4 times
speedup at a sacrifice in precision would be worth it.  R seems to use
only doubles---even as.single does not convert to single, much
less induce calculations to be done in single precision.  so I guess this is
a no-go.  correct?? ]
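[A quick hedged check of that guess, not from the original thread: as.single() in base R does not change the storage type at all.]

```r
# Hedged illustration: as.single() does not create a single-precision
# vector; it returns a double vector tagged with a "Csingle" attribute,
# which only matters for the .C()/.Fortran() foreign-function interface.
x <- as.single(1.5)
typeof(x)            # "double" -- still stored as double precision
attr(x, "Csingle")   # TRUE -- the only thing as.single() changed
```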

kaveri's hUMA on the other hand could be a quantum leap.  kaveri could
have the GPU transparently offer common standard built-in vector

Re: [R] avoiding eval parse with indexing

2013-05-26 Thread Martin Ivanov
 
Dear Arun,

Thank You very much, your suggestion also works and seems even more 
convenient.
I think, though, that Mr Gunter's suggestion should be more efficient, as it 
uses the extract operator directly.

Thank You all very much for Your responsiveness. R does have a wonderful 
community!

Best regards,
Martin



  Original message 
 From: arun 
 Subject: Re: [R] avoiding eval parse with indexing
 To: Martin Ivanov 
 Sent: Sunday, 26 May 2013 18:48:33 EEST
 
 
 Hi,
 You could use:
 library(abind)
 #using Berend's and Bert's example
 x1 <- array(runif(9),dim=c(3,3))
 x2 <- array(runif(8),dim=c(2,2,2))
 z <- array(1:24,dim=2:4)
 #applying your code:
  
 eval(parse(text=paste0("x1[1",paste(rep(",",length(dim(x1))-1),collapse=""),"]")))
 #[1] 0.6439062 0.7139397 0.6017418
 eval(parse(text=paste0("x2[1",paste(rep(",",length(dim(x2))-1),collapse=""),"]")))
 #    [,1]  [,2]
 #[1,] 0.026671344 0.2116831
 #[2,] 0.003903368 0.1551140
 eval(parse(text=paste0("z[1",paste(rep(",",length(dim(z))-1),collapse=""),"]")))
 # [,1] [,2] [,3] [,4]
 #[1,]    1    7   13   19
 #[2,]    3    9   15   21
 #[3,]    5   11   17   23
 
  asub(x1,1,1,drop=TRUE)
 #[1] 0.6439062 0.7139397 0.6017418
  asub(x2,1,1,drop=TRUE)
 #    [,1]  [,2]
 #[1,] 0.026671344 0.2116831
 #[2,] 0.003903368 0.1551140
  asub(z,1,1,drop=TRUE)
 # [,1] [,2] [,3] [,4]
 #[1,]    1    7   13   19
 #[2,]    3    9   15   21
 #[3,]    5   11   17   23
 A.K. 
 
 
 
 - Original Message -
 From: Martin Ivanov 
 To: r-help@r-project.org
 Cc: 
 Sent: Sunday, May 26, 2013 9:56 AM
 Subject: [R] avoiding eval parse with indexing
 
 Hello,
 I would like to get advice on how the notorious eval(parse()) construct 
 could possibly 
 be avoided in the following example. I have an array x, which can have a 
 different number of dimensions,
 but I am only interested in extracting, say, the first element of the first 
 dimension. Currently I achieve this
 in this way:
 
 eval(parse(text=paste0("x[1", paste(rep(", ", 
 length(dim(x)) - 1), collapse=""), "]")))
 
 Is it possible to avoid the eval parse here? How?
 
 Best regards,
 
 Martin
 
 
 



Re: [R] avoiding eval parse with indexing

2013-05-26 Thread arun
Hi,
Another way would be:
library(arrayhelpers)
 slice(aperm(x1,c(2,1)),j=1)
#[1] 0.6439062 0.7139397 0.6017418
 slice(aperm(x2,c(2,1,3)),j=1)
#    [,1]  [,2]
#[1,] 0.026671344 0.2116831
#[2,] 0.003903368 0.1551140


  slice(aperm(z,c(2,1,3)),j=1)
# [,1] [,2] [,3] [,4]
#[1,]    1    7   13   19
#[2,]    3    9   15   21
#[3,]    5   11   17   23

#or
  array(z[slice.index(z,1)==1],dim= dim(z)[2:3])
# [,1] [,2] [,3] [,4]
#[1,]    1    7   13   19
#[2,]    3    9   15   21
#[3,]    5   11   17   23
array(x2[slice.index(x2,1)==1],dim= dim(x2)[2:3])
#    [,1]  [,2]
#[1,] 0.026671344 0.2116831
#[2,] 0.003903368 0.1551140
array(x1[slice.index(x1,1)==1],dim= dim(x1)[1])
#[1] 0.6439062 0.7139397 0.6017418
A.K.

 







Re: [R] SAPPLY function for COLUMN NULL

2013-05-26 Thread arun
colnames(dd)
#[1] "col1" "colb"
null_vector <- colnames(dd)
sapply(null_vector,makeNull,dd)
# col1 colb
#[1,]   NA    4
#[2,]    2   NA
#[3,]    3    2
#[4,]    4   NA
#[5,]    1    4
#[6,]   NA    5
#[7,]    1    6
A.K.


I am trying to make a column value in a dataframe = NA if there is a 0 
or high value in that column. I need to do this process repeatedly, 
hence I have to define a function. Here is the code that I am using, which is 
not working. Please advise on where I am making an error. 

makeNull <- function(col, data=dd) { 
is.na(data[[col]]) <- data[[col]] == 0 
is.na(data[[col]]) <- data[[col]] > 99 
return(data[[col]]) 
} 

dd <- data.frame(col1=c(0,2,3,4,1,0,1),colb=c(4,0,2,0,4,5,6)) 
null_vector=c("cola","colb") 
sapply(null_vector,function(x) makeNull(x,dd)) 

Error in `[[<-.data.frame`(`*tmp*`, col, value = logical(0)) :  replacement 
has 0 rows, data has 7 


Thank you in advance. 
-Sanjeev
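[Editorial aside, hedged: the likely culprit is that null_vector names a column "cola" that does not exist (the column is "col1"), so data[[col]] is NULL and the replacement has length zero, which matches the error. With the real column names the same function runs and reproduces arun's output above.]

```r
# The error comes from the typo "cola": dd has no such column, so
# data[[col]] is NULL and `is.na<-` gets a zero-length replacement.
makeNull <- function(col, data = dd) {
  is.na(data[[col]]) <- data[[col]] == 0   # zeros become NA
  is.na(data[[col]]) <- data[[col]] > 99   # implausibly high values become NA
  data[[col]]
}
dd <- data.frame(col1 = c(0,2,3,4,1,0,1), colb = c(4,0,2,0,4,5,6))
sapply(colnames(dd), makeNull, data = dd)  # uses the real column names
```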



Re: [R] Setting default graphics device options

2013-05-26 Thread Henrik Bengtsson
Hi,

see the R.devices package
[http://cran.r-project.org/web/packages/R.devices/].   FYI, there is a
vignette [R.devices-overview.pdf], but for some reason it's hard to find.
 However it is there: help.start() -> R.devices -> 'User guides,
package vignettes and other documentation.' -> R.devices-overview.pdf.

First, don't forget to call library(R.devices).

CREATE IMAGE FILE ATOMICALLY:
To create a PNG image file 'GaussianDensity.png' in subdirectory
figures/ (of the current working directory), do:

toPNG("GaussianDensity", aspectRatio=0.6, scale=2, {
  curve(dnorm, from=-5, to=+5)
})

This will, while still using png()/dev.off() internally, (1)
automatically add the filename extension, (2) set the height of the image
to 0.6 times the default width, (3) rescale height and width to be 2
times the default, and (4) make sure the PNG device is closed
afterward (no more forgetting about dev.off()).

It also makes sure not to leave incomplete image files behind in case
there's an error in your plot code.  There's an option to change that
behavior too, e.g. so it instead renames the incomplete file for easy
identification.


DEFAULT OUTPUT DIRECTORY:
You can set the default output directory as:

options("devEval/args/path"="/srv/samba/share/")

Then, whenever you call toPNG(), it will instead save the file to
/srv/samba/share/.  If the directory is missing, it will be created
automatically.


DEFAULT FILENAME:
Currently it is not possible to set a default filename pattern.
However, you can do something like:

imgname <- function() {
  # NOTE: No filename extension
  sprintf("Rplot-%s", format(Sys.time(), "%Y%m%d-%H%M%S"))
}

and then use:

toPNG(imgname(), aspectRatio=0.6, scale=2, {
  curve(dnorm, from=-5, to=+5)
})

(I'll think about adding support for a default image name format).


DEFAULT DEVICE OPTIONS:
To change the default image dimensions, do:

 devOptions("png", width=1280, height=800);

Importantly, in order for these devOptions() to apply, you must use
toPNG() [or devEval("png")]; they won't apply if you call png()
explicitly.  To check the default options, do:

 str(devOptions("png"))
List of 11
 $ filename      : chr "Rplot%03d.png"
 $ units         : chr "px"
 $ pointsize     : num 12
 $ bg            : chr "white"
 $ res           : logi NA
 $ family        : chr "sans"
 $ restoreConsole: logi TRUE
 $ type          : language c("windows", "cairo", "cairo-png")
 $ antialias     : language c("default", "none", "cleartype", "grey", "subpixel")
 $ width         : num 1280
 $ height        : num 800

The default defaults are inferred from the defaults in R, so if you
don't change anything you'll get the same output as calling
png()/dev.off().

BTW, unless all of your images should have aspect ratio
800/1280=0.625, I'd recommend using square defaults (just as the
png() device does), e.g. devOptions("png", width=1280, height=1280), and
then specifying aspectRatio=0.625 in your toPNG() calls.


In addition to toPNG(), there are also toBMP(), toEPS(), toPDF(),
toSVG() and toTIFF(), with their own devOptions() settings.


Hope this is useful

Henrik

PS. From your example where all images have the same filename format
with timestamps, it almost looks like you want to do an automatic
log/archiving of image files generated.  If so, I also have the
R.archive package (not yet on CRAN) in development.  All you need to
do is load that package and everything else will be automatic.
Whenever R.devices creates an image file (e.g. via toPNG()), a copy of
it will be saved to ~/.Rarchive/%Y%m%d/%H%M%OS3-imgname.ext, e.g.
~/.Rarchive/2013-05-26/100330.684_GaussianDensity.png.  For every
toPNG(), toPDF() etc another copy will be created with a unique
filename.  That is useful when you do lots of EDA and want to go back
to that image you did a couple of hours ago.  If this is what you
want, let me know and I'll show you how to get access to R.archive.


On Sun, May 26, 2013 at 8:20 AM, Vishal Belsare shoot.s...@gmail.com wrote:
 Hi,

 Is it possible to :

 [1] set a default location to plot graphs in png format with specific
 dimensions & resolution. I want to plot to a directory which is shared on
 the network (samba share), so as to view the plots from a different machine.
 prior2plot <- function() {plotfile <- paste('/srv/samba/share/Rplot-',
 as.character(format(Sys.time(), "%Y%m%d-%H%M%S")), '.png', sep='');
 png(filename=plotfile, width=1280, height=800)}


 [2] call dev.off() 'automagically' after a call to the plot function, by
 (somehow) setting it as a default behavior in .Rprofile.site? This would be
 nice to have, so as to update an image viewer running on a local machine
 which keeps displaying the image(s) in the shared plot folder on the remote
 machine (which runs R)

 I was thinking on the lines of adding the following to .Rprofile.site :
 __

 prior2plot <- function() {plotfile <- paste('/srv/samba/share/Rplot-',
 as.character(format(Sys.time(), "%Y%m%d-%H%M%S")), '.png', sep='');
 png(filename=plotfile, width=1280, height=800)}
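[Editorial aside, hedged base-R sketch for part [2]: the wrapper name saveplot is made up, and the share path is taken from the message. on.exit() guarantees the device is closed even when the plotting code errors, which is the usual way to get "automagic" dev.off() without touching .Rprofile.site.]

```r
# Hypothetical wrapper: open a timestamped PNG on the share, evaluate the
# plotting expression, and close the device no matter what happens.
saveplot <- function(expr, dir = "/srv/samba/share",
                     width = 1280, height = 800) {
  file <- file.path(dir, sprintf("Rplot-%s.png",
                                 format(Sys.time(), "%Y%m%d-%H%M%S")))
  png(filename = file, width = width, height = height)
  on.exit(dev.off())   # runs on normal return and on error
  expr                 # lazily evaluated here, after the device is open
  invisible(file)
}
# saveplot(plot(1:10))   # writes one PNG and closes the device
```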

 

Re: [R] avoiding eval parse with indexing: Correction

2013-05-26 Thread Bert Gunter
Yes, for the record, the typo in my earlier post is corrected below.
(Martin's previous  correction both corrected and slightly changed
what I provided).

-- Bert

On Sun, May 26, 2013 at 7:43 AM, Bert Gunter bgun...@gene.com wrote:
 Martin:

 Well, assuming I understand, one approach would be to first get the
 dim attribute of the array and then create the appropriate call using
 that:

 z <- array(1:24,dim=2:4)
 d <- dim(z)

 ix <- lapply(d[-c(1,2)],seq_len)
 ^   ## I typed c(1,1) previously,
a mistake.


  do.call("[", c(list(z),1,1,ix))
 [1]  1  7 13 19

 Is that what you want?

 -- Bert
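[Editorial aside, hedged: Bert's do.call() construction generalizes to fixing only the first dimension, which is what Martin asked for. The helper name first_slice is made up.]

```r
# Hypothetical helper: take the first slice along dimension 1 of an array
# with any number of dimensions, without eval(parse()).
first_slice <- function(x) {
  d <- dim(x)
  ix <- c(list(1), lapply(d[-1], seq_len))  # 1, then full ranges for the rest
  do.call("[", c(list(x), ix, list(drop = TRUE)))
}
z <- array(1:24, dim = 2:4)
first_slice(z)   # identical to z[1, , ]
```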









Re: [R] Setting default graphics device options

2013-05-26 Thread Henrik Bengtsson
Sorry, forgot to add: Hi, [a somewhat different approach but] see the
R.devices package /Henrik


Re: [R] When creating a data frame with data.frame() transforms integers into factors

2013-05-26 Thread António Brito Camacho
Hello Bert.

I didn't reply to the list because i forgot. I hit reply instead of reply 
all

Thanks for your example.
I understood now that i was trying to do something that didn't make sense and 
that was why it failed.
I should have used a histogram to graph the frequency of each number 
of 'posts' instead of going the convoluted way around and trying to do a 
scatterplot.
I now understand that table() transforms each value of the variable into a 
factor and counts how many times it shows up. It makes sense that these 
factors are then transformed into character when in the data frame, because 
they are not a quantity, but the representation of the number.

Thanks for the help. Problem solved.

António Brito Camacho


No dia 26/05/2013, às 15:00, Bert Gunter gunter.ber...@gene.com escreveu:

 1. Please always cc. the list; do not reply just to me.
 
 2.  OK, I see. I ERRED. Had you cc'ed the list, someone might have
 pointed this out. The correct example reproduces what you saw.
 
 z <- sample(1:10,30,rep=TRUE)
 table(z)
 w <- data.frame(table(z))
 w
 
      z Freq
  1   1    2
  2   2    3
  3   3    1
  4   4    3
  5   5    5
  6   6    3
  7   7    5
  8   8    4
  9   9    1
  10 10    3
 
 sapply(w,class)
       z    Freq
  factor integer
 
 This is exactly what is expected and documented.  See ?table. So the
 question is: What do you expect?  table() produces an array whose
 cross-classifying factors are the dimensions. data.frame converts this
 into a data frame. Perhaps the following will help clarify:
 
 z <- data.frame(fac1= sample(LETTERS[1:3],10,rep=TRUE),
  fac2 = sample(c("j","k"),10,rep=TRUE))
 z
     fac1 fac2
  1     A    k
  2     B    k
  3     C    k
  4     C    k
  5     B    k
  6     C    k
  7     C    k
  8     A    j
  9     A    j
  10    C    j
 
 table(z)
 
fac2
 fac1 j k
   A 2 1
   B 0 2
   C 1 4
 
 data.frame(table(z))
 
    fac1 fac2 Freq
  1    A    j    2
  2    B    j    0
  3    C    j    1
  4    A    k    1
  5    B    k    2
  6    C    k    4
 
 table(z['fac1'])
 
 A B C
 3 2 5
 
 data.frame(table(z['fac1']))
    Var1 Freq
  1    A    3
  2    B    2
  3    C    5
 
 Cheers,
 Bert
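[Editorial aside, hedged, since this trips people up: if the counted values are needed as numbers again (e.g. for the scatterplot the OP wanted), the factor column from data.frame(table(...)) must go through as.character() first; as.numeric() alone returns the factor's internal codes, not the original values.]

```r
# Converting the factor column of data.frame(table(...)) back to numbers.
z <- sample(1:10, 30, replace = TRUE)
w <- data.frame(table(z))
w$z <- as.numeric(as.character(w$z))  # NOT as.numeric(w$z): that gives codes
sapply(w, class)                      # z is numeric again, Freq is integer
# plot(w$z, w$Freq)                   # value vs. frequency scatterplot
```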
 
 On Sat, May 25, 2013 at 6:54 PM, António Camacho toin...@gmail.com wrote:
 Hello Bert
 Thanks for your prompt reply.
 I tried your example and it worked without a problem.
 
 But what i want is to create a data frame from the output of the function
 table(), so in your example i tried sapply(data.frame(tbl),class) and the
 output was z -> factor and Freq -> integer.
 What is happening in the table() function that is transforming the integers
 in z into values with labels ?
 because when i do names(tbl) it returns each value of z as a name
 
 I read the manual for "[" but i didn't understand it completely. I have to
 read the introduction to R more carefully.
 
 I also tried using "[", "[[" and "$" for the extraction of the values from
 the 'posts' column, but the problem persisted.
 
 Like i said, this code was taken from an example in a webpage. I contacted
 the author and he confirmed me that the code worked on his machine, that was
 running R 2.15.1
 Maybe something changed between versions in the data.frame() ??
 
 I really don't understant what I am doing wrong.
 
 António
 
 On 2013/05/26, at 01:44, Bert Gunter wrote:
 
 Huh?
 
 z <- sample(1:10,30,rep=TRUE)
 tbl <- table(z)
 tbl
 
 z
 1 2 3 4 5 6 7 8 9 10
 4 3 2 6 3 3 2 2 2 3
 
 data.frame(z)
 
   z
 1   5
 2   2
 3   4
 4   1
 5   6
 6   4
 7  10
 8   4
 9   3
 10  8
 11 10
 12  4
 13  3
 14  9
 15  2
 16  2
 17  6
 18  1
 19  4
 20  7
 21  9
 22 10
 23  7
 24  5
 25  5
 26  6
 27  8
 28  1
 29  1
 30  4
 
 sapply(data.frame(z),class)
 
   z
 integer
 
 Your error: you used df['posts']  . You should have used df[,'posts'] .
 
 The former is a data frame. The latter is a vector. Read the
 Introduction to R tutorial or ?"[" if you don't understand why.
 
 -- Bert
 
 -- Bert
 
 On Sat, May 25, 2013 at 12:36 PM, António Camacho toin...@gmail.com
 wrote:
 
 Hello
 
 
 I am novice to R and i was learning how to do a scatter plot with R using
 an example from a website.
 
 My setup is iMac with Mac OS X 10.8.3, with R 3.0.1, default install,
 without additional packages loaded
 
 I created a .csv file in vim with  the following content
 userID,user,posts
 1,user1,581
 2,user2,281
 3,user3,196
 4,user4,150
 5,user5,282
 6,user6,184
 7,user7,90
 8,user8,74
 9,user9,45
 10,user10,20
 11,user11,3
 12,user12,1
 13,user13,345
 14,user14,123
 
 i imported the file into R using : ' df <- read.csv('file.csv') '
 to confirm the data types i did : ' sapply(df, class) '
 that returns userID -> integer ; user -> factor ; posts ->
 integer
 then i try to create another data frame with the number of posts and its
 frequencies,
 so i did: 'postFreqCount <- data.frame(table(df['posts']))'
 this gives me the postFreqCount data frame with two columns, one called
 'Var1' that has the number of posts each user did, and another column
 'Freq' with the frequency of each 

Re: [R] Mapping GWR Results in R

2013-05-26 Thread Roger Bivand
Patrick Likongwe patricklikongwe at yahoo.co.uk writes:

 
 Dear Team,
 
 Help me out here. I have managed to run a Geographically Weighted
 Regression in R with all results coming up. The problem now comes in
 mapping the parameter estimates and the t values that are significant in
 the model. My data is like this:

The data appear to have point support, so the output of gwr() will include a
SpatialPointsDataFrame - if you look at summary(gwr.model0$SDF), you will
see that it is a SpatialPointsDataFrame. Consequently, a plot() of the
points will show coloured points. Nothing in the your work suggests that the
GWR results are being calculated for polygons.

Please only use GWR for exploration, never look at any inferential output -
look for spatial patterning in the mapped coefficients that corresponds with
identifiable missing covariates. Then include these, and use non-GWR methods.

Please also use the R-sig-geo list, not this general list.


 
   [[alternative HTML version deleted]]

Do not post HTML!

 




Re: [R] load ff object in a different computer

2013-05-26 Thread Djordje Bajic
i *SOLVED*

Thanks Milan, I have received some feedback externally to the list and
managed to solve the issue.

I saved the document as follows:

x.Arr <- ff(NA, dim=rep(ngen,3), vmode="double")
ffsave(x.Arr, file="x.Arr")
finalizer(x.Arr) <- "delete"

The problem was related to the rootpath argument. As the one in one
computer does not exist in the other, the solution was to set it when
ffloading, so:

ffload(file="/path/to/saved/x.Arr", rootpath =
"/path/on/your/other/computer/where/to/extract/x.Arr")

Cheers,

Djordje
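[Editorial aside, a hedged end-to-end sketch of the fix described above; the paths and the array size are illustrative, and this assumes the ff package is installed.]

```r
library(ff)

# Machine A: create and save the array. ffsave() writes two files,
# x.Arr.ffData (a zip of the ff backing files) and x.Arr.RData (the index).
ngen <- 10                                            # illustrative size
x.Arr <- ff(NA, dim = rep(ngen, 3), vmode = "double")
ffsave(x.Arr, file = "/tmp/x.Arr")

# Machine B: the rootpath recorded at save time does not exist here, so
# override it at load time and the backing files are extracted there.
ffload(file = "/tmp/x.Arr", rootpath = "/tmp/ff-extract")
```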



2013/5/26 Milan Bouchet-Valat nalimi...@club.fr

 On Sunday 26 May 2013 at 13:53 +0200, Djordje Bajic wrote:
  Hi all,
 
  I am having trouble loading a ff object previously saved in a different
  computer. I have both files .ffData and .RData, and the first of them is
  13Mb large from which I know the data is therein. But when I try to
 ffload
  it,
 
  checkdir error:  cannot create /home/_myUser_
   Permission denied
   unable to process
  home/_myUser_/Rtempdir/ff1a831d500b8d.ff.
 
  and some warnings.  In the original computer, this temporary file is
  deleted each time I exit R, and my expectation is that data is actually
  stored in the .ffData and .RData that I have here. But maybe I don't
 realy
  understand the underpinnings of ff. In thois computer, my username is
  different, so user xyz does not exist. I tried changing the option
  Rtempdir but no luck.
 
  In addition, when I open the .ffData file to see what is inside, there is
   only a path to the ff1a831... temporary file. As information about the ff
   package on the internet is rather scarce, could anyone please help me to
   understand this, and possibly recover my data if it is possible?
 Please tell us exactly how you saved that ff object. You should try to
 reproduce the problem with very simple data you post in your message
 using dput(), and provide us with all the code and the errors it
 triggers.


 Regards

  Thank you!
 
  Djordje
 





Re: [R] Boundaries of consecutive integers

2013-05-26 Thread Steve Taylor
How's this:

big.gap = diff(test) > 1
cbind(test[c(TRUE, big.gap)], test[c(big.gap, TRUE)])
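[Editorial aside, a worked check on Lizzy's toy vector: the two TRUE gaps in diff(test) fall after 5 and after 29, so the offset logical masks pick out run starts and run ends.]

```r
# Start of each run: elements preceded by a gap (or the first element);
# end of each run: elements followed by a gap (or the last element).
test <- c(1:5, 22:29, 33:40)
big.gap <- diff(test) > 1
bounds <- cbind(test[c(TRUE, big.gap)], test[c(big.gap, TRUE)])
bounds
#      [,1] [,2]
# [1,]    1    5
# [2,]   22   29
# [3,]   33   40
```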


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Lizzy Wilbanks
Sent: Tuesday, 14 May 2013 1:18p
To: r-help@r-project.org
Subject: [R] Boundaries of consecutive integers

Hi folks,

I'm trying to accomplish something that seems like it should be 
straightforward, but I've gotten tied in knots trying to figure it out.  A toy 
example of my issue is below.  I've played with diff and can't seem to figure 
out a systematic solution that will give me the two column output independent 
of the number of breakpoints in the vector...

test <- c(1:5, 22:29,33:40)
example.output <- matrix(c(1,5,22,29,33,40),nrow=3,ncol=2,byrow=TRUE)


Any ideas?


Thanks!
Lizzy

--
The obvious goal of any bacterium is to become bacteria.

Lizzy Wilbanks
Graduate Student, Eisen and Facciotti Labs UC Davis, Microbiology Graduate Group
