Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-01 Thread Gabor Grothendieck
On 6/1/05, Paul Murrell <[EMAIL PROTECTED]> wrote:
> Hi
> 
> 
> Gabor Grothendieck wrote:
> > [moved from r-help to r-devel]
> >
> > On 5/31/05, Paul Murrell <[EMAIL PROTECTED]> wrote:
> >
> >
> >>>   # mm.row[j] gives the row in the layout of the jth cell
> >>>   # mm.col[j] gives the col in the layout of the jth cell
> >>>   mm <- matrix(seq(nr*nc), nr, nc)
> >>>   mm.row <- c(row(mm))
> >>>   mm.col <- c(col(mm))
> >>>
> >>>  # go to next cell in the array
> >>>   j <- j + 1 # increment position
> >>>  pushViewport(viewport(layout.pos.row = mm.row[j], layout.pos.col = mm.col[j]))
> >>>
> >>>Is that how to do it or is there some layout/mfcol-like way?
> >>
> >>
> >>That is how to do it.
> >>
> >>As far as grid is concerned, all viewports are equal and grid has no
> >>idea whether a viewport corresponds to a "plot region" or a "margin" or
> >>whatever, so grid has no concept of which viewport is the "next" one to use.
> >>
> >
> >
> > OK. Thanks.  One suggestion.  Maybe the cells in a layout could have
> > an order to them and there could be an optional argument that takes a linear
> > index directly allowing easy linear traversals:
> >
> > for(i in seq(nr*nc)) {
> >pushViewport(viewport(i)) # might need different syntax here
> >xyplot(seq(i) ~ seq(i))
> >popViewport()
> > }
> 
> 
> I think this sort of thing can easily be built on top rather than into
> the existing system.  For example, here's a function that pushes all of
> the basic cells in a layout using a simple naming convention:
> 
> layoutVPname <- function(i, j) {
>   paste("layoutViewport", i, ",", j, sep="")
> }
> 
> layoutVPpath <- function(i, j, name="layout") {
>   vpPath(name, layoutVPname(i, j))
> }
> 
> pushLayout <- function(nr, nc, name="layout") {
>   pushViewport(viewport(layout=grid.layout(nr, nc),
> name=name))
>   for (i in 1:nr) {
> for (j in 1:nc) {
>   pushViewport(viewport(layout.pos.row=i,
> layout.pos.col=j,
> name=layoutVPname(i, j)))
>   upViewport()
> }
>   }
>   upViewport()
> }
> 
> And here's a use of the function to push lots of layout cells, then draw
> lattice plots in different cells using downViewport() to go to the cell
> with the appropriate name.  In this case, we use cells by column, but
> simply reverse the order of the loops to use cells by row.
> 
> pushLayout(2, 3)
> for (i in 1:2) {
>   for (j in 1:3){
> depth <- downViewport(layoutVPpath(i, j))
> print(xyplot(seq(i*j) ~ seq(i*j)), newpage=FALSE)
> upViewport(depth)
>   }
> }
> 
> 
> > and taking it one further perhaps 'with' could have a viewport method
> > that automatically pushes the viewport on entry and pops or moves
> > up one level on exit reducing the above to:
> >
> > for(i in seq(nr*nc)) with(viewport(i), xyplot(seq(i) ~ seq(i)))
> 
> 
> The raw grid functions have a 'vp' argument for this purpose.  It would
> be nice if lattice functions had something similar (or maybe just
> print.trellis).  Here's your example using the 'vp' argument to
> grid.text() (and using the layout that was pushed above) ...
> 
> for (i in 1:2) {
>   for (j in 1:3){
> grid.text(i*j, vp=layoutVPpath(i, j))
>   }
> }

Thanks again.  I'll try modifying your example to fit my specific
application (which requires a linear column-wise traversal ending 
at the nth cell where n may be less than the number of cells in the
matrix).
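For what it's worth, the traversal I have in mind can be sketched with a small index-mapping helper built on top of your layoutVPpath() above (hypothetical helper name, untested against any particular grid version):

```r
## A sketch: map a linear column-wise index k to its (row, col)
## position in an nr x nc layout, so cells can be visited in order.
linearToCell <- function(k, nr) {
  list(row = (k - 1) %% nr + 1, col = (k - 1) %/% nr + 1)
}

## Traverse only the first n cells, where n may be < nr * nc:
## for (k in seq(n)) {
##   cell <- linearToCell(k, nr)
##   depth <- downViewport(layoutVPpath(cell$row, cell$col))
##   print(xyplot(seq(k) ~ seq(k)), newpage = FALSE)
##   upViewport(depth)
## }
```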

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Re: [R] Problem going back to a viewport with gridBase

2005-06-01 Thread Paul Murrell

Hi


Gabor Grothendieck wrote:

[moved from r-help to r-devel]

On 5/31/05, Paul Murrell <[EMAIL PROTECTED]> wrote:



  # mm.row[j] gives the row in the layout of the jth cell
  # mm.col[j] gives the col in the layout of the jth cell
  mm <- matrix(seq(nr*nc), nr, nc)
  mm.row <- c(row(mm))
  mm.col <- c(col(mm))

 # go to next cell in the array
  j <- j + 1 # increment position
 pushViewport(viewport(layout.pos.row = mm.row[j], layout.pos.col = mm.col[j]))

Is that how to do it or is there some layout/mfcol-like way?



That is how to do it.

As far as grid is concerned, all viewports are equal and grid has no
idea whether a viewport corresponds to a "plot region" or a "margin" or
whatever, so grid has no concept of which viewport is the "next" one to use.




OK. Thanks.  One suggestion.  Maybe the cells in a layout could have
an order to them and there could be an optional argument that takes a linear
index directly allowing easy linear traversals:

for(i in seq(nr*nc)) {
   pushViewport(viewport(i)) # might need different syntax here
   xyplot(seq(i) ~ seq(i))
   popViewport()
}



I think this sort of thing can easily be built on top rather than into 
the existing system.  For example, here's a function that pushes all of 
the basic cells in a layout using a simple naming convention:


layoutVPname <- function(i, j) {
  paste("layoutViewport", i, ",", j, sep="")
}

layoutVPpath <- function(i, j, name="layout") {
  vpPath(name, layoutVPname(i, j))
}

pushLayout <- function(nr, nc, name="layout") {
  pushViewport(viewport(layout=grid.layout(nr, nc),
name=name))
  for (i in 1:nr) {
for (j in 1:nc) {
  pushViewport(viewport(layout.pos.row=i,
layout.pos.col=j,
name=layoutVPname(i, j)))
  upViewport()
}
  }
  upViewport()
}

And here's a use of the function to push lots of layout cells, then draw 
lattice plots in different cells using downViewport() to go to the cell 
with the appropriate name.  In this case, we use cells by column, but 
simply reverse the order of the loops to use cells by row.


pushLayout(2, 3)
for (i in 1:2) {
  for (j in 1:3){
depth <- downViewport(layoutVPpath(i, j))
print(xyplot(seq(i*j) ~ seq(i*j)), newpage=FALSE)
upViewport(depth)
  }
}


and taking it one further perhaps 'with' could have a viewport method 
that automatically pushes the viewport on entry and pops or moves

up one level on exit reducing the above to:

for(i in seq(nr*nc)) with(viewport(i), xyplot(seq(i) ~ seq(i)))



The raw grid functions have a 'vp' argument for this purpose.  It would 
be nice if lattice functions had something similar (or maybe just 
print.trellis).  Here's your example using the 'vp' argument to 
grid.text() (and using the layout that was pushed above) ...


for (i in 1:2) {
  for (j in 1:3){
grid.text(i*j, vp=layoutVPpath(i, j))
  }
}

Paul
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/



[Rd] Floating point problem (PR#7911)

2005-06-01 Thread ahanlin




--please do not edit the information below--

Version:
 platform = i686-redhat-linux-gnu
 arch = i686
 os = linux-gnu
 system = i686, linux-gnu
 status = 
 major = 2
 minor = 0.0
 year = 2004
 month = 10
 day = 04
 language = R

Search Path:
 .GlobalEnv, package:methods, package:stats, package:graphics, 
package:grDevices, package:utils, package:datasets, Autoloads, package:base



[Rd] format.data.frame (was: [R] sink() within a function?)

2005-06-01 Thread Duncan Murdoch

Jon Stearley wrote:

On Jun 1, 2005, at 11:22 AM, Duncan Murdoch wrote:


These functions convert their first argument to a vector (or
array) of character strings which have a common format (as is done
by 'print'), fulfilling 'length(format*(x, *)) == length(x)'.  The
trimming with 'trim = TRUE' is useful when the strings are to be
used for plot 'axis' annotation.



i saw this but
   class(x)# [1] "data.frame"
   y<-format(x)
   class(y)# [1] "data.frame"
confused me, let alone y<-as.character(format(x)).  i'm still an R 
newbie...




I'll try to make it clearer.




I think you've got a right to be confused, newbie or not.
format.data.frame() doesn't seem to follow the documentation, either 
before or after my change to the docs.  The result of format(x) is not a 
vector or array or even a data.frame of character strings; it's a 
data.frame of factors.


I'm not sure this is a reasonable thing to do.  Does anyone else have
an opinion on this?

My initial feeling is that format() on a data.frame should return a 
data.frame of character vectors; it shouldn't convert them to factors. 
One should be able to expect that format(x)[1,1] gives a character 
value, rather than the underlying factor encoding as it does in this 
example:


> x <- data.frame(a=rnorm(5), b=rnorm(5))
> y <- format(x, digits=3)
> y
   a  b
1 -1.007 -0.525
2 -0.570  1.128
3  0.162  1.729
4 -1.642 -0.485
5  0.381  0.621
> cat(y[1,1],"\n")
2
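Until that changes, a workaround sketch (using the x from the example above; converting each factor column of the result back to character):

```r
## format() here returns a data frame of factors, so convert each
## column back to character to get the formatted strings themselves.
x <- data.frame(a = rnorm(5), b = rnorm(5))
y <- format(x, digits = 3)
y[] <- lapply(y, as.character)
cat(y[1, 1], "\n")  # prints the formatted value, not the factor code
```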

Duncan Murdoch



Re: [Rd] 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Gabor Grothendieck
On 6/1/05, Simon Urbanek <[EMAIL PROTECTED]> wrote:
> On Jun 1, 2005, at 5:50 AM, (Ted Harding) wrote:
> 
> > However, a query: Clearly from the above (which I can reproduce
> > too), tan() can distinguish between -0 and +0, and return different
> > results (otherwise 1/tan() would not return different results).
> >
> > But how can the user tell the difference between +0 and -0?
> 
> That's indeed a good question - by definition (-0)==(+0) is true,
> -0<0 is false and signum of both -0 and 0 is 0.
> 
> I don't see an obvious way of distinguishing them at R level. Besides
> computational ways (like the 1/tan trick) the only (very ugly) way
> coming to my mind is something like:
> a==0 && substr(sprintf("%f",a),1,1)=="-"
> Note that print doesn't display the sign, only printf does.

On my XP machine running R 2.1.0 patched 2005-05-14

> sprintf("%f",-0)
[1] "0.00"

does not print the sign.

However, the tan trick can be done without tan, using just division:

R> sign0 <- function(x) if (x != 0) stop("x not zero") else sign(1/x)
R> sign0(0)
[1] 1
R> sign0(-0)
[1] -1



Re: [Rd] 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Simon Urbanek

On Jun 1, 2005, at 5:50 AM, (Ted Harding) wrote:

However, a query: Clearly from the above (which I can reproduce  
too), tan() can distinguish between -0 and +0, and return different  
results (otherwise 1/tan() would not return different results).


But how can the user tell the difference between +0 and -0?


That's indeed a good question - by definition (-0)==(+0) is true,  
-0<0 is false and signum of both -0 and 0 is 0.


I don't see an obvious way of distinguishing them at R level. Besides  
computational ways (like the 1/tan trick) the only (very ugly) way  
coming to my mind is something like:

a==0 && substr(sprintf("%f",a),1,1)=="-"
Note that print doesn't display the sign, only printf does.

At C level it's better - you can use the signbit() function/macro  
there. Any other ideas?


Cheers,
Simon



[Rd] anova.mlm again

2005-06-01 Thread Bela Bauer
[hope this doesn't get posted twice, my first mail bounced]

Hi,

again, this is about the call bug in anova.mlm. I originally reported it 
as PR#7898 and suggested a fix in PR#7904. (Neither of these messages was 
forwarded to R-devel; instead I received a bounce telling me that my 
provider's SPF settings are incorrect. They are correct, though, so there 
seems to be a problem with the forwarder.)

The fix suggested in PR#7904 doesn't resolve all problems; it fixes the 
call for the fit objects, but the same problem occurs for idata. Hence, 
an additional change needs to be made to anova.mlm. I basically 
changed anova.mlm to the following:
#...
if (length(list(object, ...)) > 1) {
    cl <- match.call()
    cl[[1]] <- as.name("anova.mlmlist")
    cl[[2]] <- list(object, ...)
    if (!is.null(cl$idata)) {
        cl$idata <- idata
    }
    return(eval(cl))
}
# ...
Additionally, anova.mlmlist has to be changed to accept a list as its 
argument. I've attached a patch of my current code against R-2.1.0.

I would suggest PR#7904 to be deleted; I meant it to be a comment on 
PR#7898, but by mistake added an extra ":" to the subject line.

Regards,

Bela

-- 
Bela Bauer - [EMAIL PROTECTED]
PGP 0x97529F5C
http://www.aitrob.de


[Rd] Re: 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Morten Welinder
> For the real problem in the R source (in C), it's simple
> to work around the fact that
> qcauchy(0, log=TRUE)
> with Morten's code proposal gives -Inf instead of +Inf.

Ouch.  Good catch.

Here is what happened: I reduced 1-exp(x) to -expm1(x), which is actually wrong
for x=0 because the results are differently signed zeros.

I should have reduced 1-exp(x) to 0-expm1(x).

In light of this, any use of R_DT_qIv and R_DT_CIv might have the same problem.
(But, quite frankly, all uses of R_DT_qIv should be eliminated anyway.
Only qnorm seems to be using it without killing the right-tail and the log cases.)

Morten



RE: [Rd] 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Ted Harding
On 01-Jun-05 Martin Maechler wrote:
> Testing the code that Morten Welinder suggested for improving
> the extreme tail behavior of qcauchy(),
> I found what you can read in the subject:
> namely that, with the tan() + floating-point implementation on all
> four different versions of Redhat Linux I have access to (on
> i686 and amd64 architectures),
> 
> > 1/tan(c(-0,0))
> gives
> -Inf  Inf
> 
> and of course, that can well be considered a feature, since
> after all, the tan() function does jump from -Inf to +Inf at 0. 
> I was still surprised that this even happens on the R level,
> and I wonder if this distinction of "-0" and "0" shouldn't be
> mentioned in some place(s) of the R documentation.

Indeed I would myself consider this a very useful feature!

However, a query: Clearly from the above (which I can reproduce
too), tan() can distinguish between -0 and +0, and return
different results (otherwise 1/tan() would not return different
results).

But how can the user tell the difference between +0 and -0?
I've tried the following:

  > sign(c(-0,0))
  [1] 0 0
  > sign(tan(c(-0,0)))
  [1] 0 0
  > sign(1/tan(c(-0,0)))
  [1] -1  1

so sign() is not going to tell us. Is there a function which can?

Short of writing one's own:

  sign0 <-
function(x){
  if(abs(x)>0) stop("For this test x must be +0 or -0")
  return(sign(1/tan(x)))
}

;)

Best wishes,
Ted.



E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 01-Jun-05   Time: 10:50:06
-- XFMail --



[Rd] 1/tan(-0) != 1/tan(0)

2005-06-01 Thread Martin Maechler
Testing the code that Morten Welinder suggested for improving
the extreme tail behavior of qcauchy(),
I found what you can read in the subject:
namely that, with the tan() + floating-point implementation on all
four different versions of Redhat Linux I have access to (on
i686 and amd64 architectures),

> 1/tan(c(-0,0))
gives
-Inf  Inf

and of course, that can well be considered a feature, since
after all, the tan() function does jump from -Inf to +Inf at 0. 
I was still surprised that this even happens on the R level,
and I wonder if this distinction of "-0" and "0" shouldn't be
mentioned in some place(s) of the R documentation.

For the real problem in the R source (in C), it's simple
to work around the fact that
qcauchy(0, log=TRUE)
with Morten's code proposal gives -Inf instead of +Inf.

Martin


> "MM" == Martin Maechler <[EMAIL PROTECTED]>
> on Wed,  1 Jun 2005 08:57:18 +0200 (CEST) writes:

> "Morten" == Morten Welinder <[EMAIL PROTECTED]>
> on Fri, 27 May 2005 20:24:36 +0200 (CEST) writes:

  .

Morten> Now that pcauchy has been fixed, it is becoming
Morten> clear that qcauchy suffers from the same problems.

Morten> 
Morten> qcauchy(pcauchy(1e100,0,1,FALSE,TRUE),0,1,FALSE,TRUE)

Morten> should yield 1e100 back, but I get 1.633178e+16.
Morten> The code below does much better.  Notes:

Morten> 1. p need not be finite.  -Inf is ok in the log_p
Morten> case and R_Q_P01_check already checks things.

MM> yes

Morten> 2. No need to disallow scale=0 and infinite
Morten> location.

MM> yes

Morten> 3. The code below uses isnan and finite directly.
Morten> It needs to be adapted to the R way of doing that.

MM> I've done this, and started testing the new code; a version will
MM> be put into the next version of R.

MM> Thank you for the suggestions.

>>> double
>>> qcauchy (double p, double location, double scale, int lower_tail, int log_p)
>>> {
>>> if (isnan(p) || isnan(location) || isnan(scale))
>>> return p + location + scale;

>>> R_Q_P01_check(p);
>>> if (scale < 0 || !finite(scale)) ML_ERR_return_NAN;

>>> if (log_p) {
>>> if (p > -1)
>>> lower_tail = !lower_tail, p = -expm1 (p);
>>> else
>>> p = exp (p);
>>> }
>>> if (lower_tail) scale = -scale;
>>> return location + scale / tan(M_PI * p);
>>> }



Re: [Rd] qcauchy accuracy (PR#7902)

2005-06-01 Thread maechler
> "Morten" == Morten Welinder <[EMAIL PROTECTED]>
> on Fri, 27 May 2005 20:24:36 +0200 (CEST) writes:

Morten> Full_Name: Morten Welinder Version: 2.1.0 OS: src
Morten> only Submission from: (NULL) (216.223.241.212)


Morten> Now that pcauchy has been fixed, it is becoming
Morten> clear that qcauchy suffers from the same problems.

Morten>
Morten> qcauchy(pcauchy(1e100,0,1,FALSE,TRUE),0,1,FALSE,TRUE)

Morten> should yield 1e100 back, but I get 1.633178e+16.
Morten> The code below does much better.  Notes:

Morten> 1. p need not be finite.  -Inf is ok in the log_p
Morten> case and R_Q_P01_check already checks things.

yes

Morten> 2. No need to disallow scale=0 and infinite
Morten> location.

yes

Morten> 3. The code below uses isnan and finite directly.
Morten> It needs to be adapted to the R way of doing that.

I've done this, and started testing the new code; a version will
be put into the next version of R.

Thank you for the suggestions.

   >> double
   >> qcauchy (double p, double location, double scale, int lower_tail, int log_p)
   >> {
   >>   if (isnan(p) || isnan(location) || isnan(scale))
   >> return p + location + scale;

   >>   R_Q_P01_check(p);
   >>   if (scale < 0 || !finite(scale)) ML_ERR_return_NAN;

   >>   if (log_p) {
   >> if (p > -1)
   >>lower_tail = !lower_tail, p = -expm1 (p);
   >> else
   >>p = exp (p);
   >>   }
   >>   if (lower_tail) scale = -scale;
   >>   return location + scale / tan(M_PI * p);
   >> }
