Re: [Rd] Cost of method dispatching: was: when can we expect Prof Tierney's compiled R?

2005-05-04 Thread Luke Tierney
On Wed, 4 May 2005, Duncan Murdoch wrote:
Vadim Ogranovich wrote:

-Original Message-
From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: Wednesday, 
April 27, 2005 1:13 AM
To: Vadim Ogranovich
Cc: Luke Tierney; r-devel@stat.math.ethz.ch
Subject: Re: [Rd] RE: [R] when can we expect Prof Tierney's compiled R?

On Tue, 26 Apr 2005, Vadim Ogranovich wrote:
...
The arithmetic shows that x[i]<- is still the bottleneck. I suspect that
this is due to a very involved dispatching/search for the appropriate
function on the C level. There might be significant gain if loops somehow
cached the result of the initial dispatching. This is what you probably
referred to as additional improvements in the engine itself.
I'd be surprised if dispatching were the issue: have you (C-level) 
profiled to find out?  Please do so: these statements do tend to get 
perpetuated as fact.

For the record, I didn't profile the dispatching, so it is only my guess
and is not verified by C-level profiling.
The guess is based on reading the code and on the following timing at the
R level:
n = 1e6; iA = seq(2,n); x = double(n);
f1 <- function(x, iA) for (i in iA) x[i] = c(1.0)
f2 <- function(x, iA) for (i in iA) x = c(1.0)
last.gc.time = gc.time(TRUE)
system.time(f1(x, iA), gcFirst=TRUE)
[1] 3.50 0.01 3.52 0.00 0.00
print(gc.time() - last.gc.time); last.gc.time = gc.time()
[1] 1.25 0.82 1.24 0.00 0.00
system.time(f2(x, iA), gcFirst=TRUE)
[1] 0.76 0.00 0.77 0.00 0.00
print(gc.time() - last.gc.time); last.gc.time = gc.time()
[1] 0.25 0.18 0.23 0.00 0.00
f1 and f2 are identical except that the first assigns to an element in
the vector (and thus goes through the method dispatching).
Originally I had thought that the number of allocations in f1 and in f2
must be the same, the c(1.0) call. But gc.time() shows that the number
of allocations in f1 is indeed, as Prof. Ripley suggests, bigger than in
f2. It is not clear to me where these extra allocations come from and
whether they are necessary. All x[i] = c(1.0) needs to do is to create a
new vector c(1.0), which is a step common between f1 and f2, and then
copy from the vector into x[i].
However, even after discounting the gc.time, the assignment to x[i] seems
heavy.
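
For reference, the documented replacement-function semantics explain where
the extra work comes from: x[i] <- v is defined in terms of the replacement
function `[<-`, so every iteration involves a full function call (this is
the behavior described in the R Language Definition, not new analysis of
the internals):

    ## x[i] <- c(1.0) behaves as if written:
    `*tmp*` <- x
    x <- `[<-`(`*tmp*`, i, value = c(1.0))
    rm(`*tmp*`)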

You cannot cache the result, as [<- can change the class of x, as could
other operations done by the rhs (e.g. if it were x[i] <- g(x, i) the
function g could change its argument).

Yes, however R may try to use the last method found and only when that
fails go for the full dispatch. This should give a lot of gain in the
typical case where the variables' types do not change.
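
A minimal sketch of that idea, assuming a hypothetical full_dispatch() that
performs the expensive method search (an illustration only, not R's actual
dispatch code):

    dispatch_cached <- local({
        last_class  <- NULL
        last_method <- NULL
        function(x, ...) {
            cls <- class(x)[1L]
            if (!identical(cls, last_class)) {
                last_method <<- full_dispatch(cls)  # slow path: full search
                last_class  <<- cls
            }
            last_method(x, ...)                     # fast path: cached method
        }
    })
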
There are probably efficiency improvements available, but they need to be 
done very carefully.  For example, the default method of [<- could be called 
in one step, and as a side effect create a more specific method.  So for the 
second call we should call the more specific one, but the default call will 
still be valid in the sense that the arguments match the signature (S4) or 
the class matches the name (S3), but not in the sense that it is the method 
that should be called.

Duncan Murdoch
Let's slow down here.  In
function(x, iA) for (i in iA) x[i] = c(1.0)
there are three functions in the body, [<-, [, and c.  All are
.Primitives with internal C implementations for which methods can be
written.  These implementations all look roughly like this:
if (method is available)
call the method
else { C default code }
The test of whether methods are available first looks at a bit.  If
that bit is not set there are guaranteed not to be any methods.  Only
if that bit is set does any further search for methods happen.  In
this example, and in most uses of these functions, that bit is not
set, so dispatch involves testing one bit.  This most important case
has already been optimized.  Further optimizing cases where methods
might be available might be worth doing and will happen over time as
we learn what is necessary and feasible and where bottlenecks are.
But this sort of thing has to be done with care.  Here, however, this
just is not an issue.
The default code for [<- might well merit another look--I suspect it
has not been as heavily optimized as the default code for [.  How much
difference this will make to realistic code, and whether the cost of
implementation/maintenance is worthwhile, is not clear.
On additional allocations: that is the function call mechanism of the
interpreter.  This could be done differently, but given the semantics
of argument matching and ... arguments there is a limit on what an
interpreter can realistically do.  Again, whether the cost of making
changes to the function call mechanism in terms of both implementation
cost and maintenance cost is justified is not clear.  Some of us are
likely to look at the function call overhead sometime this summer; I
wouldn't want to bet on the outcome though.
luke
--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University

[Rd] RE: [R] when can we expect Prof Tierney's compiled R?

2005-04-26 Thread Luke Tierney
For what it's worth (probably not much as these simple benchmarks are
rarely representative of real code and so need to be taken with a huge
chunk of salt) here is what happens with your examples in R 2.1.0 with
the current byte compiler.
Define your examples as functions:
n = 1e6; iA = seq(2,n); x = double(n);
f1 <- function(x, iA) for (i in iA) x[i] = x[i-1]
f2 <- function(x, iA) for (i in iA) x[i-1]
f3 <- function(x, iA) for (i in iA) 1
f4 <- function(x, iA) for (i in iA) x[i] = 1.0
f5 <- function(x, iA) for (i in iA) i-1
f6 <- function(x, iA) for (i in iA) i
Make byte compiled versions:
f1c <- cmpfun(f1)
f2c <- cmpfun(f2)
f3c <- cmpfun(f3)
f4c <- cmpfun(f4)
f5c <- cmpfun(f5)
f6c <- cmpfun(f6)
and run them:
> system.time(f1(x, iA))
[1] 5.43 0.04 5.56 0.00 0.00
> system.time(f1c(x, iA))
[1] 1.77 0.03 1.81 0.00 0.00
> system.time(f2(x, iA))
[1] 1.72 0.01 1.74 0.00 0.00
> system.time(f2c(x, iA))
[1] 0.63 0.00 0.63 0.00 0.00
> system.time(f3(x, iA))
[1] 0.19 0.00 0.20 0.00 0.00
> system.time(f3c(x, iA))
[1] 0.14 0.00 0.15 0.00 0.00
> system.time(f4(x, iA))
[1] 3.78 0.03 3.82 0.00 0.00
> system.time(f4c(x, iA))
[1] 1.26 0.02 1.30 0.00 0.00
> system.time(f5(x, iA))
[1] 0.99 0.00 1.00 0.00 0.00
> system.time(f5c(x, iA))
[1] 0.30 0.00 0.31 0.00 0.00
> system.time(f6(x, iA))
[1] 0.21 0.00 0.23 0.00 0.00
> system.time(f6c(x, iA))
[1] 0.17 0.00 0.20 0.00 0.00
I'll let you do the arithmetic.  The byte compiler does get rid of a
fair bit of interpreter overhead (which is large in these kinds of
examples compared to most real code) but there is still considerable
room for improvement.  The byte code engine currently uses the same
internal C code for doing the actual work as the interpreter, so
improvements there would help both interpreted and byte compiled code.
Best,
luke
On Fri, 22 Apr 2005, Vadim Ogranovich wrote:
If we are on the subject of byte compilation, let me bring a couple of
examples which have been puzzling me for some time. I'd like to know a)
if compilation is likely to improve the performance for this type
of computation, and b) at least roughly understand the reasons for the
observed numbers, specifically why x[i]<- assignment is so much slower
than x[i] extraction.
The loops below are typical in any recursive calculation where the new
value of a vector is based on its immediate neighbor say to the left.
Specifically we assign the previous value to the current element.
# this shows that the assignment x[i]<- is the bottleneck in the loop
n = 1e6; iA = seq(2,n); x = double(n)
system.time(for (i in iA) x[i] = x[i-1])
[1] 4.29 0.00 4.30 0.00 0.00
n = 1e6; iA = seq(2,n); x = double(n)
system.time(for (i in iA) x[i-1])
[1] 1.46 0.01 1.46 0.00 0.00
# the overhead of the loop itself is reasonably low, just 0.17 sec
n = 1e6; iA = seq(2,n); x = double(n); system.time(for (i in iA) 1)
[1] 0.17 0.01 0.17 0.00 0.00
# pure assignment (w/o the extraction x[i]) takes 3.09 sec. Thus x[i] as
# extraction is (3.09 - 0.17)/(0.79 - 0.17) = 4.7 times faster than x[i]<-
# as assignment. This looks a bit odd.
n = 1e6; iA = seq(2,n); x = double(n)
system.time(for (i in iA) x[i] = 1.0)
[1] 3.08 0.00 3.09 0.00 0.00
# this shows that just evaluation of (i-1) takes about (0.79 - 0.24) =
0.55 sec on my machine (AMD 64 bit). Looks too slow.
n = 1e6; iA = seq(2,n); x = double(n); system.time(for (i in iA) i-1)
[1] 0.79 0.00 0.79 0.00 0.00
n = 1e6; iA = seq(2,n); x = double(n); system.time(for (i in iA) i)
[1] 0.24 0.01 0.24 0.00 0.00
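
Aside: as written, the first loop is not a genuine recursion--each x[i]
copies the already-updated x[i-1], so the whole loop just propagates x[1]
and is equivalent to the vectorized x[2:n] <- x[1]. A true linear recursion
has no such shortcut in general, though the linear case is covered by
filter() (a and e assumed defined):

    x[2:n] <- x[1]                             # same result as the first loop
    x2 <- filter(e, a, method = "recursive")   # x2[i] = e[i] + a*x2[i-1]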
Thanks,
Vadim
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Luke Tierney
Sent: Friday, April 22, 2005 7:33 AM
To: Peter Dalgaard
Cc: Jason Liao; r-help@stat.math.ethz.ch
Subject: Re: [R] when can we expect Prof Tierney's compiled R?
On Wed, 20 Apr 2005, Peter Dalgaard wrote:
Luke Tierney [EMAIL PROTECTED] writes:
Vectorized operations in R are also as fast as compiled C (because
that is what they are :-)).  A compiler such as the one
I'm working
on will be able to make most difference for
non-vectorizable or not
very vectorizable code.  It may also be able to reduce the
need for
intermediate allocations in vectorizable code, which may
have other
benefits beyond just speed improvements.
Actually, it has struck me a couple of times that these
operations are
not as fast as they could be, since they are outside the
scope of fast
BLAS routines, but embarrassingly parallel code could easily be
written for the relevant hardware. Even on uniprocessor
systems there
might be speedups that the C compiler cannot find (e.g. because it
cannot assume that source and destination of the operation are
distinct).
My guess is that for anything beyond basic operations we are
doing OK on uniprocessors, but it would be useful to do some
testing to be sure.  For the basic operations I suspect we
are paying a heavy price for the way we handle recycling

Re: Write Barrier: was: [Rd] function-like macros undefined

2005-03-16 Thread Luke Tierney
Your original question was about macro-like functions.  INTEGER is
available to internal R code as a macro; it is also available as a
function.  Code in packages that uses standard hearders will see the
function, which is declared as
int *(INTEGER)(SEXP x);
I have no idea why you wanted to check whether INTEGER is a macro or
not.  The value returned is a pointer to the raw int data which you
can (ab)use like any other such pointer.
On Wed, 16 Mar 2005, Vadim Ogranovich wrote:
Hi,
Thank you to Duncan Murdoch for pointing to
http://www.stat.uiowa.edu/~luke/R/barrier.html.
I have a couple of questions in this regard:
* suppose that inside a C function I have a SEXP vector x of integers
and I want to increment each element by one. I understand that
int * xIPtr = INTEGER(x);
int i;
for (i = 0; i < LENGTH(x); ++i)
SET_VECTOR_ELT(x, i, xIPtr[i]+1);
The declaration of SET_VECTOR_ELT is
SEXP (SET_VECTOR_ELT)(SEXP x, int i, SEXP v);
Your compiler had better complain about your third argument.
is the recommended way of doing it. However it seems that only the very
first call to SET_VECTOR_ELT, i.e. the one that corresponds to i=0, is
strictly necessary. For example, and this is my question, the following
should be perfectly safe:
SET_VECTOR_ELT(x, 0, xIPtr[0]);
for (i = 0; i < LENGTH(x); ++i)
++xIPtr[i];

Admittedly this looks a bit odd and breaks if LENGTH(x) is zero, but it
illustrates the point.
* Now, if the above variation is safe, maybe there is a macro that
simply marks atomic SEXPs, e.g. integers and doubles, for modification?
Vectors of non-SEXP objects are not a problem--that is why REAL,
INTEGER, etc are available as functions to access the raw data
pointers.  Only vectors of SEXP's (i.e. generic and character vector
objects) need to go through the write barrier.
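
A sketch of that idiom (increment_all and this exact code are illustrative,
not from the thread): for an integer vector the raw data can be updated in
place through the INTEGER() pointer, with no write barrier involved.

    #include <Rinternals.h>

    SEXP increment_all(SEXP x)
    {
        int i, n = LENGTH(x);
        int *xi = INTEGER(x);      /* raw int data, as described above */
        for (i = 0; i < n; i++)
            xi[i] += 1;            /* plain writes, no SET_VECTOR_ELT needed */
        return x;
    }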
* The Write Barrier document has a section "Changing the
Representation of String Vectors". Is this something which is in the works,
or planned, for future versions? It would be great if it were; this
should give R a considerable speed boost.
This was considered at the time but is not on the table now.
luke
--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


RE: [Rd] delay() has been deprecated for 2.1.0

2005-03-12 Thread Luke Tierney
On Sat, 12 Mar 2005 [EMAIL PROTECTED] wrote:
Thanks-- Luke Tierney's reply below is very helpful. 'makeActiveBinding' is 
brilliant and I'm pretty sure I can make it do just what I want.
One question though: my first experiment with it was this
makeActiveBinding( 'myAB', function( x) if( missing( x)) get( 'myABguts', 
env=.GlobalEnv) else assign( 'myABguts', x, .GlobalEnv), .GlobalEnv)
exists( 'myAB')
which appeared to return absolutely nothing -- not even a missing or a NULL. The problem, of course, is that 'myABguts' doesn't exist yet; what seems to happen, though, is that the failure to 'get' causes a *messageless* error inside the active binding function. Is this intended?
What I get is this:
> makeActiveBinding( 'myAB', function( x)
      if( missing( x))
          get( 'myABguts', env=.GlobalEnv)
      else assign( 'myABguts', x, .GlobalEnv),
      .GlobalEnv)
NULL
Warning message:
saved workspaces with active bindings may not work properly when loaded ...
> exists( 'myAB')
Error in get(x, envir, mode, inherits) : variable "myABguts" was not found
[We might want to get rid of the warning message at this point.]
Currently exists() for active bindings calls the value function even
if mode is "any".  Given that fact and your implementation the message
makes sense to me, and you get the same message if you try to access
the variable:
> myAB
Error in get(x, envir, mode, inherits) : variable "myABguts" was not found
You probably should write your function a bit more defensively.
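
For instance, the binding function could tolerate a missing backing variable
instead of erroring inside the binding (a sketch only, not a recommendation
of this exact design):

    makeActiveBinding('myAB', function(x) {
        if (missing(x)) {
            if (exists('myABguts', envir = .GlobalEnv))
                get('myABguts', envir = .GlobalEnv)
            else NULL                  # tolerate the not-yet-set case
        } else assign('myABguts', x, envir = .GlobalEnv)
    }, .GlobalEnv)
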
For ordinary bindings that contain promises from delayed evaluation
exists() does not evaluate the promise for mode "any", but does for
other modes of course.  The behavior of exists() for active bindings
is a bug.  A quick look at the code suggests that it will be a bit
tricky to fix, so I'm not sure it will get done before 2.1.0.
Best,
luke

I did think that class(env$x) != evalq( class( x), env) was a bit weird, but 
because it was useful to me, I wasn't going to complain. Of course it's entirely 
true that reliance on the undocumented is dangerous and entirely my liability-- but 
even ___documented___ features in R have been known to come & go a bit :) . 
Nowadays I just expect to be buffeted by the winds of change every so often...
Mark
-Original Message-
From: Luke Tierney [mailto:[EMAIL PROTECTED]
Sent: Sat 12/03/2005 3:03 PM
To: Bravington, Mark (CMIS, Hobart)
Cc: [EMAIL PROTECTED]; r-devel@stat.math.ethz.ch
Subject: RE: [Rd] delay() has been deprecated for 2.1.0

On Sat, 12 Mar 2005 [EMAIL PROTECTED] wrote:
 Uh-oh... I've just written a bunch of code for 'mvbutils' using 'delay', and am 
worried by the statement that there should be no way to see a 'promise' object in 
R. At present, it's possible to check whether 'x' is a promise via e.g. 'class( 
.GlobalEnv$x)'. This will be different to 'class( x)' if 'x' is a promise, regardless of whether 
the promise has or has not been forced yet. This can be very useful; my recent code relies on it 
to check whether certain objects have been changed since last being saved. [These certain objects 
are originally assigned as promises to load from individual files. Read-accessing the object keeps 
it as class 'promise', whereas write-access creates a non-promise. Thus I can tell whether the 
individual files need re-saving when the entire workspace is saved.]
Relying on undocumented features when designing a package is not a
good idea.  In this case the feature of env$x returning a promise
contradicts the documentation and is therefore a bug (the
documentation says that env$x should behave like the corresponding
get() expression, which forces promises and returns their values).

 The has-it-changed test has been very valuable to me in allowing fast 
handling of large collections of large objects (which is why I've been adding this 
functionality to 'mvbutils'); and apart from is-it-still-a-promise, I can't think 
of any other R-level way of testing whether an object has been modified. [If there 
is another way, please let me know!]

 Is there any chance of retaining *some* R-level way of checking 
whether an object is a promise, both pre-forcing and post-forcing? (Not 
necessarily via the 'class( env$x)' method, if that's deemed objectionable.)
This would not be a good idea.  The current behavior of leaving an
evaluated promise in place is also not a documented feature as far as
I can see.  It is a convenient way of implementing lazy evaluation in
an interpreter but it has drawbacks.  One is the cost of the extra
dereference.  Another is the fact that these promises keep alive their
environments, which might otherwise be inaccessible and hence available
for garbage collection.

Re: [Rd] How to use Rmpi?

2005-03-11 Thread Luke Tierney
If your computation is simple enough to express in terms of lapply or
other apply calls that you want to have run in parallel then you might
try the 'snow' package on CRAN which can run on top of Rmpi.  Some
places to get more details on that:
http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html
http://www.bepress.com/cgi/viewcontent.cgi?article=1016&context=uwbiostat
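
For example, a minimal snow sketch (assuming snow and Rmpi are installed;
the details are illustrative):

    library(snow)
    cl <- makeCluster(4, type = "MPI")       # MPI-backed cluster via Rmpi
    parLapply(cl, 1:8, function(i) i^2)      # run pieces of a loop in parallel
    stopCluster(cl)
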
Best,
luke
On Thu, 10 Mar 2005, Alessandro Balboni wrote:
I need to rewrite a piece of software in R that runs on a cluster.
I thought Rmpi would be good for me, but I can't find
any help other than the Rmpi manual, which only
describes the functions in the Rmpi package.
Can someone point me to some useful guide?
For example, I would like to run a for-statement on
several processors (a subset of the statement on each
processor) but I can't figure out how to do this!
Thanks

--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


RE: [Rd] delay() has been deprecated for 2.1.0

2005-03-11 Thread Luke Tierney
 made, rather than the global
environment, and this is usually what you want.
Package writers who use delay() will now get a warning that it has
been deprecated.  They should recode their package to use
delayedAssign instead.
Examples from CRAN of this (I am not sure if this list is exhaustive):
exactRankTests, genetics, g.data, maxstat, taskPR, coin
I have cc'd the maintainers of those packages.
If you want a single code base for your package that works in both the
upcoming R 2.1.0 and older versions, this presents a problem: older
versions don't have delayedAssign.  Here is a workalike function that
could be used in older versions:
delayedAssign <- function(x, value,
                          eval.env = parent.frame(),
                          assign.env = parent.frame()) {
    assign(x, .Internal(delay(substitute(value), eval.env)),
           envir = assign.env)
}
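
A quick usage sketch (with either version the promise is only forced on
first access):

    delayedAssign("z", { cat("forcing z\n"); 42 })
    z    # prints "forcing z", then returns 42
    z    # already forced: just 42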
Because this function calls the internal delay() function directly, it
should work in R 2.1.0+ as well without a warning, but the internal
function will eventually go away too, so I don't recommend using it in
the long term.
Sorry for any inconvenience that this causes.
Duncan Murdoch
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Re: [R] Memory Fragmentation in R

2005-02-19 Thread Luke Tierney
On Sat, 19 Feb 2005, Nawaaz Ahmed wrote:
Thanks Brian. I looked at the code (memory.c) after I sent out the first 
email and noticed the malloc() call that you mention in your reply.
Looking into this code suggested a possible scenario where R would fail in 
malloc() even if it had enough free heap address space.

I noticed that if there is enough heap address space (memory.c:1796, 
VHEAP_FREE() > alloc_size) then the garbage collector is not run. So malloc 
could fail (since there is no more address space to use), even though R 
itself has enough free space it can reclaim. A simple fix is for R to try 
doing garbage collection if malloc() fails.

I hacked memory.c to look in R_GenHeap[LARGE_NODE_CLASS].New if malloc() 
fails (in a very similar fashion to ReleaseLargeFreeVectors())
I did a best-fit stealing from this list and returned it to allocVector(). 
This seemed to fix my particular problem - the large vectors that I had 
allocated in the previous round were still sitting in  this list. Of course, 
the right thing to do is to check if there are any free vectors of the right 
size before calling malloc() - but it was simpler to do it my way (because I 
did not have to worry about how efficient my best-fit was; memory allocation 
was anyway going to fail).

I can look deeper into this and provide more details if needed.
Thanks.  It looks like it would be a good idea to modify the malloc at
that point to try a GC if the malloc fails, then retry the malloc and
only bail if the second malloc fails.  I want to think this through a
bit more before going ahead, but I think it will be the right thing to
do.
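
A sketch of that retry logic (mem_err_malloc is a stand-in name, and this is
not the actual memory.c code):

    void *p = malloc(alloc_size);
    if (p == NULL) {
        R_gc();                          /* reclaim vectors R is still holding */
        p = malloc(alloc_size);          /* retry once after collection */
        if (p == NULL)
            mem_err_malloc(alloc_size);  /* bail only on the second failure */
    }
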
Best,
luke

Nawaaz


Prof Brian Ripley wrote:
BTW, I think this is really an R-devel question, and if you want to pursue 
this please use that list.  (See the posting guide as to why I think so.)

This looks like fragmentation of the address space: many of us are using 
64-bit OSes with 2-4Gb of RAM precisely to avoid such fragmentation.

Notice (memory.c line 1829 in the current sources) that large vectors are 
malloc-ed separately, so this is a malloc failure, and there is not a lot R 
can do about how malloc fragments the (presumably in your case as you did 
not say) 32-bit process address space.

The message
  1101.7 Mbytes of heap free (51%)
is a legacy of an earlier gc() and is not really `free': I believe it means 
something like `may be allocated before garbage collection is triggered': 
see memory.c.

On Sat, 19 Feb 2005, Nawaaz Ahmed wrote:
I have a data set of roughly 700MB which during processing grows up to 2G 
( I'm using a 4G linux box). After the work is done I clean up (rm()) and 
the state is returned to 700MB. Yet I find I cannot run the same routine 
again as it claims to not be able to allocate memory even though gcinfo() 
claims there is 1.1G left.

At the start of the second time
===
           used  (Mb) gc trigger   (Mb)
Ncells  2261001  60.4    3493455   93.3
Vcells 98828592 754.1  279952797 2135.9
Before Failing
==
Garbage collection 459 = 312+51+96 (level 0) ...
1222596 cons cells free (34%)
1101.7 Mbytes of heap free (51%)
Error: cannot allocate vector of size 559481 Kb
This looks like a fragmentation problem. Anyone have a handle on this 
situation? (ie. any work around?) Anyone working on improving R's 
fragmentation problems?

On the other hand, is it possible there is a memory leak? In order to make 
my functions work on this dataset I tried to eliminate copies by coding 
with references (basic new.env() tricks). I presume that my cleaning up 
returned the temporary data (as evidenced by the gc output at the start of 
the second round of processing). Is it possible that it was not really 
cleaned up and is sitting around somewhere even though gc() thinks it has 
been returned?

Thanks - any clues to follow up will be very helpful.
Nawaaz

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
--
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] eapply weirdness/bug

2005-02-18 Thread Luke Tierney
On Fri, 18 Feb 2005, Peter Dalgaard wrote:
[EMAIL PROTECTED] writes:
The following looks like an 'eapply' bug to me:
t/subtest> e <- new.env()
t/subtest> e$tempo <- quote( 1+'hi')
t/subtest> lapply( ls( e), function( x) length( get( x,e)))
[[1]]
[1] 3
# seems reasonable-- e$tempo is a 'call' object of length 3
t/subtest> eapply( e, length)
Error in 1 + "hi" : non-numeric argument to binary operator
t/subtest> eapply( e, length)
t/subtest> traceback()
1: eapply(e, length)
For some reason 'eapply' seems to *evaluate* objects of mode 'call' (it
happened with every call-mode object I tried). This shouldn't happen--
or should it?
It's probably related to the fact that
eval(substitute(length(x), list(x=e$tempo)))
Error in 1 + "hi" : non-numeric argument to binary operator
I.e., you cannot construct calls with a mode "call" argument by
substituting the value of the mode "call" object. (Got that? Point is
that the substitute returns quote(length(1+"hi")))
It is not clear to me that there is a nice way of fixing this. You
probably need to construct calls of the form FUN(env$var) -- I suspect
that with(env, FUN(var)) or eval(FUN(var), env) would be looking for
trouble. Hmm, then again, maybe it could work if FUN gets inserted as
an anonymous function...
Looks broken to me:
> e <- new.env()
> assign("x", quote(y), e)
> eapply(e, function(x) x)
Error in FUN(y, ...) : Object "y" not found
in contrast to
> lapply(list(quote(y)), function(x) x)
[[1]]
y
looks like eapply has an extra eval in the code.  It does because the
code creates a call of the form
FUN(value)
with the literal value in place and then calls eval on this, which
results in calling eval on value.  The internal lapply in contrast
creates a call of the form
FUN(list[[index]])
and evals that.  This causes the literal list and index values to
be evaluated, which is OK since they are guaranteed to be a list
(generic vector) and an integer vector and so evaluate to themselves, and
the call to [ is then evaluated, returning what is in the list at the
appropriate index and passing that, without further evaluation, to FUN.
The semantics we want in eapply is I think equivalent to creating
FUN(get(name, envir))
and evaluating that, but we are not getting this.  Direct use of this
would be less efficient than the current approach, but using
FUN(quote(value))
as the constructed call should do the trick.
[There seem to be a few other unnecessary eval's in computing the arguments
but I haven't thought this through yet]
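
To illustrate the difference at the R level (a sketch, not the internal C
code):

    v <- quote(1 + "hi")                            # a call-mode value
    try(eval(as.call(list(length, v))))             # literal value gets evaluated: error
    eval(as.call(list(length, call("quote", v))))   # quote()-wrapped: returns 3
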
luke

--
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] eapply weirdness/bug

2005-02-18 Thread Luke Tierney
On Fri, 18 Feb 2005, Peter Dalgaard wrote:
Luke Tierney [EMAIL PROTECTED] writes:
looks like eapply has an extra eval in the code.  It does because the
code creates a call of the form
 FUN(value)
with the literal value in place and then calls eval on this, which
results in calling eval on value.  The internal lapply in contrast
creates a call of the form
 FUN(list[[index]])
and evals that.  This causes the literal list and index values to
be evaluated, which is OK since they are guaranteed to be a list
(generic vector) and an integer vector and so evaluate to themselves, and
the call to [ is then evaluated, returning what is in the list at the
appropriate index and passing that, without further evaluation, to FUN.
The semantics we want in eapply is I think equivalent to creating
 FUN(get(name, envir))
Or, as I was suggesting,
eval(substitute(F(x), list(F=FUN, x=as.name(e))), envir)
Well, you know my view of adding more nonstandard evaluation.  Any
explicit use of eval is almost always a Really Bad Idea, and in most
of the remaining cases it is a bad idea.  In any cases that still
remain it should be avoided if at all possible.  And if it seems not
possible, then it is best to put the problem down for a while and think
a bit more.
and evaluating that, but we are not getting this.  Direct use of this
would be less efficient than the current approach, but using
 FUN(quote(value))
as the constructed call should do the trick.
You have to be careful only to do this if the value is of mode call,
I think. Or is quote always a no-op in the other cases?
quote is fine--it always returns the object that appears as the
argument in the call.  For quote expressions created as the result of
parsing that will be a somewhat limited set of things, but for quote
calls created programmatically it can be anything.
I'm getting a bit fond of the solution that I had because it will
also work if the FUN uses deparse(substitute()) constructions, and
once you're at the level of constructing calls via LCONS() it isn't
really inefficient either. Extra arguments could be a bit of a bother
though. (What happens to those currently?? The function doesn't seem to
pass them to .Internal.)
I believe none of our apply family of functions can be expected to do
anything very useful in situations that require nonstandard evaluation
based on the call context.  I don't believe we explicitly document what
is supposed to happen here (and I'm not sure we want to at least at
this point: this is the sort of thing where leaving it undefined gives
alternate implementations, such as one based on compilation, some room
to work with).  But it might be worth thinking about narrowing
variability a little.  A somewhat related issue is that we don't have
a completely standard mechanism of calling a function from within C
code--we do it by creating and eval'ing (in C) a call expression, but
there may be some slight variations in the way it is done in different
places that we might want to think about at some point.
For this specific case though, I _think_ the semantics we want is this:
eapply1 <- function(env, FUN, ..., all.names = FALSE) {
    FUN <- match.fun(FUN)
    lapply(.Internal(env2list(env, all.names)), FUN, ...)
}
Not passing the ... in the current implementation is, I think, an
oversight, as is the extra evaluation that occurs.  Given that lapply
is already internal I'm not sure there really is very much benefit in
having the internal eapply.  If not I'd prefer to replace it by
something like this; if there are reasons for keeping the .Internal we
can work on replicating these semantics as closely as possible.  I
think Robert is the one who would know the issues.
luke
--
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


RE: [Rd] Very Long Expressions

2005-01-25 Thread Luke Tierney
On Mon, 24 Jan 2005, Thomas Lumley wrote:

 On Mon, 24 Jan 2005, Prof Brian Ripley wrote:
 
  On Mon, 24 Jan 2005, McGehee, Robert wrote:
 
  [Instructions to the R developers deleted.]
 
  Secondly, the ?options help (thanks for everyone who reminded me about
  this), says that expressions can have values between 25...10.
  
  However, if the original example is set past 4995 on my computers, I
  receive a stack overflow.
 
  More accurately, you caused a protection stack overflow.
 
 
 At one point we were concerned about overflowing the C stack if 
 options(expressions=) were set too high.  I think this was in the days 
 when MacOS had a very small stack, and that things are safer now.
 
 
   -thomas

I hadn't noticed that they finally kicked the default up to 8M--good.

We still need to be a bit careful since running out of C stack will
cause a protection violation and there is no portable way to catch
this.  But we can probably afford to loosen the defaults a bit.

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Destructive str(...)?

2004-11-01 Thread Luke Tierney
On Sun, 31 Oct 2004, Prof Brian Ripley wrote:

 Just to be 100% clear, the finalizer is called *at most* once if (as in
 tcltk) R_RegisterCFinalizer is called.  If you want it to be called
 exactly once, you need to use R_RegisterCFinalizerEx.
 
 The issue is that there may not be a final gc().
 
 BTW, str(x) is destructive here too, so we do need to improve str().
 I have code written, but access to svn.r-project.org is down (yet again).
 
  > x <- as.tclObj(pi)
  > str(x)
 Class 'tclObj' length 1 <pointer: 0x860c3f8>
  > str(x)
 length 1 <pointer: 0x860c3f8>
 

Improving str is a good idea, but as there are other uses of unclass
out there it would probably be best to change the implementation to
wrap the pointers rather than use them directly.

In hindsight it would probably have been better to use an
implementation that internally wraps external pointers as well as
environments so only the bits that really do need reference behavior
get it, and maybe at some point we should consider doing that.  Name
objects should probably just disallow changing attributes as null
currently does.

luke


 
 On 31 Oct 2004, Peter Dalgaard wrote:
 
  Simon Urbanek [EMAIL PROTECTED] writes:
  
   Now, hold on a second - I thought the main point of EXTPTR is that the
   finalizer is called only once, that is when the last instance of the
   reference is disposed of by the gc (no matter how many copies existed
   meanwhile). Am I wrong and/or did I miss something? I did some tests
   which support my view, but one never knows ...
  
  How do you ensure that the finalizer is called once? By *not* copying
  the reference object! You can have as many references to it as you
  like (i.e. assign it to multiple variables), and the object itself is
  not removed until the last reference is gone, but if you modify the
  object (most likely by setting attributes, but you might also change
  the C pointer payload in a C routine), all copies are changed:
  
   > x <- as.tclObj(pi)
   > x
   <Tcl> 3.14159265359
   > y <- x
   > y
   <Tcl> 3.14159265359
   > mode(x)
   [1] "externalptr"
   > attr(x, "Simon") <- "Urbanek"
   > attributes(y)
   $class
   [1] "tclObj"
   
   $Simon
   [1] "Urbanek"
 
 

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RCC compatibility patch

2004-10-02 Thread Luke Tierney
I think we'd need more information to decide how best to proceed.
Let's discuss this offline.

luke

On Fri, 1 Oct 2004, John Garvin wrote:

 Would you consider the following patch to eval.c to allow compatibility 
 with RCC? (It's in the applyClosure function.)
 
 @@ -432,6 +432,14 @@
   SEXP f, a, tmp;
   RCNTXT cntxt;
 
 +#ifdef RCC
 +SEXP comp;
  +PROTECT(comp = getAttrib(op, install("RCC_CompiledSymbol")));
 +if (comp != R_NilValue)  /* compiled version exists */
 +  op = comp;
 +UNPROTECT(1);
 +#endif /* RCC */
 +
   /* formals = list of formal parameters */
   /* actuals = values to be bound to formals */
   /* arglist = the tagged list of arguments */
 
 RCC (http://hipersoft.cs.rice.edu/rcc/) is a static compiler for R we're
 working on at Rice. It compiles R scripts into dynamic libraries that can
 be loaded from within R using the dyn.load function.
 
 This change enables compiled R code to be executed while retaining full
 correctness; i.e., inspection and modification of closures generated by
 RCC will work exactly as in interpreted R. This patch does not change
 the existing implementation in the default build; when enabled, it should
 not affect the evaluation of any R code except code compiled with RCC.
 
 John
 
 __
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/r-devel
 

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Finalization and external pointers

2004-05-03 Thread Luke Tierney
On Mon, 3 May 2004, Duncan Murdoch wrote:

 I'm adding things to the Windows RGui so that there's more control of
 the interface from within R.
 
 One thing I'm considering is giving access to the Graphapp window
 objects using external pointers.  This raises the issue of
 finalization on both sides:
 
  - If someone creates a pointer referring to a window, then that
 pointer should be changed to NULL when the window is closed.
 
  - If garbage collection destroys a pointer referring to a window,
 then the window should know not to change that pointer to NULL later.
 
 Are there other examples like this I can look at?  I'd like to follow
 existing conventions rather than invent my own.
 
 And a related question:  is there a writeup anywhere on the
 R_RegisterFinalizerEx function?  What does the onexit argument do?

I added some notes I have on weak references and finalization at

http://www.stat.uiowa.edu/~luke/R/weakfinex.html

The notes are a bit old but I think still apply, such as they are.

The onexit argument sets a flag; when R exits normally it will attempt
to run the finalizers of all references with this flag set.

The simple examples in these notes may be of use, or not.  The Haskell
reference may also be worth a look as one writeup of the issues.

Making sure you have the right match of object lifetimes with object
identities is probably the trickiest issue--do you need to make sure
two ways of asking for an R reference to the same physical window
return the same R pointer object? Probably you do; that will affect
the design in that the pointer object you use has to exist as long as
the physical window does.  Also keep in mind that these things could
get saved in a workspace (they will be restored with NULL pointers).
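
A sketch under assumed names (win_t and close_window are hypothetical, not
the RGui/Graphapp API): wrap a window in an external pointer whose finalizer
runs at most once and, with onexit = TRUE, also on normal exit.

    #include <Rinternals.h>

    static void win_finalize(SEXP ptr)
    {
        win_t *w = (win_t *) R_ExternalPtrAddr(ptr);
        if (w != NULL) {
            close_window(w);          /* hypothetical C-side cleanup */
            R_ClearExternalPtr(ptr);  /* later access sees NULL, not a dangling pointer */
        }
    }

    SEXP wrap_window(win_t *w)
    {
        SEXP ptr = PROTECT(R_MakeExternalPtr(w, R_NilValue, R_NilValue));
        R_RegisterCFinalizerEx(ptr, win_finalize, TRUE);
        UNPROTECT(1);
        return ptr;
    }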

Best,

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Script editor for Windows GUI

2004-02-27 Thread Luke Tierney
On Thu, 26 Feb 2004, Simon Urbanek wrote:

 On Feb 26, 2004, at 11:58 AM, Marsland, John wrote:
 
  Is your project based upon the SJava package? We have had lots of
  problems with the callback interface?
 
 No, we are not using SJava for obvious reasons. I tried hard to fix it, 
 but for some platforms that is impossible w/o complete rewrite, so we 
 use our own interface now. The Java GUI approach has the advantage that 
 it also circumvents the event loop problems in that context.
 
  Would you consider releasing your work in progress under the GPL? We are
  keen to avoid re-inventing things and it's a long time until we are all at
  UseR! - we could at the very least give some user feedback.
 
 I think this is a good idea, especially given the feedback I got since 
 the post :P. I'll talk to others in the developer team and maybe we 
 could leak a developer preview pre-pre-pre-alpha release in the next 
 weeks for those interested.
 
  On a slightly different tack, I have recently taken a look at Jython - 
  an
  implementation of Python in Java that produces byte code that runs on 
  the
  JVM. Combined with this there is a project called xoltar which aims to 
  bring
  functional programming to Python. This got me thinking that an R parser
  could
  be written in Java for a core set of functionality allowing code and
  packages written in pure-R to be compiled as byte code and run on 
  the JVM.
  Then one could call the SJava package from Java to execute anything
  unusual in R proper. Any thoughts?
 
 Good question - due to the amount of packages that use C/Fortran code I 
 had the impression that this sounds just too crazy. But I'm really keen 
 on getting some feedback on this, because technically, one of the CS 
 students here would enjoy doing something like that ...

I think it's doable but a very big task.  Python had a head start
because Python has always been byte compiled to its own set of byte
codes.  This means the semantics evolved in a way that is more
supportive of compilation (things like being able to reliably identify
which variables are local, no lazy evaluation, etc.).  In addition
the Python C core seems to be smaller than
the R core, the bits in base+nmath, say, by about a factor of 5.
Overall I think it would probably be about an order of magnitude
harder to get R to a state comparable to Jython.  That is before
starting to worry about packages needed to do anything useful.  You
can read about what was involved in creating Jython and do the math
yourself.

Once we get the R byte code compiler fully operational things may
eventually become easier since we may be able to move some things
now done in C out into R where they are easier to maintain and could
then be automatically handled by an appropriate compiler back end.
Getting to a pure Java or pure .NET/mono setting is still likely to be
a fairly daunting task for a while.

Best,

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] save() size in XDR

2004-02-26 Thread Luke Tierney
On Thu, 26 Feb 2004, Nathan Whitehouse wrote:

  
  I don't think so, though you can guess:  integers
  are stored in 4
  bytes, floats in 8, etc.
 
   I think this would be good if we only needed a rough
 estimate, but we need something precise.  
 
  One way to solve this problem would be to create a
  connection that did
  nothing except keep track of a file position, then
  do the save to that
  connection.  However, it's not easy to define new
  connection types.
  Might be a nice package to write to allow such a
  thing.
 
   That seems reasonable to me.  I'll look into it; are
 there any resources at all for how to define new
 connection types?
 
   Thanks much,
 
 
 =
 Nathan Whitehouse
 Statistics/Programming
 Baylor College of Medicine
 Houston, TX, USA
 [EMAIL PROTECTED]
 work: 1-713-798-9029
 cell:1-512-293-5840
 
 http://rho-project.org: rho- open source web services for R.
 http://franklin.imgen.bcm.tmc.edu: Shaw laboratory, bcm.
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-devel
 

On Unix-like systems something like

 f <- pipe("wc", open = "wb")
 save(list = ls(all = TRUE), file = f)
 close(f)
      5      14     200

should work.

Best,

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Scoping bug in ftable() (PR#6541)

2004-02-04 Thread Luke Tierney
On Wed, 4 Feb 2004 [EMAIL PROTECTED] wrote:

 This bug shows up in ftable() in both r-patched and r-devel:
 
  > x <- c(1,2)
  > y <- c(1,2)
  > z <- c(1,1)
  > ftable(z, y)
    y 1 2
  z
  1   1 1
  > ftable(z, x)
    x 1
  z
  1   2
 
 Since x and y are identical, the two ftable results should be the
 same, but they are not.  
 
 I've only been able to see this when the column variable is named x,
 so it looks like a scoping problem in ftable.default.  I think the
 problem is in this line:
 
x <- do.call("table", c(as.list(substitute(list(...)))[-1],
                        list(exclude = exclude)))
 
 I think this call is finding the local variable x (which has been
 used before this line) instead of the argument x and thus produces
 an incorrect result.
 
 How should this be fixed?  What we want is to convert ... into an
 evaluated list that includes the deparsed arguments as names.  Just
 plain list(...) loses the names.
 
 I think this works:
 
args <- list(...)
names(args) <- as.character(unlist(as.list(substitute(list(...)))[-1]))
x <- do.call("table", args)
 
 but isn't there an easier way?

Not sure there is.

The fact that the original code did something close to what was
intended is due to what I still think is a design flaw in do.call
(though others disagree): it does an eval of its arguments (on top of
what ordinary function calling does to evaluate the expression
producing the argument list).  This is why there is no explicit eval
around the substitute.  It is also why do.call isn't useful when some
elements of the argument list are symbols or expressions.

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Reorganization of packages in the R distribution

2003-12-12 Thread Luke Tierney
On Fri, 12 Dec 2003, Paul Gilbert wrote:

 If I understand this change correctly, I think it is wrong for R-core to
 think it is a small change. It has much more serious consequences for me
 than any changes introduced in R 1.0. It definitely should not be
 introduced at a dot-level release unless there is a fairly simple
 mechanism to deal with the implications. It breaks 6 of my 9 packages
 on CRAN at a fairly fundamental level, 2 more at a less serious level,
 and some packages I have not yet released.
 
 Perhaps my programming technique is not correct. I always considered
 this trick to be a work-around for a shortcoming in R/S.  The issue is
 that the correct way to do this needs to be implemented before the trick
 that allows a work-around is eliminated.
 
 Paul Gilbert
 
 Prof Brian Ripley wrote:
 
 On Fri, 12 Dec 2003, Paul Gilbert wrote:
 
   
 
 Prof Brian Ripley wrote:
 
 
 
 There are a small number of CRAN packages that attempt to modify system
 functions and so will need updating.  (Known examples are in dse:tframe,
 gregmisc and mclust and some testing code elsewhere.)
 
   
 
 Brian
 
 What do you mean by updating?  In tframe I modify a few functions like
 
  start <- function (x, ...) if (is.Ttframed(x)) start(tframe(x), ...) else UseMethod("start")
 
 If that can no longer be done then this is a serious fundamental change 
 that breaks all my code. I hope that is not what you mean. I'm just 
 going away for a week, but will follow up when I return.
 
 
 
 It's always been incorrect code, and it no longer works.  You should not
 be masking system generics, as the namespace registration mechanism does
 not work on your version.
 
   
 

There are a number of options, depending on what you are trying to do.
If you want to make a definition of how the function `start' should
handle a TtFramed object in a way that should be visible to functions
defined in other packages that use the function start from the R core
packages (formerly base, now stats), then you can do that one of two
ways. The first is the disciplined and supported way: use the fact
that `start` is a generic and define a method for it, as was already
suggested.  That is the point of defining `start' as a generic.  The
undisciplined way is to change the definition in the stats package.
This can be done.  It is hard to do, and that is deliberate.

If your intent is to define a function of your own for use in your
packages that does something you want in one particular case but
otherwise defers to the function `start' in base then you can do that
too.  If this is what you want then things would be clearer if you
used a different name, like

pgStart <- function (x, ...) {
    if (is.Ttframed(x)) start(tframe(x), ...)
    else start(x, ...)
}

If for some reason you must use the same name, even though that may
not be doing what you think it is doing, then you can explicitly defer
to the version in stats with

start <- function (x, ...) {
    if (is.Ttframed(x)) stats::start(tframe(x), ...)
    else stats::start(x, ...)
}

So we have in fact implemented several nice pieces of rope for your use ...

Best,

luke


-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] No traceback in r-patched

2003-11-11 Thread Luke Tierney
Should be fixed in R-patched and R-devel.

Best,

luke

On Mon, 10 Nov 2003, Luke Tierney wrote:

 Thanks.  Seems this slipped in while fixing another issue (surprising
 it wasn't noticed until now).  Should be fixed in a day or two.
 
 Best,
 
 luke
 
 On Sat, 8 Nov 2003, Roger D. Peng wrote:
 
  Since it's heading towards release, I thought I'd bring this up again. 
  I'm still not getting any traceback()'s in recent R-patched.  For 
  example, I get:
  
  > log("a")
  Error in log(x) : Non-numeric argument to mathematical function
  > traceback()
  No traceback available
  
  Or when running the examples in the traceback() help page:
  
  > foo <- function(x) { print(1); bar(2) }
  > bar <- function(x) { x + a.variable.which.does.not.exist }
  > ## Don't run:
  > foo(2) # gives a strange error
  [1] 1
  Error in bar(2) : Object "a.variable.which.does.not.exist" not found
  > traceback()
  No traceback available
  
  It seems the .Traceback variable is not being created.  If I'm doing 
  something incorrectly, I'd very much like to know.  I'm starting up with 
  R --vanilla.
  
  > version
           _
  platform i686-pc-linux-gnu
  arch     i686
  os       linux-gnu
  system   i686, linux-gnu
  status   alpha
  major    1
  minor    8.1
  year     2003
  month    11
  day      07
  language R
  > search()
  [1] ".GlobalEnv"       "package:methods" "package:ctest"   "package:mva"
  [5] "package:modreg"   "package:nls"     "package:ts"      "Autoloads"
  [9] "package:base"
   
  
  
  -roger
  
  __
  [EMAIL PROTECTED] mailing list
  https://www.stat.math.ethz.ch/mailman/listinfo/r-devel
  
 
 

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] No traceback in r-patched

2003-11-10 Thread Luke Tierney
Thanks.  Seems this slipped in while fixing another issue (surprising
it wasn't noticed until now).  Should be fixed in a day or two.

Best,

luke

On Sat, 8 Nov 2003, Roger D. Peng wrote:

 Since it's heading towards release, I thought I'd bring this up again. 
 I'm still not getting any traceback()'s in recent R-patched.  For 
 example, I get:
 
 > log("a")
 Error in log(x) : Non-numeric argument to mathematical function
 > traceback()
 No traceback available
 
 Or when running the examples in the traceback() help page:
 
 > foo <- function(x) { print(1); bar(2) }
 > bar <- function(x) { x + a.variable.which.does.not.exist }
 > ## Don't run:
 > foo(2) # gives a strange error
 [1] 1
 Error in bar(2) : Object "a.variable.which.does.not.exist" not found
 > traceback()
 No traceback available
 
 It seems the .Traceback variable is not being created.  If I'm doing 
 something incorrectly, I'd very much like to know.  I'm starting up with 
 R --vanilla.
 
 > version
          _
 platform i686-pc-linux-gnu
 arch     i686
 os       linux-gnu
 system   i686, linux-gnu
 status   alpha
 major    1
 minor    8.1
 year     2003
 month    11
 day      07
 language R
 > search()
 [1] ".GlobalEnv"       "package:methods" "package:ctest"   "package:mva"
 [5] "package:modreg"   "package:nls"     "package:ts"      "Autoloads"
 [9] "package:base"
  
 
 
 -roger
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-devel
 

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] 1.8.0 on Unix: interrupting huge print()s ??

2003-10-14 Thread Luke Tierney
On Mon, 13 Oct 2003, David Brahm wrote:

 Martin Maechler [EMAIL PROTECTED] wrote:
  When accidentally calling print() {implicitly}, we have been used here to
  press CTRL+c (twice in Emacs ESS!) for stopping the output.
  This no longer works in R 1.8.0 at least on our unix platforms.
 
 Luke Tierney [EMAIL PROTECTED] wrote:
  Needs a call to R_CheckUserInterrupt at the appropriate place...
 
 Most unfortunate!  Our system is like Martin's (Solaris 2.8, Emacs, ESS)
 and we also lose the use of ^C^C with R-1.8.0.  It's probably enough to prevent
 us from upgrading.  I saw no sign of Luke's proposed patch as of 10/13 (in the
 NEWS file); is one in the works?  Thanks.
 

The change is now checked in to patches and devel branches.

If C-c C-c does not work for you at all in ESS then this is not
related to R changes in 1.8.0 (I sometimes seem to need C-g C-c C-c).

luke

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] 1.8.0 on Unix: interrupting huge print()s ??

2003-10-10 Thread Luke Tierney
Needs a call to R_CheckUserInterrupt at the appropriate place.  The
only platform that currently can interrupt a long print seems to be
Rgui on Windows because of an event poll in the console output
function.  One possibility is to put in a check every 100 calls, say,
to Rvprintf in printutils.c.  I'll check that out and commit to the
patches branch unless anyone sees a problem or a better place to
check.
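
A sketch of what that check might look like (the counter name is made up,
and this is a fragment for Rvprintf, not a complete patch):

    static int print_count = 0;
    if (++print_count % 100 == 0)
        R_CheckUserInterrupt();   /* let an interrupt stop a long print */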

luke

On Fri, 10 Oct 2003, Martin Maechler wrote:

 NEWS for R 1.8.0 has
 
   USER-VISIBLE CHANGES
  
   ..
  
   o On Unix-like systems interrupt signals now set a flag that is
  checked periodically rather than calling longjmp from the
  signal handler. This is analogous to the behavior on Windows.
  This reduces responsiveness to interrupts but prevents bugs
  caused by interrupting computations in a way that leaves the
  system in an inconsistent state.  It also reduces the number
  of system calls, which can speed up computations on some
  platforms and make R more usable with systems like Mosix.
 
 and this has already caused grief here
 (actually it has several days ago, when I switched our users to
  R-1.8.0beta  __ BUT THEY DIDN'T TELL ANY R DEVELOPER __ )
 
 for a user who does use *large* matrices.
 
 When accidentally calling print() {implicitly}, we have been
 used here to press CTRL+c (twice in Emacs ESS!) for stopping the
 output.
 
 This no longer works in R 1.8.0 at least on our unix platforms.
 To reproduce, type
 
   cbind(1:1e6)
 
 and try to cut it short (it only takes a minute or so,
 whereas our user here had a matrix that needed more than 10
 minutes of screen output !)
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-devel
 

-- 
Luke Tierney
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] An artifact of base being namespace

2003-10-10 Thread Luke Tierney
On Fri, 10 Oct 2003, Duncan Murdoch wrote:

 On Fri, 10 Oct 2003 14:35:42 +0200, you wrote:
 
  Saikat> This is most problematic when you are creating a
  Saikat> generic for an existing function in base (as you
  Saikat> very well could for log). This often makes the
  Saikat> ability to make new generics out of existing
  Saikat> functions somewhat useless.
 
 Assuming you're right, I'm much less sure that this consequence
 has been intended in all situations.  But I'd need to see
 concrete examples to understand your last sentence.
 
 I think this depends on whether we want all simple functions to act
 like generics, or whether we want a distinction.  Does it ever matter
 to a function in base like log10 whether log is a simple function or a
 generic?
 
 If so, then the current behaviour is right.  Whoever wrote those
 functions back in the mists of time expected log to be a simple
 function, so the namespace should guarantee that it stays as one until
 explicitly changed within the namespace.
 
 But if the distinction between generics and simple functions is only
 for efficiency (dispatching a generic is slower), then I think the
 generic should be created in the namespace where the simple function
 was originally declared.  Then log10 would call the generic which
 would dispatch to the newly created method for Saikat's data.
 
 My feeling is that the latter is what we really want.

There are significant subtleties.  On the surface having all functions
be generics would make things simpler.

On the other hand, having lazy evaluation makes other things simpler:
there is no need for special operators, macros or other such stuff;
everything can be done with ordinary functions.  if, for, try,
on.exit, switch are just functions. Some of them happen to be
implemented internally for efficiency, but this isn't essential.
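
As a sketch of what lazy evaluation buys, here is a user-level
function (purely illustrative) that behaves like a control structure,
because the unused argument is simply never forced:

    if2 <- function(test, yes, no) if (test) yes else no
    if2(TRUE, "then", stop("never reached"))   # returns "then";
                                               # the promise for 'no'
                                               # is never evaluated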

Unfortunately, you can't dispatch on the type of an argument value
without computing the argument value, so lazy evaluation of an
argument and dispatching on that argument are not compatible.
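
A small demonstration that dispatch has to force the dispatched
argument (an illustrative S3 sketch):

    f <- function(x) UseMethod("f")
    f.default <- function(x) "default"
    f(stop("boom"))   # error: UseMethod must evaluate x to get class(x)

    g <- function(x) "x never used"
    g(stop("boom"))   # fine: the promise for x is never forced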

There are other arguments for why making every function generic may
not be a good idea, and why languages like Dylan, with a concept of
generic functions similar to the S4 style, have not gone down that
route.  log and log10 provide one illustration: with the current
definitions there is a simple relationship between log10, the
two-argument log, and the single-argument log.  Making the
single-argument log generic may well make sense, but making the
two-argument version or log10 generic might lead to a situation where
the basic relation log10(x) == log(x,10) == log(x)/log(10) is no
longer true, and code that depends on that relationship will fail.
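
A sketch of how that can go wrong, using a hypothetical class and
method (neither is in base; they are purely illustrative):

    x <- structure(100, class = "mylog")
    log.mylog <- function(x, base = exp(1)) 10 * log(unclass(x), base)

    log(x, 10)   # dispatches to log.mylog: 10 * log10(100) == 20
    log10(x)     # no log10 method, so the default gives        2

Code that assumes log10(x) == log(x, 10) now gets different answers
depending on which form it happens to call.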

But it would be a good idea, once the design of methods stabilizes a
bit more, to review functions in base and other packages to decide
which ones ought to be generic.

Best,

luke


Re: [Rd] exceeding memory causes crash on Linux

2003-10-09 Thread Luke Tierney
On Thu, 9 Oct 2003, Paul Gilbert wrote:

 Paul Gilbert wrote:
 
  I am having an unusual difficulty with R 1.8.0 on Mandrake 9.1 running 
  a problem that takes a large amount of memory. With R 1.7.1 this ran 
  on the machine I am using (barely), but now takes more memory than is 
  available.  The first two times I tried with R 1.8.0, R exited after 
  the program had run for some time, and gave no indication of anything, 
  just returned to the shell command prompt. I ran under gdb to see if I 
  could get a better indication of the problem, and this crashed Linux 
  completely, or at least X, but I couldn't get another console either. 
  (I haven't had anything crash Linux in a long time.) To confirm this I 
  ran R under gdb again, and ran top to verify I was hitting memory 
  constraints (which I was), but this time R did give a message "Error: 
  cannot allocate a vector of size ..." 
 
 P.S. But there does not seem to be proper garbage collection after this. 
 Top showed the memory still in use, and subsequent attempts to run the 
 program failed immediately when trying to allocate a much smaller vector. 
 When I did gc() explicitly it did clean up and I could start the 
 function again. The second time, R exited back to the gdb prompt with the 
 message "Program terminated with signal SIGKILL, Killed.  The program no 
 longer exists."
 
  I'm not worried about running the problem, but I would like a more 
  graceful exit. Might this be related to the change in error handling?

Possible but not likely.

When you really push memory for your R process to the limit, you create
a situation where other programs may fail because there is no more
memory for them to get at.  At some point the kernel decides there is
a problem and starts trying to bring the system back under control by
the only means it really has: blowing away processes with a SIGKILL,
which cannot be caught, so there is nothing R can do about it.  Or any
other process, say your X server, if that is the one the kernel
decides to blow away.  I forget the actual rules the kernel uses to
handle these situations, but that is the gist.

I don't think the kernel goes into this self-defense mode until it
gets close to running out of both physical memory and swap space.  One
thing you might try is checking how much swap space you have.  If you
don't have enough, you might try adding a swap file of several
gigabytes and see if that helps.

One thing to keep in mind when doing computations that produce huge
results is that R saves the last value of a successful top level
evaluation in .Last.value.  If that value is huge, gc can't do
anything about it until it is replaced by something smaller.  An
explicit call to gc() is not going to be able to release any more
things than an internal call made to satisfy an allocation, except to
the extent that some additional data will be reachable as part of the
computation that triggers the internal call.  Two successive top level
gc() calls may seem to do wonders compared to just one, just because
after the first one .Last.value has been replaced by the result
returned by gc().
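
A sketch of the effect (sizes are illustrative):

    invisible(runif(1e7))   # a big result, neither printed nor
                            # assigned; .Last.value now references it
    gc()                    # releases little: the vector was still
                            # reachable via .Last.value when gc ran
    gc()                    # .Last.value is now the first gc()'s own
                            # small result, so the big vector can go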

The memory management system also releases some of its smaller-sized
allocations gradually, to avoid thrashing in malloc in most
situations.  This is why memory use, as well as the triggering
thresholds, can go down gradually on successive gc calls until they
reach a steady state.  This is based on heuristics that work
reasonably well across a wide range of uses but might not be ideal for
really pushing the memory limit.  At some point we might make some of
the tuning parameters for these heuristics available at the user
level, but this isn't high priority, as fiddling with them is probably
much more likely to make things worse than better.

Hope that helps,

luke


Re: [Rd] segmentation fault: formula() with long variable names (PR#3680)

2003-08-14 Thread Luke Tierney
On Thu, 7 Aug 2003 [EMAIL PROTECTED] wrote:

 R version: 1.7.1
 OS: Red Hat Linux 7.2
 
  In this example, I would expect an error for the overly long variable 
  name. This is always reproducible for me.
 
   formula(paste("y~", paste(rep("x", 5), collapse="")))
 Segmentation fault
 
 Sincerely,
 Jerome Asselin

The problem seems to be in parse, which formula.default calls:

 parse(text=paste(rep("x", 5), collapse=""))
Segmentation fault (core dumped)

We were filling the yytext buffer in gram.y with no overflow checking.
I've added some checking in R-devel, so these now give

 parse(text=paste(rep("x", 5), collapse=""))
Error in parse(text = paste(rep("x", 5), collapse = "")) : 
input buffer overflow
 formula(paste("y~", paste(rep("x", 5), collapse="")))
Error in parse(text = x) : input buffer overflow

luke


Re: [Rd] An inconsistency with promise in attributes

2003-08-12 Thread Luke Tierney
On Mon, 11 Aug 2003, Saikat DebRoy wrote:

  When an attribute is a delayed expression, it is sometimes not forced 
  when it is extracted.
 
   x <- list()
   attr(x, "p") <- delay(1)
   x
 list()
 attr(,"p")
 promise: 0x11e4bb8
   val <- attr(x, "p")
   val
 [1] 1
   attr(x, "p")
 promise: 0x11e4bb8
 
 I am not quite sure whether the above is a bug or not

Promises are not forced when retrieving them from a data structure. I
don't think this is a bug (though I don't think the semantics of user
level access to promises are exactly cast in stone).

 but I think the 
 following is a bug - a promise is supposed to give its value once 
 evaluated!

   eval(attr(x, "p"))
 promise: 0x11e4bb8

This should probably be considered a bug in do_eval (internal eval
would force the promise).  I'd be careful fixing it though as it might
break other things.

Promises are really intended to support lazy evaluation and work best
if they are stored as values of variables in environments.  I'm not
sure I would consider other uses reliable in the long run.
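
The supported pattern, then, is a promise stored as the value of a
variable in an environment, where lookup forces it.  A sketch (using
delayedAssign(), which in current R plays the role delay() played
here):

    e <- new.env()
    delayedAssign("v", { cat("forcing\n"); 1 }, assign.env = e)
    get("v", envir = e)   # forces the promise: prints "forcing",
                          # then returns 1
    get("v", envir = e)   # already forced: just returns 1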

Best,

luke


[Rd] codetools

2003-07-28 Thread Luke Tierney
I've put together a package codetools with some tools for examining R
source code.  It is available at

http://www.stat.uiowa.edu/~luke/R/codetools/codetools.tar.gz

It requires a reasonably current version of R-devel.

The main user functions are:

checkUsage, checkUsageEnv, checkUsagePackage: Examine a closure, the
closures in an environment, or the closures in a package, and report
information on possible problems.

findGlobals: Finds the global functions and variables used by a function.

showTree: Prints a Lisp-style representation of an expression; useful
for understanding how code is parsed.
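
A sketch of the intended use (exact output will depend on the
codetools version):

    library(codetools)
    f <- function(x) {
      y <- x + z          # z is not defined locally: likely a mistake
      sapply(x, function(u) u * y)
    }
    checkUsage(f)    # should flag the apparent global variable z
    findGlobals(f)   # lists the global functions and variables used,
                     # including sapply and the suspicious z
    showTree(quote(x + y * 2))   # prints (+ x (* y 2))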

This stuff is a by-product of putting together a more solid framework
for the byte code compiler I am working on.  It is still very rough,
but I'm making it available now in the hope that it may be of some use
and to get some feedback and ideas for useful directions of
improvement.

luke


Re: [Rd] abs() and negative output from fractions() (PR#3536)

2003-07-23 Thread Luke Tierney
DispatchGroup was not using R_LookupMethod for the group generic;
should be fixed in R-devel now.

luke

On Wed, 23 Jul 2003, Prof Brian Ripley wrote:

 Something seems wrong in the way the group generics are registered, as 
 they are not being called.  As a workaround, add
 
 export(Math.fractions,Ops.fractions,Summary.fractions)
 
 to library/MASS/NAMESPACE and the examples seem to work again.
 
 On Wed, 23 Jul 2003, Barry Rowlingson wrote:
 
  Prof Brian Ripley wrote:
   On Tue, 22 Jul 2003, Duncan Murdoch wrote:
  
   This is not even a VR bug: no one said abs() is implemented for fractions, 
   and it is not.  From the help page:
   
Arithmetic operations on `fractions' objects are possible.
   
   and abs() is not such an operation.
   
  
Something funny is happening with printing fractions - the values seem 
  correct under abs() [and other functions]:
  
xf
  [1]    2 -2/5  2/5  2/3
    abs(xf)
  [1]    2 -2/5  2/5  2/3
abs(xf)[2]
  [1] 2/5
  
huh?
  
sqrt(xf)
  [1]    2 -2/5  2/5  2/3
  Warning message:
  NaNs produced in: sqrt(xf)
  
sqrt(xf)[1:4]
  [1]   8119/5741 NaN 191/302 38804/47525
  Warning message:
  NaNs produced in: sqrt(xf)
  
Bug, undocumented behaviour, feature? I don't know. It all seems to 
  work in 1.6.0, so everyone should downgrade now... :)
  
  Baz
  

Re: [Rd] cvs version of r-devel on darwin

2003-06-15 Thread Luke Tierney
On Sun, 15 Jun 2003, Jan de Leeuw wrote:

 No, but if you do not have dlcompat in /usr/local/lib, you also get
 link errors because it cannot find dlopen and friends. So one needs
 dlcompat somewhere the linker can find it.
 
 Not finding environ must be due to some very recent change in the
 R code, it seems.

Yes--a bit of overzealous code cleaning by someone who thought code
conditionalized for __APPLE__ was for classic MacOS.  Should be OK
again in cvs.

luke

 
 On Sunday, Jun 15, 2003, at 08:47 US/Pacific, Stefano Iacus wrote:
 
 
  On Domenica, giu 15, 2003, at 02:39 Europe/Rome, Jan de Leeuw wrote:
 
  Does not use -L/sw/lib -ldl anymore, so it only works for those who
  have libdl in /usr/local/lib.
 
  Cannot find _environ in linking libR.dylib (not sure where it
  normally gets it). Does not seem to need it in linking R.bin.
  I do not have /sw on my machine and I was able to build R till
  yesterday, so it should not be a problem related to Fink.
  But today I get the same error as you:
 
  ld: Undefined symbols:
  _environ
  /usr/bin/libtool: internal link edit command failed
  make[3]: *** [libR.dylib] Error 1
  make[2]: *** [R] Error 2
  make[1]: *** [R] Error 1
  make: *** [R] Error 1
 
 
[Rd] simple tools for extracting call graph information from Rprof output

2003-06-13 Thread Luke Tierney
A preliminary version of a package proftools for examining Rprof
profiling output and, in particular, extracting and viewing call graph
information is available at

http://www.stat.uiowa.edu/~luke/R/codetools/proftools.tar.gz

Call graph information, including which direct calls were observed
and how much time was spent in these calls, can be very useful in
identifying performance bottlenecks.  The package produces either
printed representations of the call graph, analogous to the ones
produced by the GNU profiler gprof, or graphical representations
using the Graphviz command line tools or the Rgraphviz package.

The README file in the package contains some documentation that will
eventually be worked into a vignette.

The implementation is extremely crude (a real mess would be more
accurate) and will hopefully be improved over time--at this point it
is more of an existence proof than a final product.

Performance is less than ideal, though using these tools it was
possible to identify some problem points and speed up computing the
profile data by a factor of two (in other words, it may be bad now but
it used to be worse).  More careful design of the data structures and
memoizing calculations that are now repeated is likely to improve
performance substantially.
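
A sketch of the intended workflow (function names as in the current
package sources; treat them as provisional):

    Rprof("prof.out")    # collect profiling data
    x <- replicate(20, sum(sort(runif(1e5))))
    Rprof(NULL)

    library(proftools)
    pd <- readProfileData("prof.out")
    printProfileCallGraph(pd)   # gprof-style call graph listing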


luke


Re: [Rd] setGeneric

2003-04-01 Thread Luke Tierney
On Tue, 1 Apr 2003, Luke Tierney wrote:

 On Tue, 1 Apr 2003, John Chambers wrote:
 
  I think this is a consequence of the extra context added to make methods
  work right with R lexical scoping, namespaces, etc.  Or a subtlety in
  R's definition of missing()?
  
  The problem is that somehow the default expression for argument `ncol'
  makes that argument appear NOT to be missing.  But an attempt to
  evaluate it fails because the local variable `n' hasn't been defined
  yet.
  
  The following debugging snippets, using trace on the method for
  numeric, show what's happening.  But as to why, we need some expert
  help! (Luke?)
 
 We need a bit more info transferred across from the generic and we
 need to reset the environments of the promises for missing arguments.
 I'll look into it.

A workaround should be in place in R-devel now.
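
For reference, the plain-R behaviour that the methods machinery needs
to preserve (a sketch along the lines of the report; the actual
example involved a matrix-style generic):

    f <- function(x, ncol = n) {
      n <- length(x)    # the default for ncol refers to a local
                        # variable defined later in the body
      if (missing(ncol)) cat("ncol is missing\n")
      ncol              # forcing the default promise finds n now
    }
    f(1:4)   # prints "ncol is missing", then returns 4

Inside an S4 method the extra wrapping made ncol appear non-missing,
and evaluating the default failed because the local n had not been
defined in the environment the promise saw.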

luke
