Re: [Rd] weird dir() behavior with broken symlinks

2016-10-18 Thread Karl Forner
Another strange behavior of list.dirs() that seems related:
docker run -ti rocker/r-base

> setwd(tempdir())
> file.symlink('from', 'to')
[1] TRUE
> list.dirs(recursive=FALSE)
[1] "./to"

> file.symlink('C/non_existing.doc', 'broken.txt')
[1] TRUE
> list.dirs(recursive=FALSE)
[1] "./broken.txt"
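
A possible way to filter these out (my own sketch, not from the thread): dir.exists() resolves symlinks, so both a broken link and a link to a non-directory report FALSE. Note dir.exists() needs R >= 3.2.0.

```r
# Sketch (not from the thread): keep only entries that resolve to a real
# directory; broken symlinks are dropped because dir.exists() follows the
# link and finds no target.
d <- list.dirs(recursive = FALSE)
d[dir.exists(d)]
```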


On Tue, Oct 18, 2016 at 3:08 PM, Karl Forner  wrote:

> I encountered very weird behavior of the dir() function that I just
> cannot understand.
>
> Reproducible example:
>
> docker run -ti rocker/r-base
> R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
> Copyright (C) 2016 The R Foundation for Statistical Computing
> Platform: x86_64-pc-linux-gnu (64-bit)
> > # setup
> > tmp <- tempfile()
> > dir.create(tmp)
> > setwd(tmp)
> > file.symlink('from', 'to')
>
> # First weirdness, the behavior of the recursive argument
> > dir()
> [1] "to"
> > dir(recursive=TRUE)
> character(0)
>
> # include.dirs makes it work again. The doc states: "Should subdirectory
> # names be included in recursive listings? (They always are in
> # non-recursive ones)."
> >dir(recursive=TRUE, include.dirs=TRUE)
> [1] "to"
>
> Best,
> Karl
>
>
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] weird dir() behavior with broken symlinks

2016-10-18 Thread Karl Forner
I encountered very weird behavior of the dir() function that I just
cannot understand.

Reproducible example:

docker run -ti rocker/r-base
R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
> # setup
> tmp <- tempfile()
> dir.create(tmp)
> setwd(tmp)
> file.symlink('from', 'to')

# First weirdness, the behavior of the recursive argument
> dir()
[1] "to"
> dir(recursive=TRUE)
character(0)

# include.dirs makes it work again. The doc states: "Should subdirectory
# names be included in recursive listings? (They always are in
# non-recursive ones)."
>dir(recursive=TRUE, include.dirs=TRUE)
[1] "to"

Best,
Karl



[Rd] mcparallel (parallel:::mcexit) does not call finalizers

2016-06-16 Thread Karl Forner
Hello,

In the context of trying to cover a package code that use parallelized
tests using the covr package, I realized that code executed using
mcparallel() was not covered,
cf https://github.com/jimhester/covr/issues/189#issuecomment-226492623

From my understanding, it seems that the package finalizer set by covr (cf
https://github.com/jimhester/covr/blob/79f7e0434f3d14a48c6fea994b67b9814b34e4e5/R/covr.R#L348)
is not called, because the forked process exits using parallel:::mcexit,
which is a non-standard exit and does not run some of the cleanup code
(e.g. the R_CleanUp function is not called).
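
The behavior described above can be illustrated with a minimal sketch (my own, not from the thread): a finalizer registered with onexit = TRUE in the forked child never runs, because the child terminates via parallel:::mcexit().

```r
library(parallel)

p <- mcparallel({
  e <- new.env()
  # onexit = TRUE asks for the finalizer to run at session exit, but the
  # forked child terminates via parallel:::mcexit(), which bypasses the
  # normal shutdown path (R_CleanUp), so nothing is printed by the child.
  reg.finalizer(e, function(e) cat("finalizer ran\n"), onexit = TRUE)
  TRUE
})
mccollect(p)
```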

I was wondering if a modification of the parallel mcexit could be
considered, to make it call the finalizers, possibly triggered by a
parameter or an option, or if there are solid reasons not to do so.

Regards,
Karl Forner



[Rd] MAX_NUM_DLLS too low ?

2015-05-08 Thread Karl Forner
Hello,

My problem is that I hit the hard-coded MAX_NUM_DLLS (100) limit of the
number of loaded DLLs.
I have a number of custom packages which interface and integrate a lot of
CRAN and Bioconductor packages.

For example, on my installation:
 Rscript -e 'library(crlmm);print(length(getLoadedDLLs()))'
gives 28 loaded DLLs.

I am currently trying to work around that by putting external packages in
Suggests: instead of Imports: and lazy-loading them, but I am still
wondering whether that threshold value of 100 is still relevant nowadays,
or whether it could be increased.
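
The work-around mentioned above could look roughly like this (a sketch; run_crlmm_step is a hypothetical wrapper, and crlmm is just the example package from this post):

```r
# With crlmm declared in Suggests: rather than Imports:, its DLLs (and
# those of its dependencies) are only loaded, and only count against
# MAX_NUM_DLLS, when the feature is actually used.
run_crlmm_step <- function(...) {
  if (!requireNamespace("crlmm", quietly = TRUE))
    stop("package 'crlmm' is required for this function")
  crlmm::crlmm(...)
}
```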

Thanks,

Karl Forner



Re: [Rd] isOpen() misbehaviour

2014-06-19 Thread Karl Forner
Thanks Joris, it makes sense now, though the doc is a bit misleading.

On Thu, Jun 19, 2014 at 3:22 PM, Joris Meys  wrote:
> Hi Karl,
>
> that is expected. The moment you close a connection, it is destroyed as well
> (see ?close). A destroyed connection cannot be tested. In fact, I've used
> isOpen() only in combination with the argument rw.
>
>> con <- file("clipboard",open="r")
>> isOpen(con,"write")
> [1] FALSE
>
> cheers
>
>
> On Thu, Jun 19, 2014 at 3:10 PM, Karl Forner  wrote:
>>
>> Hello,
>>
>> From the doc, it says:
>>  "isOpen returns a logical value, whether the connection is currently
>> open."
>>
>> But actually it seems to die on closed connections:
>> > con <- file()
>> > isOpen(con)
>> [1] TRUE
>> > close(con)
>> > isOpen(con)
>> Error in isOpen(con) : invalid connection
>>
>> Is it expected?
>> Tested on R-3.0.2 and R version 3.1.0 Patched (2014-06-11 r65921) on
>> linux x86_64
>>
>> Karl
>>
>
>
>
>
> --
> Joris Meys
> Statistical consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Mathematical Modelling, Statistics and Bio-Informatics
>
> tel : +32 9 264 59 87
> joris.m...@ugent.be
> ---
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php



[Rd] isOpen() misbehaviour

2014-06-19 Thread Karl Forner
Hello,

From the doc, it says:
 "isOpen returns a logical value, whether the connection is currently open."

But actually it seems to die on closed connections:
> con <- file()
> isOpen(con)
[1] TRUE
> close(con)
> isOpen(con)
Error in isOpen(con) : invalid connection

Is it expected?
Tested on R-3.0.2 and R version 3.1.0 Patched (2014-06-11 r65921) on
linux x86_64
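
A defensive work-around (my own sketch, not part of the thread) is to trap the error and treat a destroyed connection as not open:

```r
# Returns FALSE for a destroyed connection instead of raising
# "invalid connection".
is_open_safely <- function(con) {
  tryCatch(isOpen(con), error = function(e) FALSE)
}

con <- file()
is_open_safely(con)  # TRUE
close(con)
is_open_safely(con)  # FALSE rather than an error
```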

Karl



Re: [Rd] regression bug with getParseData and/or parse in R-3.1.0

2014-06-12 Thread Karl Forner
Thank you Duncan.

I confirm:

R version 3.1.0 Patched (2014-06-11 r65921) -- "Spring Dance"


> getParseData(parse(text = "{1}", keep.source = TRUE))
  line1 col1 line2 col2 id parent     token terminal text
7     1    1     1    3  7      0      expr    FALSE
1     1    1     1    1  1      7       '{'     TRUE    {
2     1    2     1    2  2      3 NUM_CONST     TRUE    1
3     1    2     1    2  3      7      expr    FALSE
4     1    3     1    3  4      7       '}'     TRUE    }

Karl


On Thu, Jun 12, 2014 at 2:39 PM, Duncan Murdoch 
wrote:

> On 12/06/2014, 7:37 AM, Karl Forner wrote:
> > Hi,
> >
> > With R-3.1.0 I get:
> >> getParseData(parse(text = "{1}", keep.source = TRUE))
> >   line1 col1 line2 col2 id parent     token terminal text
> > 7     1    1     1    3  7      9      expr    FALSE
> > 1     1    1     1    1  1      7       '{'     TRUE    {
> > 2     1    2     1    2  2      3 NUM_CONST     TRUE    1
> > 3     1    2     1    2  3      5      expr    FALSE
> > 4     1    3     1    3  4      7       '}'     TRUE    }
> >
> > Which has two problems:
> > 1) the parent of the first expression (id=7) should be 0
> > 2) the parent of the expression with id=3 should be 7
>
> I believe this has been fixed in R-patched.  Could you please check?
>
> The problem was due to an overly aggressive optimization introduced in
> R-devel in June, 2013.  It assumed a vector was initialized to zeros,
> but in some fairly common circumstances it wasn't, so the parent
> calculation was wrong.
>
> Luckily 3.1.1 has been delayed by incompatible schedules of various
> people, or this fix might have missed that too.  As with some other
> fixes in R-patched, this is a case of a bug that sat there for most of a
> year before being reported.  Please people, test pre-release versions.
>
> Duncan Murdoch
>
>
> >
> > For reference, with R-3.0.2:
> >
> >> getParseData(parse(text = "{1}", keep.source = TRUE))
> >   line1 col1 line2 col2 id parent     token terminal text
> > 7     1    1     1    3  7      0      expr    FALSE
> > 1     1    1     1    1  1      7       '{'     TRUE    {
> > 2     1    2     1    2  2      3 NUM_CONST     TRUE    1
> > 3     1    2     1    2  3      7      expr    FALSE
> > 4     1    3     1    3  4      7       '}'     TRUE    }
> >
> > which is correct.
> >
> >
>
>



[Rd] regression bug with getParseData and/or parse in R-3.1.0

2014-06-12 Thread Karl Forner
Hi,

With R-3.1.0 I get:
> getParseData(parse(text = "{1}", keep.source = TRUE))
  line1 col1 line2 col2 id parent     token terminal text
7     1    1     1    3  7      9      expr    FALSE
1     1    1     1    1  1      7       '{'     TRUE    {
2     1    2     1    2  2      3 NUM_CONST     TRUE    1
3     1    2     1    2  3      5      expr    FALSE
4     1    3     1    3  4      7       '}'     TRUE    }

Which has two problems:
1) the parent of the first expression (id=7) should be 0
2) the parent of the expression with id=3 should be 7

For reference, with R-3.0.2:

> getParseData(parse(text = "{1}", keep.source = TRUE))
  line1 col1 line2 col2 id parent     token terminal text
7     1    1     1    3  7      0      expr    FALSE
1     1    1     1    1  1      7       '{'     TRUE    {
2     1    2     1    2  2      3 NUM_CONST     TRUE    1
3     1    2     1    2  3      7      expr    FALSE
4     1    3     1    3  4      7       '}'     TRUE    }

which is correct.
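
The two invariants can also be checked mechanically (a sketch of my own, not from the thread). On the buggy R-3.1.0 output the second assertion fails, since parents 9 and 5 do not correspond to any node id:

```r
pd <- getParseData(parse(text = "{1}", keep.source = TRUE))
# Exactly one root node (parent == 0), and every parent must reference
# either the root marker 0 or an existing node id.
stopifnot(sum(pd$parent == 0) == 1,
          all(pd$parent %in% c(0, pd$id)))
```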



Re: [Rd] Rjulia: a package for R call Julia through Julia C API

2014-06-06 Thread Karl Forner
Excellent.
By any chance, are you aware of a Julia way to perform the opposite:
calling R from Julia?
Thanks


On Fri, Jun 6, 2014 at 7:23 AM, Yu Gong  wrote:

> Hello everyone. Recently I wrote a package for R to call Julia through the
> Julia C API:
> https://github.com/armgong/RJulia
> The package can now do the following:
> 1. Basic type mapping is done: int, boolean and double R vectors to Julia
> 1-d arrays work, and Julia int32/int64/float64/bool 1-d arrays to R
> vectors work as well.
> 2. R STRSXP to Julia string 1-d array (and Julia string array back to
> STRSXP) is written, but I am not sure whether it is correct.
> 3. Julia GC can be disabled at initJulia.
> To build RJulia you need the git master branch of both Julia and R.
> The package only implements very basic functionality so far and needs more
> work, so any comments and advice are welcome.
> It can currently be used on Unix and in the Windows console; on the
> Windows GUI it crashes.
>
>



Re: [Rd] Fwd: [RFC] A case for freezing CRAN

2014-03-21 Thread Karl Forner
On Fri, Mar 21, 2014 at 6:27 PM, Gábor Csárdi wrote:

> On Fri, Mar 21, 2014 at 12:40 PM, Karl Forner wrote:
> [...]
>
>> Hmm, what if your package depends on packages A and B, A depends on
>> C v1.0, and B depends on C v1.1? This is just an example, but I imagine
>> that will lead to a lot of complexity.
>>
>
> You'll have to be able to load (but not attach, of course!) multiple
> versions of the same package at the same time. The search paths are set up
> so that A imports v1.0 of C, B imports v1.1. This is possible to support
> with R's namespaces and imports mechanisms, I believe.
>

Not really: I think there are unfortunately still cases where you have to
use Depends, e.g. when defining S4 methods for classes implemented in other
packages.
But my point is that you would need really smart tools, AND the ability to
install precise versions of packages.



> It requires quite some work, though, so I am obviously not saying to
> switch to it tomorrow. Having a CRAN-devel seems simpler.
>

Indeed.



Re: [Rd] Fwd: [RFC] A case for freezing CRAN

2014-03-21 Thread Karl Forner
> On Fri, Mar 21, 2014 at 12:08 PM, Karl Forner wrote:
> [...]
>
> - "exact deps versions":
>> will put a lot of burden on the developer.
>>
>
> Not really, in my opinion, if you have the proper tools. Most likely when
> you develop any given version of your package you'll use certain versions
> of other packages, probably the most recent at that time.
>
> If there is a build tool that just puts these version numbers into the
> DESCRIPTION file, you don't need to do anything extra.
>

I of course assumed that this part was automatic.



>
> In fact, it is easier for the developer, because if you work on your
> release for a month, at the end you don't have to make sure that your
> package works with packages that were updated in the meanwhile.
>

Hmm, what if your package depends on packages A and B, A depends on C v1.0,
and B depends on C v1.1? This is just an example, but I imagine that will
lead to a lot of complexity.



>
> Gabor
>
> [...]
>



[Rd] Fwd: [RFC] A case for freezing CRAN

2014-03-21 Thread Karl Forner
Interesting and strategic topic indeed.

One other point is that reproducibility (and backwards compatibility) is
also very important in the industry. To get acceptance it can really help
if you can easily reproduce results.

Concerning the arguments that I read in this discussion:

- "do it yourself"
The point of the discussion is to find the best way forward for the
community, and thinking collectively about these general problems can
never hurt.
Once a consensus is reached we can think about the resources.

- "don't think the effort is worth it, instead install a specific version
of package" + "new sessionInfoPlus()":
This could work, meaning it could achieve the same result, but not at the
same cost for users, because it would require each script writer to include
their sessionInfo() and to store it along with the scripts in repositories.
And prior to running the scripts, you would have to install the snapshot of
packages, not to mention installation problems and so on.

- "record versions automatically at package build time (in DESCRIPTION)":
does not really solve the problem, because if package A is submitted with a
dependency on B-1.0 and package C with a dependency on B-2, what do you do?

- "exact deps versions":
will put a lot of burden on the developer.

- "I do not want to wait a year to get a new (or updated package)", "access
to bug fixes":

Installed packages are already set up as libraries. By default you have the
library inside the R installation, which contains the base packages plus
those installed by install.packages() if you have the proper permissions,
and the personal library otherwise.
Why not organize these libraries so that:
  - normal CRAN versions associated with the R version get installed along
the base packages
  - "critical updates", meaning fixes for important bugs found in normal
CRAN versions, get installed in the critical/ library
  - additional packages and updated packages go in another library.
This way, using the existing .libPaths() mechanism, or equivalently the
lib.loc argument of library(), one could easily switch between the library
that ensures full compatibility and reproducibility with the R version, or
add critical updates, or use the newer or updated packages.
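
The switching could look roughly like this (a sketch; all paths are hypothetical, and .libPaths() always keeps the base system library at the end of the search path):

```r
# Reproducible mode: only the frozen library shipped with this R version.
.libPaths("/opt/R/3.0.2/library-frozen")

# Frozen + critical fixes: the critical/ library is searched first.
.libPaths(c("/opt/R/3.0.2/library-critical",
            "/opt/R/3.0.2/library-frozen"))

# Newer/updated packages take precedence over both.
.libPaths(c("~/R/library-extra",
            "/opt/R/3.0.2/library-critical",
            "/opt/R/3.0.2/library-frozen"))
```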

- new use case.
Here at Quartz Bio we have two architectures, hence two R installations for
each R version. It is quite cumbersome to keep them consistent, because the
installed version depends on the moment you run install.packages().

So I second Jeroen's proposal to have a snapshot of package versions tied
to a given R version, well tested altogether. This implies, as stated by
Herve, keeping all package source versions, and will solve the BioC
reproducibility issue.

Best,
Karl Forner








On Tue, Mar 18, 2014 at 9:24 PM, Jeroen Ooms wrote:

> This came up again recently with an irreproducible paper. Below an
> attempt to make a case for extending the r-devel/r-release cycle to
> CRAN packages. These suggestions are not in any way intended as
> criticism on anyone or the status quo.
>
> The proposal described in [1] is to freeze a snapshot of CRAN along
> with every release of R. In this design, updates for contributed
> packages treated the same as updates for base packages in the sense
> that they are only published to the r-devel branch of CRAN and do not
> affect users of "released" versions of R. Thereby all users, stacks
> and applications using a particular version of R will by default be
> using the identical version of each CRAN package. The bioconductor
> project uses similar policies.
>
> This system has several important advantages:
>
> ## Reproducibility
>
> Currently r/sweave/knitr scripts are unstable because of ambiguity
> introduced by constantly changing cran packages. This causes scripts
> to break or change behavior when upstream packages are updated, which
> makes reproducing old results extremely difficult.
>
> A common counter-argument is that script authors should document
> package versions used in the script using sessionInfo(). However even
> if authors would manually do this, reconstructing the author's
> environment from this information is cumbersome and often nearly
> impossible, because binary packages might no longer be available,
> dependency conflicts, etc. See [1] for a worked example. In practice,
> the current system causes many results or documents generated with R
> not to be reproducible, sometimes already after a few months.
>
> In a system where contributed packages inherit the r-base release
> cycle, scripts will behave the same across users/systems/time within a
> given version of R. This severely reduces ambiguity of R behavior, and
> has the potential of making reproducibility a natural part of the
> language, rather than a tedious exercise.
>
> ## Repository Management
>
> Just like scripts suffer

Re: [Rd] [PATCH] Code coverage support proof of concept

2014-03-07 Thread Karl Forner
Here's an updated version of the patch that fixes a stack imbalance bug.
N.B: the patch seems to work fine with R-3.0.2 too.

On Wed, Mar 5, 2014 at 5:16 PM, Karl Forner  wrote:
> Hello,
>
> I submit a patch for review that implements code coverage tracing in
> the R interpreter.
> It records the lines that are actually executed and their associated
> frequency for which srcref information is available.
>
> I perfectly understand that this patch will not make its way into R
> as it is, as there are many concerns of stability, compatibility,
> maintenance and so on.
> I would like to have the code reviewed, and proper guidance on how to
> get this feature available at one point in R, in base R or as a
> package or patch if other people are interested.
>
> Usage
> 
> Rcov_start()
> # your code to trace here
> res <- Rcov_stop()
>
> res is currently a hashed env, with traced source filenames associated
> with 2-column matrices holding the line numbers and their
> frequencies.
>
>
> How it works
> -
> I added a test in getSrcref() that records the line numbers if code
> coverage is started.
> The overhead should be minimal since, for a given file, subsequent
> covered lines will be stored
> in constant time. I use a hashed env to store the occurrences by file.
>
> I added two entry points in the utils package (Rcov_start() and Rcov_stop())
>
>
> Example
> -
> * untar the latest R-devel and cd into it
> * patch -p1 < rdev-cov-patch.txt
> * ./configure [... ] && make && [sudo] make install
> * install the devtools package
> * run the following script using Rscript
>
> library(methods)
> library(devtools)
> pkg  <- download.packages('testthat', '.', repos = "http://stat.ethz.ch/CRAN";)
> untar(pkg[1, 2])
>
> Rcov_start()
> test('testthat')
> env <- Rcov_stop()
>
> res <- lapply(ls(env), get, envir = env)
> names(res) <- ls(env)
> print(res)
>
>
> This will hopefully output something like:
> $`.../testthat/R/auto-test.r`
>  [,1] [,2]
> [1,]   33    1
> [2,]   80    1
>
> $`.../testthat/R/colour-text.r`
>   [,1] [,2]
>  [1,]   18    1
>  [2,]   19  106
>  [3,]   20  106
>  [4,]   22  106
>  [5,]   23  106
>  [6,]   40    1
>  [7,]   59    1
>  [8,]   70    1
>  [9,]   71  106
> ...
>
>
> Karl Forner
>
>
> Disclaimer
> -
> There are probably bugs and ugly statements, but this is just a proof
> of concept. It is untested and has only been run on Linux x86_64.
diff -urN -x '.*' R-devel/src/library/utils/man/Rcov_start.Rd R-develcov/src/library/utils/man/Rcov_start.Rd
--- R-devel/src/library/utils/man/Rcov_start.Rd 1970-01-01 01:00:00.0 +0100
+++ R-develcov/src/library/utils/man/Rcov_start.Rd  2014-03-07 18:41:33.117646470 +0100
@@ -0,0 +1,26 @@
+% File src/library/utils/man/Rcov_start.Rd
+% Part of the R package, http://www.R-project.org
+% Copyright 1995-2010 R Core Team
+% Distributed under GPL 2 or later
+
+\name{Rcov_start}
+\alias{Rcov_start}
+\title{Start Code Coverage analysis of R's Execution}
+\description{
+  Start Code Coverage analysis of the execution of \R expressions.
+}
+\usage{
+Rcov_start(nb_lines = 1L, growth_rate = 2)
+}
+\arguments{
+  \item{nb_lines}{
+Initial max number of lines per source file. 
+  }
+  \item{growth_rate}{
+growth factor of the line numbers vectors per filename. 
+If a reached line number L is greater than  nb_lines, the vector will
+be reallocated with provisional size of growth_rate * L. 
+  }
+}
+
+\keyword{utilities}
diff -urN -x '.*' R-devel/src/library/utils/man/Rcov_stop.Rd R-develcov/src/library/utils/man/Rcov_stop.Rd
--- R-devel/src/library/utils/man/Rcov_stop.Rd  1970-01-01 01:00:00.0 +0100
+++ R-develcov/src/library/utils/man/Rcov_stop.Rd   2014-03-07 18:41:33.117646470 +0100
@@ -0,0 +1,20 @@
+% File src/library/utils/man/Rcov_stop.Rd
+% Part of the R package, http://www.R-project.org
+% Copyright 1995-2010 R Core Team
+% Distributed under GPL 2 or later
+
+\name{Rcov_stop}
+\alias{Rcov_stop}
+\title{Stop Code Coverage analysis of R's Execution}
+\description{
+  Stop Code Coverage analysis of the execution of \R expressions and return the results.
+}
+\usage{
+Rcov_stop()
+}
+
+\value{
+  a named list of integer vectors holding occurrence counts (line number, frequency),
+  named after the covered source file names.
+}
+\keyword{utilities}
diff -urN -x '.*' R-devel/src/library/utils/NAMESPACE R-develcov/src/library/utils/NAMESPACE
--- R-devel/src/library/utils/NAMESPACE 2013-09-10 03:04:59.0 +0200
+++ R-develcov/src/library/utils/NAMESPACE  2014-03-07 18:41:33.121646470 +0100
@@ -1,7 +1,7 @@
 # 

[Rd] [PATCH] Code coverage support proof of concept

2014-03-05 Thread Karl Forner
Hello,

I submit a patch for review that implements code coverage tracing in
the R interpreter.
It records the lines that are actually executed and their associated
frequency for which srcref information is available.

I perfectly understand that this patch will not make its way into R
as it is, as there are many concerns of stability, compatibility,
maintenance and so on.
I would like to have the code reviewed, and proper guidance on how to
get this feature available at one point in R, in base R or as a
package or patch if other people are interested.

Usage

Rcov_start()
# your code to trace here
res <- Rcov_stop()

res is currently a hashed env, with traced source filenames associated
with 2-column matrices holding the line numbers and their
frequencies.


How it works
-
I added a test in getSrcref() that records the line numbers if code
coverage is started.
The overhead should be minimal since, for a given file, subsequent
covered lines will be stored
in constant time. I use a hashed env to store the occurrences by file.

I added two entry points in the utils package (Rcov_start() and Rcov_stop())


Example
-
* untar the latest R-devel and cd into it
* patch -p1 < rdev-cov-patch.txt
* ./configure [... ] && make && [sudo] make install
* install the devtools package
* run the following script using Rscript

library(methods)
library(devtools)
pkg  <- download.packages('testthat', '.', repos = "http://stat.ethz.ch/CRAN";)
untar(pkg[1, 2])

Rcov_start()
test('testthat')
env <- Rcov_stop()

res <- lapply(ls(env), get, envir = env)
names(res) <- ls(env)
print(res)


This will hopefully output something like:
$`.../testthat/R/auto-test.r`
 [,1] [,2]
[1,]   33    1
[2,]   80    1

$`.../testthat/R/colour-text.r`
  [,1] [,2]
 [1,]   18    1
 [2,]   19  106
 [3,]   20  106
 [4,]   22  106
 [5,]   23  106
 [6,]   40    1
 [7,]   59    1
 [8,]   70    1
 [9,]   71  106
...


Karl Forner


Disclaimer
-
There are probably bugs and ugly statements, but this is just a proof
of concept. It is untested and has only been run on Linux x86_64.
diff -ruN R-devel/src/library/utils/man/Rcov_start.Rd R-devel-cov/src/library/utils/man/Rcov_start.Rd
--- R-devel/src/library/utils/man/Rcov_start.Rd 1970-01-01 01:00:00.0 +0100
+++ R-devel-cov/src/library/utils/man/Rcov_start.Rd 2014-03-05 16:07:45.907596276 +0100
@@ -0,0 +1,26 @@
+% File src/library/utils/man/Rcov_start.Rd
+% Part of the R package, http://www.R-project.org
+% Copyright 1995-2010 R Core Team
+% Distributed under GPL 2 or later
+
+\name{Rcov_start}
+\alias{Rcov_start}
+\title{Start Code Coverage analysis of R's Execution}
+\description{
+  Start Code Coverage analysis of the execution of \R expressions.
+}
+\usage{
+Rcov_start(nb_lines = 1L, growth_rate = 2)
+}
+\arguments{
+  \item{nb_lines}{
+Initial max number of lines per source file. 
+  }
+  \item{growth_rate}{
+growth factor of the line numbers vectors per filename. 
+If a reached line number L is greater than  nb_lines, the vector will
+be reallocated with provisional size of growth_rate * L. 
+  }
+}
+
+\keyword{utilities}
diff -ruN R-devel/src/library/utils/man/Rcov_stop.Rd R-devel-cov/src/library/utils/man/Rcov_stop.Rd
--- R-devel/src/library/utils/man/Rcov_stop.Rd  1970-01-01 01:00:00.0 +0100
+++ R-devel-cov/src/library/utils/man/Rcov_stop.Rd  2014-03-03 16:14:25.883440716 +0100
@@ -0,0 +1,20 @@
+% File src/library/utils/man/Rcov_stop.Rd
+% Part of the R package, http://www.R-project.org
+% Copyright 1995-2010 R Core Team
+% Distributed under GPL 2 or later
+
+\name{Rcov_stop}
+\alias{Rcov_stop}
+\title{Stop Code Coverage analysis of R's Execution}
+\description{
+  Stop Code Coverage analysis of the execution of \R expressions and return the results.
+}
+\usage{
+Rcov_stop()
+}
+
+\value{
+  a named list of integer vectors holding occurrence counts (line number, frequency),
+  named after the covered source file names.
+}
+\keyword{utilities}
diff -ruN R-devel/src/library/utils/NAMESPACE R-devel-cov/src/library/utils/NAMESPACE
--- R-devel/src/library/utils/NAMESPACE 2013-09-10 03:04:59.0 +0200
+++ R-devel-cov/src/library/utils/NAMESPACE 2014-03-03 16:18:48.407430952 +0100
@@ -1,7 +1,7 @@
 # Refer to all C routines by their name prefixed by C_
 useDynLib(utils, .registration = TRUE, .fixes = "C_")
 
-export("?", .DollarNames, CRAN.packages, Rprof, Rprofmem, RShowDoc,
+export("?", .DollarNames, CRAN.packages, Rcov_start, Rcov_stop, Rprof, Rprofmem, RShowDoc,
RSiteSearch, URLdecode, URLencode, View, adist, alarm, apropos,
aregexec, argsAnywhere, assignInMyNamespace, assignInNamespace,
as.roman, as.person, as.personList, as.relistable, aspell,
diff -ruN R-devel/src/library/utils/R/Rcov.R R-devel-cov/src/library/utils/R/Rcov.R
--- R-devel/src/library/utils/R/Rcov.R

Re: [Rd] How to catch warnings sent by arguments of s4 methods ?

2013-12-02 Thread Karl Forner
Hi,
Just to add some information and to clarify why I feel this is an
important issue.

If you have an S4 method with a default argument, it seems that you
cannot catch the warnings emitted during its evaluation. It matters
because on some occasions those warnings carry essential information
that your code needs to use.

Martin Morgan added some information about this issue on:
http://stackoverflow.com/questions/20268021/how-to-catch-warnings-sent-during-s4-method-selection
Basically the C function R_dispatchGeneric uses R_tryEvalSilent to
evaluate the method arguments, which seems not to invoke the calling
handlers.

Best,
Karl


On Fri, Nov 29, 2013 at 11:30 AM, Karl Forner  wrote:
> Hello,
>
> I apologize if this has already been addressed, and I also submitted
> this problem on SO:
> http://stackoverflow.com/questions/20268021/how-to-catch-warnings-sent-during-s4-method-selection
>
> Example code:
> setGeneric('my_method', function(x) standardGeneric('my_method') )
> setMethod('my_method', 'ANY', function(x) invisible())
>
> withCallingHandlers(my_method(warning('argh')), warning = function(w)
> { stop('got warning:', w) })
> # this does not catch the warning
>
> It seems that the warnings emitted during the evaluation of the
> arguments of S4 methods cannot be caught using
> withCallingHandlers().
>
> Is this expected? Is there a work-around?
>
> Best,
> Karl Forner



[Rd] How to catch warnings sent by arguments of s4 methods ?

2013-11-29 Thread Karl Forner
Hello,

I apologize if this has already been addressed, and I also submitted
this problem on SO:
http://stackoverflow.com/questions/20268021/how-to-catch-warnings-sent-during-s4-method-selection

Example code:
setGeneric('my_method', function(x) standardGeneric('my_method') )
setMethod('my_method', 'ANY', function(x) invisible())

withCallingHandlers(my_method(warning('argh')), warning = function(w)
{ stop('got warning:', w) })
# this does not catch the warning

It seems that the warnings emitted during the evaluation of the
arguments of S4 methods cannot be caught using
withCallingHandlers().

Is this expected? Is there a work-around?
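
One work-around (my own sketch, not from the thread) is to force the argument in a plain wrapper function before S4 dispatch, so the promise is evaluated while the calling handlers are still active:

```r
setGeneric('my_method', function(x) standardGeneric('my_method'))
setMethod('my_method', 'ANY', function(x) invisible())

# force(x) evaluates the argument (and raises the warning) inside
# ordinary R code, where withCallingHandlers() can see it.
my_method_forced <- function(x) { force(x); my_method(x) }

withCallingHandlers(my_method_forced(warning('argh')),
                    warning = function(w) stop('got warning: ',
                                               conditionMessage(w)))
# the handler now fires and converts the warning into an error
```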

Best,
Karl Forner



Re: [Rd] problem using rJava with parallel::mclapply

2013-11-11 Thread Karl Forner
Thanks Malcolm,

But it does seem to solve the problem.




On Mon, Nov 11, 2013 at 6:48 PM, Cook, Malcolm  wrote:

> Karl,
>
> I have the following notes to self that may be pertinent:
>
> options(java.parameters=
>  ## Must preceed `library(XLConnect)` in order to prevent "Java
>  ## requested System.exit(130), closing R." which happens when
>  ## rJava quits R upon trapping INT (control-c), as is done by
>  ## XLConnect (and playwith?), below. (c.f.:
>  ## https://www.rforge.net/bugzilla/show_bug.cgi?id=237)
>  "-Xrs")
>
>
> ~Malcolm
>
>
>
>  >-Original Message-
>  >From: r-devel-boun...@r-project.org [mailto:
> r-devel-boun...@r-project.org] On Behalf Of Karl Forner
>  >Sent: Monday, November 11, 2013 11:41 AM
>  >To: r-devel@r-project.org
>  >Cc: Martin Studer
>  >Subject: [Rd] problem using rJava with parallel::mclapply
>  >
>  >Dear all,
>  >
>  >I got an issue trying to parse Excel files in parallel using XLConnect:
>  >the process hangs forever.
>  >Martin Studer, the maintainer of XLConnect kindly investigated the issue,
>  >identified rJava as a possible cause of the problem:
>  >
>  >This does not work (hangs):
>  >library(parallel)
>  >require(rJava)
>  >.jinit()
>  >res <- mclapply(1:2, function(i) {
>  >  J("java.lang.Runtime")$getRuntime()$gc()
>  >  1
>  >  }, mc.cores = 2)
>  >
>  >but this works:
>  >library(parallel)
>  >res <- mclapply(1:2, function(i) {
>  >  require(rJava)
>  >  .jinit()
>  >  J("java.lang.Runtime")$getRuntime()$gc()
>  >  1
>  >}, mc.cores = 2)
>  >
>  >To cite Martin, it seems to work with mclapply when the JVM process is
>  >initialized after forking.
>  >
>  >Is this a bug or a limitation of rJava ?
>  >Or is there a good practice for rJava clients to avoid this problem ?
>  >
>  >Best,
>  >Karl
>  >
>  >P.S.
>  >> sessionInfo()
>  >R version 3.0.1 (2013-05-16)
>  >Platform: x86_64-unknown-linux-gnu (64-bit)
>  >
>  >locale:
>  > [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
>  > [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
>  > [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8
>  > [7] LC_PAPER=C LC_NAME=C
>  > [9] LC_ADDRESS=C   LC_TELEPHONE=C
>  >[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
>  >
>  >attached base packages:
>  >[1] stats graphics  grDevices utils datasets  methods   base
>  >
>  >loaded via a namespace (and not attached):
>  >[1] tools_3.0.1
>  >
>


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] problem using rJava with parallel::mclapply

2013-11-11 Thread Karl Forner
Dear all,

I ran into an issue trying to parse Excel files in parallel using XLConnect:
the process hangs forever.
Martin Studer, the maintainer of XLConnect, kindly investigated the issue
and identified rJava as a possible cause of the problem:

This does not work (hangs):
library(parallel)
require(rJava)
.jinit()
res <- mclapply(1:2, function(i) {
  J("java.lang.Runtime")$getRuntime()$gc()
  1
  }, mc.cores = 2)

but this works:
library(parallel)
res <- mclapply(1:2, function(i) {
  require(rJava)
  .jinit()
  J("java.lang.Runtime")$getRuntime()$gc()
  1
}, mc.cores = 2)

To quote Martin: it seems to work with mclapply when the JVM is
initialized after forking.

Is this a bug or a limitation of rJava?
Is there a recommended practice for rJava clients to avoid this problem?

Best,
Karl

P.S.
> sessionInfo()
R version 3.0.1 (2013-05-16)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=C LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_3.0.1



[Rd] unloadNamespace, getPackageName and "Created a package name xxx " warning

2013-10-29 Thread Karl Forner
Dear all,

Consider this code:
>library("data.table")
>unloadNamespace('data.table')

It produces some warnings
Warning in FUN(X[[1L]], ...) :
  Created a package name, ‘2013-10-29 17:05:51’, when none found
Warning in FUN(X[[1L]], ...) :
  Created a package name, ‘2013-10-29 17:05:51’, when none found
...

The warning is produced by the getPackageName() function.
e.g.
getPackageName(parent.env(getNamespace('data.table')))

I was wondering what could be done to get rid of these warnings, which I
believe are irrelevant in the unloadNamespace case.

The stack of calls is:
# where 3: sapply(where, getPackageName)
# where 4: findClass(what, classWhere)
# where 5: .removeSuperclassBackRefs(cl, cldef, searchWhere)
# where 6: methods:::cacheMetaData(ns, FALSE, ns)
# where 7: unloadNamespace(pkgname)

So for instance:
>findClass('data.frame', getNamespace('data.table'))
generates a warning which once again seems irrelevant.

On the top of my head, I could imagine adding an extra argument to
getPackageName, say warning = TRUE, which would be set to FALSE in the
getPackageName call in findClass() body.

I also wonder whether, in the case of import namespaces, getPackageName()
could find a more appropriate name:
>parent.env(getNamespace('data.table'))

attr(,"name")
[1] "imports:data.table"

This namespace has a name that might be used to generate the package name.

My question is: what should be done?
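In the meantime, a user-side workaround is possible. The sketch below is
hypothetical (quiet_unload is not part of base R): it uses a calling
handler to muffle only the specific warning in question while unloading.

```r
## Hypothetical helper: unload a namespace while muffling only the
## "Created a package name" warnings emitted during method cleanup.
quiet_unload <- function(pkg) {
  withCallingHandlers(
    unloadNamespace(pkg),
    warning = function(w) {
      if (grepl("Created a package name", conditionMessage(w)))
        invokeRestart("muffleWarning")
    }
  )
}

library(data.table)
quiet_unload("data.table")
```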

Thanks for your attention.

Karl Forner



[Rd] Possible problem with namespaceImportFrom() and methods for generic primitive functions

2013-10-18 Thread Karl Forner
Hi all,

I have a problem with a package that imports two other packages which both
export a method for the `[` primitive function.

I set up a reproducible example here:
https://github.com/kforner/namespaceImportFrom_problem.git

Basically, the testPrimitiveImport package imports testPrimitiveExport1 and
testPrimitiveExport2, which both export an S4 class and a `[` method for
the class.
Then:
R CMD INSTALL -l lib testPrimitiveExport1
R CMD INSTALL -l lib testPrimitiveExport2

The command:
R CMD INSTALL -l lib testPrimitiveImport

gives me:
Error in namespaceImportFrom(self, asNamespace(ns)) :
  trying to get slot "package" from an object of a basic class ("function")
with no slots

I get the same message if I check the package (with R CMD check), or even
if I try to load it using devtools::load_all().


I tried to investigate the problem, and I found that the error arises in
the base::namespaceImportFrom() function, and more precisely in
this block:
for (n in impnames) if (exists(n, envir = impenv, inherits = FALSE)) {
if (.isMethodsDispatchOn() && methods:::isGeneric(n,  ns)) {
genNs <- get(n, envir = ns)@package

Here n is '[', and the get(n, envir = ns) expression returns
.Primitive("["), which is a function and has no @package slot.

This will only occur if exists(n, envir = impenv, inherits = FALSE) returns
TRUE, i.e. if the '[' symbol is already in the imports env of the package.
In my case, the first call to namespaceImportFrom() is for the first import
of testPrimitiveExport1, which runs fine and populates the imports env with
'['.
But for the second call, exists(n, envir = impenv, inherits = FALSE) will
be TRUE, so that the offending line will be called.


I do not know whether the problem is on my side, e.g. a misconfiguration
of the NAMESPACE file, or whether it is a bug, in which case what should
be done?
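If it is indeed a bug, a minimal guard along these lines might avoid the
error. This is only a sketch against the offending block quoted above (not
the actual base R code): the @package slot is read only when the retrieved
object is an S4 object that actually carries one.

```r
## Sketch: guard the slot access in namespaceImportFrom().
## `n` is the symbol name ('[' here), `ns` the exporting namespace.
## .Primitive("[") is a plain function, so isS4() is FALSE for it.
obj <- get(n, envir = ns)
genNs <- if (isS4(obj) && "package" %in% slotNames(obj)) obj@package else NULL
```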

Any feedback appreciated.

Karl Forner



[Rd] sys.source() does not provide the parsing info to eval()

2013-06-24 Thread Karl Forner
Hello,

It seems that the parsing information attached to expressions parsed by the
parse() function when keep.source=TRUE is not provided to the eval()
function.

Please consider this code:

path <- tempfile()
code <- '(function() print( str( sys.calls() ) ))()'
writeLines(code, path)
sys.source(path, envir=globalenv(), keep.source=TRUE)

> OUTPUT:
Dotted pair list of 4
 $ : language sys.source(path, envir = globalenv(), keep.source = TRUE)
 $ : language eval(i, envir)
 $ : language eval(expr, envir, enclos)
 $ : language (function() print(str(sys.calls())))()
NULL

then:
eval(parse(text=code))
> OUTPUT:
Dotted pair list of 3
 $ : language eval(parse(text = code))
 $ : language eval(expr, envir, enclos)
 $ : length 1 (function() print(str(sys.calls())))()
  ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 1 1 1 42 1 42 1 1
  .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile'


As you can see, when using eval() directly, the expression/call has the
parsing information available in the "srcref" attribute, but not when using
sys.source()

Looking at sys.source() implementation, this seems to be caused by this
line:
for (i in exprs) eval(i, envir)

The attribute "srcref" is not available anymore when "exprs" is subsetted,
as illustrated by the code below:

ex <- parse( text="1+1; 2+2")

attr(ex, 'srcref')
print(str(ex))
# length 2 expression(1 + 1, 2 + 2)
#  - attr(*, "srcref")=List of 2
#   ..$ :Class 'srcref'  atomic [1:8] 1 1 1 3 1 3 1 1
#   .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile'

#   ..$ :Class 'srcref'  atomic [1:8] 1 6 1 8 6 8 1 1
#   .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile'

#  - attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' 
#  - attr(*, "wholeSrcref")=Class 'srcref'  atomic [1:8] 1 0 2 0 0 0 1 2
#   .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile'

# NULL

print( str(ex[[1]]))
#  language 1 + 1
# NULL

print( str(ex[1]))
# length 1 expression(1 + 1)
#  - attr(*, "srcref")=List of 1
#   ..$ :Class 'srcref'  atomic [1:8] 1 1 1 3 1 3 1 1
#   .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile'

# NULL


I suppose that the line "for (i in exprs) eval(i, envir)" could be replaced
by "eval(exprs, envir)"?
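An alternative that keeps per-expression evaluation would be to subset with
single brackets, since ex[i] keeps its "srcref" attribute, as shown by the
print(str(ex[1])) output above. A sketch only, not the actual sys.source()
code:

```r
## Evaluate each top-level expression one at a time while keeping the
## source references: ex[i] is a one-element expression object that
## carries its "srcref", whereas ex[[i]] is a bare call that has lost it.
ex <- parse(text = "1+1; 2+2", keep.source = TRUE)
for (i in seq_along(ex)) eval(ex[i], envir = globalenv())
```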

Best,

Karl Forner



P.S
> sessionInfo()
R version 3.0.1 (2013-05-16)
Platform: x86_64-unknown-linux-gnu (64-bit)
...



[Rd] bug in package.skeleton(), and doc typo.

2013-06-04 Thread Karl Forner
Hi all,

I think there's a bug in package.skeleton(), when using the environment
argument:

Example:

env <- new.env()
env$hello  <- function() { print('hello') }
package.skeleton(name='mypkg', environment=env)

==> does not create any source in mypkg/R/*

By the way, package.skeleton(name='mypkg', environment=env, list="hello")
does not work either.

According to the documentation:
>The arguments list, environment, and code_files provide alternative ways
to initialize the package.
> If code_files is supplied, the files so named will be sourced to form the
environment, then used to generate the package skeleton.
>Otherwise list defaults to the non-hidden files in environment (those
whose name does not start with .), but can be supplied to select a subset
of the objects in that environment.

I believe I have found the problem: in the package.skeleton() body, the two
calls to dump():
> dump(internalObjs, file = file.path(code_dir, sprintf("%s-internal.R",
name)))
> dump(item, file = file.path(code_dir, sprintf("%s.R", list0[item])))
should use the extra argument: envir=environment
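A minimal illustration of the proposed change (a sketch against the
then-current body; code_dir, item, list0 and environment are
package.skeleton()'s own variables):

```r
## Proposed: make dump() look up the objects in `environment` (the
## function argument) instead of in the calling frame, so the objects
## created in a user-supplied environment are actually found.
dump(item,
     file = file.path(code_dir, sprintf("%s.R", list0[item])),
     envir = environment)
```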

There's also a typo in the doc:
The sentence:
> Otherwise list defaults to the non-hidden **files** in environment (those
whose name does not start with .)
should be
> Otherwise list defaults to the non-hidden **objects** in environment
(those whose name does not start with .)

Best,
Karl Forner



>  sessionInfo()
R version 3.0.1 (2013-05-16)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8  LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8   LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8   LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8  LC_NAME=en_US.UTF-8
 [9] LC_ADDRESS=en_US.UTF-8LC_TELEPHONE=en_US.UTF-8
[11] LC_MEASUREMENT=en_US.UTF-8LC_IDENTIFICATION=en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] rj_1.1.3-1

loaded via a namespace (and not attached):
[1] rj.gd_1.1.3-1 tools_3.0.1



Re: [Rd] Catch SIGINT from user in backend C++ code

2013-05-06 Thread Karl Forner
Hello,

I once wrote a package called RcppProgress, which you can find here:
https://r-forge.r-project.org/R/?group_id=1230
I have not tried it in a long time, but it was developed to solve this
exact problem.
You can have a look at its companion package: RcppProgressExample.
Here's a link to the original announcement:
http://tolstoy.newcastle.edu.au/R/e17/devel/12/02/0443.html

Hope it helps.
Karl Forner
Quartz Bio

On Thu, May 2, 2013 at 1:50 AM, Jewell, Chris  wrote:
> Hi,
>
> I was wondering if anybody knew how to trap SIGINTs (ie Ctrl-C) in backend 
> C++ code for R extensions?  I'm writing a package that uses the GPU for some 
> hefty matrix operations in a tightly coupled parallel algorithm implemented 
> in CUDA.
>
> The problem is that once running, the C++ module cannot apparently be 
> interrupted by a SIGINT, leaving the user sat waiting even if they realise 
> they've launched the algorithm with incorrect settings.  Occasionally, the 
> SIGINT gets through and the C++ module stops.  However, this leaves the CUDA 
> context hanging, meaning that if the algorithm is launched again R dies.  If 
> I could trap the SIGINT, then I could make sure a) that the algorithm stops 
> immediately, and b) that the CUDA context is destructed nicely.
>
> Is there a "R-standard" method of doing this?
>
> Thanks,
>
> Chris
>
>
> --
> Dr Chris Jewell
> Lecturer in Biostatistics
> Institute of Fundamental Sciences
> Massey University
> Private Bag 11222
> Palmerston North 4442
> New Zealand
> Tel: +64 (0) 6 350 5701 Extn: 3586
>


Re: [Rd] parallel::mclapply does not return try-error objects with mc.preschedule=TRUE

2013-04-23 Thread Karl Forner
>
>> Is this a bug ?
>>
>
> Not in parallel.  Something else has changed, and I am about to commit a
> different version that still works as documented.
>
>
Thanks for replying.



[Rd] parallel::mclapply does not return try-error objects with mc.preschedule=TRUE

2013-04-11 Thread Karl Forner
Hello,

Consider this:

1)
library(parallel)
res <- mclapply(1:2, stop)
#Warning message:
#In mclapply(1:2, stop) :
# all scheduled cores encountered errors in user code

is(res[[1]], 'try-error')
#[1] FALSE


2)
library(parallel)
res <- mclapply(1:2, stop, mc.preschedule=FALSE)
#Warning message:
#In mclapply(1:2, stop, mc.preschedule = FALSE) :
#  2 function calls resulted in an error

is(res[[1]], 'try-error')
#[1] TRUE

The documentation states that:
'Each forked process runs its job inside try(..., silent = TRUE) so if
errors occur they will be stored as class "try-error" objects in the
return value and a warning will be given.'


Is this a bug?

Thanks
Karl


> sessionInfo()
R version 2.15.3 (2013-03-01)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=C LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] parallel  stats graphics  grDevices utils datasets  methods
[8] base

loaded via a namespace (and not attached):
[1] tools_2.15.3



[Rd] How to avoid using gridextra via Depends instead of Imports in a package ?

2013-03-20 Thread Karl Forner
Hello,

I really need some insight into a problem we encountered using grid,
lattice and gridExtra.

I reduced the problem to a minimal example, so the plot itself makes no
sense.

we have a package: gridextrabug

with:

DESCRIPTION
--
Package: gridextrabug
Title: gridextrabug
Version: 0.1
Author: toto
Maintainer: toto 
Description: gridextrabug
Imports:
grid,
gridExtra,
lattice,
latticeExtra,
reshape
Depends:
R (>= 2.15),
methods
Suggests:
testthat,
devtools
License: GPL (>= 3)
Collate:
'zzz.R'
'plotFDR.R'

R/plotFDR.R

plot_fdr <- function(dt,qvalue_col,pvalue_col, zoom_x=NULL, zoom_y=NULL,
fdrLimit=0,overview_plot=FALSE,...)
{

frm <- as.formula(paste(qvalue_col,"~ rank(",pvalue_col,")"))
plt <- xyplot( frm ,
data=dt,
abline=list(h=fdrLimit,lty="dashed"),
pch=16,cex=1,
type="p",
panel=panelinplot2,
subscripts= TRUE,

)

return(plt)
}

panelinplot2 <- function(x,y,subscripts,cex,type,...){

panel.xyplot(x,y,subscripts=subscripts,
ylim=c(0,1),
type=type,
cex=cex,...)
pltoverview <- xyplot(y~x,xlab=NULL,
ylab=NULL,
type="l",
par.settings=qb_theme_nopadding(),
scales=list(draw=FALSE),
cex=0.6,...)
gr <- grob(p=pltoverview, ..., cl="lattice")


grid.draw(gr)  # <--- problematic call
}

NAMESPACE
--
export(panelinplot2)
export(plot_fdr)
importFrom(grid,gpar)
importFrom(grid,grid.draw)
importFrom(grid,grid.rect)
importFrom(grid,grid.text)
importFrom(grid,grob)
importFrom(grid,popViewport)
importFrom(grid,pushViewport)
importFrom(grid,unit)
importFrom(grid,viewport)
importFrom(gridExtra,drawDetails.lattice)
importFrom(lattice,ltext)
importFrom(lattice,panel.segments)
importFrom(lattice,panel.xyplot)
importFrom(lattice,stripplot)
importFrom(lattice,xyplot)
importFrom(latticeExtra,as.layer)
importFrom(latticeExtra,layer)
importFrom(reshape,sort_df)

Then if you execute this script:

without_extra.R
--
library(gridextrabug)
p <- seq(10^-10,1,0.001)
p <- p[sample(1:length(p))]
q <- p.adjust(p, "BH")
df <- data.frame(p,q)


plt <-  plot_fdr(df,qvalue_col= "q", pvalue_col="p",
zoom_x=c(0,20),
fdrLimit=0.6,
overview_plot=TRUE)
X11()
print(plt)

you will not get the second plot, corresponding to the call to panelinplot2.


If you execute this one:

with_extra.R
--
library(gridextrabug)
p <- seq(10^-10,1,0.001)
p <- p[sample(1:length(p))]
q <- p.adjust(p, "BH")
df <- data.frame(p,q)


plt <-  plot_fdr(df,qvalue_col= "q", pvalue_col="p",
zoom_x=c(0,20),
fdrLimit=0.6,
overview_plot=TRUE)
X11()

library(gridExtra)
print(plt)

you will have the second plot.


From what I understood, the last line of panelinplot2(), "grid.draw(gr)",
dispatches to grid:::grid.draw.grob(), which in turn calls
grid:::drawGrob(), which calls grid::drawDetails(), which is an S3
generic.
The gridExtra package defines the method drawDetails.lattice().
When the package is loaded in the search() path,  the "grid.draw(x)"
call dispatches to gridExtra:::drawDetails.lattice().

We would rather avoid messing with the search path, which is a best
practice if I'm not mistaken, so we tried hard to solve it using
Imports.
But I came to realize that the problem was in the grid namespace, not
in our package namespace.

I tested it with the following work-around:
parent.env(parent.env(getNamespace('grid'))) <- getNamespace('gridExtra')

which works.
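Another workaround we considered (hedged: it assumes gridExtra exports
drawDetails.lattice, as the importFrom() directive above requires) is to
bypass S3 dispatch from the grid namespace entirely and call the imported
method explicitly inside the panel function:

```r
## Instead of grid.draw(gr), which dispatches drawDetails() from within
## the grid namespace (where gridExtra's method is not visible), call
## the method directly. `recording` mirrors the generic's signature.
gr <- grob(p = pltoverview, cl = "lattice")
gridExtra::drawDetails.lattice(gr, recording = FALSE)
```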

So my questions are:
  * did we miss something obvious?
  * what is the proper way to handle this situation?


Thanks in advance for your wisdom.

Karl Forner



[Rd] Problem using raw vectors with inline cfunction

2013-02-01 Thread Karl Forner
Hello,

From what I understood of the documentation I found, when using the
inline cfunction with convention=".C",
R raw vectors should be passed as unsigned char* to the C function.

But consider the following script:

library(inline)

testRaw <- cfunction(signature(raw='raw', len='integer')
, body='
int l = *len;
int i = 0;
Rprintf("sizeof(raw[0])=%i\\n", sizeof(raw[0]));
for (i = 0; i < l; ++i) Rprintf("%i, ", (int)raw[i]);
for (i = 0; i < l; ++i) raw[i] = i*10;
'
, convention=".C", language='C', verbose=TRUE
)

tt <- as.raw(1:10)
testRaw(tt, length(tt))


When I execute it:

$ R --vanilla --quiet < work/inline_cfunction_raw_bug.R

sizeof(raw[0])=1
192, 216, 223, 0, 0, 0, 0, 0, 224, 214,
 *** caught segfault ***
address (nil), cause 'unknown'

Traceback:
 1: .Primitive(".C")(, raw =
as.character(raw), len = as.integer(len))
 2: testRaw(tt, length(tt))
aborting ...
Segmentation fault (core dumped)


I was expecting to get in the C function a pointer on a byte array of
values (1,2,3,4,5,6,7,8,9,10).
Apparently that is not the case. I guess that the "raw =
as.character(raw)," printed in the traceback is responsible for the
observed behavior.

If this is the expected behavior, how can I get a pointer to my array of bytes?
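One workaround that avoids the .C coercion entirely is the .Call
convention, where the argument arrives in C as a SEXP and RAW() yields the
unsigned char* directly. A hedged sketch (testRaw2 is a hypothetical name;
the C body follows the same style as above):

```r
library(inline)

## With convention=".Call" the raw vector reaches C as a RAWSXP, so
## RAW(raw) points at the bytes themselves -- no as.character() coercion.
testRaw2 <- cfunction(signature(raw = 'raw')
    , body='
    unsigned char *p = RAW(raw);
    int l = LENGTH(raw);
    int i = 0;
    for (i = 0; i < l; ++i) Rprintf("%i, ", (int)p[i]);
    return R_NilValue;
    '
    , convention = ".Call", language = 'C'
)

testRaw2(as.raw(1:10))
```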


Thanks.

Karl



Re: [Rd] weird bug with parallel, RSQlite and tcltk

2013-01-07 Thread Karl Forner
Hello and thank you.
Indeed gsubfn is responsible for loading tcltk in my case.

On Thu, Jan 3, 2013 at 12:14 PM, Gabor Grothendieck
 wrote:
> options(gsubfn.engine = "R")



Re: [Rd] weird bug with parallel, RSQlite and tcltk

2013-01-03 Thread Karl Forner
Hello,

The point is that I do not use tcltk directly; it probably gets loaded as
a dependency of a dependency of a package.
When I unload it, everything works perfectly fine. I only found this
because one of my computers did not have tk8.5 installed and did not
exhibit the mentioned bug. So I really think something should be done
about this.
Maybe the "gui loop" should not be run at the loading of the tcltk
package, but when the first function is called, or something like this.

As you can see in my example code, the in-memory database is opened in
the parallel code...

Best,
Karl

On Mon, Dec 31, 2012 at 10:58 PM, Simon Urbanek
 wrote:
>
> On Dec 31, 2012, at 1:08 PM, Karl Forner wrote:
>
>> Hello,
>>
>> I spent a lot of a time on a weird bug, and I just managed to narrow it down.
>>
>
> First, tcltk and multicore don't mix well, see the warning in the 
> documentation (it mentions GUIs and AFAIR tcltk fires up a GUI event loop 
> even if you don't actually create GUI elements). Second, using any kind of 
> descriptors in parallel code is asking for trouble since those will be owned 
> by multiple processes. If you use databases files, etc. they must be opened 
> in the parallel code, they cannot be shared by multiple workers. The latter 
> is ok in your code so you're probably bitten by the former.
>
> Cheers,
> Simon



[Rd] weird bug with parallel, RSQlite and tcltk

2012-12-31 Thread Karl Forner
Hello,

I spent a lot of time on a weird bug, and I just managed to narrow it down.

In parallel code (here with parallel::mclapply, but I got it with
doMC/multicore too), if the tcltk library is loaded, R hangs when
trying to open a DB connection.
I got the same behaviour on two different computers, one dual-core,
and one with two quad-core Xeons.

Here's the code:

library(parallel)
library(RSQLite)
library(tcltk)
#unloadNamespace("tcltk")

res <- mclapply(1:2, function(x) {
db <- DBI::dbConnect("SQLite", ":memory:")
}, mc.cores=2)
print("Done")   

When I execute it (R --vanilla < test_parallel_db.R), it hangs
forever, and I have to type CTRL+C several times to interrupt it. I
then get this message:

Warning messages:
1: In selectChildren(ac, 1) : error 'Interrupted system call' in select
2: In selectChildren(ac, 1) : error 'Interrupted system call' in select

Then, just remove library(tcltk), or uncomment
unloadNamespace("tcltk"), and it works fine again.

I guess there's a bug somewhere, but where exactly?

Best,

Karl Forner

Further info:


R version 2.15.1 (2012-06-22) -- "Roasted Marshmallows"
Copyright (C) 2012 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)

ubuntu 12.04 and 12.10

ubuntu package tk8.5



Re: [Rd] portable parallel seeds project: request for critiques

2012-03-02 Thread Karl Forner
Thanks for your quick reply.

About the rngSetSeed package: is it usable at the C/C++ level?

The same can be said about initializations. Initialization is a random
> number generator, whose output is used as the initial state of some
> other generator. There is no proof that a particular initialization cannot
> be distinguished from truly random numbers in a mathematical sense for
> the same reason as above.
>
> A possible strategy is to use a cryptographically strong hash function
> for the initialization. This means to transform the seed to the initial
> state of the generator using a function, for which we have a good
> guarantee that it produces output, which is computationally hard to
> distinguish from truly random numbers. For this purpose, i suggest
> to use the package rngSetSeed provided currently at
>
>  http://www.cs.cas.cz/~savicky/randomNumbers/
>
> It is based on AES and Fortuna similarly as "randaes", but these
> components are used only for the initialization of Mersenne-Twister.
> When the generator is initialized, then it runs on its usual speed.
>
> In the notation of
>
>  http://www.agner.org/random/ran-instructions.pdf
>
> using rngSetSeed for initialization of Mersenne-Twister is Method 4
> in Section 6.1.
>


Hmm I had not paid attention to the last paragraph:

> The seeding procedure used in the present software uses a *separate
> random number generator* of a different design in order to avoid any
> interference. An extra feature is the RandomInitByArray function
> which makes
> it possible to initialize the random number generator with multiple seeds.
> We can make sure
> that the streams have different starting points by using the thread id as
> one of the seeds.
>

So it means that I am already using this solution! (in RcppRandomSFMT,
see my other post), and that I should be reasonably safe.


>
> I appreciate comments.
>
> Petr Savicky.
>
> P.S. I included some more comments on the relationship of provably good
> random number generators and P ?= NP question to the end of the page
>
>  http://www.cs.cas.cz/~savicky/randomNumbers/


Sorry but it's too involved for me.





[Rd] c/c++ Random Number Generators Benchmarks using OpenMP

2012-03-02 Thread Karl Forner
Dear R gurus,

I am interested in permutation-based CPU-intensive methods, so I had to pay
a little attention to Random Number Generators (RNGs).
For my needs, RNGs have to:
   1) be fast. I profiled my algorithms, and for some of them the
bottleneck was the RNG.
   2) be scalable, meaning that I want the RNG to remain fast as I add
threads.
   3) offer a long cycle length. Some basic generators have a cycle length
so short that you can exhaust it in a few seconds, making further
computations useless and redundant.
   4) be able to give reproducible results independent of the number of
threads used, i.e. I want my program to give the exact same results
using one or 10 threads.
   (and, of course, be good)
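Requirement 4 can be illustrated at the R level with a toy version of the
per-task seeding idea (a sketch of the principle only; the real benchmarks
seed the C++ RNG, not R's):

```r
## Seed each task from its task id: the result of task i never depends
## on which worker runs it, so any core count gives identical output.
library(parallel)
run_task <- function(i) { set.seed(i); runif(1) }
r1 <- unlist(mclapply(1:8, run_task, mc.cores = 1))
r2 <- unlist(mclapply(1:8, run_task, mc.cores = 4))
identical(r1, r2)  # TRUE on a fork-capable OS
```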

I found an implementation that seems to meet my criteria and made a
preliminary package to test it.
In the meantime Petr Savicky contacted me, saying he was about to release
a similar package called rngOpenMP.

So I decided to perform some quick benchmarks. The benchmark code is
available as a R package "rngBenchmarks" here:
https://r-forge.r-project.org/scm/viewvc.php/pkg/?root=gwas-bin-tests
but it depends on some unpublished package, like rngOpenMP, and my
preliminary package, yet available from the same URL.

As a benchmark I implemented a Monte-Carlo computation of PI.
I tried to use the exact same computation method, using a template argument
for the RNG, and providing wrappers for the different available RNGs,
except for rngOpenMP, which is not instantiable, so I adapted the code
specifically for it.

I included in the benchmark:
 - the C implementation used by the R package Rlecuyer
 - the (GNU) random_r RNG, available on GNU/Linux systems, which is
reentrant
 - my RcppRandomSFMT, wrapping a modified version of the SIMD-oriented Fast
Mersenne Twister (SFMT) Random Number Generator from the randomc library
provided by http://www.agner.org/random
 - rngOpenMP
I tried to include the rsprng RNG, but could not manage to use it in my
code.


My conclusions:
  - all the implementations work, meaning that the computed values converge
towards PI with the number of iterations
  - all the implementations are scalable.
  - RcppRandomSFMT and random_r are an order of magnitude faster than
rlecuyer and rngOpenMP
  - actually RcppRandomSFMT and random_r have very similar performance.

The problem with random_r is that its cycle length, according to my
manpage, is ~ 3E10, enabling for instance only 3 million permutations of a
vector of 10,000 elements, to be compared with

Leaving RcppRandomSFMT as the best candidate. This implementation also
allows multiple seeds, solving requirement 4 (reproducible results
independent of the number of threads) if I use the task identifier as the
second seed.

Of course I am probably biased, so please tell me if you have better
ideas for benchmarks or tests of correctness, or if you'd like some other
implementations to be included.

People interested in this topic could contact me so that we can
collaboratively propose an implementation suiting all needs.

Thanks,

Karl Forner

Annex:

I ran the benchmarks on a linux Intel(R) Xeon(R) with 2 cpus of 4 cores
each ( CPU  E5520  @ 2.27GHz).

         type threads     n        error    time time_per_chunk
1 lecuyer   1 1e+07 2.105472e-04   1.538 0.00153800
2 lecuyer   1 1e+08 4.441492e-05  15.265 0.00152650
3 lecuyer   1 1e+09 2.026819e-05 153.209 0.00153209
4 lecuyer   2 1e+07 3.182633e-04   0.821 0.00082100
5 lecuyer   2 1e+08 7.375036e-05   7.751 0.00077510
6 lecuyer   2 1e+09 9.290323e-06  76.476 0.00076476
7 lecuyer   4 1e+07 9.630351e-05   0.401 0.00040100
8 lecuyer   4 1e+08 1.263486e-05   3.887 0.00038870
9 lecuyer   4 1e+09 1.151515e-06  38.618 0.00038618
10lecuyer   8 1e+07 1.239703e-05   0.241 0.00024100
11lecuyer   8 1e+08 7.894518e-05   2.133 0.00021330
12lecuyer   8 1e+09 6.782041e-06  20.420 0.00020420
13   random_r   1 1e+07 7.898746e-05   0.137 0.00013700
14   random_r   1 1e+08 4.748343e-05   1.290 0.00012900
15   random_r   1 1e+09 1.685692e-05  12.844 0.00012844
16   random_r   2 1e+07 4.757590e-06   0.095 0.9500
17   random_r   2 1e+08 7.389450e-05   0.663 0.6630
18   random_r   2 1e+09 2.913732e-05   6.469 0.6469
19   random_r   4 1e+07 1.664590e-04   0.037 0.3700
20   random_r   4 1e+08 1.138106e-04   0.330 0.3300
21   random_r   4 1e+09 3.734717e-05   3.209 0.3209
22   random_r   8 1e+07 1.034678e-04   0.051 0.5100
23   random_r   8 1e+08 4.733472e-05   0.167 0.1670
24   random_r   8 1e+09 1.985413e-05   1.694 0.1694
25 rng_openmp   1 1e+07 2.097492e-04   1.231 0.00123100
26 rng_openmp   1 1e+08 7.580436e-05  12.155 0.00121550
27 rng_openmp   1 1e+09 2.772810e-05 120.712 0.00120712
28 rng_openmp   2 1

Re: [Rd] portable parallel seeds project: request for critiques

2012-03-02 Thread Karl Forner
> Some of the random number generators allow as a seed a vector,
> not only a single number. This can simplify generating the seeds.
> There can be one seed for each of the 1000 runs and then,
> the rows of the seed matrix can be
>
>  c(seed1, 1), c(seed1, 2), ...
>  c(seed2, 1), c(seed2, 2), ...
>  c(seed3, 1), c(seed3, 2), ...
>  ...
>
> There could be even only one seed and the matrix can be generated as
>
>  c(seed, 1, 1), c(seed, 1, 2), ...
>  c(seed, 2, 1), c(seed, 2, 2), ...
>  c(seed, 3, 1), c(seed, 3, 2), ...
>
> If the initialization using the vector c(seed, i, j) is done
> with a good quality hash function, the runs will be independent.
>
> What is your opinion on this?
>
> An advantage of seeding with a vector is also that there can
> be significantly more initial states of the generator among
> which we select by the seed than 2^32, which is the maximum
> for a single integer seed.
>
>

Hello,
I would also be in favor of using multiple seeds based on (seed,
task_number) for convenience (i.e. avoiding storing the seeds),
and with the possibility of having a dynamic number of tasks, but I am not
sure it is theoretically correct.
But I can refer you to this article:
http://www.agner.org/random/ran-instructions.pdf , section 6.1
where the author states:

For example, if we make 100 streams of 10^10 random numbers each from an
> SFMT
> generator with cycle length ρ = 2^11213, we have a probability of overlap
> p ≈ 10^-3362.
>

What do you think? I am very concerned about the correctness of this approach,
so I would appreciate any advice on that matter.

Thanks
Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] RcppProgress: progress monitoring and interrupting c++ code, request for comments

2012-02-23 Thread Karl Forner
Hello,

I just created a little package, RcppProgress, to display a progress bar to
monitor the execution status of a C++ code loop, possibly multithreaded with
OpenMP.
I also implemented the possibility of checking for user interruption, using
the work-around by Simon Urbanek.

I just uploaded the package on my R-forge project, so you should be able to
get the package from
https://r-forge.r-project.org/scm/viewvc.php/pkg/RcppProgress/?root=gwas-bin-tests

* The progress bar is displayed using REprintf, so that it works also in
the eclipse StatET console, provided that you disable the scroll lock.
* You should be able to nicely interrupt the execution by typing CTRL+C in
the R console, or by clicking the "cancel current task" in the StatET
console.
* I tried to write a small documentation, included in the package, but
basically you use it like this:

The main loop:

Progress p(max, display_progress); // create the progress monitor
#pragma omp parallel for schedule(dynamic)
for (int i = 0; i < max; ++i) {
    if ( ! p.is_aborted() ) { // the only way to exit an OpenMP loop
        long_computation(nb);
        p.increment(); // update the progress
    }
}

and in your computation intensive function:

void long_computation(int nb) {
    double sum = 0;
    for (int i = 0; i < nb; ++i) {
        if ( Progress::check_abort() )
            return;
        for (int j = 0; j < nb; ++j) {
            sum += Rf_dlnorm(i+j, 0.0, 1.0, 0);
        }
    }
}

I provided two small R test functions so that you can see how it looks,
please see the doc.

I would be extremely grateful if you could give me comments, criticisms,
and other suggestions.

I am releasing this now so that I can reuse this functionality in my other
packages.

Best regards,
Karl Forner

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] OpenMP and random number generation

2012-02-22 Thread Karl Forner
Hello,

For your information, I plan to release "soon" a package with a fast,
multithread-aware RNG for C++ code in R packages.
It is currently part of one of my (not yet accepted) packages and I want to
extract it into its own package.
I plan to do some quick benchmarks too.

Of course I cannot say exactly when it will be ready.

Best,
Karl

On Wed, Feb 22, 2012 at 9:23 AM, Mathieu Ribatet <
mathieu.riba...@math.univ-montp2.fr> wrote:

> Dear all,
>
> Now that R has OpenMP facilities, I'm trying to use it for my own package
> but I'm still wondering if it is safe to use random number generation
> within a OpenMP block. I looked at the R writing extension document  both
> on the OpenMP and Random number generation but didn't find any information
> about that.
>
> Could someone tell me if it is safe or not please ?
>
> Best,
> Mathieu
>
> -
> I3M, UMR CNRS 5149
> Universite Montpellier II,
> 4 place Eugene Bataillon
> 34095 Montpellier cedex 5   France
> http://www.math.univ-montp2.fr/~ribatet
> Tel: + 33 (0)4 67 14 41 98
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] .Call in R

2011-11-18 Thread Karl Forner
Yes indeed. My mistake.

On Fri, Nov 18, 2011 at 4:45 PM, Joris Meys  wrote:

> Because if you calculate the probability and then make uniform values,
> nothing guarantees that the sum of those uniform values actually is
> larger than 50,000. You only have 50% chance it is, in fact...
> Cheers
> Joris
>
> On Fri, Nov 18, 2011 at 4:08 PM, Karl Forner 
> wrote:
> > Hi,
> >
> > A probably very naive remark, but I believe that the probability of sum(
> > runif(10000) ) >= 5000 is exactly 0.5. So why not just test that, and
> > generate the uniform values only if needed ?
> >
> >
> > Karl Forner
> >
> > On Thu, Nov 17, 2011 at 6:09 PM, Raymond 
> wrote:
> >
> >> Hi R developers,
> >>
> >>I am new to this forum and hope someone can help me with .Call in R.
> >> Greatly appreciate any help!
> >>
> >>Say, I have a vector called "vecA" of length 10000, I generate a
> vector
> >> called "vecR" with elements randomly generated from Uniform[0,1]. Both
> vecA
> >> and vecR are of double type. I want to replace elements vecA by
> elements in
> >> vecR only if sum of elements in vecR is greater than or equal to 5000.
> >> Otherwise, vecR remain unchanged. This is easy to do in R, which reads
> >>vecA<-something;
> >>vecR<-runif(10000);
> >>if (sum(vecR)>=5000){
> >>   vecA<-vecR;
> >>}
> >>
> >>
> >>Now my question is, if I am going to do the same thing in R using
> .Call.
> >> How can I achieve it in a more efficient way (i.e. less computation time
> >> compared with pure R code above.).  My c code (called "change_vecA.c")
> >> using
> >> .Call is like this:
> >>
> >>SEXP change_vecA(SEXP vecA){
> >> int i,vecA_len;
> >> double sum,*res_ptr,*vecR_ptr,*vecA_ptr;
> >>
> >> vecA_ptr=REAL(vecA);
> >> vecA_len=length(vecA);
> >> SEXP res_vec,vecR;
> >>
> >> PROTECT(res_vec=allocVector(REALSXP, vecA_len));
> >> PROTECT(vecR=allocVector(REALSXP, vecA_len));
> >> res_ptr=REAL(res_vec);
> >> vecR_ptr=REAL(vecR);
> >> GetRNGstate();
> >> sum=0.0;
> >> for (i=0;i<vecA_len;i++){
> >>  vecR_ptr[i]=runif(0,1);
> >>  sum+=vecR_ptr[i];
> >> }
> >> if (sum>=5000){
> >>/*copy vecR to the vector to be returned*/
> >>for (i=0;i<vecA_len;i++){
> >>  res_ptr[i]=vecR_ptr[i];
> >>}
> >> }
> >> else{
> >>/*copy vecA to the vector to be returned*/
> >>for (i=0;i<vecA_len;i++){
> >>  res_ptr[i]=vecA_ptr[i];
> >>}
> >> }
> >>
> >> PutRNGstate();
> >> UNPROTECT(2);
> >> return(res_vec);
> >> }
> >> My R wrapper function is
> >>change_vecA<-function(vecA){
> >>  dyn.load("change_vecA.so");
> >>  .Call("change_vecA",vecA);
> >>}
> >>
> >> Now my question is, due to two loops (one generates the random
> >> vector and one determines the vector to be returned), can .Call still be
> >> faster than pure R code (only one loop to copy vecR to vecA given
> condition
> >> is met)? Or, how can I improve my c code to avoid redundant loops if
> any.
> >> My
> >> concern is if vecA is large (say of length 100 or even bigger),
> loops
> >> in
> >> C code can slow things down.  Thanks for any help!
> >>
> >>
> >>
> >>
> >>
> >> --
> >> View this message in context:
> >> http://r.789695.n4.nabble.com/Call-in-R-tp4080721p4080721.html
> >> Sent from the R devel mailing list archive at Nabble.com.
> >>
> >> __
> >> R-devel@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-devel
> >>
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >
>
>
>
> --
> Joris Meys
> Statistical consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Mathematical Modelling, Statistics and Bio-Informatics
>
> tel : +32 9 264 59 87
> joris.m...@ugent.be
> ---
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] .Call in R

2011-11-18 Thread Karl Forner
Hi,

A probably very naive remark, but I believe that the probability of sum(
runif(10000) ) >= 5000 is exactly 0.5. So why not just test that, and
generate the uniform values only if needed ?


Karl Forner

On Thu, Nov 17, 2011 at 6:09 PM, Raymond  wrote:

> Hi R developers,
>
>I am new to this forum and hope someone can help me with .Call in R.
> Greatly appreciate any help!
>
>Say, I have a vector called "vecA" of length 10000, I generate a vector
> called "vecR" with elements randomly generated from Uniform[0,1]. Both vecA
> and vecR are of double type. I want to replace elements vecA by elements in
> vecR only if sum of elements in vecR is greater than or equal to 5000.
> Otherwise, vecR remain unchanged. This is easy to do in R, which reads
>vecA<-something;
>vecR<-runif(10000);
>if (sum(vecR)>=5000){
>   vecA<-vecR;
>}
>
>
>Now my question is, if I am going to do the same thing in R using .Call.
> How can I achieve it in a more efficient way (i.e. less computation time
> compared with pure R code above.).  My c code (called "change_vecA.c")
> using
> .Call is like this:
>
>SEXP change_vecA(SEXP vecA){
> int i,vecA_len;
> double sum,*res_ptr,*vecR_ptr,*vecA_ptr;
>
> vecA_ptr=REAL(vecA);
> vecA_len=length(vecA);
> SEXP res_vec,vecR;
>
> PROTECT(res_vec=allocVector(REALSXP, vecA_len));
> PROTECT(vecR=allocVector(REALSXP, vecA_len));
> res_ptr=REAL(res_vec);
> vecR_ptr=REAL(vecR);
> GetRNGstate();
> sum=0.0;
> for (i=0;i<vecA_len;i++){
>  vecR_ptr[i]=runif(0,1);
>  sum+=vecR_ptr[i];
> }
> if (sum>=5000){
>/*copy vecR to the vector to be returned*/
>for (i=0;i<vecA_len;i++){
>  res_ptr[i]=vecR_ptr[i];
>}
> }
> else{
>/*copy vecA to the vector to be returned*/
>for (i=0;i<vecA_len;i++){
>  res_ptr[i]=vecA_ptr[i];
>}
> }
>
> PutRNGstate();
> UNPROTECT(2);
> return(res_vec);
> }
> My R wrapper function is
>change_vecA<-function(vecA){
>  dyn.load("change_vecA.so");
>  .Call("change_vecA",vecA);
>}
>
> Now my question is, due to two loops (one generates the random
> vector and one determines the vector to be returned), can .Call still be
> faster than pure R code (only one loop to copy vecR to vecA given condition
> is met)? Or, how can I improve my c code to avoid redundant loops if any.
> My
> concern is if vecA is large (say of length 100 or even bigger), loops
> in
> C code can slow things down.  Thanks for any help!
>
>
>
>
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Call-in-R-tp4080721p4080721.html
> Sent from the R devel mailing list archive at Nabble.com.
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Fwd: Error in svg() : cairo-based devices are not supported on this build

2011-06-06 Thread Karl Forner
> Check what configure is saying when you build R and config.log. You may be
> simply missing something like pango-dev - Cairo doesn't use pango while R
> does - but it is usually optional (it works on my Mac without pango) so
> there may be more to it - config.log will tell you.
>

I managed to compile it successfully with pango-cairo support by editing the
configure script and adding the pangoxft module to the pkg-config list:
%diff -c configure.bak  configure
*** configure.bak   2011-05-31 16:16:55.0 +0200
--- configure   2011-05-31 16:17:21.0 +0200
***
*** 31313,31319 
  $as_echo "$r_cv_has_pangocairo" >&6; }
if test "x${r_cv_has_pangocairo}" = "xyes"; then
  modlist="pangocairo"
! for module in cairo-xlib cairo-png; do
if "${PKGCONF}" --exists ${module}; then
modlist="${modlist} ${module}"
fi
--- 31313,31319 
  $as_echo "$r_cv_has_pangocairo" >&6; }
if test "x${r_cv_has_pangocairo}" = "xyes"; then
  modlist="pangocairo"
! for module in cairo-xlib cairo-png pangoxft; do
if "${PKGCONF}" --exists ${module}; then
modlist="${modlist} ${module}"
fi


I do not know if it is an error in the configure script or just a
peculiarity of my installation. All these libs (pango, cairo, gtk, glib)
have been installed manually from tarballs.

Best,

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Error in svg() : cairo-based devices are not supported on this build

2011-05-19 Thread Karl Forner
Hello,

Sorry if this is not the right place.


I installed R-2.13.0 on a x86_64 linux server.
All went fine, but the svg() function yells:
> svg()
Error in svg() : cairo-based devices are not supported on this build

I have the Cairo, cairoDevice, and RSvgDevice packages installed and running.

> Cairo.capabilities()
  png  jpeg  tiff   pdf   svgps   x11   win
 TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE

I tried to google around unsuccessfully. The only thing I noticed in
config.log is:
r_cv_has_pangocairo=no
r_cv_cairo_works=yes
r_cv_has_cairo=yes
#define HAVE_WORKING_CAIRO 1
#define HAVE_CAIRO_PDF 1
#define HAVE_CAIRO_PS 1
#define HAVE_CAIRO_SVG 1


So what could be wrong?

Thank you

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Possible bug in cut.dendrogram when there are only 2 leaves in the tree ?

2011-01-28 Thread Karl Forner
Hello,

I noticed a behavior of the cut() function that does not seem right. In a
dendrogram with only 2 leaves in one cluster, if you cut()
at a height above this cluster, you end up with 2 cut clusters, one for each
leaf, instead of one.

But it seems to work fine for dendrograms with more than 2 objects.

For instance:

library(stats)
m <- matrix(c(0,0.1,0.1,0),nrow=2, ncol=2)
dd <- as.dendrogram(hclust(as.dist(m)))
#plot(dd)
print(cut(dd, 0.2)) # 2 clusters in $lower

m2 <- matrix(c(0,0.1,0.5,0.1,0,0.5,0.5,0.5,0),nrow=3, ncol=3)
dd <- as.dendrogram(hclust(as.dist(m2)))
print(cut(dd, 0.2)) # here 2 clusters in $lower, as expected

So the question is: is it expected behavior that the whole tree is not
reported in the $lower if it is itself under the threshold ?

Thank you,

Karl FORNER

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] dendrogram plot does not draw long labels ?

2011-01-25 Thread Karl Forner
Hi Tobias and thank you for your reply,

Using your insight I managed to work around the issue (with some help) by
increasing the "mai" option of par().
For example, a "mai" with its first coordinate (bottom) set to 5 allows
displaying ~42 letters.

We tried to change the xpd value in the text() call that you mentioned, but
it did not seem to fix the problem.

But I think this is very annoying: the dendrogram plot is meant to be the
common plotting method for all clustering results,
and suddenly, if your labels are just too long, nothing gets displayed,
without even a warning.
I suppose the margins should be set dynamically, based on the length of the
longest label to be drawn...

The hclust plot seemed to handle these long labels very nicely, but I need
to display colored labels, and the only way I found to do that is to use
plot.dendrogram.

Best,

Karl

On Tue, Jan 25, 2011 at 12:17 PM, Tobias Verbeke <
tobias.verb...@openanalytics.eu> wrote:

> Hi Karl,
>
>
> On 01/25/2011 11:27 AM, Karl Forner wrote:
>
>  It seems that the plot function for dendrograms does not draw labels when
>> they are too long.
>>
>>  hc<- hclust(dist(USArrests), "ave")
>>> dend1<- as.dendrogram(hc)
>>> dend2<- cut(dend1, h=70)
>>> dd<- dend2$lower[[1]]
>>> plot(dd) # first label is drawn
>>> attr(dd[[1]], "label")<- "aa"
>>> plot(dd) # first label is NOT drawn
>>>
>>
>> Is this expected ?
>>
>
> Reading the code of stats:::plotNode, yes.
>
> Clipping to the figure region is hard-coded.
>
> You can see it is clipping to the figure region as follows:
>
>
> hc <- hclust(dist(USArrests), "ave")
> dend1 <- as.dendrogram(hc)
> dend2 <- cut(dend1, h=70)
> dd <- dend2$lower[[1]]
> op <- par(oma = c(8,4,4,2)+0.1, xpd = NA)
>
> plot(dd) # first label is drawn
> attr(dd[[1]], "label") <- "abcdefghijklmnopqrstuvwxyz"
>
> plot(dd) # first label is NOT drawn
> box(which = "figure")
> par(op)
>
>
>  Is it possible to force the drawing ?
>>
>
> These are (from very quick reading -- not verified)
> the culprit lines in plotNode, I think:
>
> text(xBot, yBot + vln, nodeText, xpd = TRUE, # <- clipping hard-coded
>  cex = lab.cex, col = lab.col, font = lab.font)
>
> Best,
> Tobias
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] dendrogram plot does not draw long labels ?

2011-01-25 Thread Karl Forner
Hello,

It seems that the plot function for dendrograms does not draw labels when
they are too long.

> hc <- hclust(dist(USArrests), "ave")
> dend1 <- as.dendrogram(hc)
> dend2 <- cut(dend1, h=70)
> dd <- dend2$lower[[1]]
> plot(dd) # first label is drawn
> attr(dd[[1]], "label") <- "aa"
> plot(dd) # first label is NOT drawn

Is this expected ?
Is it possible to force the drawing ?

Thank you,

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] checking user interrupts in C(++) code

2010-09-29 Thread Karl Forner
Hi,

Thanks for your reply,


There are several ways in which you can make your code respond to interrupts
> properly - which one is suitable depends on your application. Probably the
> most commonly used for interfacing foreign objects is to create an external
> pointer with a finalizer - that makes sure the object is released even if
> you pass it on to R later. For memory allocated within a call you can either
> use R's transient memory allocation (see Salloc) or use the on.exit handler
> to cleanup any objects you allocated manually and left over.
>

Using R's transient memory allocation is not really an option when you use
some code, like a third-party library, that was not developed for R. Moreover,
what about C++ and the new operator?

One related question: if the code is interrupted, are C++ local objects
freed?
Otherwise it is very complex to track all allocated objects; moreover,
it depends on where the interruption happens.

Best,

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] checking user interrupts in C(++) code

2010-09-28 Thread Karl Forner
Hello,

My problem is that I have an extension in C++ that can be quite
time-consuming. I'd like to make it interruptible.
The problem is that if I use the recommended R_CheckUserInterrupt() method I
have no possibility to clean up (e.g. free the memory).

I've seen an old thread about this, but I wonder if there's a new and
definitive answer.

I just do not understand why a simple R_CheckUserInterrupt()-like method
returning a boolean could not be used.
Please enlighten me!

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Possible bug or annoyance with library.dynam.unload()

2010-09-22 Thread Karl Forner
> Your package depends on Rcpp, so I didn't try it in the alpha version of
2.12.0

That's a mistake: in fact, it no longer depends on Rcpp. You can safely delete
the src/Makevars file.


Duncan Murdoch
>
>
>  Steps to reproduce the problem:
>>
>> * unarchive it ( tar zxvf foo_0.1.tar.gz )
>> * cd foo
>> * install it locally ( mkdir local; R CMD INSTALL -l local . )
>> * R
>> >  library(foo, lib.loc="local/")
>> >.dynLibs()
>> # there you should be able to see the foo.so lib, in my case
>> /x05/people/m160508/workspace/foo/local/foo/libs/foo.so
>>
>> >  unloadNamespace("foo")
.onUnload, libpath= local/foo
Warning message:
>> .onUnload failed in unloadNamespace() for 'foo', details:
>>   call: library.dynam.unload("foo", libpath)
>>   error: shared library 'foo' was not loaded
>>
>> #The libpath that the .onUnload() gets is "local/foo".
>>
>> #This fails:
>> >library.dynam.unload("foo", "local/foo")
>> Error in library.dynam.unload("foo", "local/foo") :
>>   shared library 'foo' was not loaded
>>
>> # but if you use the absolute path it works:
>> >library.dynam.unload("foo",
>> "/x05/people/m160508/workspace/foo/local/foo")
>>
>> Karl
>>
>> On Tue, Sep 21, 2010 at 5:33 PM, Duncan Murdoch wrote:
>>
>> >   On 21/09/2010 10:38 AM, Karl Forner wrote:
>> >
>> >>  Hello,
>> >>
>> >>  I got no reply on this issue.
>> >>  It is not critical and I could think of work-around, but it really
>> looks
>> >>  like a bug to me.
>> >>  Should I file a bug-report instead of posting in this list ?
>> >>
>> >
>> >  I'd probably post instructions for a reproducible example first.  Pick
>> some
>> >  CRAN package, tell us what to do with it to trigger the error, and then
>> we
>> >  can see if it's something special about your package or Roxygen or a
>> general
>> >  problem.
>> >
>> >  Duncan Murdoch
>> >
>> >   Thanks,
>> >>
>> >>  Karl
>> >>
>> >>  On Thu, Sep 16, 2010 at 6:11 PM, Karl Forner
>> >>   wrote:
>> >>
>> >>  >   Hello,
>> >>  >
>> >>  >   I have a package with a namespace. Because I use Roxygen that
>> >>  overwrites
>> >>  >   the NAMESPACE file each time it is run, I use a R/zzz.R file with
>> >>  >   an .onLoad() and .onUnload() functions to take care of loading and
>> >>  >   unloading my shared library.
>> >>  >
>> >>  >   The problem: if I load my library from a local directory, then the
>> >>  >   unloading of the package fails, e.g:
>> >>  >
>> >>  >   # loads fine
>> >>  >   >library(Foo, lib.loc=".Rcheck")
>> >>  >
>> >>  >   >unloadNamespace("Foo")
>> >>  >   Warning message:
>> >>  >   .onUnload failed in unloadNamespace() for 'Foo', details:
>> >>  > call: library.dynam.unload("Foo", libpath)
>> >>  > error: shared library 'Foo' was not loaded
>> >>  >
>> >>  >   # I traced it a little:
>> >>  >   >library.dynam.unload("Foo", ".Rcheck/Foo")
>> >>  >   Error in library.dynam.unload("Foo", ".Rcheck/Foo") :
>> >>  > shared library 'Foo' was not loaded
>> >>  >
>> >>  >   # using an absolute path works
>> >>  >   >library.dynam.unload("Foo", "/home/toto/.Rcheck/Foo")
>> >>  >
>> >>  >
>> >>  >   So from what I understand, the problem is either that the relative
>> >>  libpath
>> >>  >   is sent to the .onUnload() function instead of the absolute one,
>> >>  >   or that library.dynam.unload() should be modified to handle the
>> >>  relative
>> >>  >   paths.
>> >>  >
>> >>  >   Am I missing something ? What should I do ?
>> >>  >
>> >>  >   Thanks,
>> >>  >
>> >>  >
>> >>  >   Karl
>> >>  >
>> >>
>> >> [[alternative HTML version deleted]]
>> >>
>> >>  __
>> >>  R-devel@r-project.org mailing list
>> >>  https://stat.ethz.ch/mailman/listinfo/r-devel
>> >>
>> >
>> >
>>
>>
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Possible bug or annoyance with library.dynam.unload()

2010-09-22 Thread Karl Forner
Thanks Duncan for your suggestion.

I could not find any package that uses a dynamic library and a namespace
without the useDynLib pragma, so
I created a minimalistic package to demonstrate the problem.
Please find attached a very small package, foo (8.8k).

Steps to reproduce the problem:

* unarchive it ( tar zxvf foo_0.1.tar.gz )
* cd foo
* install it locally ( mkdir local; R CMD INSTALL -l local . )
* R
> library(foo, lib.loc="local/")
>.dynLibs()
# there you should be able to see the foo.so lib, in my case
/x05/people/m160508/workspace/foo/local/foo/libs/foo.so

> unloadNamespace("foo")
.onUnload, libpath= local/foo
Warning message:
.onUnload failed in unloadNamespace() for 'foo', details:
  call: library.dynam.unload("foo", libpath)
  error: shared library 'foo' was not loaded

#The libpath that the .onUnload() gets is "local/foo".

#This fails:
>library.dynam.unload("foo", "local/foo")
Error in library.dynam.unload("foo", "local/foo") :
  shared library 'foo' was not loaded

# but if you use the absolute path it works:
>library.dynam.unload("foo", "/x05/people/m160508/workspace/foo/local/foo")

Karl

On Tue, Sep 21, 2010 at 5:33 PM, Duncan Murdoch wrote:

>  On 21/09/2010 10:38 AM, Karl Forner wrote:
>
>> Hello,
>>
>> I got no reply on this issue.
>> It is not critical and I could think of work-around, but it really looks
>> like a bug to me.
>> Should I file a bug-report instead of posting in this list ?
>>
>
> I'd probably post instructions for a reproducible example first.  Pick some
> CRAN package, tell us what to do with it to trigger the error, and then we
> can see if it's something special about your package or Roxygen or a general
> problem.
>
> Duncan Murdoch
>
>  Thanks,
>>
>> Karl
>>
>> On Thu, Sep 16, 2010 at 6:11 PM, Karl Forner
>>  wrote:
>>
>> >  Hello,
>> >
>> >  I have a package with a namespace. Because I use Roxygen that
>> overwrites
>> >  the NAMESPACE file each time it is run, I use a R/zzz.R file with
>> >  an .onLoad() and .onUnload() functions to take care of loading and
>> >  unloading my shared library.
>> >
>> >  The problem: if I load my library from a local directory, then the
>> >  unloading of the package fails, e.g:
>> >
>> >  # loads fine
>> >  >library(Foo, lib.loc=".Rcheck")
>> >
>> >  >unloadNamespace("Foo")
>> >  Warning message:
>> >  .onUnload failed in unloadNamespace() for 'Foo', details:
>> >call: library.dynam.unload("Foo", libpath)
>> >error: shared library 'Foo' was not loaded
>> >
>> >  # I traced it a little:
>> >  >library.dynam.unload("Foo", ".Rcheck/Foo")
>> >  Error in library.dynam.unload("Foo", ".Rcheck/Foo") :
>> >shared library 'Foo' was not loaded
>> >
>> >  # using an absolute path works
>> >  >library.dynam.unload("Foo", "/home/toto/.Rcheck/Foo")
>> >
>> >
>> >  So from what I understand, the problem is either that the relative
>> libpath
>> >  is sent to the .onUnload() function instead of the absolute one,
>> >  or that library.dynam.unload() should be modified to handle the
>> relative
>> >  paths.
>> >
>> >  Am I missing something ? What should I do ?
>> >
>> >  Thanks,
>> >
>> >
>> >  Karl
>> >
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>


foo_0.1.tar.gz
Description: GNU Zip compressed data
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Possible bug or annoyance with library.dynam.unload()

2010-09-21 Thread Karl Forner
Hello,

I got no reply on this issue.
It is not critical and I can think of work-arounds, but it really looks
like a bug to me.
Should I file a bug report instead of posting to this list?

Thanks,

Karl

On Thu, Sep 16, 2010 at 6:11 PM, Karl Forner  wrote:

> Hello,
>
> I have a package with a namespace. Because I use Roxygen that overwrites
> the NAMESPACE file each time it is run, I use a R/zzz.R file with
> an .onLoad() and .onUnload() functions to take care of loading and
> unloading my shared library.
>
> The problem: if I load my library from a local directory, then the
> unloading of the package fails, e.g:
>
> # loads fine
> >library(Foo, lib.loc=".Rcheck")
>
> >unloadNamespace("Foo")
> Warning message:
> .onUnload failed in unloadNamespace() for 'Foo', details:
>   call: library.dynam.unload("Foo", libpath)
>   error: shared library 'Foo' was not loaded
>
> # I traced it a little:
> >library.dynam.unload("Foo", ".Rcheck/Foo")
> Error in library.dynam.unload("Foo", ".Rcheck/Foo") :
>   shared library 'Foo' was not loaded
>
> # using an absolute path works
> >library.dynam.unload("Foo", "/home/toto/.Rcheck/Foo")
>
>
> So from what I understand, the problem is either that the relative libpath
> is sent to the .onUnload() function instead of the absolute one,
> or that library.dynam.unload() should be modified to handle the relative
> paths.
>
> Am I missing something ? What should I do ?
>
> Thanks,
>
>
> Karl
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Possible bug or annoyance with library.dynam.unload()

2010-09-16 Thread Karl Forner
Hello,

I have a package with a namespace. Because I use Roxygen that overwrites the
NAMESPACE file each time it is run, I use a R/zzz.R file with
an .onLoad() and .onUnload() functions to take care of loading and unloading
my shared library.

The problem: if I load my library from a local directory, then the unloading
of the package fails, e.g:

# loads fine
>library(Foo, lib.loc=".Rcheck")

>unloadNamespace("Foo")
Warning message:
.onUnload failed in unloadNamespace() for 'Foo', details:
  call: library.dynam.unload("Foo", libpath)
  error: shared library 'Foo' was not loaded

# I traced it a little:
>library.dynam.unload("Foo", ".Rcheck/Foo")
Error in library.dynam.unload("Foo", ".Rcheck/Foo") :
  shared library 'Foo' was not loaded

# using an absolute path works
>library.dynam.unload("Foo", "/home/toto/.Rcheck/Foo")


So from what I understand, the problem is either that the relative libpath
is sent to the .onUnload() function instead of the absolute one,
or that library.dynam.unload() should be modified to handle the relative
paths.

Am I missing something ? What should I do ?

Thanks,


Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Best way to manage configuration for openMP support

2010-09-15 Thread Karl Forner
Thanks a lot, I have implemented the configure stuff and it works
perfectly!
Exactly what I was looking for.

I just added AC_PREREQ([2.62]), because AC_OPENMP is only supported from
this version on, and
 AC_MSG_WARN([No OpenMP support detected. You should use gcc >= 4.2 !!!])
when no OpenMP support is detected.

Maybe this could be put into the Writing R Extensions manual.

Thanks again,

Karl

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Fwd: warning or error upon type/storage mode coercion?

2010-09-15 Thread Karl Forner
-- Forwarded message --
From: Karl Forner 
Date: Wed, Sep 15, 2010 at 10:14 AM
Subject: Re: [Rd] warning or error upon type/storage mode coercion?
To: Stefan Evert 


I'm a Perl fan, and I really miss the "use strict" feature. IMHO it's
very error-prone not to have this safety net.

Best,



On Wed, Sep 15, 2010 at 9:54 AM, Stefan Evert wrote:

>
> On 15 Sep 2010, at 03:23, Benjamin Tyner wrote:
>
> > 2. So, assuming the answer to (1) is a resounding "no", does anyone care
> to state an opinion regarding the philosophical or historical rationale for
> why this is the case in R/S, whereas certain other interpreted languages
> offer the option to perform strict type checking? Basically, I'm trying to
> explain to someone from a perl background why the (apparent) lack of a "use
> strict; use warnings;" equivalent is not a hindrance to writing bullet-proof
> R code.
>
> If they're from a Perl background, you might also want to point out to them
> that (base) Perl doesn't do _any_ type checking at all, and converts types
> as needed.  As in ...
>
> $x = "0.0";
> if ($x) ... # true
> if ($x+0) ... # false
>
> AFAIK, that's one of the main complaints that people have about Perl.  "use
> strict" will just make sure that all variables have to be declared before
> they're used, so you can't mess up by mistyping variable names.  Which is
> something I'd very much like to have in R occasionally ...
>
> Best,
> Stefan
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel