Re: [Rd] CRAN policies

2012-03-30 Thread Matthew Dowle
Mark.Bravington at csiro.au writes:

 There must be over 2000 people who have written CRAN packages by now; every
 extra check and non-back-compatible additional requirement runs the risk of
 generating false positives and incurring many extra person-hours to fix
 non-problems. Plus someone needs to document and explain the check (adding
 to the rule mountain), plus there is the time spent in discussions like
 this..!

Not sure where you're coming from on that. For example, Prof Ripley has added 
quite a few new NOTEs to QC.R over the last few months. These caught things I 
wasn't aware of in the two packages I maintain and I was more than happy to fix 
them. It improves quality, surely.

There's only one particular NOTE causing an issue: 'no visible binding'. If it
were made a MEMO, we could move on. All the other NOTEs can (and should) be
fixed, can't they?
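
(As a concrete illustration -- not from Matthew's packages, just a minimal
sketch of the kind of code that triggers this NOTE; 'speed' is an invented
column name:)

  library(codetools)
  f <- function(d) subset(d, speed > 10)
  # codetools cannot see that 'speed' is a column of d, so checkUsage()
  # (and hence R CMD check) reports it as a global with no visible binding
  checkUsage(f)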

Matthew



Re: [Rd] CRAN policies

2012-03-30 Thread Claudia Beleites
Paul,

 One of the things I have noticed with the R 2.15.0 RC and --as-cran is 
 that I have to bump the version number of the working copy of my 
[snip]
 
 I am curious how other developers approach this.

Regardless of --as-cran I find it very useful to use the date as the minor
part of the version number (e.g. hyperSpec 0.98-20120320), which I set
automatically.
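
(Claudia doesn't show how she automates this; one possible sketch using only
base R, assuming the DESCRIPTION file is in the working directory and "0.98"
is the current major/minor part:)

  # stamp today's date into the Version field, e.g. "0.98-20120330"
  desc <- read.dcf("DESCRIPTION")
  desc[1, "Version"] <- sprintf("0.98-%s", format(Sys.Date(), "%Y%m%d"))
  write.dcf(desc, file = "DESCRIPTION")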

Claudia





-- 
Claudia Beleites
Spectroscopy/Imaging
Institute of Photonic Technology
Albert-Einstein-Str. 9
07745 Jena
Germany

email: claudia.belei...@ipht-jena.de
phone: +49 3641 206-133
fax:   +49 2641 206-399



[Rd] R-forge --as-cran

2012-03-30 Thread Paul Gilbert
(Renamed from "Re: [Rd] CRAN policies" because of the multi-threading of that
subject.)


Claudia

Actually, my version numbers are year-month dates, eg 2012.3-1, although 
I don't set them automatically.


I have had some additional off-line discussion on this. The problem is this:

Now when I submit version 2012.3-1 to CRAN, any checks of that package 
on R-forge will fail, until I change the version number. This is by 
specific request of the CRAN maintainers to the R-forge maintainers, the 
reason being, understandably, that the CRAN maintainers do not like 
getting submissions without the version number changed. One implication 
of this is that I should change the R-forge version number as soon as I 
make any changes to the package, even if I am going to change it again 
before I actually release to CRAN. This seems like a reasonable 
practice, even if I have not always done that.


The case where the code on R-forge remains unchanged for some time after 
it is released to CRAN is more subtle. If R-forge does not re-run the 
checks until I make a change, as is the current situation, then the 
package will still be indicated as ok on the R-forge pkg page. However, 
when R is upgraded, I would like the checks to be re-run on all 
platforms, not just on my own testing platform. But when that is done, 
the R-forge indication is going to be that the package failed, because 
the version number is the same as on CRAN. The information I want is 
actually available on the CRAN daily check. I just need to know that 
when my package is unchanged from the version on CRAN, I should look at 
CRAN daily rather than at the R-forge result.
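
(One way to automate that last decision -- a sketch only, with "foo" standing
in for the package name -- is to compare the development version against what
CRAN currently carries:)

  # if the local (R-forge) version equals the CRAN version, the CRAN daily
  # check results are the ones to look at, not the R-forge check page
  cran <- available.packages(contriburl = contrib.url("http://cran.r-project.org"))
  unchanged <- packageVersion("foo") == package_version(cran["foo", "Version"])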


Paul

On 12-03-30 10:38 AM, Claudia Beleites wrote:

[snip]



Re: [Rd] CRAN policies

2012-03-30 Thread William Dunlap
It looks like you define a few functions that use substitute() or sys.call()
or similar functions to look at the unevaluated argument list.  E.g.,

  cq <- function( ...) {
      # Saves putting in quotes!
      # E.G.: quoted( first, second, third) is the same as c( 'first', 'second', 'third')
      # wrapping by as.character means cq() returns character(0) not list()
      as.character( sapply( as.list( match.call( expand.dots=TRUE))[-1], as.character))
  }
%such.that% and %SUCH.THAT% do similar things.
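
(For concreteness, with the definition above loaded, a call at the prompt
behaves like this:)

  cq(first, second, third)
  # [1] "first"  "second" "third"
  cq()
  # character(0)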

Almost all the complaints from check involve calls to a handful of such
functions.  If you could tell codetools:::checkUsage that these functions
did nonstandard evaluation on all or some of their arguments then the
complaints would go away and other checks for real errors like misspellings
would still be done.

Another possible part of the problem is that if checkUsage is checking a
function like
  f <- function(x) paste(x, cq(suffix), sep=".")
it attributes the out-of-scope 'suffix' problem to 'f' and doesn't mention
that the immediate caller is 'cq', so you cannot easily filter out complaints
about cq.  (CRAN would not do such filtering, but a developer might.)
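
(As far as I know codetools has no per-function way to declare that cq()
quotes its arguments; the closest existing knob is checkUsage's
suppressUndefined argument, which is much blunter -- it silences all
undefined-global reports, real misspellings included. A sketch, assuming the
cq() definition above is in scope:)

  library(codetools)
  f <- function(x) paste(x, cq(suffix), sep = ".")
  checkUsage(f)                            # flags 'suffix' as having no visible binding
  checkUsage(f, suppressUndefined = TRUE)  # silent, but would also hide genuine typos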

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com


 -Original Message-
 From: r-devel-boun...@r-project.org [mailto:r-devel-boun...@r-project.org] On Behalf Of mark.braving...@csiro.au
 Sent: Thursday, March 29, 2012 6:30 PM
 Cc: r-de...@stat.math.ethz.ch
 Subject: Re: [Rd] CRAN policies
 
 I'm concerned this thread is heading the wrong way, towards techno-fixes for
 imaginary problems. R package-building is already encumbered with a huge set
 of complicated rules, and more instructions/rules eg for metadata would make
 things worse not better.
 
 
 RCMD CHECK on the 'mvbutils' package generates over 300 Notes about 'no
 visible binding...', which inevitably I just ignore. They arise because RCMD
 CHECK is too stupid to understand one of my preferred coding idioms (I'm not
 going to explain what-- that's beside the point). And RCMD CHECK always will
 be too stupid to understand everything that a rich language like R might
 quite reasonably cause experienced coders to do.
 
 It should not be CRAN's business how I write my code, or even whether my code
 does what it is supposed to. It might be CRAN's business to try to work out
 whether my code breaks CRAN's policies, eg by causing R to crash horribly--
 that's presumably what Warnings are for (but see below). And maybe there
 could be circumstances where an automatic check might be worried enough to
 alert the CRANia and require manual explanation and emails etc from a
 developer, but even that seems doomed given the growing deluge of packages.
 
 RCMD CHECK currently functions both as a sanitizer for CRAN, and as a
 developer-tool. But the fact that the one program does both things seems
 accidental to me, and I think this dual-use is muddying the discussion.
 There's a big distinction between (i) code-checks that developers themselves
 might or might not find useful-- which should be left to the developer, and
 will vary from person to person-- and (ii) code-checks that CRAN enforces
 for its own peace-of-mind. Maybe it's convenient to have both functions in
 the same place, and it'd be fine to use Notes for one and Warnings for the
 other, but the different purposes should surely be kept clear.
 
 Personally, in building over 10 packages (only 2 on CRAN), I haven't found
 RCMD CHECK to be of any use, except for the code-documentation and
 example-running bits. I know other people have different opinions, but
 that's the point: one-size-does-not-fit-all when it comes to coding tools.
 
 And wrt the Warnings themselves: I feel compelled to point out that it's
 logically impossible to fully check whether R code will do bad things. One
 has to wonder at what point adding new checks becomes futile or
 counterproductive. There must be over 2000 people who have written CRAN
 packages by now; every extra check and non-back-compatible additional
 requirement runs the risk of generating false positives and incurring many
 extra person-hours to fix non-problems. Plus someone needs to document and
 explain the check (adding to the rule mountain), plus there is the time
 spent in discussions like this..!
 
 Mark
 
 Mark Bravington
 CSIRO CMIS
 Marine Lab
 Hobart
 Australia
 
 From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] On Behalf Of Hadley Wickham [had...@rice.edu]
 Sent: 30 March 2012 07:42
 To: William Dunlap
 Cc: r-de...@stat.math.ethz.ch; Spencer Graves
 Subject: Re: [Rd] CRAN policies
 
  Most of that stuff is already in codetools, at least when it is checking
  functions with checkUsage().  E.g., arguments of ~ are not checked.  The
  expr argument to with() will not be checked if you add skipWith=TRUE to
  the call to checkUsage.
 
library(codetools)
 

Re: [Rd] CRAN policies

2012-03-30 Thread Kevin Wright
I'll echo Mark's concerns.  R _used_ to be a language for turning ideas
into software quickly.  Now it is more like prototyping ideas in software
quickly, and then spending a substantial amount of time trying to follow
administrative rules to package the code.  Quality has its costs.

Many of the code checks I find quite useful, but the 'no visible binding'
one generates lots of nuisance notes for me.  I must have a similar coding
style to Mark.

Kevin


On Thu, Mar 29, 2012 at 8:29 PM, mark.braving...@csiro.au wrote:

 [snip -- Mark Bravington's message of 29 March, quoted in full above]
 
 From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] On
 Behalf Of Hadley Wickham [had...@rice.edu]
 Sent: 30 March 2012 07:42
 To: William Dunlap
 Cc: r-de...@stat.math.ethz.ch; Spencer Graves
 Subject: Re: [Rd] CRAN policies

  Most of that stuff is already in codetools, at least when it is checking
  functions with checkUsage().  E.g., arguments of ~ are not checked.  The
  expr argument to with() will not be checked if you add skipWith=TRUE to
  the call to checkUsage.
 
library(codetools)
 
checkUsage(function(dataFrame) with(dataFrame, {Num/Den ; Resp ~ Pred}))
   <anonymous>: no visible binding for global variable 'Num' (:1)
   <anonymous>: no visible binding for global variable 'Den' (:1)

checkUsage(function(dataFrame) with(dataFrame, {Num/Den ; Resp ~ Pred}), skipWith=TRUE)

checkUsage(function(dataFrame) with(DataFrame, {Num/Den ; Resp ~ Pred}), skipWith=TRUE)
   <anonymous>: no visible binding for global variable 'DataFrame'
 
 
  The only part that I don't see is the mechanism to add code-walker functions
  to the environment in codetools that has the standard list of them for
  functions with nonstandard evaluation:
 objects(codetools:::collectUsageHandlers, all=TRUE)
  [1] "$"          "$<-"        ".Internal"
  [4] ":::"        "::"         "@"
  [7] "@<-"        "{"          "~"
 [10] "<-"         "<<-"        "="
 [13] "assign"     "binomial"   "bquote"
 [16] "data"       "detach"     "expression"
 [19] "for"        "function"   "Gamma"
 [22] "gaussian"   "if"

Re: [Rd] CRAN policies

2012-03-30 Thread Joshua Wiley
On Fri, Mar 30, 2012 at 11:41 AM, Kevin Wright kw.s...@gmail.com wrote:
 I'll echo Mark's concerns.  R _used_ to be a language for turning ideas
 into software quickly.  Now it is more like prototyping ideas in software
 quickly, and then spend a substantial amount of time trying to follow
 administrative rules to package the code.

..if you want to submit to CRAN.  There are practically zero such rules if
you host on your own website.  Of course developers are free to do whatever
they want, and R core does not get to tell them what/how to do it.  R core
does get a say when you ask them to host your source and build your package
binaries.

 Quality has its costs.

So does using CRAN.  If it is not the best solution for your problem,
use something else.  Hadley uses github for ggplot2 development, and
with the devtools package it is relatively easy for users to install
the ggplot2 source code.  Something like that might be appropriate for
code/packages where you just want to 'turn ideas into software
quickly'.  There is an extra step required for users to use it, but
that makes sense because it weeds out inept users from using code with
less quality control.
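
(A concrete sketch of that workflow; the exact install_github() call has
changed across devtools versions, so treat the arguments as approximate and
the repository path as illustrative:)

  # install the development version of ggplot2 straight from github
  install.packages("devtools")
  library(devtools)
  install_github("hadley/ggplot2")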


 Many of the code checks I find quite useful, but the no visible binding
 one generates lots of nuisance notes for me.  I must have a similar coding
 style to Mark.

 Kevin


 On Thu, Mar 29, 2012 at 8:29 PM, mark.braving...@csiro.au wrote:

 [snip -- Mark Bravington's message and the rest of the quoted thread, as above]

[Rd] RSiteSearch

2012-03-30 Thread Jonathan Baron
I don't know what the RSiteSearch function does anymore, or who
maintains it. Please ignore this if you have nothing to do with it.

I recently moved my R site, finzi.psych.upenn.edu to a new
computer. Somehow some of the mailing-list search capabilities were
lost, and I do not have time to find the problem. Because I have not
been maintaining these lists at all since 2010, I decided that the
simplest thing was just to remove them all from the search
path. Again, some of them haven't been working anyway for a few weeks,
and I have had only one complaint. (And there are much better ways to
search mailing lists now.)

The functions, task views, and vignette searches still work.

Eventually I would like to move to a different search engine for
functions, task views, and vignettes. But it isn't that hard to
maintain the current one (Namazu) as well, if you want me to do that.

Jon
-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
Editor: Judgment and Decision Making (http://journal.sjdm.org)



Re: [Rd] r-forge build failure bafflement

2012-03-30 Thread Ben Bolker

  Figured it out (I think: I haven't gotten through restructuring and
testing, but I think it's going to work now).  I was paying
insufficient attention to the build arguments, in particular
--resave-data=best.

  I previously had the clever idea to save some fitted models with the
package so that they would be available to users who wanted to work
with them without taking the time and trouble of re-running the model
fits (some of which are very slow, especially for MCMC variants).  So
I saved these in a .rda file in the data section, which seemed like a
good idea at the time.  However, that means that resaving the data now
requires loading the package with which the class of those model
objects is associated ... and voila, a mysterious, apparently
self-referential, error message.

   For now I'm experimenting with moving those fits to a .rda file
in inst/extdata ... (I thought using dput() instead of save() to save
them might fix the problem, but it doesn't seem to) but this does seem
like a bit of a pain (I have included another function, getdata(), so
that users don't have to mess around with system.file("extdata",
[model_obj], package="glmmADMB")). I would be curious if anyone has any
other suggestions for ways to work around this issue, or if they feel
that I am subverting the intended use of the data/ directory (and so
it's my own fault).
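
(Ben doesn't show getdata(); a guess at what such a wrapper might look like,
assuming one .rda file per stored fit under inst/extdata:)

  getdata <- function(name, package = "glmmADMB") {
      ## locate the saved fit shipped in inst/extdata and load it for the
      ## caller, so users never have to call system.file() themselves
      path <- system.file("extdata", paste(name, ".rda", sep = ""),
                          package = package)
      if (path == "") stop("no stored object called '", name, "'")
      load(path, envir = parent.frame())
  }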

  happy friday, and thanks to all for their suggestions.

Ben Bolker

On 12-03-30 01:38 AM, Prof Brian Ripley wrote:
 We've seen similar things several times with CRAN submissions.
 Basic scenario was
 
 - INSTALL (via build or check) is trying to install a package that
 is not already installed, into a private library not on the usual
 .libPaths().
 
 - Start-up code in that package is looking for the package, and
 does not respect lib[name] as passed to .First.lib or
 .onLoad/.onAttach.  E.g. a call to installed.packages() or
 packageDescription().
 
 - As the maintainer has an earlier version installed in .Library
 (s)he cannot reproduce it.
 
 I took a very quick look at the package: it has .First.lib and not 
 .onLoad/.onAttach, and of course it has a namespace (all packages
 now do).  I would start by fixing that.
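
(A minimal sketch of the startup pattern Prof Ripley is describing -- look the
package up in the library the loader passes in, rather than searching
.libPaths(); the options() call is just an illustrative use of the value:)

  .onLoad <- function(libname, pkgname) {
      ## 'libname' is the library the package is actually being loaded from,
      ## which may be a private library during R CMD build/check
      desc <- packageDescription(pkgname, lib.loc = libname)
      options(glmmADMB.build.version = desc$Version)
  }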
 
 On 29/03/2012 21:54, Ben Bolker wrote:
 
 I am attempting to build a package on r-forge and running into a 
 weird error.  I have been in correspondence with the R-forge
 admins and am turning to r-devel on the remote chance that
 someone might have a guess as to what is going wrong or a
 suggestion about further diagnostics/experiments I could try ...
 
 The package seems to build fine on my system(s) with
 
 R CMD build --compact-vignettes --resave-data=best pkg
 
 (these are the R-forge build arguments, according to the r-forge
 admins)
 
 -- I've tried it with R-devel on Linux-32 and R 2.14.2 on
 MacOS-64.
 
 The build log (basically identical across linux64/win64/macos64)
 is as follows:
 
 -- Thu Mar 29 20:15:21 2012: Building tarball for
 package glmmADMB (SVN revision 204) using R version 2.14.2
 (2012-02-29) ...
 
 * checking for file 'glmmADMB/DESCRIPTION' ... OK * preparing
 'glmmADMB': * checking DESCRIPTION meta-information ... OK *
 checking for LF line-endings in source and make files * checking
 for empty or unneeded directories * looking to see if a
 'data/datalist' file should be added * re-saving image files 
 Error in loadNamespace(name) : there is no package called
 'glmmADMB' Execution halted Run time: 0.51 seconds. --
 
 so apparently the package is failing because it doesn't exist
 (!!) I originally thought this was a circular dependency problem,
 because glmmADMB and coefplot2 (another r-forge package) depended
 on each other, but I have (at least for now) removed glmmADMB's
 coefplot2 dependency.  As far as I can tell there are *no*
 packages on r-forge that depend on/suggest/import glmmADMB.
 
  a1 <- available.packages(contriburl =
          contrib.url("http://r-forge.r-project.org"))
  rownames(a1)["glmmADMB" %in% a1[, "Suggests"]]
  character(0)
  rownames(a1)["glmmADMB" %in% a1[, "Depends"]]
  character(0)
  rownames(a1)["glmmADMB" %in% a1[, "Imports"]]
  character(0)
 
  The perhaps-relevant parts of the DESCRIPTION file:
  =
  BuildVignettes: no
  Description: Fits mixed-effects models using a variety of distributions
  Imports: stats, nlme
  Depends: R (>= 2.13), methods, MASS, R2admb
  Suggests: lattice, lme4, lme4.0, coda, mlmRev, scapeMCMC, ggplot2, bbmle,
      pscl, knitr, car
  =
 
 The only other thing I can think of is backing up a few SVN 
 revisions and seeing whether I can get back to a working version,
 but I'd like to see if I can get it fixed by moving forward
 rather than backward ...
 
 
 For anyone who is intrigued and wants to investigate farther:
 
 http://r-forge.r-project.org/R/?group_id=847 
 http://r-forge.r-project.org/scm/?group_id=847
 
 cheers Ben Bolker
 
 

Re: [Rd] r-forge build failure bafflement

2012-03-30 Thread Ben Bolker
On 12-03-30 08:14 PM, Brian G. Peterson wrote:
 On my phone, so replying off-list, but wouldn't loading the data
 objects in a running R session and using the appropriate
 compression arguments to save() do the trick? - Brian

  I might try that, but I have a strong suspicion that R will try to
load the data objects anyway to see if they are compressible ...  I'd
have to look more carefully at resaveRdaFiles (in
src/library/tools/admin.R) to be sure, but it looks like line 860

suppressPackageStartupMessages(load(p, envir = env))

unconditionally loads the files and would trigger the search for the
package -- I don't think this is avoidable.

 [cc'd back to r-devel for discussion/archival purposes]

 -- Sent from my Android phone with K-9 Mail. Please excuse my
 brevity.
 
 Ben Bolker bbol...@gmail.com wrote:
 
 [snip -- Ben's message and Prof Ripley's reply, quoted in full above]