RE: [Rd] delay() has been deprecated for 2.1.0
On Sat, 12 Mar 2005 [EMAIL PROTECTED] wrote:

> Uh-oh... I've just written a bunch of code for 'mvbutils' using 'delay',
> and am worried by the statement that "there should be no way to see a
> promise object in R". At present, it's possible to check whether 'x' is
> a promise via e.g. 'class( .GlobalEnv$x)'. This will be different to
> 'class( x)' if 'x' is a promise, regardless of whether the promise has
> or has not been forced yet. This can be very useful; my recent code
> relies on it to check whether certain objects have been changed since
> last being saved. [These certain objects are originally assigned as
> promises to load from individual files. Read-accessing the object keeps
> it as class 'promise', whereas write-access creates a non-promise. Thus
> I can tell whether the individual files need re-saving when the entire
> workspace is saved.]

Relying on undocumented features when designing a package is not a good idea. In this case the feature of env$x returning a promise contradicts the documentation and is therefore a bug (the documentation says that env$x should behave like the corresponding get() expression, and that get() forces promises and returns their values).

> The has-it-changed test has been very valuable to me in allowing fast
> handling of large collections of large objects (which is why I've been
> adding this functionality to 'mvbutils'); and apart from
> is-it-still-a-promise, I can't think of any other R-level way of testing
> whether an object has been modified. [If there is another way, please
> let me know!] Is there any chance of retaining *some* R-level way of
> checking whether an object is a promise, both pre-forcing and
> post-forcing? (Not necessarily via the 'class( env$x)' method, if
> that's deemed objectionable.)

This would not be a good idea. The current behavior of leaving an evaluated promise in place is also not a documented feature as far as I can see. It is a convenient way of implementing lazy evaluation in an interpreter, but it has drawbacks.
One is the cost of the extra dereference. Another is the fact that these promises keep alive their environments, which might otherwise be inaccessible and hence available for GC. These environments might in turn reference large data structures, keeping them alive. At this point it seems too complicated to deal with this in the interpreter, but a compiler might be able to prove that a promise can be safely discarded. (In fact a compiler might be able to prove that a promise is not needed in the first place.)

There are other possible approaches that you might use, such as active bindings (see the help for makeActiveBinding). If that won't do, we can look into developing a suitable abstraction that we can implement and document in a way that does not tie the hands of the internal implementation.

Best,

luke

Mark Bravington
[EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Duncan Murdoch
Sent: Sat 12/03/2005 3:05 AM
To: r-devel@stat.math.ethz.ch
Cc: Gregory Warnes; David Brahm; Torsten Hothorn; Nagiza F. Samatova
Subject: [Rd] delay() has been deprecated for 2.1.0

After a bunch of discussion in the core group, we have decided to deprecate the delay() function (which was introduced as "experimental" in R 0.50). This is the function that duplicates in R code the delayed evaluation mechanism (the promise) that's used in evaluating function arguments.

The problem with delay() was that it was handled inconsistently (e.g. sometimes you would see an object displayed as a promise, sometimes it would be evaluated); it tended to be error-prone in usage (e.g. this was the cause of the bug that makes the curve() function create a "pu" object in the global environment); and it was generally difficult to figure out exactly what the semantics of it should be in order to be consistent.

delay() has been replaced with delayedAssign(). This new function creates a promise and assigns it into an environment.
Once one more set of changes is made and delay() is gone, there should be no way to see a promise in R: as soon as the object is accessed, it will be evaluated and you'll see the value.

A few packages made use of delay(). I have replaced all of those uses with delayedAssign(). The most common usage was something like the QA code uses:

    assign("T", delay(stop("T used instead of TRUE")), pos = .CheckExEnv)

This translates to

    delayedAssign("T", stop("T used instead of TRUE"),
                  eval.env = .GlobalEnv, assign.env = .CheckExEnv)

In most cases the "eval.env = .GlobalEnv" argument is not necessary (and in fact it is often a bug, as it was in curve()). The environment where the promise is to be evaluated now defaults to the environment where the call is being made, rather than the global environment.
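Luke's suggestion of active bindings as a replacement for the is-it-still-a-promise test could be sketched as follows. This is only an illustration under my own assumptions, not code from 'mvbutils': the names `e`, `value`, `modified`, and the `x.modified` binding are all hypothetical, and the real package would tie the setter to re-saving logic.

```r
## Sketch: an active binding that records whether 'x' has been written
## to since it was set up -- a possible stand-in for checking whether
## the object is still an unforced promise. All names are illustrative.
e <- new.env()
local({
  value <- 42        # stands in for data loaded from an individual file
  modified <- FALSE  # flipped on the first write access
  makeActiveBinding("x", function(v) {
    if (missing(v)) {
      value               # read access: just return the current value
    } else {
      modified <<- TRUE   # write access: record the change
      value <<- v
    }
  }, e)
  ## a companion binding exposing the flag (read-only in practice)
  makeActiveBinding("x.modified", function() modified, e)
})

e$x            # read access does not set the flag
e$x.modified   # FALSE at this point
e$x <- 99      # write access sets the flag
e$x.modified   # TRUE: the "file" would need re-saving
```

Unlike the promise-class test, this approach is documented behaviour, so it would survive the delay() removal described below.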
Re: [Rd] CRAN Task Views: ctv package available
Paul,

On 11 March 2005 at 17:55, Paul Gilbert wrote:
| Achim
| ...
| >> Other things that might be useful are various programming views, like
| >> a Matlab view, and an SPSS view.
| >
| > I'm not sure what you have in mind here, because I would think that this
| > amounts more to using the language in general than using certain
| > packages.
| ...
|
| I guess I didn't see that this should only be about packages. It seems
| you could put just about anything in the HTML page. A table like
|
|     Matlab    R translation
|     ------    -------------
|     eye(n)    diag(1, n)
|     =         <-
|     etc
|
| might be useful to a lot of people converting from Matlab. I may have
| the above wrong, but I do have a good start on a table like this in a
| sed script somewhere, so if anyone volunteers to be the maintainer I can
| help a bit.

Recall that (at least some of) this exists already in Robin's page at
http://cran.us.r-project.org/doc/contrib/R-and-octave-2.txt
(using the .us mirror here, adapt at will)

Dirk

--
Better to have an approximate answer to the right question than a precise answer to the wrong question. -- John Tukey as quoted by John Chambers

______________________________________________
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
RE: [Rd] delay() has been deprecated for 2.1.0
Uh-oh... I've just written a bunch of code for 'mvbutils' using 'delay', and am worried by the statement that "there should be no way to see a promise object in R". At present, it's possible to check whether 'x' is a promise via e.g. 'class( .GlobalEnv$x)'. This will be different to 'class( x)' if 'x' is a promise, regardless of whether the promise has or has not been forced yet.

This can be very useful; my recent code relies on it to check whether certain objects have been changed since last being saved. [These certain objects are originally assigned as promises to load from individual files. Read-accessing the object keeps it as class 'promise', whereas write-access creates a non-promise. Thus I can tell whether the individual files need re-saving when the entire workspace is saved.]

The has-it-changed test has been very valuable to me in allowing fast handling of large collections of large objects (which is why I've been adding this functionality to 'mvbutils'); and apart from is-it-still-a-promise, I can't think of any other R-level way of testing whether an object has been modified. [If there is another way, please let me know!]

Is there any chance of retaining *some* R-level way of checking whether an object is a promise, both pre-forcing and post-forcing? (Not necessarily via the 'class( env$x)' method, if that's deemed objectionable.)

Mark Bravington
[EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Duncan Murdoch
Sent: Sat 12/03/2005 3:05 AM
To: r-devel@stat.math.ethz.ch
Cc: Gregory Warnes; David Brahm; Torsten Hothorn; Nagiza F. Samatova
Subject: [Rd] delay() has been deprecated for 2.1.0

After a bunch of discussion in the core group, we have decided to deprecate the delay() function (which was introduced as "experimental" in R 0.50). This is the function that duplicates in R code the delayed evaluation mechanism (the promise) that's used in evaluating function arguments.
The problem with delay() was that it was handled inconsistently (e.g. sometimes you would see an object displayed as a promise, sometimes it would be evaluated); it tended to be error-prone in usage (e.g. this was the cause of the bug that makes the curve() function create a "pu" object in the global environment); and it was generally difficult to figure out exactly what the semantics of it should be in order to be consistent.

delay() has been replaced with delayedAssign(). This new function creates a promise and assigns it into an environment. Once one more set of changes is made and delay() is gone, there should be no way to see a promise in R: as soon as the object is accessed, it will be evaluated and you'll see the value.

A few packages made use of delay(). I have replaced all of those uses with delayedAssign(). The most common usage was something like the QA code uses:

    assign("T", delay(stop("T used instead of TRUE")), pos = .CheckExEnv)

This translates to

    delayedAssign("T", stop("T used instead of TRUE"),
                  eval.env = .GlobalEnv, assign.env = .CheckExEnv)

In most cases the "eval.env = .GlobalEnv" argument is not necessary (and in fact it is often a bug, as it was in curve()). The environment where the promise is to be evaluated now defaults to the environment where the call is being made, rather than the global environment, and this is usually what you want.

Package writers who use delay() will now get a warning that it has been deprecated. They should recode their package to use delayedAssign instead. Examples from CRAN of this (I am not sure if this list is exhaustive): exactRankTests, genetics, g.data, maxstat, taskPR, coin. I have cc'd the maintainers of those packages.

If you want a single code base for your package that works in both the upcoming R 2.1.0 and older versions, this presents a problem: older versions don't have delayedAssign.
Here is a workalike function that could be used in older versions:

    delayedAssign <- function(x, value, eval.env = parent.frame(),
                              assign.env = parent.frame()) {
        assign(x, .Internal(delay(substitute(value), eval.env)),
               envir = assign.env)
    }

Because this function calls the internal delay() function directly, it should work in R 2.1.0+ as well without a warning, but the internal function will eventually go away too, so I don't recommend using it in the long term. Sorry for any inconvenience that this causes.
Re: [Rd] CRAN Task Views: ctv package available
Achim
...
>> Other things that might be useful are various programming views, like
>> a Matlab view, and an SPSS view.
>
> I'm not sure what you have in mind here, because I would think that this
> amounts more to using the language in general than using certain
> packages.
...

I guess I didn't see that this should only be about packages. It seems you could put just about anything in the HTML page. A table like

    Matlab    R translation
    ------    -------------
    eye(n)    diag(1, n)
    =         <-
    etc

might be useful to a lot of people converting from Matlab. I may have the above wrong, but I do have a good start on a table like this in a sed script somewhere, so if anyone volunteers to be the maintainer I can help a bit.

Paul
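The first rows of the proposed translation table can be checked directly in R; a quick sketch (the table itself is Paul's, the snippet below is only a spot check of those two entries):

```r
## Matlab's eye(n) corresponds to diag(1, n) in R: a 3x3 identity matrix.
m <- diag(1, 3)
print(m)

## diag(n) with a single integer argument is an equivalent shorthand.
stopifnot(identical(diag(1, 3), diag(3)))

## And Matlab's '=' assignment corresponds to R's '<-', as used above.
```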
[Rd] Customizable R_HOME at build time
Currently there is no way to specify R_HOME at build time; it is hard-coded to ${prefix}/lib/R. This is slightly inconvenient for special setups (e.g. installing different versions of R in parallel). I was thinking of something like:

    ./configure --prefix=/usr --enable-custom-R-home=/usr/share/R-2.0

which will use /usr/share/R-2.0 as R_HOME; otherwise it would use /usr/lib/R.

The following patch (against R-devel) implements that functionality, allowing more flexible builds/installs. As a side-effect it finally prints R_HOME in the configure summary. Please let me know what you think... (And yes, I know you don't have to run make install, so one could build it in the target place, but I think the above is cleaner.)

Cheers,
Simon
Re: [Rd] CRAN Task Views: ctv package available
On Fri, 11 Mar 2005 14:04:36 -0500 Paul Gilbert wrote:
> For my own purposes the "Econometrics" view is just fine, but I do
> sometimes get questions about dse from people in fields that are
> different enough that they might not even know to look in
> "Econometrics." A time series view might be useful even if it only
> said see also ... . Another approach to this might be to have
> sub-views so, for example, "Econometrics" and "Control theory" could
> both point to "Time series."

Yes, that is, of course, an obvious idea, but it requires more coordination between the different views, hence we've decided not to support hierarchically ordered views. Re "ControlTheory": if someone would raise his hand and provide a view for that, it would be great. It could, of course, link to the "Econometrics" view (and vice versa), but they wouldn't have a formally defined subset in a "TimeSeries" view.

> Other things that might be useful are various programming views, like
> a Matlab view, and an SPSS view.

I'm not sure what you have in mind here, because I would think that this amounts more to using the language in general than using certain packages. But if someone has good ideas for a Matlab or SPSS view, I would be interested to hear them.

> I feel uncomfortable suggesting things that sound more like work than
> feedback, because I am not about to offer to maintain anything. I have
> too much on my plate right now. But in this regard it would be nice to
> draw on more people contributing smaller pieces. Have you considered
> something like a wiki?

Yes, there have been discussions about stuff like that on R-SIG-Finance. I think the presence of an "Econometrics" view shouldn't keep people from setting up an econometrics wiki if they want something like that. But my personal preference would be that someone starts collecting and maintaining code snippets in an R package rather than a wiki. The gregmisc bundle is a place for something like that with general statistical functionality. And back then, when this was discussed on the finance SIG, I suggested that someone who has some critical mass of code would start something similar for finance/econometrics/time series stuff.

Z
Re: [Rd] unexpected behaviour of expression(sum())
On Fri, 2005-03-11 at 17:17 +, Prof Brian Ripley wrote:
> I see you have both a scalable font (the first) and size-specific fonts.
> My guess is that the scalable font is not encoded in the same way as the
> others: can you track down where it is coming from?
>
> Otherwise my list on FC3 is the same as yours (minus the duplicates, which
> are also puzzling). I have also just checked Exceed, which has the same
> list plus scalable fonts (and also has
>
> -adobe-symbol-0-0-normal--0-0-0-0-p-0-adobe-fontspecific
> -adobe-symbol-0-0-normal--0-0-0-0-p-0-sun-fontspecific
> -adobe-symbol-0-0-normal--0-0-100-100-p-0-adobe-fontspecific
> -adobe-symbol-0-0-normal--0-0-100-100-p-0-sun-fontspecific
> -adobe-symbol-0-0-normal--0-0-75-75-p-0-adobe-fontspecific
> -adobe-symbol-0-0-normal--0-0-75-75-p-0-sun-fontspecific
>
> which caused problems for 2.0.1 with getting bold symbols in some sizes,
> hence the second bug fix I mentioned).
>
> As a wild guess, do you have a font server as well as local fonts?
>
> Brian

FWIW, here is my list:

$ xlsfonts | grep adobe-symbol
-adobe-symbol-medium-r-normal--10-100-75-75-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--11-80-100-100-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--12-120-75-75-p-74-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-100-100-100-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-140-75-75-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--17-120-100-100-p-95-adobe-fontspecific
-adobe-symbol-medium-r-normal--18-180-75-75-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--20-140-100-100-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--24-240-75-75-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--25-180-100-100-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--34-240-100-100-p-191-adobe-fontspecific
-adobe-symbol-medium-r-normal--8-80-75-75-p-51-adobe-fontspecific

Deepayan, which X server is being used?
FC3 (fully updated) is using xorg 6.8.1 if that might make a difference.

Marc
Re: [Rd] CRAN Task Views: ctv package available
Achim Zeileis wrote:

Paul,

thanks for the feedback.

If I understand this correctly, I think it is a great idea. Just to be sure I do understand, would you expect there might also be a "Time Series" view, which would probably overlap some with the "Econometrics" view?

In principle, the presence of an "Econometrics" view does not preclude the existence of a "TimeSeries" or "Finance" view (I already talked with Dirk privately about the latter). Even if the overlap in packages would be substantial, there could be value added via the information on the corresponding HTML page. Currently, my personal opinion would be that we don't need a separate "TimeSeries" view and that we should extend the information on the Econometrics page instead. But it's not unlikely that I could be convinced otherwise :-) So if you've got suggestions in that direction, just let me know.

Achim

For my own purposes the "Econometrics" view is just fine, but I do sometimes get questions about dse from people in fields that are different enough that they might not even know to look in "Econometrics." A time series view might be useful even if it only said see also ... . Another approach to this might be to have sub-views so, for example, "Econometrics" and "Control theory" could both point to "Time series."

Other things that might be useful are various programming views, like a Matlab view, and an SPSS view.

I feel uncomfortable suggesting things that sound more like work than feedback, because I am not about to offer to maintain anything. I have too much on my plate right now. But in this regard it would be nice to draw on more people contributing smaller pieces. Have you considered something like a wiki?

Best,
Paul

Best,
Z
Re: [Rd] CRAN Task Views: ctv package available
Paul,

thanks for the feedback.

> If I understand this correctly, I think it is a great idea. Just to be
> sure I do understand, would you expect there might also be a "Time
> Series" view, which would probably overlap some with the
> "Econometrics" view?

In principle, the presence of an "Econometrics" view does not preclude the existence of a "TimeSeries" or "Finance" view (I already talked with Dirk privately about the latter). Even if the overlap in packages would be substantial, there could be value added via the information on the corresponding HTML page. Currently, my personal opinion would be that we don't need a separate "TimeSeries" view and that we should extend the information on the Econometrics page instead. But it's not unlikely that I could be convinced otherwise :-) So if you've got suggestions in that direction, just let me know.

Best,
Z
Re: [Rd] CRAN Task Views: ctv package available
If I understand this correctly, I think it is a great idea. Just to be sure I do understand, would you expect there might also be a "Time Series" view, which would probably overlap some with the "Econometrics" view?

Paul Gilbert

Achim Zeileis wrote:

Dear developeRs,

in the last month I mentioned in several discussions on R-help that Kurt and I were working on tools for "CRAN Task Views" which should help to structure the fast-growing list of packages on CRAN. Now the first version of a package called ctv (for CRAN Task Views) is available from CRAN, and also two first drafts for such views can be seen at

    http://CRAN.R-project.org/src/contrib/Views/

When you install the ctv package you can also query this from within R via:

    CRAN.views()
    install.views("Econometrics", lib = "/path/to/foo")

New views can be easily written in an XML-based format from which we can generate the HTML information displayed on the Web and also the information needed for querying the views via CRAN.views(). The package contains a short vignette that explains how to write new task views.

Feedback on the package would be very welcome! Furthermore, if you want to write and maintain a new task view for a certain topic, that would be great! Just drop me an e-mail with your suggestion.

Best wishes from Vienna,
Z
Re: [Rd] unexpected behaviour of expression(sum())
I see you have both a scalable font (the first) and size-specific fonts. My guess is that the scalable font is not encoded in the same way as the others: can you track down where it is coming from?

Otherwise my list on FC3 is the same as yours (minus the duplicates, which are also puzzling). I have also just checked Exceed, which has the same list plus scalable fonts (and also has

-adobe-symbol-0-0-normal--0-0-0-0-p-0-adobe-fontspecific
-adobe-symbol-0-0-normal--0-0-0-0-p-0-sun-fontspecific
-adobe-symbol-0-0-normal--0-0-100-100-p-0-adobe-fontspecific
-adobe-symbol-0-0-normal--0-0-100-100-p-0-sun-fontspecific
-adobe-symbol-0-0-normal--0-0-75-75-p-0-adobe-fontspecific
-adobe-symbol-0-0-normal--0-0-75-75-p-0-sun-fontspecific

which caused problems for 2.0.1 with getting bold symbols in some sizes, hence the second bug fix I mentioned).

As a wild guess, do you have a font server as well as local fonts?

Brian

On Fri, 11 Mar 2005, Deepayan Sarkar wrote:

On Friday 11 March 2005 01:19, Prof Brian Ripley wrote:

On Thu, 10 Mar 2005, Marc Schwartz wrote:

On Thu, 2005-03-10 at 19:57 -0600, Deepayan Sarkar wrote:

I'm seeing inconsistent symbols from the same expression with the following code:

    expr = expression(sum(x, 1, n))
    plot(1, main = expr, type = "n")
    text(1, 1, expr)

Moreover, the inconsistency is reversed in r-devel compared to R 2.0.1. In particular, the main label shows a \bigoplus instead of \sum in r-devel, and the other way round in 2.0.1. demo(plotmath) shows \sum in both. Can anyone confirm? Is this intended behaviour (though I can't see how)?

No problem in "Version 2.0.1 Patched (2005-03-07)". I get \sum in both places. I do not see anything in the NEWS file suggesting a bug fix for this. I just installed "Version 2.1.0 Under development (unstable) (2005-03-11)" and do not see the problem there either. Both are under FC3.

We need to know both the device and the locale. Assuming this is X11, there are two fixes for font selection:

Yes, it's X11, with locale "C".
It doesn't happen with postscript (I haven't tried anything else). I had tried on 3 different machines other than my desktop, but all remotely. Marc's reply suggested that this was a problem with X on my local machine, and I haven't yet had a chance to check on any others.

o X11() was only scaling its fonts to pointsize if the dpi was within 0.5 of 100dpi.

o X11() font selection was looking for any symbol font, and sometimes got e.g. bold italic if the server has such a font.

The main title in plot() and text() are asking for different sizes. If Deepayan had problems with getting a valid (Adobe symbol-encoded) font, this might vary by size which would explain the reported differences.

Deepayan: can you please check what symbol fonts you have: the pattern in R-devel is

"-adobe-symbol-medium-r-*-*-*-*-*-*-*-*-*-*"

(Ideally we would select on encoding, but that is usually 'fontspecific' so not helpful.)

I'm not really sure what I'm looking for, but everything I get seems to be 'fontspecific':

deepayan $ xlsfonts | grep adobe-symbol-medium
-adobe-symbol-medium-r-normal--0-0-0-0-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--0-0-100-100-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--0-0-75-75-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--10-100-75-75-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--10-100-75-75-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--11-80-100-100-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--11-80-100-100-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--12-120-75-75-p-74-adobe-fontspecific
-adobe-symbol-medium-r-normal--12-120-75-75-p-74-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-100-100-100-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-100-100-100-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-140-75-75-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-140-75-75-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--17-120-100-100-p-95-adobe-fontspecific
-adobe-symbol-medium-r-normal--17-120-100-100-p-95-adobe-fontspecific
-adobe-symbol-medium-r-normal--18-180-75-75-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--18-180-75-75-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--20-140-100-100-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--20-140-100-100-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--24-240-75-75-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--24-240-75-75-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--25-180-100-100-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--25-180-100-100-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--34-240-100-100-p-191-adobe-fontspecific
-adobe-symbol-medium-r-normal--34-240-100-100-p-191-adobe-fontspecific
-adobe-symbol-medium-r-normal--8-80-75-75-p-51-adobe-fontspecific
-adobe-symbol-medium-r-normal--8-80-75-75-p-51-adobe-fontspecific
deepayan $

-Deepayan

--
Brian D. Ripley, [EMAIL PROTECTED]
Re: [Rd] unexpected behaviour of expression(sum())
On Friday 11 March 2005 01:19, Prof Brian Ripley wrote:
> On Thu, 10 Mar 2005, Marc Schwartz wrote:
> > On Thu, 2005-03-10 at 19:57 -0600, Deepayan Sarkar wrote:
> >> I'm seeing inconsistent symbols from the same expression with the
> >> following code:
> >>
> >> expr = expression(sum(x, 1, n))
> >> plot(1, main = expr, type = "n")
> >> text(1, 1, expr)
> >>
> >> Moreover, the inconsistency is reversed in r-devel compared to R
> >> 2.0.1. In particular, the main label shows a \bigoplus instead of
> >> \sum in r-devel, and the other way round in 2.0.1. demo(plotmath)
> >> shows \sum in both.
> >>
> >> Can anyone confirm? Is this intended behaviour (though I can't see
> >> how)?
> >
> > No problem in "Version 2.0.1 Patched (2005-03-07)". I get \sum in
> > both places. I do not see anything in the NEWS file suggesting a
> > bug fix for this.
> >
> > I just installed "Version 2.1.0 Under development (unstable)
> > (2005-03-11)" and do not see the problem there either.
> >
> > Both are under FC3.
>
> We need to know both the device and the locale. Assuming this is
> X11, there are two fixes for font selection:

Yes, it's X11, with locale "C". It doesn't happen with postscript (I haven't tried anything else). I had tried on 3 different machines other than my desktop, but all remotely. Marc's reply suggested that this was a problem with X on my local machine, and I haven't yet had a chance to check on any others.

> o X11() was only scaling its fonts to pointsize if the dpi
>   was within 0.5 of 100dpi.
>
> o X11() font selection was looking for any symbol font, and
>   sometimes got e.g. bold italic if the server has such a font.
>
> The main title in plot() and text() are asking for different sizes.
> If Deepayan had problems with getting a valid (Adobe symbol-encoded)
> font, this might vary by size which would explain the reported
> differences.
>
> Deepayan: can you please check what symbol fonts you have: the
> pattern in R-devel is
>
> "-adobe-symbol-medium-r-*-*-*-*-*-*-*-*-*-*"
>
> (Ideally we would select on encoding, but that is usually
> 'fontspecific' so not helpful.)

I'm not really sure what I'm looking for, but everything I get seems to be 'fontspecific':

deepayan $ xlsfonts | grep adobe-symbol-medium
-adobe-symbol-medium-r-normal--0-0-0-0-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--0-0-100-100-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--0-0-75-75-p-0-adobe-fontspecific
-adobe-symbol-medium-r-normal--10-100-75-75-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--10-100-75-75-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--11-80-100-100-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--11-80-100-100-p-61-adobe-fontspecific
-adobe-symbol-medium-r-normal--12-120-75-75-p-74-adobe-fontspecific
-adobe-symbol-medium-r-normal--12-120-75-75-p-74-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-100-100-100-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-100-100-100-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-140-75-75-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--14-140-75-75-p-85-adobe-fontspecific
-adobe-symbol-medium-r-normal--17-120-100-100-p-95-adobe-fontspecific
-adobe-symbol-medium-r-normal--17-120-100-100-p-95-adobe-fontspecific
-adobe-symbol-medium-r-normal--18-180-75-75-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--18-180-75-75-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--20-140-100-100-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--20-140-100-100-p-107-adobe-fontspecific
-adobe-symbol-medium-r-normal--24-240-75-75-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--24-240-75-75-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--25-180-100-100-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--25-180-100-100-p-142-adobe-fontspecific
-adobe-symbol-medium-r-normal--34-240-100-100-p-191-adobe-fontspecific
-adobe-symbol-medium-r-normal--34-240-100-100-p-191-adobe-fontspecific
-adobe-symbol-medium-r-normal--8-80-75-75-p-51-adobe-fontspecific
-adobe-symbol-medium-r-normal--8-80-75-75-p-51-adobe-fontspecific
deepayan $

-Deepayan
[Rd] delay() has been deprecated for 2.1.0
After a bunch of discussion in the core group, we have decided to deprecate the delay() function (which was introduced as "experimental" in R 0.50). This is the function that duplicates in R code the delayed evaluation mechanism (the promise) that's used in evaluating function arguments.

The problem with delay() was that it was handled inconsistently (e.g. sometimes you would see an object displayed as a promise, sometimes it would be evaluated); it tended to be error-prone in usage (e.g. this was the cause of the bug that makes the curve() function create a "pu" object in the global environment); and it was generally difficult to figure out exactly what the semantics of it should be in order to be consistent.

delay() has been replaced with delayedAssign(). This new function creates a promise and assigns it into an environment. Once one more set of changes is made and delay() is gone, there should be no way to see a promise in R: as soon as the object is accessed, it will be evaluated and you'll see the value.

A few packages made use of delay(). I have replaced all of those uses with delayedAssign(). The most common usage was something like the QA code uses:

    assign("T", delay(stop("T used instead of TRUE")), pos = .CheckExEnv)

This translates to

    delayedAssign("T", stop("T used instead of TRUE"),
                  eval.env = .GlobalEnv, assign.env = .CheckExEnv)

In most cases the "eval.env = .GlobalEnv" argument is not necessary (and in fact it is often a bug, as it was in curve()). The environment where the promise is to be evaluated now defaults to the environment where the call is being made, rather than the global environment, and this is usually what you want.

Package writers who use delay() will now get a warning that it has been deprecated. They should recode their package to use delayedAssign instead. Examples from CRAN of this (I am not sure if this list is exhaustive): exactRankTests, genetics, g.data, maxstat, taskPR, coin. I have cc'd the maintainers of those packages.
If you want a single code base for your package that works in both the upcoming R 2.1.0 and older versions, this presents a problem: older versions don't have delayedAssign(). Here is a workalike function that could be used in older versions:

    delayedAssign <- function(x, value, eval.env = parent.frame(),
                              assign.env = parent.frame()) {
        assign(x, .Internal(delay(substitute(value), eval.env)),
               envir = assign.env)
    }

Because this function calls the internal delay() function directly, it should work in R 2.1.0+ as well without a warning, but the internal function will eventually go away too, so I don't recommend using it in the long term.

Sorry for any inconvenience that this causes.

Duncan Murdoch

______________________________________________
R-devel@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
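One way to deploy such a workalike without shadowing the real function on newer versions of R is to define it conditionally; a sketch under that assumption (and note the caveat above that the internal delay() will eventually disappear):

```r
# Define the workalike only where delayedAssign() does not already exist,
# so a single code base runs on both pre-2.1.0 and 2.1.0+ versions of R.
if (!exists("delayedAssign", mode = "function")) {
  delayedAssign <- function(x, value,
                            eval.env = parent.frame(),
                            assign.env = parent.frame()) {
    assign(x, .Internal(delay(substitute(value), eval.env)),
           envir = assign.env)
  }
}
```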
[Rd] read.table messes up stdin upon small, erroneous input (PR#7722)
Full_Name: Jan T. Kim
Version: 2.0.1, devel-2005-02-24
OS: Linux 2.6.x
Submission from: (NULL) (139.222.3.229)

Run read.table(stdin()) and type in the broken table

    1 2
    1

terminating the input by pressing Ctrl-D at the 3rd line of input. An error message from scan, complaining that "line 2 did not have 2 elements", appears, as expected. However, after this there are three empty lines buffered in stdin:

    > readLines(stdin())
    [1] "" "" ""

Repeated attempts to read.table the broken input from stdin lead to even stranger results:

    > read.table(stdin())
    0: 1 2
    1: 1
    2:
    Error in scan(file = file, what = what, sep = sep, quote = quote, dec = dec,  :
            line 2 did not have 2 elements
    > read.table(stdin())
    3: 1 2
    4: 1
    [1] V1 V2
    <0 rows> (or 0-length row.names)
    >

Analysis: These effects are due to a combination of (1) the fact that there appear to be various routes of accessing the standard input, depending on context, and (2) the use of pushback in the process of automatically figuring out the table format:

* read.table uses .Internal(readTableHead(...)) to get the first nlines lines of the table (nlines = 5).
* .Internal(readTableHead(...)) always returns nlines lines, adding empty lines if EOF comes before nlines lines are read.
* These lines, including any empty ones not originating from the file in the first place, are then pushed back twice.
* The first set of lines is always consumed by the subsequent code that figures out the number of columns.
* The second set is intended to be consumed by the regular operation of scan.
* However, if scan chokes before it can consume these lines, including the blank ones, they will be left in the pushback buffer.
* R's interactive fetch-parse-evaluate loop does not use the connection provided by stdin(), and therefore the buffered material is not noticed until the next attempt to read from the stdin connection.
The strange effects reported above could probably be fixed by modifying the internal readTableHead function so that it does not produce empty lines in order to return the number of lines "requested" by the nlines parameter. A more fundamental approach would be to avoid pushing back lines altogether: the repeated scanning of the first few lines could be done using a textConnection instead. Some additional work will probably be necessary to combine the first few lines and the remaining ones, acquired by regular operation of scan, into the complete table.
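The textConnection idea can be sketched as follows; peek.lines() is a hypothetical helper, not the actual read.table code, and it deliberately avoids pushBack() so a later scan() failure cannot leave stray lines buffered on stdin:

```r
# Read up to n header lines destructively (no pushback, no padding with
# empty lines), then determine the column count by splitting on whitespace.
peek.lines <- function(con, n = 5) {
  head <- readLines(con, n = n)          # may return fewer than n lines
  fields <- strsplit(head, "[ \t]+")
  list(head = head, ncol = max(sapply(fields, length)))
}

tc <- textConnection("1 2\n3 4\n5 6")
info <- peek.lines(tc, n = 2)
close(tc)
stopifnot(identical(info$head, c("1 2", "3 4")), info$ncol == 2)
```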
[Rd] Quirk with as.function(..., envir = NULL) and body(..., envir = NULL) <-
I've been looking through the environment code lately, and noticed that both as.function(..., envir = NULL) and body(..., envir = NULL) <- treat the NULL as .GlobalEnv, even though NULL is the environment of the base package. The code that does this is very deep in the guts of R and affects all sorts of things, so I'm not planning to change it for 2.1.0, but I expect it will be fixed in 2.2.0 this fall.

In the meantime, I'd advise people to avoid using envir = NULL, and instead use envir = globalenv() or envir = .GlobalEnv (which are equivalent). If you want to set base as the environment for a function, you should use environment(f) <- NULL for now.

Duncan Murdoch
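For illustration, the explicit-environment form looks like this; note that assigning the base environment is written here with baseenv(), the spelling used in current R, since NULL environments are no longer allowed (at the time of the post the advice was environment(f) <- NULL):

```r
# Pass the environment explicitly instead of envir = NULL.
f <- as.function(alist(x = , x + 1), envir = globalenv())
stopifnot(identical(environment(f), globalenv()))

# Give the function the base environment -- what envir = NULL was meant
# to do. '+' is still found through the base environment:
environment(f) <- baseenv()
stopifnot(f(2) == 3)
```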
Re: [Rd] How to use Rmpi?
If your computation is simple enough to express in terms of lapply or other apply calls that you want to have run in parallel, then you might try the 'snow' package on CRAN, which can run on top of Rmpi. Some places to get more details on that:

http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html
http://www.bepress.com/cgi/viewcontent.cgi?article=1016&context=uwbiostat

Best,

luke

On Thu, 10 Mar 2005, Alessandro Balboni wrote:

I need to rewrite a piece of software in R that runs on a cluster. I thought Rmpi would be good for me, but I can't find any help other than the Rmpi manual, which only describes the functions in the Rmpi package. Can someone point me to a useful guide? For example, I would like to run a for-statement on several processors (a subset of the statement on each processor) but I can't figure out how to do this! Thanks

-- 
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa
Department of Statistics and Actuarial Science
241 Schaeffer Hall
Iowa City, IA 52242
Phone: 319-335-3386
Fax: 319-335-3017
email: [EMAIL PROTECTED]
WWW: http://www.stat.uiowa.edu
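For the original question (splitting a for-loop across processors), a minimal 'snow' sketch might look like this. It assumes the snow package is installed; type = "MPI" additionally requires Rmpi, so a socket cluster is used here for illustration:

```r
library(snow)

# Start two worker processes; with Rmpi installed, type = "MPI" would
# distribute the work over an MPI cluster instead.
cl <- makeCluster(2, type = "SOCK")

# The body of the for-loop becomes the function applied to each element;
# snow splits the elements over the workers.
squares <- parLapply(cl, 1:10, function(i) i^2)

stopifnot(identical(unlist(squares), (1:10)^2))
stopCluster(cl)
```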
Re: [Rd] R_alloc with more than 2GB (PR#7721)
On Thu, 10 Mar 2005, Wolfgang Huber wrote:

> Dear Prof Ripley,
>
>> It is a feature. Other parts of R expect a CHARSXP to have length less
>> than or equal to 2^31 - 1.
>
> OK, after looking closer at the code and comments in memory.c and
> Rinternals.h (typedef int R_len_t;) I realized that.
>
>> Could you not use x = allocVector(REALSXP, vs) and REAL(x)[i]? That
>> will get you up to 2^31 - 1 elements, which is the R limit AFAIK.
>
> Thanks, that is an excellent idea. It should be fine for my immediate
> needs, and better than what I've just been doing with Calloc!

Because of the use of Fortran, it is hard to see how to allow internal lengths (in elements, not necessarily bytes) to exceed that value. We need to return to that, but it is not straightforward, and last time we discussed it we agreed to defer it. We can manage a better error message, but I am afraid nothing else in the near future.

> In the application that triggered this posting, the memory is for a C
> array of doubles within a user-defined C function, not for anything
> that needs to become an R object, so maybe a suggestion would be to
> make R_alloc go directly to malloc without the detour over allocString
> or allocVector, or something along that line?

R_alloc makes use of garbage collection to avoid the need for explicit free()ing; otherwise you might as well use Calloc. Given that all memory allocated via the heap is aligned to doubles, there seems to me to be little or no loss in using a REALSXP rather than a CHARSXP, and certainly negligible loss for large vectors. That will buy us a factor of 8 for the present.

Brian

-- 
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK, Fax: +44 1865 272595