Re: [Rd] Graphics Device Resolution Limits
On 10/02/2017 19:27, Prof Brian Ripley wrote:

Note that there are at least 5 separate png() devices, so Linux was not using the (default) device used on Windows. In general, the device-limits info is not on the help page because we do not know it. On Windows the default device limits depend on the OS version, 32/64-bit, RAM and the graphics hardware.

This sounds like the last: you were asking for 49 megapixels, which is far larger than the largest screens. (Or all but the highest-end digital cameras, so one could well ask what you can usefully do with such an image.) Scratch that: res is in ppi, so the image should be 2755 px square: still rather large.

Normally you will get warning(s) accompanying that Error, but it might just be

    Warning: unable to allocate bitmap
    Warning: opening device failed

The first of those is reporting what the GraphApp toolkit said, talking directly to Windows GDI (and look at the Windows documentation for e.g. CreateCompatibleBitmap to see that no limits are mentioned). Even on Windows you have the option of using other png() devices:

    png(filename = "Rplot03d.png", width = 480, height = 480, units = "px",
        pointsize = 12, bg = "white", res = NA, family = "",
        restoreConsole = TRUE, type = c("windows", "cairo", "cairo-png"),
        antialias)

Try the other 2 types: the cairo devices do not use your graphics hardware nor Microsoft's GDI. (The other 2 devices are Xlib on a Unix-alike and Quartz on macOS.)

On 10/02/2017 16:54, Martin Maechler wrote:
> Dario Strbenac on Fri, 10 Feb 2017 02:00:08 + writes:
>> Good day,
>> Could the documentation of graphics devices give some explanation of
>> how big the bitmap limits are? For example,
>>
>>     png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
>>
>> results in "Error: unable to start png() device",
>
> This is amazing to me. I see
>
>     > png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
>     > plot(1)
>     > dev.off()
>     null device
>               1
>     > file.info("Figure1A.png")[1:5]
>                   size isdir mode               mtime               ctime
>     Figure1A.png 41272 FALSE  644 2017-02-10 17:40:42 2017-02-10 17:40:42
>
> in three different versions of R I've tried (all were 64-bit Linux).
> Note how *small* the file is. Now, I've also tried a 32-bit version of
> Linux (Ubuntu 14.04 LTS) and get a similar result (not exactly the same
> number of bytes for the file size).
>
>> but the help page of devices doesn't explain that there are any limits
>> or how they are determined. The wording of the error message could
>> also be improved, to explain that the resolution is too high or the
>> dimensions are too large.
>
> If one/some of those who can reproduce the problem in their versions of
> R provide (concise and not hard to read) patches to the source of R,
> we'd probably gratefully accept them.
>
> Martin Maechler

>> sessionInfo()
>> R version 3.3.2 Patched (2017-02-07 r72138)
>> Platform: i386-w64-mingw32/i386 (32-bit)
>> Running under: Windows 7 (build 7601) Service Pack 1
>>
>> --
>> Dario Strbenac
>> University of Sydney
>> Camperdown NSW 2050
>> Australia

--
Brian D. Ripley, rip...@stats.ox.ac.uk
Emeritus Professor of Applied Statistics, University of Oxford

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Pressing either Ctrl-\ or Ctrl-4 core dumps R
So do a number of other interactive programs when working in a terminal (e.g. python), since it looks like your terminal is configured for those two actions to send the SIGQUIT signal. Whether R should ignore that signal, under some circumstances at least, is another question.

Best,

luke

On Fri, 10 Feb 2017, Henrik Bengtsson wrote:
> When running R from the terminal on Linux (Ubuntu 16.04), it core
> dumps whenever / wherever I press Ctrl-4 or Ctrl-\. You get thrown
> back to the terminal with "Quit (core dump)" being the only message.
> Grepping the R source code, it doesn't look like that message is
> generated by R itself. Over on Twitter, it has been confirmed to also
> happen on macOS.
>
> $ R -d valgrind --vanilla --quiet
> ==979== Memcheck, a memory error detector
> ==979== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> ==979== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
> ==979== Command: /usr/lib/R/bin/exec/R --vanilla --quiet
> ==979==
> > 1+2
> [1] 3
>
> # At next prompt I press Ctrl-\. The same happens also when done in
> # the middle of an entry.
>
> ==979==
> ==979== Process terminating with default action of signal 3 (SIGQUIT)
> ==979==    at 0x576C9C3: __select_nocancel (syscall-template.S:84)
> ==979==    by 0x502EABE: R_SelectEx (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x502EDDF: R_checkActivityEx (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x502F32B: ??? (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F6988B: Rf_ReplIteration (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F69CF0: ??? (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F69DA7: run_Rmainloop (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4007CA: main (in /usr/lib/R/bin/exec/R)
> ==979==
> ==979== HEAP SUMMARY:
> ==979==     in use at exit: 28,981,596 bytes in 13,313 blocks
> ==979==   total heap usage: 27,002 allocs, 13,689 frees, 49,025,684 bytes allocated
> ==979==
> ==979== LEAK SUMMARY:
> ==979==    definitely lost: 0 bytes in 0 blocks
> ==979==    indirectly lost: 0 bytes in 0 blocks
> ==979==      possibly lost: 0 bytes in 0 blocks
> ==979==    still reachable: 28,981,596 bytes in 13,313 blocks
> ==979==         suppressed: 0 bytes in 0 blocks
> ==979== Rerun with --leak-check=full to see details of leaked memory
> ==979==
> ==979== For counts of detected and suppressed errors, rerun with: -v
> ==979== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
> Quit (core dumped)
>
> $ R --version
> R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
> Copyright (C) 2016 The R Foundation for Statistical Computing
> Platform: x86_64-pc-linux-gnu (64-bit)
>
> /Henrik
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Luke Tierney
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                    Phone:   319-335-3386
Department of Statistics and          Fax:     319-335-3017
   Actuarial Science
241 Schaeffer Hall                    email:   luke-tier...@uiowa.edu
Iowa City, IA 52242                   WWW:     http://www.stat.uiowa.edu

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Pressing either Ctrl-\ or Ctrl-4 core dumps R
Control-backslash is the default way to generate SIGQUIT from the keyboard on Unix, and SIGQUIT, by default, aborts the process and causes it to produce a core dump. Do you want R to catch SIGQUIT?

    % stty --all
    speed 38400 baud; rows 24; columns 64; line = 0;
    intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>;
    eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z;
    rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
    -parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts
    -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl
    ixon -ixoff -iuclc -ixany -imaxbel -iutf8
    opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
    isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop
    -echoprt echoctl echoke

Bill Dunlap
TIBCO Software
wdunlap tibco.com

On Fri, Feb 10, 2017 at 10:40 AM, Henrik Bengtsson wrote:
> When running R from the terminal on Linux (Ubuntu 16.04), it core
> dumps whenever / wherever I press Ctrl-4 or Ctrl-\. You get thrown
> back to the terminal with "Quit (core dump)" being the only message.
> Grepping the R source code, it doesn't look like that message is
> generated by R itself. Over on Twitter, it has been confirmed to also
> happen on macOS.
>
> $ R -d valgrind --vanilla --quiet
> ==979== Memcheck, a memory error detector
> ==979== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> ==979== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
> ==979== Command: /usr/lib/R/bin/exec/R --vanilla --quiet
> ==979==
> > 1+2
> [1] 3
>
> # At next prompt I press Ctrl-\. The same happens also when done in
> # the middle of an entry.
>
> ==979==
> ==979== Process terminating with default action of signal 3 (SIGQUIT)
> ==979==    at 0x576C9C3: __select_nocancel (syscall-template.S:84)
> ==979==    by 0x502EABE: R_SelectEx (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x502EDDF: R_checkActivityEx (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x502F32B: ??? (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F6988B: Rf_ReplIteration (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F69CF0: ??? (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4F69DA7: run_Rmainloop (in /usr/lib/R/lib/libR.so)
> ==979==    by 0x4007CA: main (in /usr/lib/R/bin/exec/R)
> ==979==
> ==979== HEAP SUMMARY:
> ==979==     in use at exit: 28,981,596 bytes in 13,313 blocks
> ==979==   total heap usage: 27,002 allocs, 13,689 frees, 49,025,684 bytes allocated
> ==979==
> ==979== LEAK SUMMARY:
> ==979==    definitely lost: 0 bytes in 0 blocks
> ==979==    indirectly lost: 0 bytes in 0 blocks
> ==979==      possibly lost: 0 bytes in 0 blocks
> ==979==    still reachable: 28,981,596 bytes in 13,313 blocks
> ==979==         suppressed: 0 bytes in 0 blocks
> ==979== Rerun with --leak-check=full to see details of leaked memory
> ==979==
> ==979== For counts of detected and suppressed errors, rerun with: -v
> ==979== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
> Quit (core dumped)
>
> $ R --version
> R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
> Copyright (C) 2016 The R Foundation for Statistical Computing
> Platform: x86_64-pc-linux-gnu (64-bit)
>
> /Henrik
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
[Rd] Pressing either Ctrl-\ or Ctrl-4 core dumps R
When running R from the terminal on Linux (Ubuntu 16.04), it core dumps whenever / wherever I press Ctrl-4 or Ctrl-\. You get thrown back to the terminal with "Quit (core dump)" being the only message. Grepping the R source code, it doesn't look like that message is generated by R itself. Over on Twitter, it has been confirmed to also happen on macOS.

    $ R -d valgrind --vanilla --quiet
    ==979== Memcheck, a memory error detector
    ==979== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
    ==979== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
    ==979== Command: /usr/lib/R/bin/exec/R --vanilla --quiet
    ==979==
    > 1+2
    [1] 3

    # At next prompt I press Ctrl-\. The same happens also when done in
    # the middle of an entry.

    >
    ==979==
    ==979== Process terminating with default action of signal 3 (SIGQUIT)
    ==979==    at 0x576C9C3: __select_nocancel (syscall-template.S:84)
    ==979==    by 0x502EABE: R_SelectEx (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x502EDDF: R_checkActivityEx (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x502F32B: ??? (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x4F6988B: Rf_ReplIteration (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x4F69CF0: ??? (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x4F69DA7: run_Rmainloop (in /usr/lib/R/lib/libR.so)
    ==979==    by 0x4007CA: main (in /usr/lib/R/bin/exec/R)
    ==979==
    ==979== HEAP SUMMARY:
    ==979==     in use at exit: 28,981,596 bytes in 13,313 blocks
    ==979==   total heap usage: 27,002 allocs, 13,689 frees, 49,025,684 bytes allocated
    ==979==
    ==979== LEAK SUMMARY:
    ==979==    definitely lost: 0 bytes in 0 blocks
    ==979==    indirectly lost: 0 bytes in 0 blocks
    ==979==      possibly lost: 0 bytes in 0 blocks
    ==979==    still reachable: 28,981,596 bytes in 13,313 blocks
    ==979==         suppressed: 0 bytes in 0 blocks
    ==979== Rerun with --leak-check=full to see details of leaked memory
    ==979==
    ==979== For counts of detected and suppressed errors, rerun with: -v
    ==979== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
    Quit (core dumped)

    $ R --version
    R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
    Copyright (C) 2016 The R Foundation for Statistical Computing
    Platform: x86_64-pc-linux-gnu (64-bit)

/Henrik

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Ancient C/Fortran code linpack error
Thanks Berend, I will make that change and submit to CRAN.

Best, Göran

On 2017-02-10 16:13, Berend Hasselman wrote:
> On 10 Feb 2017, at 14:53, Göran Broström wrote:
>> Thanks to all who answered my third question. I learned something, but:
>>
>> On 2017-02-09 17:44, Martin Maechler wrote:
>>>> On 9 Feb 2017, at 16:00, Göran Broström wrote:
>>>>> In my package 'glmmML' I'm using old C code and linpack in the
>>>>> optimizing procedure. Specifically, one part of the code looks
>>>>> like this:
>>>>>
>>>>>     F77_CALL(dpoco)(*hessian, &bdim, &bdim, &rcond, work, info);
>>>>>     if (*info == 0){
>>>>>         F77_CALL(dpodi)(*hessian, &bdim, &bdim, det, &job);
>>>>>
>>>>> This usually works OK, but with an ill-conditioned data set (from a
>>>>> user of glmmML) it happened that the hessian was all nan. However,
>>>>> dpoco returned *info = 0 (no error!) and then the call to dpodi
>>>>> hanged R! I googled for C and nan and found a work-around: change
>>>>> 'if ...' to
>>>>>
>>>>>     if (*info == 0 & (hessian[0][0] == hessian[0][0])){
>>>>>
>>>>> which works as a test of hessian[0][0] (not) being NaN. I'm using
>>>>> the .C interface for calling C code. Any thoughts on how to best
>>>>> handle the situation? Is this a bug in dpoco? Is there a simple way
>>>>> to test for any NaNs in a vector?
>>>>
>>>> You should/could use macro R_FINITE to test each entry of the
>>>> hessian. In package nleqslv I test for a "correct" jacobian like
>>>> this in file nleqslv.c in function fcnjac:
>>>>
>>>>     for (j = 0; j < *n; j++)
>>>>         for (i = 0; i < *n; i++) {
>>>>             if( !R_FINITE(REAL(sexp_fjac)[(*n)*j + i]) )
>>>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>>>             rjac[(*ldr)*j + i] = REAL(sexp_fjac)[(*n)*j + i];
>>>>         }
>>>
>>> A minor hint on that: While REAL(.) (or INTEGER(.) ...) is really
>>> cheap in the R sources themselves, that is not the case in package
>>> code.
>>>
>>> Hence, not only nicer to read but even faster is
>>>
>>>     double *fj = REAL(sexp_fjac);
>>>     for (j = 0; j < *n; j++)
>>>         for (i = 0; i < *n; i++) {
>>>             if( !R_FINITE(fj[(*n)*j + i]) )
>>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>>             rjac[(*ldr)*j + i] = fj[(*n)*j + i];
>>>         }
>>
>> [...]
>>
>> isn't this even easier to read (and correct?):
>>
>>     for (j = 0; j < *n; j++)
>>         for (i = 0; i < *n; i++){
>>             if ( !R_FINITE(hessian[i][j]) ) error("blah...")
>>         }
>>
>> ? In .C land, that is. (And sure, I'm afraid of ±Inf in this context.)
>
> Only if you have lda and n equal (which you indeed have; but still
> worth mentioning) when calling dpoco.
>
> Berend

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check error (interfacing to C API of other pkg): Solved
Martin,

That was it: I forgot the "LinkingTo" line. I had read that section of the manual twice in the last 2 days, yet somehow missed that critical line both times. And even worse, the final sentence of said section references my own coxme package as an example of how to do it correctly!

Thank you all for the help. My only remaining defense, but a very weak one, is that the error message could have been better, since it led me to believe that R couldn't find the library at all.

Terry Therneau

On 02/10/2017 10:26 AM, Martin Maechler wrote:
>> Therneau, Terry M, Ph D on Thu, 9 Feb 2017 12:56:17 -0600 writes:
>> Martyn,
>> No, that didn't work.
>> One other thing in the mix (which I don't think is the issue) is that
>> I call one of the C-entry points of expm. So the DESCRIPTION file
>> imports expm, the NAMESPACE file imports expm, and the init.c file is
>>
>>     #include "R.h"
>>     #include "R_ext/Rdynload.h"
>>
>>     /* Interface to expm package. */
>>     typedef enum {Ward_2, Ward_1, Ward_buggy_octave} precond_type;
>>     void (*expm)(double *x, int n, double *z, precond_type precond_kind);
>>
>>     void R_init_hmm(DllInfo *dll)
>>     {
>>         expm = (void (*)) R_GetCCallable("expm", "expm");
>>     }
>>
>> I don't expect that this is the problem since I stole the above
>> almost verbatim from the msm package.
>>
>> Terry T.
>
> Hmm. Yes, I can see that the CRAN package msm does do this, indeed. It
> is interesting if/why that does not produce any notes or rather even
> warnings.
>
> In principle, if you use the C API of 'expm' you should use
> 'LinkingTo: expm'; see *the* manual, specifically the section
> https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Linking-to-native-routines-in-other-packages
> and that section does mention that (unfortunately in my view) you also
> should use 'Imports:' or 'Depends:' in addition to the 'LinkingTo:'.
>
> Note however that 'expm' would not have to be mentioned in the
> NAMESPACE file unless your R functions do use some of expm's R level
> functionality.
>
> Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
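[Editorial note: the fix discussed above amounts to a few DESCRIPTION fields. A minimal sketch; the package name "hmm" is taken from the R_init_hmm symbol in the quoted init.c and the version number is illustrative:]

```
Package: hmm
Version: 0.1-0
Imports: expm
LinkingTo: expm
```

'LinkingTo:' puts expm's header directory on the include path at compile time; 'Imports:' (or 'Depends:') guarantees expm is loaded, so the routine it registered via R_RegisterCCallable is actually available to R_GetCCallable at run time.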
Re: [Rd] Graphics Device Resolution Limits
Were you suppressing warnings? I get a warning along with the "unable to start device 'png'" in some cases where it fails. E.g., on Linux

    > png("Figure1A.png", h = 7, w = 7, res = 1e5, units = "cm")
    Error in png("Figure1A.png", h = 7, w = 7, res = 1e+05, units = "cm") :
      unable to start device 'png'
    In addition: Warning message:
    In png("Figure1A.png", h = 7, w = 7, res = 1e+05, units = "cm") :
      cairo error 'invalid value (typically too big) for the size of the input (surface, pattern, etc.)'

or on Windows

    > png("Figure1A.png", h = 7, w = 7, res = 1e5, units = "cm")
    Error in png("Figure1A.png", h = 7, w = 7, res = 1e+05, units = "cm") :
      unable to start png() device
    In addition: Warning messages:
    1: In png("Figure1A.png", h = 7, w = 7, res = 1e+05, units = "cm") :
      unable to allocate bitmap
    2: In png("Figure1A.png", h = 7, w = 7, res = 1e+05, units = "cm") :
      opening device failed

or when the current directory is not writable (or does not exist)

    > png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
    > plot(1:5)
    Error in plot.new() : could not open file 'Figure1A.png'

Bill Dunlap
TIBCO Software
wdunlap tibco.com

On Thu, Feb 9, 2017 at 6:00 PM, Dario Strbenac wrote:
> Good day,
>
> Could the documentation of graphics devices give some explanation of
> how big the bitmap limits are? For example,
>
>     png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
>
> results in "Error: unable to start png() device", but the help page of
> devices doesn't explain that there are any limits or how they are
> determined. The wording of the error message could also be improved,
> to explain that the resolution is too high or the dimensions are too
> large.
>
> > sessionInfo()
> R version 3.3.2 Patched (2017-02-07 r72138)
> Platform: i386-w64-mingw32/i386 (32-bit)
> Running under: Windows 7 (build 7601) Service Pack 1
>
> --
> Dario Strbenac
> University of Sydney
> Camperdown NSW 2050
> Australia
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Graphics Device Resolution Limits
> Dario Strbenac on Fri, 10 Feb 2017 02:00:08 + writes:
>> Good day,
>> Could the documentation of graphics devices give some explanation of
>> how big the bitmap limits are? For example,
>>
>>     png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
>>
>> results in "Error: unable to start png() device",

This is amazing to me. I see

    > png("Figure1A.png", h = 7, w = 7, res = 1000, units = "cm")
    > plot(1)
    > dev.off()
    null device
              1
    > file.info("Figure1A.png")[1:5]
                  size isdir mode               mtime               ctime
    Figure1A.png 41272 FALSE  644 2017-02-10 17:40:42 2017-02-10 17:40:42

in three different versions of R I've tried (all were 64-bit Linux). Note how *small* the file is. Now, I've also tried a 32-bit version of Linux (Ubuntu 14.04 LTS) and get a similar result (not exactly the same number of bytes for the file size).

>> but the help page of devices doesn't explain that there are any limits
>> or how they are determined. The wording of the error message could
>> also be improved, to explain that the resolution is too high or the
>> dimensions are too large.

If one/some of those who can reproduce the problem in their versions of R provide (concise and not hard to read) patches to the source of R, we'd probably gratefully accept them.

Martin Maechler

>> sessionInfo()
>> R version 3.3.2 Patched (2017-02-07 r72138)
>> Platform: i386-w64-mingw32/i386 (32-bit)
>> Running under: Windows 7 (build 7601) Service Pack 1
>>
>> --
>> Dario Strbenac
>> University of Sydney
>> Camperdown NSW 2050
>> Australia
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check error (interfacing to C API of other pkg)
> Therneau, Terry M, Ph D on Thu, 9 Feb 2017 12:56:17 -0600 writes:
> Martyn,
> No, that didn't work.
> One other thing in the mix (which I don't think is the issue) is that
> I call one of the C-entry points of expm. So the DESCRIPTION file
> imports expm, the NAMESPACE file imports expm, and the init.c file is
>
>     #include "R.h"
>     #include "R_ext/Rdynload.h"
>
>     /* Interface to expm package. */
>     typedef enum {Ward_2, Ward_1, Ward_buggy_octave} precond_type;
>     void (*expm)(double *x, int n, double *z, precond_type precond_kind);
>
>     void R_init_hmm(DllInfo *dll)
>     {
>         expm = (void (*)) R_GetCCallable("expm", "expm");
>     }
>
> I don't expect that this is the problem since I stole the above almost
> verbatim from the msm package.
>
> Terry T.

Hmm. Yes, I can see that the CRAN package msm does do this, indeed. It is interesting if/why that does not produce any notes or rather even warnings.

In principle, if you use the C API of 'expm' you should use 'LinkingTo: expm'; see *the* manual, specifically the section
https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Linking-to-native-routines-in-other-packages
and that section does mention that (unfortunately in my view) you also should use 'Imports:' or 'Depends:' in addition to the 'LinkingTo:'.

Note however that 'expm' would not have to be mentioned in the NAMESPACE file unless your R functions do use some of expm's R level functionality.

Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Ancient C/Fortran code linpack error
> On 10 Feb 2017, at 14:53, Göran Broström wrote:
>
> Thanks to all who answered my third question. I learned something, but:
>
> On 2017-02-09 17:44, Martin Maechler wrote:
>>>> On 9 Feb 2017, at 16:00, Göran Broström wrote:
>>>>> In my package 'glmmML' I'm using old C code and linpack in the
>>>>> optimizing procedure. Specifically, one part of the code looks
>>>>> like this:
>>>>>
>>>>>     F77_CALL(dpoco)(*hessian, &bdim, &bdim, &rcond, work, info);
>>>>>     if (*info == 0){
>>>>>         F77_CALL(dpodi)(*hessian, &bdim, &bdim, det, &job);
>>>>>
>>>>> This usually works OK, but with an ill-conditioned data set (from a
>>>>> user of glmmML) it happened that the hessian was all nan. However,
>>>>> dpoco returned *info = 0 (no error!) and then the call to dpodi
>>>>> hanged R! I googled for C and nan and found a work-around: change
>>>>> 'if ...' to
>>>>>
>>>>>     if (*info == 0 & (hessian[0][0] == hessian[0][0])){
>>>>>
>>>>> which works as a test of hessian[0][0] (not) being NaN. I'm using
>>>>> the .C interface for calling C code. Any thoughts on how to best
>>>>> handle the situation? Is this a bug in dpoco? Is there a simple way
>>>>> to test for any NaNs in a vector?
>>>>
>>>> You should/could use macro R_FINITE to test each entry of the
>>>> hessian. In package nleqslv I test for a "correct" jacobian like
>>>> this in file nleqslv.c in function fcnjac:
>>>>
>>>>     for (j = 0; j < *n; j++)
>>>>         for (i = 0; i < *n; i++) {
>>>>             if( !R_FINITE(REAL(sexp_fjac)[(*n)*j + i]) )
>>>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>>>             rjac[(*ldr)*j + i] = REAL(sexp_fjac)[(*n)*j + i];
>>>>         }
>>>
>>> A minor hint on that: While REAL(.) (or INTEGER(.) ...) is really
>>> cheap in the R sources themselves, that is not the case in package
>>> code.
>>>
>>> Hence, not only nicer to read but even faster is
>>>
>>>     double *fj = REAL(sexp_fjac);
>>>     for (j = 0; j < *n; j++)
>>>         for (i = 0; i < *n; i++) {
>>>             if( !R_FINITE(fj[(*n)*j + i]) )
>>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>>             rjac[(*ldr)*j + i] = fj[(*n)*j + i];
>>>         }
>
> [...]
>
> isn't this even easier to read (and correct?):
>
>     for (j = 0; j < *n; j++)
>         for (i = 0; i < *n; i++){
>             if ( !R_FINITE(hessian[i][j]) ) error("blah...")
>         }
>
> ? In .C land, that is. (And sure, I'm afraid of ±Inf in this context.)

Only if you have lda and n equal (which you indeed have; but still worth mentioning) when calling dpoco.

Berend

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Ancient C/Fortran code linpack error
Thanks to all who answered my third question. I learned something, but:

On 2017-02-09 17:44, Martin Maechler wrote:
>>> On 9 Feb 2017, at 16:00, Göran Broström wrote:
>>>> In my package 'glmmML' I'm using old C code and linpack in the
>>>> optimizing procedure. Specifically, one part of the code looks
>>>> like this:
>>>>
>>>>     F77_CALL(dpoco)(*hessian, &bdim, &bdim, &rcond, work, info);
>>>>     if (*info == 0){
>>>>         F77_CALL(dpodi)(*hessian, &bdim, &bdim, det, &job);
>>>>
>>>> This usually works OK, but with an ill-conditioned data set (from a
>>>> user of glmmML) it happened that the hessian was all nan. However,
>>>> dpoco returned *info = 0 (no error!) and then the call to dpodi
>>>> hanged R! I googled for C and nan and found a work-around: change
>>>> 'if ...' to
>>>>
>>>>     if (*info == 0 & (hessian[0][0] == hessian[0][0])){
>>>>
>>>> which works as a test of hessian[0][0] (not) being NaN. I'm using
>>>> the .C interface for calling C code. Any thoughts on how to best
>>>> handle the situation? Is this a bug in dpoco? Is there a simple way
>>>> to test for any NaNs in a vector?
>>>
>>> You should/could use macro R_FINITE to test each entry of the
>>> hessian. In package nleqslv I test for a "correct" jacobian like
>>> this in file nleqslv.c in function fcnjac:
>>>
>>>     for (j = 0; j < *n; j++)
>>>         for (i = 0; i < *n; i++) {
>>>             if( !R_FINITE(REAL(sexp_fjac)[(*n)*j + i]) )
>>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>>             rjac[(*ldr)*j + i] = REAL(sexp_fjac)[(*n)*j + i];
>>>         }
>>
>> A minor hint on that: While REAL(.) (or INTEGER(.) ...) is really
>> cheap in the R sources themselves, that is not the case in package
>> code.
>>
>> Hence, not only nicer to read but even faster is
>>
>>     double *fj = REAL(sexp_fjac);
>>     for (j = 0; j < *n; j++)
>>         for (i = 0; i < *n; i++) {
>>             if( !R_FINITE(fj[(*n)*j + i]) )
>>                 error("non-finite value(s) returned by jacobian (row=%d,col=%d)", i+1, j+1);
>>             rjac[(*ldr)*j + i] = fj[(*n)*j + i];
>>         }

[...]

isn't this even easier to read (and correct?):

    for (j = 0; j < *n; j++)
        for (i = 0; i < *n; i++){
            if ( !R_FINITE(hessian[i][j]) ) error("blah...")
        }

? In .C land, that is. (And sure, I'm afraid of ±Inf in this context.)

Thanks again, Göran

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel