Re: [R-pkg-devel] CRAN packages dependency on bioconductor packages
On 16 May 2024 at 05:34, Duncan Murdoch wrote: | I forget now, but presumably the thinking at the time was that Suggested | packages would always be available for building and checking vignettes. Yes. I argued for years (cf. https://dirk.eddelbuettel.com/blog/2017/03/22/ from seven (!!) years ago) and CRAN is slowly moving away from that implicit 'always there' guarantee to preferring explicit enumerations -- and now even tests via the NoSuggests flavour. As Uwe stated in this thread, having the vignette dependencies both in Suggests as well as in the vignette header should do. And it is the Right Thing (TM) to do. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
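A minimal sketch of what that means in practice (the engine and package names here are illustrative, not taken from any particular package): every package the vignette uses is enumerated in Suggests, the vignette engine is named in VignetteBuilder, and the vignette file declares the engine in its own header.

```
## DESCRIPTION (excerpt, sketch)
Suggests: knitr, rmarkdown, ggplot2
VignetteBuilder: knitr

## vignettes/intro.Rmd, metadata lines in the header (sketch)
##   %\VignetteIndexEntry{Introduction}
##   %\VignetteEngine{knitr::rmarkdown}
```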
Bug#1070840: r-cran-ff: autopkgtest regression with r-base 4.4.0
On 10 May 2024 at 06:28, Dirk Eddelbuettel wrote: | | On 10 May 2024 at 10:54, Graham Inggs wrote: | | Source: r-cran-ff | | Version: 4.0.12+ds-1 | | Severity: serious | | X-Debbugs-Cc: Dirk Eddelbuettel | | User: debian...@lists.debian.org | | Usertags: regression | | | | Hi Maintainer | | | | r-cran-ff's autopkgtest regresses when tested with r-base 4.4.0 [1]. | | I've copied what I hope is the relevant part of the log below. | | FYI, I am not the maintainer of r-cran-ff. | | The package is perfectly clean at CRAN on all hardware-os combinations, | including amd64 so maybe the maintainer needs to turn this test off: | |https://cloud.r-project.org/web/checks/check_results_ff.html Also, for what it is worth, installing r-cran-ff and its one dependency in a container along with r-cran-testthat and its twenty (ick!), and then running 'bash run-unit-test' leads to no issue: [ FAIL 0 | WARN 52 | SKIP 0 | PASS 966 ] Maybe something for the package maintainer to consider. Dirk | | Dirk | | | Regards | | Graham | | | | | | [1] https://ci.debian.net/packages/r/r-cran-ff/testing/amd64/ | | | | | | 42s ══ Failed tests | | | | 42s ── Failure ('test-zero_lengths.R:34:3'): file size is correct when | | creating ff integer from scratch ── | | 42s file.exists(f1) is not TRUE | | 42s | | 42s `actual`: FALSE | | 42s `expected`: TRUE | | 42s | | 42s [ FAIL 1 | WARN 52 | SKIP 0 | PASS 965 ] | | 42s Error: Test failures | | 42s Execution halted | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1070842: r-bioc-mutationalpatterns: autopkgtest regression with r-base 4.4.0
On 10 May 2024 at 11:01, Graham Inggs wrote: | Source: r-bioc-mutationalpatterns | Version: 3.12.0+dfsg-1 | Severity: serious | X-Debbugs-Cc: Dirk Eddelbuettel | User: debian...@lists.debian.org | Usertags: regression | | Hi Maintainer | | r-bioc-mutationalpatterns' autopkgtest regresses when tested with r-base 4.4.0 | [1]. I've copied what I hope is the relevant part of the log below. FYI, I am not the maintainer of r-bioc-mutationalpatterns. As you likely know, BioConductor aligns its releases with R releases and is now at release 3.19 (matching R 4.4.0) for which this package is now at version 3.14.0. I suggest the maintainer look into upgrading BioConductor to 3.19. Dirk | | Regards | Graham | | | [1] https://ci.debian.net/packages/r/r-bioc-mutationalpatterns/testing/amd64/ | | | 125s > test_check("MutationalPatterns") | 172s [ FAIL 3 | WARN 275 | SKIP 0 | PASS 280 ] | 172s | 172s ══ Failed tests | | 172s ── Error ('test-fit_to_signatures_bootstrapped.R:12:3'): Output | has correct class ── | 172s Error in `FUN(X[[i]], ...)`: isEmpty() is not defined for objects | of class NULL | 172s Backtrace: | 172s ▆ | 172s 1. ├─MutationalPatterns::fit_to_signatures_bootstrapped(...) at | test-fit_to_signatures_bootstrapped.R:12:3 | 172s 2. │ └─MutationalPatterns::fit_to_signatures_strict(...) | 172s 3. │ └─MutationalPatterns:::.strict_refit_backwards_selection_sample(...) | 172s 4. │ └─MutationalPatterns:::.plot_sim_decay(...) | 172s 5. │ ├─sims[!S4Vectors::isEmpty(sims)] %>% unlist() | 172s 6. │ ├─S4Vectors::isEmpty(sims) | 172s 7. │ └─S4Vectors::isEmpty(sims) | 172s 8. │ └─base::vapply(x, isEmpty, logical(1L)) | 172s 9. │ ├─S4Vectors (local) FUN(X[[i]], ...) | 172s 10. │ └─S4Vectors (local) FUN(X[[i]], ...) | 172s 11. └─base::unlist(.) | 172s ── Error ('test-fit_to_signatures_bootstrapped.R:31:3'): Output | is equal to expected ── | 172s Error in `FUN(X[[i]], ...)`: isEmpty() is not defined for objects | of class NULL | 172s Backtrace: | 172s ▆ | 172s 1. 
├─MutationalPatterns::fit_to_signatures_bootstrapped(...) at | test-fit_to_signatures_bootstrapped.R:31:3 | 172s 2. │ └─MutationalPatterns::fit_to_signatures_strict(...) | 172s 3. │ └─MutationalPatterns:::.strict_refit_backwards_selection_sample(...) | 172s 4. │ └─MutationalPatterns:::.plot_sim_decay(...) | 172s 5. │ ├─sims[!S4Vectors::isEmpty(sims)] %>% unlist() | 172s 6. │ ├─S4Vectors::isEmpty(sims) | 172s 7. │ └─S4Vectors::isEmpty(sims) | 172s 8. │ └─base::vapply(x, isEmpty, logical(1L)) | 172s 9. │ ├─S4Vectors (local) FUN(X[[i]], ...) | 172s 10. │ └─S4Vectors (local) FUN(X[[i]], ...) | 172s 11. └─base::unlist(.) | 172s ── Error ('test-fit_to_signatures_strict.R:11:1'): (code run | outside of `test_that()`) ── | 172s Error in `FUN(X[[i]], ...)`: isEmpty() is not defined for objects | of class NULL | 172s Backtrace: | 172s ▆ | 172s 1. ├─MutationalPatterns::fit_to_signatures_strict(...) at | test-fit_to_signatures_strict.R:11:1 | 172s 2. │ └─MutationalPatterns:::.strict_refit_backwards_selection_sample(...) | 172s 3. │ └─MutationalPatterns:::.plot_sim_decay(...) | 172s 4. │ ├─sims[!S4Vectors::isEmpty(sims)] %>% unlist() | 172s 5. │ ├─S4Vectors::isEmpty(sims) | 172s 6. │ └─S4Vectors::isEmpty(sims) | 172s 7. │ └─base::vapply(x, isEmpty, logical(1L)) | 172s 8. │ ├─S4Vectors (local) FUN(X[[i]], ...) | 172s 9. │ └─S4Vectors (local) FUN(X[[i]], ...) | 172s 10. └─base::unlist(.) | 172s | 172s [ FAIL 3 | WARN 275 | SKIP 0 | PASS 280 ] | 173s Error: Test failures | 173s Execution halted -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1070843: r-bioc-s4vectors: autopkgtest regression with r-base 4.4.0
On 10 May 2024 at 11:04, Graham Inggs wrote: | Source: r-bioc-s4vectors | Version: 0.40.2+dfsg-1 | Severity: serious | X-Debbugs-Cc: Dirk Eddelbuettel | User: debian...@lists.debian.org | Usertags: regression | | Hi Maintainer | | r-bioc-s4vectors' autopkgtest regresses when tested with r-base 4.4.0 | [1]. I've copied what I hope is the relevant part of the log below. FYI, I am not the maintainer of r-bioc-s4vectors. As you likely know, BioConductor aligns its releases with R releases and is now at release 3.19 (matching R 4.4.0) for which this package is now at version 0.42.0. I suggest the maintainer look into upgrading BioConductor to 3.19. Dirk | | Regards | Graham | | | [1] https://ci.debian.net/packages/r/r-bioc-s4vectors/testing/amd64/ | | | 125s > S4Vectors:::.test() | 129s Timing stopped at: 0.009 0 0.009 | 129s Error in var(x) : is.atomic(y) is not TRUE | 129s In addition: Warning messages: | 129s 1: In combineUniqueCols(X, Y, Z, use.names = FALSE) : | 129s different values in multiple instances of column 'dup', ignoring this | 129s column in DFrame 2 | 129s 2: In combineUniqueCols(X, Y, Z) : | 129s different values for shared rows in multiple instances of column 'dup', | 129s ignoring this column in DFrame 2 | 129s 3: In combineUniqueCols(x, y2) : | 129s different values for shared rows in multiple instances of column 'X', | 129s ignoring this column in DFrame 2 | 130s Loading required package: GenomeInfoDb | 132s | 132s | 132s RUNIT TEST PROTOCOL -- Thu May 9 22:12:10 2024 | 132s *** | 132s Number of test functions: 74 | 132s Number of errors: 1 | 132s Number of failures: 0 | 132s | 132s | 132s 1 Test Suite : | 132s S4Vectors RUnit Tests - 74 test functions, 1 error, 0 failures | 132s ERROR in test_Rle_numerical: Error in var(x) : is.atomic(y) is not TRUE | 132s | 132s Test files with failing tests | 132s | 132s test_Rle-utils.R | 132s test_Rle_numerical | 132s | 132s | 132s Error in BiocGenerics:::testPackage("S4Vectors") : | 132s unit tests failed for 
package S4Vectors | 132s Calls: -> | 132s Execution halted -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1070841: r-bioc-iranges: autopkgtest regression with r-base 4.4.0
On 10 May 2024 at 10:58, Graham Inggs wrote: | Source: r-bioc-iranges | Version: 2.36.0-1 | Severity: serious | X-Debbugs-Cc: Dirk Eddelbuettel | User: debian...@lists.debian.org | Usertags: regression | | Hi Maintainer | | r-bioc-iranges' autopkgtest regresses when tested with r-base 4.4.0 | [1]. I've copied what I hope is the relevant part of the log below. FYI, I am not the maintainer of r-bioc-iranges. As you likely know, BioConductor aligns its releases with R releases and is now at release 3.19, for which this package is at version 2.38.0. I suggest the maintainer look into upgrading BioConductor to 3.19. Dirk | | Regards | Graham | | | [1] https://ci.debian.net/packages/r/r-bioc-iranges/testing/amd64/ | | | 194s *** | 194s Number of test functions: 98 | 194s Number of errors: 1 | 194s Number of failures: 0 | 194s | 194s | 194s 1 Test Suite : | 194s IRanges RUnit Tests - 98 test functions, 1 error, 0 failures | 194s ERROR in test_AtomicList_numerical: Error in FUN(X[[i]], ...) : | is.atomic(y) is not TRUE | 194s | 194s Test files with failing tests | 194s | 194s test_AtomicList-utils.R | 194s test_AtomicList_numerical | 194s | 194s | 194s Warning messages: | 194s 1: In recycleListElements(e1, en) : | 194s Some element lengths are not multiples of their corresponding | element length in e1 | 194s 2: In x + y : | 194s longer object length is not a multiple of shorter object length | 194s 3: In recycleListElements(e1, en) : | 194s Some element lengths are not multiples of their corresponding | element length in e1 | 194s 4: In x + y : | 194s longer object length is not a multiple of shorter object length | 194s Execution halted -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1070840: r-cran-ff: autopkgtest regression with r-base 4.4.0
On 10 May 2024 at 10:54, Graham Inggs wrote: | Source: r-cran-ff | Version: 4.0.12+ds-1 | Severity: serious | X-Debbugs-Cc: Dirk Eddelbuettel | User: debian...@lists.debian.org | Usertags: regression | | Hi Maintainer | | r-cran-ff's autopkgtest regresses when tested with r-base 4.4.0 [1]. | I've copied what I hope is the relevant part of the log below. FYI, I am not the maintainer of r-cran-ff. The package is perfectly clean at CRAN on all hardware-os combinations, including amd64 so maybe the maintainer needs to turn this test off: https://cloud.r-project.org/web/checks/check_results_ff.html Dirk | Regards | Graham | | | [1] https://ci.debian.net/packages/r/r-cran-ff/testing/amd64/ | | | 42s ══ Failed tests | | 42s ── Failure ('test-zero_lengths.R:34:3'): file size is correct when | creating ff integer from scratch ── | 42s file.exists(f1) is not TRUE | 42s | 42s `actual`: FALSE | 42s `expected`: TRUE | 42s | 42s [ FAIL 1 | WARN 52 | SKIP 0 | PASS 965 ] | 42s Error: Test failures | 42s Execution halted -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [R-pkg-devel] Overcoming CRAN's 5mb vendoring requirement
Software Heritage (see [1] for their website and [2] for a brief intro I gave at useR! 2019 in Toulouse) covers GitHub and CRAN [3]. It is by now 'in collaboration with UNESCO', supported by a long and posh list of sponsors [4] and about as good as it gets to 'ensure longevity of artifacts'. It is of course not meant for downloads during frequent builds. But given the 'quasi-institutional nature' and sponsorship, we could think of using GitHub as an 'active cache'. But CRAN is CRAN and as it now stands GitHub is not trusted. ¯\_(ツ)_/¯ Dirk [1] https://www.softwareheritage.org/ [2] https://dirk.eddelbuettel.com/papers/useR2019_swh_cran_talk.pdf [3] https://www.softwareheritage.org/faq/ question 2.1 [4] https://www.softwareheritage.org/support/sponsors/ -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] Fast Matrix Serialization in R?
On 9 May 2024 at 03:20, Sameh Abdulah wrote: | I need to serialize and save a 20K x 20K matrix as a binary file. Hm, that is an incomplete specification: _what_ do you want to do with it? Read it back in R? Share it with other languages (like Python)? I.e. what really is your use case? Also, you only seem to use readBin / writeBin. Why not saveRDS / readRDS, which at least give you compression? If it is to read/write from/to R, look into the qs package. It is good. The README.md at its repo has benchmarks: https://github.com/traversc/qs If you want to index into the stored data, look into fst. Else also look at databases. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
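To make the trade-off concrete, here is a small sketch (a tiny matrix rather than 20K x 20K; the qs calls are left commented out and follow its README):

```r
set.seed(42)
m <- matrix(rnorm(1e4), 100, 100)

## saveRDS/readRDS: one call each way, gzip-compressed by default,
## preserves dimensions and type -- but R-only
f <- tempfile(fileext = ".rds")
saveRDS(m, f)
stopifnot(identical(m, readRDS(f)))

## qs is typically much faster still (see its README benchmarks):
## qs::qsave(m, f); qs::qread(f)

## raw writeBin/readBin: no metadata at all, so dimensions (and
## endianness, if sharing across machines) must be tracked by hand
fb <- tempfile(fileext = ".bin")
con <- file(fb, "wb"); writeBin(as.numeric(m), con); close(con)
con <- file(fb, "rb"); v <- readBin(con, "numeric", n = length(m)); close(con)
stopifnot(identical(m, matrix(v, 100, 100)))
```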
Re: [R-pkg-devel] Overcoming CRAN's 5mb vendoring requirement
On 8 May 2024 at 11:02, Josiah Parry wrote: | CRAN has rejected this package with: | | * Size of tarball: 18099770 bytes* | | *Please reudce to less than 5 MB for a CRAN package.* Are you by chance confusing a NOTE (issued, but can be overruled) with a WARNING (more severe, likely a must-be-addressed) or ERROR? There are lots and lots of packages larger than 5mb -- see e.g. https://cran.r-project.org/src/contrib/?C=S;O=D which has a top-5 of rcdklibs 19mb, fastrmodels 15mb, prqlr 15mb, RFlocalfdr 14mb, acss.data 14mb, and at least one of those is also Rust-using and hence a possible template. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Bug#1070240: r-cran-tmb: Please rebuild under updated Matrix package
Package: r-cran-tmb Version: 1.9.11-1 Severity: important CRAN package Matrix had a new release 1.7.0 bringing in a new SuiteSparse API which requires a rebuild if (and only if) the Matrix headers are used. Your package is one of those that do, and therefore needs a rebuild. This was reasonably well circulated earlier (by upstream in [1], and I followed up on debian-devel) but it was then decided to tie this 1.7-0 release to the R 4.4.0 release last week. To recap, we can look at the total dependencies of Matrix at CRAN (1300+) and the ones using the headers via LinkingTo (15) in R via > db <- tools::CRAN_package_db() > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] > length(matrixrevdep) # the vector 'matrixrevdep' lists all [1] 1349 > tools::package_dependencies("Matrix", which = "LinkingTo", reverse = TRUE)[[1L]] [1] "ahMLE" "bayesWatch" "cplm" "GeneralizedWendland" [5] "geostatsp" "hibayes" "irlba" "lme4" [9] "mcmcsae" "OpenMx" "PRIMME" "PUlasso" [13] "robustlmm" "spGARCH" "TMB" > But among these 15 affected only five are in Debian: irlba (r-cran-irlba), lme4 (r-cran-lme4), OpenMx (r-cran-openmx), TMB (r-cran-tmb), bcSeq (r-bioc-bcseq), and lme4 is my package, and I already rebuilt it. You should see a message about 'Matrix API 1, needs 2' in case of a mismatch; if you rebuild, the `library(...)` call in R will be quiet as usual. All it takes is a rebuild: for r-cran-lme4 I just adjusted (Build-) Depends for r-cran-matrix to r-cran-matrix (>= 1.7-0) (and I also adjusted r-base-dev to depend on 4.4.0 or greater, but that is both optional, and implied via Matrix). It would be terrific if you could update the package in the next few days. If you are unable I could do a binary-only NMU -- just let me know. Many thanks, Dirk [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
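For illustration, the kind of debian/control change meant here is just a versioned bump on the existing relationships (field excerpts only; the actual files of course carry more entries, elided as "..."):

```
## debian/control (excerpt, sketch)
Build-Depends: ..., r-base-dev (>= 4.4.0), r-cran-matrix (>= 1.7-0)
Depends: ..., r-cran-matrix (>= 1.7-0)
```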
Bug#1070239: r-cran-openmx: Please rebuild under updated Matrix package
Package: r-cran-openmx Version: 2.21.11+dfsg-3 Severity: important CRAN package Matrix had a new release 1.7.0 bringing in a new SuiteSparse API which requires a rebuild if (and only if) the Matrix headers are used. Your package is one of those that do, and therefore needs a rebuild. This was reasonably well circulated earlier (by upstream in [1], and I followed up on debian-devel) but it was then decided to tie this 1.7-0 release to the R 4.4.0 release last week. To recap, we can look at the total dependencies of Matrix at CRAN (1300+) and the ones using the headers via LinkingTo (15) in R via > db <- tools::CRAN_package_db() > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] > length(matrixrevdep) # the vector 'matrixrevdep' lists all [1] 1349 > tools::package_dependencies("Matrix", which = "LinkingTo", reverse = TRUE)[[1L]] [1] "ahMLE" "bayesWatch" "cplm" "GeneralizedWendland" [5] "geostatsp" "hibayes" "irlba" "lme4" [9] "mcmcsae" "OpenMx" "PRIMME" "PUlasso" [13] "robustlmm" "spGARCH" "TMB" > But among these 15 affected only five are in Debian: irlba (r-cran-irlba), lme4 (r-cran-lme4), OpenMx (r-cran-openmx), TMB (r-cran-tmb), bcSeq (r-bioc-bcseq), and lme4 is my package, and I already rebuilt it. You should see a message about 'Matrix API 1, needs 2' in case of a mismatch; if you rebuild, the `library(...)` call in R will be quiet as usual. All it takes is a rebuild: for r-cran-lme4 I just adjusted (Build-) Depends for r-cran-matrix to r-cran-matrix (>= 1.7-0) (and I also adjusted r-base-dev to depend on 4.4.0 or greater, but that is both optional, and implied via Matrix). It would be terrific if you could update the package in the next few days. If you are unable I could do a binary-only NMU -- just let me know. Many thanks, Dirk [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1070238: r-cran-irlba: Please rebuild under updated Matrix package
Package: r-cran-irlba Version: 2.3.5.1-3 Severity: important CRAN package Matrix had a new release 1.7.0 bringing in a new SuiteSparse API which requires a rebuild if (and only if) the Matrix headers are used. Your package is one of those that do, and therefore needs a rebuild. This was reasonably well circulated earlier (by upstream in [1], and I followed up on debian-devel) but it was then decided to tie this 1.7-0 release to the R 4.4.0 release last week. To recap, we can look at the total dependencies of Matrix at CRAN (1300+) and the ones using the headers via LinkingTo (15) in R via > db <- tools::CRAN_package_db() > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] > length(matrixrevdep) # the vector 'matrixrevdep' lists all [1] 1349 > tools::package_dependencies("Matrix", which = "LinkingTo", reverse = TRUE)[[1L]] [1] "ahMLE" "bayesWatch" "cplm" "GeneralizedWendland" [5] "geostatsp" "hibayes" "irlba" "lme4" [9] "mcmcsae" "OpenMx" "PRIMME" "PUlasso" [13] "robustlmm" "spGARCH" "TMB" > But among these 15 affected only five are in Debian: irlba (r-cran-irlba), lme4 (r-cran-lme4), OpenMx (r-cran-openmx), TMB (r-cran-tmb), bcSeq (r-bioc-bcseq), and lme4 is my package, and I already rebuilt it. You should see a message about 'Matrix API 1, needs 2' in case of a mismatch; if you rebuild, the `library(...)` call in R will be quiet as usual. All it takes is a rebuild: for r-cran-lme4 I just adjusted (Build-) Depends for r-cran-matrix to r-cran-matrix (>= 1.7-0) (and I also adjusted r-base-dev to depend on 4.4.0 or greater, but that is both optional, and implied via Matrix). It would be terrific if you could update the package in the next few days. If you are unable I could do a binary-only NMU -- just let me know. Many thanks, Dirk [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [Rd] Patches for CVE-2024-27322
On 30 April 2024 at 11:59, peter dalgaard wrote: | svn diff -c 86235 ~/r-devel/R Which is also available as https://github.com/r-devel/r-svn/commit/f7c46500f455eb4edfc3656c3fa20af61b16abb7 Dirk | (or 86238 for the port to the release branch) should be easily backported. | | (CC Luke in case there is more to it) | | - pd | | > On 30 Apr 2024, at 11:28 , Iñaki Ucar wrote: | > | > Dear R-core, | > | > I just received notification of CVE-2024-27322 [1] in RedHat's Bugzilla. We | > updated R to v4.4.0 in Fedora rawhide, F40, EPEL9 and EPEL8, so no problem | > there. However, F38 and F39 will stay at v4.3.3, and I was wondering if | > there's a specific patch available, or if you could point me to the commits | > that fixed the issue, so that we can cherry-pick them for F38 and F39. | > Thanks. | > | > [1] https://nvd.nist.gov/vuln/detail/CVE-2024-27322 | > | > Best, | > -- | > Iñaki Úcar | > | > [[alternative HTML version deleted]] | > | > __ | > R-devel@r-project.org mailing list | > https://stat.ethz.ch/mailman/listinfo/r-devel | | -- | Peter Dalgaard, Professor, | Center for Statistics, Copenhagen Business School | Solbjerg Plads 3, 2000 Frederiksberg, Denmark | Phone: (+45)38153501 | Office: A 4.23 | Email: pd@cbs.dk Priv: pda...@gmail.com | | __ | R-devel@r-project.org mailing list | https://stat.ethz.ch/mailman/listinfo/r-devel -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Bug#1070009: r-cran-data.table: Update to current upstream
The file 'issue_563_fread.txt' appears to be an input to data.table::fread() for a test on encodings, judging by the context. I can run 'R CMD check --as-cran data.table_1.15.4.tar.gz' just fine [1] here without any failing tests (and I have no locale or anything set). It's not my package, but if I were you the natural next step would be to pause the offending tests and file an upstream issue letting them know you had to do so. It is now a pretty active team so you may get some help from them. Dirk [1] I also have local R environment variables set to report timing issues at lower thresholds than CRAN itself (to stay aware of timings in the packages I (co-)author), so I get a bit more line noise: ## ... earlier lines omitted, this is on x86_64 with Ubuntu 23.10 ## * checking tests ... Running ‘autoprint.R’ Running R code in ‘autoprint.R’ had CPU time 4.2 times elapsed time Comparing ‘autoprint.Rout’ to ‘autoprint.Rout.save’ ... OK Running ‘froll.R’ [9s/9s] Running ‘knitr.R’ Running R code in ‘knitr.R’ had CPU time 3.7 times elapsed time Comparing ‘knitr.Rout’ to ‘knitr.Rout.save’ ... OK Running ‘main.R’ [30s/25s] Running ‘nafill.R’ Running R code in ‘nafill.R’ had CPU time 3.2 times elapsed time Running ‘other.R’ Running ‘programming.R’ Running R code in ‘programming.R’ had CPU time 2.5 times elapsed time Running ‘types.R’ Running R code in ‘types.R’ had CPU time 4.4 times elapsed time [47s/35s] NOTE * checking for unstated dependencies in vignettes ... OK * checking package vignettes ... OK * checking re-building of vignette outputs ... [76s/20s] OK * checking PDF version of manual ... [5s/4s] OK * checking HTML version of manual ... [2s/2s] OK * checking for non-standard things in the check directory ... OK * checking for detritus in the temp directory ... OK * DONE Status: 1 NOTE See ‘/tmp/r/data.table.Rcheck/00check.log’ for details. edd@rob:/tmp/r$ -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
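The 'CPU time x times elapsed time' lines come from check timing thresholds that one can tighten locally; the settings meant are of roughly this form (variable names as documented in the 'R Internals' manual, values illustrative):

```
## ~/.R/check.Renviron (sketch): flag tests/examples/vignettes whose
## CPU time exceeds twice the elapsed time during R CMD check
_R_CHECK_TEST_TIMING_CPU_TO_ELAPSED_THRESHOLD_=2
_R_CHECK_EXAMPLE_TIMING_CPU_TO_ELAPSED_THRESHOLD_=2
_R_CHECK_VIGNETTE_TIMING_CPU_TO_ELAPSED_THRESHOLD_=2
```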
Bug#1070009: r-cran-data.table: Update to current upstream
The package is pristine at CRAN https://cran.r-project.org/web/checks/check_results_data.table.html (apart from some new warnings several packages now get about internal R API headers, which have nothing to do with tests). Maybe you can sort this out with upstream -- data.table is effectively holding up r-base (and has been for months since the R 4.3.3 release), which is not exactly ideal. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [R-pkg-devel] Problem with loading package "devtools" from CRAN.
On 30 April 2024 at 01:21, Rolf Turner wrote: | On Mon, 29 Apr 2024 06:30:20 -0500 | Dirk Eddelbuettel wrote: | | | | > These days, I strongly recommend r2u [1]. As you already use R via | > CRAN through apt, r2u adds one more repository after which _all_ R | > packages are handled via the same apt operations that you already | > trust to get you R from CRAN (as well as anything else on your | > machine). This covers all 20+ thousand CRAN packages along with 400 | > key BioC packages. Handling your packages with your system package | > manager guarantees all dependencies are resolved reliably and | > quickly. It makes installing, upgrading, and managing CRAN packages | > easier, faster and more reliable. | | | | > [1] https://eddelbuettel.github.io/r2u | | | | Sounds promising, but I cannot follow what "r2u" is actually | all about. What *is* r2u? And how do I go about using it? Do I | invoke it (or invoke something) from within R? Or do I invoke | something from the OS? E.g. something like | | sudo apt-get install | | ??? You could peruse the documentation at https://eddelbuettel.github.io/r2u and / or the blog posts I have, especially below https://dirk.eddelbuettel.com/blog/code/r4/ (and you may have to read 'in reverse order'). | I have downloaded the file add_cranapt_jammy.sh and executed | |sudo sh add_cranapt_jammy.sh | | which seemed to run OK. What now? Briefly, when you set up r2u you set up a new apt repo AND a new way to access it from R (using the lovely `bspm` package). So in R, saying `install.packages("devtools")` will seamlessly fetch r-cran-devtools and the roughly 100 other packages it depends upon (if you start from an 'empty' system, as I did in a container last eve before replying to you). That works in mere seconds. You can then say `library(devtools)` as if you had compiled locally. Naturally, using binaries is both way faster and easier when it works (as it generally does). See the blog posts, see the demos, see the r2u site, try it (risklessly!!)
in a container or at gitpod or in continuous integration or in codespaces or ... The docs try to get at that. Maybe start small and aim `install.packages()` at a package you know you do not have, and see what happens? Follow-ups may be more appropriate for r-sig-debian, and/or an issue ticket at the r2u github repo, depending on the nature of the follow-up. Good luck, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
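To make the mechanics described above concrete, here is a minimal sketch of the r2u + bspm flow inside R. It assumes the add_cranapt setup script has already run, as in the quoted message; bspm::enable() is the bspm entry point, though r2u's setup normally arranges for it to be called from Rprofile.site so you do not have to:

```r
## Sketch of an R session on an r2u-enabled system (assumes the r2u apt
## repository and the bspm package are already set up)
bspm::enable()                 # route install.packages() via the system manager
install.packages("devtools")   # apt fetches r-cran-devtools plus its many deps
library(devtools)              # use it as if it had been compiled locally
```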
Re: [R-pkg-devel] Problem with loading package "devtools" from CRAN.
Rolf, This question might have been more appropriate for r-sig-debian than here. But as Simon noted, the lack of detail makes it difficult to say anything to aid. It was likely an issue local to your setup and use. These days, I strongly recommend r2u [1]. As you already use R via CRAN through apt, r2u adds one more repository after which _all_ R packages are handled via the same apt operations that you already trust to get you R from CRAN (as well as anything else on your machine). This covers all 20+ thousand CRAN packages along with 400 key BioC packages. Handling your packages with your system package manager guarantees all dependencies are resolved reliably and quickly. It makes installing, upgrading, and managing CRAN packages easier, faster and more reliable. To double-check, I just spot-checked 'devtools' on an r2u container (on top of Ubuntu 22.04) and of course devtools installs and runs fine (as a binary). So maybe give r2u a go. "Sixteen million packages served" in two years ... Cheers, Dirk [1] https://eddelbuettel.github.io/r2u -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Bug#1070009: r-cran-data.table: Update to current upstream
Package: r-cran-data.table Version: 1.14.10+dfsg-1 Severity: normal data.table had a release 1.15.0 in January -- the first new one in three years! -- and two follow-ups since, bringing it to 1.15.4 at CRAN. Please update the Debian package to the current upstream version. This should likely reduce some autopkgtest noise too, in both data.table itself and some of the packages depending on it. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [ESS] Error installing on ubuntu
On 27 April 2024 at 10:53, 신선영(수학과) via ESS-help wrote: | Dear all, | | I get the following error message: | | make -C lisp all | make[1]: Entering directory '/home/mathi/ess-24.01.1/lisp' | test -f ../etc/.IS.RELEASE || wget -qO - https://raw.githubusercontent.com/JuliaEditorSupport/julia-emacs/master/julia-mode.el > julia-mode.el | test -f ../etc/.IS.RELEASE || wget -qO - https://raw.githubusercontent.com/JuliaEditorSupport/julia-emacs/master/julia-mode-latexsubs.el > julia-mode-latexsubs.el | Computing dependencies | sed: can't read julia-mode-latexsubs.el: No such file or directory | | … | … | | In toplevel form: | julia-mode.el:40:2: Error: Cannot open load file: No such file or directory, julia-mode-latexsubs | make[1]: *** [Makefile:58: julia-mode.elc] Error 1 | make[1]: Leaving directory '/home/mathi/ess-24.01.1/lisp' | make: *** [Makefile:30: lisp] Error 2 | | I uncommented some lines related to Julia in the Makefile, but that did not fix the issue. | | Any advice is appreciated. Thanks. Where did you start from? I sometimes use the Debian/Ubuntu packages (which I used to look after; they are now done by Seb, see https://tracker.debian.org/pkg/ess and https://packages.ubuntu.com/search?suite=all=all=any=ess=sourcenames) and sometimes I use melpa. It generally 'just works'. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ ESS-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/ess-help
Bug#1069842: rjava: FTBFS: /usr/bin/ld: cannot find -ldeflate: No such file or directory
reassign 1069842 r-base thanks On 25 April 2024 at 18:27, Santiago Vila wrote: | Package: src:rjava | Version: 1.0-11-1 | Severity: serious | Tags: ftbfs | | Dear maintainer: | | During a rebuild of all packages in unstable, your package failed to build: Thanks for this. It is caused by the just released R 4.4.0 which now uses libdeflate, gets it somehow already via its Build-Depends but then does not implicitly pass it on via its virtual (child) package r-base-dev and its depends. (Both have a list of lib*-dev compression packages.) I will make a r-base 4.4.0-2 either today or tomorrow to correct this and have r-base-dev explicitly list libdeflate-dev. Dirk | | | [...] | debian/rules build | dh build --buildsystem R | dh_update_autotools_config -O--buildsystem=R | cp: warning: behavior of -n is non-portable and may change in future; use --update=none instead | cp: warning: behavior of -n is non-portable and may change in future; use --update=none instead | dh_autoreconf -O--buildsystem=R | dh_auto_configure -O--buildsystem=R | dh_auto_build -O--buildsystem=R | dh_auto_test -O--buildsystem=R | create-stamp debian/debhelper-build-stamp | fakeroot debian/rules binary | dh binary --buildsystem R | dh_testroot -O--buildsystem=R | dh_prep -O--buildsystem=R | dh_auto_install --destdir=debian/r-cran-rjava/ -O--buildsystem=R | I: R Package: rJava Version: 1.0-11 | I: Building using R version 4.4.0-1 | I: R API version: r-api-4.0 | I: Using built-time from d/changelog: Fri, 26 Jan 2024 11:10:09 -0600 | mkdir -p /<>/debian/r-cran-rjava/usr/lib/R/site-library | R CMD INSTALL -l /<>/debian/r-cran-rjava/usr/lib/R/site-library --clean . "--built-timestamp='Fri, 26 Jan 2024 11:10:09 -0600'" | * installing *source* package ‘rJava’ ... | files ‘configure’, ‘jri/tools/config.guess’, ‘jri/tools/config.sub’, ‘src/config.h.in’ have the wrong MD5 checksums | ** using staged installation | checking for gcc... gcc | checking whether the C compiler works... 
yes | checking for C compiler default output file name... a.out | checking for suffix of executables... | checking whether we are cross compiling... no | checking for suffix of object files... o | checking whether the compiler supports GNU C... yes | checking whether gcc accepts -g... yes | checking for gcc option to enable C11 features... none needed | checking for sys/wait.h that is POSIX.1 compatible... yes | checking for stdio.h... yes | checking for stdlib.h... yes | checking for string.h... yes | checking for inttypes.h... yes | checking for stdint.h... yes | checking for strings.h... yes | checking for sys/stat.h... yes | checking for sys/types.h... yes | checking for unistd.h... yes | checking for string.h... (cached) yes | checking for sys/time.h... yes | checking for unistd.h... (cached) yes | checking for an ANSI C-conforming const... yes | configure: checking whether gcc supports static inline... | yes | checking whether setjmp.h is POSIX.1 compatible... yes | checking for gcc options needed to detect all undeclared functions... none needed | checking whether sigsetjmp is declared... yes | checking whether siglongjmp is declared... yes | checking Java support in R... present: | interpreter : '/usr/lib/jvm/default-java/bin/java' | archiver: '/usr/lib/jvm/default-java/bin/jar' | compiler: '/usr/lib/jvm/default-java/bin/javac' | header prep.: '' | cpp flags : '-I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux' | java libs : '-L/usr/lib/jvm/default-java/lib/server -ljvm' | checking whether Java run-time works... yes | checking whether -Xrs is supported... yes | checking whether -Xrs will be used... yes | checking whether JVM will be loaded dynamically... no | checking whether JNI programs can be compiled... yes | checking whether JNI programs run... yes | checking JNI data types... ok | checking whether JRI should be compiled (autodetect)... yes | checking whether debugging output should be enabled... 
no | checking whether memory profiling is desired... no | checking whether threads support is requested... no | checking whether callbacks support is requested... no | checking whether JNI cache support is requested... no | checking whether headless init is enabled... no | checking whether JRI is requested... yes | configure: creating ./config.status | config.status: creating src/Makevars | config.status: creating R/zzz.R | config.status: creating src/config.h | === configuring in jri (/<>/jri) | configure: running /bin/bash ./configure --disable-option-checking '--prefix=/usr/local' 'CC=gcc' 'CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/<>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection' 'LDFLAGS=-Wl,-z,relro' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' --cache-file=/dev/null --srcdir=. | checking
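For reference, the fix sketched in the reply above amounts to a one-line addition to r-base's packaging; the fragment below is illustrative only (the neighbouring Depends entries are assumptions, not the actual control file), and `sudo apt install libdeflate-dev` works as an immediate local workaround until the fixed r-base upload:

```
# debian/control of src:r-base (illustrative fragment, not the real file)
Package: r-base-dev
Depends: ..., libbz2-dev, liblzma-dev, zlib1g-dev, libdeflate-dev, ...
```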
Re: [Rd] Question regarding .make_numeric_version with non-character input
Hi Kurt, On 25 April 2024 at 08:07, Kurt Hornik wrote: | > Hervé Pagès writes: | | > Hi Kurt, | > Is it intended that numeric_version() returns an error by default on | > non-character input in R 4.4.0? | | Dear Herve, yes, that's the intention. | | > It seems that I can turn this into a warning by setting | > _R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_=false but I don't | > seem to be able to find any of this mentioned in the NEWS file. | | That's what I added for smoothing the transition: it will be removed | from the trunk shortly. It would actually be nice to have a more robust variant for non-CRAN versions. For example, I just had to do a local hack to be able to use the version string the QuantLib release candidate 1.34-rc reported (when I then used the R facilities to condition code and tests on whether I was dealing with code before or after an API transition). So as a wishlist item: could you envision an extension to the package_version() casting that, say, removes all [a-zA-Z]+ first (if opted into)? Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
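A userland version of that wishlist is straightforward; the helper name and regular expression below are hypothetical, merely sketching the opt-in behaviour asked for:

```r
## Hypothetical helper: drop a trailing alphabetic marker such as '-rc'
## before coercing, so e.g. QuantLib's "1.34-rc" becomes "1.34"
as_version_lax <- function(x) {
    package_version(sub("[-.]?[A-Za-z].*$", "", x))
}
as_version_lax("1.34-rc")            # numeric_version '1.34'
as_version_lax("1.34-rc") < "1.35"   # comparisons then work as usual
```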
Re: R 4.4.0 coming April 24
On 21 April 2024 at 15:25, Sebastiaan Couwenberg wrote: | On 4/21/24 3:04 PM, Dirk Eddelbuettel wrote: | > R upstream no longer releases or tests for 32 bits (and has not since the R | > 4.3.0 release a year ago) so 'expect trouble there'. I think you all in the | > release team may need to override this to unblock. | | Wouldn't it be better then to add architecture-is-64-bit to the r-base | build dependencies to prevent it from building on 32bit architectures | and then file partial RM bugreports for r-base and its rdeps to get them | removed from the 32bit architectures? Yes!! I actually grep'ed among all my (100+) packages but did not see an example, so I may be missing the best way to do this: is this (new to me) 'architecture-is-64-bit' the way to do it? A quick 'apt-cache search' leads me to 'architecture-properties'. I would be in favor, so thanks for the suggestion! Are there concerns or side effects, e.g. for our desire to build for as many platforms as possible? Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
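If that route is taken, the change would presumably be a single added build dependency; the fragment below is a sketch only ('architecture-is-64-bit' is the virtual package provided by src:architecture-properties, the other fields are placeholders):

```
# debian/control of src:r-base (illustrative fragment, not the real file)
Source: r-base
Build-Depends: architecture-is-64-bit, ...
```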
Re: R 4.4.0 coming April 24
Hi Graham, Hi Release Team, On 21 April 2024 at 13:37, Graham Inggs wrote: | On Thu, 18 Apr 2024 at 13:38, Dirk Eddelbuettel wrote: | > Right now it only shows 'all reports (re-)running'. | | That was because of the new upload, but I see the results there now. | | The packages with failing autopkgtests are: | | r-bioc-iranges/2.36.0-1 | r-bioc-mutationalpatterns/3.12.0+dfsg-1 | r-bioc-s4vectors/0.40.2+dfsg-1 | r-cran-data.table/1.14.10+dfsg-1 | r-cran-ff/4.0.12+ds-1 I checked these five just now: four of them are current, so I may have to leave those with their maintainers. But r-cran-data.table is quite badly behind (the once again very active upstream) and the current release is 1.15.4. I use the package a lot myself and keep an eye on their upstream work; there were some minor CRAN-required updates, so an update could cure that for us too. And given how widely data.table is used (i.e. by r-bioc-s4vectors which itself is used by r-bioc-iranges and r-bioc-mutationalpatterns) we quite possibly have one package causing four hiccups. | > But package r-base | > has had the usual issues in unstable for a few weeks now because 'some | > people' insist on adding autopkg tests including for architectures / build | > sizes no longer supported upstream -- R stopped 32 bit support over a year | > and a release ago | | For the pseudo-excuses in experimental only amd64 and arm64 are | tested, no 32-bit architectures. Ah. I expect more skirmishing then. R upstream no longer releases or tests for 32 bits (and has not since the R 4.3.0 release a year ago) so 'expect trouble there'. I think you all in the release team may need to override this to unblock. R 4.4.0 itself is fine. I decided to also eat my own dogfood and sent the same package I had sent to experimental to launchpad for Ubuntu 23.10 (my daily driver here), and I have been running it for over a day now. "Everything works"; I have hourly cronjobs for R too and there is no issue. So I plan to proceed with R 4.4.0. 
| > -- as well continually letting dependencies slip so that the | > autopkg tests involve old and outdated package releases combined with the | > fact that BioConductor has _very_ specific release cycles yet they throw | > r-bioc-* package in too) so there is little I can do on the end of package | > r-base. Briefly, I am being put into a bad corner by other maintainers here, | > and I no longer have the energy to discuss that with them. We have been at | > this for years. | | I think "discuss" was probably not the best word for Paul to suggest here. | | You only need to inform the maintainers of the affected packages, and | that can be done by filing RC bugs against the affected versions. If | the packages don't get updated, auto-removal will take care of them. | The sooner this is done, the better. Well, when r-base 4.3.3-3 was being held back by what I consider autopkgtest overuse, 100+ packages failed on two of the 32-bit arches that R no longer releases for. I was not exactly in the mood to deal with 100+ RC bugs manually. _My package_ is fine and I take care of it. But I presume you guys have scripts for this while I do not. Some help and coordination would be useful and appreciated. One more thing: I forgot / failed to follow up on what I had emailed about earlier: the Matrix package (aka "r-cran-matrix") update affecting the handful of packages _compiling against the Matrix header files_. That is just a few among the hundreds using Matrix simply as an R package (and which remain unaffected by the exported header API update, which is shielded from normal use). CRAN and the Matrix team decided to wait for R 4.4.0, so Matrix will follow shortly after R 4.4.0 and I think I can handle that manually: either n-day NMUs or simply initial bug reports requesting a rebuild. Cheers, and thanks as always for all you on the release team do. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: R 4.4.0 coming April 24
Hi Paul, On 18 April 2024 at 11:50, Paul Gevers wrote: | Hi Dirk, | | On 18-04-2024 4:41 a.m., Dirk Eddelbuettel wrote: | > I uploaded a first | > beta release r-base_4.3.3.20240409-1 to 'experimental' a week ago, I just | > followed up with a rc release r-base_4.3.3.20240416-1. | | Thanks for preparing in experimental, as that triggers some QA. | | > Given these non-changes, I do not think we need a formal transition. If the | > release team thinks otherwise, please let me know, ideally before April 24. | | https://qa.debian.org/excuses.php?experimental=1=r-base shows | there are 5 reverse (test) dependencies whose autopkgtests fail with the | latest r-base in experimental. You'll want to discuss with the | maintainers of those packages what that means for either r-base or their | packages (ideally by filing bug reports to track the discussion). Right now it only shows 'all reports (re-)running'. But package r-base has had the usual issues in unstable for a few weeks now because 'some people' insist on adding autopkgtests, including for architectures / build sizes no longer supported upstream -- R stopped 32-bit support over a year and a release ago -- as well as continually letting dependencies slip so that the autopkgtests involve old and outdated package releases (combined with the fact that BioConductor has _very_ specific release cycles, yet they throw r-bioc-* packages in too), so there is little I can do on the end of package r-base. Briefly, I am being put into a bad corner by other maintainers here, and I no longer have the energy to discuss that with them. We have been at this for years. The r-base package itself is fine in unstable, as are, e.g., the packages I maintain. 
It is also fine in Ubuntu (and Debian, both also via backports we coordinate at the R mirror network CRAN) and I run an add-on project [1] where *every* CRAN package (and 400+ BioConductor packages) is turned into .deb packages accessible from R via install.packages() (for the two most recent LTS releases). I know this stuff, I have been using and contributing to R for 25 years, I am in close contact with upstream, and I happen to sit on the R Foundation board. Cheers, Dirk [1] https://eddelbuettel.github.io/r2u | Paul | -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
R 4.4.0 coming April 24
R 4.4.0 will be released on April 24 (following the long established pattern of annual 'a.b.0' releases). As is common, nightlies (as alpha, betas, rc) have been made available for four weeks leading up to it. I uploaded a first beta release r-base_4.3.3.20240409-1 to 'experimental' a week ago, I just followed up with a rc release r-base_4.3.3.20240416-1. (The date is the commit date, the tar.gz sources are updated nightly.) There is no documented (or anticipated) API change (see the doc/NEWS file, I also followed up with upstream to double-check) so the virtual tag can stay at r-api-4.0, the tag we have used since R 4.0.0. The graphics API (affecting graphics devices) also did not change and remains at 16 so the (auto-generated) tag remains at r-graphics-engine-16. Given these non-changes, I do not think we need a formal transition. If the release team thinks otherwise, please let me know, ideally before April 24. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [Rd] read.csv
As an aside, the odd format does not seem to bother data.table::fread(), which also happens to be my personally preferred workhorse for these tasks:

> fname <- "/tmp/r/filename.csv"
> read.csv(fname)
   Gene             SNP prot log10p
1 YWHAE 13:62129097_C_T 1433   7.35
2 YWHAE 4:72617557_T_TA 1433   7.73
> data.table::fread(fname)
    Gene             SNP  prot log10p
1: YWHAE 13:62129097_C_T 1433E   7.35
2: YWHAE 4:72617557_T_TA 1433E   7.73
> readr::read_csv(fname)
Rows: 2 Columns: 4
── Column specification ──
Delimiter: ","
chr (2): Gene, SNP
dbl (2): prot, log10p
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
# A tibble: 2 × 4
  Gene  SNP             prot  log10p
1 YWHAE 13:62129097_C_T 1433    7.35
2 YWHAE 4:72617557_T_TA 1433    7.73
>

That's on Linux, everything current but dev version of data.table. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] read.csv
On 16 April 2024 at 10:46, jing hua zhao wrote: | Dear R-developers, | | I came to a somewhat unexpected behaviour of read.csv() which is trivial but worthwhile to note -- my data involves a protein named "1433E" but to save space I drop the quotes so it becomes, | | Gene,SNP,prot,log10p | YWHAE,13:62129097_C_T,1433E,7.35 | YWHAE,4:72617557_T_TA,1433E,7.73 | | Both read.csv() and readr::read_csv() treat the prot(ein) name as numeric 1433 (possibly confused by scientific notation), which only alerted me when I tried to combine data, | | all_data <- data.frame() | for (protein in proteins[1:7]) | { |cat(protein,":\n") |f <- paste0(protein,".csv") |if(file.exists(f)) |{ | p <- read.csv(f) | print(p) | if(nrow(p)>0) all_data <- bind_rows(all_data,p) |} | } | | proteins[1:7] | [1] "1433B" "1433E" "1433F" "1433G" "1433S" "1433T" "1433Z" | | dplyr::bind_rows() failed to work due to incompatible types, nevertheless rbind() went ahead without warnings. You may want to consider aiding read.csv() (and alternative reading functions) by supplying column-type information instead of relying on educated heuristic guesses, which appear to fail here due to the nature of your data. Other storage formats can store type info. That is generally safer and may be an option too. I think this was more of an email for r-help than r-devel. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
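To illustrate the suggestion of supplying column-type information, using the column name from the quoted message (`f` is the per-protein CSV path from the quoted loop):

```r
## Declare 'prot' as character up front so "1433E" is not parsed as 1.433e3
p <- read.csv(f, colClasses = c(prot = "character"))

## The same idea with the other readers mentioned in this thread:
p <- data.table::fread(f, colClasses = list(character = "prot"))
p <- readr::read_csv(f, col_types = readr::cols(prot = readr::col_character()))
```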
Bug#970021: Seeking a small group to package Apache Arrow (was: Bug#970021: RFP: apache-arrow -- cross-language development platform for in-memory analytics)
On 9 April 2024 at 18:45, Jose Manuel Abuin Mosquera wrote: | If possible, I would like to contribute. At work we use the Go and | Python implementations, also, in the short term, we will start using the | Rust one. Similar for us, and we have seen plenty of build headaches across pypi or conda ... (Hence my earlier hint about nanoarrow. No linking, uses the C API of two void pointers.) | Just to point out, the Rust version has its own native implementation, | here: https://github.com/apache/arrow-rs . And IIRC there is an independent Arrow implementation (in Rust) used by polars making it two possible ITPs: vanilla Arrow from Apache and Arrow from polars. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1068117: dieharder: dab_monobit2 crashes with ntuple > 17
On 8 April 2024 at 18:21, Lucas Thode wrote: | Apologies for the confusion, I didn't realize the patch in question was a new | addition. Just confirmed that it errors out instead of segfaulting or hanging. Thanks for confirming! Dirk | On Sat, Apr 6, 2024 at 5:32 PM Dirk Eddelbuettel wrote: | | | Hi Lucas, | | As Milan suggested, please make sure you are current. If in doubt, park your | current checkout and start from | | git clone https://github.com/eddelbuettel/dieharder.git | | where you should see today's commit from merging PR 24. | | edd@rob:~/git/dieharder(master)$ git ls | head | * 3442896 - (HEAD -> master, origin/master, origin/HEAD) Merge pull | request #24 from mbroz/dab-monobit2-ntup (10 hours ago) | |\ | | * d928cbf - Avoid overflow in DAB Monobit2 test. (10 hours ago) | | |/ | * 2d4763a - Merge pull request #22 from mbroz/master (6 weeks ago) | | |\ | | * 67989b4 - Do not report file input rewind if nothing was read | repeatedly. (6 weeks ago) | |/ | * c987a15 - Fix segfault for wrongly specified test on commandline. (# | 21) (9 weeks ago) | * a186d90 - Merge pull request #20 from mbroz/warning-fixes (2 months | ago) | edd@rob:~/git/dieharder(master)$ | | Do not rely on the Debian package, it has not been updated yet. | | Cheers, Dirk | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1068117: dieharder: dab_monobit2 crashes with ntuple > 17
Hi Lucas, As Milan suggested, please make sure you are current. If in doubt, park your current checkout and start from git clone https://github.com/eddelbuettel/dieharder.git where you should see today's commit from merging PR 24. edd@rob:~/git/dieharder(master)$ git ls | head * 3442896 - (HEAD -> master, origin/master, origin/HEAD) Merge pull request #24 from mbroz/dab-monobit2-ntup (10 hours ago) |\ | * d928cbf - Avoid overflow in DAB Monobit2 test. (10 hours ago) |/ * 2d4763a - Merge pull request #22 from mbroz/master (6 weeks ago) |\ | * 67989b4 - Do not report file input rewind if nothing was read repeatedly. (6 weeks ago) |/ * c987a15 - Fix segfault for wrongly specified test on commandline. (#21) (9 weeks ago) * a186d90 - Merge pull request #20 from mbroz/warning-fixes (2 months ago) edd@rob:~/git/dieharder(master)$ Do not rely on the Debian package, it has not been updated yet. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1068117: dieharder: dab_monobit2 crashes with ntuple > 17
Hi Lucas, On 30 March 2024 at 22:47, Lucas Thode wrote: | Package: dieharder | Version: 3.31.1.4-1.1 | Severity: normal | X-Debbugs-Cc: thode...@gmail.com | | Dear Maintainer, | | `dieharder -d 209 -n $nvalue` crashes for $nvalue>17: | | $ dieharder -d 209 | #=# | #dieharder version 3.31.1 Copyright 2003 Robert G. Brown # | #=# |rng_name|rands/second| Seed | | mt19937| 1.55e+08 |2819069712| | #=# | test_name |ntup| tsamples |psamples| p-value |Assessment | #=# | dab_monobit2| 12| 6500| 1|0.40510331| PASSED | $ dieharder -d 209 -n 12 | #=# | #dieharder version 3.31.1 Copyright 2003 Robert G. Brown # | #=# |rng_name|rands/second| Seed | | mt19937| 2.54e+08 | 152376536| | #=# | test_name |ntup| tsamples |psamples| p-value |Assessment | #=# | dab_monobit2| 12| 6500| 1|0.10580971| PASSED | $ dieharder -d 209 -n 17 | #=# | #dieharder version 3.31.1 Copyright 2003 Robert G. Brown # | #=# |rng_name|rands/second| Seed | | mt19937| 2.29e+08 |2998370165| | #=# | test_name |ntup| tsamples |psamples| p-value |Assessment | #=# | dab_monobit2| 17| 6500| 1|1.| FAILED | $ dieharder -d 209 -n 18 | *** stack smashing detected ***: terminated | Aborted | $ dieharder -d 209 -n 27 | *** stack smashing detected ***: terminated | Aborted | $ dieharder -d 209 -n 28 | Segmentation fault | | P.S. There are more issues with this test not liking non-standard n values, as | can be seen from it failing miserably on mt19937 with -n 17, but the crash is | the most glaring problem. Good stuff. dieharder is a little 'dormant' upstream and via my maintenance of the Debian package I have somewhat inherited upstream. Can you take a look please if this was taken care of already at the (somewhat active) shadow repo of mine at https://github.com/eddelbuettel/dieharder I will also CC Milan who has been very attentive with a few other fixes, and may have seen this one too. We are trying to get hold of Robert but no luck yet. Cheers, Dirk PS Apologies also for replying late. 
I usually get to bug reports within a day but it's a teaching term plus being busy at my 'real job' puts some stress on my response times. :-/ I think I reply quicker to GH issues as I am on GH all day anyway... | -- System Information: | Debian Release: trixie/sid | APT prefers testing | APT policy: (500, 'testing') | Architecture: amd64 (x86_64) | Foreign Architectures: i386 | | Kernel: Linux 6.3.0-1-amd64 (SMP w/12 CPU threads; PREEMPT) | Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set | Shell: /bin/sh linked to /usr/bin/dash | Init: systemd (via /run/systemd/system) | LSM: AppArmor: enabled | | Versions of packages dieharder depends on: | ii libc6 2.37-15 | ii libdieharder3t64 3.31.1.4-1.1 | ii libgsl27 2.7.1+dfsg-6+b1 | | dieharder recommends no packages. | | dieharder suggests no packages. | | -- no debconf information -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [Rd] RSS Feed of NEWS needs a hand
On 2 April 2024 at 09:41, Duncan Murdoch wrote: | On 02/04/2024 8:50 a.m., Dirk Eddelbuettel wrote: | > On 2 April 2024 at 07:37, Dirk Eddelbuettel wrote: | > blosxom, simple as it is, takes (IIRC) filesystem ctime as the posting | > timestamp so would be best if you had a backup with the old timestamps. | > | | Looks like those dates are gone -- the switch from svn to git involved | some copying, and I didn't preserve timestamps. You can recreate them. Nobody cares too much about the hour or minute within a day as there (always ? generally ?) was only one post per day. But preserving the overall sort order would be nice as would not spamming the recent posts with old ones. | I'll see about regenerating the more recent ones. I don't think there's | much historical interest in the pre-4.0 versions, so maybe I'll just | nuke those. I suspect you will have to do it programmatically too. You could even take the old timestamps of the svn and/or git commits and then touch the ctime (or maybe it was mtime, I forget but 'touch --time= file' works). "Been there done that" for part of my 20+ year old blog infrastructure too. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
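The programmatic route suggested above can be sketched as follows (assuming GNU coreutils; the file name and date are made up for illustration, and in practice the timestamp would come per file from something like `git log -1 --format=%cI -- <file>`):

```shell
# Back-date a file so an mtime-based engine such as blosxom sorts it as
# an old post; the date shown here is arbitrary for illustration.
printf 'NEWS for R 4.0.0\n' > post.txt
touch -d '2020-04-24 09:00:00' post.txt   # set the mtime to the old date
date -r post.txt +%Y-%m-%d                # show the date the file now carries
```

Looping that over the regenerated news files, with one `git log` lookup each, would restore the overall sort order.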
Re: [Rd] RSS Feed of NEWS needs a hand
On 2 April 2024 at 07:37, Dirk Eddelbuettel wrote: | | On 2 April 2024 at 08:21, Duncan Murdoch wrote: | | I have just added R-4-4-branch to the feeds. I think I've also fixed | | the \I issue, so today's news includes a long list of old changes. | | These feeds can be fussy: looks like you triggered many updates. Feedly | currently greets me with 569 new posts (!!) in that channel. Now 745 -- and the bigger issue seems to be that the 'posted at' timestamp is wrong and 'current' so all the old posts are now seen as 'fresh'. Hence the flood ... of unsorted posts. blosxom, simple as it is, takes (IIRC) filesystem ctime as the posting timestamp so would be best if you had a backup with the old timestamps. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] RSS Feed of NEWS needs a hand
On 2 April 2024 at 08:21, Duncan Murdoch wrote: | I have just added R-4-4-branch to the feeds. I think I've also fixed | the \I issue, so today's news includes a long list of old changes. These feeds can be fussy: looks like you triggered many updates. Feedly currently greets me with 569 new posts (!!) in that channel. Easy enough to mark as all read -- first off thanks for updating the service! Dirk, a loyal reader since day one -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [R-pkg-devel] Order of repo access from options("repos")
On 1 April 2024 at 17:44, Uwe Ligges wrote: | Untested: | | install.packages() calls available.packages() to find out which packages | are available - and passes a "filters" argument if supplied. | That can be a user defined filter. It should be possible to write a user | defined filter which prefers the packages in your local repo. Intriguing. Presumably that would work for update.packages() too? (We actually have a use case at work, and as one way out I created another side-repo to place a package with an incremented version number so it would 'win' on highest version; this is due to some non-trivial issues with the underlying dependencies.) Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
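An untested sketch of the user-defined filter Uwe describes, assuming the local repository is the entry named "local" in getOption("repos") (the repo name, package name, and helper name are all made up):

```r
## Hypothetical filter: when a package is offered by several repositories,
## keep the row coming from the repo named "local", regardless of version.
## Filter functions receive and return an available.packages()-style matrix.
prefer_local <- function(db) {
    local_url <- getOption("repos")[["local"]]
    is_local  <- startsWith(db[, "Repository"], local_url)
    db <- db[order(!is_local), , drop = FALSE]        # local rows sort first
    db[!duplicated(db[, "Package"]), , drop = FALSE]  # keep first row per package
}

## Passing a list of functions *replaces* the default filters -- which is
## the point here, as the default "duplicates" filter keeps the highest version.
db <- available.packages(filters = list(prefer_local))
install.packages("somepkg", available = db)
```

For update.packages() the same matrix could be passed via its `available` argument.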
Re: [R-pkg-devel] Order of repo access from options("repos")
On 31 March 2024 at 11:43, Martin Morgan wrote: | So all repositories are consulted and then the result filtered to contain just | the most recent version of each. Does it matter then what order the | repositories are visited? Right. I fall for that too often, as I did here. The order matters for .libPaths() where the first match is used; for package installs the highest version number (from any entry in getOption("repos")) wins. Thanks for catching my thinko. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Bug#970021: Seeking a small group to package Apache Arrow (was: Bug#970021: RFP: apache-arrow -- cross-language development platform for in-memory analytics)
Julian, Arrow is a complicated and large package. We use it at work (where there is a fair amount of Python, also to Conda etc) and do have issues with more complex builds especially because it is 'data infrastructure' and can come in from different parts. I would recommend against packaging an old one -- we also have seen issues with different (py)arrow versions biting. Have you seen https://github.com/apache/arrow-nanoarrow ? It works via the C API to Arrow which interchanges data via two void* to the two structs for arrow array and schema -- and avoids linkage issues. (In user space the pyarrow or R arrow packages can still be used also interfacing via these.) I have been using it for R package bindings for some time and we plan to expand that (again, at work) -- as do others. It is already used by duckdb, by the Arrow 'ADBC' interfaces (which are generic in the ODBC/JDBC sense but for Arrow), and also by a python interface to snowflake. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
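For reference, the two structs behind those two void pointers, abridged from the Arrow C data interface specification that nanoarrow builds on (the comments are mine, not part of the spec):

```c
#include <stdint.h>
#include <assert.h>

/* The Arrow C data interface: producers fill in these two structs and
 * consumers call release() when done; only the two pointers cross the
 * library boundary, so no linking against Arrow itself is needed. */
struct ArrowSchema {
  const char *format;            /* type encoding, e.g. "i" for int32 */
  const char *name;
  const char *metadata;
  int64_t flags;
  int64_t n_children;
  struct ArrowSchema **children;
  struct ArrowSchema *dictionary;
  void (*release)(struct ArrowSchema *);
  void *private_data;
};

struct ArrowArray {
  int64_t length;
  int64_t null_count;
  int64_t offset;
  int64_t n_buffers;
  int64_t n_children;
  const void **buffers;          /* validity bitmap, data buffer, ... */
  struct ArrowArray **children;
  struct ArrowArray *dictionary;
  void (*release)(struct ArrowArray *);
  void *private_data;
};
```

Any two libraries agreeing on this layout can pass columnar data zero-copy, which is exactly the trick duckdb and the ADBC drivers use.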
Re: [R-pkg-devel] Order of repo access from options("repos")
Greg, There are AFAICT two issues here: how R unrolls the named vector that is the 'repos' element in the list 'options', and how your computer resolves DNS for localhost vs 172.17.0.1. I would try something like options(repos = c(CRAN = "http://localhost:3001/proxy", C = "http://localhost:3002", B = "http://localhost:3003/proxy", A = "http://localhost:3004")) or the equivalent with 172.17.0.1. When I do that here I get errors from first to last as we expect: > options(repos = c(CRAN = "http://localhost:3001/proxy", C = "http://localhost:3002", B = "http://localhost:3003/proxy", A = "http://localhost:3004")) > available.packages() Warning: unable to access index for repository http://localhost:3001/proxy/src/contrib: cannot open URL 'http://localhost:3001/proxy/src/contrib/PACKAGES' Warning: unable to access index for repository http://localhost:3002/src/contrib: cannot open URL 'http://localhost:3002/src/contrib/PACKAGES' Warning: unable to access index for repository http://localhost:3003/proxy/src/contrib: cannot open URL 'http://localhost:3003/proxy/src/contrib/PACKAGES' Warning: unable to access index for repository http://localhost:3004/src/contrib: cannot open URL 'http://localhost:3004/src/contrib/PACKAGES' Package Version Priority Depends Imports LinkingTo Suggests Enhances License License_is_FOSS License_restricts_use OS_type Archs MD5sum NeedsCompilation File Repository > Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [Rd] Question regarding .make_numeric_version with non-character input
On 29 March 2024 at 17:56, Andrea Gilardi via R-devel wrote: | Dear all, | | I have a question regarding the R-devel version of .make_numeric_version() function. As far as I can understand, the current code (https://github.com/wch/r-source/blob/66b91578dfc85140968f07dd4e72d8cb8a54f4c6/src/library/base/R/version.R#L50-L56) runs the following steps in case of non-character input: | | 1. It creates a message named msg using gettextf. | 2. Such object is then passed to stop(msg) or warning(msg) according to the following condition | | tolower(Sys.getenv("_R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_") != "false") | | However, I don't understand the previous code since the output of Sys.getenv("_R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_") != "false" is just a boolean value and tolower() will just return "true" or "false". Maybe the intended code is tolower(Sys.getenv("_R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_")) != "false" ? Or am I missing something? Yes, agreed -- good catch. In full, the code is (removing leading whitespace, and putting it back onto single lines) msg <- gettextf("invalid non-character version specification 'x' (type: %s)", typeof(x)) if(tolower(Sys.getenv("_R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_") != "false")) stop(msg, domain = NA) else warning(msg, domain = NA, immediate. = TRUE) where msg is constant (but reflecting language settings via standard i18n) and as you note, the parentheses appear wrong. What was intended is likely msg <- gettextf("invalid non-character version specification 'x' (type: %s)", typeof(x)) if(tolower(Sys.getenv("_R_CHECK_STOP_ON_INVALID_NUMERIC_VERSION_INPUTS_")) != "false") stop(msg, domain = NA) else warning(msg, domain = NA, immediate. = TRUE) If you have used bugzilla before and have a handle, maybe file a bug report with this as patch at https://bugs.r-project.org/ Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
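A quick untested illustration of why the parenthesis placement matters, using the one input where the two variants actually diverge (the environment variable set to "FALSE"):

```r
x <- "FALSE"          # e.g. the environment variable set to "FALSE"

## As committed: tolower() wraps the *logical* comparison, so the test is
## on the string tolower() makes of TRUE/FALSE, and if() coerces it back.
tolower(x != "false") # "FALSE" != "false" is TRUE, so this is "true" -> stop()

## As intended: lower-case the value first, then compare.
tolower(x) != "false" # "false" != "false" is FALSE -> warning() branch
```

For most other values (unset, "true", garbage) both spellings happen to end up in the stop() branch, which is presumably why the slip went unnoticed.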
Re: [R-sig-Debian] Problem Installing R 4.3.3 on Vanilla based Jammy Ubuntu
Marco, It usually helps to be aware of one's hardware platform ;-) There is an option for the Docker command to tell it to switch to x86_64, my colleagues who are on M1 and the like use that to access the generally richer eco-system of binaries for the Intel world. If on the other hand you prefer to be fully self-sufficient and compile 'everything' you now at least know that the RRutter PPA gives you R. Michael: Should we look into mirroring both architectures? Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ R-SIG-Debian mailing list R-SIG-Debian@r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-debian
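For reference, the Docker option alluded to above is --platform; a hypothetical invocation (the image name is illustrative, any amd64 image would do):

```shell
# On an arm64 (e.g. Apple M1) host, force an x86_64 image under emulation
# to reach the richer binary ecosystem of the Intel world.
docker run --platform linux/amd64 --rm -ti ubuntu:jammy bash
```

This runs noticeably slower than native arm64 containers, but binary package availability is often worth the trade.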
Re: [R-pkg-devel] using portable simd instructions
On 27 March 2024 at 08:48, jesse koops wrote: | Thank you, I was not aware of the easy way to search CRAN. I looked at | rcppsimdjson of course, but couldn't figure it out since it is done in | the simdjson library if I interpret it correctly, not within the R | ecosystem and I didn't know how that would change things. Writing R | extensions assumes a lot of prior knowledge so I will have to work my | way up to there first. I think I have (at least) one other package doing something like this _in the library layer too_ as suggested by Tomas, namely crc32c as used by digest. You could study how crc32c [0] does this for x86_64 and arm64 to get hardware optimization. (This may be more specific cpu hardware optimization but at least the library and cmake files are small.) I decided as a teenager that assembler wasn't for me and haven't looked back, but I happily take advantage of it when bundled well. So strong second for the recommendation by Tomas to rely on this being done in an external and tested library. (Another interesting one there is highway [1]. Just packaging that would likely be an excellent contribution.) Dirk [0] repo: https://github.com/google/crc32c [1] repo: https://github.com/google/highway docs: https://google.github.io/highway/en/master/ | | On Tue, 26 Mar 2024 at 15:41, Dirk Eddelbuettel wrote: | > | > | > On 26 March 2024 at 10:53, jesse koops wrote: | > | How can I make this portable and CRAN-acceptable? | > | > By writing (or borrowing ?) some hardware detection via either configure / | > autoconf or cmake. This is no different than other tasks decided at install-time. | > | > Start with 'Writing R Extensions', as always, and work your way up from | > there. 
And if memory serves there are already a few other packages with SIMD | > at CRAN so you can also try to take advantage of the search for a 'token' | > (here: 'SIMD') at the (unofficial) CRAN mirror at GitHub: | > | >https://github.com/search?q=org%3Acran%20SIMD&type=code | > | > Hth, Dirk | > | > -- | > dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [Rd] paths capability FALSE on devel?
On 27 March 2024 at 11:03, Prof Brian Ripley via R-devel wrote: | On 27/03/2024 10:28, Alexandre Courtiol wrote: | > Hi all, | > | > I don't know if it is a local issue on my hands or not, but after | > installing R-devel the output of grDevices::dev.capabilities()$paths is | > FALSE, while it is TRUE for R 4.3.3. | > Relatedly, I have issues with plotting paths on devel. | > | > At this stage, I simply would like to know if others running R devel and R | > 4.3.3 can replicate this behaviour and if there are obvious reasons why the | > observed change would be expected. | | The help says | | Query the capabilities of the current graphics device. | | You haven't told us what that was. See the posting guide for the "at a | minimum" information you also did not provide. Yes, with that I see > x11() > grDevices::dev.capabilities()$paths [1] TRUE > > getRversion() [1] ‘4.5.0’ > > R.version _ platform x86_64-pc-linux-gnu arch x86_64 os linux-gnu system x86_64, linux-gnu status Under development (unstable) major 4 minor 5.0 year 2024 month 03 day 27 svn rev 86214 language R version.string R Under development (unstable) (2024-03-27 r86214) nickname Unsuffered Consequences > Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [R-pkg-devel] Check results on r-devel-windows claiming error but tests seem to pass?
On 26 March 2024 at 09:37, Dirk Eddelbuettel wrote: | | Avi, | | That was a hiccup and is now taken care of. When discussing this (off-line) | with Jeroen we (rightly) suggested that keeping an eye on Typo, as usual, "he (rightly) suggested". My bad. D. | |https://contributor.r-project.org/svn-dashboard/ | | is one possibility to keep track while we have no status alert system from | CRAN. I too was quite confused because a new upload showed errors, and | win-builder for r-devel just swallowed any uploads. | | Cheers, Dirk | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | __ | R-package-devel@r-project.org mailing list | https://stat.ethz.ch/mailman/listinfo/r-package-devel -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] using portable simd instructions
On 26 March 2024 at 10:53, jesse koops wrote: | How can I make this portable and CRAN-acceptable? By writing (or borrowing ?) some hardware detection via either configure / autoconf or cmake. This is no different than other tasks decided at install-time. Start with 'Writing R Extensions', as always, and work your way up from there. And if memory serves there are already a few other packages with SIMD at CRAN so you can also try to take advantage of the search for a 'token' (here: 'SIMD') at the (unofficial) CRAN mirror at GitHub: https://github.com/search?q=org%3Acran%20SIMD&type=code Hth, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
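To make the install-time detection concrete, a hedged sketch (the function name is invented for illustration): a configure/cmake probe, or simply the compiler's own -march flags, defines __AVX2__, and every other platform gets a plain-C fallback from the same source file:

```c
#include <stddef.h>
#include <assert.h>

/* Compile-time SIMD dispatch: __AVX2__ is set by the compiler (or by a
 * configure/cmake probe adding -mavx2); all other builds use plain C. */
#if defined(__AVX2__)
#include <immintrin.h>
static double vec_sum(const double *x, size_t n) {
    __m256d acc = _mm256_setzero_pd();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)                 /* 4 doubles per AVX2 step */
        acc = _mm256_add_pd(acc, _mm256_loadu_pd(x + i));
    double lanes[4];
    _mm256_storeu_pd(lanes, acc);
    double s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++) s += x[i];              /* scalar tail */
    return s;
}
#else
static double vec_sum(const double *x, size_t n) {
    double s = 0.0;                            /* portable fallback */
    for (size_t i = 0; i < n; i++) s += x[i];
    return s;
}
#endif
```

Either branch yields the same results, so the package builds and checks identically on any CRAN platform; only the speed differs.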
Re: [R-pkg-devel] Check results on r-devel-windows claiming error but tests seem to pass?
Avi, That was a hiccup and is now taken care of. When discussing this (off-line) with Jeroen we (rightly) suggested that keeping an eye on https://contributor.r-project.org/svn-dashboard/ is one possibility to keep track while we have no status alert system from CRAN. I too was quite confused because a new upload showed errors, and win-builder for r-devel just swallowed any uploads. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] How to store large data to be used in an R package?
On 25 March 2024 at 11:12, Jairo Hidalgo Migueles wrote: | I'm reaching out to seek some guidance regarding the storage of relatively | large data, ranging from 10-40 MB, intended for use within an R package. | Specifically, this data consists of regression and random forest models | crucial for making predictions within our R package. | | Initially, I attempted to save these models as internal data within the | package. While this approach maintains functionality, it has led to a | package size exceeding 20 MB. I'm concerned that this would complicate | submitting the package to CRAN in the future. | | I would greatly appreciate any suggestions or insights you may have on | alternative methods or best practices for efficiently storing and accessing | this data within our R package. Brooke and I wrote a paper on one way of addressing it via a 'data' package accessible via an Additional_repositories: entry supported by a drat repo. See https://journal.r-project.org/archive/2017/RJ-2017-026/index.html for the paper which contains a nice slow walkthrough of all the details. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
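In outline, the main package's DESCRIPTION then carries something like the following (all names and the URL are placeholders; the linked paper walks through a real setup):

```
Package: predictpkg
Version: 0.1.0
Suggests: predictdata
Additional_repositories: https://username.github.io/drat
```

The large models live in the 'predictdata' package served from the drat repository, so the CRAN package stays small while checks can still locate the data package.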
Re: CRAN Package Matrix update and a possible transition or not
On 23 March 2024 at 07:25, Dirk Eddelbuettel wrote: | | On 22 March 2024 at 11:12, Dirk Eddelbuettel wrote: | | | | On 27 February 2024 at 19:01, Dirk Eddelbuettel wrote: | | | A couple of days ago, the (effective) Maintainer and rather active developer | | | of the Matrix package Mikael Jagan (CC'ed) posted on the r-package-devel list | | | (the primary list for R package development) that the upcoming change of | | | Matrix 1.7-0, planned for March 11, will be _very mildly disruptive_ but only | | | to the very small subset of Matrix dependents that _actually use its | | | headers_. See the full mail at [1]. The gory detail is that Matrix embeds and | | | uses an advanced sparse matrix library (called SuiteSparse) which it updates, | | | and the change in headers affects those (and only those!) who compile against | | | these headers. | | | | | | Now, Matrix currently has 1333 packages at CRAN using it [2]. But he lists 15 | | | (fifteen) of possibly breaking because these are the packages having a | | | 'LinkingTo: Matrix' [3]. That is 1.113 per cent. | | | | | | It is similar for us. Running a simple `apt-cache rdepends r-cran-matrix | wc -l` | | | gets us 145 lines (including headers and meta packages). Call it 140 that a | | | transition would cover. | | | | | | But among the 15 affected only five are in Debian: | | | | | | irlbar-cran-irlba | | | lme4 r-cran-lme4 | | | OpenMx r-cran-openmx | | | TMP r-cran-tmp | | | bcSeqr-bioc-bcseq | | | | | | One of these is mine (lme4), I can easily produce a sequenced update. I | | | suggested we deal with the other _four packages_ by standard bug reports and | | | NMUs as needed instead of forcing likely 140 packages through a transition. | | | | | | Note that this is in fact truly different from the past two hiccups with Matrix | | | transition which happened at the R-only level of caching elements of its OO | | | resolution and whatnot hence affecting more packages. 
This time it really is | | | compilation, and packages NOT touching the SuiteSparse headers (ie roughly | | | 135 or so of the 140 Debian packages using Matrix) will not be affected. | | | | | | That said, I of course defer to the release team. If the feeling is 'eff | | | this, transition it is' that is what we do. Whether I think it is overkill or | | | not is moot. | | | | | | Feel free to CC me as I am no longer a regular on debian-devel. | | | | The new Matrix release is now on CRAN so I plan to proceed as outlined with a | | first upload to experimental, likely later today or this evening (my timezone). | | My bad. That was another release in the 1.6-* series, namely 1.6-5. No | special action needed. 1.7-0 is still pending. Gaaa. Wrong *again*. 1.7-0 *was* released (see my CRANberries log [1]) but has since been withdrawn (!!) at CRAN. We continue to monitor. Dirk [1] https://dirk.eddelbuettel.com/cranberries/2024/03/22/#Matrix_1.7-0 | Dirk | | | Dirk | | | | | | | | Cheers, Dirk | | | | | | | | | [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html | | | [2] In R: | | | > db <- tools::CRAN_package_db() | | | > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] | | | > length(matrixrevdep)# the vector 'matrixrevdep' lists all | | | [1] 1333 | | | > | | | [3] LinkingTo:, despite its name, is the directive to include the package C | | | headers in the compilation. The 'db' object above allows us to subset | | | which of the 1333 packages using Matrix also have a LinkingTo | | | | | | | | | -- | | | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | | | -- | | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: CRAN Package Matrix update and a possible transition or not
On 23 March 2024 at 07:25, Dirk Eddelbuettel wrote: | | On 22 March 2024 at 11:12, Dirk Eddelbuettel wrote: | | | | On 27 February 2024 at 19:01, Dirk Eddelbuettel wrote: | | | A couple of days ago, the (effective) Maintainer and rather active developer | | | of the Matrix package Mikael Jagan (CC'ed) posted on the r-package-devel list | | | (the primary list for R package development) that the upcoming change of | | | Matrix 1.7-0, planned for March 11, will be _very midly disruptive_ but only | | | to the very small subset of Matrix dependents that _actually use its | | | headers_. See the full mail at [1]. The gory detail is that Matrix embeds and | | | uses an advanced sparse matrix library (called SuiteSparse) which it updates, | | | and the change in headers affects those (and only those!) who compile against | | | these headers. | | | | | | Now, Matrix currently has 1333 packages at CRAN using it [2]. But he lists 15 | | | (fifteen) of possibly breaking because these are the packages having a | | | 'LinkingTo: Matrix' [3]. That 1.113 per cent. | | | | | | It is similar for us. Running a simple `apt-cache rdepends r-cran-matrix | wc -l` | | | gets us 145 lines (including headers and meta packages). Call it 140 that a | | | transition would cover. | | | | | | But among the 15 affected only five are in Debian: | | | | | | irlbar-cran-irlba | | | lme4 r-cran-lme4 | | | OpenMx r-cran-openmx | | | TMP r-cran-tmp | | | bcSeqr-bioc-bcseq | | | | | | One of these is mine (lme4), I can easily produce a sequenced update. I | | | suggested we deal with the other _four packages_ by standard bug reports and | | | NMUs as needed instead of forcing likely 140 packages through a transition. | | | | | | Note that is in fact truly different from the past two hickups with Matrix | | | transition which happened at the R-only level of caching elements of its OO | | | resolution and whatnot hence affecting more package. 
This time it really is | | | compilation, and packages NOT touching the SuiteSparse headers (ie roughly | | | 135 or so of the 140 Debian packages using Matrix) will not be affected. | | | | | | That said, I of course defer to the release team. If the feeling is 'eff | | | this, transition it is' that is what we do. Whether I think it is overkill or | | | not is moot. | | | | | | Feel free to CC me as I am no longer a regular on debian-devel. | | | | The new Matrix release is now on CRAN so I plan to proceed as outlined with a | | first upload to experimental, likely later today or this evening (my timezone). | | My bad. That was another release in the 1.6-* series, namely 1.6-5. No | special action needed. 1.7-0 is still pending. Gaaa. Wrong *again*. 1.7-0 *was* released (see my CRANberries log [1]) but has since been withdrawn (!!) at CRAN. We continue to monitor. Dirk [1] https://dirk.eddelbuettel.com/cranberries/2024/03/22/#Matrix_1.7-0 | Dirk | | | Dirk | | | | | | | | Cheers, Dirk | | | | | | | | | [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html | | | [2] In R: | | | > db <- tools::CRAN_package_db() | | | > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] | | | > length(matrixrevdep) # the vector 'matrixrevdep' lists all | | | [1] 1333 | | | > | | | [3] LinkingTo:, despite its name, is the directive to include the package C | | | headers in the compilation. The 'db' object above allows us to subset | | | which of the 1333 packages using Matrix also have a LinkingTo | | | | | | | | | -- | | | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | | | -- | | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
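Footnote [3] above stops short of showing the subsetting itself. A hedged sketch follows: the data frame `db` here is a tiny invented stand-in for `tools::CRAN_package_db()` (only the Package and LinkingTo columns, three made-up rows) so the snippet runs offline, but the same filter applies to the real db.

```r
## Sketch of the LinkingTo subsetting hinted at in footnote [3].
## 'db' is a small mock of tools::CRAN_package_db(); the rows are invented.
db <- data.frame(
  Package   = c("lme4", "irlba", "zoo"),
  LinkingTo = c("Rcpp, Matrix", "Matrix", NA),
  stringsAsFactors = FALSE
)
## packages whose LinkingTo field names Matrix, ie those that compile
## against the Matrix (and hence SuiteSparse) headers
linking_to_matrix <- db$Package[grepl("\\bMatrix\\b", db$LinkingTo)]
print(linking_to_matrix)
```

With the real db, `tools::package_dependencies("Matrix", which = "LinkingTo", reverse = TRUE, db = db)` should give the same answer more directly.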
Re: CRAN Package Matrix update and a possible transition or not
On 22 March 2024 at 11:12, Dirk Eddelbuettel wrote: | | On 27 February 2024 at 19:01, Dirk Eddelbuettel wrote: | | A couple of days ago, the (effective) Maintainer and rather active developer | | of the Matrix package Mikael Jagan (CC'ed) posted on the r-package-devel list | | (the primary list for R package development) that the upcoming change of | | Matrix 1.7-0, planned for March 11, will be _very mildly disruptive_ but only | | to the very small subset of Matrix dependents that _actually use its | | headers_. See the full mail at [1]. The gory detail is that Matrix embeds and | | uses an advanced sparse matrix library (called SuiteSparse) which it updates, | | and the change in headers affects those (and only those!) who compile against | | these headers. | | | | Now, Matrix currently has 1333 packages at CRAN using it [2]. But he lists 15 | | (fifteen) as possibly breaking because these are the packages having a | | 'LinkingTo: Matrix' [3]. That is 1.113 per cent. | | | | It is similar for us. Running a simple `apt-cache rdepends r-cran-matrix | wc -l` | | gets us 145 lines (including headers and meta packages). Call it 140 that a | | transition would cover. | | | | But among the 15 affected only five are in Debian: | | | | irlba r-cran-irlba | | lme4 r-cran-lme4 | | OpenMx r-cran-openmx | | TMP r-cran-tmp | | bcSeq r-bioc-bcseq | | | | One of these is mine (lme4), I can easily produce a sequenced update. I | | suggested we deal with the other _four packages_ by standard bug reports and | | NMUs as needed instead of forcing likely 140 packages through a transition. | | | | Note that this is in fact truly different from the past two hiccups with Matrix | | transitions which happened at the R-only level of caching elements of its OO | | resolution and whatnot, hence affecting more packages. This time it really is | | compilation, and packages NOT touching the SuiteSparse headers (ie roughly | | 135 or so of the 140 Debian packages using Matrix) will not be affected.
| | | | That said, I of course defer to the release team. If the feeling is 'eff | | this, transition it is' that is what we do. Whether I think it is overkill or | | not is moot. | | | | Feel free to CC me as I am no longer a regular on debian-devel. | | The new Matrix release is now on CRAN so I plan to proceed as outlined with a | first upload to experimental, likely later today or this evening (my timezone). My bad. That was another release in the 1.6-* series, namely 1.6-5. No special action needed. 1.7-0 is still pending. Dirk | Dirk | | | | | Cheers, Dirk | | | | | | [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html | | [2] In R: | | > db <- tools::CRAN_package_db() | | > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] | | > length(matrixrevdep) # the vector 'matrixrevdep' lists all | | [1] 1333 | | > | | [3] LinkingTo:, despite its name, is the directive to include the package C | | headers in the compilation. The 'db' object above allows us to subset | | which of the 1333 packages using Matrix also have a LinkingTo | | | | | | -- | | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: CRAN Package Matrix update and a possible transition or not
On 27 February 2024 at 19:01, Dirk Eddelbuettel wrote: | A couple of days ago, the (effective) Maintainer and rather active developer | of the Matrix package Mikael Jagan (CC'ed) posted on the r-package-devel list | (the primary list for R package development) that the upcoming change of | Matrix 1.7-0, planned for March 11, will be _very mildly disruptive_ but only | to the very small subset of Matrix dependents that _actually use its | headers_. See the full mail at [1]. The gory detail is that Matrix embeds and | uses an advanced sparse matrix library (called SuiteSparse) which it updates, | and the change in headers affects those (and only those!) who compile against | these headers. | | Now, Matrix currently has 1333 packages at CRAN using it [2]. But he lists 15 | (fifteen) as possibly breaking because these are the packages having a | 'LinkingTo: Matrix' [3]. That is 1.113 per cent. | | It is similar for us. Running a simple `apt-cache rdepends r-cran-matrix | wc -l` | gets us 145 lines (including headers and meta packages). Call it 140 that a | transition would cover. | | But among the 15 affected only five are in Debian: | | irlba r-cran-irlba | lme4 r-cran-lme4 | OpenMx r-cran-openmx | TMP r-cran-tmp | bcSeq r-bioc-bcseq | | One of these is mine (lme4), I can easily produce a sequenced update. I | suggested we deal with the other _four packages_ by standard bug reports and | NMUs as needed instead of forcing likely 140 packages through a transition. | | Note that this is in fact truly different from the past two hiccups with Matrix | transitions which happened at the R-only level of caching elements of its OO | resolution and whatnot, hence affecting more packages. This time it really is | compilation, and packages NOT touching the SuiteSparse headers (ie roughly | 135 or so of the 140 Debian packages using Matrix) will not be affected. | | That said, I of course defer to the release team. If the feeling is 'eff | this, transition it is' that is what we do.
Whether I think it is overkill or | not is moot. | | Feel free to CC me as I am no longer a regular on debian-devel. The new Matrix release is now on CRAN so I plan to proceed as outlined with a first upload to experimental, likely later today or this evening (my timezone). Dirk | | Cheers, Dirk | | | [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html | [2] In R: | > db <- tools::CRAN_package_db() | > matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]] | > length(matrixrevdep) # the vector 'matrixrevdep' lists all | [1] 1333 | > | [3] LinkingTo:, despite its name, is the directive to include the package C | headers in the compilation. The 'db' object above allows us to subset | which of the 1333 packages using Matrix also have a LinkingTo | | | -- | dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [R-pkg-devel] Request for assistance: error in installing on Debian (undefined symbol: omp_get_num_procs) and note in checking the HTML versions (no command 'tidy' found, package 'V8' unavailable
Salut Annaig, On 21 March 2024 at 09:26, Annaig De-Walsche wrote: | Dear R-package-devel Community, | | I hope this email finds you well. I am reaching out to seek assistance regarding package development in R. | | Specifically, I am currently developing an R package for querying composite hypotheses using Rccp. My preferred typo. The package is actually called Rcpp (pp as in plus-plus). | Skipping checking HTML validation: no command 'tidy' found | Skipping checking math rendering: package 'V8' unavailable | | I have searched through the available documentation and resources, but I still need help understanding the error and note messages. Hence, I am turning to this community, hoping that some of you have encountered similar issues. | | Thank you very much for considering my request. I would be grateful if anyone could provide me with some help. | | Best regards, | Annaïg De Walsche | Quantitative Genetics and Evolution unit of INRAE | Gif-sur-Yvette, France | Could you share with us which actual Docker container you started? | Installing package into ‘/home/docker/R’ | (as ‘lib’ is unspecified) | 'getOption("repos")' replaces Bioconductor standard repositories, see | 'help("repositories", package = "BiocManager")' for details. | Replacement repositories: | CRAN: https://cloud.r-project.org | * installing *source* package ‘qch’ ... 
| ** using staged installation | ** libs | using C++ compiler: ‘g++ (Debian 13.2.0-7) 13.2.0’ | using C++11 | g++ -fsanitize=undefined,bounds-strict -fno-omit-frame-pointer -std=gnu++11 -I"/usr/local/lib/R/include" -DNDEBUG -I'/home/docker/R/Rcpp/include' -I'/home/docker/R/RcppArmadillo/include' -I/usr/local/include -fpic -g -O2 -Wall -pedantic -mtune=native -c RcppExports.cpp -o RcppExports.o | g++ -fsanitize=undefined,bounds-strict -fno-omit-frame-pointer -std=gnu++11 -I"/usr/local/lib/R/include" -DNDEBUG -I'/home/docker/R/Rcpp/include' -I'/home/docker/R/RcppArmadillo/include' -I/usr/local/include -fpic -g -O2 -Wall -pedantic -mtune=native -c updatePrior_rcpp.cpp -o updatePrior_rcpp.o | updatePrior_rcpp.cpp:55: warning: ignoring ‘#pragma omp parallel’ [-Wunknown-pragmas] | 55 | #pragma omp parallel num_threads(threads_nb) | | | updatePrior_rcpp.cpp:65: warning: ignoring ‘#pragma omp for’ [-Wunknown-pragmas] | 65 | #pragma omp for | | | updatePrior_rcpp.cpp:92: warning: ignoring ‘#pragma omp critical’ [-Wunknown-pragmas] | 92 | #pragma omp critical | | | updatePrior_rcpp.cpp:178: warning: ignoring ‘#pragma omp parallel’ [-Wunknown-pragmas] | 178 | #pragma omp parallel num_threads(threads_nb) | | | updatePrior_rcpp.cpp:190: warning: ignoring ‘#pragma omp for’ [-Wunknown-pragmas] | 190 | #pragma omp for | | | updatePrior_rcpp.cpp:289: warning: ignoring ‘#pragma omp parallel’ [-Wunknown-pragmas] | 289 | #pragma omp parallel num_threads(threads_nb) | | | updatePrior_rcpp.cpp:301: warning: ignoring ‘#pragma omp for’ [-Wunknown-pragmas] | 301 | #pragma omp for | | | updatePrior_rcpp.cpp:341: warning: ignoring ‘#pragma omp critical’ [-Wunknown-pragmas] | 341 | #pragma omp critical | | | updatePrior_rcpp.cpp:409: warning: ignoring ‘#pragma omp parallel’ [-Wunknown-pragmas] | 409 | #pragma omp parallel num_threads(threads_nb) | | | updatePrior_rcpp.cpp:423: warning: ignoring ‘#pragma omp for’ [-Wunknown-pragmas] | 423 | #pragma omp for | | | updatePrior_rcpp.cpp:527: warning:
ignoring ‘#pragma omp parallel’ [-Wunknown-pragmas] | 527 | #pragma omp parallel num_threads(threads_nb) | | | updatePrior_rcpp.cpp:539: warning: ignoring ‘#pragma omp for’ [-Wunknown-pragmas] | 539 | #pragma omp for | | | updatePrior_rcpp.cpp:580: warning: ignoring ‘#pragma omp critical’ [-Wunknown-pragmas] | 580 | #pragma omp critical | | You seem to be using a number of OpenMP directives. That is good and performant. But OpenMP cannot be assumed as a given; some OSs more or less skip it altogether, and some platforms or compilers may not have it. I ran into the same issue earlier trying to test something with clang on Linux; it would not find the OpenMP library gcc happily finds. I moved on in that (local) use case. In short, you probably want to make your use conditional. | g++ -fsanitize=undefined,bounds-strict -fno-omit-frame-pointer -std=gnu++11 -shared -L/usr/local/lib/R/lib -L/usr/local/lib -o qch.so RcppExports.o updatePrior_rcpp.o -L/usr/local/lib/R/lib -lRlapack -L/usr/local/lib/R/lib -lRblas -lgfortran -lm -lubsan -lquadmath -L/usr/local/lib/R/lib -lR | installing to /home/docker/R/00LOCK-qch/00new/qch/libs | ** R | ** data | *** moving datasets to lazyload DB | ** byte-compile and prepare package for lazy loading | 'getOption("repos")' replaces Bioconductor standard repositories, see | 'help("repositories", package = "BiocManager")' for details. | Replacement repositories: | CRAN: https://cloud.r-project.org | Note: wrong
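To make that conditioning concrete, here is a sketch of the usual arrangement (not taken from the qch sources): let R supply the platform's OpenMP flags in src/Makevars, and guard the pragmas in the C++ code with the _OPENMP preprocessor symbol plus a serial fallback.

```make
# Hypothetical src/Makevars sketch. R expands SHLIB_OPENMP_CXXFLAGS to the
# correct OpenMP flags where OpenMP exists, and to nothing where it does not.
# Leaving it out of PKG_LIBS is a typical cause of load-time errors such as
# 'undefined symbol: omp_get_num_procs'.
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS)
```

In the sources, wrap `#include <omp.h>` and any omp_* calls in `#ifdef _OPENMP ... #endif` so the code still compiles, serially, where the pragmas would otherwise be ignored or the symbols left unresolved.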
Bug#1067218: gretl: please make the build reproducible
Hi Chris, On 20 March 2024 at 11:05, Chris Lamb wrote: | Source: gretl | Version: 2023c-2.1 | Severity: wishlist | Tags: patch | User: reproducible-bui...@lists.alioth.debian.org | Usertags: timestamps | X-Debbugs-Cc: reproducible-b...@lists.alioth.debian.org | | Hi, | | Whilst working on the Reproducible Builds effort [0], we noticed that | gretl could not be built reproducibly. | | This is because the PDF documentation embeds the current date via TeX's | \today (etc.). A patch is attached that uses FORCE_SOURCE_DATE to request | that TeX sources the current date from SOURCE_DATE_EPOCH instead of the | system clock. With pleasure! Thanks for the patch. gretl_2023c-3 is now building, should be up 'shortly'. Dirk | [0] https://reproducible-builds.org/ | | | Regards, | | -- | ,''`. | : :' : Chris Lamb | `. `'` la...@debian.org / chris-lamb.co.uk |`- | x[DELETED ATTACHMENT gretl.diff.txt, plain text] -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
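For reference, the mechanism the patch relies on can be sketched as follows; the date and the manual.tex file name are illustrative, not taken from the gretl build. TeX Live honours FORCE_SOURCE_DATE=1 by deriving \today from SOURCE_DATE_EPOCH rather than the system clock, so repeated builds embed the same date.

```shell
# Sketch: pin TeX's notion of 'today' (values illustrative, not from gretl)
export SOURCE_DATE_EPOCH=$(date -u -d '2024-03-20' +%s)
export FORCE_SOURCE_DATE=1
echo "$SOURCE_DATE_EPOCH"    # 1710892800
# pdflatex manual.tex        # 'manual.tex' is a placeholder document
```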
Re: [R-pkg-devel] new maintainer for CRAN package XML
Dear Uwe, Did CRAN ever reach a decision here with a suitable volunteer (or group of volunteers) ? The state of XML came up again recently on mastodon, and it might be helpful to share an update if there is one. Thanks, as always, for all you and the rest of the team do for CRAN. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
[Rd] RSS Feed of NEWS needs a hand
Years ago Duncan set up a nightly job to feed RSS based off changes to NEWS, borrowing some setup parts from CRANberries as for example the RSS 'compiler'. That job is currently showing the new \I{...} curly protection in an unfavourable light. Copying from the RSS reader I had pointed at this since the start [1], for today I see (indented by four spaces) CHANGES IN R-devel INSTALLATION on WINDOWS The makefiles and installer scripts for Windows have been tailored to \IRtools44, an update of the \IRtools43 toolchain. It is based on GCC 13 and newer versions of \IMinGW-W64, \Ibinutils and libraries (targeting 64-bit Intel CPUs). R-devel can no longer be built using \IRtools43 without changes. \IRtools44 has experimental suport for 64-bit ARM (aarch64) CPUs via LLVM 17 toolchain using lld, clang/flang-new and libc++. Can some kind soul put a filter over it to remove the \I ? Thanks, Dirk [1] Feedly. Unless we set this up so early that I once used Google Reader. It's been a while... -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
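The requested filter can be a single regular expression. A hedged sketch follows; the function name strip_I is mine, and it handles both the Rd form \I{...} and the brace-stripped \Ifoo form the feed currently shows, provided the protected token contains no spaces.

```r
## Sketch of the requested filter: remove \I{...} markup (and the bare
## \Ifoo variant) from NEWS text before it reaches the RSS compiler.
strip_I <- function(x) gsub("\\\\I\\{?([^} ]*)\\}?", "\\1", x)
strip_I("\\IRtools44 has experimental support")
```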
Bug#1066403: R packages failing to build with missing -ltirpc are actually an issue in r-base
On 13 March 2024 at 19:06, Aurelien Jarno wrote: | control: reassign 1066403 r-base-dev | control: reassign 1066452 r-base-dev | control: reassign 1066455 r-base-dev | control: reassign 1066456 r-base-dev | control: forcemerge 1066403 1066452 1066455 1066456 | control: affects 1066403 rjava | control: affects 1066403 rapache | control: affects 1066403 littler | control: affects 1066403 rpy2 | control: retitle 1066403 r-base-dev: missing dependency on libtirpc-dev | | Hi Dirk, | | There are 4 r-base packages failing to build in the latest archive | rebuild: | | #1066403 rjava: FTBFS: ld: cannot find -ltirpc: No such file or directory | #1066452 rapache: FTBFS: ld: cannot find -ltirpc: No such file or directory | #1066455 littler: FTBFS: ld: cannot find -ltirpc: No such file or directory | #1066456 rpy2: FTBFS: ld: cannot find -ltirpc: No such file or directory | | Investigating, it appears that the issue is actually at the r-base | level. They try to link with -ltirpc because R tells them to do so: | | $ R CMD config --ldflags | -Wl,--export-dynamic -fopenmp -Wl,-z,relro -L/usr/lib/R/lib -lR -lpcre2-8 -llzma -lbz2 -lz -ltirpc -lrt -ldl -lm -licuuc -licui18n | | Therefore it seems that r-base-dev is missing a dependency on | libtirpc-dev. Sorry for not having noticed that when filing #1065216. I should have noticed that too when I prepared 4.3.3-2 from your #1065216: r-base (4.3.3-2) unstable; urgency=medium * debian/control: Add libtirpc-dev to Build-Depends to fix build issue from side effects of t64 transition (Closes: #1065216) -- Dirk Eddelbuettel Mon, 04 Mar 2024 08:54:45 -0600 I will take care of it in -3. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [R-pkg-devel] Suggesting an archived package in the DESCRIPTION file
On 5 March 2024 at 15:12, Duncan Murdoch wrote: | On 05/03/2024 2:26 p.m., Dirk Eddelbuettel wrote: | > The default behaviour is to build after every commit to the main branch. But | > there are options. On the repo I mentioned we use | > | > "branch": "*release", | | Where do you put that? I don't see r2u on R-universe, so I guess you're | talking about a different repo; which one? In the (optional) control repo that can drive your 'r-universe', and the file has to be named 'packages.json'. For you the repo would https://github.com/dmurdoch/dmurdoch.r-universe.dev (and the naming rule was tightened by Jeroen recently -- we used to call these just 'universe', now it has to match your runiverse) The file packages.json would then have a block { "package": "rgl", "maintainer": "Duncan Murdoch " "url": "https://github.com/dmurdoch/rgl;, "available": true, "branch": "*release" } The reference I mentioned is our package 'tiledbsoma' (joint work of TileDB and CZI, in https://github.com/single-cell-data/TileDB-SOMA) and described here: https://github.com/TileDB-Inc/tiledb-inc.r-universe.dev/blob/master/packages.json (and you can ignore the '"subdir": "apis/r"' which is a facet local to that repo). Note that 'my' packages.json in my eddelbuettel.r-universe.dev ie https://github.com/eddelbuettel/eddelbuettel.r-universe.dev/blob/master/packages.json also describe but without the '"branch": "*release"' and that builds with every merge to the main branch by my choice; that build is mine and 'inofficial' giving us two. | > It is under your control. You could document how to install via `remotes` | > from that branch. As so often, it's about trading one thing off for another. | | I do that, but my documentation falls off the bottom of the screen, and | the automatic docs generated by R-universe are at the top. I always get lost in the r-universe docs too. 
Some, as Jeroen kindly reminded me the other day, are here: https://github.com/r-universe-org Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] Suggesting an archived package in the DESCRIPTION file
On 5 March 2024 at 13:28, Duncan Murdoch wrote: | What I'm seeing is that the tags are ignored, and it is distributing the | HEAD of the main branch. I don't think most users should be using that | version: in my packages it won't have had full reverse dependency | checks, I only do that before CRAN releases. And occasionally it hasn't | even passed R CMD check, though that's not my normal workflow. On the | other hand, I like that it's available and easy to install, it just | shouldn't be the default install. The default behaviour is to build after every commit to the main branch. But there are options. On the repo I mentioned we use "branch": "*release", and now builds occur on tagged releases only. The above is AFAIUI a meta declaration understood by `remotes`; it was an option suggested by a colleague. Naming actual branches also works. | I suppose I could do all development on a "devel" branch, and only merge | it into the main branch after I wanted to make a release, but then the | R-universe instructions would be no good for getting the devel code. It is under your control. You could document how to install via `remotes` from that branch. As so often, it's about trading one thing off for another. | I don't know anything about dpkg, but having some options available to | package authors would be a good thing. Yes, but you know {install,available}.packages and have some understanding of how R identifies and installs packages. I merely illustrated a different use pattern of giving "weights" to repos. If "we all" want different behaviour, someone has to sit down and write it. Discussing some possible specs and desired behaviour may help. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] Suggesting an archived package in the DESCRIPTION file
On 5 March 2024 at 11:56, Duncan Murdoch wrote: | I have mixed feelings about r-universe. On the one hand, it is really | nicely put together, and it offers the service described above. On the | other, it's probably a bad idea to follow its advice and use | install.packages() with `repos` as shown: that will install development | versions of packages, not releases. Yup. It's a point I raised right at the start as I really do believe in curated releases but clearly a lot of people prefer the simplicity of 'tagging a release' at GitHub and then getting a build. r-universe is indeed good at what it does and reliable. There are limited choices in 'driving' what you can do with it. We rely quite heavily on it in a large project for work. As each 'repo' can appear only once in a universe, we resorted to having the 'official' build follow GitHub 'releases', as well as (optional, additional) builds against the main branch from another universe. This example is for a non-CRAN package. With CRAN packages, r-universe can be useful too. For some of my packages, I now show multiple 'badges' at the README: for the released CRAN version as well as for the current 'rc' in the main branch sporting a differentiating final digit. RcppArmadillo had a pre-release available to test that way for a few weeks until the new release this week. So in effect, this gives you what `drat` allows yet also automagically adds builds. It's quite useful when you are careful about it. | Do you know if it's possible for a package to suggest the CRAN version | first, with an option like the above only offered as a pre-release option? In the language of Debian and its dpkg and tools, one solution to that would be 'repository pinning' to declare a 'value' on a repository. There, the default is 500, and e.g. for r2u I set this to 700 as you usually want its versions. We do not have this for R, but it could be added (eventually) as a new value in PACKAGES, or as a new supplementary attribute.
Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
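For anyone unfamiliar with the pinning mechanism referenced above, apt declares it in a preferences file; the snippet below is a generic sketch with a placeholder origin, not the actual r2u configuration.

```
# Hypothetical /etc/apt/preferences.d/example: raise one repository's
# priority above the default of 500 so its package versions are preferred.
# 'o=Example-Origin' is a placeholder; the real value comes from the
# repository's Release file.
Package: *
Pin: release o=Example-Origin
Pin-Priority: 700
```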
Re: [R-pkg-devel] Suggesting an archived package in the DESCRIPTION file
On 5 March 2024 at 06:25, Duncan Murdoch wrote: | You could make a compatible version of `survivalmodels` available on a | non-CRAN website, and refer to that website in the | Additional_repositories field of DESCRIPTION. Every r-universe sub-site fits that requirement. For this package Google's first hit was https://raphaels1.r-universe.dev/survivalmodels and it carries the same line on install.packages() that Jeroen adds to every page: install.packages('survivalmodels', repos = c('https://raphaels1.r-universe.dev', 'https://cloud.r-project.org')) So doing all three of - adding a line 'Additional_repositories: https://raphaels1.r-universe.dev' - adding 'Suggests: survivalmodels' - ensuring conditional use only, as Suggests != Depends should do. | It would be best if you fixed whatever issue caused survivalmodels to be | archived when you do this. | | Looking here: | https://cran-archive.r-project.org/web/checks/2024/2024-03-02_check_results_survivalmodels.html | that appears very easy to do. The source is here: | https://github.com/RaphaelS1/survivalmodels/ . The author may even take a PR fixing this going forward. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
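The 'conditional use' requirement can be sketched as a runtime guard; the downstream function name wrapped here is hypothetical:

```r
## Guard every use of the suggested package so the code still works
## (and R CMD check still passes) when 'survivalmodels' is absent.
fit_survival <- function(...) {
    if (!requireNamespace("survivalmodels", quietly = TRUE)) {
        message("Package 'survivalmodels' not installed; ",
                "see the Additional_repositories field for where to get it.")
        return(invisible(NULL))
    }
    survivalmodels::some_model(...)    # hypothetical downstream call
}
```

Examples and vignettes using the package need the same guard, which is exactly why Suggests is weaker than Depends.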
Bug#1065216: r-base: recent libc6-dev change causes the xdr feature to be dropped
On 1 March 2024 at 23:36, Aurelien Jarno wrote: | Source: r-base | Version: 4.3.3-1 | Severity: serious | Tags: ftbfs | Justification: fails to build from source (but built successfully in the past) | User: debian-gl...@lists.debian.org | Usertags: libtirpc-dev | | Dear maintainer, | | Starting with glibc 2.31, support for NIS (libnsl library) has been | moved to a separate libnsl2 package. In order to allow a smooth | transition, a libnsl-dev, which depends on libtirpc-dev, has been added | to the libc6-dev package. | | The libnsl-dev dependency has been temporarily dropped in the 2.37-15.1 | NMU, as part of the 64-bit time_t transition. This causes the xdr | feature of r-base to be dropped, I am not sure it is something to care | about. | | Therefore please either: | - Add libtirpc-dev as build dependency I'll do that. We don't see that much little- vs big-endian variation out there anymore, but it is a feature that was long supported, so it should remain supported. Dirk | - Disable the xdr feature support explicitly so that it does not depend | on the packages installed on the system. | | Regards | Aurelien -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
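For context, the xdr feature concerns R's use of big-endian XDR (network) byte order in its portable serialization format; base R exposes the switch directly, as a minimal illustration:

```r
x <- 1:3
big    <- serialize(x, NULL, xdr = TRUE)   # portable XDR / big-endian (default)
native <- serialize(x, NULL, xdr = FALSE)  # native byte order, possibly faster
# Both round-trip on the machine that wrote them:
stopifnot(identical(unserialize(big), x),
          identical(unserialize(native), x))
```

The portability concern is that only the xdr = TRUE form is guaranteed readable across machines of different endianness.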
Re: [Rcpp-devel] segfault on exit CRAN+Intel only
Hi Murray, On 4 March 2024 at 07:03, Murray Efford wrote: | Dirk | Thanks for a very helpful reply. I'll simplify my return values. | | I mentioned Intel with rhub2 in my earlier post here, but I'm sorry | that was somewhat buried. Debugging is somewhere between painful and | impossible when my only check is submitting to CRAN! It would be *really* helpful to have a path not involving CRAN. | Also, I had tried valgrind, but that got stuck in Linux on what I | assumed was an unrelated "unhandled instruction" error wrt OpenBLAS. | That appeared unrelated, but maybe we need to factor it in as a | possible interaction with RcppArmadillo. Strangely valgrind sticks on | this -- | ==2242833== valgrind: Unrecognised instruction at address 0x57d3650. | ==2242833==at 0x57D3650: dot_compute (in | /opt/OpenBLAS/lib/libopenblas_skylakexp-r0.3.23.dev.so) | -- even after I have set options(matprod="internal") in R, so | something else (RcppArmadillo?) must be trying to use OpenBLAS. That seems local to your system. I can just do 'R -d valgrind' as expected. edd@rob:~$ R -q -d valgrind -e 'v <- integer(10)' ==2219911== Memcheck, a memory error detector ==2219911== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al. 
==2219911== Using Valgrind-3.21.0 and LibVEX; rerun with -h for copyright info
==2219911== Command: /usr/lib/R/bin/exec/R -q -e v <- integer(10)
==2219911==
> v <- integer(10)
>
>
==2219911==
==2219911== HEAP SUMMARY:
==2219911==     in use at exit: 51,025,490 bytes in 11,017 blocks
==2219911==   total heap usage: 26,520 allocs, 15,503 frees, 78,392,784 bytes allocated
==2219911==
==2219911== LEAK SUMMARY:
==2219911==    definitely lost: 0 bytes in 0 blocks
==2219911==    indirectly lost: 0 bytes in 0 blocks
==2219911==      possibly lost: 0 bytes in 0 blocks
==2219911==    still reachable: 51,025,490 bytes in 11,017 blocks
==2219911==                       of which reachable via heuristic:
==2219911==                         newarray           : 4,264 bytes in 1 blocks
==2219911==         suppressed: 0 bytes in 0 blocks
==2219911== Reachable blocks (those to which a pointer was found) are not shown.
==2219911== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==2219911==
==2219911== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
edd@rob:~$

valgrind is pretty good and useful. I also enjoy the fact that e.g. tinytest test files are scripts, so we can run them in the aggregate, or in isolation, or via their helper function.

edd@rob:~/git/rcpparmadillo/inst/tinytest(master)$ R -q -d valgrind -e 'tinytest::run_test_file("test_fastLm.R")'
==2243731== Memcheck, a memory error detector
==2243731== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==2243731== Using Valgrind-3.21.0 and LibVEX; rerun with -h for copyright info
==2243731== Command: /usr/lib/R/bin/exec/R -q -e tinytest::run_test_file("test_fastLm.R")
==2243731==
> tinytest::run_test_file("test_fastLm.R")
test_fastLm.R.
    30 tests OK 4.3s
All ok, 30 results (4.3s)
>
>
==2243731==
==2243731== HEAP SUMMARY:
==2243731==     in use at exit: 58,200,576 bytes in 11,419 blocks
==2243731==   total heap usage: 38,070 allocs, 26,651 frees, 141,466,157 bytes allocated
==2243731==
==2243731== LEAK SUMMARY:
==2243731==    definitely lost: 0 bytes in 0 blocks
==2243731==    indirectly lost: 0 bytes in 0 blocks
==2243731==      possibly lost: 0 bytes in 0 blocks
==2243731==    still reachable: 58,200,576 bytes in 11,419 blocks
==2243731==                       of which reachable via heuristic:
==2243731==                         newarray           : 4,264 bytes in 1 blocks
==2243731==         suppressed: 0 bytes in 0 blocks
==2243731== Reachable blocks (those to which a pointer was found) are not shown.
==2243731== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==2243731==
==2243731== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
edd@rob:~/git/rcpparmadillo/inst/tinytest(master)$

Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
Re: [Rcpp-devel] segfault on exit CRAN+Intel only
Ah, the "beauty" (ahem) of discussion scattered over two mailing lists: I now see you have a testbed via rhub2 (good) even though it does not reproduce the issue (hm...). So you could still try the suggested simplification. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
Re: [Rcpp-devel] segfault on exit CRAN+Intel only
On 3 March 2024 at 20:47, Murray Efford wrote: | A couple of days ago I posted on R-package-devel about a mysterious | segfault from R CMD checks of my package secrdesign (see | https://CRAN.R-project.org/package=secrdesign, and | https://github.com/MurrayEfford/secrdesign) The issue arises only on | CRAN and only with the Intel(R) oneAPI DPC++/C++ Compiler: | | *** caught segfault *** | address (nil), cause 'unknown' | | As noted by Ivan Krylov and Uwe Ligges, the fault happens at the end | of the R session (as it quits()). The package passes when checked on | Intel(R) oneAPI DPC++/C++ Compiler 2023.2.0 (2023.2.0.20230721) with | rhub2. | | Now, CRAN via Uwe Ligges has accepted a new version of secrdesign | despite the continuing error. My reason for raising it here is that | (i) it is likely to raise its head next time I update, | (ii) my experience may not be unique, | (iii) my use of Rcpp, RcppArmadillo and BH in this package is very | limited (https://github.com/MurrayEfford/secrdesign/tree/main/src), | and it may therefore provide clues to an Rcpp pro. | (iv) I have just noticed a similar 'Additional issue' for | https://CRAN.R-project.org/package=ipsecr that also uses Rcpp, | RcppArmadillo and BH. | Any advice would be welcome. I have no experience with docker, so | answers in words of one or few syllables, please. I was about to suggest running with 'valgrind' and/or 'asan'/'ubsan' as many folks do when chasing 'spurious' bugs related to memory -- but then CRAN already does that for you and found nothing! So it is hard to say anything. It could be a bug on your end, it could be a bug in the compiler (!!), it could be a bug in the libraries. Now, Boost and Armadillo are fairly mature and widely used, so that is not likely either. Given that you spotted another package in the same intersection, it could be an interaction. But I am afraid you may need to work towards creating a 'workbench' where you can chip away at this.
Some of us can eyeball, and some are truly excellent at this, but that may not be a reliable (or scalable) way forward. [ goes looking ] So I eyeballed your code. One thing I might do is keep the return object simpler. Instead of (on-the-fly) creation of an Rcpp::List with Rcpp::Named entries that contain scalars, maybe consider returning an Rcpp::NumericVector(2) and setting the two elements. You can still set 'names' on that too. A super-pedestrian version is

> Rcpp::cppFunction('NumericVector myvec() { NumericVector v(2); v[0] = 1.23; v[1] = 2.34; CharacterVector nm(2); nm[0] = "foo"; nm[1] = "bar"; v.attr("names") = nm; return v; }')
> myvec()
 foo  bar
1.23 2.34
>

and a fancier brace-initialization way is

> Rcpp::cppFunction('NumericVector myvec() { NumericVector v{1.23, 2.34}; CharacterVector nm{"foo", "bar"}; v.attr("names") = nm; return v; }')
> myvec()
 foo  bar
1.23 2.34
>

Either works and avoids the creation of temporaries at return, which the (less widely used !!) Intel compiler may resolve differently from g++ and clang++. So it could be us, and defensive programming is always good, but without a repro it is so hard to say anything... Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
Re: [Rcpp-devel] RcppArmadillo with -fopenmp: Not using all available cores
Hi Robin, On 2 March 2024 at 16:34, Robin Liu wrote: | sessionInfo() was the right clue. Indeed the version of R on machine B was not | linked to OpenBLAS. Switching to a version with OpenBLAS allows the test code | to use all cores. | | A clear way to check which library is linked is to run the following: | | > extSoftVersion()["BLAS"] Ah yes -- I keep forgetting about that one. Good reminder! | Thanks for your help! Always a pleasure. Glad you are all set. Dirk | On Sat, Feb 24, 2024 at 9:17 AM Dirk Eddelbuettel wrote: | | | On 24 February 2024 at 11:44, Robin Liu wrote: | | Thank you Dirk for the response. | | | | I called RcppArmadillo::armadillo_get_number_of_omp_threads() on both | machines | | and correctly see that machine A and B have 20 and 40 cores, | respectively. I | | also see that calling the setter changes this value. | | | | However, calling the setter does not seem to change the number of cores | used on | | either machine A or B. I have updated my code example as below: the | execution | | uses 20 cores on machine A and 1 core on machine B as before, despite my | | setting the number of omp threads to 5. Do you have any further hints? | | I fear you need to debug that on the machine 'B' in question. It's all open | source. I do not think either Conrad or myself put code in to constrain | you | to one core on 'B' (and then doesn't as you see on 'A'). | | You can grep around both the RcppArmadillo wrapper code and the include | Armadillo code, I suggest making a local copy and peppering in some print | statements. | | Also keep in mind that (Rcpp)Armadillo hands off to computation to the | actual | LAPACK / BLAS implementation on that machine. Lots of things can go wrong | there: maybe R was compiled with its own embedded BLAS/LAPACK sources | (preventing a call out to OpenBLAS even when the machine has it). Or maybe | R | was compiled correctly but a single-threaded set of libraries is on the | machine. 
| | You have not supplied any of that information. Many bug report suggestions | hint that showing `sessionInfo()` helps -- and it does show the BLAS/LAPACK | libraries. You are not forced to show us this, but by not showing us you | prevent us from being more focussed on suggestions. So maybe start at your | end by glancing at sessionInfo() on A and B? | | Dirk | | | | library(RcppArmadillo) | | library(Rcpp) | | | | RcppArmadillo::armadillo_set_number_of_omp_threads(5) | | print(sprintf("There are %d threads", | | RcppArmadillo::armadillo_get_number_of_omp_threads())) | | | | src <- | | r"(#include | | | | // [[Rcpp::depends(RcppArmadillo)]] | | | | // [[Rcpp::export]] | | arma::vec getEigenValues(arma::mat M) { | | return arma::eig_sym(M); | | })" | | | | size <- 1 | | m <- matrix(rnorm(size^2), size, size) | | m <- m * t(m) | | | | # This line compiles the above code with the -fopenmp flag. | | sourceCpp(code = src, verbose = TRUE, rebuild = TRUE) | | result <- getEigenValues(m) | | print(result[1:10]) | | | | On Fri, Feb 23, 2024 at 12:53 PM Dirk Eddelbuettel | wrote: | | | | | | On 23 February 2024 at 09:35, Robin Liu wrote: | | | Hi all, | | | | | | Here is an R script that uses Armadillo to decompose a large matrix | and | | print | | | the first 10 eigenvalues. | | | | | | library(RcppArmadillo) | | | library(Rcpp) | | | | | | src <- | | | r"(#include | | | | | | // [[Rcpp::depends(RcppArmadillo)]] | | | | | | // [[Rcpp::export]] | | | arma::vec getEigenValues(arma::mat M) { | | | return arma::eig_sym(M); | | | })" | | | | | | size <- 1 | | | m <- matrix(rnorm(size^2), size, size) | | | m <- m * t(m) | | | | | | # This line compiles the above code with the -fopenmp flag. | | | sourceCpp(code = src, verbose = TRUE, rebuild = TRUE) | | | result <- getEigenValues(m) | | | print(result[1:10]) | | | | | | When I run this code on server A, I see that arma can implicitly | leverage | | all | | | available cores by running top -H. 
However, on server B it can only | use | | one | | | core despite multiple being available: there is just one process | entry in | | top | | | -H. Both processes successfully exit and return an answer. The | process on | | | server B is of course much slower. | | | | It is documented in the package how this is applied and the policy is | to | |
Re: [Rcpp-devel] Segfault in wrapping code in Rcpp
Hi Nikhil, Don't post images. I read in a text-based reader. The mailing list software also scrubs html (I think). I would simplify. Start with the simplest Rcpp Modules setup. Then add. Keep checking. Eventually, on your way towards what you are doing now, you may spot the error. Hope this helps, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
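A minimal Rcpp Modules starting point of the kind suggested above might look like this (class, method, and module names are illustrative); sourceCpp() compiles the module and exposes its class in the session:

```r
library(Rcpp)

src <- '
#include <Rcpp.h>

class Counter {
public:
    Counter() : n_(0) {}
    void add(int k) { n_ += k; }
    int  get() const { return n_; }
private:
    int n_;
};

RCPP_MODULE(counter_module) {
    Rcpp::class_<Counter>("Counter")
        .constructor()
        .method("add", &Counter::add)
        .method("get", &Counter::get);
}
'

sourceCpp(code = src)    # compiles and exposes the Counter class
cnt <- new(Counter)
cnt$add(3)
cnt$get()                # returns 3
```

Once this compiles and runs, grow it one constructor, field, or method at a time towards the real class; the step that breaks is the culprit.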
Bug#1063320: gretl: NMU diff for 64-bit time_t transition
On 29 February 2024 at 00:20, Steve Langasek wrote: | Dear maintainer, | | Please find attached a final version of this patch for the time_t | transition. This patch is being uploaded to unstable. | | Note that this adds a versioned build-dependency on dpkg-dev, to guard | against accidental backports with a wrong ABI. Thanks a lot for managing this well. I replaced the earlier patch with this one and force-pushed over the previous commit. The repo is current. Really appreciate the handling of the 64-bit time_t issue by all. Cheers, Dirk | Thanks! | | | -- System Information: | Debian Release: trixie/sid | APT prefers unstable | APT policy: (500, 'unstable') | Architecture: amd64 (x86_64) | | Kernel: Linux 6.5.0-14-generic (SMP w/12 CPU threads; PREEMPT) | Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE | Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set | Shell: /bin/sh linked to /usr/bin/dash | Init: systemd (via /run/systemd/system) | x[DELETED ATTACHMENT nmu_gretl.debdiff, plain text] -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1062364: dieharder: NMU diff for 64-bit time_t transition
On 28 February 2024 at 21:28, mwhud...@fastmail.fm wrote: | Dear maintainer, | | Please find attached a final version of this patch for the time_t | transition. This patch is being uploaded to unstable. | | Note that this adds a versioned build-dependency on dpkg-dev, to guard | against accidental backports with a wrong ABI. Thanks a lot for managing this well. I replaced the earlier patch with this one and force-pushed over the previous commit. The repo is current. Really appreciate the handling of the 64-bit time_t issue by all. Cheers, Dirk | Thanks! | | | -- System Information: | Debian Release: trixie/sid | APT prefers unstable | APT policy: (500, 'unstable'), (1, 'experimental') | Architecture: amd64 (x86_64) | | Kernel: Linux 6.5.0-21-generic (SMP w/16 CPU threads; PREEMPT) | Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE | Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set | Shell: /bin/sh linked to /usr/bin/dash | Init: systemd (via /run/systemd/system) | x[DELETED ATTACHMENT nmu_dieharder.debdiff, plain text] -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Bug#1064388: ess: New version 24.1.1
On 28 February 2024 at 10:17, Sébastien Villemot wrote: | Salut Dirk, | | Le mercredi 21 février 2024 à 06:54 -0600, Dirk Eddelbuettel a écrit : | > Source: ess | > Version: 24.01.0-1 | > Severity: minor | > | > Salut Seb -- and thanks for packaging the recent 24.1.0 which installs | > fine. There is by now a follow-up 24.1.1 which would be nice to have too. | | Thanks for the ping. | | Actually I missed this release because the upstream tarball was not | uploaded to the location scanned by the debian/watch file. I emailed | the ess-deb...@r-project.org list about that problem. I am trying to remember who, besides you and me, reads ess-debian. Hm. Try ess-help in case you don't hear anything back. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [R-pkg-devel] Unable to access log operator in C
On 28 February 2024 at 19:05, Avraham Adler wrote: | I am hoping the solution to this question is simple, but I have not | been able to find one. I am building a routine in C to be called from | R. I am including Rmath.h. However, when I have a call to "log", I get | the error "called object 'log' is not a function or a function | pointer". When I "trick" it by calling log1p(x - 1), which I *know* is | exported from Rmath.h, it works. | | More completely, my includes are: | #include | #include | #include | #include | #include // for NULL | #include | | The object being logged is a double, passed into C as an SEXP, call it | "a", which for now will always be a singleton. I initialize a pointer | double *pa = REAL(a). I eventually call log(pa[0]), which does not | compile and throws the error listed above. Switching the call to | log1p(pa[0] - 1.0) works and returns the proper answer. | | Even including math.h explicitly does not help, which makes sense as | it is included by Rmath.h. Can you show the actual line? Worst case, rename your source file to end in .cpp, include <cmath>, and call std::log. > Rcpp::cppFunction("double mylog(double x) { return std::log(x); }") > mylog(exp(42)) [1] 42 > Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
CRAN Package Matrix update and a possible transition or not
A couple of days ago, the (effective) maintainer and rather active developer of the Matrix package, Mikael Jagan (CC'ed), posted on the r-package-devel list (the primary list for R package development) that the upcoming change to Matrix 1.7-0, planned for March 11, will be _very mildly disruptive_, but only to the very small subset of Matrix dependents that _actually use its headers_. See the full mail at [1]. The gory detail is that Matrix embeds and uses an advanced sparse matrix library (called SuiteSparse) which it updates, and the change in headers affects those (and only those!) who compile against these headers. Now, Matrix currently has 1333 packages at CRAN using it [2]. But he lists 15 (fifteen) as possibly breaking because these are the packages having a 'LinkingTo: Matrix' [3]. That is 1.113 per cent. It is similar for us. Running a simple `apt-cache rdepends r-cran-matrix | wc -l` gets us 145 lines (including headers and meta packages). Call it 140 that a transition would cover. But among the 15 affected only five are in Debian:

   irlba     r-cran-irlba
   lme4      r-cran-lme4
   OpenMx    r-cran-openmx
   TMB       r-cran-tmb
   bcSeq     r-bioc-bcseq

One of these is mine (lme4), and I can easily produce a sequenced update. I suggested we deal with the other _four packages_ by standard bug reports and NMUs as needed instead of forcing likely 140 packages through a transition. Note that this is in fact truly different from the past two hiccups with Matrix transitions, which happened at the R-only level of caching elements of its OO resolution and whatnot, hence affecting more packages. This time it really is compilation, and packages NOT touching the SuiteSparse headers (ie roughly 135 or so of the 140 Debian packages using Matrix) will not be affected. That said, I of course defer to the release team. If the feeling is 'eff this, transition it is' then that is what we do. Whether I think it is overkill or not is moot. Feel free to CC me as I am no longer a regular on debian-devel.
Cheers, Dirk [1] https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010463.html [2] In R:

> db <- tools::CRAN_package_db()
> matrixrevdep <- tools::package_dependencies("Matrix", reverse=TRUE, db=db)[[1]]
> length(matrixrevdep)   # the vector 'matrixrevdep' lists all
[1] 1333
>

[3] LinkingTo:, despite its name, is the directive to include the package's C headers in the compilation. The 'db' object above allows us to subset which of the 1333 packages using Matrix also have a LinkingTo. -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
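The subsetting described in footnote [3] can be sketched on a toy 'db' (the real one comes from tools::CRAN_package_db(); the rows and LinkingTo values below are illustrative):

```r
## Toy stand-in for the CRAN package database.
db <- data.frame(
    Package   = c("lme4", "irlba", "ggplot2"),
    LinkingTo = c("Matrix, Rcpp, RcppEigen", "Matrix", NA),
    stringsAsFactors = FALSE
)

## Packages that compile against the Matrix headers: those whose
## LinkingTo field mentions Matrix as a whole word.
lt <- db$LinkingTo
lt[is.na(lt)] <- ""                       # packages without any LinkingTo
linking_to_matrix <- db$Package[grepl("\\bMatrix\\b", lt)]
linking_to_matrix                          # "lme4" "irlba"
```

Intersecting that set with the reverse dependencies of Matrix yields the small group of packages a header change can actually break.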
Re: [R-pkg-devel] Package required but not available: ‘arrow’
On 26 February 2024 at 09:19, Simon Urbanek wrote: | [requiring increased is] best way [..] and certainly the only good practice. No, not really. Another viewpoint, which is implemented in another project I contribute to, is where a version + build_revision tuple exists if, and only if, the underlying upload was accepted. Until then, upload iterations are fine. Hence s/only good practice/one possible way/. Anyway: `arrow` is long back at CRAN (yay!) so this thread is done anyway. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [Rcpp-devel] RcppArmadillo with -fopenmp: Not using all available cores
On 24 February 2024 at 11:44, Robin Liu wrote: | Thank you Dirk for the response. | | I called RcppArmadillo::armadillo_get_number_of_omp_threads() on both machines | and correctly see that machine A and B have 20 and 40 cores, respectively. I | also see that calling the setter changes this value. | | However, calling the setter does not seem to change the number of cores used on | either machine A or B. I have updated my code example as below: the execution | uses 20 cores on machine A and 1 core on machine B as before, despite my | setting the number of omp threads to 5. Do you have any further hints? I fear you need to debug that on the machine 'B' in question. It's all open source. I do not think either Conrad or I put in code to constrain you to one core on 'B' (yet not on 'A', as you see). You can grep around both the RcppArmadillo wrapper code and the included Armadillo code; I suggest making a local copy and peppering in some print statements. Also keep in mind that (Rcpp)Armadillo hands off the computation to the actual LAPACK / BLAS implementation on that machine. Lots of things can go wrong there: maybe R was compiled with its own embedded BLAS/LAPACK sources (preventing a call out to OpenBLAS even when the machine has it). Or maybe R was compiled correctly but a single-threaded set of libraries is on the machine. You have not supplied any of that information. Many bug report suggestions hint that showing `sessionInfo()` helps -- and it does show the BLAS/LAPACK libraries. You are not forced to show us this, but by not showing us you prevent us from being more focussed on suggestions. So maybe start at your end by glancing at sessionInfo() on A and B?
Dirk | library(RcppArmadillo) | library(Rcpp) | | RcppArmadillo::armadillo_set_number_of_omp_threads(5) | print(sprintf("There are %d threads", | RcppArmadillo::armadillo_get_number_of_omp_threads())) | | src <- | r"(#include | | // [[Rcpp::depends(RcppArmadillo)]] | | // [[Rcpp::export]] | arma::vec getEigenValues(arma::mat M) { | return arma::eig_sym(M); | })" | | size <- 1 | m <- matrix(rnorm(size^2), size, size) | m <- m * t(m) | | # This line compiles the above code with the -fopenmp flag. | sourceCpp(code = src, verbose = TRUE, rebuild = TRUE) | result <- getEigenValues(m) | print(result[1:10]) | | On Fri, Feb 23, 2024 at 12:53 PM Dirk Eddelbuettel wrote: | | | On 23 February 2024 at 09:35, Robin Liu wrote: | | Hi all, | | | | Here is an R script that uses Armadillo to decompose a large matrix and | print | | the first 10 eigenvalues. | | | | library(RcppArmadillo) | | library(Rcpp) | | | | src <- | | r"(#include | | | | // [[Rcpp::depends(RcppArmadillo)]] | | | | // [[Rcpp::export]] | | arma::vec getEigenValues(arma::mat M) { | | return arma::eig_sym(M); | | })" | | | | size <- 1 | | m <- matrix(rnorm(size^2), size, size) | | m <- m * t(m) | | | | # This line compiles the above code with the -fopenmp flag. | | sourceCpp(code = src, verbose = TRUE, rebuild = TRUE) | | result <- getEigenValues(m) | | print(result[1:10]) | | | | When I run this code on server A, I see that arma can implicitly leverage | all | | available cores by running top -H. However, on server B it can only use | one | | core despite multiple being available: there is just one process entry in | top | | -H. Both processes successfully exit and return an answer. The process on | | server B is of course much slower. 
| | It is documented in the package how this is applied and the policy is to | NOT | blindly enforce one use case (say all cores, or half, or a magically chosen | value of N for whatever value of N) but to follow the local admin setting | and | respect standard environment variables. | | So I suspect that your machine 'B' differs from machine 'A' in this | regard. | | Note that this is a _run-time_ and not _compile-time_ behavior. As it is for | multicore-enabled LAPACK and BLAS libraries, the OpenMP library and | basically | most software of this type. | | You can override it, see | RcppArmadillo::armadillo_set_number_of_omp_threads | RcppArmadillo::armadillo_get_number_of_omp_threads | | Can you try and see if these help you? | | Dirk | | | Here is the compilation on server A: | | /usr/local/lib/R/bin/R CMD SHLIB --preclean -o 'sourceCpp_2.so' | | 'file197c21cbec564.cpp' | | g++ -std=gnu++11 -I"/usr/local/lib/R/include" -DNDEBUG -I../inst/include | | -fopenmp -I"/usr/local/lib/R/site-library/Rcpp/include" -I"/usr/local/ | lib/R/ | | site-library/RcppArmadillo/include" -I"/tmp/RtmpwhGRi3/ | | sourceCpp-x86_64-pc-linux-gnu-1.0.9" -I/usr/local/in
Re: [Rcpp-devel] RcppArmadillo with -fopenmp: Not using all available cores
On 23 February 2024 at 09:35, Robin Liu wrote: | Hi all, | | Here is an R script that uses Armadillo to decompose a large matrix and print | the first 10 eigenvalues. | | library(RcppArmadillo) | library(Rcpp) | | src <- | r"(#include <RcppArmadillo.h> | | // [[Rcpp::depends(RcppArmadillo)]] | | // [[Rcpp::export]] | arma::vec getEigenValues(arma::mat M) { | return arma::eig_sym(M); | })" | | size <- 1 | m <- matrix(rnorm(size^2), size, size) | m <- m * t(m) | | # This line compiles the above code with the -fopenmp flag. | sourceCpp(code = src, verbose = TRUE, rebuild = TRUE) | result <- getEigenValues(m) | print(result[1:10]) | | When I run this code on server A, I see that arma can implicitly leverage all | available cores by running top -H. However, on server B it can only use one | core despite multiple being available: there is just one process entry in top | -H. Both processes successfully exit and return an answer. The process on | server B is of course much slower. It is documented in the package how this is applied and the policy is to NOT blindly enforce one use case (say all cores, or half, or a magically chosen value of N for whatever value of N) but to follow the local admin setting and respect standard environment variables. So I suspect that your machine 'B' differs from machine 'A' in this regard. Note that this is a _run-time_ and not _compile-time_ behavior, as it is for multicore-enabled LAPACK and BLAS libraries, the OpenMP library and basically most software of this type. You can override it, see RcppArmadillo::armadillo_set_number_of_omp_threads RcppArmadillo::armadillo_get_number_of_omp_threads Can you try and see if these help you?
Dirk | Here is the compilation on server A: | /usr/local/lib/R/bin/R CMD SHLIB --preclean -o 'sourceCpp_2.so' | 'file197c21cbec564.cpp' | g++ -std=gnu++11 -I"/usr/local/lib/R/include" -DNDEBUG -I../inst/include | -fopenmp -I"/usr/local/lib/R/site-library/Rcpp/include" -I"/usr/local/lib/R/ | site-library/RcppArmadillo/include" -I"/tmp/RtmpwhGRi3/ | sourceCpp-x86_64-pc-linux-gnu-1.0.9" -I/usr/local/include -fpic -g -O2 | -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time | -D_FORTIFY_SOURCE=2 -g -c file197c21cbec564.cpp -o file197c21cbec564.o | g++ -std=gnu++11 -shared -L/usr/local/lib/R/lib -L/usr/local/lib -o | sourceCpp_2.so file197c21cbec564.o -fopenmp -llapack -lblas -lgfortran -lm | -lquadmath -L/usr/local/lib/R/lib -lR | | and here it is for server B: | /sw/R/R-4.2.3/lib64/R/bin/R CMD SHLIB --preclean -o 'sourceCpp_2.so' | 'file158165b9c4ae1.cpp' | g++ -std=gnu++11 -I"/sw/R/R-4.2.3/lib64/R/include" -DNDEBUG -I../inst/include | -fopenmp -I"/home/my_username/.R/library/Rcpp/include" -I"/home/ my_username | /.R/library/RcppArmadillo/include" -I"/tmp/RtmpvfPt4l/ | sourceCpp-x86_64-pc-linux-gnu-1.0.10" -I/usr/local/include -fpic -g -O2 -c | file158165b9c4ae1.cpp -o file158165b9c4ae1.o | g++ -std=gnu++11 -shared -L/sw/R/R-4.2.3/lib64/R/lib -L/usr/local/lib64 -o | sourceCpp_2.so file158165b9c4ae1.o -fopenmp -llapack -lblas -lgfortran -lm | -lquadmath -L/sw/R/R-4.2.3/lib64/R/lib -lR | | I thought that the -fopenmp flag should let arma implicitly parallelize matrix | computations. Any hints as to why this may not work on server B? | | The actual code I'm running is an R package that includes RcppArmadillo and | RcppEnsmallen. Server B is the login node to an hpc cluster, but the code does | not use all cores on the compute nodes either. 
| | Best, | Robin | ___ | Rcpp-devel mailing list | Rcpp-devel@lists.r-forge.r-project.org | https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
Re: [R-pkg-devel] Package required but not available: ‘arrow’
On 23 February 2024 at 15:53, Leo Mada wrote: | Dear Dirk & R-Members, | | It seems that the version number is not incremented: | # Archived | arrow_14.0.2.1.tar.gz 2024-02-08 11:57 3.9M | # Pending | arrow_14.0.2.1.tar.gz 2024-02-08 18:24 3.9M | | Maybe this is the reason why it got stuck in "pending". No it is not. The hint to increase version numbers on re-submission is a weaker 'should' or 'might', not a strong 'must'. I have uploaded a few packages to CRAN over the last two decades, and like others have made mistakes requiring iterations. I have not once increased a version number. If/when CRAN sees an error in its (automated, largely) processing, the package is moved and the space is cleared allowing a fresh upload. (Of course you cannot upload under the same filename twice _before_ the initial processing. By default uploads do not overwrite.) Archive/ is distinct from pending. POSIX semantics on times also help: your example clearly shows that the one in archived is older by about 6 1/2 hours. That said, in case there are multiple rounds of email and discussion, having distinct numbers may ease identification of the particular package and discussion thread. But it still makes sense to have this be a suggestion, not a requirement. Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Re: [R-pkg-devel] Package required but not available: ‘arrow’
On 22 February 2024 at 04:01, Duncan Murdoch wrote: | For you to deal with this, you should make arrow into a suggested | package, For what it is worth, that is exactly what package tiledb does. Yet the Suggests: still led to a NOTE requiring a human to override, which did not happen until I gently nudged after the 'five work days' had lapsed. So full agreement that 'in theory' a Suggests: should help and is the weaker and simpler dependency. However 'in practice' it can still lead to being held up when the weak-dependency package does not build. [ As for Dénes's point, most if not all the internals in package tiledb actually rely on nanoarrow, but we offer one code path returning an Arrow Table object and that requires 'arrow' the package for the instantiation. So it really all boils down to 'Lightweight is the right weight' as we say over at www.tinyverse.org. But given that the public API offers an Arrow accessor, it is a little late to pull back from it. And Arrow is a powerful and useful tool. Building it, however, can have its issues... ] Anyway, while poking around the issue when waiting, I was also told by Arrow developers that the issue (AFAICT a missing header) is fixed, and looking at CRAN's incoming reveals the package has been sitting there since Feb 8 (see https://cran.r-project.org/incoming/pending/). So would be good to hear from CRAN what if anything is happening here. Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-package-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-package-devel
Bug#1064388: ess: New version 24.1.1
Source: ess Version: 24.01.0-1 Severity: minor Salut Seb -- and thanks for packaging the recent 24.1.0 which installs fine. There is by now a follow-up 24.1.1 which would be nice to have too. Amitiés, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
Re: [Rcpp-devel] Wrapping a c++ class with singleton using RCPP Module
On 21 February 2024 at 09:21, Iñaki Ucar wrote: | Could you please provide more details about what you tried so far and what are | the issues you found? A link to a public repo with a test case would be even | better. Seconded! I think I also did something like that 'way early' and 'way simply'. In just one file you can have: class Foo { ... }; // forward declaration, or actual declaration static Foo* myfooptr = nullptr; followed by a few simple Rcpp functions to init (i.e. allocate), set a value, get a value and maybe destroy at the end (even callable via on.exit() from R). The key really is to differentiate between types Rcpp knows, and those types or classes you have that it doesn't -- so you have to write accessors in terms of the types R knows. We must have examples for that somewhere... Cheers, Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org ___ Rcpp-devel mailing list Rcpp-devel@lists.r-forge.r-project.org https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel
Re: [Rd] Compiling libR as a standalone C library for java+jni (-fPIC)
Salut Pierre, On 20 February 2024 at 10:33, Pierre Lindenbaum wrote: | (cross-posted on SO: https://stackoverflow.com/questions/78022766) | | Hi all, | | I'm trying to compile R as a static library with the -fPIC flag so I can use it within java+JNI (is it even possible?), but I cannot find the right flags in './configure' to compile R this way. | | I tested various flags but I cannot find the correct syntax. | | for now, my latest attempt was | | ``` | rm -rvf "TMP/R-4.3.2" TMP/tmp.tar.gz | mkdir -p TMP/R-4.3.2/lib/ | wget -O TMP/tmp.tar.gz "https://pbil.univ-lyon1.fr/CRAN/src/base/R-4/R-4.3.2.tar.gz" | cd TMP && tar xfz tmp.tar.gz && rm tmp.tar.gz && cd R-4.3.2 && \ | CPICFLAGS=fpic FPICFLAGS=fpic CXXPICFLAGS=fpic SHLIB_LDFLAGS=shared SHLIB_CXXLDFLAGS=shared ./configure --enable-R-static-lib --prefix=/path/to/TMP --with-x=no --disable-BLAS-shlib && make | | ``` Looks like you consistently dropped the '-' from '-fPIC'. FWIW the Debian (and hence Ubuntu and other derivatives) binaries contain a libR you can embed. And littler and RInside have done so for maybe 15 years. Cannot help with JNI but note that the history of the headless (and generally excellent) Rserve (and its clients) started on Java. Might be worth a try. Good luck, Dirk | which gives the following error during configure: | | | ``` | configure: WARNING: you cannot build info or HTML versions of the R manuals | configure: WARNING: you cannot build PDF versions of the R manuals | configure: WARNING: you cannot build PDF versions of vignettes and help pages | make[1]: Entering directory 'R-4.3.2' | configure.ac:278: error: possibly undefined macro: AM_CONDITIONAL | If this token and others are legitimate, please use m4_pattern_allow. | See the Autoconf documentation.
| configure.ac:870: error: possibly undefined macro: AC_DISABLE_STATIC | configure.ac:2226: error: possibly undefined macro: AM_LANGINFO_CODESET | configure.ac:2876: error: possibly undefined macro: AM_NLS | configure.ac:2880: error: possibly undefined macro: AM_GNU_GETTEXT_VERSION | configure.ac:2881: error: possibly undefined macro: AM_GNU_GETTEXT | make[1]: *** [Makefile:49: configure] Error 1 | | ``` | removing the XXXFLAGS=YYY and --prefix (?) allows R to be compiled but It's not loaded into java. | | ``` | gcc -ITMP -I${JAVA_HOME}/include/ -I${JAVA_HOME}/include/linux \ | -LTMP/R-4.3.2/lib `TMP/R-4.3.2/bin/R CMD config --cppflags` -shared -fPIC -o TMP/libRSession.so -g RSession.c TMP/R-4.3.2/lib/libR.a | /usr/bin/ld: TMP/R-4.3.2/lib/libR.a(objects.o): warning: relocation against `R_dot_Method' in read-only section `.text' | /usr/bin/ld: TMP/R-4.3.2/lib/libR.a(altrep.o): relocation R_X86_64_PC32 against symbol `R_NilValue' can not be used when making a shared object; recompile with -fPIC | /usr/bin/ld: final link failed: bad value | ``` | | Any idea ? Thanks | | Pierre | | __ | R-devel@r-project.org mailing list | https://stat.ethz.ch/mailman/listinfo/r-devel -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
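Applying Dirk's observation to the invocation from the thread above gives something like the following. This is an untested sketch only: the essential change is restoring the leading dashes ('-fPIC' instead of 'fpic', '-shared' instead of 'shared'); the remaining flags are copied from the original message.

```shell
# Sketch of the corrected configure call (dashes restored on all flags):
CPICFLAGS=-fPIC FPICFLAGS=-fPIC CXXPICFLAGS=-fPIC \
SHLIB_LDFLAGS=-shared SHLIB_CXXLDFLAGS=-shared \
./configure --enable-R-static-lib --prefix=/path/to/TMP \
            --with-x=no --disable-BLAS-shlib && make
```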
Re: [Rd] Tcl socket server (tcltk) does not work any more on R 4.3.2
On 20 February 2024 at 12:27, webmail.gandi.net wrote: | Dear list, | | It seems that something changed between R 4.2.3 and R 4.3 (tested with 4.3.2) that broke the Tcl socket server. Here is a reproducible example: | | - R process #1 (Tcl socket server): | | library(tcltk) | cmd <- r"( | proc accept {chan addr port} { ;# Make a proc to accept connections | puts "$addr:$port says [gets $chan]" ;# Receive a string | puts $chan goodbye ;# Send a string | close $chan ;# Close the socket (automatically flushes) | } ;# | socket -server accept 12345 ;# Create a server socket)" | .Tcl(cmd) | | - R process #2 (socket client): | | con <- socketConnection(host = "localhost", port = 12345, blocking = FALSE) | writeLines("Hello, world!", con) # Should print something in R #1 stdout | readLines(con) # Should receive "goodbye" | close(con) | | When R process #1 is R 4.2.3, it works as expected (whatever version of R #2). When R process #1 is R 4.3.2, nothing is sent or received through the socket apparently, but no error is issued and process #2 seems to be able to connect to the socket. | | I am stuck with this. Thanks in advance for help. From a quick check this issue seems to persist in the (current) R-devel 2024-02-20 r85951 too.
Dirk | Regards, | | Philippe | | > .Tcl("puts [info patchlevel]") | 8.6.13 | | | > sessionInfo() | R version 4.3.2 (2023-10-31) | Platform: aarch64-apple-darwin20 (64-bit) | Running under: macOS Sonoma 14.2.1 | | Matrix products: default | BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib | LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0 | | locale: | [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 | | time zone: Europe/Brussels | tzcode source: internal | | attached base packages: | [1] tcltk stats graphics grDevices utils datasets methods base | | loaded via a namespace (and not attached): | [1] compiler_4.3.2 tools_4.3.2glue_1.7.0 | __ | R-devel@r-project.org mailing list | https://stat.ethz.ch/mailman/listinfo/r-devel -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [ESS] FW: [GNU ELPA] ESS version 24.1.1
On 18 February 2024 at 20:54, Brett Presnell via ESS-help wrote: | | Forgot to mention that you may need to uninstall and reinstall the ess | package after putting the :pin in place, but I'm not sure about that. | Restarting emacs is maybe needed too, but not sure about that either. The pin, along with uninstalling the 20240131* one I had, seems to have done the trick. Many thanks! Dirk -- dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ ESS-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/ess-help