On 14 January 2021 at 15:32, Rampal Etienne wrote:
| I have a package with FORTRAN code using REAL(16) for accurate 
| computations. This package has been on CRAN for several years, but now 
| it suddenly fails on M1 Macs, apparently due to the use of REAL(16) - it 
| cannot handle higher precision than DOUBLE PRECISION, i.e. REAL(8). How 
| do I solve this problem? Is it possible to exclude a certain 
| architecture/platform (that of M1 in this case)?

The (excellent, and early) post by two R Core members hints at specific
issues with Fortran:

   https://developer.r-project.org/Blog/public/2020/11/02/will-r-work-on-apple-silicon/index.html

It doesn't specifically mention the lack of REAL(16), but it hints fairly
strongly that Fortran support is still being worked on.

But from what I recall, 'long double' beyond 64 bits is not standard across
the other (formally supported) platforms either.  Recall that you have

  > capabilities()[["long.double"]]
  [1] TRUE          
  > 

now in r-release as well. Can you make your code switch between 64- and
128-bit doubles 'as available'?
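
If it helps, here is a minimal sketch of that idea in Fortran (module and
kind names are my own, and it assumes a Fortran 2003 compiler, which
gfortran is): request the quad-precision kind where it exists and fall back
to double precision otherwise.

  module precision_mod
    implicit none
    ! Ask for a kind with at least 33 significant digits (quad precision).
    ! SELECTED_REAL_KIND returns a negative value if no such kind exists.
    integer, parameter :: qp = selected_real_kind(p = 33, r = 4931)
    ! IEEE double precision, available everywhere.
    integer, parameter :: dp = selected_real_kind(p = 15, r = 307)
    ! Working precision: quad where the compiler offers it, else double.
    integer, parameter :: wp = merge(qp, dp, qp > 0)
  end module precision_mod

Declaring variables as real(wp) instead of REAL(16) then lets the same
source build on platforms without 128-bit reals; on those platforms the
code simply runs in double precision, which may or may not meet your
accuracy requirements.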

Dirk

-- 
https://dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
