Thanks Adam :-)

On 20/09/13 15:56, Adam Estrada wrote:
Thanks Martin! I am interested in what you find out in your research
regarding floating point precisions. I (we) have been noticing some
precision inconsistencies between 64bit Python and 64bit Java and I am
curious to learn what others may encounter.

It depends on the operations in which the inconsistencies are found:

 * On the basic +, -, *, / operations, it depends on whether Python
   makes use of the Intel 80-bit registers or not. Java explicitly
   forbids that (more precisely: it mandates truncating to 64 bits
   after every operation, except for the exponent part if the
   "strictfp" keyword is not present). So if the Python language does
   not impose such a restriction (which I do not know), then Python is
   likely to be more accurate than Java on Intel processors, at the
   expense of slightly different results on different architectures.

 * For trigonometric operations (sin, cos, tan, etc.), it depends on
   whether Python checks for large angles or not. The x87 Intel
   processors are accurate for angles between -45° and +45°. For
   larger angles, the processor uses an inaccurate approximation of
   Pi. The error is small for small angles, but becomes greater as the
   angle increases. Makers could build more accurate processors today,
   but Intel cannot do that on the Pentium for compatibility reasons
   (I have been told that AMD tried many years ago, but had to revert
   back to the inaccurate algorithm because producing the correct
   answer was breaking too much software). See the "Evaluation"
   section of Bug ID 4857011 [1] for more information (very
   interesting in my opinion). Note that I do not know whether using
   the SSE unit instead of x87, as proposed in [2], would affect
   accuracy.
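The effect of an imperfect Pi can be seen even without the x87
large-angle reduction: the double closest to Pi is not exactly Pi, so
its sine is tiny but not exactly zero. A small illustration in plain
Java (no SIS code involved):

```java
public class PiSine {
    public static void main(String[] args) {
        // Math.PI is the double closest to Pi, but not Pi itself,
        // so the mathematically exact sine of that double is a tiny
        // non-zero value (about 1.2e-16), and Math.sin reflects that.
        double s = Math.sin(Math.PI);
        System.out.println(s);          // a tiny non-zero value
        System.out.println(s == 0.0);   // false
    }
}
```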


About the SIS matrices, the situation is as below:

 * Matrix multiplications and inversions are usually performed only
   once, when preparing a map projection. Then the result is used for
   transforming a large bunch of coordinates. Consequently the
   performance of the (matrix * matrix) and (matrix ^ -1) operations
   does not really matter for Apache SIS. The performance of
   (matrix * vector) does matter a lot, but this will be the job of
   another class (MathTransform - to be committed later).

 * In Apache SIS, the numerical values in the matrices will often be
   specified by the CRS definition. The values may be conversion
   factors from feet to metres (0.3048), from metres to kilometres
   (0.001), from gradians to degrees (0.9), or map projection
   parameters defined by the user (false easting/northing, etc.). It
   is quite common for a chain of operations to go from CRS A to CRS B
   to CRS C, then back to CRS A. After such a cycle, we often see
   0.3047999999 instead of 0.3048, etc. It is this kind of error that
   I would like to reduce. While apparently minor, we found the
   annoyance to have surprisingly large implications.
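To make the "prepare once, apply many times" point concrete, here is a
minimal sketch of the hot (matrix * vector) loop. The class and method
names below are hypothetical illustrations, not the actual Apache SIS
MathTransform API: a 2D affine transform is prepared once, then applied
to a large array of (x,y) coordinates.

```java
public class AffineSketch {
    // Coefficients of a 2D affine transform (a 3x3 matrix with an
    // implicit [0 0 1] last row), computed once when preparing the
    // projection - this is where the matrix multiplications happen.
    final double m00, m01, m02, m10, m11, m12;

    AffineSketch(double m00, double m01, double m02,
                 double m10, double m11, double m12) {
        this.m00 = m00; this.m01 = m01; this.m02 = m02;
        this.m10 = m10; this.m11 = m11; this.m12 = m12;
    }

    // The hot loop: applied in-place to a large bunch of (x,y) pairs.
    void transform(double[] coords) {
        for (int i = 0; i < coords.length; i += 2) {
            final double x = coords[i];
            final double y = coords[i + 1];
            coords[i]     = m00 * x + m01 * y + m02;
            coords[i + 1] = m10 * x + m11 * y + m12;
        }
    }

    public static void main(String[] args) {
        // Feet to metres on both axes: scale by 0.3048, no offset.
        AffineSketch feetToMetres =
                new AffineSketch(0.3048, 0, 0, 0, 0.3048, 0);
        double[] coords = {1, 1, 100, 200};
        feetToMetres.transform(coords);
        System.out.println(java.util.Arrays.toString(coords));
    }
}
```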


In summary, for SIS matrices, accuracy is more important than performance. Yesterday I was considering using java.math.BigDecimal for internal computation (not to be visible to users). I was concerned by the high cost of such objects, but I thought that the fact that BigDecimal works in base 10 could actually be good for SIS matrices, because many factors are defined in base 10 (e.g. the conversion from feet to metres is defined as exactly 0.3048, which does not have an exact representation as a Java double). However BigDecimal does not support NaN and infinities, and we really need the SIS matrix package to be able to work with NaN/infinities.
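The 0.3048 issue is easy to see with BigDecimal itself: constructing it
from the double exposes the exact binary value actually stored, while
constructing it from the decimal string gives the exact base-10 value,
and the two differ:

```java
import java.math.BigDecimal;

public class FootFactor {
    public static void main(String[] args) {
        // Exact decimal expansion of the double closest to 0.3048
        // (the binary value actually stored, which is not 0.3048).
        BigDecimal fromDouble = new BigDecimal(0.3048);
        // The exact base-10 value, which BigDecimal can represent.
        BigDecimal fromString = new BigDecimal("0.3048");
        System.out.println(fromDouble);
        System.out.println(fromString);
        // Non-zero result: the two values are not equal.
        System.out.println(fromDouble.compareTo(fromString));
    }
}
```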

Today I'm exploring the use of "double-double" arithmetic as an alternative [3]. It would be more compact, more efficient, would support NaN/infinities, and would hopefully have enough precision for our needs regarding 0.3048 and similar factors.
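For reference, the core building block of double-double arithmetic is
the "two-sum" error-free transformation: the rounding error of a double
addition is itself exactly representable as a double, so a value can be
carried as an unevaluated (high, low) pair of doubles. A minimal sketch
of Knuth's TwoSum (an illustration of the technique, not the actual SIS
implementation):

```java
public class TwoSum {
    /** Returns {s, e} where s = round(a + b) and s + e is exactly a + b. */
    static double[] twoSum(double a, double b) {
        double s = a + b;
        double bv = s - a;                      // the part of b actually added
        double e = (a - (s - bv)) + (b - bv);   // exact rounding error
        return new double[] {s, e};
    }

    public static void main(String[] args) {
        double[] r = twoSum(0.1, 0.2);
        // r[0] is the usual rounded sum; r[1] is the exact error term,
        // so the pair (r[0], r[1]) represents the sum with no loss.
        System.out.println(r[0] + " + " + r[1]);
    }
}
```

Note that the high part s is just an ordinary IEEE addition, so NaN and
infinities propagate through it in the usual way, which is the property
that BigDecimal lacks.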


    Martin


[1] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4857011
[2] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7175279
[3] http://en.wikipedia.org/wiki/Double-double_%28arithmetic%29#Double-double_arithmetic
