> In floating-point arithmetic, computing an operation in double precision and
> then rounding to float usually gives the correctly rounded float result. One
> exception is fused multiply add (fma), where "a * b + c" is computed with a
> single rounding; this requires the equivalent of extra intermediate precision
> inside the operation. If a float fma is implemented as a double fma rounded to
> float, then for well-chosen arguments whose exact result is near a half-way
> point in *float*, an incorrect answer is computed due to double rounding. In
> more detail, the double result rounds up and the cast to float then rounds up
> again, whereas a single rounding of the exact answer to float would only round
> up once.
> 
> The new float fma implementation does the exact arithmetic using BigDecimal
> where possible, with guards to handle the non-finite and signed-zero IEEE 754
> details.
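
The double-rounding hazard described in the first quoted paragraph can be shown
with a small stand-alone program. The operands below are a hand-built
illustration and are not taken from the patch or its tests: a * b is exactly
1 + 11*2^-24, the midpoint between two adjacent floats, and c pulls the exact
sum just below that midpoint. On a JDK where Math.fma(float, float, float) is
correctly rounded (via the hardware intrinsic, or the Java code after this
fix), the two computations disagree:

    public class FmaDoubleRounding {
        public static void main(String[] args) {
            float a = 3.0f;
            float b = 5592409.0f / 16777216.0f; // 5592409 * 2^-24, literal and division both exact
            float c = -0x1.0p-60f;

            // Single rounding of the exact a*b + c = 1 + 11*2^-24 - 2^-60,
            // which lies just below the float midpoint, so it rounds down
            // to 1 + 5*2^-23.
            float fused = Math.fma(a, b, c);

            // Double rounding: the double addition rounds the sum up to the
            // midpoint 1 + 11*2^-24, and the cast to float then rounds up
            // again (ties to even) to 1 + 6*2^-23.
            float twiceRounded = (float) ((double) a * (double) b + (double) c);

            System.out.println("single rounding: " + Float.toHexString(fused));        // 0x1.00000ap0
            System.out.println("double rounding: " + Float.toHexString(twiceRounded)); // 0x1.00000cp0
        }
    }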
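
For the BigDecimal approach in the second quoted paragraph, here is a minimal
sketch of how such a reference float fma can be written; the method name and
the exact guard structure are mine and are not necessarily what the patch does.
It relies on two facts: the product of two floats is exact in double
(24 + 24 significand bits fit in 53), and a finite double converts to
BigDecimal without rounding, so the only rounding is the final conversion back
to float.

    import java.math.BigDecimal;

    public class FloatFmaSketch {
        // Sketch of a float fma via exact BigDecimal arithmetic, with guards
        // for the IEEE 754 values BigDecimal cannot represent.
        static float fmaReference(float a, float b, float c) {
            if (!Float.isFinite(a) || !Float.isFinite(b) || !Float.isFinite(c)) {
                // NaN and infinity propagate the same way under double and
                // float arithmetic, so the double fma is safe to use here.
                return (float) Math.fma((double) a, (double) b, (double) c);
            }
            if (a == 0.0f || b == 0.0f) {
                // A zero product keeps the float expression exact and
                // preserves the required sign of a zero result.
                return a * b + c;
            }
            BigDecimal product = new BigDecimal((double) a * (double) b); // exact
            return product.add(new BigDecimal((double) c)) // exact sum
                          .floatValue();                   // one rounding to float
        }

        public static void main(String[] args) {
            System.out.println(fmaReference(2.0f, 3.0f, 1.0f)); // 7.0
        }
    }

This sketch leans on BigDecimal.floatValue() returning the correctly rounded
float for the exact decimal value; if that cannot be relied on, the final
conversion would need to be done more carefully.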

Joe Darcy has updated the pull request incrementally with one additional commit 
since the last revision:

  Add a jtreg run command to disable any fma intrinsic so that the Java code is 
tested.

-------------

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/2684/files
  - new: https://git.openjdk.java.net/jdk/pull/2684/files/9d26b312..ee2ea23a

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=2684&range=01
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=2684&range=00-01

  Stats: 1 line in 1 file changed: 1 ins; 0 del; 0 mod
  Patch: https://git.openjdk.java.net/jdk/pull/2684.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/2684/head:pull/2684

PR: https://git.openjdk.java.net/jdk/pull/2684
