Here is a much better default implementation for java.lang.Math.fma(float, 
float, float):

public static float fma(float a, float b, float c) {
    // product is the exact value of a * b: a double's 53-bit significand can
    // hold the full product of two 24-bit float significands.
    final double product = (double)a * (double)b;
    final double cAsDouble = (double)c;
    double sum = product + cAsDouble;

    if (Double.isFinite(sum)) {
        // Round sum to odd so that the conversion of sum to single precision
        // is correctly rounded: double has more than two extra significand
        // bits beyond float, so rounding to odd here and then to nearest in
        // the (float) cast below cannot introduce a double-rounding error.

        // Recover the exact rounding error of product + cAsDouble with the
        // branchless 2Sum algorithm: err == (product + cAsDouble) - sum,
        // computed exactly.
        final double v = sum - product;
        final double err = (product - (sum - v)) + (cAsDouble - v);

        final long sumBits = Double.doubleToRawLongBits(sum);
        final long errBits = Double.doubleToRawLongBits(err);

        // The sign bit of sumIsInexactInSignBit is set exactly when err is
        // nonzero, i.e. when sum is not the exact value of product + c:
        // adding 2^63 - 1 carries into the sign bit if and only if the low
        // 63 bits of errBits (its exponent and significand) are nonzero.
        final long sumIsInexactInSignBit =
          errBits ^ (errBits + 0x7FFF_FFFF_FFFF_FFFFL);

        // If err is nonzero and has the opposite sign of sum, the exact
        // result lies between sum and the next double toward zero, so step
        // sumBits down by one (one double closer to zero); then set the low
        // significand bit. The net effect leaves an already odd sum
        // unchanged and otherwise replaces sum by its odd neighbor on the
        // side of the exact result.
        sum = Double.longBitsToDouble(
            (sumBits + (((sumBits ^ errBits) & sumIsInexactInSignBit) >> 63)) |
            (sumIsInexactInSignBit >>> 63));

        // sum is now the round-to-odd double-precision result of a * b + c.
    }

    // Return the result of converting sum (which will be rounded to odd if
    // finite) to single-precision floating point
    return (float)sum;
}

The implementation above is much more efficient than the current
java.lang.Math.fma(float, float, float) in
src/java.base/share/classes/java/lang/Math.java, because it avoids the
overhead of the BigDecimal addition the current code uses to compute the
exact sum of a * b + c when a, b, and c are all finite floating-point values.
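
For what it's worth, a quick way to gain confidence in the round-to-odd step
is to cross-check the method against the existing
java.lang.Math.fma(float, float, float) on random bit patterns (exhaustive
testing over all three float arguments is not practical). The sketch below is
only a test harness, not part of the proposal: FmaCheck is a made-up class
name, and fmaRoundToOdd is a placeholder whose body should be replaced with
the implementation above (it delegates to Math.fma here just so the sketch
compiles on its own).

import java.util.SplittableRandom;

public final class FmaCheck {
    public static void main(String[] args) {
        SplittableRandom rnd = new SplittableRandom();
        for (int i = 0; i < 10_000_000; i++) {
            // Random bit patterns exercise normals and subnormals, and
            // occasionally signed zeros, infinities, and NaNs.
            float a = Float.intBitsToFloat(rnd.nextInt());
            float b = Float.intBitsToFloat(rnd.nextInt());
            float c = Float.intBitsToFloat(rnd.nextInt());
            float expected = Math.fma(a, b, c);
            float actual = fmaRoundToOdd(a, b, c);
            // Compare via floatToIntBits so that all NaNs collapse to the
            // canonical NaN while +0.0f and -0.0f stay distinguishable.
            if (Float.floatToIntBits(expected) != Float.floatToIntBits(actual)) {
                throw new AssertionError("mismatch for a=" + a + ", b=" + b
                    + ", c=" + c + ": expected " + expected + ", got " + actual);
            }
        }
        System.out.println("no mismatches");
    }

    // Placeholder: substitute the round-to-odd implementation shown above.
    private static float fmaRoundToOdd(float a, float b, float c) {
        return Math.fma(a, b, c);
    }
}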
