[ https://issues.apache.org/jira/browse/NUMBERS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17333362#comment-17333362 ]
Alex Herbert commented on NUMBERS-155:
--------------------------------------

Looking at the Brent solver I think this does not require extra precision. The algorithm aims to find b such that f(b) == 0. The input function f is user provided and computes in standard precision, so you only need to find b as a standard-precision value.

The algorithm computes updates to b such that the function value f(b) moves closer to zero. Any use of extended precision would apply to computing the factors used to perform the inverse quadratic interpolation. This only uses one subtraction of two composite terms:
{code:java}
p = s * (2 * m * q * (q - r) - (b - a) * (r - 1));
p = s * (X - Y);
{code}
Thus there is little chance of large cancellation, and the update step will be close to a true unlimited-precision result. As long as the solver does not blow up to use increasingly larger steps when alternating around a root, or compute insignificant steps (and there is detection for this, with a reversion to bisection), then the only effect of a less-than-perfect computation of the inverse quadratic interpolation would be a few more iterations before the root is found.

Another argument: you need to find b within tolerance, where:
{code:java}
a < b < c
f(a) < 0
f(c) > 0
f(b) == 0 (within tolerance)
{code}
Given a correct bracket of a root, just moving the closest-to-zero end inwards and setting b as the midpoint (bisection) would eventually find the root to the same tolerance. No extended precision is needed. The Brent solver computes the update not with the midpoint but with a smarter approximation of the local curve, to jump towards the root.

You could modify the BrentSolver to use floats for all of the computation of the inverse quadratic interpolation, to effectively lower the precision, and then verify that it still solves functions.
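The experiment could be sketched roughly as below. This is a minimal transcription of the classic Brent algorithm, not the Commons Numbers {{BrentSolver}}; the class name, the {{floatInterpolation}} flag and the {{iterations}} counter are invented here for illustration. The flag computes the secant/inverse-quadratic factors in float; if the argument above holds, both variants should return the root to the same tolerance, with the float variant possibly using more iterations.

{code:java}
import java.util.function.DoubleUnaryOperator;

public class BrentSketch {
    /** Iterations used by the last call to solve (for the comparison). */
    public static int iterations;

    public static double solve(DoubleUnaryOperator f, double lo, double hi,
                               double tol, boolean floatInterpolation) {
        double a = lo, b = hi, c = hi;
        double fa = f.applyAsDouble(a), fb = f.applyAsDouble(b), fc = fb;
        double d = b - a, e = d;
        for (iterations = 1; iterations <= 100; iterations++) {
            if ((fb > 0) == (fc > 0)) {
                // Root is bracketed between a and b: reset c to a.
                c = a; fc = fa; e = d = b - a;
            }
            if (Math.abs(fc) < Math.abs(fb)) {
                // Keep b as the best (smallest |f|) point.
                a = b; b = c; c = a;
                fa = fb; fb = fc; fc = fa;
            }
            double tol1 = 2 * Math.ulp(b) + 0.5 * tol;
            double xm = 0.5 * (c - b);
            if (Math.abs(xm) <= tol1 || fb == 0) {
                return b;
            }
            double p, q;
            if (Math.abs(e) >= tol1 && Math.abs(fa) > Math.abs(fb)) {
                if (floatInterpolation) {
                    // The experiment: interpolation factors in float.
                    float s = (float) (fb / fa);
                    if (a == c) {                        // secant step
                        p = 2 * (float) xm * s;
                        q = 1 - s;
                    } else {                             // inverse quadratic
                        float q0 = (float) (fa / fc);
                        float r = (float) (fb / fc);
                        p = s * (2 * (float) xm * q0 * (q0 - r)
                                 - (float) (b - a) * (r - 1));
                        q = (q0 - 1) * (r - 1) * (s - 1);
                    }
                } else {
                    double s = fb / fa;
                    if (a == c) {                        // secant step
                        p = 2 * xm * s;
                        q = 1 - s;
                    } else {                             // inverse quadratic
                        double q0 = fa / fc;
                        double r = fb / fc;
                        p = s * (2 * xm * q0 * (q0 - r) - (b - a) * (r - 1));
                        q = (q0 - 1) * (r - 1) * (s - 1);
                    }
                }
                if (p > 0) q = -q;
                p = Math.abs(p);
                // Accept the interpolated step only if it stays in bounds
                // and shrinks at least as fast as bisection would.
                if (2 * p < Math.min(3 * xm * q - Math.abs(tol1 * q),
                                     Math.abs(e * q))) {
                    e = d; d = p / q;
                } else {
                    d = xm; e = d;                       // revert to bisection
                }
            } else {
                d = xm; e = d;                           // revert to bisection
            }
            a = b; fa = fb;
            b += Math.abs(d) > tol1 ? d : (xm > 0 ? tol1 : -tol1);
            fb = f.applyAsDouble(b);
        }
        return b;   // iteration limit reached
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> x * x - 2;          // root at sqrt(2)
        double r1 = solve(f, 0, 2, 1e-10, false);
        int n1 = iterations;
        double r2 = solve(f, 0, 2, 1e-10, true);
        int n2 = iterations;
        System.out.println("double factors: " + r1 + " in " + n1 + " iterations");
        System.out.println("float  factors: " + r2 + " in " + n2 + " iterations");
    }
}
{code}
The interesting output is the iteration counts, not the roots: the acceptance test and the bisection fallback guarantee convergence either way, so the float variant can only cost extra iterations.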
If it takes a lot more iterations then there may be a case for seeing if an extended-precision computation of the update step would take significantly fewer iterations to solve the same function. But the accuracy of the solution should not change, as the tolerance is chosen by the caller.

> About modules "arrays" and "rootfinder"
> ---------------------------------------
>
>                 Key: NUMBERS-155
>                 URL: https://issues.apache.org/jira/browse/NUMBERS-155
>             Project: Commons Numbers
>          Issue Type: Task
>          Components: arrays, core
>    Affects Versions: 1.0-beta1
>            Reporter: Gilles Sadowski
>            Priority: Major
>              Labels: refactoring
>             Fix For: 1.0
>
> Shouldn't some functionality currently in module {{commons-numbers-arrays}}
> be moved to {{commons-number-core}}?
> Rationale is that the utilities focus on extended precision of some
> computations (on numbers that happen to be stored in an array).
> Wouldn't the Brent solver (in module {{commons-numbers-rootfinder}}) benefit
> from a version based on the new {{ExtendedPrecision}} class?

--
This message was sent by Atlassian Jira
(v8.3.4#803005)