Hi Denis,

I think the key point is when you say "it's better to have too many roots than too few".

Also, I'm not mathematically familiar with what the case of "D close to zero" means. Generally, in floating point math, when you do a test and change the approach based on the result of that test, I would hope that the answers returned by both approaches would be similar for cases near the decision point of the test. So, for example, if a 3 root case of 5,10,15 yielded a D that was "strongly negative", a 3 root case of 5,9.99999,10.00001 yielded a D that was "just barely negative", and the 2 root case of "5,10" yielded a D that was "just barely positive", then I would have no problem with the fact that we classify the 2 vs. 3 root case on the sign of D. The thing I haven't figured out, though, is whether that is actually how the 2 vs. 3 root switchover happens.
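
If I'm thinking of D as essentially the standard Cardano discriminant q*q + p*p*p of the depressed cubic (the real CubicSolver code may scale things differently, but only the sign matters here), then a quick check along those lines would be something like:

// Sketch only: compute a Cardano-style discriminant for a cubic with
// the given roots, to see which side of zero it lands on.  The helper
// name and exact scaling are made up; only the sign is the point.
static double discriminant(double r0, double r1, double r2) {
    // (x-r0)(x-r1)(x-r2) = x^3 + A*x^2 + B*x + C
    double A = -(r0 + r1 + r2);
    double B = r0*r1 + r0*r2 + r1*r2;
    double C = -r0*r1*r2;
    double p = (3.0*B - A*A) / 9.0;
    double q = (2.0*A*A*A - 9.0*A*B + 27.0*C) / 54.0;
    return q*q + p*p*p;   // < 0: 3 distinct real roots, > 0: 1 real root
}

// discriminant(5, 10, 15)            -> about -578.7 (strongly negative)
// discriminant(5, 9.99999, 10.00001) -> about -2.3e-9 (barely negative)
// discriminant(5, 10, 10)            -> 0 in exact math; either sign in fp

In exact arithmetic the double-root case gives D = 0 exactly, so in floating point the 5,10,10 case can land on either side of zero (or exactly on it), which I assume is why the borderline check matters.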

So, I guess I'm OK with minimal refinement if its sole purpose is to eliminate "false duplicates" in the roots.

I think it is more important if it is making sure a root lands on the correct side of 0 and 1 when the solver is being used for cubic curve intersection problems.

And I think there is an argument to be made for refining if it is being used to make the answers that came from the transcendental trig functions more accurate, since those functions aren't necessarily as accurate as the math used elsewhere in the function.

                        ...jim

On 1/10/2011 1:13 PM, Denis Lila wrote:
Hi Jim.

I've attached another modification of your CubicSolver.java.
I tried many different things, but I think what's in the
attachment is the only satisfactory implementation. The logic
is somewhat similar to what you suggested, in that it computes
3 roots for D < 0. However, once the roots are computed, it
doesn't try very hard to eliminate any of them in case too many
were computed. I did try doing that, but it was very problematic,
because the only reliable way to count roots is to split the
domain into intervals where the polynomial is strictly increasing
or decreasing (by finding the roots of its derivative), evaluate
the polynomial at the interval end points, and count sign changes.
That works for well behaved polynomials, but not for the edge
cases where D is very small, which is the only situation where we
actually need it to work: when 3 roots are computed but only 2
exist, one of the critical points is also a root, so the function
is locally flat at one of its roots. That makes solveEqn(eqn, 3, x)
fluctuate a lot near the root, and the assumption that the function
is monotonic in each interval no longer holds. Also, it's better to
have too many roots than too few.
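
For reference, the interval counting I tried was roughly the sketch
below (this is not the code in the attachment; solveEqn(eqn, 3, x)
evaluates the cubic as in CubicSolver.java, QuadCurve2D.solveQuadratic
stands in for whatever finds the critical points, and the +/-BIG end
points are placeholders for "beyond all real roots"). It's exactly
the near-double-root case above that defeats it.

// Sketch of the sign-change counting described above.
// Assumes eqn = {c0, c1, c2, c3} with c3 != 0, and the usual imports
// (java.awt.geom.QuadCurve2D, java.util.Arrays).
static int countRootsBySignChanges(double[] eqn) {
    // Roots of the derivative c1 + 2*c2*x + 3*c3*x^2 bound the
    // intervals where the cubic is strictly monotonic.
    double[] deriv = { eqn[1], 2 * eqn[2], 3 * eqn[3] };
    double[] crit = new double[2];
    int numCrit = Math.max(QuadCurve2D.solveQuadratic(deriv, crit), 0);

    double BIG = 1e10;   // placeholder for "beyond all real roots"
    double[] pts = new double[numCrit + 2];
    pts[0] = -BIG;
    for (int i = 0; i < numCrit; i++) {
        pts[i + 1] = crit[i];
    }
    pts[numCrit + 1] = BIG;
    Arrays.sort(pts);

    int roots = 0;
    for (int i = 1; i < pts.length; i++) {
        double a = solveEqn(eqn, 3, pts[i - 1]);
        double b = solveEqn(eqn, 3, pts[i]);
        if ((a < 0) != (b < 0)) {
            roots++;   // at most one crossing per monotonic interval
        }
    }
    return roots;
}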

I modified trySolve3 to count the calls of solveCubicNew that find
too many or too few roots. When I run trySolve 1000 times it never
finds fewer roots than there actually are (or if it ever does, it's
extremely rare - I don't remember seeing a single instance). It finds
too many roots in ~3000 cases, compared to ~2500 for the version that
doesn't call fixRoots() (note that this isn't the same fixRoots that
is used by the old function). I think this is very good.

As for performance, it's not as good as the version that doesn't
call fixRoots, but accuracy has improved a lot. I tried to calibrate
Newton's method in the root refining function to get good accuracy
with as few iterations as possible. 3 iterations is a very good
compromise (although we might be able to get away with 2).
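
Roughly, the root refining function is just a fixed number of Newton
iterations, something like the sketch below (not the exact code in the
attachment; solveEqn(eqn, n, x) evaluates the polynomial as in
CubicSolver.java):

// Sketch of the refiner: x <- x - f(x)/f'(x), a fixed number of times.
static double refine(double[] eqn, double x) {
    double[] deriv = { eqn[1], 2 * eqn[2], 3 * eqn[3] };
    for (int i = 0; i < 3; i++) {      // 3 iterations; 2 might be enough
        double fx  = solveEqn(eqn, 3, x);
        double dfx = solveEqn(deriv, 2, x);
        if (dfx == 0) {
            break;   // flat spot: no Newton step possible, keep x as is
        }
        x -= fx / dfx;
    }
    return x;
}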

Regards,
Denis.

----- Original Message -----
Hi Denis,

What about logic like this:

boolean checkRoots = false;
if (D < 0) {
    // 3 solution form is possible, so use it
    checkRoots = (D > -TINY); // Check them if we were borderline
    // compute 3 roots as before
} else {
    double u = ...;
    double v = ...;
    res[0] = u+v; // should be 2*u if D is near zero
    if (u close to v) { // Will be true for D near zero
        res[1] = -res[0]/2; // should be -u if D is near zero
        checkRoots = true; // Check them if we were borderline
        // Note that q=0 case ends up here as well...
    }
}
if (checkRoots) {
    if (num > 2 && (res[2] == res[1] || res[2] == res[0])) {
        num--;
    }
    if (num > 1 && res[1] == res[0]) {
        res[1] = res[--num]; // Copies res[2] to res[1] if needed
    }
    for (int i = num-1; i >= 0; i--) {
        res[i] = refine(res[i]);
        for (int j = i+1; j < num; j++) {
            if (res[i] == res[j]) {
                res[i] = res[--num];
                break;
            }
        }
    }
}

Note that we lose the optimization of calculating just 2*u and -u for
the 2 root case, but that only happened in rare circumstances. Also,
if D is near zero and negative, then we generate 3 roots using
transcendentals and potentially refine one away, but that should also
be an uncommon situation, and "there but for the grace of being a tiny
negative number would we have gone anyway", so I think it is OK to take
the long way to the answer.

Also, one could argue that if we used the transcendentals to calculate
the 3 roots, it couldn't hurt to refine the answers anyway. The other
solutions should have higher precision, but the transcendental results
will be much less accurate.

Finally, this lacks the "refine them anyway if any of them are near 0
or 1" rule - the original only did that if the transcendentals were
used, but it would be nice to do that for any of the cases. It might
make sense to have a variant that takes a boolean indicating whether
to ensure higher accuracy around 0 and 1, but that would require an
API change request...

...jim

On 1/4/11 2:02 PM, Denis Lila wrote:
Hi Jim.

The test as it is has a test case (I just chose random numbers to
check and got lucky - d'oh!) that generates 1 solution from the new
code even though the equation had 2 distinct solutions that weren't
even near each other...

I figured out why this happens. It's because of cancellation in the
computation of D: two large numbers are subtracted, the result is
supposed to be 0 or close to 0, but it comes out at about 1e-7, which
wasn't small enough to pass the iszero test. I've been working on this
and I came up with a couple of different approaches. They are in the
attached file (a modified version of your CubicSolver.java).
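
To illustrate the size of the error (with made-up numbers, and
assuming D has the usual q*q + p*p*p form): when the two terms are
around 1e8-1e9 in magnitude and nearly cancel, whatever survives is
rounding noise on the order of an ulp of the large terms, i.e.
roughly 1e-7:

// Made-up illustration of the cancellation; the numbers are invented.
// q*q and -(p*p*p) agree in all their leading digits, so their sum is
// dominated by the rounding error of the large terms.
double p = -833.3333333333334;     // hypothetical depressed-cubic coefficient
double q = Math.sqrt(-p * p * p);  // chosen so q*q + p*p*p "should" be 0
double D = q * q + p * p * p;      // comes out around 1e-7 or 1e-8, not 0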

The first thing I did was to modify solveCubicOld. I tried to get a
bit fancy, and although I think I fixed the problems it had, the end
result is ugly and complicated, and it has small problems of its own,
like returning 3 very close roots when there should be only one.

The other solution is to just check whether the roots of the
derivative are also roots of the cubic polynomial when only 1 root
was computed by the closed form algorithm. This doesn't have the
numerical accuracy of the first way (which used bisectRoots when
things went wrong), but it's much faster, doesn't have the multiple
roots problem, and it's much simpler. I called your trySolve function
on a few hundred polynomials with random roots in [-10, 10] and it
never finds fewer roots than there actually are. Sometimes it finds
3 roots when there are only 2, but I don't think that's a huge
problem.
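
In code, that check is roughly the sketch below (not the exact code
in the attachment; EPS is a placeholder tolerance, solveEqn(eqn, 3, x)
evaluates the cubic as in CubicSolver.java, and
QuadCurve2D.solveQuadratic stands in for whatever finds the critical
points):

// Sketch of the second approach: if the closed form produced only 1
// root, check whether a critical point (root of the derivative) is
// itself nearly a root of the cubic, i.e. the double root that the
// closed form tends to lose.
static int addMissedDoubleRoots(double[] eqn, double[] res, int num) {
    if (num != 1) {
        return num;
    }
    double[] deriv = { eqn[1], 2 * eqn[2], 3 * eqn[3] };
    double[] crit = new double[2];
    int numCrit = Math.max(QuadCurve2D.solveQuadratic(deriv, crit), 0);
    for (int i = 0; i < numCrit; i++) {
        if (Math.abs(solveEqn(eqn, 3, crit[i])) < EPS) {
            res[num++] = crit[i];   // the double root the closed form missed
        }
    }
    return num;
}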

I've attached what I have so far.

Regards,
Denis.

----- Original Message -----
Hi Denis,

I'm attaching a test program I wrote that compares the old and new
algorithms.

Obviously the old one missed a bunch of solutions because it
classified every equation as having either 1 or 3 solutions, but the
new one also sometimes misses a solution. You might want to turn this
into an automated test for the bug (and maybe use it as a stress test
with a random number generator).

I think one problem might be that you use "is close to zero" to check
whether you should use special processing. I think any tests which say
"do it this way and get fewer roots" should be conservative: if we are
on the borderline and we can run the code that generates more
solutions, then we should generate more, and then maybe refine the
roots and eliminate duplicates. That way we can be (more) sure not to
leave any roots unsolved.



...jim
