On Sat, 26 Sep 2020, Domingo Alvarez Duarte wrote:

I did a revision of the usage of "glp_long_double"; see here: https://github.com/mingodad/GLPK/commit/4941d1633e52b802bdc5f102715ac3db25db5245

====

Revised the usage of glp_long_double; it now solves hashi.mod and tiling.mod faster with the "--cuts" option, but hashi.mod without it is a lot slower.

====

- Standard glpsol  => 67.6 secs

- glpsol with some "long double" => 3.1 secs

I'd expect strategic use of long double to affect accuracy,
but not to have a consistent effect on speed.

Iterative refinement is one place where computing
with extra precision would be especially useful.
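
A rough sketch of what I mean, in ordinary C rather than GLPK's own
routines (solve_factored() is a made-up stand-in for a solve against an
already-computed factorization of A):

#include <stdlib.h>

/* One step of iterative refinement for A x = b.  Only the residual
   r = b - A x is accumulated in long double, so the cancellation in
   b - A x does not wipe out the correction; storage stays double. */
static void refine_once(int n, const double *A,  /* n*n, row-major */
                        const double *b, double *x,
                        void (*solve_factored)(int n, double *rhs))
{  double *r = malloc(n * sizeof(double));
   int i, j;
   for (i = 0; i < n; i++)
   {  long double s = b[i];
      for (j = 0; j < n; j++)
         s -= (long double)A[i*n + j] * x[j];
      r[i] = (double)s;
   }
   solve_factored(n, r);       /* r := A^{-1} (b - A x) */
   for (i = 0; i < n; i++)
      x[i] += r[i];            /* corrected solution */
   free(r);
}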

Note that long double = double*double raises at least three possibilities:

1. The product is done as double*double and assigned to long double.
2. The product is done as long double*long double and converted to
   double before being assigned to long double.
3. The product is done as long double*long double and assigned to
   long double.

Casting a factor to long double would ensure the third.
The second really should not happen, as the product is a sub-expression,
but I would not be surprised to see it.
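
In code, with a and b just arbitrary doubles for illustration:

#include <stdio.h>

int main(void)
{  double a = 1.0 + 0x1p-30, b = 1.0 - 0x1p-30;

   /* possibility 1: the multiply is done in double and only then
      widened, so the low bits of the exact product are already gone */
   long double p1 = a * b;

   /* possibility 3: casting one factor makes the multiply itself a
      long double operation, so the full product survives */
   long double p3 = (long double)a * b;

   /* non-zero on a typical x86-64 build; an x87 build that evaluates
      double expressions in extended precision may print 0 instead */
   printf("%Lg\n", p3 - p1);
   return 0;
}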

Storing some things as floats could speed up memory-bound computations.
The constraint matrix comes to mind.
An all-integer constraint matrix with absolute values
less than 16 million could be represented exactly.
Storing the coefficients as 32-bit ints would extend the range.
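
The 16 million is just 2^24, where float stops representing integers
exactly; for example:

#include <stdio.h>

int main(void)
{  /* float has a 24-bit significand, so integer coefficients with
      absolute value up to 2^24 = 16777216 are stored exactly ...   */
   float ok  = 16777216.0f;           /* 2^24     : exact            */
   float bad = 16777217.0f;           /* 2^24 + 1 : rounds to 2^24   */
   printf("%.1f %.1f\n", ok, bad);    /* prints 16777216.0 twice     */

   /* ... while a 32-bit int is exact up to 2^31 - 1 = 2147483647,
      for the same 4 bytes per nonzero */
   return 0;
}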

Matrix factors might not be such a good idea.
It might work, but the criteria for detecting
singularity would likely have to be relaxed.
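
For illustration only (a generic pivot test, not GLPK's actual
factorization code): the usual check rejects a pivot that is small
relative to its column, and the tolerance reflects the working
precision, so float-stored factors would need a much looser one.

#include <math.h>

/* Generic near-singularity check on a candidate pivot.  eps_rel is
   tied to the precision of the stored factors: something like 1e-11
   for double, but more like 1e-6 for float, i.e. the criterion has
   to be relaxed considerably. */
static int pivot_too_small(double piv, double col_max, double eps_rel)
{  return fabs(piv) < eps_rel * col_max;
}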

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
                                                             --  someeecards
