Re: GLPSOL in webassembly faster than native ?

2020-10-02 Thread Domingo Alvarez Duarte

Hello Michael !

You are right and that's why I mention:

===

Probably a review of the "glp_gmi_gen" function, and of others that
use qsort, with this insight in mind could reveal a better/stable
approach to calculating cuts (or to other tasks involving qsort).


===

And testing with miplib2017 problems under a hard time limit of 60
seconds, myGLPK was faster on 110 of the 155 problems that were
solved out of the total 1065 problems (it's not totally fair, because
myGLPK also excludes all "xassert" calls, which should not be needed
in a release binary anyway):


- Myglpsol (test-all-glpsol3.log) 110

- Standard glpsol (test-all-glpsol.log) 45

===

#!/bin/sh
glpsol_cmd=glpsol

solve() {
    echo "$1"
    date
    /usr/bin/time timeout 60s "$glpsol_cmd" --cuts --freemps "$1"
}


for fn in mps/*
do
    solve "$fn"
done

===

===

idx    log file
0    test-all-glpsol.log
1    test-all-glpsol3.log

Count total: 155 problems solved; faster => test-all-glpsol.log: 45,
test-all-glpsol3.log: 110

idx    log    problem    ftime    time    mem
0    1    mps/10teams.mps.gz    0.0241    0:02.41    16940
1    0    mps/10teams.mps.gz    0.0246    0:02.46    17428
2    0    mps/22433.mps.gz    0.0048    0:00.48    6500
3    1    mps/22433.mps.gz    0.0052    0:00.52    5800
4    1    mps/23588.mps.gz    0.2264    0:22.64    10712
5    0    mps/23588.mps.gz    0.2834    0:28.34    12140
6    1    mps/acc-tight2.mps.gz    0.0805    0:08.05    18116
7    0    mps/acc-tight2.mps.gz    0.0989    0:09.89    18624
8    1    mps/air03.mps.gz    0.0049    0:00.49    52600
9    0    mps/air03.mps.gz    0.0053    0:00.53    53392
10    1    mps/amaze22012-03-15i.mps.gz    0.0028    0:00.28 21764
11    0    mps/amaze22012-03-15i.mps.gz    0.0041    0:00.41 24632
12    1    mps/amaze22012-06-28i.mps.gz    0.0004    0:00.04 5816
13    0    mps/amaze22012-06-28i.mps.gz    0.0005    0:00.05 6444
14    1    mps/amaze22012-07-04i.mps.gz    0.0009    0:00.09 9396
15    0    mps/amaze22012-07-04i.mps.gz    0.0012    0:00.12 10812
16    1    mps/app1-1.mps.gz    0.5649    0:56.49    12660
17    1    mps/app2-2.mps.gz    0.4029    0:40.29    55036
18    0    mps/app2-2.mps.gz    0.4617    0:46.17    56092
19    1    mps/app3.mps.gz    0.0135    0:01.35    8484
20    0    mps/app3.mps.gz    0.0249    0:02.49    9228
21    0    mps/bc.mps.gz    0.1395    0:13.95    82880
22    1    mps/bc.mps.gz    0.1867    0:18.67    84108
23    0    mps/bc1.mps.gz    0.2279    0:22.79    64144
24    1    mps/bc1.mps.gz    0.2371    0:23.71    63536
25    0    mps/blend2.mps.gz    0.0472    0:04.72    7180
26    1    mps/blend2.mps.gz    0.0511    0:05.11    6964
27    1    mps/bley_xs1.mps.gz    0.0015    0:00.15    9192
28    0    mps/bley_xs1.mps.gz    0.0016    0:00.16    9604
29    1    mps/bley_xs2.mps.gz    0.0008    0:00.08    7996
30    0    mps/bley_xs2.mps.gz    0.001    0:00.10    8316
31    1    mps/control20-5-10-5.mps.gz    0.0047    0:00.47 8904
32    0    mps/control20-5-10-5.mps.gz    0.0048    0:00.48 9604
33    1    mps/control30-5-10-4.mps.gz    0.0174    0:01.74 11344
34    0    mps/control30-5-10-4.mps.gz    0.0208    0:02.08 12072
35    1    mps/cvrpa-n64k9vrpi.mps.gz    0.0111    0:01.11 79452
36    0    mps/cvrpa-n64k9vrpi.mps.gz    0.0158    0:01.58 89992
37    1    mps/cvrpb-n45k5vrpi.mps.gz    0.0055    0:00.55 40256
38    0    mps/cvrpb-n45k5vrpi.mps.gz    0.0071    0:00.71 45980
39    1    mps/cvrpp-n16k8vrpi.mps.gz    0.0006    0:00.06    7340
40    0    mps/cvrpp-n16k8vrpi.mps.gz    0.0008    0:00.08    8276
41    1    mps/cvrpsimple2i.mps.gz    0.0001    0:00.01    3828
42    0    mps/cvrpsimple2i.mps.gz    0.0002    0:00.02    4424
43    1    mps/dano3_3.mps.gz    0.177    0:17.70    30196
44    0    mps/dano3_3.mps.gz    0.1815    0:18.15    31172
45    0    mps/dcmulti.mps.gz    0.0114    0:01.14    5636
46    1    mps/dcmulti.mps.gz    0.0163    0:01.63    5652
47    1    mps/decomp1.mps.gz    0.0105    0:01.05    28932
48    0    mps/decomp1.mps.gz    0.0931    0:09.31    31068
49    1    mps/decomp2.mps.gz    0.0273    0:02.73    39092
50    0    mps/decomp2.mps.gz    0.1494    0:14.94    39352
51    1    mps/dell.mps.gz    0.0005    0:00.05    4824
52    0    mps/dell.mps.gz    0.0005    0:00.05    5420
53    1    mps/diameterc-mstc-v20a190d5i.mps.gz    0.0001 0:00.01    5116
54    0    mps/diameterc-mstc-v20a190d5i.mps.gz    0.0003 0:00.03    6256
55    0    mps/diameterc-msts-v40a100d5i.mps.gz    0.0001 0:00.01    5812
56    1    mps/diameterc-msts-v40a100d5i.mps.gz    0.0001 0:00.01    5000
57    1    mps/dsbmip.mps.gz    0.0021    0:00.21    6704
58    0    mps/dsbmip.mps.gz    0.0037    0:00.37    7168
59    0    mps/ej.mps.gz    0    0:00.00    3360
60    1    mps/ej.mps.gz    0    0:00.00    3072
61    1    mps/elitserienhandball11i.mps.gz    0.0019 0:00.19    14292
62    0    mps/elitserienhandball11i.mps.gz    0.0027 0:00.27    16104
63    1    mps/elitserienhandball13i.mps.gz    0.0019 0:00.19    14520
64    0    mps/elitserienhandball13i.mps.gz    0.

Re: GLPSOL in webassembly faster than native ?

2020-10-02 Thread Michael Hennebry

On Thu, 1 Oct 2020, Domingo Alvarez Duarte wrote:

But in reality it seems that musl/qsort amounts to a "stable sort", and for
hashi.mod and tiling.mod the resulting "tie-breaker" order gives a much
shorter solving time.


Note that you have achieved consistency of result,
not necessarily consistently better performance.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassembly faster than native ?

2020-10-01 Thread Domingo Alvarez Duarte

Hello Michael !

Your answer correctly points to the issue in this thread: the
difference between libc/qsort and musl/qsort. I found it by adding
"printf" calls inside "src/intopt/gmigen::glp_gmi_gen" and comparing
native/wasm output; seeing the difference, I looked in the emscripten
source and found that it uses musl/qsort, so I added that to my
repository and I'm using it instead of libc/qsort. At least this way
we'll have a stable result on every platform (until we find a way to
better fix this issue).


But in reality it seems that musl/qsort amounts to a "stable sort",
and for hashi.mod and tiling.mod the resulting "tie-breaker" order
gives a much shorter solving time.


Probably a review of the "glp_gmi_gen" function, and of others that
use qsort, with this insight in mind could reveal a better/stable
approach to calculating cuts (or to other tasks involving qsort).



int glp_gmi_gen(glp_prob *P, glp_prob *pool, int max_cuts)
{     ...
      /* sort the list by descending fractionality */
      glp_qsort(&var[1], nv, sizeof(struct var), fcmp);
      /* try to generate cuts, one for each variable in the list, but
       * not more than max_cuts cuts */
      nnn = 0;
      for (t = 1; t <= nv; t++)
      {  len = glp_gmi_cut(P, var[t].j, ind, val, phi);
         if (len < 1)
            goto skip;
         /* if the cut inequality seems to be badly scaled, reject it
          * to avoid numerical difficulties */
         for (k = 1; k <= len; k++)
         {  if (fabs(val[k]) < GLP_GMI_GEN_TOL)
               goto skip;
            if (fabs(val[k]) > GLP_GMI_GEN_TOL2)
               goto skip;
         }
         /* add the cut to the cut pool for further consideration */
         i = glp_add_rows(pool, 1);
         glp_set_row_bnds(pool, i, GLP_LO, val[0], 0);
         glp_set_mat_row(pool, i, len, ind, val);
         /* one cut has been generated */
         /*printf("nnn %d : %d : %d : %d : %g : %g : %g\n", i, len, t,
            var[t].j, var[t].f, val[k], fabs(val[k]));*/
         nnn++;
         if (nnn == max_cuts)
            break;
skip:    ;
      }
}

Cheers !

On 1/10/20 19:29, Michael Hennebry wrote:

On Thu, 1 Oct 2020, Heinrich Schuchardt wrote:


On 9/30/20 10:02 PM, Michael Hennebry wrote:

On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:


I found why GLPK in wasm was faster with the "--cuts" option than
native: it was due to wasm using qsort from musl, which is a "stable
sort". I've added it to my GLPK repository, and running all models in
the "examples" folder, only "color.mod" produces a different solution
(which seems to be valid anyway).


My expectation is that all sort algorithms produce the same result if
you supply a sort key that is different for all elements.


In the case of ties, tie-breaking matters.
For a stable sort, the tie-breaker is the original position.





Re: GLPSOL in webassembly faster than native ?

2020-10-01 Thread Michael Hennebry

On Thu, 1 Oct 2020, Heinrich Schuchardt wrote:


On 9/30/20 10:02 PM, Michael Hennebry wrote:

On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:


I found why GLPK in wasm was faster with the "--cuts" option than
native: it was due to wasm using qsort from musl, which is a "stable
sort". I've added it to my GLPK repository, and running all models in
the "examples" folder, only "color.mod" produces a different solution
(which seems to be valid anyway).


My expectation is that all sort algorithms produce the same result if
you supply a sort key that is different for all elements.


In the case of ties, tie-breaking matters.
For a stable sort, the tie-breaker is the original position.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassembly faster than native ?

2020-09-30 Thread Heinrich Schuchardt
On 9/30/20 10:02 PM, Michael Hennebry wrote:
> On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:
>
>> I found why GLPK in wasm was faster with the "--cuts" option than
>> native: it was due to wasm using qsort from musl, which is a "stable
>> sort". I've added it to my GLPK repository, and running all models in
>> the "examples" folder, only "color.mod" produces a different solution
>> (which seems to be valid anyway).

My expectation is that all sort algorithms produce the same result if
you supply a sort key that is different for all elements.

Could you, please, elaborate where in the GLPK code you see a problem
due to an incomplete sort key?

Do you generally observe a faster solution time? Or do you have a
single model that is faster when solved in the specific sequence that
musl happens to produce?

Best regards

Heinrich

>
> Michael Hennebry wrote:
>
>> Roadblocks include the toolchain and the system libraries.
>
>> The math library can matter.
>
>
> How did you find it?
>




Re: GLPSOL in webassembly faster than native ?

2020-09-30 Thread Michael Hennebry

On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:

I found why GLPK in wasm was faster with the "--cuts" option than
native: it was due to wasm using qsort from musl, which is a "stable
sort". I've added it to my GLPK repository, and running all models in
the "examples" folder, only "color.mod" produces a different solution
(which seems to be valid anyway).


Michael Hennebry wrote:


Roadblocks include the toolchain and the system libraries.



The math library can matter.



How did you find it?

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassembly faster than native ?

2020-09-29 Thread Domingo Alvarez Duarte

Hello Andrew !

I found why GLPK in wasm was faster with the "--cuts" option than
native: it was due to wasm using qsort from musl, which is a "stable
sort". I've added it to my GLPK repository, and running all models in
the "examples" folder, only "color.mod" produces a different solution
(which seems to be valid anyway).


I've updated the binaries here
https://github.com/mingodad/GLPK/releases/tag/qsort .


Still intriguing is that the Windows 32-bit build has degraded
performance.


Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating-point calculations are more
efficient/precise in WebAssembly?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different
results on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation?

floating-point computations






Re: GLPSOL in webassembly faster than native ?

2020-09-27 Thread Michael Hennebry

On Sat, 26 Sep 2020, Domingo Alvarez Duarte wrote:

I did a revision of the usage of "glp_long_double"; see here
https://github.com/mingodad/GLPK/commit/4941d1633e52b802bdc5f102715ac3db25db5245




With the revised usage of glp_long_double, it now solves hashi.mod and
tiling.mod faster with the "--cuts" option, but hashi.mod without it
is a lot slower.




- Standard glpsol  => 67.6 secs

- glpsol with some "long double" => 3.1 secs


I'd expect strategic use of long double to affect accuracy,
but not to have a consistent effect on speed.

Iterative refinement is one place where computing
with extra precision would be especially useful.

Note that long double=double*double raises at least three possibilities:
The product is done as double*double and assigned to long double.
The product is done as long double*long double and
converted to double before being assigned to long double.
The product is done as long double*long double and assigned to long double.
Casting a factor to long double would ensure the third.
The second really should not happen, as the product is a sub-expression,
but I would not be surprised to see it.

Storing some things as floats could speed up memory bound computations.
The constraint matrix comes to mind.
An all-integer constraint matrix with absolute values
less than 16 million could be represented exactly.
Storing them as 32-bit ints would extend the range.

Matrix factors might not be so good an idea.
It might work, but the criteria for detecting
singularity would likely have to be relaxed.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards

Re: GLPSOL in webassembly faster than native ?

2020-09-27 Thread Michael Hennebry

On Fri, 25 Sep 2020, Andrew Makhorin wrote:


Why do you want glpk to produce absolutely identical results on
different platforms? This has no practical sense.


In some cases, it does make sense,
but in C89 it might be difficult to achieve.
If the different platforms have effectively
identical floating-point processors,
getting identical numerical results should be possible.
If not, it suggests bugs or a misunderstanding.
Practicality is another issue.

Roadblocks include the toolchain and the system libraries.
C, especially C89, allows compilers plenty of wiggle room.
Removing it might not be possible in C89,
though I think it is in C99.
Even in C99, constants can be tricky.
The compiler has a choice between
evaluating at compile time or at run time.
On a native compiler, the distinction is usually unimportant.
On a cross-compiler, it can matter.

The math library can matter.
IEEE precisely defines square root,
but not trig functions, log or exp10.
C89 does not precisely define square root.
Does GLPK use any of those?
They would not necessarily have to be used in computing a solution.
Using them to score prospective B&C nodes could cause differences.

Similarly, the IO library can matter.
When reading a floating point number,
scanf has wiggle room.

Removing the above roadblocks,
even for different platforms on the same machine, might be difficult,
but if accomplished, it would make a useful check on correctness.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassembly faster than native ?

2020-09-27 Thread Andrew Makhorin
On Sun, 2020-09-27 at 11:32 +0200, Manuel Muñoz Márquez wrote:
> I agree with you, Andrew, but the problem is when the output is not a
> real number.
> 
> Suppose that you have to decide which of the projects that a big
> company is planning will be done next year. A small difference in
> computation may lead to solutions that are far from one another,
> yielding very different sets of projects to be done. Is this
> admissible? Is this desirable?

If the instance has a unique optimum, only it will be reported on any
platform; however, the solution may differ slightly across platforms,
but only within a tolerance. The case you're talking about can happen
only if the instance has multiple optima. In that case, on one
platform the solver may report one optimal solution while on another
platform it reports a completely different one. But since all these
solutions are optimal, they are all *equivalent* to each other,
so you can choose any of them. If, nevertheless, you think that some
optimal solution is not what you expect, this can only mean that you
missed some essential constraints.

> 
> So on Monday, Wednesday, and Friday, when you work on your laptop,
> you say that "Project A" should be done, but on Tuesday and Thursday,
> when you work on your supercomputer cluster, you say that "Project
> A", of course, should not be done.
> 
> I'm not discussing whether that is possible or not; I know that in
> the general case it is not. But for me it is a desirable behavior,
> and I think that the software should be as close to that behavior as
> possible.

> 
> El sáb, 26-09-2020 a las 15:40 +0300, Andrew Makhorin escribió:
> > 
> > In the case of TeX it is important to provide identical output on
> > any platform, and to attain this Don implemented all calculations
> > using rational arithmetic. Though this approach can be used to
> > solve, say, linear algebra problems, it is impractical in the
> > general case. The desirable behavior of a computer program that
> > solves a problem with a numerical method is to provide the result
> > with sufficient accuracy in a reasonable time, not more. Engineers
> > and scientists (unlike mathematicians and maybe accountants) as a
> > rule are not interested in more than 5-6 correct decimal places in
> > numerical results.
> > 
> > 
> > 
> > 
> 
> 



Re: GLPSOL in webassembly faster than native ?

2020-09-27 Thread Manuel Muñoz Márquez
I agree with you, Andrew, but the problem is when the output is not a
real number.

Suppose that you have to decide which of the projects that a big
company is planning will be done next year. A small difference in
computation may lead to solutions that are far from one another,
yielding very different sets of projects to be done. Is this
admissible? Is this desirable?

So on Monday, Wednesday, and Friday, when you work on your laptop,
you say that "Project A" should be done, but on Tuesday and Thursday,
when you work on your supercomputer cluster, you say that "Project
A", of course, should not be done.

I'm not discussing whether that is possible or not; I know that in
the general case it is not. But for me it is a desirable behavior,
and I think that the software should be as close to that behavior as
possible.

El sáb, 26-09-2020 a las 15:40 +0300, Andrew Makhorin escribió:
> 
> In the case of TeX it is important to provide identical output on
> any platform, and to attain this Don implemented all calculations
> using rational arithmetic. Though this approach can be used to
> solve, say, linear algebra problems, it is impractical in the
> general case. The desirable behavior of a computer program that
> solves a problem with a numerical method is to provide the result
> with sufficient accuracy in a reasonable time, not more. Engineers
> and scientists (unlike mathematicians and maybe accountants) as a
> rule are not interested in more than 5-6 correct decimal places in
> numerical results.
> 
> 
> 
> 




Re: GLPSOL in webassembly faster than native ?

2020-09-26 Thread Domingo Alvarez Duarte

Hello !

Activating glp_long_double one by one, I found that the ones that
really make hashi.mod with "--cuts" perform better are the ones below
(but doing so degrades performance for everything else).


=

/***
*  fhv_h_solve - solve system H * x = b
*
*  This routine solves the system H * x = b, where the matrix H is the
*  middle factor of the sparse updatable FHV-factorization.
*
*  On entry the array x should contain elements of the right-hand side
*  vector b in locations x[1], ..., x[n], where n is the order of the
*  matrix H. On exit this array will contain elements of the solution
*  vector x in the same locations. */

void fhv_h_solve(FHV *fhv, glp_double x[/*1+n*/])
{     SVA *sva = fhv->luf->sva;
      int *sv_ind = sva->ind;
      glp_double *sv_val = sva->val;
      int nfs = fhv->nfs;
      int *hh_ind = fhv->hh_ind;
      int hh_ref = fhv->hh_ref;
      int *hh_ptr = &sva->ptr[hh_ref-1];
      int *hh_len = &sva->len[hh_ref-1];
      int i, k, end, ptr;
      glp_long_double x_i;
      for (k = 1; k <= nfs; k++)
      {  x_i = x[i = hh_ind[k]];
         for (end = (ptr = hh_ptr[k]) + hh_len[k]; ptr < end; ptr++)
            x_i -= sv_val[ptr] * x[sv_ind[ptr]];
         x[i] = x_i;
      }
      return;
}

/***
*  fhv_ht_solve - solve system H' * x = b
*
*  This routine solves the system H' * x = b, where H' is a matrix
*  transposed to the matrix H, which is the middle factor of the sparse
*  updatable FHV-factorization.
*
*  On entry the array x should contain elements of the right-hand side
*  vector b in locations x[1], ..., x[n], where n is the order of the
*  matrix H. On exit this array will contain elements of the solution
*  vector x in the same locations. */

void fhv_ht_solve(FHV *fhv, glp_double x[/*1+n*/])
{     SVA *sva = fhv->luf->sva;
      int *sv_ind = sva->ind;
      glp_double *sv_val = sva->val;
      int nfs = fhv->nfs;
      int *hh_ind = fhv->hh_ind;
      int hh_ref = fhv->hh_ref;
      int *hh_ptr = &sva->ptr[hh_ref-1];
      int *hh_len = &sva->len[hh_ref-1];
      int k, end, ptr;
      glp_long_double x_j;
      for (k = nfs; k >= 1; k--)
      {  if ((x_j = x[hh_ind[k]]) == 0.0)
            continue;
         for (end = (ptr = hh_ptr[k]) + hh_len[k]; ptr < end; ptr++)
            x[sv_ind[ptr]] -= sv_val[ptr] * x_j;
      }
      return;
}

=

glp_double spx_update_d_s(SPXLP *lp, glp_double d[/*1+n-m*/], int p, int q,
      const FVS *trow, const FVS *tcol)
{     /* sparse version of spx_update_d */
      int m = lp->m;
      int n = lp->n;
      glp_double *c = lp->c;
      int *head = lp->head;
      int trow_nnz = trow->nnz;
      int *trow_ind = trow->ind;
      glp_double *trow_vec = trow->vec;
      int tcol_nnz = tcol->nnz;
      int *tcol_ind = tcol->ind;
      glp_double *tcol_vec = tcol->vec;
      int i, j, k;
      glp_long_double dq; glp_double e; /* degrades performance */
      xassert(1 <= p && p <= m);
      xassert(1 <= q && q <= n);
      xassert(trow->n == n-m);
      xassert(tcol->n == m);
      /* compute d[q] in current basis more accurately */
      k = head[m+q]; /* x[k] = xN[q] */
      dq = c[k];
      for (k = 1; k <= tcol_nnz; k++)
      {  i = tcol_ind[k];
         dq += tcol_vec[i] * c[head[i]];
      }
      /* compute relative error in d[q] */
      e = fabs(dq - d[q]) / (1.0 + fabs(dq));
      /* compute new d[q], which is the reduced cost of xB[p] in the
       * adjacent basis */
      d[q] = (dq /= tcol_vec[p]);
      /* compute new d[j] for all j != q */
      for (k = 1; k <= trow_nnz; k++)
      {  j = trow_ind[k];
         if (j != q)
            d[j] -= trow_vec[j] * dq;
      }
      return e;
}

=

Cheers !

On 24/9/20 21:18, Michael Hennebry wrote:

On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:

I just got glpsol with "long double" working and added binaries for
anyone who wants to test them here
https://github.com/mingodad/GLPK/releases


As noted there, it'll benefit from tuning the constants in
src/glpk_real.h


Any help/suggestion/comment is welcome !


Note that using long doubles everywhere
would slow down a memory bound computation.
Fetching more data would be required.

One might introduce glp_double_t.
C99 uses double_t for the type in which unnamed
intermediate results of double calculations are stored.

Use glp_double_t for locals that are not arrays,
especially running totals.
Do the casts to ensure that desired calculations
are actually done in glp_double_t.





Re: GLPSOL in webassembly faster than native ?

2020-09-26 Thread Andrew Makhorin
On Sat, 2020-09-26 at 09:51 +0200, Manuel Muñoz Márquez wrote:
> > Why do you want glpk to produce absolutely identical results on
> > different platforms? This has no practical sense. 
> > 
> 
> I think this is desirable behavior. If you are solving a real
> problem, it looks very weird if you provide one solution using your
> computer, and when you put it on the user's web server the system
> provides another.
> 
> Another point is that if the calculation, and so the time required,
> depends on the platform, there is no reliable way to compare "my
> results" with previous research.
> 
> Some systems, such as LaTeX, work that way. The result doesn't depend
> even on the floating-point capabilities of the system CPU.
> 

In the case of TeX it is important to provide identical output on any
platform, and to attain this Don implemented all calculations using
rational arithmetic. Though this approach can be used to solve, say,
linear algebra problems, it is impractical in the general case. The
desirable behavior of a computer program that solves a problem with a
numerical method is to provide the result with sufficient accuracy in
a reasonable time, not more. Engineers and scientists (unlike
mathematicians and maybe accountants) as a rule are not interested in
more than 5-6 correct decimal places in numerical results.







Re: GLPSOL in webassembly faster than native ?

2020-09-26 Thread Domingo Alvarez Duarte

Hello Michael !

I did a revision of the usage of "glp_long_double"; see here
https://github.com/mingodad/GLPK/commit/4941d1633e52b802bdc5f102715ac3db25db5245




With the revised usage of glp_long_double, it now solves hashi.mod and
tiling.mod faster with the "--cuts" option, but hashi.mod without it
is a lot slower.




- Standard glpsol  => 67.6 secs

- glpsol with some "long double" => 3.1 secs

That's a 20x improvement with "--cuts" and some "long double"; solving
tiling.mod shows a 2x improvement; but without "--cuts" both degrade.


Output of normal glpsol:



/usr/bin/time glpsol2 --cuts -m hashi.mod
GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
 --cuts -m hashi.mod
Reading model section from hashi.mod...
168 lines were read
Generating edge_sel...
Generating satisfy_vertex_demand...
Generating no_crossing1...
Generating no_crossing2...
Generating no_crossing3...
Generating no_crossing4...
Generating flow_conservation...
Generating connectivity_vub1...
Generating connectivity_vub2...
Generating cost...
Model has been successfully generated
GLPK Integer Optimizer, v4.65
2487 rows, 1264 columns, 7400 non-zeros
632 integer variables, all of which are binary
Preprocessing...
1891 rows, 1162 columns, 5802 non-zeros
530 integer variables, all of which are binary
Scaling...
 A: min|aij| =  1.000e+00  max|aij| =  1.820e+02  ratio = 1.820e+02
GM: min|aij| =  8.270e-01  max|aij| =  1.209e+00  ratio = 1.462e+00
EQ: min|aij| =  6.930e-01  max|aij| =  1.000e+00  ratio = 1.443e+00
2N: min|aij| =  3.555e-01  max|aij| =  1.422e+00  ratio = 4.000e+00
Constructing initial basis...
Size of triangular part is 1887
Solving LP relaxation...
GLPK Simplex Optimizer, v4.65
1891 rows, 1162 columns, 5802 non-zeros
  0: obj =   0.0e+00 inf =   2.452e+03 (269)
    739: obj =   0.0e+00 inf =   1.554e-15 (0) 6
OPTIMAL LP SOLUTION FOUND
Integer optimization begins...
Long-step dual simplex will be used
Gomory's cuts enabled
MIR cuts enabled
Cover cuts enabled
Number of 0-1 knapsack inequalities = 354
Clique cuts enabled
Constructing conflict graph...
Conflict graph has 530 + 20 = 550 vertices
+   739: mip = not found yet >=  -inf (1; 0)
Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3;
+ 26492: mip = not found yet >=   0.0e+00 (23; 63)
+ 54641: mip = not found yet >=   0.0e+00 (15; 189)
+ 80727: mip = not found yet >=   0.0e+00 (19; 291)
+108666: mip = not found yet >=   0.0e+00 (23; 399)
+137106: mip = not found yet >=   0.0e+00 (20; 512)
+167888: mip = not found yet >=   0.0e+00 (23; 631)
+195255: mip = not found yet >=   0.0e+00 (15; 791)
+221348: mip = not found yet >=   0.0e+00 (14; 918)
+252185: mip = not found yet >=   0.0e+00 (7; 1046)
+278887: mip = not found yet >=   0.0e+00 (8; 1140)
+307555: mip = not found yet >=   0.0e+00 (13; 1239)
Time used: 60.0 secs.  Memory used: 11.7 Mb.
+336139: mip = not found yet >=   0.0e+00 (10; 1337)
+365233: mip = not found yet >=   0.0e+00 (7; 1447)
Cuts on level 32: gmi = 11; mir = 61; cov = 73; clq = 7;
+377836: >   0.0e+00 >= 0.0e+00   0.0% (14; 1517)
+377836: mip =   0.0e+00 >= tree is empty   0.0% (0; 1573)
INTEGER OPTIMAL SOLUTION FOUND
Time used:   67.6 secs
Memory used: 12.7 Mb (13330830 bytes)

Solution:
2---2---2-2---2-2-2---2---2---2
| |
| 1-2---4=5===2 1---2---2 | 1
|   | | | | |
2-5===4-3   | | 1-4===5---1 | 1 |
  "   | "   | |   |   " |   |
  "   | "   | 1---3-1 |   " |   |
  "   | "   | |   " |   |
2-6===6=8===5---2-3---5===7-2   |
| |   | "   | |   |   " |
| 1   |   | "   | 1-2 |   |   " 1---3
| |   |   | "   |   | |   |   " |
2 |   |   5=6---4-2 | |   2---5---4---2 |
| |   |   " |   | | | |   |   "   | |
| 2---2   " |   | | | 3-3 |   " 1 | 2
| " |   | | | | " |   " | | |
| " |   4---2 | 2 |   1 " |   3 | 1 |
| " |   "   | | | |   | " |   | |   |
2   3=6-2   "   | | | |   | " 3   | |   |
|   | | "   | | | |   | " "   | |   |
|   |   1 |   2-5   | 1 | 4===3 " "   | 2---4
|   |   | |   | "   |   | | " "   | "
|   2   | 1   | "   5===4 | " 4===3 "
|   |   | | "   "   | | "   "
2   |   3---1 | "   "   | 3-5---5=2 "
|   |   | | "   "   | | |   "   "
|   |   | 2---5=7===5---3 | 1   | 1 "   1---4
|   |   | |   | |   | | |   | | "   |
2---5---3 |   |   1 | 2---1 | | |   2 | 4-2 |
    "   | |   |   | | | | | |   | | | | |
    "   | 1   |   | | |

Re: GLPSOL in webassembly faster than native ?

2020-09-26 Thread Manuel Muñoz Márquez
> Why do you want glpk to produce absolutely identical results on
> different platforms? This has no practical sense.
> 

I think this is desirable behavior. If you are solving a real
problem, it looks very weird if you provide one solution using your
computer, and when you put it on the user's web server the system
provides another.

Another point is that if the calculation, and so the time required,
depends on the platform, there is no reliable way to compare "my
results" with previous research.

Some systems, such as LaTeX, work that way. The result doesn't depend
even on the floating-point capabilities of the system CPU.

-- 
Manuel Muñoz Márquez 

> 




Re: GLPSOL in webassembly faster than native ?

2020-09-25 Thread Andrew Makhorin
> Something doesn't match for me, because I'm compiling GLPK/glpsol
> with and without optimization and get the same reported cuts:
> 
> Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3;  !! *
> the same
> =
> Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3;  * 
> the same
> =
> Cuts on level 0: gmi = 10; mir = 28; cov = 33; clq = 3; !*
> not the same
> =
> 

Why do you want glpk to produce absolutely identical results on
different platforms? This has no practical sense. 



Re: GLPSOL in webassembly faster than native ?

2020-09-25 Thread Domingo Alvarez Duarte

Hello Andrew !

Something doesn't match for me, because I'm compiling GLPK/glpsol with
and without optimization and getting the same reported cuts:


Compiled with "-O3 -DNDEBUG"

=

glpsol2 --cuts -m hashi.mod
GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
 --cuts -m hashi.mod
...

Model has been successfully generated
GLPK Integer Optimizer, v4.65
2487 rows, 1264 columns, 7400 non-zeros
632 integer variables, all of which are binary
Preprocessing...
1891 rows, 1162 columns, 5802 non-zeros
530 integer variables, all of which are binary
Scaling...
 A: min|aij| =  1.000e+00  max|aij| =  1.820e+02  ratio = 1.820e+02
GM: min|aij| =  8.270e-01  max|aij| =  1.209e+00  ratio = 1.462e+00
EQ: min|aij| =  6.930e-01  max|aij| =  1.000e+00  ratio = 1.443e+00
2N: min|aij| =  3.555e-01  max|aij| =  1.422e+00  ratio = 4.000e+00
Constructing initial basis...
Size of triangular part is 1887
Solving LP relaxation...
GLPK Simplex Optimizer, v4.65
1891 rows, 1162 columns, 5802 non-zeros
  0: obj =   0.0e+00 inf =   2.452e+03 (269)
    739: obj =   0.0e+00 inf =   1.554e-15 (0) 6
OPTIMAL LP SOLUTION FOUND
Integer optimization begins...
Long-step dual simplex will be used
Gomory's cuts enabled
MIR cuts enabled
Cover cuts enabled
Number of 0-1 knapsack inequalities = 354
Clique cuts enabled
Constructing conflict graph...
Conflict graph has 530 + 20 = 550 vertices
+   739: mip = not found yet >=  -inf (1; 0)
Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3;  !! * the 
same

=

Compiled with "-g"

=

./glpsol --cuts -m hashi.mod
GLPSOL: GLPK LP/MIP Solver, v4.65, glp_double size 8
Parameter(s) specified in the command line:
 --cuts -m hashi.mod
...

Model has been successfully generated
GLPK Integer Optimizer, v4.65
2487 rows, 1264 columns, 7400 non-zeros
632 integer variables, all of which are binary
Preprocessing...
1891 rows, 1162 columns, 5802 non-zeros
530 integer variables, all of which are binary
Scaling...
 A: min|aij| =  1.000e+00  max|aij| =  1.820e+02  ratio = 1.820e+02
GM: min|aij| =  8.270e-01  max|aij| =  1.209e+00  ratio = 1.462e+00
EQ: min|aij| =  6.930e-01  max|aij| =  1.000e+00  ratio = 1.443e+00
2N: min|aij| =  3.555e-01  max|aij| =  1.422e+00  ratio = 4.000e+00
Constructing initial basis...
Size of triangular part is 1887
Solving LP relaxation...
GLPK Simplex Optimizer, v4.65
1891 rows, 1162 columns, 5802 non-zeros
  0: obj =   0.0e+00 inf =   2.452e+03 (269)
    739: obj =   0.0e+00 inf =   1.554e-15 (0) 6
OPTIMAL LP SOLUTION FOUND
Integer optimization begins...
Long-step dual simplex will be used
Gomory's cuts enabled
MIR cuts enabled
Cover cuts enabled
Number of 0-1 knapsack inequalities = 354
Clique cuts enabled
Constructing conflict graph...
Conflict graph has 530 + 20 = 550 vertices
+   739: mip = not found yet >=  -inf (1; 0)
Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3;  * the same

=

Webassembly chromium:

=
GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
 --cuts --math input_file
Model has been successfully generated

...

GLPK Integer Optimizer, v4.65
2487 rows, 1264 columns, 7400 non-zeros
632 integer variables, all of which are binary
Preprocessing...
1891 rows, 1162 columns, 5802 non-zeros
530 integer variables, all of which are binary
Scaling...
 A: min|aij| =  1.000e+00  max|aij| =  1.820e+02  ratio = 1.820e+02
GM: min|aij| =  8.270e-01  max|aij| =  1.209e+00  ratio = 1.462e+00
EQ: min|aij| =  6.930e-01  max|aij| =  1.000e+00  ratio = 1.443e+00
2N: min|aij| =  3.555e-01  max|aij| =  1.422e+00  ratio = 4.000e+00
Constructing initial basis...
Size of triangular part is 1887
Solving LP relaxation...
GLPK Simplex Optimizer, v4.65
1891 rows, 1162 columns, 5802 non-zeros
  0: obj =   0.0e+00 inf =   2.452e+03 (269)
    739: obj =   0.0e+00 inf =   1.554e-15 (0) 6
OPTIMAL LP SOLUTION FOUND
Integer optimization begins...
Long-step dual simplex will be used
Gomory's cuts enabled
MIR cuts enabled
Cover cuts enabled
Number of 0-1 knapsack inequalities = 354
Clique cuts enabled
Constructing conflict graph...
Conflict graph has 530 + 20 = 550 vertices
+   739: mip = not found yet >=  -inf (1; 0)
Cuts on level 0: gmi = 10; mir = 28; cov = 33; clq = 3; !* not 
the same


=

Cheers !

On 22/9/20 17:56, Andrew Makhorin wrote:

On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:

Hello again !

On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm)
solving the "hashi.mod" with "--cuts" takes 98s and without it 909s,
using glpsol native compiled within termux takes 497s with "--cuts"
and
without it 925s.


What does "native" mean? Just changing, for example, optimization level
of the compiler may essentially change the set of generated cuts and
thus the solution time.



Arm7 32bits Nexus-5:

  wasm "--cuts -m hashi.mod" -> 98s

  wasm " -m hashi.mod" -> 909s

Re: GLPSOL in webassemby faster than native ?

2020-09-25 Thread Domingo Alvarez Duarte

Hello Mike !

I changed it in several places; see here 
https://github.com/mingodad/GLPK/commit/b370a854be0c10c06e025896dedc4e3461278497


===

Changed some declarations to glp_long_double in the hope that using it 
mainly for "sums" would improve performance/accuracy. Right now 
performance has decreased and hashi.mod takes too much time, but when 
glp_long_double is defined as glp_double we still get the same 
results/output as the original GLPK


===

Cheers !

On 24/9/20 21:18, Michael Hennebry wrote:

On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:

I just got glpsol with "long double" working and added binaries for 
anyone who wants to test them here 
https://github.com/mingodad/GLPK/releases


As noted there it'll benefit from tuning the constants in 
src/glpk_real.h


Any help/suggestion/comment is welcome !


Note that using long doubles everywhere
would slow down a memory bound computation.
Fetching more data would be required.

One might introduce glp_double_t.
C99 uses double_t for the type in which unnamed
intermediate results of double calculations are stored.

Use glp_double_t for locals that are not arrays,
especially running totals.
Do the casts to ensure that desired calculations
are actually done in glp_double_t.





Re: GLPSOL in webassemby faster than native ?

2020-09-25 Thread Andrew Makhorin
On Fri, 2020-09-25 at 10:04 +0200, Domingo Alvarez Duarte wrote:
> Hello Michael !
> 
> Thank you for reply !
> 
> I'll take into account the possible use of a wider float format for
> intermediate values, using something like your suggestion of a
> redefinable type like "glp_double_t" (in gcc 9 on Linux x86,
> "double_t" and "double" are actually the same).
> 
> But I'll also leave open the possibility of having it work (somehow)
> with float32, float64, float80, float128, ... everywhere.
> 
> Again help/suggestions/comments are welcome !
> 
> 
> Again help/suggestions/comments are welcome !
> 

Replacing all doubles with long doubles is not needed. A 64-bit
double provides about 16 decimal digits, which is more than sufficient in
most cases. The only place where this might be crucial is the linear algebra
routines (for example, when computing scalar products). However, it would
be better to use something like BLAS for this purpose.



Re: GLPSOL in webassemby faster than native ?

2020-09-25 Thread Domingo Alvarez Duarte

Hello Michael !

Thank you for reply !

I'll take into account the possible use of a wider float format for 
intermediate values, using something like your suggestion of a redefinable 
type like "glp_double_t" (in gcc 9 on Linux x86, "double_t" and 
"double" are actually the same).


But I'll also leave open the possibility of having it work (somehow) with 
float32, float64, float80, float128, ... everywhere.


Again help/suggestions/comments are welcome !

Cheers !

On 24/9/20 21:18, Michael Hennebry wrote:

On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:

I just got glpsol with "long double" working and added binaries for 
anyone who wants to test them here 
https://github.com/mingodad/GLPK/releases


As noted there it'll benefit from tuning the constants in 
src/glpk_real.h


Any help/suggestion/comment is welcome !


Note that using long doubles everywhere
would slow down a memory bound computation.
Fetching more data would be required.

One might introduce glp_double_t.
C99 uses double_t for the type in which unnamed
intermediate results of double calculations are stored.

Use glp_double_t for locals that are not arrays,
especially running totals.
Do the casts to ensure that desired calculations
are actually done in glp_double_t.





Re: GLPSOL in webassemby faster than native ?

2020-09-24 Thread Michael Hennebry

On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:

I just got glpsol with "long double" working and added binaries for anyone 
who wants to test them here https://github.com/mingodad/GLPK/releases


As noted there it'll benefit from tuning the constants in src/glpk_real.h

Any help/suggestion/comment is welcome !


Note that using long doubles everywhere
would slow down a memory bound computation.
Fetching more data would be required.

One might introduce glp_double_t.
C99 uses double_t for the type in which unnamed
intermediate results of double calculations are stored.

Use glp_double_t for locals that are not arrays,
especially running totals.
Do the casts to ensure that desired calculations
are actually done in glp_double_t.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassemby faster than native ?

2020-09-24 Thread Domingo Alvarez Duarte

Hello !

I just got glpsol with "long double" working and added binaries for anyone 
who wants to test them here https://github.com/mingodad/GLPK/releases


As noted there it'll benefit from tuning the constants in src/glpk_real.h

Any help/suggestion/comment is welcome !

Cheers !

On 22/9/20 21:49, Michael Hennebry wrote:

On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:

Due to the big difference in solver time, how could we figure out 
what causes it, in order to use this knowledge to improve the native 
solver time ?


I mean what debug/verbose options could help us have a clue ?


I expect the answer is none.
My guess is that neither platform is inherently better than the other.
Which small roundings will be better doubtless depends
on the particular problem and version of GLPK.
Andrew insists on C89.
That pretty much eliminates control over how floating point is done.

double x=1./3., y=1./3.;
C89 does not require x and y to have the same value.
IEEE arithmetic would, despite possible double rounding.

Even with IEEE, platforms are allowed to evaluate floating point
expressions with a precision larger than required by the expressions.
They are not required to be consistent.
double x=2.0+DBL_EPSILON-2.0, y=2.0+DBL_EPSILON-2.0;
x could be 0 or DBL_EPSILON, as could y.
They need not be the same.
Once upon a time, that was a real problem with gcc.
For C89, it might still be.

With a lot of casts to long double and sufficient
care in the representation of constants,
Andrew might be able to get consistent behaviour between
IEEE platforms with matching doubles and long doubles.

It's been a while since I looked at any GLPK code.
My expectation is that at least some of it, e.g. dot product, is 
memory bound.

In such a case, explicitly using long doubles would
not slow it down and could improve its accuracy.
Possibly Andrew already does that.
I haven't looked lately.





Re: GLPSOL in webassemby faster than native ?

2020-09-23 Thread Domingo Alvarez Duarte

Hello Michael !

Thanks for reply !

After your reply I did a refactoring on this branch 
https://github.com/mingodad/GLPK/tree/local-set-param where I replaced 
all occurrences of "double" with "glp_double" and added a definition for 
it as shown below, but it seems that there are several hard-coded 
constants that are tied to using "double" (float64), and the solver is 
producing wrong results when using "long double" with the changes made 
so far.


If someone could help fix it we could have GLPK working with both and 
could compare performance/accuracy or even other numeric representations.


I also made binaries available (compiled with "double") to make it easier 
for others to fiddle with it without needing to build from source -> 
https://github.com/mingodad/GLPK/releases/tag/test .




#ifndef glp_double
    #ifdef WITH_LONG_DOUBLE
    #define glp_double long double
    #define GLP_DBL_EPSILON LDBL_EPSILON
    #define GLP_DBL_FMT_G "Lg"
    #define GLP_DBL_FMT_F "Lf"
    #define GLP_DBL_FMT_E "LE"
    #define GLP_DBL_FMT_e "Le"
    #else
    #define glp_double double
    #define GLP_DBL_EPSILON DBL_EPSILON
    #define GLP_DBL_FMT_G "g"
    #define GLP_DBL_FMT_F "f"
    #define GLP_DBL_FMT_E "E"
    #define GLP_DBL_FMT_e "e"
    #endif
#endif



Cheers !

On 22/9/20 21:49, Michael Hennebry wrote:

On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:

Due to the big difference in solver time, how could we figure out 
what causes it, in order to use this knowledge to improve the native 
solver time ?


I mean what debug/verbose options could help us have a clue ?


I expect the answer is none.
My guess is that neither platform is inherently better than the other.
Which small roundings will be better doubtless depends
on the particular problem and version of GLPK.
Andrew insists on C89.
That pretty much eliminates control over how floating point is done.

double x=1./3., y=1./3.;
C89 does not require x and y to have the same value.
IEEE arithmetic would, despite possible double rounding.

Even with IEEE, platforms are allowed to evaluate floating point
expressions with a precision larger than required by the expressions.
They are not required to be consistent.
double x=2.0+DBL_EPSILON-2.0, y=2.0+DBL_EPSILON-2.0;
x could be 0 or DBL_EPSILON, as could y.
They need not be the same.
Once upon a time, that was a real problem with gcc.
For C89, it might still be.

With a lot of casts to long double and sufficient
care in the representation of constants,
Andrew might be able to get consistent behaviour between
IEEE platforms with matching doubles and long doubles.

It's been a while since I looked at any GLPK code.
My expectation is that at least some of it, e.g. dot product, is 
memory bound.

In such a case, explicitly using long doubles would
not slow it down and could improve its accuracy.
Possibly Andrew already does that.
I haven't looked lately.





Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Michael Hennebry

On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:

Due to the big difference in solver time, how could we figure out what causes it, 
in order to use this knowledge to improve the native solver time ?


I mean what debug/verbose options could help us have a clue ?


I expect the answer is none.
My guess is that neither platform is inherently better than the other.
Which small roundings will be better doubtless depends
on the particular problem and version of GLPK.
Andrew insists on C89.
That pretty much eliminates control over how floating point is done.

double x=1./3., y=1./3.;
C89 does not require x and y to have the same value.
IEEE arithmetic would, despite possible double rounding.

Even with IEEE, platforms are allowed to evaluate floating point
expressions with a precision larger than required by the expressions.
They are not required to be consistent.
double x=2.0+DBL_EPSILON-2.0, y=2.0+DBL_EPSILON-2.0;
x could be 0 or DBL_EPSILON, as could y.
They need not be the same.
Once upon a time, that was a real problem with gcc.
For C89, it might still be.

With a lot of casts to long double and sufficient
care in the representation of constants,
Andrew might be able to get consistent behaviour between
IEEE platforms with matching doubles and long doubles.

It's been a while since I looked at any GLPK code.
My expectation is that at least some of it, e.g. dot product, is memory bound.
In such a case, explicitly using long doubles would
not slow it down and could improve its accuracy.
Possibly Andrew already does that.
I haven't looked lately.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a heiroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Chris Matrakidis
It is well known that varying initial conditions can dramatically change
the running time of a solver (and the tree searched). In this case there
are floating-point differences, and in the first instance the integer
solution is found very early and has the same value as the LP solution,
terminating the search instantly, while in the other it takes a few
more iterations (and again terminates the search instantly).

You can't draw any conclusion about solver performance from just one
instance, where you may be lucky (or not).

Best Regards,

Chris Matrakidis

On Tue, 22 Sep 2020 at 20:03, Domingo Alvarez Duarte 
wrote:

> Hello Andrew !
>
> In this case it means compiling with "-O3 -DNDEBUG -DWITH_SPLAYTREE" on
> Arm7, "-O3 -DNDEBUG -flto -march=native -ffp-contract=off
> -DWITH_SPLAYTREE" on x86_64 and "-O3 -DNDEBUG -flto -DWITH_SPLAYTREE" on
> wasm.
>
> How are the parameters for cut generation selected ?
>
> Isn't it strange that wasm has been faster than native ?
>
> Doesn't this difference give insight into selecting the parameters differently ?
>
> Cheers !
>
> On 22/9/20 17:56, Andrew Makhorin wrote:
> > On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:
> >> Hello again !
> >>
> >> On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm)
> >> solving the "hashi.mod" with "--cuts" takes 98s and without it 909s,
> >> using glpsol native compiled within termux takes 497s with "--cuts"
> >> and
> >> without it 925s.
> >
> > What does "native" mean? Just changing, for example, optimization level
> > of the compiler may essentially change the set of generated cuts and
> > thus the solution time.
> >
> >
> >> Arm7 32bits Nexus-5:
> >>
> >>   wasm "--cuts -m hashi.mod" -> 98s
> >>
> >>   wasm " -m hashi.mod" -> 909s
> >>
> >>   native "--cuts -m hashi.mod" -> 497s
> >>
> >>   native " -m hashi.mod" -> 925s
> >>
> >>
> >> Laptop Linux 64bits I7:
> >>
> >>   wasm "--cuts -m hashi.mod" -> 8s
> >>
> >>   wasm " -m hashi.mod" -> 142s
> >>
> >>   native "--cuts -m hashi.mod" -> 73s
> >>
> >>   native " -m hashi.mod" -> 55s
> >>
> >>
> >> On arm7 "--cuts" improves the performance in both wasm and native.
> >>
> >> On x86_64 "--cuts" improves in wasm but degrades in native.
> >>
> >> I hope this could give hints to improve GLPK solver performance by
> >> inspecting the decision criteria and eventually finding better ones.
> >>
> >> Can anyone offer any ideas based on this data ?
> >>
> >> Cheers !
> >>
> >> On 21/9/20 17:11, Andrew Makhorin wrote:
> >>> On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:
>  Hello Andrew !
> 
>  Are you saying that floating point calculations are more
>  efficient/precise in webassembly ?
> >>> No. I meant that, due to floating-point computations, running the same
> >>> computer program with the same data as a rule produces different
> >>> results
> >>> on different platforms.
> >>>
>  Cheers !
> 
>  On 21/9/20 15:08, Andrew Makhorin wrote:
> >> Can someone give a possible explanation ?
> > floating-point computations
> >>
>
>


Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Andrew !

In this case it means compiling with "-O3 -DNDEBUG -DWITH_SPLAYTREE" on 
Arm7, "-O3 -DNDEBUG -flto -march=native -ffp-contract=off 
-DWITH_SPLAYTREE" on x86_64 and "-O3 -DNDEBUG -flto -DWITH_SPLAYTREE" on 
wasm.


How are the parameters for cut generation selected ?

Isn't it strange that wasm has been faster than native ?

Doesn't this difference give insight into selecting the parameters differently ?

Cheers !

On 22/9/20 17:56, Andrew Makhorin wrote:

On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:

Hello again !

On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm)
solving the "hashi.mod" with "--cuts" takes 98s and without it 909s,
using glpsol native compiled within termux takes 497s with "--cuts"
and
without it 925s.


What does "native" mean? Just changing, for example, optimization level
of the compiler may essentially change the set of generated cuts and
thus the solution time.



Arm7 32bits Nexus-5:

  wasm "--cuts -m hashi.mod" -> 98s

  wasm " -m hashi.mod" -> 909s

  native "--cuts -m hashi.mod" -> 497s

  native " -m hashi.mod" -> 925s


Laptop Linux 64bits I7:

  wasm "--cuts -m hashi.mod" -> 8s

  wasm " -m hashi.mod" -> 142s

  native "--cuts -m hashi.mod" -> 73s

  native " -m hashi.mod" -> 55s


On arm7 "--cuts" improves the performance in both wasm and native.

On x86_64 "--cuts" improves in wasm but degrades in native.

I hope this could give hints to improve GLPK solver performance by
inspecting the decision criteria and eventually finding better ones.

Can anyone offer any ideas based on this data ?

Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating point calculations are more
efficient/precise in webassembly ?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different
results
on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations






Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Andrew Makhorin
On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:
> Hello again !
> 
> On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm) 
> solving the "hashi.mod" with "--cuts" takes 98s and without it 909s, 
> using glpsol native compiled within termux takes 497s with "--cuts"
> and 
> without it 925s.


What does "native" mean? Just changing, for example, optimization level
of the compiler may essentially change the set of generated cuts and
thus the solution time.


> 
> Arm7 32bits Nexus-5:
> 
>  wasm "--cuts -m hashi.mod" -> 98s
> 
>  wasm " -m hashi.mod" -> 909s
> 
>  native "--cuts -m hashi.mod" -> 497s
> 
>  native " -m hashi.mod" -> 925s
> 
> 
> Laptop Linux 64bits I7:
> 
>  wasm "--cuts -m hashi.mod" -> 8s
> 
>  wasm " -m hashi.mod" -> 142s
> 
>  native "--cuts -m hashi.mod" -> 73s
> 
>  native " -m hashi.mod" -> 55s
> 
> 
> On arm7 "--cuts" improves the performance in both wasm and native.
> 
> On x86_64 "--cuts" improves in wasm but degrades in native.
> 
> I hope this could give hints to improve GLPK solver performance by 
> inspecting the decision criteria and eventually finding better ones.
> 
> Can anyone offer any ideas based on this data ?
> 
> Cheers !
> 
> On 21/9/20 17:11, Andrew Makhorin wrote:
> > On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:
> > > Hello Andrew !
> > > 
> > > Are you saying that floating point calculations are more
> > > efficient/precise in webassembly ?
> > 
> > No. I meant that, due to floating-point computations, running the same
> > computer program with the same data as a rule produces different
> > results
> > on different platforms.
> > 
> > > Cheers !
> > > 
> > > On 21/9/20 15:08, Andrew Makhorin wrote:
> > > > > Can someone give a possible explanation ?
> > > > 
> > > > floating-point computations
> 
> 



Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello again !

On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm) 
solving the "hashi.mod" with "--cuts" takes 98s and without it 909s, 
using glpsol native compiled within termux takes 497s with "--cuts" and 
without it 925s.


Arm7 32bits Nexus-5:

    wasm "--cuts -m hashi.mod" -> 98s

    wasm " -m hashi.mod" -> 909s

    native "--cuts -m hashi.mod" -> 497s

    native " -m hashi.mod" -> 925s


Laptop Linux 64bits I7:

    wasm "--cuts -m hashi.mod" -> 8s

    wasm " -m hashi.mod" -> 142s

    native "--cuts -m hashi.mod" -> 73s

    native " -m hashi.mod" -> 55s


On arm7 "--cuts" improves the performance in both wasm and native.

On x86_64 "--cuts" improves in wasm but degrades in native.

I hope this could give hints to improve GLPK solver performance by 
inspecting the decision criteria and eventually finding better ones.


Can anyone offer any ideas based on this data ?

Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating point calculations are more
efficient/precise in webassembly ?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different results
on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations






Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Tor !

Thanks for reply !

Do you know why that happens ?

Which criterion is used to select one ?

Can we somehow force/direct the solver to adopt a different one ?

Cheers !

On 22/9/20 14:03, Tor Myklebust wrote:
Note that the two solvers made different branching decisions once it 
came time to solve the MIP.


On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:


Hello Andrew !

Due to the big difference in solver time, how could we figure out 
what causes it, in order to use this knowledge to improve the native 
solver time ?


I mean what debug/verbose options could help us have a clue ?

Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating point calculations are more
efficient/precise in webassembly ?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different 
results

on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations









Re: GLPSOL in webassemby faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Andrew !

Due to the big difference in solver time, how could we figure out what 
causes it, in order to use this knowledge to improve the native solver time ?


I mean what debug/verbose options could help us have a clue ?

Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating point calculations are more
efficient/precise in webassembly ?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different results
on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations






Re: GLPSOL in webassemby faster than native ?

2020-09-21 Thread Andrew Makhorin
On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:
> Hello Andrew !
> 
> Are you saying that floating point calculations are more 
> efficient/precise in webassembly ?

No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule produces different results
on different platforms.

> 
> Cheers !
> 
> On 21/9/20 15:08, Andrew Makhorin wrote:
> > > Can someone give a possible explanation ?
> > 
> > floating-point computations
> 
> 



Re: GLPSOL in webassemby faster than native ?

2020-09-21 Thread Domingo Alvarez Duarte

Hello Andrew !

Are you saying that floating point calculations are more 
efficient/precise in webassembly ?


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations




Re: GLPSOL in webassemby faster than native ?

2020-09-21 Thread Andrew Makhorin
> Can someone give a possible explanation ?

floating-point computations