Bug#712526: freefem++: FreeFem++-mpi not working

2013-08-23 Thread Giuseppe Pitton
Dear Dimitrios,
from the FreeFem++ mailing list:
http://ljll.math.upmc.fr/pipermail/freefempp/2013-July/002839.html
we now know that Hypre was disabled by the developers for some reason. Starting
with version 3.25 it is possible to configure with the option "--enable-hypre",
which correctly builds the hypre_FreeFem.so file. It worked for me.
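For anyone who wants to try it, a minimal sketch of the build (assuming an
out-of-tree build directory next to the FreeFem++ 3.25 sources; adjust paths
and options to your system):

$ ../configure --enable-hypre --enable-download
$ make
$ make install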
Kind regards.
Giuseppe



Bug#712526: freefem++: FreeFem++-mpi not working

2013-06-18 Thread Giuseppe Pitton
Dear Dimitris,
here are some thoughts on how to solve the issue (I tried to make the HYPRE
solver work).
While installing FreeFem++-3.23 from source, I noticed that nothing was built
in the folder ff++/downloads/hypre. So I compiled hypre-2.9.0b from source
manually, changed the paths in the file ff++/src/solver/makefile-sparsesolver.inc,
and commented out the non-HYPRE lines in ff++/src/solver/makefile. Please note
that I had to modify several other lines in
ff++/src/solver/makefile-sparsesolver.inc, in particular:
- on line 56, after MPI_INCLUDE = there is a /I/ that should instead be a -I
- on line 77, the suffix should be changed to so
- several paths should be adapted to the system in use (for instance lines 87,
  121, 126, 130, 135 and of course lines 185 and following)
- lines 99 and 100 should be commented out
With those changes in place, moving to the directory ff++/src/solver and
running make produces a hypre_FreeFem.so, which I moved to the FreeFem++
library directory (/usr/local/lib/ff++/3.23/lib/ on my system).
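Condensed, the procedure was roughly the following; the hypre install prefix
is only an example, and the edits to makefile-sparsesolver.inc described above
still have to be made by hand:

$ # build hypre from source (install prefix is illustrative)
$ cd hypre-2.9.0b/src
$ ./configure --prefix=/opt/hypre-2.9.0b
$ make install
$ # rebuild the FreeFem++ solver add-on after editing the makefiles
$ cd ff++/src/solver
$ make
$ cp hypre_FreeFem.so /usr/local/lib/ff++/3.23/lib/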
Now HYPRE runs, but it does not work; see this test case for instance:

$ ff-mpirun -np 2 chaleur3d-hypre.edp 
initparallele rank 0 on 2
-- FreeFem++ v  3.23 (date Jeu  6 jui 2013 20:58:44 CEST)
 Load: lg_fem lg_mesh lg_mesh3 eigenvalue parallelempi 
(large output)
  -- Square mesh : nb vertices  =441 ,  nb triangles = 800 ,  nb boundary edges 80
  -- Square mesh : nb vertices  =441 ,  nb triangles = 800 ,  nb boundary edges 80
  -- Build Nodes/DF on mesh :   n.v. 9261, n. elmt. 48000, n b. elmt. 4800
 nb of Nodes 68921nb of DoF   68921  DFon=1100
  -- FESpace: Nb of Nodes 68921 Nb of DoF 68921
  -- Build Nodes/DF on mesh :   n.v. 9261, n. elmt. 48000, n b. elmt. 4800
 nb of Nodes 68921nb of DoF   68921  DFon=1100
  -- FESpace: Nb of Nodes 68921 Nb of DoF 68921
###DEFAULT PARAMETERS WILL BE SET###
SOLVER: AMG-GMRES
tgv1e+10
time 0.01
tgv1e+10
time 0.01
 *** Process received signal ***
 Signal: Segmentation fault: 11 (11)
 Signal code: Address not mapped (1)
 Failing at address: 0x7f88d259d880
 *** Process received signal ***
 Signal: Segmentation fault: 11 (11)
 Signal code: Address not mapped (1)
 Failing at address: 0x7fe9f2d9d5c0
 *** End of error message ***
 *** End of error message ***
--
mpirun noticed that process rank 1 with PID 550 on node (...) exited on signal 11 (Segmentation fault: 11).
--



Hope this helps.
Best regards.
Giuseppe


On 17 Jun 2013, at 12:18, Dimitrios Eftaxiopoulos wrote:

> Hello Giuseppe,
> Thank you for raising the MPI-related issues.
> The failure to load msh3 seems to me to be a more general issue, not related
> to the parallel solver in particular.
> For the rest of the problems I do not have an immediate answer.
> I plan to try to upload freefem++-3.23 to unstable shortly, so I will have a
> look at these issues.
> 
> Best regards
> Dimitris 
> 
Bug#712526: freefem++: FreeFem++-mpi not working

2013-06-16 Thread Giuseppe Pitton
Package: freefem++
Version: 3.19.1-1
Severity: normal

Dear Maintainer,
I installed freefem++ on a default Wheezy (7.1) system with apt-get, and I also
installed freefem++-dev. I experienced the same issue on Wheezy (7.0).
FreeFem++ works properly, but the parallel version FreeFem++-mpi does not.
I tried running the built-in examples from the examples++-mpi folder and they
do not work. The example files can be found at the following URL:
http://www.freefem.org/ff%2B%2B/ff%2B%2B/examples%2B%2B-mpi/
In particular, I tried running some test cases with both of the following
commands:
FreeFem++-mpi chaleur3D-hips.edp
ff-mpirun -np 2 chaleur3D-mumps.edp
I obtain two kinds of errors: either the msh3 library is missing or the
parallel solver libraries are missing, for instance:

$ FreeFem++-mpi chaleur3D-pastix.edp
initparallele rank 0 on 1
-- FreeFem++ v  3.190001 (date Mer  9 mai 2012 21:50:21 CEST)
 Load: lg_fem lg_mesh lg_mesh3 eigenvalue parallelempi
1 :  // other
2 : load "msh3"
load error : msh3
 fail :
list  prefix: './' '/usr/lib/x86_64-linux-gnu/freefem++/' list  suffix : '' , '.so'

 Error line number 2, in file chaleur3D-pastix.edp, before  token msh3
Error load
  current line = 2 mpirank 0 / 1
Compile error : Error load
line number :2, msh3
error Compile error : Error load
line number :2, msh3
 code = 1 mpirank: 0
FreeFem++-mpi finalize correctly .

$ FreeFem++-mpi chaleur3D-superludist.edp
initparallele rank 0 on 1
-- FreeFem++ v  3.190001 (date Mer  9 mai 2012 21:50:21 CEST)
 Load: lg_fem lg_mesh lg_mesh3 eigenvalue parallelempi
1 :  // other
2 :  // NBPROC 2
3 : // ff-mpirun -np 4 chaleur3D-superludist.edp -glut ffglut  -n 20 -op 1 -dt 0.01 -niter 10
4 :
5 : load "real_SuperLU_DIST_FreeFem"
load error : real_SuperLU_DIST_FreeFem
 fail :
list  prefix: './' '/usr/lib/x86_64-linux-gnu/freefem++/' list  suffix : '' , '.so'

 Error line number 5, in file chaleur3D-superludist.edp, before  token real_SuperLU_DIST_FreeFem
Error load
  current line = 5 mpirank 0 / 1
Compile error : Error load
line number :5, real_SuperLU_DIST_FreeFem
error Compile error : Error load
line number :5, real_SuperLU_DIST_FreeFem
 code = 1 mpirank: 0
FreeFem++-mpi finalize correctly .

I partially solved these issues by compiling FreeFem++ v3.23 from source with
the configure command:
../configure '--enable-download'
followed by make and make install as usual.
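Spelled out, the build sequence was essentially the following (run from a
separate build directory; installing under /usr/local normally requires root):

$ ../configure '--enable-download'
$ make
$ make check
$ make install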
This, however, does not completely solve the problem, since make check reports
only 80 out of 84 tests passed.

I have gcc 4.7.2 and openmpi 1.4.5.
I got the HIPS, MUMPS and SuperLU parallel solvers to work, while HYPRE, pARMS
and PaStiX still do not. Here is one example of a working parallel simulation
with HIPS:

$ ff-mpirun -np 2 chaleur3D-hips.edp
'/opt/openmpi/1.6.4/bin/mpirun' -np 2 /usr/local/bin/FreeFem++-mpi chaleur3D-hips.edp
initparallele rank 0 on 2
-- FreeFem++ v  3.23 (date Sun Jun 16 19:29:19 CEST 2013)
 Load: lg_fem lg_mesh lg_mesh3 eigenvalue parallelempi
(large output...)
   ~Hips_Solver S:0
times: compile 0.01s, execution 17.71s,  mpirank:1
times: compile 0.01s, execution 18.46s,  mpirank:0
  We forget of deleting   3 Nb pointer,   0Bytes  ,  mpirank 0
 CodeAlloc : nb ptr  3401,  size :382192 mpirank: 0
Bien: On a fini Normalement
  We forget of deleting   3 Nb pointer,   0Bytes  ,  mpirank 1
 CodeAlloc : nb ptr  3401,  size :382192 mpirank: 1
FreeFem++-mpi finalize correctly .
FreeFem++-mpi finalize correctly .

and an example of a failing simulation with HYPRE:

$ ff-mpirun -np 2 chaleur3D-hypre.edp
'/opt/openmpi/1.6.4/bin/mpirun' -np 2 /usr/local/bin/FreeFem++-mpi chaleur3D-hypre.edp
initparallele rank 0 on 2
-- FreeFem++ v  3.23 (date Sun Jun 16 19:29:19 CEST 2013)
 Load: lg_fem lg_mesh lg_mesh3 eigenvalue parallelempi
1 :
  current line = 5 mpirank 1 / 2
2 :  // other
3 : load "msh3" (load: dlopen ../examples++-load/msh3.so 0x1416e60)  load:
msh3

4 : load "medit" (load: dlopen ../examples++-load/medit.so 0x14187f0)
5 : load "hypre_FreeFem"
load error : hypre_FreeFem
 fail :
list  prefix: '../examples++-load/' '' './' list  suffix : '' , '.so'

 Error line number 5, in file chaleur3D-hypre.edp, before  token hypre_FreeFem
Error load
  current line = 5 mpirank 0 / 2
Compile error : Error load
line number :5, hypre_FreeFem
error Compile error : Error load
line number :5, hypre_FreeFem
 code = 1 mpirank: 0
FreeFem++-mpi finalize correctly .
--
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--

The other non-working solvers give the same error.




-- System Information:
Debian Release: 7.1
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 3.2.0-4-amd64 (SMP w/2 CPU cores)

Bug#651417: jovie: Found service yovie no service kttsd

2011-12-08 Thread pitton
Source: jovie
Severity: important

Dear Maintainer,



-- System Information:
Debian Release: wheezy/sid
  APT prefers sid
  APT policy: (500, 'sid'), (500, 'unstable')
Architecture: i386 (i686)

Kernel: Linux 3.1.0-1-686-pae (SMP w/2 CPU cores)
Locale: LANG=ru_UA.UTF-8, LC_CTYPE=ru_UA.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash


