Re: [OMPI users] growing memory use from MPI application

2019-06-20 Thread Yann Jobic via users
Hi, On 6/20/2019 at 3:31 PM, Noam Bernstein via users wrote: On Jun 20, 2019, at 4:44 AM, Charles A Taylor wrote: This looks a lot like a problem I had with OpenMPI 3.1.2. I thought the fix had landed in 4.0.0, but you might want to check the code to be sure
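If it helps to confirm which Open MPI release an application is actually linked against (for instance, whether the 4.0.0 fix mentioned above is present), a small check like the following can be used. This is a generic MPI-3 facility, not something from the thread itself:

#include <mpi.h>
#include <stdio.h>

/* Print the MPI library version string at startup, e.g. to verify the
 * application really runs against Open MPI >= 4.0.0. */
int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, rank;

    MPI_Init(&argc, &argv);
    MPI_Get_library_version(version, &len);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("%s\n", version);

    MPI_Finalize();
    return 0;
}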

Re: [OMPI users] Rounding errors and MPI

2017-01-16 Thread Yann Jobic
Hi, Is there an overlapping section in the MPI part? Otherwise, please check: - the declaration type of all the variables (consistency) - correct initialization of the array "wave" (to zero) - maybe use temporary variables, like: real size1, size2, factor; size1 = dx+dy; size2 = dhx+dhy; factor =
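To illustrate the temporary-variable suggestion above, here is a small self-contained C analogue. The names dx, dy, dhx, dhy and the array "wave" are taken from the snippet; the values and the update loop are made up for the example:

#include <stdio.h>

int main(void)
{
    /* Stand-in values for the original dx, dy, dhx, dhy (hypothetical). */
    double dx = 0.1, dy = 0.1, dhx = 0.05, dhy = 0.05;
    double wave[4] = {0.0, 0.0, 0.0, 0.0};   /* initialized to zero, as suggested */

    /* Temporary variables, as suggested in the reply, so every use of the
     * sums sees exactly the same rounded value. */
    double size1  = dx + dy;
    double size2  = dhx + dhy;
    double factor = size1 / size2;

    for (int i = 0; i < 4; ++i)
        wave[i] += factor;

    printf("factor = %g, wave[0] = %g\n", factor, wave[0]);
    return 0;
}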

[OMPI users] MPI_Sendrecv datatype memory bug ?

2016-11-24 Thread Yann Jobic
Hi all, I'm going crazy about a possible bug in my code. I'm using a derived MPI datatype in a sendrecv call. The problem is that the memory footprint of my code grows as time increases. The problem does not show up with a regular datatype such as MPI_DOUBLE. I don't have this problem for
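A common cause of exactly this symptom is committing a fresh derived datatype on every iteration without freeing it. The following is a minimal sketch of that pattern, not the poster's code; the vector layout, counts, and ring exchange are assumptions:

#include <mpi.h>

/* Hedged sketch: a derived datatype used in MPI_Sendrecv inside a loop.
 * If MPI_Type_free is omitted, each iteration leaks a committed type and
 * the memory footprint grows over time. Layout and counts are made up. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[100] = {0}, recvbuf[100];
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    for (int step = 0; step < 1000; ++step) {
        MPI_Datatype column;
        MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &column);  /* every 10th double */
        MPI_Type_commit(&column);

        MPI_Sendrecv(sendbuf, 1, column, right, 0,
                     recvbuf, 1, column, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Type_free(&column);  /* without this call, memory use keeps growing */
    }

    MPI_Finalize();
    return 0;
}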

Re: [OMPI users] valgrind invalid read

2016-11-22 Thread Yann Jobic
on review. Meanwhile, you can manually apply the patch available at https://github.com/open-mpi/ompi/pull/2418 Cheers, Gilles On 11/18/2016 9:34 PM, Yann Jobic wrote: Hi, I'm using valgrind 3.12 with openmpi 2.0.1. The code simply sends an integer to another process with: #include

[OMPI users] valgrind invalid read

2016-11-18 Thread Yann Jobic
Hi, I'm using valgrind 3.12 with openmpi 2.0.1. The code simply sends an integer to another process with: #include #include #include int main (int argc, char **argv) { const int tag = 13; int size, rank; MPI_Init(&argc, &argv); MPI_Comm_size(MPI_COMM_WORLD, &size); if (size < 2) {
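For reference, a self-contained reconstruction of that test case follows. The header names, the receive side, and the error handling are assumptions, since the snippet is truncated and the original include names were lost in the archive:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Hedged reconstruction of the minimal test case: rank 0 sends one
 * integer to rank 1. The parts after the truncation point are guesses. */
int main(int argc, char **argv)
{
    const int tag = 13;
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (size < 2) {
        fprintf(stderr, "need at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0) {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}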

[OMPI users] infiniband question

2009-09-17 Thread Yann JOBIC
value 4 (2048 bytes) And then, the program hangs. I thought I only need RDMA communications, and don't need the DAPL lib (with the IPoIB module). Am I wrong? Thanks, Yann -- ___ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS UMR 6595 Technopôle de Château

Re: [OMPI users] SVD with mpi

2009-09-09 Thread Yann JOBIC
Attila Börcs wrote: Hi Everyone, I'd like to achieve singular value decomposition with MPI. I've heard about the Lanczos algorithm and some other kinds of algorithms for SVD, but I need some help on this topic. Does anybody know of some usable code or a tutorial about parallel SVD? Best Regards,

Re: [OMPI users] Program runs successfully...but with error messages displayed

2009-08-27 Thread Yann JOBIC
rs You can define some default parameters in the $OMPIDIR/etc/openmpi-mca-params.conf file. For instance, you can add: # Exclude openib BTL, not currently supported btl = ^openib,ofud Yann -- _______ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS UMR 6595 Technopôle

Re: [OMPI users] Help: orted: command not found.

2009-08-24 Thread Yann JOBIC
http://www.open-mpi.org/mailman/listinfo.cgi/users You may use the variable OPAL_PREFIX, which points to your installation directory. Yann -- ___ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS UMR 6595 Technopôle de Château Gombert 5 rue Enrico Fermi 13453

Re: [OMPI users] pipes system limit

2009-08-07 Thread Yann JOBIC
08/07/09 11:21, Yann JOBIC wrote: Hello all, I'm using hpc8.2: Lidia-jobic% ompi_info Displaying Open MPI information for 32-bit ... Package: ClusterTools 8.2 Open MPI: 1.3.3r21324-ct8.2-b09j-r40 [...] And I've got an X4600 machine (8*4 cores). When I'm trying to ru

[OMPI users] pipes system limit

2009-08-07 Thread Yann JOBIC
, Yann -- ___ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS UMR 6595 Technopôle de Château Gombert 5 rue Enrico Fermi 13453 Marseille cedex 13 Tel : (33) 4 91 10 69 39 ou (33) 4 91 10 69 43 Fax : (33) 4 91 10 69 69

[OMPI users] MPI_Lookup_name

2009-06-09 Thread Yann JOBIC
in file rml_oob_contact.c at line 55 [homard:26319] [[34061,0],0] ORTE_ERROR_LOG: Bad parameter in file base/rml_base_contact.c at line 91 Have you succeeded in making MPI_Lookup_name work? Thanks, Yann -- ___ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS
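For context, the name-service calls involved look roughly like this. This is a minimal sketch of the standard MPI-2 publish/lookup pattern, not the poster's actual code; the service name is made up, and on Open MPI the lookup typically only succeeds when both sides share a naming service (e.g. the same mpirun or a running ompi-server):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Sketch: one side publishes a port under "my_service", the other looks
 * it up by name. Run the server side first; the accept/connect steps are
 * only indicated in comments. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    char port[MPI_MAX_PORT_NAME];

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        MPI_Publish_name("my_service", MPI_INFO_NULL, port);
        printf("published port %s\n", port);
        /* ... MPI_Comm_accept(port, ...) would follow here ... */
        MPI_Unpublish_name("my_service", MPI_INFO_NULL, port);
        MPI_Close_port(port);
    } else {
        MPI_Lookup_name("my_service", MPI_INFO_NULL, port);
        printf("looked up port %s\n", port);
        /* ... MPI_Comm_connect(port, ...) would follow here ... */
    }

    MPI_Finalize();
    return 0;
}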

[OMPI users] strange error, seems inable to launch job

2009-02-11 Thread Mr Yann JOBIC
extra LIBS: -ldl -Wl,--export-dynamic -lnsl -lutil -- ___ Yann JOBIC HPC engineer Polytech Marseille DME IUSTI-CNRS UMR 6595 Technopôle de Château Gombert 5 rue Enrico Fermi 13453 Marseille cedex 13 Tel : (33) 4 91 10 69 39 ou (33) 4 91 10 69 43 Fax : (33) 4 91 10

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Yann JOBIC
error as you did when you tried to link things with the cc program. If you are using cc to link, could you possibly try to use mpif90 to link your code? --td Date: Tue, 07 Oct 2008 16:55:14 +0200 From: "Yann JOBIC" <jo...@polytech.univ-mrs.fr> Subject: [OMPI users] OMPI link error wit

Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-07 Thread Yann JOBIC
is running and gives some good results (so far, for some small cases). However, I don't know whether we'll hit some strange behavior in some cases. Yann Date: Tue, 07 Oct 2008 16:55:14 +0200 From: "Yann JOBIC" <jo...@polytech.univ-mrs.fr> Subject: [OMPI users] OMPI link error with petsc 2.

[OMPI users] OMPI link error with petsc 2.3.3

2008-10-07 Thread Yann JOBIC
Hello, I'm using openmpi 1.3r19400 (ClusterTools 8.0), with Sun Studio 12 and Solaris 10u5. I've got this error when linking a PETSc code: ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes: (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file