Re: [OMPI users] growing memory use from MPI application

2019-06-20 Thread Yann Jobic via users

Hi,

On 6/20/2019 at 3:31 PM, Noam Bernstein via users wrote:



On Jun 20, 2019, at 4:44 AM, Charles A Taylor wrote:


This looks a lot like a problem I had with Open MPI 3.1.2. I thought the fix landed in 4.0.0, but you might want to check the code to be sure there wasn't a regression in 4.1.x. Most of our codes are still running 3.1.2, so I haven't built anything beyond 4.0.0, which definitely included the fix.


Unfortunately, 4.0.0 behaves the same.

One thing that I'm wondering if anyone familiar with the internals can explain is how you get a memory leak that isn't freed when the program ends. Doesn't that suggest that it's something lower level, like maybe a kernel issue?


Maybe it's only data sitting in the page cache, which is reported as "used" but which the kernel can reclaim when needed. Have you tried to use the whole memory again with your code? It should work.
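For instance, a quick check (assuming a reasonably recent Linux; dropping the caches needs root):

free -m                                   # look at the "available" column
grep MemAvailable /proc/meminfo
sync; echo 3 > /proc/sys/vm/drop_caches   # then re-check free

If "available" stays high, or "used" falls back after dropping the caches, the growth is reclaimable page cache rather than a real leak.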


Yann



Noam


U.S. NAVAL RESEARCH LABORATORY

Noam Bernstein, Ph.D.
Center for Materials Physics and Technology
U.S. Naval Research Laboratory
T +1 202 404 8628  F +1 202 404 7546
https://www.nrl.navy.mil



Re: [OMPI users] Rounding errors and MPI

2017-01-16 Thread Yann Jobic

Hi,

Is there an overlap (halo/ghost region) in the MPI decomposition?

Otherwise, please check:
- the declared types of all the variables (consistency)
- the correct initialization of the array "wave" (to zero)
- maybe use temporary variables, like
real size1, size2, factor
size1  = dx + dy
size2  = dhx + dhy
factor = dt*size2/(size1**2)
and then, in the big loop:
wave(it,j,k) = wave(it,j,k)*factor
Since the factor is computed once, every rank then applies exactly the same rounded value. The code will also run faster.
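As a small, standalone illustration of why the grouping matters in single precision (not code from this thread; the constants are arbitrary):

#include <stdio.h>

int main(void) {
    float dt = 0.001f, dx = 0.3f, dy = 0.7f, dhx = 0.11f, dhy = 0.13f;
    float w  = 1.2345678f;

    /* evaluated left to right, as in the original loop */
    float a = w * dt / (dx + dy) * (dhx + dhy) / (dx + dy);

    /* precomputed factor, as suggested above */
    float factor = dt * (dhx + dhy) / ((dx + dy) * (dx + dy));
    float b = w * factor;

    printf("%.9e\n%.9e\n", a, b);   /* may differ in the last digit */
    return 0;
}

Both groupings are equivalent in exact arithmetic, so neither result is wrong; they simply round differently, which is the kind of last-digit difference shown in the diff below.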

Yann

On 16/01/2017 at 14:28, Oscar Mojica wrote:

Hello everybody

I'm having a problem with a parallel program written in Fortran. I have a 3D array which is divided in two along the third dimension, so that two processes perform some operations on their part of the cube using a subroutine. Each process also has the complete cube. Before each process calls the subroutine, I compare its sub-array with the corresponding part of the whole cube; they are the same. The subroutine simply performs element-wise operations in a loop, i.e.



do k=k1,k2
  do j=1,nhx
    do it=1,nt
      wave(it,j,k) = wave(it,j,k)*dt/(dx+dy)*(dhx+dhy)/(dx+dy)
    end do
  end do
end do


where wave is the 3D array and the other values are constants.


After leaving the subroutine I notice that the values calculated by process 1 differ from the values I get if the whole cube is passed to the subroutine (with the subroutine still working only on that process's part), i.e.



--- complete  2017-01-12 10:30:23.0 -0400
+++ half      2017-01-12 10:34:57.0 -0400
@@ -4132545,7 +4132545,7 @@
   -2.5386049E-04
   -2.9899486E-04
   -3.4697619E-04
-  -3.7867704E-04
+ -3.7867710E-04
0.000E+00
0.000E+00
0.000E+00


When I do this with more processes the same thing happens for all processes other than zero. I find it very strange. I am disabling optimization when compiling.

In the end the results are visually the same, but not numerically identical. I am working with single precision.

Any idea what may be going on? I do not know if this is related to MPI.



Oscar Mojica
Geologist Ph.D. in Geophysics
SENAI CIMATEC Supercomputing Center
Lattes: http://lattes.cnpq.br/0796232840554652




[OMPI users] MPI_Sendrecv datatype memory bug ?

2016-11-24 Thread Yann Jobic

Hi all,

I'm going crazy over a possible bug in my code. I'm using a derived MPI datatype in an MPI_Sendrecv call.
The problem is that the memory footprint of my code grows as time increases.

The problem does not show up with a plain datatype such as MPI_DOUBLE.
I don't have this problem with Open MPI 1.8.4, but it is present in 1.10.1 and 2.0.1.


The key parts of the code are as follows (I'm using a 1D array with an indexing macro in order to address it as 3D):


Definition of the datatype:

  MPI_Type_vector( Ny, 1, Nx, MPI_DOUBLE, &mpi.MPI_COL );
  MPI_Type_commit( &mpi.MPI_COL );

And the sendrecv part:

  MPI_Sendrecv( &(thebigone[_(1,0,k)]),    1, mpi.MPI_COL, mpi.left,  3,
                &(thebigone[_(Nx-1,0,k)]), 1, mpi.MPI_COL, mpi.right, 3,
                mpi.com, &status );

Is it coming from my code?

I have isolated the communications in a small code (500 lines). I can provide it in order to reproduce the problem.
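The isolated reproducer itself is not included here; a minimal sketch of the pattern described above (the array sizes, the halo loop and the iteration count are assumptions, not the original code) would look roughly like this:

#include <mpi.h>
#include <stdlib.h>

/* 1D storage viewed as 3D, x fastest */
#define _(i,j,k) ((i) + Nx*((j) + Ny*(k)))

int main(int argc, char **argv)
{
    const int Nx = 64, Ny = 64, Nz = 64;
    int rank, size, left, right, iter, k;
    double *thebigone = calloc((size_t)Nx * Ny * Nz, sizeof(double));
    MPI_Datatype MPI_COL;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    left  = (rank - 1 + size) % size;
    right = (rank + 1) % size;

    /* one column of a k-plane: Ny doubles strided by Nx */
    MPI_Type_vector(Ny, 1, Nx, MPI_DOUBLE, &MPI_COL);
    MPI_Type_commit(&MPI_COL);

    /* repeated halo exchange; watch the resident set size over time */
    for (iter = 0; iter < 100000; iter++) {
        for (k = 0; k < Nz; k++) {
            MPI_Sendrecv(&thebigone[_(1,0,k)],    1, MPI_COL, left,  3,
                         &thebigone[_(Nx-1,0,k)], 1, MPI_COL, right, 3,
                         MPI_COMM_WORLD, &status);
        }
    }

    MPI_Type_free(&MPI_COL);
    free(thebigone);
    MPI_Finalize();
    return 0;
}

Running something like this under a memory monitor (for example watching the RSS column in top) is enough to see whether the footprint keeps growing with the number of iterations.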


Thanks,

Yann



Re: [OMPI users] valgrind invalid read

2016-11-22 Thread Yann Jobic

Hi,

I manually applied the change to the file. Moreover, I also tried the 1.8.4 Open MPI version.

I still get this invalid read.

Am I doing something wrong?

Thanks,

Yann


On 22/11/2016 at 00:50, Gilles Gouaillardet wrote:

Yann,


this is a bug that was previously reported, and the fix is pending review.

meanwhile, you can manually apply the patch available at
https://github.com/open-mpi/ompi/pull/2418
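For instance (assuming a build from the 2.0.1 tarball; GitHub serves a pull request as a plain patch file, and it may need minor adjustment to apply cleanly to the release tree):

cd openmpi-2.0.1
curl -L https://github.com/open-mpi/ompi/pull/2418.patch | patch -p1

and then rebuild and reinstall.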



Cheers,


Gilles


On 11/18/2016 9:34 PM, Yann Jobic wrote:

Hi,

I'm using valgrind 3.12 with Open MPI 2.0.1.
The code simply sends an integer to another process with:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char **argv) {
  const int tag = 13;
  int size, rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (size < 2) {
  fprintf(stderr,"Requires at least two processes.\n");
  exit(-1);
  }

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
int i=3;
const int dest = 1;

MPI_Send(&i, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);

printf("Rank %d: sent int\n", rank);
  }
  if (rank == 1) {
int j;
const int src=0;
MPI_Status status;

MPI_Recv(&j, 1, MPI_INT, src, tag, MPI_COMM_WORLD, &status);
printf("Rank %d: Received: int = %d\n", rank,j);
  }

  MPI_Finalize();

  return 0;
}


I'm getting the error :
valgrind MPI wrappers 46313: Active for pid 46313
valgrind MPI wrappers 46313: Try MPIWRAP_DEBUG=help for possible options
valgrind MPI wrappers 46314: Active for pid 46314
valgrind MPI wrappers 46314: Try MPIWRAP_DEBUG=help for possible options
Rank 0: sent int
==46314== Invalid read of size 4
==46314==at 0x400A3D: main (basic.c:33)
==46314==  Address 0xffefff594 is on thread 1's stack
==46314==  in frame #0, created by main (basic.c:5)
==46314==
Rank 1: Received: int = 3

The invalid read is at the printf line.

Do you have any clue why I am getting it?

I ran the code with:
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -np 2 $prefix/bin/valgrind ./exe


Thanks in advance,

Yann




[OMPI users] valgrind invalid read

2016-11-18 Thread Yann Jobic

Hi,

I'm using valgrind 3.12 with Open MPI 2.0.1.
The code simply sends an integer to another process with:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char **argv) {
  const int tag = 13;
  int size, rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (size < 2) {
  fprintf(stderr,"Requires at least two processes.\n");
  exit(-1);
  }

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
int i=3;
const int dest = 1;

MPI_Send(&i, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);

printf("Rank %d: sent int\n", rank);
  }
  if (rank == 1) {
int j;
const int src=0;
MPI_Status status;

MPI_Recv(&j, 1, MPI_INT, src, tag, MPI_COMM_WORLD, &status);
printf("Rank %d: Received: int = %d\n", rank,j);
  }

  MPI_Finalize();

  return 0;
}


I'm getting the error :
valgrind MPI wrappers 46313: Active for pid 46313
valgrind MPI wrappers 46313: Try MPIWRAP_DEBUG=help for possible options
valgrind MPI wrappers 46314: Active for pid 46314
valgrind MPI wrappers 46314: Try MPIWRAP_DEBUG=help for possible options
Rank 0: sent int
==46314== Invalid read of size 4
==46314==at 0x400A3D: main (basic.c:33)
==46314==  Address 0xffefff594 is on thread 1's stack
==46314==  in frame #0, created by main (basic.c:5)
==46314==
Rank 1: Received: int = 3

The invalid read is at the printf line.

Do you have any clue why I am getting it?

I ran the code with:
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -np 2 $prefix/bin/valgrind ./exe


Thanks in advance,

Yann


[OMPI users] infiniband question

2009-09-17 Thread Yann JOBIC

Hi,

I'm new to InfiniBand.
I installed the rdma_cm, rdma_ucm and ib_uverbs kernel modules.

When I run a ring-test Open MPI code, I get:
[Lidia][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] 
Set MTU to IBV value 4 (2048 bytes)
[Lidia][0,1,1][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] 
Set MTU to IBV value 4 (2048 bytes)
[Lilou][0,1,0][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] 
Set MTU to IBV value 4 (2048 bytes)
[Lilou][0,1,0][btl_openib_endpoint.c:992:mca_btl_openib_endpoint_qp_init_query] 
Set MTU to IBV value 4 (2048 bytes)


And then the program hangs.

I thought I only needed RDMA communications, and didn't need the DAPL library (or the IPoIB module).

Am I wrong?

Thanks,

Yann



--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



Re: [OMPI users] SVD with mpi

2009-09-09 Thread Yann JOBIC

Attila Börcs wrote:

Hi Everyone,

I'd like to compute a singular value decomposition with MPI. I have heard about the Lanczos algorithm and some other algorithms for SVD, but I need some help on this topic. Does anybody know of some usable code or a tutorial on parallel SVD?


Best Regards,

Attila

If you need the full decomposition, ScaLAPACK is the best choice.
Otherwise, you may take a look at SLEPc (which uses the PETSc framework) to compute a few singular triplets iteratively.
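To give an idea of what the SLEPc route looks like, here is a rough skeleton (a sketch from memory, not code from this thread; call names and destroy signatures vary a bit between SLEPc versions, so check the manual for yours):

#include <slepcsvd.h>

int main(int argc, char **argv)
{
  Mat       A;
  SVD       svd;
  PetscInt  i, n = 30, Istart, Iend, nconv;
  PetscReal sigma;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* toy n x n matrix assembled in parallel (diagonal, just as a placeholder) */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++)
    MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* compute a few singular triplets */
  SVDCreate(PETSC_COMM_WORLD, &svd);
  SVDSetOperators(svd, A, NULL);   /* SVDSetOperator(svd, A) on older SLEPc */
  SVDSetFromOptions(svd);          /* solver, number of values, tolerances from the command line */
  SVDSolve(svd);
  SVDGetConverged(svd, &nconv);
  for (i = 0; i < nconv; i++) {
    SVDGetSingularTriplet(svd, i, &sigma, NULL, NULL);
    PetscPrintf(PETSC_COMM_WORLD, "sigma[%d] = %g\n", (int)i, (double)sigma);
  }

  SVDDestroy(&svd);
  MatDestroy(&A);
  SlepcFinalize();
  return 0;
}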


Yann


Re: [OMPI users] Program runs successfully...but with error messages displayed

2009-08-27 Thread Yann JOBIC

Jean Potsam wrote:

Dear All,
  I have installed Open MPI 1.3.2 on one of the nodes of our cluster and am running a simple hello-world MPI program. The program runs fine, but I get a lot of unexpected messages mixed in with the output.


##

jean@n06:~/examples$ mpirun -np 2 --host n06 hello_c
libibverbs: Fatal: couldn't read uverbs ABI version.
--
[[11410,1],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: n06

Another transport will be used instead, although this may result in
lower performance.
--
libibverbs: Fatal: couldn't read uverbs ABI version.

Hello, world, I am 0 of 2 and running on n06
Hello, world, I am 1 of 2 and running on n06


[n06:08470] 1 more process has sent help message help-mpi-btl-base.txt 
/ btl:no-nics
[n06:08470] Set MCA parameter "orte_base_help_aggregate" to 0 to see 
all help / error messages


##

Does anyone know why these messages appear and how to fix this?

Thanks 


Jean




You can define some default parameters in $OMPIDIR/etc/openmpi-mca-params.conf.

For instance, you can add:
# Exclude the openib BTL, not currently supported
btl = ^openib,ofud
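The same exclusion can also be applied per run on the command line, e.g. (adapted to the run shown above):

mpirun --mca btl ^openib,ofud -np 2 --host n06 hello_c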

Yann

--
_______

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



Re: [OMPI users] Help: orted: command not found.

2009-08-24 Thread Yann JOBIC

Lee Amy wrote:

Hi,

I run some programs using Open MPI 1.3.3, and when I execute the
command I encounter the following error messages.

sh: orted: command not found
--
A daemon (pid 6797) died unexpectedly with status 127 while attempting
to launch so we are aborting.

There may be more information reported by the environment (see above).

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--
--
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--
mpirun: clean termination accomplished

So could anyone tell me how to fix that problem?

Thanks.

Amy
  
You may set the environment variable OPAL_PREFIX, which points to your installation directory.
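For example (a sketch, assuming Open MPI is installed under /opt/openmpi on every node; adjust the path, and put the exports where non-interactive remote shells will see them):

export OPAL_PREFIX=/opt/openmpi
export PATH=$OPAL_PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH

Alternatively, "mpirun --prefix /opt/openmpi ..." forwards the installation prefix to the remote orted.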


Yann

--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



Re: [OMPI users] pipes system limit

2009-08-07 Thread Yann JOBIC

Rolf Vandevaart wrote:
This message is telling you that you have run out of file descriptors. 
I am surprised that the -mca parameter setting did not fix the problem.
Can you run limit or ulimit on your shell and send the information?  I 
typically set my limit to 65536 assuming the system allows it.


burl-16 58 =>limit descriptors
descriptors 65536
burl-16 59 =>

bash-3.00$ ulimit -n
65536
bash-3.00$


Rolf

Thanks for the fast answer!

I've done: limit descriptors 1024  (csh style)
And that's working fine. I had been using the default descriptor limit before.
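(For bash the equivalent is "ulimit -n 1024"; to raise the limit permanently on Solaris, the usual route, an assumption from general Solaris tuning rather than from this thread, is setting rlim_fd_cur/rlim_fd_max in /etc/system.)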

Thanks again,

Yann




On 08/07/09 11:21, Yann JOBIC wrote:

Hello all,

I'm using hpc8.2 :
Lidia-jobic% ompi_info
Displaying Open MPI information for 32-bit ...
Package: ClusterTools 8.2
   Open MPI: 1.3.3r21324-ct8.2-b09j-r40
[...]

And I've got an X4600 machine (8*4 cores).

When I try to run a 32-process job, I get:

Lidia-jobic% mpiexec --mca opal_set_max_sys_limits 1 -n 32 ./exe
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on 
number of pipes a process can open was reached in file 
base/iof_base_setup.c at line 112
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on 
number of pipes a process can open was reached in file 
odls_default_module.c at line 203
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on 
number of network connections a process can open was reached in file 
oob_tcp.c at line 446
-- 

Error: system limit exceeded on number of network connections that 
can be open


This can be resolved by setting the mca parameter 
opal_set_max_sys_limits to 1,
increasing your limit descriptor setting (using limit or ulimit 
commands),

or asking the system administrator to increase the system limit.
-- 



I tried ulimit and the MCA parameter; I've got no idea where to look next.

I've got the same computer under Linux, and there it works fine...

Have you seen this?
Do you know a way to bypass it?

Many thanks,

Yann








--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



[OMPI users] pipes system limit

2009-08-07 Thread Yann JOBIC

Hello all,

I'm using hpc8.2 :
Lidia-jobic% ompi_info
Displaying Open MPI information for 32-bit ...
Package: ClusterTools 8.2
   Open MPI: 1.3.3r21324-ct8.2-b09j-r40
[...]

And I've got an X4600 machine (8*4 cores).

When I try to run a 32-process job, I get:

Lidia-jobic% mpiexec --mca opal_set_max_sys_limits 1 -n 32 ./exe
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on number 
of pipes a process can open was reached in file base/iof_base_setup.c at 
line 112
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on number 
of pipes a process can open was reached in file odls_default_module.c at 
line 203
[Lidia:29384] [[61597,0],0] ORTE_ERROR_LOG: The system limit on number 
of network connections a process can open was reached in file oob_tcp.c 
at line 446

--
Error: system limit exceeded on number of network connections that can 
be open


This can be resolved by setting the mca parameter 
opal_set_max_sys_limits to 1,

increasing your limit descriptor setting (using limit or ulimit commands),
or asking the system administrator to increase the system limit.
--

I tried ulimit and the MCA parameter; I've got no idea where to look next.
I've got the same computer under Linux, and there it works fine...

Have you seen this?
Do you know a way to bypass it?

Many thanks,

Yann


--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



[OMPI users] MPI_Lookup_name

2009-06-09 Thread Yann JOBIC

Hi all,

I'm trying to get MPI_Lookup_name working.
The codes work fine with MPICH2.
I'm using Open MPI 1.3.2 (r21054, built from the tarball).

Here's the error message :
[homard:26336] *** An error occurred in MPI_Lookup_name
[homard:26336] *** on communicator MPI_COMM_WORLD
[homard:26336] *** MPI_ERR_NAME: invalid name argument
[homard:26336] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

The method I used:
TERM1 : ompi-server -d --report-uri test
TERM2 : mpirun -ompi-server test -np 1 server
TERM3 : mpirun -ompi-server test -np 1 client
Then I get the error.

Here are the codes:
http://www.latp.univ-mrs.fr/~jobic/server.c
http://www.latp.univ-mrs.fr/~jobic/client.c
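The linked files are not reproduced here; the usual publish/lookup pattern they presumably follow looks roughly like this single-file sketch ("my_service" is a made-up name, and error checking is omitted):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* run as:  mpirun -ompi-server test -np 1 ./a.out server
   and      mpirun -ompi-server test -np 1 ./a.out client  */
int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;
    int data = 42;

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);                  /* port string from the runtime */
        MPI_Publish_name("my_service", MPI_INFO_NULL, port); /* register with the name server */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Send(&data, 1, MPI_INT, 0, 0, inter);
        MPI_Unpublish_name("my_service", MPI_INFO_NULL, port);
        MPI_Close_port(port);
    } else {
        MPI_Lookup_name("my_service", MPI_INFO_NULL, port);  /* MPI_ERR_NAME if the name is not found */
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Recv(&data, 1, MPI_INT, 0, 0, inter, MPI_STATUS_IGNORE);
        printf("client received %d\n", data);
    }

    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}

MPI_ERR_NAME from MPI_Lookup_name generally means the lookup did not find the published name, which usually points at the client and the server not talking to the same name server (the ompi-server instance).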

I also have some strange errors, like :
[homard:26319] [[34061,0],0] ORTE_ERROR_LOG: Bad parameter in file 
base/rml_base_contact.c at line 153
[homard:26319] [[34061,0],0] ORTE_ERROR_LOG: Bad parameter in file 
rml_oob_contact.c at line 55
[homard:26319] [[34061,0],0] ORTE_ERROR_LOG: Bad parameter in file 
base/rml_base_contact.c at line 91


Have you succeeded in making MPI_Lookup_name work?

Thanks,

Yann



--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 



[OMPI users] strange error, seems inable to launch job

2009-02-11 Thread Mr Yann JOBIC

Hello all,

I compiled Open MPI v1.3 (tarball) with the Intel compiler on Debian Etch. Everything went fine, thanks to the FAQ (quite complete).


But when I run a job, I get this error:
Trixy03-jobic% mpirun --verbose --debug-daemons -np 4 ./exe
[Trixy03:15140] [[19525,0],0] orted_cmd: received add_local_procs
[Trixy03:15140] [[19525,0],0] node[0].name Trixy03 daemon 0 arch ffc91200
--
mpirun was unable to launch the specified application as it encountered 
an error:


Error: pipe function call failed when setting up I/O forwarding subsystem
Node: Trixy03

while attempting to start process rank 0.
--

I don't understand what's going on or how to debug it...
I compiled MPICH1, and with it I can successfully launch a job.

Have you got any ideas about what's going on?

Many thanks,

Yann

PS: Some possibly interesting information:
Open MPI SVN revision: r20295
Open MPI release date: Jan 19, 2009
Open RTE: 1.3
Build CFLAGS: -DNDEBUG -mp1 -m64 -O3 -fno-alias -msse3 -static-intel -finline-functions -fno-strict-aliasing -restrict -fexceptions -pthread -fvisibility=hidden
Build CXXFLAGS: -DNDEBUG -mp1 -m64 -O3 -fno-alias -msse3 -static-intel -finline-functions -fexceptions -pthread
Build FFLAGS: -mp1 -m64 -O3 -fno-alias -msse3 -static-intel -fexceptions
Build FCFLAGS: -mp1 -m64 -O3 -fno-alias -msse3 -static-intel -fexceptions -fexceptions
Build LDFLAGS: -export-dynamic -fexceptions
Build LIBS: -lnsl -lutil
Wrapper extra CFLAGS: -fexceptions -pthread
Wrapper extra CXXFLAGS: -fexceptions -pthread
Wrapper extra FFLAGS: -fexceptions
Wrapper extra FCFLAGS: -fexceptions
Wrapper extra LDFLAGS:
Wrapper extra LIBS: -ldl -Wl,--export-dynamic -lnsl -lutil




--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 


Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Yann JOBIC

Hello,

I used cc to compile. I tried to use mpicc/mpif90 to compile PETSc, but it changed nothing.

I still have the same error.

I'm giving you the whole compile process:
4440p-jobic% gmake solv_ksp
mpicc -o solv_ksp.o -c -fPIC -m64 -I/opt/lib/petsc 
-I/opt/lib/petsc/bmake/amd-64-openmpi_no_debug -I/opt/lib/petsc/include 
-I/opt/SUNWhpc/HPC8.0/include -I/opt/SUNWhpc/HPC8.0/include/amd64 -I. 
-D__SDIR__="" solv_ksp.c
mpicc -fPIC -m64  -o solv_ksp solv_ksp.o 
-R/opt/lib/petsc/lib/amd-64-openmpi_no_debug 
-L/opt/lib/petsc/lib/amd-64-openmpi_no_debug -lpetscsnes -lpetscksp 
-lpetscdm -lpetscmat -lpetscvec -lpetsc   -lX11  -lsunperf -lsunmath -lm 
-ldl -R/opt/mx/lib/amd64 -R/opt/SUNWhpc/HPC8.0/lib/amd64 
-R/opt/SUNWhpc/HPC8.0/lib/amd64 -L/opt/SUNWhpc/HPC8.0/lib/amd64 -lmpi 
-lopen-rte -lopen-pal -lnsl -lrt -lm -lsocket -lmpi_f77 -lmpi_f90 
-R/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/lib/amd64 
-L/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/prod/lib/amd64 
-L/opt/SUNWspro/prod/lib/amd64 -R/usr/ccs/lib/amd64 -L/usr/ccs/lib/amd64 
-R/lib/64 -L/lib/64 -R/usr/lib/64 -L/usr/lib/64 -lm -lfui -lfai -lfsu 
-lsunmath -lmtsk -lm   -ldl  -R/usr/ucblib

ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken
/usr/bin/rm -f solv_ksp.o


Thanks for your help,

Yann

Terry Dontje wrote:

Yann,

How were you trying to link your code with PETSc? Did you use the mpif90 or mpif77 wrappers, or were you using the cc or mpicc wrappers? I ran some basic tests that exercise the usage of MPI_STATUS_IGNORE using mpif90 (and mpif77), and it works fine. However, I was able to generate a similar error to yours when I tried to link things with the cc program.
If you are using cc to link, could you possibly try to use mpif90 to link your code?


--td

Date: Tue, 07 Oct 2008 16:55:14 +0200
From: "Yann JOBIC" <jo...@polytech.univ-mrs.fr>
Subject: [OMPI users] OMPI link error with petsc 2.3.3
To: Open MPI Users <us...@open-mpi.org>

Hello,

I'm using Open MPI 1.3r19400 (ClusterTools 8.0), with Sun Studio 12 and Solaris 10u5.


I've got this error when linking a PETSc code :
ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken


Isn't that very strange?

Have you got any idea how to solve it?

Many thanks,

Yann

  





--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 


Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-07 Thread Yann JOBIC

Terry Dontje wrote:

Yann,

I'll take a look at this; it looks like there definitely is an issue between our libmpi.so and libmpi_f90.so files.

I noticed that the linkage message is a warning. Does the code actually fail when running?


--td

Thanks for your fast answer.
No, the program runs and gives good results (so far, for some small cases).

However, I don't know whether we'll see some strange behavior in other cases.

Yann


Date: Tue, 07 Oct 2008 16:55:14 +0200
From: "Yann JOBIC" <jo...@polytech.univ-mrs.fr>
Subject: [OMPI users] OMPI link error with petsc 2.3.3
To: Open MPI Users <us...@open-mpi.org>

Hello,

I'm using Open MPI 1.3r19400 (ClusterTools 8.0), with Sun Studio 12 and Solaris 10u5.


I've got this error when linking a PETSc code :
ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken


Isn't that very strange?

Have you got any idea how to solve it?

Many thanks,

Yann




--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69 


[OMPI users] OMPI link error with petsc 2.3.3

2008-10-07 Thread Yann JOBIC

Hello,

I'm using Open MPI 1.3r19400 (ClusterTools 8.0), with Sun Studio 12 and Solaris 10u5.


I've got this error when linking a PETSc code :
ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken


Isn't that very strange?

Have you got any idea how to solve it?

Many thanks,

Yann