Re: [OMPI users] glut display 'occasionally' opens

2010-12-06 Thread Ralph Castain
Hmmm...yes, the code does seem to handle that '=' being in there. Forgot it was there. Depending on the version you are using, mpirun could just open the display for you. There is an mpirun option that tells us to please start each app in its own xterm. You shouldn't need forwarding if you are

Re: [OMPI users] MPI_Send doesn't work if the data >= 2GB

2010-12-06 Thread Gus Correa
Hi Xianjun Suggestions/Questions: 1) Did you check if malloc returns a non-NULL pointer? Your program is assuming this, but it may not be true, and in this case the problem is not with MPI. You can print a message and call MPI_Abort if it doesn't. 2) Have you tried MPI_Isend/MPI_Irecv? Or perha

Re: [OMPI users] glut display 'occasionally' opens

2010-12-06 Thread brad baker
Without including the -x DISPLAY, glut doesn't know what display to open. For instance, without the -x DISPLAY parameter glut returns an error from each process stating that it could not find display "" (empty string). This strategy is briefly described in the Open MPI FAQ

Re: [OMPI users] MPI_Send doesn't work if the data >= 2GB

2010-12-06 Thread 孟宪军
Hi Are you running on two processes (mpiexec -n 2)? Yes Have you tried to print Gsize? Yes, I had checked my code several times, and I thought the errors came from OpenMPI. :) The command line I used: "mpirun -hostfile ./Serverlist -np 2 ./test". The "Serverlist" file includes several comput

Re: [OMPI users] glut display 'occasionally' opens

2010-12-06 Thread Ralph Castain
Guess I'm not entirely sure I understand how this is supposed to work. All the -x does is tell us to pick up an envar of the given name and forward its value to the remote apps. You can't set the envar's value on the cmd line. So you told mpirun to pick up the value of an envar called "DISPLAY=:0.

[OMPI users] glut display 'occasionally' opens

2010-12-06 Thread brad baker
Hello, I'm working on an mpi application that opens a glut display on each node of a small cluster for opengl rendering (each node has its own display). My current implementation scales great with mpich2, but I'd like to use Open MPI over InfiniBand, which is giving me trouble. I've had some success wi

Re: [OMPI users] MPI_Send doesn't work if the data >= 2GB

2010-12-06 Thread Gus Correa
Gus Correa wrote: Hi Xianjun Are you running on two processes (mpiexec -n 2)? I think this code will deadlock for more than two processes. The MPI_Recv won't have a matching send for rank>1. Also, this is C, not MPI, but you may be wrapping into the negative numbers. Have you tried to print Gsi

Re: [OMPI users] MPI_Send doesn't work if the data >= 2GB

2010-12-06 Thread Gus Correa
Hi Xianjun Are you running on two processes (mpiexec -n 2)? I think this code will deadlock for more than two processes. The MPI_Recv won't have a matching send for rank>1. Also, this is C, not MPI, but you may be wrapping into the negative numbers. Have you tried to print Gsize? It is probably

Re: [OMPI users] MPI_Send doesn't work if the data >= 2GB

2010-12-06 Thread Mike Dubman
Hi, What interconnect and command line do you use? For InfiniBand openib component there is a known issue with large transfers (2GB) https://svn.open-mpi.org/trac/ompi/ticket/2623 try disabling memory pinning: http://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned regards

Re: [OMPI users] difference between single and double precision

2010-12-06 Thread Eugene Loh
Mathieu Gontier wrote: Nevertheless, one can observe some differences between MPICH and OpenMPI from 25% to 100% depending on the options we use in our software. Tests are run on a single SGI node on 6 or 12 processes, and thus, I am focused on the sm option. Is it possible to narr

Re: [OMPI users] difference between single and double precision

2010-12-06 Thread Peter Kjellström
On Monday 06 December 2010 15:03:13 Mathieu Gontier wrote: > Hi, > > A small update. > My colleague made a mistake and there is no arithmetic performance > issue. Sorry for bothering you. > > Nevertheless, one can observe some differences between MPICH and > OpenMPI from 25% to 100% depending on

[OMPI users] Segmentation fault in mca_pml_ob1.so

2010-12-06 Thread Grzegorz Maj
Hi, I'm using mkl scalapack in my project. Recently, I was trying to run my application on a new set of nodes. Unfortunately, when I try to execute more than about 20 processes, I get a segmentation fault. [compn7:03552] *** Process received signal *** [compn7:03552] Signal: Segmentation fault (11) [c

Re: [OMPI users] meaning of MPI_THREAD_*

2010-12-06 Thread Hicham Mouline
    -Original Message- From: "Tim Prince" [n...@aol.com] Date: 06/12/2010 01:40 PM To: us...@open-mpi.org Subject: Re: [OMPI users] meaning of MPI_THREAD_* >On 12/6/2010 3:16 AM, Hicham Mouline wrote: >> Hello, >> >> 1. MPI_THREAD_SINGLE: Only one thread

Re: [OMPI users] difference between single and double precision

2010-12-06 Thread Mathieu Gontier
Hi, A small update. My colleague made a mistake and there is no arithmetic performance issue. Sorry for bothering you. Nevertheless, one can observe some differences between MPICH and OpenMPI from 25% to 100% depending on the options we use in our software. Tests are run on a singl

Re: [OMPI users] meaning of MPI_THREAD_*

2010-12-06 Thread Tim Prince
On 12/6/2010 3:16 AM, Hicham Mouline wrote: Hello, 1. MPI_THREAD_SINGLE: Only one thread will execute. Does this really mean the process cannot have any other threads at all, even if they don't deal with MPI at all? I'm curious as to how this case affects the openmpi implementation? Essentiall

[OMPI users] meaning of MPI_THREAD_*

2010-12-06 Thread Hicham Mouline
Hello, 1. MPI_THREAD_SINGLE: Only one thread will execute. Does this really mean the process cannot have any other threads at all, even if they don't deal with MPI at all? I'm curious as to how this case affects the openmpi implementation? Essentially, what is the difference between MPI_THREAD_S

Re: [OMPI users] Scalability issue

2010-12-06 Thread Gustavo Correa
Hi Benjamin I guess you could compile OpenMPI with standard integer and real sizes. Then compile your application (DRAGON) with the flags to change to 8-byte integers and 8-byte reals. We have some programs here that use real8 and are compiled this way, and run without a problem. I guess this is w