Re: [OMPI users] sending message to the source(0) from other processors

2008-12-24 Thread Win Than Aung
I got the solution. I just need to set the appropriate tag on the send and the receive. Sorry for asking, and thanks. winthan On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung wrote: > thanks Eugene for your example, it helps me a lot. I bump into one more > problem > let's say I have the
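A minimal sketch of the tag-matching approach described above, assuming each non-root rank sends a small buffer of doubles to rank 0; the buffer size, tag value, and file handling are illustrative placeholders rather than Win Than's actual code:

/* Each worker sends to rank 0; the send tag and the receive tag must match. */
#include <stdio.h>
#include <mpi.h>

#define NVALS 12       /* assumed buffer size: 6 (real, imaginary) pairs */
#define DATA_TAG 42    /* arbitrary tag; must be the same on both sides */

int main(int argc, char *argv[])
{
    int rank, size, src;
    double buf[NVALS] = {0};
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        /* ... fill buf, e.g. from this rank's input file ... */
        MPI_Send(buf, NVALS, MPI_DOUBLE, 0, DATA_TAG, MPI_COMM_WORLD);
    } else {
        for (src = 1; src < size; src++) {
            /* accept data from any rank, but only messages tagged DATA_TAG */
            MPI_Recv(buf, NVALS, MPI_DOUBLE, MPI_ANY_SOURCE, DATA_TAG,
                     MPI_COMM_WORLD, &status);
            printf("rank 0 received data from rank %d\n", status.MPI_SOURCE);
        }
    }

    MPI_Finalize();
    return 0;
}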

Re: [OMPI users] sending message to the source(0) from other processors

2008-12-24 Thread Win Than Aung
Thanks Eugene for your example, it helps me a lot. I bumped into one more problem. Let's say I have the file content as follows; I have a total of six files, which all contain real and imaginary values: " 1.001212 1.0012121 //0th 1.001212 1.0012121 //1st 1.001212 1.0012121 //2nd 1.001212

Re: [OMPI users] Problem with openmpi and infiniband

2008-12-24 Thread Tim Mattox
For your runs with Open MPI over InfiniBand, try using openib,sm,self for the BTL setting, so that shared memory communications are used within a node. It would give us another datapoint to help diagnose the problem. As for other things we would need to help diagnose the problem, please follow
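In practice that amounts to something like the following (process count and binary name are placeholders): mpirun --mca btl openib,sm,self -np 16 ./your_app. With this setting, InfiniBand (openib) is used between nodes, shared memory (sm) within a node, and self for a process sending to itself.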

Re: [OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-24 Thread Joe Landman
Reuti wrote: Hi, On 24.12.2008 at 07:55, Sangamesh B wrote: Thanks Reuti. That sorted out the problem. Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb -> 4 fragments, mpiblast -> 4 processes. Since each node has 4 cores, the job will run on a single node and

Re: [OMPI users] Problem with openmpi and infiniband

2008-12-24 Thread Pavel Shamis (Pasha)
If the basic test runs, the installation is OK. So what happens when you try to run your application? What is the command line? What is the error message? Do you run the application on the same set of machines, with the same command line, as IMB? Pasha yes to both questions: the OMPI version

Re: [OMPI users] BTL question

2008-12-24 Thread Pavel Shamis (Pasha)
Teige, Scott W wrote: Greetings, I have observed strange behavior with an application running with OpenMPI 1.2.8 and OFED 1.2. The application runs in two "modes", fast and slow. The execution time is either within one second of 108 sec. or within one second of 67 sec. My cluster has 1 Gig

Re: [OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-24 Thread Reuti
Hi, On 24.12.2008 at 07:55, Sangamesh B wrote: Thanks Reuti. That sorted out the problem. Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb -> 4 fragments, mpiblast -> 4 processes. Since each node has 4 cores, the job will run on a single node and works fine. With 8

[OMPI users] BTL question

2008-12-24 Thread Teige, Scott W
Greetings, I have observed strange behavior with an application running with OpenMPI 1.2.8 and OFED 1.2. The application runs in two "modes", fast and slow. The execution time is either within one second of 108 sec. or within one second of 67 sec. My cluster has 1 Gig Ethernet and DDR InfiniBand
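One way to narrow this down (a suggestion beyond what is quoted here) is to pin the interconnect explicitly and compare timings, e.g. mpirun --mca btl tcp,sm,self -np ... ./app versus mpirun --mca btl openib,sm,self -np ... ./app; if the two forced runs reproduce the 108 s and 67 s times, that would show whether the slow mode corresponds to the Ethernet path.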

Re: [OMPI users] Problem with openmpi and infiniband

2008-12-24 Thread Biagio Lucini
Pavel Shamis (Pasha) wrote: Biagio Lucini wrote: Hello, I am new to this list, where I hope to find a solution for a problem that I have been having for quite a long time. I run various versions of Open MPI (from 1.1.2 to 1.2.8) on a cluster with InfiniBand interconnects that I use and

Re: [OMPI users] Problem with openmpi and infiniband

2008-12-24 Thread Pavel Shamis (Pasha)
Biagio Lucini wrote: Hello, I am new to this list, where I hope to find a solution for a problem that I have been having for quite a long time. I run various versions of Open MPI (from 1.1.2 to 1.2.8) on a cluster with InfiniBand interconnects that I use and administer at the same time. The

Re: [OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-24 Thread Sangamesh B
Thanks Reuti. That sorted out the problem. Now mpiblast is able to run, but only on a single node, i.e. mpiformatdb -> 4 fragments, mpiblast -> 4 processes. Since each node has 4 cores, the job will run on a single node and works fine. With 8 processes, the job fails with the following error
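For reference, a typical tight-integration submission (the parallel environment name and mpiblast arguments below are assumptions, not taken from this thread) requests the slots through a PE and lets Open MPI pick up the allocation: the job script carries a line such as '#$ -pe orte 8' and then runs 'mpirun -np $NSLOTS mpiblast -p blastn -d <database> -i <query> -o <output>'; with a gridengine-aware Open MPI build, mpirun reads the granted host list itself, so no extra hostfile is needed.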