[OMPI users] Re: Re: Linpack Benchmark and File Descriptor Limits

2008-09-19 Thread Neeraj Chourasia
Hello, With openmpi-1.3, a new MCA feature was introduced, namely --mca routed binomial. This makes out-of-band communication happen in a binomial fashion, which reduces the total number of sockets opened and hence solves the file-descriptor limit issue. -Neeraj. On Thu, 18 Sep 2008 16:46:23 -0700, Open MPI Users wrote: I'm…
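The option mentioned in the reply goes on the mpirun command line; a hedged sketch (the process count, hostfile name, and binary are illustrative, not from the thread):

```shell
# Route out-of-band (OOB) messages over a binomial tree instead of opening
# a direct socket from every daemon back to mpirun, keeping the number of
# open file descriptors low on large clusters.
mpirun --mca routed binomial -np 256 --hostfile hosts ./xhpl
```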

[OMPI users] RDMA-CM

2008-06-17 Thread Neeraj Chourasia
Hello everyone, I downloaded the openmpi-1.3 version from the nightly tarballs to check the RDMA-CM support. I am able to compile and install it, but I don't know how to run it, as there is no documentation provided. Did someone try running it with Open MPI? My other question is: does OpenMPI 1.3 have progress-t…

[OMPI users] Openmpi with SGE

2008-02-20 Thread Neeraj Chourasia
Hello everyone, I am facing a problem when calling mpirun in a loop under SGE. My SGE version is SGE6.1AR_snapshot3. The script I am submitting via SGE is: let i=0; while [ $i -lt 100 ]; do echo "###…
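The truncated submission script appears to call mpirun 100 times in a loop; a minimal reconstruction (the mpirun arguments and binary name are guesses, not from the thread):

```shell
#!/bin/sh
# Illustrative SGE job script: invoke mpirun repeatedly inside one job.
let i=0
while [ $i -lt 100 ]
do
    echo "### iteration $i ###"
    mpirun -np "$NSLOTS" ./a.out   # $NSLOTS is set by SGE for the job
    let i=i+1
done
```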

[OMPI users] orte in persistent mode

2007-12-31 Thread Neeraj Chourasia
Dear All, I am wondering if ORTE can be run in persistent mode. This was already raised on the mailing list (http://www.open-mpi.org/community/lists/users/2006/03/0939.php), where it was said that the problem was still there. I just want to know whether it is fixed or being fixed. The reason why I…

[OMPI users] Re: Re: what is MPI_IN_PLACE

2007-12-11 Thread Neeraj Chourasia
…collective section in the MPI standard to see all the restrictions. Thanks, George. On Dec 11, 2007, at 5:56 AM, Neeraj Chourasia wrote: > Hello everyone, > While going through the collective algorithms, I came across the preprocessor directive MPI_IN_PLACE, which i…

[OMPI users] what is MPI_IN_PLACE

2007-12-11 Thread Neeraj Chourasia
Hello everyone, While going through the collective algorithms, I came across the preprocessor directive MPI_IN_PLACE, which is (void *)1. It is always compared against the source buffer (sbuf). My question is: when would the condition MPI_IN_PLACE == sbuf be true? As far as I understand, sbuf is the address…

Re: [OMPI users] OpenIB problems

2007-11-29 Thread Neeraj Chourasia
Hi Guys, An alternative for the THREAD_MULTIPLE problem is to pass --mca mpi_leave_pinned 1 as an mpirun option. This ensures a single RDMA operation instead of splitting the data into chunks of the maximum RDMA size (which defaults to 1 MB). If your data size is small, say below 1 MB, the program will run well with THREAD_MULTIPLE. P…
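As described, the flag goes on the mpirun command line; a hedged example (the process count and binary name are placeholders):

```shell
# Keep registered (pinned) memory cached between transfers so a large
# message goes out as one RDMA operation rather than pipelined 1 MB chunks.
mpirun --mca mpi_leave_pinned 1 -np 16 ./my_mpi_app
```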

[OMPI users] version 1.3

2007-11-28 Thread Neeraj Chourasia
Hello Guys, When is version 1.3 scheduled to be released? As it will contain checkpointing, a library for non-blocking communication, and ConnectX support for QPs, it would be great to have it ASAP. Since I am evaluating MVAPICH against Open MPI, I have found that MVAPICH still has the upper hand in terms of…

[OMPI users] Adding new API

2007-11-05 Thread Neeraj Chourasia
Hello Everyone, I just want to add an extra API to be used by application developers. This API can be called from a C application and has to be compiled and linked with mpicc. But I am getting undefined references, even though I am exporting it in the source code. Could someone tell me the steps I shou…

[OMPI users] Re: Re: OpenMP and OpenMPI Issue

2007-11-01 Thread Neeraj Chourasia
…at 12:17 AM, Neeraj Chourasia wrote: > Hi folks, > I have been seeing some nasty behaviour in MPI_Send/Recv with a large dataset (8 MB) when used with OpenMP and Open MPI together over the IB interconnect. Attached is a program. >…

[OMPI users] OpenMP and OpenMPI Issue

2007-10-30 Thread Neeraj Chourasia
Hi folks, I have been seeing some nasty behaviour in MPI_Send/Recv with a large dataset (8 MB) when used with OpenMP and Open MPI together over the IB interconnect. Attached is a program. The code first calls MPI_Init_thread(), followed by the OpenMP thread-creation API. The program works fine…

[OMPI users] MPI_Send issues with openib btl

2007-10-26 Thread Neeraj Chourasia
Hi, We are facing a problem when calling MPI_Send over IB. The problem looks similar to ticket https://svn.open-mpi.org/trac/ompi/ticket/232, but this time it is for the IB interface. When forcefully running the program with --mca btl tcp,self, it runs fine. On IB, it gives the error messa…

[OMPI users] Re: Re: Process 0 with different time executing the same code

2007-10-26 Thread Neeraj Chourasia
Hi, Please check that the following things are correct: 1) The array bounds are equal, i.e. "my_x" and "size_y" have the same value on all nodes. 2) The nodes are homogeneous. To check that, you could choose a different node as root and run the program. -Neeraj. On Fri, 26 Oct 2007 10:13:15 +0500 (…

[OMPI users] OpenMPI 1.2.4 vs 1.2

2007-10-24 Thread Neeraj Chourasia
Hello Guys, I had openmpi v1.2 installed on my cluster. A couple of days back, I thought to upgrade it to v1.2.4 (the latest release, I suppose). Since I didn't want to take a risk, I first installed it in a temporary location and ran the bandwidth and bidirectional bandwidth tests provided by the OSU guys, a…

[OMPI users] Compile test programs

2007-10-18 Thread Neeraj Chourasia
Hi all, Could someone suggest how to compile the programs given in the test directory of the source code? There are a couple of directories within test which contain sample programs about the usage of the data structures used by Open MPI. I am able to compile some of the directories; at it was ha…

[OMPI users] Re: Re: Re: Re: Tuning Openmpi with IB Interconnect

2007-10-12 Thread Neeraj Chourasia
Yes, the buffer was being re-used. No, we didn't try to benchmark it with NetPIPE and other tools, but the program was pretty simple. Do you think I need to test it with bigger chunks (>8 MB) for communication? We also tried manipulating eager_limit and min_rdma_size, but with no success. -Neeraj. On Fri,…

[OMPI users] Re: Re: Tuning Openmpi with IB Interconnect

2007-10-11 Thread Neeraj Chourasia
Hi, The code was pretty simple. I was sending 8 MB of data from one rank to another in a loop (say 1000 iterations), then taking the average of the time taken and calculating the bandwidth from it. I tried the above logic both with mpirun MCA parameters and without any parameters, and t…
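The averaging described above reduces to a simple calculation; a sketch with made-up timings (the function name and all numbers are illustrative, not the poster's measurements):

```python
# Sketch of the bandwidth calculation described in the thread.
# Assumption: `times` holds per-iteration transfer times in seconds.
def average_bandwidth(msg_bytes, times):
    """Return the mean bandwidth in MB/s over the timed iterations."""
    avg_time = sum(times) / len(times)
    return (msg_bytes / avg_time) / 1e6

# 8 MB message, three fabricated timings around 10 ms each
print(round(average_bandwidth(8 * 1024 * 1024, [0.010, 0.011, 0.009]), 1))
```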

[OMPI users] Tuning Openmpi with IB Interconnect

2007-10-11 Thread Neeraj Chourasia
Dear All, Could anyone tell me the important tuning parameters in Open MPI with the IB interconnect? I tried setting the eager_rdma, min_rdma_size, and mpi_leave_pinned parameters from the mpirun command line on a 38-node cluster (38*2 processors), but in vain. I found that a simple mpirun with no MCA parameters…

[OMPI users] Query regarding GPR

2007-10-09 Thread Neeraj Chourasia
Hi everybody, I have a doubt regarding ORTE. One of the major functions of ORTE is to maintain the GPR, which subscribes and publishes information to the universe. My doubt is: when we submit a job from a machine, where does the GPR get created? Is it on the submit machine (the HNP)? If yes,…

[OMPI users] libnbc compilation

2007-10-01 Thread Neeraj Chourasia
Hello Everyone, I was checking the development version from svn and found that support for libnbc is going to come in the next release. I thought of compiling it, but failed to do so. Could someone suggest how to get it compiled? When I made changes to the configure script (basically added some flags)…

Re: [OMPI users] another mpirun + xgrid question

2007-09-10 Thread Neeraj Chourasia
If you are using a scheduler like PBS or SGE over MPI, there is an option called prolog and epilog, where you can supply scripts that do the copy operation. These scripts are called before and after job execution, as the names suggest. Without that, I would have to see whether it can be done in MPI itself. T…
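In PBS/Torque, for example, the per-node hooks live in mom_priv; a hedged sketch of how they would be wired up (paths follow the common Torque layout, and the staging commands inside the scripts are up to the site):

```shell
# On each execution host, pbs_mom runs these scripts around every job:
#   /var/spool/torque/mom_priv/prologue   <- e.g. stage input files in
#   /var/spool/torque/mom_priv/epilogue   <- e.g. copy results back out
# Both must be root-owned and executable only by root:
chmod 500 /var/spool/torque/mom_priv/prologue
chmod 500 /var/spool/torque/mom_priv/epilogue
```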