[OMPI users] LSF with OpenMPI

2006-08-25 Thread Michael Kluskens
Is there anyone running OpenMPI on a machine with the LSF batch queueing system? Last time I attempted this, I discovered that PATH and LD_LIBRARY_PATH were not making it to the client nodes. I could force PATH to work using an OpenMPI option, but I could not even force LD_LIBRARY_PATH over to the client nodes.
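
A minimal sketch of the workaround usually suggested for this: mpirun's -x flag exports an environment variable to the launched processes. The process count and ./my_app are placeholders:

    mpirun -np 8 -x PATH -x LD_LIBRARY_PATH ./my_app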

Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel, On Friday 25 August 2006 15:40, Miguel Figueiredo Mascarenhas Sousa Filipe wrote: > Hi, > On 8/25/06, Sven Stork wrote: > > Hello Miguel, > > this is caused by the shared memory mempool. By default this shared memory mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to reduce the size...
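
To make that concrete, a hedged example of shrinking the mapping (assuming, as the 512 MB default suggests, that mpool_sm_size is given in bytes; the process count and ./my_app are placeholders):

    mpirun -np 4 -mca mpool_sm_size 67108864 ./my_app    # 64 MB instead of the 512 MB default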

Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe
Hi, On 8/25/06, Sven Stork wrote: Hello Miguel, this is caused by the shared memory mempool. By default this shared memory mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to reduce the size, e.g. mpirun -mca mpool_sm_size ... using mpirun -mca mpool_sm_size 0 is acceptable...
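
For reference, the component's parameters (including the compiled-in default) can be listed with ompi_info; the framework/component pair below follows the parameter name quoted above:

    ompi_info --param mpool sm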

Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread George Bosilca
I suspect this is the shared memory used to communicate between processes. Please run your application adding the flag "--mca btl tcp,self" to the mpirun command line (before the application name). If the virtual memory usage goes down, then the 400 MB is definitely coming from the shared memory...
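
As a runnable form of that suggestion (the process count and ./my_app are placeholders; tcp,self leaves TCP and loopback as the only transports, so the shared-memory mapping is never created):

    mpirun --mca btl tcp,self -np 2 ./my_app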

Re: [OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Sven Stork
Hello Miguel, this is caused by the shared memory mempool. By default this shared memory mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to reduce the size, e.g. mpirun -mca mpool_sm_size ... Thanks, Sven On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe...

Re: [OMPI users] problem with ompi_info

2006-08-25 Thread George Bosilca
The directory where libmpi.so lives has to be added to LD_LIBRARY_PATH, and of course the bin directory has to be added to PATH. For more information about how and why, please read http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path Thanks, george. On Fri, 25 Aug...
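
A sketch of what that looks like in practice, assuming an installation prefix of /opt/openmpi (the prefix is a placeholder; use whatever --prefix was given to configure):

    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH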

[OMPI users] OpenMPI-1.1 virtual memory overhead

2006-08-25 Thread Miguel Figueiredo Mascarenhas Sousa Filipe
Hi there, I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86 chroot environment on that same machine (distro is Gentoo; compilers: gcc-4.1.1 and gcc-3.4.6). In both cases openmpi-1.1 shows a +/-400 MB overhead in virtual memory usage (virtual address space usage) for each MPI process...
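
One Linux-specific way to observe this (a sketch; my_app is a placeholder): launch the job, then read the kernel's per-process accounting, where VmSize is the virtual address space and VmRSS the resident portion:

    mpirun -np 2 ./my_app &
    grep -E 'VmSize|VmRSS' /proc/$(pgrep my_app | head -1)/status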

Re: [OMPI users] Jumbo frames

2006-08-25 Thread Caird, Andrew J
Massimiliano, It should work automatically, but I have seen instances where switches or Ethernet cards can't support the full 9000 bytes per frame, and we've had to go as low as 6000 bytes to get consistent performance. It seems like everyone's interpretation of what the 9000 bytes is for is a little different...
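
If dropping the MTU is worth trying, a sketch of the usual command (eth0 is an assumed interface name, and the switch must be configured to match; 6000 follows the experience described above):

    ifconfig eth0 mtu 6000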

[OMPI users] problem with ompi_info

2006-08-25 Thread Christine Kreuzer
Hi, I tried to install openmpi-1.1 on an AMD64 Opteron dual core (RHEL 4). I got no error message from ./configure and make all install. Not all tests passed (oob_test, oob_test_self, oob_test_packed and test_schema were skipped), so I entered ompi_info in the bin directory and got the following error...
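
In line with George's reply earlier in this digest, the usual first check is whether ompi_info can find libmpi.so; a sketch, with the installation prefix as a placeholder:

    export LD_LIBRARY_PATH=/path/to/openmpi/lib:$LD_LIBRARY_PATH
    ompi_info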

[OMPI users] Jumbo frames

2006-08-25 Thread Massimiliano Fatica
Hi, I am trying to use jumbo frames but mpirun will not start the job. I am using OpenMPI v1.1 shipped with the latest Rocks (4.2). Ifconfig is reporting that all the NICs on the cluster are using an MTU of 9000, and the switch (HP Procurve) should be able to use jumbo frames. Is there any special...
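
A quick end-to-end test of whether 9000-byte frames actually make it between two nodes (the node name is a placeholder; -M do forbids fragmentation, and 8972 is the 9000-byte MTU minus the 20-byte IP and 8-byte ICMP headers):

    ping -M do -s 8972 compute-0-1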