Re: [OMPI devel] New OMPI MPI extension

2010-04-22 Thread Jeff Squyres
On Apr 22, 2010, at 12:34 PM, Rayson Ho wrote: > Seems like OMPI_Affinity_str()'s finest granularity is at the core > level. However, in SGE (Sun Grid Engine) we also offer thread level > (SMT) binding: > > http://wikis.sun.com/display/gridengine62u5/Using+Job+to+Core+Binding > > Will OpenMPI s

Re: [OMPI devel] New OMPI MPI extension

2010-04-22 Thread Rayson Ho
Jeff, Seems like OMPI_Affinity_str()'s finest granularity is at the core level. However, in SGE (Sun Grid Engine) we also offer thread level (SMT) binding: http://wikis.sun.com/display/gridengine62u5/Using+Job+to+Core+Binding Will OpenMPI support thread level binding in the future?? BTW, anot
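
A minimal usage sketch of the extension discussed above. The header name (mpi-ext.h), the OMPI_HAVE_MPI_EXT_AFFINITY guard, the OMPI_AFFINITY_STRING_MAX constant, and the three-string prototype are assumptions based on the extension's conventions; the exact interface should be checked against ompi/mpiext/affinity in the trunk.

/* Hypothetical call into the OMPI affinity extension; prototype assumed,
 * see note above. */
#include <stdio.h>
#include <mpi.h>
#if defined(OPEN_MPI)
#include <mpi-ext.h>   /* Open MPI extension prototypes */
#endif

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

#if defined(OMPI_HAVE_MPI_EXT_AFFINITY) && OMPI_HAVE_MPI_EXT_AFFINITY
    {
        char ompi_bound[OMPI_AFFINITY_STRING_MAX];
        char current_binding[OMPI_AFFINITY_STRING_MAX];
        char exists[OMPI_AFFINITY_STRING_MAX];

        /* Ask Open MPI what it bound this process to, what the binding
         * currently is, and what processors exist on this node. */
        OMPI_Affinity_str(ompi_bound, current_binding, exists);
        printf("rank %d: ompi_bound=%s current=%s exists=%s\n",
               rank, ompi_bound, current_binding, exists);
    }
#endif

    MPI_Finalize();
    return 0;
}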

Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-22 Thread Samuel K. Gutierrez
On Apr 22, 2010, at 10:08 AM, Rainer Keller wrote: > Hello Oliver, thanks for the update. Just my $0.02: the upcoming Open MPI v1.5 will warn users if their session directory is on NFS (or Lustre). ... or panfs :-) Samuel K. Gutierrez

Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-22 Thread Rainer Keller
Hello Oliver, thanks for the update. Just my $0.02: the upcoming Open MPI v1.5 will warn users if their session directory is on NFS (or Lustre). Best regards, Rainer On Thursday 22 April 2010 11:37:48 am Oliver Geisler wrote: > To sum up and give an update: > > The extended communication tim
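
A rough sketch, not the actual Open MPI check, of how a Linux-only test for an NFS- or Lustre-backed session directory could look, using statfs(2); NFS_SUPER_MAGIC comes from <linux/magic.h>, while the Lustre superblock magic used here is an assumption.

#include <stdio.h>
#include <sys/vfs.h>
#include <linux/magic.h>                /* NFS_SUPER_MAGIC */

#define LUSTRE_SUPER_MAGIC 0x0BD00BD0   /* assumed Lustre magic number */

/* Return nonzero if the filesystem holding 'path' looks like NFS or Lustre. */
static int dir_is_networked(const char *path)
{
    struct statfs fs;

    if (statfs(path, &fs) != 0) {
        return 0;   /* cannot tell; assume local */
    }
    return fs.f_type == NFS_SUPER_MAGIC || fs.f_type == LUSTRE_SUPER_MAGIC;
}

int main(void)
{
    const char *session_base = "/tmp";  /* wherever the session dir lands */

    if (dir_is_networked(session_base)) {
        fprintf(stderr, "warning: session directory base %s is on a network "
                "filesystem; shared-memory performance will suffer\n",
                session_base);
    }
    return 0;
}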

Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-22 Thread Kenneth A. Lloyd
Oliver, Thank you for this summary insight. This substantially affects the structural design of software implementations, which points to a new analysis "opportunity" in our software. Ken Lloyd

Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-22 Thread Oliver Geisler
To sum up and give an update: The extended communication times seen with shared-memory communication between Open MPI processes are caused by the Open MPI session directory residing on a network filesystem (NFS). The problem is resolved by creating a ramdisk or mounting a tmpfs on each diskless node. By settin

Re: [OMPI devel] New OMPI MPI extension

2010-04-22 Thread Jeff Squyres
Fixed -- thanks! On Apr 22, 2010, at 12:35 AM, Rayson Ho wrote: > Hi Jeff, > > There's a typo in trunk/README: > > -> 1175 ...unrelated to wach other > > I guess you mean "unrelated to each other". > > Rayson > > > > On Wed, Apr 21, 2010 at 12:35 PM, Jeff Squyres wrote: > > Per the teleco

[OMPI devel] Segmentation fault on x86_64 on heterogeneous environment

2010-04-22 Thread Timur Magomedov
Hello, list. I have a strange segmentation fault on an x86_64 machine running together with an x86 machine. I am running the attached program, which sends some bytes from process 0 to process 1. My configuration is: Machine #1 (process 0): arch: x86, hostname: magomedov-desktop, linux distro: Ubuntu 9.10, Open M
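
The attachment itself is not reproduced in the archive; the following is a hypothetical stand-in for the kind of test described (rank 0 sends a small byte buffer to rank 1), with the buffer size and tag chosen arbitrarily.

/* Hypothetical minimal test: rank 0 sends 64 bytes to rank 1. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) {
            fprintf(stderr, "run with at least 2 processes\n");
        }
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        memset(buf, 0xAB, sizeof(buf));
        MPI_Send(buf, (int) sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int) sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d bytes\n", (int) sizeof(buf));
    }

    MPI_Finalize();
    return 0;
}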

[OMPI devel] sendrecv_replace: long time to allocate/free memory

2010-04-22 Thread Pascal Deveze
Hi all, The sendrecv_replace implementation in Open MPI seems to allocate/free memory with MPI_Alloc_mem()/MPI_Free_mem(). I measured the time to allocate/free a 1 MB buffer: MPI_Alloc_mem/MPI_Free_mem take 350us, while malloc/free only take 8us. malloc/free in ompi/mpi/c/sendrecv_replace.c was replaced by
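
A sketch of the kind of measurement described above: timing MPI_Alloc_mem/MPI_Free_mem against malloc/free for a 1 MB buffer with MPI_Wtime. The iteration count is an arbitrary choice; the 350us/8us figures quoted are from the message, not from this sketch.

/* Compare average alloc/free time: MPI_Alloc_mem/MPI_Free_mem vs. malloc/free. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BUF_SIZE (1024 * 1024)
#define ITERS    100

int main(int argc, char **argv)
{
    void *p;
    double t0, t_mpi, t_libc;
    int i;

    MPI_Init(&argc, &argv);

    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        MPI_Alloc_mem(BUF_SIZE, MPI_INFO_NULL, &p);
        ((char *) p)[0] = 1;   /* touch so the loop is not optimized away */
        MPI_Free_mem(p);
    }
    t_mpi = (MPI_Wtime() - t0) / ITERS;

    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        p = malloc(BUF_SIZE);
        ((char *) p)[0] = 1;
        free(p);
    }
    t_libc = (MPI_Wtime() - t0) / ITERS;

    printf("avg MPI_Alloc_mem/MPI_Free_mem: %.1f us, malloc/free: %.1f us\n",
           t_mpi * 1e6, t_libc * 1e6);

    MPI_Finalize();
    return 0;
}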

Re: [OMPI devel] New OMPI MPI extension

2010-04-22 Thread Rayson Ho
Hi Jeff, There's a typo in trunk/README: -> 1175 ...unrelated to wach other I guess you mean "unrelated to each other". Rayson On Wed, Apr 21, 2010 at 12:35 PM, Jeff Squyres wrote: > Per the telecon Tuesday, I committed a new OMPI MPI extension to the trunk: > >    https://svn.open-mpi.org/