On Apr 22, 2010, at 12:34 PM, Rayson Ho wrote:
> Seems like OMPI_Affinity_str()'s finest granularity is at the core
> level. However, in SGE (Sun Grid Engine) we also offer thread-level
> (SMT) binding:
>
> http://wikis.sun.com/display/gridengine62u5/Using+Job+to+Core+Binding
>
> Will Open MPI support thread-level binding in the future?
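For context, the difference between the two granularities is easy to see from
a process's own CPU mask. The sketch below is a plain Linux check using
sched_getaffinity(), not the OMPI_Affinity_str() extension itself: on an SMT
node, a core-level binding leaves both sibling hardware threads set in the
mask, while SGE-style thread-level binding leaves only one logical CPU set.

/* Illustrative Linux-only check of the calling process's CPU binding.
 * Not part of the OMPI_Affinity_str() extension discussed in this thread. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    printf("bound to logical CPUs:");
    for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu) {
        if (CPU_ISSET(cpu, &mask)) {
            printf(" %d", cpu);
        }
    }
    printf("\n");
    return 0;
}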
Jeff,
Seems like OMPI_Affinity_str()'s finest granularity is at the core
level. However, in SGE (Sun Grid Engine) we also offer thread-level
(SMT) binding:
http://wikis.sun.com/display/gridengine62u5/Using+Job+to+Core+Binding
Will Open MPI support thread-level binding in the future?
BTW, anot
On Apr 22, 2010, at 10:08 AM, Rainer Keller wrote:
Hello Oliver,
thanks for the update.
Just my $0.02: the upcoming Open MPI v1.5 will warn users if their session
directory is on NFS (or Lustre).
... or panfs :-)
Samuel K. Gutierrez
Best regards,
Rainer
On Thursday 22 April 2010 11:37:48 am Oliver Geisler wrote:
Hello Oliver,
thanks for the update.
Just my $0.02: the upcoming Open MPI v1.5 will warn users if their session
directory is on NFS (or Lustre).
Best regards,
Rainer
On Thursday 22 April 2010 11:37:48 am Oliver Geisler wrote:
> To sum up and give an update:
>
> The extended communication times while using shared memory communication
> of Open MPI processes are caused by the Open MPI session directory residing
> on the network via NFS.
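Such a warning boils down to checking which filesystem the session directory
sits on. A minimal sketch, assuming the filesystem magic numbers listed in the
Linux statfs(2) man page (the constants and message wording are illustrative,
not taken from the Open MPI source):

/* Illustrative check: does the given directory (e.g. the Open MPI session
 * directory) live on NFS, Lustre, or tmpfs?  Magic numbers as listed in the
 * Linux statfs(2) man page. */
#include <sys/vfs.h>
#include <stdio.h>

#define MAGIC_NFS    0x6969
#define MAGIC_LUSTRE 0x0BD00BD0
#define MAGIC_TMPFS  0x01021994

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/tmp";
    struct statfs fs;

    if (statfs(path, &fs) != 0) {
        perror("statfs");
        return 1;
    }
    if (fs.f_type == MAGIC_NFS || fs.f_type == MAGIC_LUSTRE)
        printf("%s is on a network filesystem; shared-memory backing files will be slow\n", path);
    else if (fs.f_type == MAGIC_TMPFS)
        printf("%s is on tmpfs; fine for the session directory\n", path);
    else
        printf("%s: filesystem magic 0x%lx\n", path, (unsigned long)fs.f_type);
    return 0;
}

Other filesystems, such as panfs, would need their own magic numbers added to
the list.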
Oliver,
Thank you for this summary insight. This substantially affects the
structural design of software implementations, which points to a new
analysis "opportunity" in our software.
Ken Lloyd
To sum up and give an update:
The extended communication times while using shared memory communication
of Open MPI processes are caused by the Open MPI session directory residing
on the network via NFS.
The problem is resolved by establishing a ramdisk on each diskless node or
mounting a tmpfs. By settin
Fixed -- thanks!
On Apr 22, 2010, at 12:35 AM, Rayson Ho wrote:
> Hi Jeff,
>
> There's a typo in trunk/README:
>
> -> 1175 ...unrelated to wach other
>
> I guess you mean "unrelated to each other".
>
> Rayson
>
>
>
> On Wed, Apr 21, 2010 at 12:35 PM, Jeff Squyres wrote:
> > Per the telecon Tuesday, I committed a new OMPI MPI extension to the trunk:
Hello, list.
I have a strange segmentation fault on an x86_64 machine running together
with an x86 machine.
I am running the attached program, which sends some bytes from process 0 to
process 1. My configuration is:
Machine #1: (process 0)
arch: x86
hostname: magomedov-desktop
linux distro: Ubuntu 9.10
Open M
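The attachment is not reproduced here; a minimal reconstruction of the kind
of test described, with rank 0 sending a small byte buffer to rank 1 (the
buffer size and tag below are placeholders, not values from the original
program):

/* Hypothetical reconstruction of the missing attachment: rank 0 sends a
 * small byte buffer to rank 1.  Buffer size and tag are placeholders. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[64];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        memset(buf, 0xAB, sizeof(buf));
        MPI_Send(buf, sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %zu bytes\n", sizeof(buf));
    }

    MPI_Finalize();
    return 0;
}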
Hi all,
The sendrecv_replace implementation in Open MPI seems to allocate/free memory
with MPI_Alloc_mem()/MPI_Free_mem().
I measured the time to allocate/free a buffer of 1MB.
MPI_Alloc_mem/MPI_Free_mem take 350us while malloc/free only take 8us.
malloc/free in ompi/mpi/c/sendrecv_replace.c was replaced by
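A minimal sketch of the kind of measurement described, timing a single 1 MB
allocate/free pair with each allocator via MPI_Wtime() (a real benchmark would
average over many iterations; the 350us/8us figures above are the poster's,
not reproduced by this sketch):

/* Time MPI_Alloc_mem/MPI_Free_mem versus malloc/free for a 1 MB buffer.
 * Single-iteration timing only, for illustration. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const MPI_Aint size = 1024 * 1024;   /* 1 MB */
    void *buf;
    double t0, t_mpi, t_malloc;

    MPI_Init(&argc, &argv);

    t0 = MPI_Wtime();
    MPI_Alloc_mem(size, MPI_INFO_NULL, &buf);
    MPI_Free_mem(buf);
    t_mpi = MPI_Wtime() - t0;

    t0 = MPI_Wtime();
    buf = malloc(size);
    free(buf);
    t_malloc = MPI_Wtime() - t0;

    printf("MPI_Alloc_mem/MPI_Free_mem: %.1f us, malloc/free: %.1f us\n",
           t_mpi * 1e6, t_malloc * 1e6);

    MPI_Finalize();
    return 0;
}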
Hi Jeff,
There's a typo in trunk/README:
-> 1175 ...unrelated to wach other
I guess you mean "unrelated to each other".
Rayson
On Wed, Apr 21, 2010 at 12:35 PM, Jeff Squyres wrote:
> Per the telecon Tuesday, I committed a new OMPI MPI extension to the trunk:
>
> https://svn.open-mpi.org/