How long will the 1.2 series be maintained?
This has been giving some of our customers a bit of heartburn, but it
can also be used to help push through the OFED upgrades on the
clusters (a good thing).
Josh
On 10/11/07, Jeff Squyres wrote:
> Reminder -- this RFC expires tonight.
>
> Speak now or forever hold your peace.
On 8/24/07, Jeff Squyres wrote:
>
> Hmm. If you compile Open MPI with no memory manager, then it
> *shouldn't* be Open MPI's fault (unless there's a leak in the mvapi
> BTL...?). Verify that you did not actually compile Open MPI with a
> memory manager by running "ompi_info | grep ptmalloc2" -- if
> the grep prints nothing, no memory manager component was built in.
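A quick way to verify (the output line here is illustrative; the exact
component and version strings vary by build):

$ ompi_info | grep ptmalloc2
          MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2)

An empty result means Open MPI was configured without the ptmalloc2
memory manager.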
We are using open-mpi on several 1000+ node clusters. We received
several new clusters using the Infiniserve 3.X software stack recently
and are having a number of problems with the vapi btl (yes, I know, it
is very, very old and shouldn't be used; I couldn't agree with you
more, but those are my marching orders).
>> To: Open MPI Developers
>> Subject: Re: [OMPI devel] Best bw/lat performance for
>> microbenchmark/debug utility
>>
>> Josh Aune wrote:
>>> I am writing up some interconnect/network debugging software that is
>>> centered around ompi. What is the best set of functions to
>>> use to get the best bandwidth and latency numbers for openmpi and why?
I am writing up some interconnect/network debugging software that is
centered around ompi. What is the best set of functions to use to get
the best bandwidth and latency numbers for openmpi and why? I've been
asking around at work and some people say just send/receive, though
some of the micro b
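For what it's worth, the usual starting point is a blocking ping-pong
loop timed with MPI_Wtime. Below is a minimal sketch; the message size
and iteration count are illustrative (real microbenchmarks sweep a
range of sizes and discard warm-up iterations):

/* pingpong.c: rough latency/bandwidth probe between ranks 0 and 1 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i;
    const int iters = 1000;
    const int bytes = 1 << 20;   /* 1 MiB; use a tiny size for latency */
    char *buf;
    double t0, t1, rtt;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        rtt = (t1 - t0) / iters;   /* seconds per round trip */
        printf("latency (half rtt): %.2f us\n", rtt / 2 * 1e6);
        printf("bandwidth: %.2f MB/s\n", 2.0 * bytes / rtt / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Run it with "mpirun -np 2". Swapping in nonblocking
MPI_Isend/MPI_Irecv pairs shows how much overlap the BTL gives you,
which is why some people answer "it depends" to the send/receive
question.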
> -----Original Message-----
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Josh Aune
> Sent: Friday, March 31, 2006 4:43 PM
> To: Open MPI Developers
> Subject: [OMPI devel] process ordering/processes per node
>
> I have
So far, on every system I have compiled open-mpi on, I have hit the
same non-obvious configure failure. In each case I have added
--with-openib= and --with-openib-libs=. configure runs
just fine until it starts looking for OpenIB, then reports that it
can't find most of the header files and whatnot
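For reference, the invocation in question looks like this (the
/opt/ofed prefix is a made-up placeholder; the original paths were
elided):

$ ./configure --with-openib=/opt/ofed \
              --with-openib-libs=/opt/ofed/lib64

When configure bails out there, config.log usually shows which header
test failed (e.g., whether it could find infiniband/verbs.h under the
given prefix).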
I have a simple hello program where each child prints out the hostname
of the node it is running on. When I run this (on a bproc machine)
with -np 4 and no host file, it launches one process on each of the
first 4 available nodes, i.e.:
$ mpirun -np 4 ./mpi_hello
n1 hello
n3 hello
n2 hello
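The program itself is just a hostname echo; a minimal version (a
sketch, not Josh's actual source) is:

/* mpi_hello.c: each rank prints the node it landed on */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char host[256];

    MPI_Init(&argc, &argv);
    gethostname(host, sizeof(host));   /* e.g. "n1" on a bproc node */
    printf("%s hello\n", host);
    MPI_Finalize();
    return 0;
}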