Entry looks good, but could probably use an additional sentence or two like:
On diskless nodes running Linux, use of /dev/shm may be an option if
supported by your distribution. This will use an in-memory file system
for the session directory, but will NOT result in a doubling of the
memory consumed.
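As a purely illustrative sketch (not Open MPI's actual sm BTL code; the file
name below is hypothetical), the reason memory is not doubled is that a file
on tmpfs is mapped MAP_SHARED, so every local process references the same
physical pages rather than its own copy:

  /* sketch only: create a RAM-backed file and map it shared */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      const char *path = "/dev/shm/example_seg";   /* hypothetical name */
      size_t len = 1 << 20;                        /* 1 MiB segment */

      int fd = open(path, O_CREAT | O_RDWR, 0600);
      if (fd < 0 || ftruncate(fd, len) < 0) {
          perror("open/ftruncate");
          return EXIT_FAILURE;
      }

      /* MAP_SHARED: all processes mapping this file share the same pages,
         so putting it on tmpfs does not double the memory used. */
      void *seg = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (seg == MAP_FAILED) {
          perror("mmap");
          return EXIT_FAILURE;
      }

      memset(seg, 0, len);
      printf("mapped %zu bytes at %p backed by %s\n", len, seg, path);

      munmap(seg, len);
      close(fd);
      unlink(path);
      return EXIT_SUCCESS;
  }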
On May 14, 2010, at 12:24 PM, Josh Hursey wrote:
On May 12, 2010, at 1:07 PM, Abhishek Kulkarni wrote:
Updated RFC (w/ discussed changes):
[RFC 2/2] merge the OPAL SOS development branch into trunk
On 18/05/10 07:02, Jeff Squyres wrote:
> What's the advantage of /dev/shm? (I don't know anything
> about /dev/shm)
Looking at our CentOS 5.4 install, /dev/shm is a tmpfs
filesystem (defaults to use up to 1/2 RAM of the system)
and hence should give lightning fast file I/O.
More info here:
http
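If anyone wants to verify what their own distribution does, here is a small
stand-alone check (my sketch, not something from this thread) that uses
statfs(2) to confirm /dev/shm really is tmpfs; TMPFS_MAGIC is the value from
linux/magic.h:

  #include <stdio.h>
  #include <sys/vfs.h>

  #ifndef TMPFS_MAGIC
  #define TMPFS_MAGIC 0x01021994   /* from linux/magic.h */
  #endif

  int main(void)
  {
      struct statfs sb;
      if (statfs("/dev/shm", &sb) != 0) {
          perror("statfs");
          return 1;
      }
      printf("/dev/shm is %stmpfs (f_type=0x%lx)\n",
             sb.f_type == TMPFS_MAGIC ? "" : "NOT ",
             (unsigned long) sb.f_type);
      return 0;
  }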
On May 17, 2010, at 7:59 PM, Barrett, Brian W wrote:
> HWLOC could be extended to support Red Storm, probably, but we don't have the
> need or time to do such an implementation.
Fair enough.
> Given that, I'm not really picky about what the method of not breaking an
> existing supported platform is, but I think having HAVE_HWLOC defines
> everywhere is a bad idea...
HWLOC could be extended to support Red Storm, probably, but we don't have the
need or time to do such an implementation. Given that, I'm not really picky
about what the method of not breaking an existing supported platform is, but I
think having HAVE_HWLOC defines everywhere is a bad idea...
Brian
Can hwloc be extended to support redstorm? I.e., does your os export the
topology info and/or support process binding?
Hwloc *is* an open mpi sub project, after all...
Other than extending hwloc, I don't know what else to do besides #if
OPAL_HAVE_HWLOC guards -- the hwloc API is kinda big.
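For illustration only (this is not proposed code; the function and the use of
an OPAL_HAVE_HWLOC macro set by configure are assumptions), this is the kind
of guard that would end up scattered around the tree:

  /* OPAL_HAVE_HWLOC assumed to be defined (or not) by configure */
  #if OPAL_HAVE_HWLOC
  #include <hwloc.h>
  #endif

  static int local_core_count(void)
  {
  #if OPAL_HAVE_HWLOC
      hwloc_topology_t topo;
      int n;
      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);
      n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
      hwloc_topology_destroy(topo);
      return n;
  #else
      return -1;   /* no topology info on platforms without hwloc
                      (e.g. Red Storm) */
  #endif
  }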
I'd prefer we not commit something in opal/hwloc until we have a plan for
supporting platforms without hwloc support (ie, Red Storm). I have no
objection to your original RFC, but I had the impression at the time that you
had a plan in place for non-hwloc supported platforms.
Brian
On May 15, 2010, at 4:39 PM, Ralph Castain wrote:
> So, to ensure I understand, you are proposing that we completely eliminate
> the paffinity framework and commit to hwloc in its place?
I think there's 2 issues here:
- topology information
- binding
hwloc supports both. paffinity mainly supports binding.
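To make the two items concrete, here is a minimal stand-alone hwloc sketch
(mine, not from this thread) that reads the topology and then binds the
calling process to its first core:

  #include <hwloc.h>
  #include <stdio.h>

  int main(void)
  {
      hwloc_topology_t topo;
      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);

      /* topology information */
      int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
      printf("cores: %d\n", ncores);

      /* binding: bind this process to the first core */
      hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
      if (core == NULL ||
          hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_PROCESS) != 0) {
          fprintf(stderr, "binding not supported or failed\n");
      }

      hwloc_topology_destroy(topo);
      return 0;
  }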
How's this?
http://www.open-mpi.org/faq/?category=sm#poor-sm-btl-performance
What's the advantage of /dev/shm? (I don't know anything about /dev/shm)
On May 17, 2010, at 4:08 AM, Sylvain Jeaugey wrote:
> I agree with Paul on the fact that a FAQ update would be great on this
> subject. /dev/shm seems a good place to put the temporary files (when
> available, of course).
Sylvain Jeaugey wrote:
The XRC protocol seems to create shared receive queues, which is a
good thing. However, comparing memory used by an "X" queue versus
and "S" queue, we can see a large difference. Digging a bit into the
code, we found some
So, do you see that X consumes more than S?
Thanks Pasha for these details.
On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote:
blocking is the receive queues, because they are created during MPI_Init,
so in a way, they are the "basic fare" of MPI.
BTW SRQ resources are also allocated on demand. We start with very small SRQ
and it is increased as needed.
Please see below.
When using XRC queues, Open MPI is indeed creating only one XRC queue
per node (instead of per-process). The problem is that the number of send
elements in this queue is multiplied by the number of processes on the
remote host.
So, what are we getting from this? Not much, e
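To put rough numbers on that point (purely illustrative figures, not
measurements from this thread): with 16 processes on the remote node and,
say, 256 send elements per peer, the single per-node XRC queue is still
sized for 16 * 256 = 4096 send elements -- exactly the same element count as
16 per-process queues of 256 elements each -- so the saving is limited to the
per-QP overhead rather than the send-buffer memory itself.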
Hello Developers:
George and I talked some more about this change, and he has agreed that
it is OK. Therefore, I will be making this change sometime this week.
Rolf
On 04/23/10 11:47, George Bosilca wrote:
The keyword here is consolidation. It's not about violating the initial design,
it
On May 16, 2010, at 5:56 PM, wrote:
> > Have you tried building Open MPI with the --disable-dlopen configure flag?
> > This will slurp all of OMPI's DSOs up into libmpi.so -- so there's no
> > dlopening at run-time. Hence, your app (R) can dlopen libmpi.so, but then
> > libmpi.so doesn't dlopen anything else.
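The other commonly used workaround for this dlopen-within-dlopen problem (a
sketch on my side, not from the FAQ; the library name is whatever your
installed libmpi is actually called) is to pre-load libmpi with RTLD_GLOBAL
before loading the R glue, so that any components Open MPI later dlopens can
still resolve the MPI symbols:

  /* compile with -ldl; "libmpi.so" below is illustrative */
  #include <dlfcn.h>
  #include <stdio.h>

  int main(void)
  {
      void *h = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
      if (NULL == h) {
          fprintf(stderr, "dlopen failed: %s\n", dlerror());
          return 1;
      }
      /* now loading a module linked against MPI is safe, because its
         undefined MPI symbols resolve against the globally visible libmpi */
      return 0;
  }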
Hi list,
We did some testing on memory taken by Infiniband queues in Open MPI using
the XRC protocol, which is supposed to reduce the needed memory for
Infiniband connections.
When using XRC queues, Open MPI is indeed creating only one XRC queue per
node (instead of per-process). The problem is that the number of send elements
in this queue is multiplied by the number of processes on the remote host.
I agree with Paul on the fact that a FAQ update would be great on this
subject. /dev/shm seems a good place to put the temporary files (when
available, of course).
Putting files in /dev/shm also showed better performance on our systems,
even with /tmp on a local disk.
Sylvain