~2300 KB - is this difference per machine or per MPI process?
In OMPI XRC mode we allocate some additional resources that may consume
some memory (the hash table), but even so, ~2M sounds like too much to me.
When I have time, I will try to calculate the "reasonable" difference.
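To illustrate why the per-machine vs. per-process question matters, a
trivial back-of-envelope sketch (the 2300 KB figure is the one reported
in this thread; the 8 processes per node is only an assumed example
value):

    #include <stdio.h>

    int main(void)
    {
        const long delta_kb = 2300; /* reported X-vs-S difference     */
        const int  ppn      = 8;    /* assumed MPI processes per node */

        /* If the delta is per machine, the node-wide cost is fixed;
         * if it is per process, it scales with the job layout. */
        printf("per machine: %ld KB per node\n", delta_kb);
        printf("per process: %ld KB per node\n", delta_kb * ppn);
        return 0;
    }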
Pasha
Thanks Pasha for these details.

On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote:
> Sylvain Jeaugey wrote:
>> The XRC protocol seems to create shared receive queues, which is a
>> good thing. However, comparing the memory used by an "X" queue versus
>> an "S" queue, we can see a large difference. Digging a bit into the
>> code, we found some [...]
> So, do you see that X consumes more than S? This [...]
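In case it helps anyone reproduce the comparison: the queue layout of
the openib BTL is driven by the btl_openib_receive_queues MCA parameter,
with one colon-separated specification per queue, each prefixed by P, S
or X. The numeric fields below (buffer size, count and watermark-style
tuning values) are illustrative only, and the X layout needs an
XRC-capable OFED stack:

    # "S" (shared receive queue) layout
    mpirun --mca btl openib,self \
           --mca btl_openib_receive_queues S,2048,256,128,32:S,12288,256,128,32 ./a.out

    # "X" (XRC) layout - all queues must then be X-type
    mpirun --mca btl openib,self \
           --mca btl_openib_receive_queues X,2048,256,128,32:X,12288,256,128,32 ./a.out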
On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote:
> Please see below.
>> [...] blocking is the receive queues, because they are created during
>> MPI_Init, so in a way, they are the "basic fare" of MPI.
> BTW, SRQ resources are also allocated on demand. We start with a very
> small SRQ and it is increased on demand.
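To make the "start small, grow on demand" idea concrete, here is a
minimal libibverbs sketch of one way to do it - an illustration only,
not the actual openib BTL code (and resizing via IBV_SRQ_MAX_WR requires
an HCA that supports SRQ resize):

    #include <infiniband/verbs.h>

    /* Create a deliberately small SRQ, then arm a low watermark so the
     * IBV_EVENT_SRQ_LIMIT_REACHED async event tells us when to grow. */
    static struct ibv_srq *create_small_srq(struct ibv_pd *pd)
    {
        struct ibv_srq_init_attr init = {
            .attr = { .max_wr = 32, .max_sge = 1 }, /* small start */
        };
        struct ibv_srq *srq = ibv_create_srq(pd, &init);
        if (srq) {
            /* Fire the limit event when fewer than 8 receives remain. */
            struct ibv_srq_attr attr = { .srq_limit = 8 };
            if (ibv_modify_srq(srq, &attr, IBV_SRQ_LIMIT)) {
                ibv_destroy_srq(srq);
                return NULL;
            }
        }
        return srq;
    }

    /* On IBV_EVENT_SRQ_LIMIT_REACHED: double the queue, re-arm limit. */
    static int grow_srq(struct ibv_srq *srq, unsigned cur_wr)
    {
        struct ibv_srq_attr attr = { .max_wr = cur_wr * 2 };
        if (ibv_modify_srq(srq, &attr, IBV_SRQ_MAX_WR))
            return -1;
        attr.srq_limit = cur_wr / 2; /* new watermark */
        return ibv_modify_srq(srq, &attr, IBV_SRQ_LIMIT);
    }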
On Mon, 17 May 2010, Sylvain Jeaugey wrote:

Hi list,

We did some testing on the memory taken by Infiniband queues in Open MPI
using the XRC protocol, which is supposed to reduce the memory needed for
Infiniband connections.

When using XRC queues, Open MPI is indeed creating only one XRC queue per
node (instead of one per process). The problem is that the number of send
elements in this queue is multiplied by the number of processes on the
remote host. So, what are we getting from this? Not much, [...]

Sylvain J
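A toy calculation of the send-side arithmetic described above; every
number is invented for illustration, but it shows why scaling the XRC
send depth by the processes-per-node count cancels the saving you would
expect from having one queue per node:

    #include <stdio.h>

    int main(void)
    {
        const long nodes      = 64;  /* remote nodes (assumed)       */
        const long ppn        = 8;   /* processes per node (assumed) */
        const long send_depth = 256; /* send elements per queue      */
        const long elem_size  = 128; /* bytes per send element       */

        /* One send queue per remote process. */
        long per_process = nodes * ppn * send_depth * elem_size;

        /* One XRC queue per remote node, but its send depth is
         * multiplied by the processes on that node. */
        long xrc = nodes * (send_depth * ppn) * elem_size;

        printf("per-process queues: %ld MB\n", per_process >> 20);
        printf("XRC queues:         %ld MB\n", xrc >> 20); /* the same */
        return 0;
    }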