Can you send a small reproducer program?
> On Sep 28, 2015, at 4:45 PM, Diego Avesani wrote:
>
> Dear all,
>
> I have to use a send_request in an MPI_WAITALL.
> Here is the strange thing:
>
> If I use at the beginning of the SUBROUTINE:
>
> INTEGER :: send_request(3), recv_request(3)
>
> I have
Dear all,
I have to use a send_request in an MPI_WAITALL.
Here is the strange thing:
If I use at the beginning of the SUBROUTINE:
INTEGER :: send_request(3), recv_request(3)
I have no problem, but if I use
USE COMONVARS,ONLY : nMsg
with nMsg=3
and after that I declare
INTEGER :: send_request(nMsg
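The pattern under discussion is an array of nonblocking requests completed with a single MPI_WAITALL. Below is a minimal C sketch of that pattern (the original code is Fortran; nMsg, the buffers, and the peer ranks are illustrative placeholders, not the poster's actual code):

    #include <mpi.h>

    #define nMsg 3   /* stands in for the value taken from the module */

    int main(int argc, char **argv)
    {
        MPI_Request send_request[nMsg], recv_request[nMsg];
        int sendbuf[nMsg], recvbuf[nMsg];
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int i = 0; i < nMsg; i++) {
            int peer = (rank + i + 1) % size;   /* arbitrary peer choice */
            sendbuf[i] = rank;
            MPI_Irecv(&recvbuf[i], 1, MPI_INT, MPI_ANY_SOURCE, i,
                      MPI_COMM_WORLD, &recv_request[i]);
            MPI_Isend(&sendbuf[i], 1, MPI_INT, peer, i,
                      MPI_COMM_WORLD, &send_request[i]);
        }

        /* Complete all sends and receives, as in the Fortran code above. */
        MPI_Waitall(nMsg, recv_request, MPI_STATUSES_IGNORE);
        MPI_Waitall(nMsg, send_request, MPI_STATUSES_IGNORE);

        MPI_Finalize();
        return 0;
    }

With nMsg fixed at compile time this builds and runs with any number of ranks; the reported problem shows up only when the Fortran request-array bounds come from the module variable.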
On Sep 27, 2015, at 1:38 PM, marcin.krotkiewski wrote:
>
> Hello, everyone
>
> I am struggling a bit with IB performance when sending data from a POSIX
> shared memory region (/dev/shm). The memory is shared among many MPI
> processes within the same compute node. Essentially, I see a bit hec
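A hedged sketch of the kind of setup being described, i.e. a POSIX shared memory segment mapped into the process and then used directly as an MPI send buffer (the segment name, size, and the rank-0-to-rank-1 exchange are placeholders; link with -lrt on older glibc):

    #include <mpi.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const size_t len = 1 << 20;   /* 1 MiB; arbitrary size */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Create or attach a segment backed by /dev/shm. */
        int fd = shm_open("/example_region", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, len) != 0)
            MPI_Abort(MPI_COMM_WORLD, 1);
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
            MPI_Abort(MPI_COMM_WORLD, 1);

        /* Send directly out of the shared mapping; receive into private
           memory to avoid aliasing when both ranks sit on the same node. */
        char *recvbuf = malloc(len);
        if (size >= 2) {
            if (rank == 0)
                MPI_Send(buf, (int)len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(recvbuf, (int)len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        munmap(buf, len);
        close(fd);
        if (rank == 0)
            shm_unlink("/example_region");
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }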
Hi Nathan,
Hi Mike,
Thanks for the quick replies!
My problem is that I don't know what my applications are. I mean, I know them,
but we are a general-purpose cluster, running in production for quite a
while, and we have everybody from quantum chemists to machine learners
to bioinformaticians. So a sy
I would like to add that you may want to play with the value and see
what works for your applications. Most applications should be using
malloc or similar functions to allocate large memory regions in the heap
and not on the stack.
-Nathan
On Mon, Sep 28, 2015 at 08:01:09PM +0300, Mike Dubman wr
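A small illustration of that advice, with an arbitrary buffer size: the commented-out declaration would land a ~32 MiB array on the stack (and can overflow a default 8 MiB stack limit), while the malloc version puts it on the heap:

    #include <stdlib.h>

    void compute(void)
    {
        /* double big[1 << 22];    on the stack: ~32 MiB, risky with an
                                   8 MiB 'ulimit -s' */
        double *big = malloc((size_t)(1 << 22) * sizeof(double));  /* heap */
        if (big == NULL)
            return;
        /* ... use big ... */
        free(big);
    }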
Sorry, I attached the wrong output of ompi_info... this one is the right
one.
I also forgot to add the configure line:
> configure --prefix=/sw/mpi/openmpi/1.8.5-gnu_sschu/
> --enable-orterun-prefix-by-default --enable-mpi-thread-multiple
> --with-verbs --with-tm=/sw/tools/torque/5.1.0/
> CC=/sw/t
Hello Grigory,
We observed ~10% performance degradation with heap size set to unlimited
for CFD applications.
You can measure your application performance with default and unlimited
"limits" and select the best setting.
Kind Regards.
M
On Mon, Sep 28, 2015 at 7:36 PM, Grigory Shamov wrote:
>
Hi All,
We have built OpenMPI (1.8.8, 1.10.0) against Mellanox OFED 2.4 and
corresponding MXM. When it runs now, it gives the following warning, per
process:
[1443457390.911053] [myhist:5891 :0] mxm.c:185 MXM WARN The
'ulimit -s' on the system is set to 'unlimited'. This may have neg
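If it is unclear what stack limit the launched ranks actually end up with, a quick check from inside the MPI processes (plain getrlimit, nothing Open MPI- or MXM-specific) is:

    #include <mpi.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(int argc, char **argv)
    {
        struct rlimit rl;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        getrlimit(RLIMIT_STACK, &rl);
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("rank %d: stack limit is unlimited\n", rank);
        else
            printf("rank %d: stack soft limit is %llu bytes\n", rank,
                   (unsigned long long) rl.rlim_cur);
        MPI_Finalize();
        return 0;
    }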
Hi Nathan,
On 23.09.2015 00:24, Nathan Hjelm wrote:
> I think I have the problem fixed. I went with a bitmap approach but I
> don't think that will scale well as node sizes increase since it
> requires n^2 bits to implement the post table. When I have time I will
> implement the approach used in o
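For illustration only (not the actual Open MPI data structure): an n-by-n bitmap post table stores one bit per (poster, target) pair, which is where the n^2-bit footprint comes from.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        int n;            /* number of processes on the node */
        uint64_t *bits;   /* n * n bits, rounded up to 64-bit words */
    } post_table_t;

    static post_table_t *post_table_create(int n)
    {
        /* Error checking omitted for brevity. */
        post_table_t *t = malloc(sizeof(*t));
        size_t nbits = (size_t)n * (size_t)n;
        t->n = n;
        t->bits = calloc((nbits + 63) / 64, sizeof(uint64_t));
        return t;
    }

    static void post_table_set(post_table_t *t, int poster, int target)
    {
        size_t bit = (size_t)poster * t->n + target;
        t->bits[bit / 64] |= UINT64_C(1) << (bit % 64);
    }

    static int post_table_test(const post_table_t *t, int poster, int target)
    {
        size_t bit = (size_t)poster * t->n + target;
        return (t->bits[bit / 64] >> (bit % 64)) & 1;
    }

    int main(void)
    {
        post_table_t *t = post_table_create(64);  /* 64 ranks: 512 bytes */
        post_table_set(t, 3, 17);
        int ok = post_table_test(t, 3, 17);
        free(t->bits);
        free(t);
        return ok ? 0 : 1;
    }

At 64 processes per node this is only 512 bytes, but the footprint grows quadratically with the node size.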
Hello,
I've set up our new cluster with InfiniBand, using a combination of:
Debian, Torque/Maui, BeeGeeFS (formerly FHGFS)
Every node has two InfiniBand ports, each of them having an IP address.
One port shall be used for BeeGeeFS (which is working well) and the
other one for MPI communication.
I
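Assuming the openib BTL is in use, one common way to pin Open MPI's verbs traffic to a particular HCA port is the btl_openib_if_include MCA parameter; the device/port name and the application below are placeholders for whatever ibstat reports on the nodes:

    mpirun --mca btl openib,sm,self \
           --mca btl_openib_if_include mlx4_0:1 \
           -np 16 ./my_app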
Sorry for the long delay.
Unfortunately, I am no longer able to reproduce the Valgrind errors I reported
earlier with either the debug version or the normally-compiled version of OMPI
1.8.7. I don’t know what happened - probably some change to our cluster
infrastructure that I am not aware of
Harald,
thanks for the clarification, I clearly missed that!
I will fix it from now on.
Cheers,
Gilles
On 9/28/2015 4:49 PM, Harald Servat wrote:
Hello Gilles,
the webpages I pointed to in the original mail, which are the
official open-mpi.org pages, are missing the * in the declaration of MPI_Ibarrier,
Hello Gilles,
the webpages I pointed to in the original mail, which are the
official open-mpi.org pages, are missing the * in the declaration of MPI_Ibarrier,
aren't they?
See:
C Syntax
#include <mpi.h>
int MPI_Barrier(MPI_Comm comm)
int MPI_Ibarrier(MPI_Comm comm, MPI_Request request)
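For reference, the C binding in the MPI standard takes a pointer to the request, so the page should presumably read:

    int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)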