I have run the test with v1.4.2 and indeed it fixes the problem.
Thanks Nysal.
Thank you also, Terry, for your help. With the fix I no longer need to use a
huge value of btl_tcp_eager_limit (I keep the default value), which
considerably decreases the memory consumption I had before. Everything works
fine now.
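For reference, the workaround had been raising the TCP eager limit through
Open MPI's MCA parameter mechanism; a one-line sketch (the byte value,
process count, and application name here are only illustrative):

    mpirun --mca btl_tcp_eager_limit 4194304 -np 8 ./my_app

Messages below the eager limit are sent eagerly and buffered on the receive
side, which is why a huge limit drives memory consumption up.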
2010/5/20 Nysal Jan
> This probably got fixed in https://svn.open-mpi.org/trac/ompi/ticket/2386
> Can you try 1.4.2? The fix should be in there.
I will test it soon (it takes some time to install the new version on each
node). It would be perfect if it fixes it.
I will tell you the result as soon as possible.
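Once installed, the version actually picked up on each node can be checked
with ompi_info (assuming it is on the PATH), e.g.:

    ompi_info | grep "Open MPI:"

which should report 1.4.2.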
Hello Terry,
Thanks for your answer.
2010/5/20 Terry Dontje
> Olivier Riff wrote:
>
> Hello,
>
> I assume this question has already been discussed many times, but I cannot
> find a solution to my problem on the Internet.
> It is about the buffer size limit of MPI_Send and MPI_Recv with a
> heterogeneous system (32-bit laptop / 64-bit cluster).
This probably got fixed in https://svn.open-mpi.org/trac/ompi/ticket/2386
Can you try 1.4.2? The fix should be in there.
Regards
--Nysal
On Thu, May 20, 2010 at 2:02 PM, Olivier Riff wrote:
> Hello,
>
> I assume this question has already been discussed many times, but I cannot
> find a solution to my problem on the Internet.
Olivier Riff wrote:
Hello,
I assume this question has already been discussed many times, but I
cannot find a solution to my problem on the Internet.
It is about the buffer size limit of MPI_Send and MPI_Recv with a
heterogeneous system (32-bit laptop / 64-bit cluster).
My configuration is:
open mpi 1.4
Hello,
I assume this question has already been discussed many times, but I cannot
find a solution to my problem on the Internet.
It is about the buffer size limit of MPI_Send and MPI_Recv with a
heterogeneous system (32-bit laptop / 64-bit cluster).
My configuration is:
open mpi 1.4, configured with: --wi
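To illustrate the kind of exchange in question, here is a minimal sketch of
a large MPI_Send/MPI_Recv pair between two ranks; the buffer size is
illustrative, chosen only to exceed the eager limit so the rendezvous path
is exercised:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, i;
        const int count = 16 * 1024 * 1024;  /* 64 MB of ints: well past any eager limit */
        int *buf = malloc(count * sizeof(int));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            for (i = 0; i < count; i++) buf[i] = i;  /* fill with recognizable data */
            MPI_Send(buf, count, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Using a typed buffer (MPI_INT rather than raw MPI_BYTE) lets Open MPI handle
any representation differences between the 32-bit and 64-bit hosts.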