Btw,

MPI_Type_hvector(20000, 1, 0, MPI_INT, &type);

is just a weird datatype. Because the stride is 0, this datatype describes a
memory layout that contains the same int 20000 times. I'm not sure this was
indeed intended...
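
For the record, a minimal sketch (assuming a 4-byte int) that queries the
committed type shows the mismatch between its extent and its size:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Datatype type;
    MPI_Aint lb, extent;
    int size;

    MPI_Init(&argc, &argv);

    MPI_Type_hvector(20000, 1, 0, MPI_INT, &type);
    MPI_Type_commit(&type);

    MPI_Type_get_extent(type, &lb, &extent); /* extent = 4: all 20000 blocks overlap */
    MPI_Type_size(type, &size);              /* size = 80000: 20000 * 4 bytes of data */
    printf("extent = %ld, size = %d\n", (long)extent, size);

    MPI_Type_free(&type);
    MPI_Finalize();
    return 0;
}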

  George.


On Mon, Jan 19, 2015 at 12:17 AM, Gilles Gouaillardet <
gilles.gouaillar...@iferc.org> wrote:

> Adrian,
>
> i just fixed this in master:
> https://github.com/open-mpi/ompi/commit/d14daf40d041f7a0a8e9d85b3bfd5eb570495fd2
>
> the root cause is a corner case that was not handled correctly:
>
> MPI_Type_hvector(20000, 1, 0, MPI_INT, &type);
>
> type has extent = 4 *but* size = 80000.
> ob1 used to test only the extent to determine whether the message should
> be sent inline or not: extent <= 256 means try to send the message inline.
> That meant a fragment of size 80000 (which is greater than 65536, the
> default maximum fragment size for IB) was allocated, and that failed.
>
> Now both extent and size are tested, so the message is not sent inline,
> and it just works.
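>
> A minimal reproducer sketch along these lines (run with at least two
> ranks; the peer ranks and tag are illustrative); before the fix, the
> MPI_Send below failed over IB:
>
> #include <mpi.h>
>
> int main(int argc, char **argv)
> {
>     int rank, val = 42, buf[20000];
>     MPI_Datatype type;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     MPI_Type_hvector(20000, 1, 0, MPI_INT, &type);
>     MPI_Type_commit(&type);
>
>     if (rank == 0) {
>         /* extent = 4 made ob1 pick the inline path, but size = 80000
>          * exceeds the 65536-byte IB limit, so allocating the fragment
>          * failed */
>         MPI_Send(&val, 1, type, 1, 0, MPI_COMM_WORLD);
>     } else if (rank == 1) {
>         /* receive the 20000 ints into a flat buffer: the overlapping
>          * datatype cannot be used on the receive side */
>         MPI_Recv(buf, 20000, MPI_INT, 0, 0, MPI_COMM_WORLD,
>                  MPI_STATUS_IGNORE);
>     }
>
>     MPI_Type_free(&type);
>     MPI_Finalize();
>     return 0;
> }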
>
> Cheers,
>
> Gilles
