> On Nov 25, 2014, at 01:12 , Gilles Gouaillardet
> wrote:
>
> Bottom line, though Open MPI's implementation of MPI_Dist_graph_create is not
> deterministic, it is compliant with the MPI standard.
> /* not to mention this is not the right place to argue what the
George,
imho, you are right !
attached is a new version of Ghislain's program that uses
MPI_Dist_graph_neighbors_count and MPI_Dist_graph_neighbors,
as you suggested.
it produces correct results
/* note that in this case, realDestinations is similar to targets,
so i might have left
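The pattern Gilles describes, querying the neighbor order the library actually chose rather than assuming it matches the targets passed to MPI_Dist_graph_create, can be sketched roughly as below. All variable names are illustrative, not taken from Ghislain's program:

```c
/* Sketch: ask the distributed-graph communicator for the neighbor
 * order it will actually use in MPI_Neighbor_alltoallw.
 * Variable names are illustrative. */
#include <mpi.h>
#include <stdlib.h>

static void query_neighbor_order(MPI_Comm dist_comm)
{
    int indegree, outdegree, weighted;

    /* how many in- and out-neighbors this rank has, and whether
     * the graph carries edge weights */
    MPI_Dist_graph_neighbors_count(dist_comm, &indegree, &outdegree,
                                   &weighted);

    int *sources      = malloc(indegree  * sizeof(int));
    int *sourcewgts   = malloc(indegree  * sizeof(int));
    int *destinations = malloc(outdegree * sizeof(int));
    int *destwgts     = malloc(outdegree * sizeof(int));

    /* the ranks are returned in the order the neighborhood
     * collectives will use, which need not match the order the
     * edges were passed to MPI_Dist_graph_create */
    MPI_Dist_graph_neighbors(dist_comm, indegree, sources, sourcewgts,
                             outdegree, destinations, destwgts);

    /* ... lay out send/recv buffers following 'destinations' and
     * 'sources' order before calling MPI_Neighbor_alltoallw ... */

    free(sources);
    free(sourcewgts);
    free(destinations);
    free(destwgts);
}
```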
I would argue this is a typical user level bug.
The major difference between dist_graph_create and dist_graph_create_adjacent is
that in the latter each process provides its neighbors in an order that is
expected (and that matches the info provided to the MPI_Neighbor_alltoallw
call). When the topology is
Ghislain,
i can confirm there is a bug in mca_topo_base_dist_graph_distribute
FYI a proof of concept is available at
https://github.com/open-mpi/ompi/pull/283
and i recommend you use MPI_Dist_graph_create_adjacent if this meets
your needs.
as a side note, the right way to set the info is
Hi Gilles and Howard,
The use of MPI_Dist_graph_create_adjacent solves the issue :)
Thanks for your help!
Best regards,
Ghislain
2014-11-21 7:23 GMT+01:00 Gilles Gouaillardet :
> Hi Ghislain,
>
> that sounds like a bug in MPI_Dist_graph_create :-(
>
> you can
Hi Ghislain,
that sounds like a bug in MPI_Dist_graph_create :-(
you can use MPI_Dist_graph_create_adjacent instead :
MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, degrees, [0],
[0],
degrees, [0], [0], info,
rankReordering, );
it does not crash and as far as i
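The quoted call above was mangled in transit (the array arguments were stripped). A self-contained sketch of how MPI_Dist_graph_create_adjacent might be used, on an assumed symmetric ring topology with illustrative names, not Ghislain's actual code:

```c
/* Sketch, runnable under mpirun: build a distributed graph
 * communicator with explicit per-rank neighbor lists.
 * The ring topology and all names here are assumptions. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank's neighbors are its left and right ranks on a ring;
     * the weights could carry e.g. a buffer count per edge */
    int degrees    = 2;
    int targets[2] = { (rank + size - 1) % size, (rank + 1) % size };
    int weights[2] = { 1, 1 };

    MPI_Comm distComm;
    /* each process passes its own in- and out-neighbors explicitly,
     * so the neighbor order is exactly the order given here -- this
     * is the guarantee the non-adjacent variant does not provide */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   degrees, targets, weights,
                                   degrees, targets, weights,
                                   MPI_INFO_NULL, 0 /* no reorder */,
                                   &distComm);

    MPI_Comm_free(&distComm);
    MPI_Finalize();
    return 0;
}
```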
Hi Ghislain,
I tried to run your test with mvapich 1.9 and get a "message truncated"
failure at three ranks.
Howard
2014-11-20 8:51 GMT-07:00 Ghislain Viguier :
> Dear support,
>
> I'm encountering an issue with the MPI_Neighbor_alltoallw request of
> mpi-1.8.3.
>
For further information, the test also fails with MPI-1.8.4rc1.
2014-11-20 16:51 GMT+01:00 Ghislain Viguier :
> Dear support,
>
> I'm encountering an issue with the MPI_Neighbor_alltoallw request of
> mpi-1.8.3.
> I have enclosed a test case with information of my
Dear support,
I'm encountering an issue with the MPI_Neighbor_alltoallw request of
mpi-1.8.3.
I have enclosed a test case with information of my workstation.
In this test, I define a weighted topology for 5 processes, where the
weights represent the number of buffers to send/receive:
rank