Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Stefano Zampini
If the ranks are uniquely listed in neighbour_procs, then you only need one fresh
tag per communication round from PetscCommGetNewTag.
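
(A minimal sketch of what that single-tag variant could look like, assuming a
hypothetical helper with the neighbour arrays passed in explicitly and a
communicator obtained via PetscCommDuplicate; this is not the existing data_ex.c
code.)

#include <petscsys.h>

/* Minimal sketch of the single-tag variant (hypothetical helper, not PETSc API).
   requests[] must have room for 2*np entries; comm is assumed to have been obtained
   via PetscCommDuplicate so that PetscCommGetNewTag can hand out tags on it. */
static PetscErrorCode ExchangeCountsSingleTag(MPI_Comm comm, PetscMPIInt np, const PetscMPIInt neighbour_procs[], PetscInt send_counts[], PetscInt recv_counts[], MPI_Request requests[])
{
  PetscMPIInt tag, i;

  PetscFunctionBegin;
  PetscCall(PetscCommGetNewTag(comm, &tag)); /* one fresh tag for this round */
  for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&send_counts[i], 1, MPIU_INT, neighbour_procs[i], tag, comm, &requests[i]));
  for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&recv_counts[i], 1, MPIU_INT, neighbour_procs[i], tag, comm, &requests[np + i]));
  PetscFunctionReturn(0);
}

Because each neighbour rank appears only once in neighbour_procs, a receive posted
for neighbour_procs[i] with this tag can only match the one message that rank sends
in the round, so per-pair tags are not required for correctness.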

> On Nov 23, 2022, at 9:08 PM, Dave May  wrote:
> 
> 
> 
> On Wed, 23 Nov 2022 at 08:57, Junchao Zhang wrote:
> From my reading, the code actually does not need multiple tags. You can just 
> let _get_tags() return a constant (say 0), or use your modulo MPI_TAG_UB 
> approach.
> 
> Yes I believe that is correct.
> 
>  
> 
> 541  for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&de->messages_to_be_sent[i], 1, MPIU_INT, de->neighbour_procs[i], de->send_tags[i], de->comm, &de->_requests[i]));
> 542  for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&de->messages_to_be_recvieved[i], 1, MPIU_INT, de->neighbour_procs[i], de->recv_tags[i], de->comm, &de->_requests[np + i]));
> 
> --Junchao Zhang
> 
> 
> On Tue, Nov 22, 2022 at 11:59 PM Matthew Knepley wrote:
> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang wrote:
> I don't understand why you need so many tags.  Is the communication pattern 
> actually MPI_Alltoallv, but you implemented it in MPI_Send/Recv?
> 
> I am preserving the original design from Dave until we do a more thorough 
> rewrite. I think he is using a different tag for each pair of processes to
> make debugging easier.
> 
> I don't think Alltoallv is appropriate most of the time. If you had a lot of 
> particles with a huge spread of velocities then you could get that, but most
> scenarios I think look close to nearest neighbor.
> 
>   Thanks,
> 
>   Matt
>  
> --Junchao Zhang
> 
> 
> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley wrote:
> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the 
> number of processes exceeds 1024, there are > 1024^2 tags which exceeds 
> MPI_TAG_UB on Intel MPI.
> 
> My solution is going to be to use that process pair number modulo MPI_TAG_UB. 
> Does anyone have a slicker suggestion?
> 
>   Thanks,
> 
>   Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Dave May
On Wed, 23 Nov 2022 at 08:57, Junchao Zhang  wrote:

> From my reading, the code actually does not need multiple tags. You can
> just let _get_tags() return a constant (say 0), or use your modulo
> MPI_TAG_UB approach.
>

Yes I believe that is correct.



>
> 541 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&de->messages_to_be_sent[i], 1, MPIU_INT, de->neighbour_procs[i], de->send_tags[i], de->comm, &de->_requests[i]));
> 542 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&de->messages_to_be_recvieved[i], 1, MPIU_INT, de->neighbour_procs[i], de->recv_tags[i], de->comm, &de->_requests[np + i]));
>
> --Junchao Zhang
>
>
> On Tue, Nov 22, 2022 at 11:59 PM Matthew Knepley 
> wrote:
>
>> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang 
>> wrote:
>>
>>> I don't understand why you need so many tags.  Is the
>>> communication pattern actually MPI_Alltoallv, but you implemented it in
>>> MPI_Send/Recv?
>>>
>>
>> I am preserving the original design from Dave until we do a more thorough
>> rewrite. I think he is using a different tag for each pair of processes to
>> make debugging easier.
>>
>> I don't think Alltoallv is appropriate most of the time. If you had a lot
>> of particles with a huge spread of velocities then you could get that, but
>> most
>> scenarios I think look close to nearest neighbor.
>>
>>   Thanks,
>>
>>   Matt
>>
>>
>>> --Junchao Zhang
>>>
>>>
>>> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley 
>>> wrote:
>>>
 In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
 the number of processes exceeds 1024, there are > 1024^2 tags which exceeds
 MPI_TAG_UB on Intel MPI.

 My solution is going to be to use that process pair number modulo
 MPI_TAG_UB. Does anyone have a slicker suggestion?

   Thanks,

   Matt

 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/
 

>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>


Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Dave May
On Mon, 21 Nov 2022 at 12:37, Matthew Knepley  wrote:

> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the
> number of processes exceeds 1024, there are > 1024^2 tags which exceeds
> MPI_TAG_UB on Intel MPI.
>
> My solution is going to be to use that process pair number modulo
> MPI_TAG_UB. Does anyone have a slicker suggestion?
>


I think it should be possible to use the adjacency graph associated with
the neighbour ranks, which is defined in the Mat object built within
_DMSwarmDataExCompleteCommunicationMap().

If Intel MPI cannot support tags greater than 1024, the proposition above
is going to be of limited value.
A job with 100 MPI ranks, with subdomains which each have 11 neighbour
ranks, will already exceed 1024 (100 x 11 = 1100).
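
(One way the adjacency information could be exploited is to key tags off the local
neighbour index rather than the global rank pair, so the largest tag is bounded by
the neighbour count. The sketch below is only an illustration of that idea; the
helper name, its signature, and the extra setup exchange are all made up and are
not what _DMSwarmDataExCompleteCommunicationMap() does today.)

#include <petscsys.h>

/* Hypothetical setup step: each rank tells neighbour i the index at which that
   neighbour appears in its own neighbour_procs[] list. Afterwards send_tags[i] = i
   and recv_tags[i] is whatever neighbour i reported, so every tag stays in [0, np)
   regardless of the communicator size. Assumes comm came from PetscCommDuplicate. */
static PetscErrorCode SetupNeighbourTags(MPI_Comm comm, PetscMPIInt np, const PetscMPIInt neighbour_procs[], PetscMPIInt send_tags[], PetscMPIInt recv_tags[])
{
  PetscMPIInt  i, setup_tag;
  MPI_Request *req;

  PetscFunctionBegin;
  PetscCall(PetscCommGetNewTag(comm, &setup_tag));
  PetscCall(PetscMalloc1(2 * np, &req));
  for (i = 0; i < np; ++i) {
    send_tags[i] = i; /* our local index for this neighbour */
    PetscCallMPI(MPI_Isend(&send_tags[i], 1, MPI_INT, neighbour_procs[i], setup_tag, comm, &req[i]));
    PetscCallMPI(MPI_Irecv(&recv_tags[i], 1, MPI_INT, neighbour_procs[i], setup_tag, comm, &req[np + i]));
  }
  PetscCallMPI(MPI_Waitall(2 * np, req, MPI_STATUSES_IGNORE));
  PetscCall(PetscFree(req));
  PetscFunctionReturn(0);
}

With this scheme each rank still uses a distinct tag per neighbour, which preserves
the debugging property, but the largest tag value is bounded by the maximum number
of neighbours instead of growing with the square of the communicator size.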


>
>   Thanks,
>
>   Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Dave May
On Tue, 22 Nov 2022 at 21:59, Matthew Knepley  wrote:

> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang 
> wrote:
>
>> I don't understand why you need so many tags.  Is the
>> communication pattern actually MPI_Alltoallv, but you implemented it in
>> MPI_Send/Recv?
>>
>
> I am preserving the original design from Dave until we do a more thorough
> rewrite. I think he is using a different tag for each pair of processes to
> make debugging easier.
>

This is correct.


>
> I don't think Alltoallv is appropriate most of the time. If you had a lot
> of particles with a huge spread of velocities then you could get that, but
> most
> scenarios I think look close to nearest neighbor.
>
>   Thanks,
>
>   Matt
>
>
>> --Junchao Zhang
>>
>>
>> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley 
>> wrote:
>>
>>> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
>>> the number of processes exceeds 1024, there are > 1024^2 tags which exceeds
>>> MPI_TAG_UB on Intel MPI.
>>>
>>> My solution is going to be to use that process pair number modulo
>>> MPI_TAG_UB. Does anyone have a slicker suggestion?
>>>
>>>   Thanks,
>>>
>>>   Matt
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> 
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Matthew Knepley
On Wed, Nov 23, 2022 at 10:56 AM Junchao Zhang 
wrote:

> From my reading, the code actually does not need multiple tags. You can
> just let _get_tags() return a constant (say 0), or use your modulo
> MPI_TAG_UB approach.
>

That is definitely true. What I wanted to do was change the operation as
little as possible, but prevent it from breaking.

  Thanks,

  Matt


> 541 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&de->messages_to_be_sent[i], 1, MPIU_INT, de->neighbour_procs[i], de->send_tags[i], de->comm, &de->_requests[i]));
> 542 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&de->messages_to_be_recvieved[i], 1, MPIU_INT, de->neighbour_procs[i], de->recv_tags[i], de->comm, &de->_requests[np + i]));
>
> --Junchao Zhang
>
>
> On Tue, Nov 22, 2022 at 11:59 PM Matthew Knepley 
> wrote:
>
>> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang 
>> wrote:
>>
>>> I don't understand why you need so many tags.  Is the
>>> communication pattern actually MPI_Alltoallv, but you implemented it in
>>> MPI_Send/Recv?
>>>
>>
>> I am preserving the original design from Dave until we do a more thorough
>> rewrite. I think he is using a different tag for each pair of processes to
>> make debugging easier.
>>
>> I don't think Alltoallv is appropriate most of the time. If you had a lot
>> of particles with a huge spread of velocities then you could get that, but
>> most
>> scenarios I think look close to nearest neighbor.
>>
>>   Thanks,
>>
>>   Matt
>>
>>
>>> --Junchao Zhang
>>>
>>>
>>> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley 
>>> wrote:
>>>
 In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
 the number of processes exceeds 1024, there are > 1024^2 tags which exceeds
 MPI_TAG_UB on Intel MPI.

 My solution is going to be to use that process pair number modulo
 MPI_TAG_UB. Does anyone have a slicker suggestion?

   Thanks,

   Matt

 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/
 

>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] Swarm tag error

2022-11-23 Thread Junchao Zhang
From my reading, the code actually does not need multiple tags. You can
just let _get_tags() return a constant (say 0), or use your modulo
MPI_TAG_UB approach.

541 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Isend(&de->messages_to_be_sent[i], 1, MPIU_INT, de->neighbour_procs[i], de->send_tags[i], de->comm, &de->_requests[i]));
542 for (i = 0; i < np; ++i) PetscCallMPI(MPI_Irecv(&de->messages_to_be_recvieved[i], 1, MPIU_INT, de->neighbour_procs[i], de->recv_tags[i], de->comm, &de->_requests[np + i]));

--Junchao Zhang


On Tue, Nov 22, 2022 at 11:59 PM Matthew Knepley  wrote:

> On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang 
> wrote:
>
>> I don't understand why you need so many tags.  Is the
>> communication pattern actually MPI_Alltoallv, but you implemented it in
>> MPI_Send/Recv?
>>
>
> I am preserving the original design from Dave until we do a more thorough
> rewrite. I think he is using a different tag for each pair of processes to
> make debugging easier.
>
> I don't think Alltoallv is appropriate most of the time. If you had a lot
> of particles with a huge spread of velocities then you could get that, but
> most
> scenarios I think look close to nearest neighbor.
>
>   Thanks,
>
>   Matt
>
>
>> --Junchao Zhang
>>
>>
>> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley 
>> wrote:
>>
>>> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
>>> the number of processes exceeds 1024, there are > 1024^2 tags which exceeds
>>> MPI_TAG_UB on Intel MPI.
>>>
>>> My solution is going to be to use that process pair number modulo
>>> MPI_TAG_UB. Does anyone have a slicker suggestion?
>>>
>>>   Thanks,
>>>
>>>   Matt
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> 
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-dev] Swarm tag error

2022-11-22 Thread Matthew Knepley
On Tue, Nov 22, 2022 at 11:23 PM Junchao Zhang 
wrote:

> I don't understand why you need so many tags.  Is the
> communication pattern actually MPI_Alltoallv, but you implemented it in
> MPI_Send/Recv?
>

I am preserving the original design from Dave until we do a more thorough
rewrite. I think he is using a different tag for each pair of processes to
make debugging easier.

I don't think Alltoallv is appropriate most of the time. If you had a lot
of particles with a huge spread of velocities then you could get that, but
most
scenarios I think look close to nearest neighbor.
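
(As an aside, a genuinely nearest-neighbour pattern also maps onto the MPI-3
neighbourhood collectives, which avoid tags entirely. A sketch, assuming the
neighbour lists are symmetric; the function and argument names are illustrative,
and this is not a proposal for data_ex.c.)

#include <mpi.h>

/* Sketch only: exchange one count per neighbour over a distributed-graph
   communicator, with no tags involved. Assumes the adjacency is symmetric, i.e.
   neighbour_procs[] lists the same ranks for sends and receives. */
static int ExchangeCountsNeighbor(MPI_Comm comm, int np, const int neighbour_procs[], const int send_counts[], int recv_counts[])
{
  MPI_Comm ncomm;
  int      err;

  err = MPI_Dist_graph_create_adjacent(comm, np, neighbour_procs, MPI_UNWEIGHTED, np, neighbour_procs, MPI_UNWEIGHTED, MPI_INFO_NULL, 0, &ncomm);
  if (err != MPI_SUCCESS) return err;
  err = MPI_Neighbor_alltoall(send_counts, 1, MPI_INT, recv_counts, 1, MPI_INT, ncomm);
  MPI_Comm_free(&ncomm);
  return err;
}

In practice the distributed-graph communicator would be built once, when the
communication map is set up, rather than per exchange.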

  Thanks,

  Matt


> --Junchao Zhang
>
>
> On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley  wrote:
>
>> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If
>> the number of processes exceeds 1024, there are > 1024^2 tags which exceeds
>> MPI_TAG_UB on Intel MPI.
>>
>> My solution is going to be to use that process pair number modulo
>> MPI_TAG_UB. Does anyone have a slicker suggestion?
>>
>>   Thanks,
>>
>>   Matt
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] Swarm tag error

2022-11-22 Thread Junchao Zhang
I don't understand why you need so many tags.  Is the communication pattern
actually MPI_Alltoallv, but you implemented it in MPI_Send/Recv?
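
(For reference, the all-to-all formulation of just the counts exchange would look
roughly like the sketch below: every rank sends one count to every other rank,
zero for non-neighbours, and no tags are involved. The names are illustrative.)

#include <mpi.h>

/* Illustrative only: send_counts/recv_counts have one entry per rank in comm,
   with zeros for ranks that are not neighbours. */
static int ExchangeCountsAlltoall(MPI_Comm comm, const int send_counts[], int recv_counts[])
{
  return MPI_Alltoall(send_counts, 1, MPI_INT, recv_counts, 1, MPI_INT, comm);
}

The payload itself would then go through MPI_Alltoallv, using the received counts
to build the displacement arrays.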

--Junchao Zhang


On Mon, Nov 21, 2022 at 2:37 PM Matthew Knepley  wrote:

> In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the
> number of processes exceeds 1024, there are > 1024^2 tags which exceeds
> MPI_TAG_UB on Intel MPI.
>
> My solution is going to be to use that process pair number modulo
> MPI_TAG_UB. Does anyone have a slicker suggestion?
>
>   Thanks,
>
>   Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


[petsc-dev] Swarm tag error

2022-11-21 Thread Matthew Knepley
In data_ex.c, Swarm uses a distinct tag for each pair of processes. If the
number of processes exceeds 1024, there are > 1024^2 tags which exceeds
MPI_TAG_UB on Intel MPI.

My solution is going to be to use that process pair number modulo
MPI_TAG_UB. Does anyone have a slicker suggestion?
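
(A sketch of the modulo idea, querying the actual bound at runtime. The
pair-numbering formula r0 * size + r1 and the helper itself are only illustrative;
the formula data_ex.c actually uses may differ.)

#include <petscsys.h>

/* Hypothetical helper: map an ordered rank pair to a tag that is guaranteed to be
   legal. The pair number r0 * size + r1 is an illustrative choice. */
static PetscErrorCode PairTag(MPI_Comm comm, PetscMPIInt r0, PetscMPIInt r1, PetscMPIInt *tag)
{
  PetscMPIInt size, flag, *tag_ub;

  PetscFunctionBegin;
  PetscCallMPI(MPI_Comm_size(comm, &size));
  /* MPI attaches MPI_TAG_UB to MPI_COMM_WORLD; the standard guarantees it is >= 32767 */
  PetscCallMPI(MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag));
  PetscCheck(flag, comm, PETSC_ERR_LIB, "MPI_TAG_UB attribute not set");
  *tag = (PetscMPIInt)(((PetscInt64)r0 * size + r1) % *tag_ub);
  PetscFunctionReturn(0);
}

Since every receive in data_ex.c also names the source rank, a collision after the
wrap only matters if the same pair of ranks has several distinct messages in flight
under the same wrapped tag.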

  Thanks,

  Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/