LocalToGlobal is a DM thing.
Sajid, do you use DM?
If you need to add off-process entries, then DM can give you a local
vector, as Matt said, that you can add to for off-process values, and then
you could use the CPU communication in DM. A sketch of that pattern is below.
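
A minimal sketch of that pattern with a 1D DMDA (a hypothetical example written
against a recent PETSc with PetscCall(); not code from this thread): fill a
local vector, including the ghost entries owned by neighboring ranks, then sum
everything into the global vector with DMLocalToGlobal() and ADD_VALUES.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM            da;
  Vec           gvec, lvec;
  DMDALocalInfo info;
  PetscScalar  *a;
  PetscInt      i;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* 1D DA, 64 global points, 1 dof, stencil width 1 => one ghost point per side */
  PetscCall(DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 64, 1, 1, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));

  PetscCall(DMCreateGlobalVector(da, &gvec));
  PetscCall(VecSet(gvec, 0.0));
  PetscCall(DMGetLocalVector(da, &lvec));
  PetscCall(VecSet(lvec, 0.0));

  /* Add values locally, including the ghost region owned by neighboring ranks */
  PetscCall(DMDAGetLocalInfo(da, &info));
  PetscCall(DMDAVecGetArray(da, lvec, &a));
  for (i = info.gxs; i < info.gxs + info.gxm; i++) a[i] += 1.0;
  PetscCall(DMDAVecRestoreArray(da, lvec, &a));

  /* Ghost contributions are summed into the entries of their owning ranks */
  PetscCall(DMLocalToGlobalBegin(da, lvec, ADD_VALUES, gvec));
  PetscCall(DMLocalToGlobalEnd(da, lvec, ADD_VALUES, gvec));

  PetscCall(DMRestoreLocalVector(da, &lvec));
  PetscCall(VecDestroy(&gvec));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}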

On Thu, Mar 17, 2022 at 7:19 PM Matthew Knepley <[email protected]> wrote:

> On Thu, Mar 17, 2022 at 4:46 PM Sajid Ali Syed <[email protected]> wrote:
>
>> Hi PETSc-developers,
>>
>> Is it possible to use VecSetValues with distributed-memory CUDA & Kokkos
>> vectors from the device, i.e., can I call VecSetValues with GPU memory
>> pointers and expect PETSc to figure out how to stash them on the device
>> until I call VecAssemblyBegin (at which point PETSc could use GPU-aware
>> MPI to populate off-process values)?
>>
>> If this is not currently supported, is supporting this on the roadmap?
>> Thanks in advance!
>>
>
> VecSetValues() will fall back to the CPU vector, so I do not think this
> will work on device.
>
> Usually, our assembly computes all values and puts them in a "local"
> vector, which you can access explicitly as Mark said. Then
> we call LocalToGlobal() to communicate the values, which does work
> directly on device using specialized code in VecScatter/PetscSF.
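>
> For concreteness, a rough sketch of that path (hypothetical code, assuming a
> CUDA or Kokkos build of a recent PETSc and some DM "dm" you already have; the
> device fill kernel is only indicated by a comment):
>
> #include <petscdm.h>
>
> /* gvec is assumed to be a zeroed global vector created from dm */
> static PetscErrorCode AssembleOnDevice(DM dm, Vec gvec)
> {
>   Vec          lvec;
>   PetscScalar *larray;
>   PetscMemType mtype;
>
>   PetscFunctionBeginUser;
>   PetscCall(DMGetLocalVector(dm, &lvec));
>   PetscCall(VecSet(lvec, 0.0));
>   /* Device pointer to the ghosted local storage; a user CUDA/Kokkos kernel
>      would fill both owned and ghost entries here */
>   PetscCall(VecGetArrayAndMemType(lvec, &larray, &mtype));
>   /* ... launch a kernel that writes larray ... */
>   PetscCall(VecRestoreArrayAndMemType(lvec, &larray));
>   /* Off-process (ghost) contributions are summed into their owners; the
>      VecScatter/PetscSF underneath can stay on the device */
>   PetscCall(DMLocalToGlobalBegin(dm, lvec, ADD_VALUES, gvec));
>   PetscCall(DMLocalToGlobalEnd(dm, lvec, ADD_VALUES, gvec));
>   PetscCall(DMRestoreLocalVector(dm, &lvec));
>   PetscFunctionReturn(0);
> }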
>
> What are you trying to do?
>
>   Thanks,
>
>       Matt
>
>
>> Thank You,
>> Sajid Ali (he/him) | Research Associate
>> Scientific Computing Division
>> Fermi National Accelerator Laboratory
>> s-sajid-ali.github.io
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
