Hi PETSc-developers,

Is it possible to use VecSetValues with distributed-memory CUDA & Kokkos vectors from the device, i.e. can I call VecSetValues with GPU memory pointers and expect PETSc to figure out how to stash the values on the device until I call VecAssemblyBegin (at which point PETSc could use GPU-aware MPI to populate off-process values)?
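For concreteness, here is a minimal sketch of the usage pattern I have in mind, written with the standard host-side API (the vector size, indices, and values are just placeholders). The question is whether the idx/vals arrays passed to VecSetValues could instead be device (GPU) pointers:

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec         x;
  PetscInt    idx[2];
  PetscScalar vals[2];

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, 100));
  PetscCall(VecSetType(x, VECCUDA));          /* or VECKOKKOS */

  /* Today idx/vals are host arrays; the question is whether they
     could be device pointers and the stashing happen on the GPU. */
  idx[0]  = 0;   idx[1]  = 99;
  vals[0] = 1.0; vals[1] = 2.0;
  PetscCall(VecSetValues(x, 2, idx, vals, ADD_VALUES));

  /* Off-process contributions are communicated here; with device
     buffers this could in principle use GPU-aware MPI. */
  PetscCall(VecAssemblyBegin(x));
  PetscCall(VecAssemblyEnd(x));

  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}
```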
If this is not currently supported, is supporting this on the roadmap? Thanks in advance!

Thank You,
Sajid Ali (he/him) | Research Associate
Scientific Computing Division
Fermi National Accelerator Laboratory
s-sajid-ali.github.io
