Re: [petsc-dev] MatShell with PETSc solvers using GPU

2023-12-13 Thread Han Tran
@Jed: thank you for your answer! @Barry: yes, I am thinking of CUDA Fortran. Thank you, -Han

> On Dec 12, 2023, at 6:41 PM, Barry Smith wrote:
>
> Are you thinking CUDA Fortran or some other "Fortran but running on the GPU"?
>
>> On Dec 12, 2023, at 8:11 PM, Jed Brown wrote:
>>

Re: [petsc-dev] MatShell with PETSc solvers using GPU

2023-12-12 Thread Barry Smith
Are you thinking CUDA Fortran or some other "Fortran but running on the GPU"?

> On Dec 12, 2023, at 8:11 PM, Jed Brown wrote:
>
> Han Tran writes:
>
>> Hi Jed,
>>
>> Thank you for your answer. I have not had a chance to work on this since I asked. I have some follow-up questions.

Re: [petsc-dev] MatShell with PETSc solvers using GPU

2023-12-12 Thread Jed Brown
Han Tran writes:

> Hi Jed,
>
> Thank you for your answer. I have not had a chance to work on this since I asked. I have some follow-up questions.
>
> (1) From the PETSc manual, https://petsc.org/release/manualpages/Vec/VecGetArrayAndMemType/, it shows that both VecGetArrayAndMemType()
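For readers following the manual page referenced above, a minimal sketch of the VecGetArrayAndMemType() pattern may help: the call returns a raw pointer plus a PetscMemType flag telling you whether that pointer is host or device memory. This assumes a CUDA-enabled PETSc build; the function name `InspectVec` is a hypothetical illustration, the PETSc calls themselves are real.

```c
#include <petscvec.h>

/* Hypothetical helper showing the VecGetArrayAndMemType() usage pattern */
PetscErrorCode InspectVec(Vec v)
{
  PetscScalar *a;
  PetscMemType mtype;

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayAndMemType(v, &a, &mtype));
  if (PetscMemTypeDevice(mtype)) {
    /* 'a' is a device pointer: hand it to a CUDA kernel,
       do NOT dereference it on the host */
  } else {
    /* 'a' is ordinary host memory */
  }
  PetscCall(VecRestoreArrayAndMemType(v, &a));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The key point is that the same code path works for host and device vectors; the memtype branch is where a user would dispatch to their GPU kernel.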

Re: [petsc-dev] MatShell with PETSc solvers using GPU

2022-11-04 Thread Jed Brown
Yes, this is supported. You can use VecGetArrayAndMemType() to get access to device memory. You'll often use DMGlobalToLocalBegin/End() or VecScatter to communicate, but that will use GPU-aware MPI if your Vec is a device vector.

Han Tran writes:

> Hi,
>
> I am aware that PETSc recently
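The communication pattern Jed describes can be sketched as follows. This assumes a DM-managed problem and a device (e.g. VECCUDA) global vector; `HaloExchange` and the comment about the user's operator are hypothetical illustration, while the DM calls are standard PETSc API.

```c
#include <petscdm.h>

/* Hypothetical halo exchange before applying a user-defined device operator */
PetscErrorCode HaloExchange(DM dm, Vec xglobal)
{
  Vec xlocal;

  PetscFunctionBeginUser;
  PetscCall(DMGetLocalVector(dm, &xlocal));
  /* If xglobal is a device vector, ghost values are exchanged from
     device buffers directly when GPU-aware MPI is available */
  PetscCall(DMGlobalToLocalBegin(dm, xglobal, INSERT_VALUES, xlocal));
  PetscCall(DMGlobalToLocalEnd(dm, xglobal, INSERT_VALUES, xlocal));
  /* ... apply the user's device-side operator on xlocal here ... */
  PetscCall(DMRestoreLocalVector(dm, &xlocal));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```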

[petsc-dev] MatShell with PETSc solvers using GPU

2022-11-04 Thread Han Tran
Hi,

I am aware that PETSc recently supports solvers on the GPU. I wonder whether PETSc supports MatShell with GPU solvers, i.e., I have a user-defined MatMult() function residing on the device, and I want to use MatShell directly with PETSc GPU solvers without any transfer back and forth between
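Based on Jed's answer later in the thread, the setup being asked about can be sketched roughly as below. This is a minimal sketch assuming a CUDA build of PETSc; `MyMult` and the kernel launch are hypothetical placeholders, and the PETSc calls (MatCreateShell, MatShellSetOperation, MatShellSetVecType, VecGetArray*AndMemType) are real API.

```c
#include <petscksp.h>

/* Hypothetical shell MatMult that touches only device memory */
static PetscErrorCode MyMult(Mat A, Vec x, Vec y)
{
  const PetscScalar *xa;
  PetscScalar       *ya;
  PetscMemType       mx, my;

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayReadAndMemType(x, &xa, &mx));
  PetscCall(VecGetArrayAndMemType(y, &ya, &my));
  /* xa/ya are device pointers when x/y are device vectors:
     launch the user's CUDA kernel on them here (hypothetical) */
  PetscCall(VecRestoreArrayAndMemType(y, &ya));
  PetscCall(VecRestoreArrayReadAndMemType(x, &xa));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Setup, e.g. inside main(), with 'n' local rows (assumption): */
/*
  Mat A;
  PetscCall(MatCreateShell(PETSC_COMM_WORLD, n, n, PETSC_DETERMINE,
                           PETSC_DETERMINE, NULL, &A));
  PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult));
  // Have the shell hand out CUDA work vectors so KSP stays on the device
  PetscCall(MatShellSetVecType(A, VECCUDA));
*/
```

With the shell's vector type set to a device type, the Krylov solver's work vectors live on the GPU, so no host/device transfer is needed around each MatMult, which is the behavior asked about here.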