@Jed: thank you for your answer!
@Barry: yes, I am thinking of CUDA Fortran.
Thank you,
-Han
> On Dec 12, 2023, at 6:41 PM, Barry Smith wrote:
>
>
> Are you thinking CUDA Fortran or some other "Fortran but running on the
> GPU"?
On Dec 12, 2023, at 8:11 PM, Jed Brown wrote:
Han Tran writes:
> Hi Jed,
>
> Thank you for your answer. I have not had a chance to work on this since I
> asked. I have some follow-up questions.
>
> (1) From the PETSc manual,
> https://petsc.org/release/manualpages/Vec/VecGetArrayAndMemType/, it shows
> that both VecGetArrayAndMemType()
Yes, this is supported. You can use VecGetArrayAndMemType() to get access to
device memory. You'll often use DMGlobalToLocalBegin/End() or VecScatter to
communicate, but that will use GPU-aware MPI if your Vec is a device vector.
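For what it's worth, a minimal sketch of what this can look like in C. This is an illustration under assumptions, not a tested implementation: it assumes a CUDA build of PETSc, `my_device_spmv()` stands in for a hypothetical user kernel launcher, and the host fallback is just a placeholder operation.

```c
#include <petsc.h>

/* Hypothetical user routine that launches a CUDA kernel for y = A*x
   on raw device pointers; not part of PETSc. */
extern void my_device_spmv(PetscInt n, const PetscScalar *x, PetscScalar *y);

/* MatMult callback for the MatShell. When x and y are device vectors,
   VecGetArray*AndMemType() returns device pointers, so no host transfer occurs. */
static PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
{
  const PetscScalar *xarr;
  PetscScalar       *yarr;
  PetscMemType       mtype;
  PetscInt           n;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(y, &n));
  PetscCall(VecGetArrayReadAndMemType(x, &xarr, &mtype));
  PetscCall(VecGetArrayWriteAndMemType(y, &yarr, NULL));
  if (PetscMemTypeDevice(mtype)) {
    my_device_spmv(n, xarr, yarr);  /* operate directly on device memory */
  } else {
    for (PetscInt i = 0; i < n; i++) yarr[i] = 2.0 * xarr[i]; /* host fallback (placeholder) */
  }
  PetscCall(VecRestoreArrayWriteAndMemType(y, &yarr));
  PetscCall(VecRestoreArrayReadAndMemType(x, &xarr));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Setup sketch: create the shell matrix and hand it to a KSP as usual. */
static PetscErrorCode SetupShell(MPI_Comm comm, PetscInt n, KSP ksp, Mat *A)
{
  PetscFunctionBeginUser;
  PetscCall(MatCreateShell(comm, n, n, PETSC_DETERMINE, PETSC_DETERMINE, NULL, A));
  PetscCall(MatShellSetOperation(*A, MATOP_MULT, (void (*)(void))MyMatMult));
  /* Ask the shell to create CUDA work vectors, so the solver's vectors live
     on the device and MyMatMult sees device pointers. */
  PetscCall(MatShellSetVecType(*A, VECCUDA));
  PetscCall(KSPSetOperators(ksp, *A, *A));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The `MatShellSetVecType(*A, VECCUDA)` call is what keeps the Krylov solver's work vectors on the device; without it (or device vectors coming from a DM), the callback would receive host arrays and fall through to the CPU branch.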
Han Tran writes:
Hi,
I am aware that PETSc recently added support for solvers on the GPU. I wonder whether PETSc
supports MatShell with GPU solvers, i.e., I have a user-defined MatMult()
function residing on the device, and I want to use MatShell directly with PETSc
GPU solvers without any transfer back and forth between