Sorry about that. The current code is buggy anyway; I will let you know when 
I have tested it extensively so you can try again.

  Barry


> On Oct 1, 2025, at 3:47 PM, LEDAC Pierre <[email protected]> wrote:
> 
> Sorry the correct error is:
> 
> /export/home/catA/pl254994/trust/petsc/lib/src/LIBPETSC/build/petsc-barry-2025-09-30-add-matgetdiagonal-cuda/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu(3765):
>  error: argument of type "int*" is incompatible with parameter of type "const 
> PetscInt *"
>         GetDiagonal_CSR<<<(int)((n + 255) / 256), 256, 0, PetscDefaultCudaStream>>>(
>           cusparsestruct->rowoffsets_gpu->data().get(),
>           matstruct->cprowIndices->data().get(),
>           cusparsestruct->workVector->data().get(), n, darray);
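> 
> For context, a minimal compilable sketch of the mismatch (illustrative names, 
> not the MR's actual code): under --with-64-bit-indices PetscInt is 64-bit, 
> while the row offsets kept for cuSPARSE are 32-bit int (hence the int* in the 
> error), so matching the stored 32-bit type in the kernel signature is one way 
> to compile under both index widths:
> 
> #include <cstdint>
> typedef int64_t PetscInt; // assumption: the width --with-64-bit-indices selects
> 
> __global__ void TakesPetscInt(const PetscInt *offsets) { (void)offsets; }
> __global__ void TakesInt(const int *offsets) { (void)offsets; }
> 
> void Launch(int *rowoffsets32, int n)
> {
>   // TakesPetscInt<<<(n + 255) / 256, 256>>>(rowoffsets32); // the reported error
>   TakesInt<<<(n + 255) / 256, 256>>>(rowoffsets32);         // compiles in both builds
> }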
> 
> 
> Pierre LEDAC
> Commissariat à l’énergie atomique et aux énergies alternatives
> Centre de SACLAY
> DES/ISAS/DM2S/SGLS/LCAN
> Bâtiment 451 – point courrier n°41
> F-91191 Gif-sur-Yvette
> +33 1 69 08 04 03
> +33 6 83 42 05 79
> From: LEDAC Pierre
> Sent: Wednesday, October 1, 2025 21:46:00
> To: Barry Smith
> Cc: Junchao Zhang; [email protected]
> Subject: RE: [petsc-users] [GPU] Jacobi preconditioner
>  
> Hi all,
> 
> Thanks for the MR; there is a build issue because we use --with-64-bit-indices:
> 
> /export/home/catA/pl254994/trust/petsc/lib/src/LIBPETSC/build/petsc-barry-2025-09-30-add-matgetdiagonal-cuda/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu(3765):
>  error: argument of type "PetscInt" is incompatible with parameter of type 
> "const PetscInt *"
>         GetDiagonal_CSR<<<(int)((n + 255) / 256), 256, 0, PetscDefaultCudaStream>>>(
>           cusparsestruct->rowoffsets_gpu->data().get(),
>           matstruct->cprowIndices->data().get(),
>           cusparsestruct->workVector->data().get(), n, darray);
> 
> Thanks,
> 
> Pierre LEDAC
> Commissariat à l’énergie atomique et aux énergies alternatives
> Centre de SACLAY
> DES/ISAS/DM2S/SGLS/LCAN
> Bâtiment 451 – point courrier n°41
> F-91191 Gif-sur-Yvette
> +33 1 69 08 04 03
> +33 6 83 42 05 79
> From: Barry Smith <[email protected]>
> Sent: Wednesday, October 1, 2025 18:48:37
> To: LEDAC Pierre
> Cc: Junchao Zhang; [email protected]
> Subject: Re: [petsc-users] [GPU] Jacobi preconditioner
>  
> 
>     I have finally created an MR that moves the Jacobi preconditioner's access 
> to the diagonal onto the GPU, which should improve the GPU performance of your 
> code: https://gitlab.com/petsc/petsc/-/merge_requests/8756
>  
> 
>     Please give it a try and let us know if it causes any difficulties or, 
> hopefully, improves your code's performance significantly.
> 
>    Sorry for the long delay; NVIDIA is hiring too many PETSc developers away 
> from us.
> 
>    Barry
> 
>> On Jul 31, 2025, at 6:46 AM, LEDAC Pierre <[email protected]> wrote:
>> 
>> Thanks Barry, I agree but didn't dare ask for that.
>> 
>> Pierre LEDAC
>> Commissariat à l’énergie atomique et aux énergies alternatives
>> Centre de SACLAY
>> DES/ISAS/DM2S/SGLS/LCAN
>> Bâtiment 451 – point courrier n°41
>> F-91191 Gif-sur-Yvette
>> +33 1 69 08 04 03
>> +33 6 83 42 05 79
>>  
>> From: Barry Smith <[email protected]>
>> Sent: Wednesday, July 30, 2025 20:34:26
>> To: Junchao Zhang
>> Cc: LEDAC Pierre; [email protected]
>> Subject: Re: [petsc-users] [GPU] Jacobi preconditioner
>>  
>> 
>>    We absolutely should have a MatGetDiagonal_SeqAIJCUSPARSE(). It's 
>> somewhat embarrassing that we don't provide this.
>> 
>>    I have found some potential code at 
>> https://stackoverflow.com/questions/60311408/how-to-get-the-diagonal-of-a-sparse-matrix-in-cusparse
>>  
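>> A sketch in the spirit of that approach (illustrative only, not PETSc's 
>> actual implementation; the typedefs are assumptions): one thread per row 
>> scans the row's stored CSR entries and copies the diagonal entry if present.
>> 
>> #include <cstdint>
>> typedef int64_t PetscInt;    // assumption: 64-bit-indices build
>> typedef double  PetscScalar; // assumption: real, double-precision scalars
>> 
>> __global__ void GetDiagonalCSR(const PetscInt *rowptr, const PetscInt *colidx,
>>                                const PetscScalar *val, PetscInt n, PetscScalar *diag)
>> {
>>   PetscInt row = (PetscInt)blockIdx.x * blockDim.x + threadIdx.x;
>>   if (row >= n) return;
>>   diag[row] = 0.0;                                         // no stored diagonal -> 0
>>   for (PetscInt k = rowptr[row]; k < rowptr[row + 1]; k++) // scan this row's nonzeros
>>     if (colidx[k] == row) { diag[row] = val[k]; break; }
>> }
>> // launched over n rows, e.g. GetDiagonalCSR<<<(int)((n + 255) / 256), 256>>>(...);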
>> 
>>    Barry
>> 
>> 
>> 
>> 
>>> On Jul 28, 2025, at 11:43 AM, Junchao Zhang <[email protected]> wrote:
>>> 
>>> Yes, MatGetDiagonal_SeqAIJCUSPARSE hasn't been implemented. The petsc/cuda and 
>>> petsc/kokkos backends are separate code bases. 
>>> If petsc/kokkos meets your needs, then just use it. For PETSc users, we hope 
>>> the only difference will be the extra --download-kokkos 
>>> --download-kokkos-kernels options at configure time. 
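>>> 
>>> For example (a sketch; the remaining options depend on your existing setup, 
>>> and --with-cuda here is an assumption about your current configuration):
>>> 
>>>   ./configure --with-cuda --download-kokkos --download-kokkos-kernels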
>>> 
>>> --Junchao Zhang
>>> 
>>> 
>>> On Mon, Jul 28, 2025 at 2:51 AM LEDAC Pierre <[email protected]> wrote:
>>>> Hello all,
>>>> 
>>>> We are using PETSc to solve a linear system that is updated every time step 
>>>> (constant stencil, but changing coefficients).
>>>> 
>>>> The matrix is preallocated once with MatSetPreallocationCOO(), then filled 
>>>> each time step with MatSetValuesCOO(), and we use device pointers for 
>>>> coo_i, coo_j, and the coefficient values.
>>>> 
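>>>> For reference, our assembly path looks roughly like this (a sketch with 
>>>> error checking and setup omitted; coo_i/coo_j/coo_v are device pointers, 
>>>> which MatSetPreallocationCOO()/MatSetValuesCOO() accept for aijcusparse 
>>>> matrices):
>>>> 
>>>>   #include <petscmat.h>
>>>> 
>>>>   Mat          A;
>>>>   PetscCount   ncoo;           // number of COO entries
>>>>   PetscInt    *coo_i, *coo_j;  // device pointers, filled once
>>>>   PetscScalar *coo_v;          // device pointer, refilled every time step
>>>> 
>>>>   MatCreate(PETSC_COMM_WORLD, &A);
>>>>   MatSetSizes(A, n, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>>>   MatSetType(A, MATAIJCUSPARSE);
>>>>   MatSetPreallocationCOO(A, ncoo, coo_i, coo_j); // once: the stencil is constant
>>>> 
>>>>   // every time step: only the coefficient values change
>>>>   MatSetValuesCOO(A, coo_v, INSERT_VALUES);
>>>> 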
>>>> It is working fine with a GMRES KSP solver and PC Jacobi, but we are 
>>>> surprised to see that at every time step, during PCSetUp, 
>>>> MatGetDiagonal_SeqAIJ is called even though the matrix is on the device. 
>>>> Looking at the API, it seems there is no MatGetDiagonal_SeqAIJCUSPARSE(), 
>>>> only a MatGetDiagonal_SeqAIJKOKKOS().
>>>> 
>>>> Does that mean we should use the Kokkos backend in PETSc to have the 
>>>> Jacobi preconditioner built directly on the device? Or am I doing 
>>>> something wrong?
>>>> NB: GMRES itself runs well on the device.
>>>> 
>>>> I could use -ksp_reuse_preconditioner to avoid the Jacobi preconditioner 
>>>> being rebuilt on the host at every solve, but it significantly increases 
>>>> the number of iterations.
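>>>> 
>>>> For reference, the programmatic equivalent of that option (a sketch, with 
>>>> ksp assumed already created and configured):
>>>> 
>>>>   KSPSetReusePreconditioner(ksp, PETSC_TRUE); // skip PC rebuild on later solves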
>>>> 
>>>> Thanks,
>>>> 
>>>> [screenshot attached: pastedImage.png]
>>>> 
>>>> 
>>>> Pierre LEDAC
>>>> Commissariat à l’énergie atomique et aux énergies alternatives
>>>> Centre de SACLAY
>>>> DES/ISAS/DM2S/SGLS/LCAN
>>>> Bâtiment 451 – point courrier n°41
>>>> F-91191 Gif-sur-Yvette
>>>> +33 1 69 08 04 03
>>>> +33 6 83 42 05 79
