Paul,

Thank you for your suggestion. I will try different spack install 
specifications.

Cho
________________________________
From: Grosse-Bley, Paul Leonard <paul.grosse-b...@stud.uni-heidelberg.de>
Sent: Friday, June 30, 2023 4:07 AM
To: Ng, Cho-Kuen <c...@slac.stanford.edu>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Using PETSc GPU backend

Hi Cho,

You might want to specify the GPU architecture to make sure that everything is 
compiled optimally for it, e.g. "spack install petsc +cuda cuda_arch=80 +zoltan" 
(cuda_arch=80 matches the A100 GPUs on Perlmutter).

Best,
Paul


On Friday, June 30, 2023 01:50 CEST, petsc-users-requ...@mcs.anl.gov wrote:

Date: Thu, 29 Jun 2023 23:50:10 +0000
From: "Ng, Cho-Kuen" <c...@slac.stanford.edu>
To: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
Subject: [petsc-users] Using PETSc GPU backend

I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and 
loaded it with "spack load petsc/fwge6pf". Then I compiled the application code 
(purely CPU code), linking it against the petsc package, in the hope of getting 
a performance improvement from the PETSc GPU backend. However, with the same 
number of MPI tasks, the timing was the same with and without GPU accelerators. 
Have I missed something in the process, for example, setting PETSc options at 
runtime to use the GPU backend?
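
For example, I wonder whether I need to launch the application with options 
like the following (the executable name and task count here are just 
placeholders):

    # select the CUDA vector/matrix implementations and print a timing summary
    srun -n 4 ./app -vec_type cuda -mat_type aijcusparse -log_view

My understanding is that such options only take effect if the code creates its 
Vec and Mat objects through VecSetFromOptions()/MatSetFromOptions().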

Thanks,
Cho
