Mark,

The application code reads its parameters from an input file, where we can put 
the PETSc runtime options. Then we pass the options to PetscInitialize(...). 
Does that sound right?

Cho
________________________________
From: Ng, Cho-Kuen <c...@slac.stanford.edu>
Sent: Thursday, June 29, 2023 8:32 PM
To: Mark Adams <mfad...@lbl.gov>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Using PETSc GPU backend

Mark,

Thanks for the information. How do I pass the runtime options to the 
executable, say a.out, which has no provision for appending arguments? Do I 
need to change the C++ main to read in the options?

Cho
________________________________
From: Mark Adams <mfad...@lbl.gov>
Sent: Thursday, June 29, 2023 5:55 PM
To: Ng, Cho-Kuen <c...@slac.stanford.edu>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Using PETSc GPU backend

Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left

The last column of the performance data (from -log_view) is the percentage of 
flops executed on the GPU. Check that it is > 0.

The end of the output will list the options that were used and options that 
were _not_ used (if any). Check that there are no options left.
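If the executable (say a.out, as in the earlier question in this thread) offers no way to append command-line arguments, PETSc also picks up runtime options from the PETSC_OPTIONS environment variable, so no source change is needed. A sketch, assuming a batch environment where the job launcher and task count are placeholders:

```shell
# PETSc reads PETSC_OPTIONS at PetscInitialize() time, in addition to
# any command-line arguments and ~/.petscrc.
export PETSC_OPTIONS="-mat_type aijcusparse -vec_type cuda -log_view -options_left"
# Then run the unmodified executable, e.g.:
#   srun -n <ntasks> ./a.out
```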

Mark

On Thu, Jun 29, 2023 at 7:50 PM Ng, Cho-Kuen via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and 
loaded it with "spack load petsc/fwge6pf". Then I compiled the application code 
(purely CPU code), linking against the PETSc package, hoping to get a 
performance improvement from the PETSc GPU backend. However, the timing was the 
same with and without GPU accelerators for the same number of MPI tasks. Have I 
missed something in the process, for example, setting PETSc options at runtime 
to use the GPU backend?

Thanks,
Cho
