Hi Herbert, I don't have much experience with OpenACC, and the PETSc CI doesn't have such tests. Could you avoid nvfortran and instead use gfortran to compile your Fortran + OpenACC code? If you can, you would be able to use the latest PETSc code, which would make our debugging easier. Also, could you provide us with a test and instructions to reproduce the problem?
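In case it helps, here is a rough, untested sketch of how one might configure PETSc against gfortran and compile the OpenACC sources; the exact flags, paths, MPI setup, and file name below are assumptions that will depend on your system, and OpenACC offload with gfortran requires a GCC build with nvptx offload support:

    # sketch only: standard PETSc configure options, adjust to your environment
    ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
        --with-cuda=1 --with-debugging=1 --download-mpich
    make all

    # hypothetical file name; -fopenacc enables OpenACC in gfortran,
    # -foffload=nvptx-none targets NVIDIA GPUs if your GCC supports it
    gfortran -fopenacc -foffload=nvptx-none -c your_petsc_interface.F90

A debugging PETSc build like this would also make it easier to see whether the wrong parallel GPU results come from the interfacing code or from PETSc itself.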
Thanks!
--Junchao Zhang

On Thu, Oct 16, 2025 at 5:07 AM howen via petsc-users <[email protected]> wrote:

> Dear All,
>
> I am interfacing our CFD code (Fortran + OpenACC) to PETSc.
> Since we use OpenACC, the natural choice for us is to use Nvidia's nvhpc
> compiler. The Gnu compiler does not work well, and we do not have access to
> the Cray compiler.
>
> I already know that the latest version of PETSc does not compile with
> nvhpc; I am therefore using version 3.21.
> I get good results on the CPU, both in serial and in parallel (MPI). However,
> the GPU implementation, which is what we are interested in, only works
> correctly for the serial version. In parallel, the results are different,
> even for a CG solve.
>
> I would like to know if you have experience with the Nvidia compiler. I
> am particularly interested in whether you have already observed issues with it.
> Your opinion on whether to put further effort into trying to find a bug I
> may have introduced during the interfacing is highly appreciated.
>
> Best,
>
> Herbert Owen
> Senior Researcher, Dpt. Computer Applications in Science and Engineering
> Barcelona Supercomputing Center (BSC-CNS)
> Tel: +34 93 413 4038
> Skype: herbert.owen
>
> https://urldefense.us/v3/__https://scholar.google.es/citations?user=qe5O2IYAAAAJ&hl=en__;!!G_uCfscf7eWS!aTIjTK-Fcr9yL0t3yY0n7IDAF_kAHNA6X3j7omFcZJel4Laq7RWEgItjDi9CSvwBIXaih6jOEqyr6gaPlp-TJZZrsWBw$
