Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-24 Thread Satish Balay via petsc-users
nvidia-smi gives some relevant info. I'm not sure exactly what the CUDA version
listed here refers to.

[Is it the maximum version of CUDA this driver is compatible with?]

Satish

-

[balay@p1 ~]$ nvidia-smi
Mon Feb 24 09:15:26 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.59       Driver Version: 440.59       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro T2000        Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   45C    P8     4W /  N/A |    182MiB /  3911MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1372      G   /usr/libexec/Xorg                           180MiB  |
+-----------------------------------------------------------------------------+
[balay@p1 ~]$ 
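For what it's worth, NVIDIA's driver release notes describe that banner field as the highest CUDA runtime version the installed driver supports, not the version of any installed toolkit. A minimal check of just the driver version from the shell might look like the sketch below (it falls back to a message on machines without the NVIDIA driver, so it is safe to run anywhere):

```shell
# Query only the driver version (a field nvidia-smi supports directly);
# print a fallback message when no NVIDIA driver/tool is present.
nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null \
  || echo "no NVIDIA driver loaded"
```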


On Mon, 24 Feb 2020, Junchao Zhang via petsc-users wrote:

> [0]PETSC ERROR: error in cudaSetDevice CUDA driver version is insufficient
> for CUDA runtime version
> 
> That means you need to update your cuda driver for CUDA 10.2.  See minimum
> requirement in Table 1 at
> https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components
> 
> --Junchao Zhang

Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-24 Thread Junchao Zhang via petsc-users
[0]PETSC ERROR: error in cudaSetDevice CUDA driver version is insufficient
for CUDA runtime version

That means you need to update your cuda driver for CUDA 10.2.  See minimum
requirement in Table 1 at
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components

--Junchao Zhang


On Sun, Feb 23, 2020 at 3:33 PM Abhyankar, Shrirang G <
shrirang.abhyan...@pnnl.gov> wrote:

> I was using CUDA v10.2. Switching to 9.2 gives a clean make test.

Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-23 Thread Abhyankar, Shrirang G via petsc-users
I was using CUDA v10.2. Switching to 9.2 gives a clean make test.

Thanks,
Shri


From: petsc-users  on behalf of "Abhyankar, 
Shrirang G via petsc-users" 
Reply-To: "Abhyankar, Shrirang G" 
Date: Sunday, February 23, 2020 at 3:10 PM
To: petsc-users , Junchao Zhang 
Subject: Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

I am getting an error now for CUDA driver version. Any suggestions?

petsc:maint$ make test
Running test examples to verify correct installation
Using PETSC_DIR=/people/abhy245/software/petsc and PETSC_ARCH=debug-mode-newell
Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process
See http://www.mcs.anl.gov/petsc/documentation/faq.html
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Error in system call
[0]PETSC ERROR: error in cudaSetDevice CUDA driver version is insufficient for 
CUDA runtime version
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.12.4, unknown
[0]PETSC ERROR: ./ex19 on a debug-mode-newell named newell01.pnl.gov by abhy245 
Sun Feb 23 12:49:55 2020
[0]PETSC ERROR: Configure options --download-fblaslapack --download-make 
--download-metis --download-parmetis --download-scalapack 
--download-suitesparse --download-superlu_dist-gpu=1 --download-superlu_dist=1 
--with-cc=mpicc --with-clanguage=c++ --with-cuda-dir=/share/apps/cuda/10.2 
--with-cuda=1 --with-cxx-dialect=C++11 --with-cxx=mpicxx --with-fc=mpif77 
--with-openmp=1 PETSC_ARCH=debug-mode-newell
[0]PETSC ERROR: #1 PetscCUDAInitialize() line 261 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/init.c
[0]PETSC ERROR: #2 PetscOptionsCheckInitial_Private() line 652 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/init.c
[0]PETSC ERROR: #3 PetscInitialize() line 1010 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/pinit.c
--
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--
--
mpiexec detected that one or more processes exited with non-zero status, thus 
causing
the job to be terminated. The first process to do so was:

  Process name: [[46518,1],0]
  Exit code:88
--
Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI 
processes
See http://www.mcs.anl.gov/petsc/documentation/faq.html
[0]PETSC ERROR: - Error Message 
--
[1]PETSC ERROR: - Error Message 
--
[1]PETSC ERROR: Error in system call
[1]PETSC ERROR: [0]PETSC ERROR: Error in system call
[0]PETSC ERROR: error in cudaGetDeviceCount CUDA driver version is insufficient 
for CUDA runtime version
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
error in cudaGetDeviceCount CUDA driver version is insufficient for CUDA 
runtime version
[1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[1]PETSC ERROR: [0]PETSC ERROR: Petsc Release Version 3.12.4, unknown
[0]PETSC ERROR: ./ex19 on a debug-mode-newell named newell01.pnl.gov by abhy245 
Sun Feb 23 12:49:57 2020
[0]PETSC ERROR: Configure options --download-fblaslapack --download-make 
--download-metis --download-parmetis --download-scalapack 
--download-suitesparse --download-superlu_dist-gpu=1 --download-superlu_dist=1 
--with-cc=mpicc --with-clanguage=c++ --with-cuda-dir=/share/apps/cuda/10.2 
--with-cuda=1 --with-cxx-dialect=C++11 --with-cxx=mpicxx --with-fc=mpif77 
--with-openmp=1 PETSC_ARCH=debug-mode-newell
[0]PETSC ERROR: #1 PetscCUDAInitialize() line 254 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/init.c
[0]PETSC ERROR: #2 PetscOptionsCheckInitial_Private() line 652 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/init.c
[0]PETSC ERROR: #3 PetscInitialize() line 1010 in 
/qfs/people/abhy245/software/petsc/src/sys/objects/pinit.c
Petsc Release Version 3.12.4, unknown
[1]PETSC ERROR: ./ex19 on a debug-mode-newell named newell01.pnl.gov by abhy245 
Sun Feb 23 12:49:57 2020
[1]PETSC ERROR: Configure options --download-fblaslapack --download-make 
--download-metis --download-parmetis --download-scalapack 
--download-suitesparse --download-superlu_dist-gpu=1 --download-superlu_dist=1 
--with-cc=mpicc --with-clanguage=c++ --with-cuda-dir=/share/apps/cuda/10.2 
--with-cuda=1 --with-cxx-dialect=C++11 --with-cxx=mpicxx --with-fc=mpif77 
--with-openmp=1 PETSC_ARCH=debug-mode-newell
[1]PETSC ERROR: #1 PetscCUDAInitialize() line 254 in 

Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Junchao Zhang via petsc-users
Great. Thanks.

On Sat, Feb 22, 2020 at 8:59 PM Balay, Satish  wrote:

> The fix is now in both  maint and master
>
> https://gitlab.com/petsc/petsc/-/merge_requests/2555
>
> Satish


Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Satish Balay via petsc-users
The fix is now in both  maint and master

https://gitlab.com/petsc/petsc/-/merge_requests/2555

Satish

On Sat, 22 Feb 2020, Junchao Zhang via petsc-users wrote:

> We met the error before and knew why. Will fix it soon.
> --Junchao Zhang



Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Junchao Zhang via petsc-users
We met the error before and knew why. Will fix it soon.
--Junchao Zhang


On Sat, Feb 22, 2020 at 11:43 AM Abhyankar, Shrirang G via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Thanks, Satish. Configure and make go through fine. Getting an undefined
> reference error for VecGetArrayWrite_SeqCUDA.


Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Satish Balay via petsc-users
Looks like a bug in petsc that needs fixing. However, you shouldn't need the
options '--with-cxx-dialect=C++11 --with-clanguage=c++'.

Satish

On Sat, 22 Feb 2020, Abhyankar, Shrirang G via petsc-users wrote:

> Thanks, Satish. Configure and make go through fine. Getting an undefined 
> reference error for VecGetArrayWrite_SeqCUDA.
> 
> Shri
> From: Satish Balay 
> Reply-To: petsc-users 
> Date: Saturday, February 22, 2020 at 8:25 AM
> To: "Abhyankar, Shrirang G" 
> Cc: "petsc-users@mcs.anl.gov" 
> Subject: Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist
> 
> On Sat, 22 Feb 2020, Abhyankar, Shrirang G via petsc-users wrote:
> 
> Hi,
> I want to install PETSc with GPU supported SuperLU_Dist. What are the 
> configure options I should be using?
> 
> 
> Shri,
> 
> 
> if self.framework.argDB['download-superlu_dist-gpu']:
>   self.cuda   = framework.require('config.packages.cuda',self)
>   self.openmp = framework.require('config.packages.openmp',self)
>   self.deps   = [self.mpi,self.blasLapack,self.cuda,self.openmp]
> <<<<<
> 
> So try:
> 
> --with-cuda=1 --download-superlu_dist=1 --download-superlu_dist-gpu=1 
> --with-openmp=1 [and usual MPI, blaslapack]
> 
> Satish
> 
> 
> 



Re: [petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Satish Balay via petsc-users
On Sat, 22 Feb 2020, Abhyankar, Shrirang G via petsc-users wrote:

> Hi,
> I want to install PETSc with GPU supported SuperLU_Dist. What are the
> configure options I should be using?


Shri,

>>>>>
if self.framework.argDB['download-superlu_dist-gpu']:
  self.cuda   = framework.require('config.packages.cuda',self)
  self.openmp = framework.require('config.packages.openmp',self)
  self.deps   = [self.mpi,self.blasLapack,self.cuda,self.openmp]
<<<<<

So try:

--with-cuda=1 --download-superlu_dist=1 --download-superlu_dist-gpu=1 
--with-openmp=1 [and usual MPI, blaslapack]

Satish
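Putting Satish's flags together with typical MPI and BLAS/LAPACK choices, a full invocation could look like the sketch below. The compiler wrappers and download options are placeholders, not part of Satish's answer; substitute whatever your own environment uses:

```shell
# Hypothetical configure line combining the suggested GPU/SuperLU_DIST flags
# with typical MPI wrappers and a downloaded BLAS/LAPACK; adjust as needed.
OPTS="--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
  --with-cuda=1 --with-openmp=1 \
  --download-superlu_dist=1 --download-superlu_dist-gpu=1 \
  --download-fblaslapack"
echo ./configure $OPTS   # drop the echo to actually run configure
```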



[petsc-users] Using PETSc with GPU supported SuperLU_Dist

2020-02-22 Thread Abhyankar, Shrirang G via petsc-users
Hi,
   I want to install PETSc with GPU supported SuperLU_Dist. What are the 
configure options I should be using?

Thanks,
Shri


Re: [petsc-users] Using PETSc with GPU

2019-03-15 Thread Yuyun Yang via petsc-users
Good point, thank you so much for the advice! I'll take that into consideration.

Best regards,
Yuyun


From: Jed Brown 
Sent: Friday, March 15, 2019 7:06:29 PM
To: Yuyun Yang; Smith, Barry F.
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Using PETSc with GPU



Re: [petsc-users] Using PETSc with GPU

2019-03-15 Thread Jed Brown via petsc-users
Yuyun Yang via petsc-users  writes:

> Currently we are forming the sparse matrices explicitly, but I think the goal 
> is to move towards matrix-free methods and use a stencil, which I suppose is 
> good to use GPUs for and more efficient. On the other hand, I've also read 
> about matrix-free operations in the manual just on the CPUs. Would there be 
> any benefit then to switching to GPU (looks like matrix-free in PETSc is 
> rather straightforward to use, whereas writing the kernel function for GPU 
> stencil would require quite a lot of work)?

It all depends what kind of computation happens in there and how well
you can implement it for the GPU.  It's important to have a clear idea
of what you expect to achieve.  For example, if you write an excellent
GPU implementation of your SNES residual/matrix-free Jacobian, it might
be 2-3x faster than a good CPU implementation on hardware of similar
cost ($ or Watt).  But you still need preconditioning, which is usually
at least half the work, and perhaps a preconditioner runs the same speed
on GPU and CPU (CPU version often converges a bit faster;
preconditioning operations are often less amenable to GPUs).  So after
all that effort, and now with code that is likely harder to maintain,
you go from 4 seconds per solve to 3 seconds per solve on hardware of
the same cost.  Is that worth it?

Maybe, but you probably want that to be in the critical path for your
research and/or customers.


Re: [petsc-users] Using PETSc with GPU

2019-03-15 Thread Smith, Barry F. via petsc-users



> On Mar 15, 2019, at 7:33 PM, Yuyun Yang via petsc-users 
>  wrote:
> 
> Thanks Matt, I've seen that page, but there isn't that much documentation, 
> and there is only one CUDA example, so I wanted to check if there may be more 
> references or examples somewhere else. We have very large linear systems that 
> need to be solved every time step, and which involves matrix-matrix 
> multiplications,

where do these matrix-matrix multiplications appear? Are you providing a 
"matrix-free" based operator for your linear system where you apply 
matrix-vector operations via a subroutine call? Or are you explicitly forming 
sparse matrices and using them to define the operator?



> so we thought GPU could have some benefits, but we are unsure how difficult 
> it is to migrate parts of the code to GPU with PETSc. From that webpage it 
> seems like we only need to specify the Vec / Mat option on the command line 
> and maybe change a few functions to have CUDA? The CUDA example however also 
> involves using thrust and programming a kernel function, so I want to make 
> sure I know how this works before trying to implement.

   How much, if any, CUDA/GPU code you have to write depends on what you want 
to have done on the GPU. If you provide a sparse matrix and only want  the 
system solve to take place on the GPU then you don't need to write any CUDA/GPU 
code, you just use the "CUDA" vector and matrix class. If you are doing 
"matrix-free" solves and you provide the routine that performs the 
matrix-vector product then you need to write/optimize that routine for CUDA/GPU.

   Barry
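As an illustration of the no-new-code path Barry describes, PETSc lets the CUDA vector and matrix classes be selected from the command line. For the ex19 tutorial that would be runtime options along the lines of the sketch below (option names as used in the 3.12-era GPU documentation; the command is echoed rather than executed since it needs a CUDA-enabled PETSc build and an NVIDIA GPU):

```shell
# Hypothetical run of SNES tutorial ex19 with the GPU Vec/Mat classes chosen
# at runtime -- no application source changes. Echoed because it requires a
# CUDA-enabled PETSc build and a GPU to actually execute.
echo ./ex19 -da_refine 3 -dm_vec_type cuda -dm_mat_type aijcusparse
```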

> 
> Thanks a lot,
> Yuyun
> 


Re: [petsc-users] Using PETSc with GPU

2019-03-15 Thread Yuyun Yang via petsc-users
Thanks Matt, I've seen that page, but there isn't that much documentation, and 
there is only one CUDA example, so I wanted to check if there may be more 
references or examples somewhere else. We have very large linear systems that 
need to be solved every time step, and which involves matrix-matrix 
multiplications, so we thought GPU could have some benefits, but we are unsure 
how difficult it is to migrate parts of the code to GPU with PETSc. From that 
webpage it seems like we only need to specify the Vec / Mat option on the 
command line and maybe change a few functions to have CUDA? The CUDA example 
however also involves using thrust and programming a kernel function, so I want 
to make sure I know how this works before trying to implement.

Thanks a lot,
Yuyun


From: Matthew Knepley 
Sent: Friday, March 15, 2019 2:54:02 PM
To: Yuyun Yang
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Using PETSc with GPU

On Fri, Mar 15, 2019 at 5:30 PM Yuyun Yang via petsc-users wrote:
Hello team,

Our group is thinking of using GPUs for the linear solves in our code, which is 
written in PETSc. I was reading the 2013 book chapter on implementation of 
PETSc using GPUs but wonder if there is any more updated reference that I check 
out? I also saw one example cuda code online (using thrust), but would like to 
check with you if there is a more complete documentation of how the GPU 
implementation is done?

Have you seen this page? https://www.mcs.anl.gov/petsc/features/gpus.html

Also, before using GPUs, I would take some time to understand what you think 
the possible benefit can be.
For example, there is almost no benefit if you use BLAS1, and you would have a
huge maintenance burden with a different toolchain. This is also largely true
for SpMV, since the bandwidth difference between CPUs and GPUs is now not much.
So you really should have some kind of flop-intensive (BLAS3-like) work in
there somewhere, or it's hard to see your motivation.

  Thanks,

 Matt


Thanks very much!

Best regards,
Yuyun


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


[petsc-users] Using PETSc with GPU

2019-03-15 Thread Yuyun Yang via petsc-users
Hello team,

Our group is thinking of using GPUs for the linear solves in our code, which is 
written in PETSc. I was reading the 2013 book chapter on implementation of 
PETSc using GPUs but wonder if there is any more updated reference that I check 
out? I also saw one example cuda code online (using thrust), but would like to 
check with you if there is a more complete documentation of how the GPU 
implementation is done?

Thanks very much!

Best regards,
Yuyun