Thanks for the trick. We can prepare the example script for Lonestar6 and
mention it.
--Junchao Zhang
On Fri, Apr 19, 2024 at 11:55 AM Sreeram R Venkat
wrote:
> I talked to the MVAPICH people, and they told me to try adding
> /path/to/mvapich2-gdr/lib64/libmpi.so to LD_PRELOAD (apparently,
I talked to the MVAPICH people, and they told me to try adding
/path/to/mvapich2-gdr/lib64/libmpi.so to LD_PRELOAD (apparently, they've
had this issue before). This seemed to do the trick; I can build everything
with MVAPICH2-GDR and run with it now. Not sure if this is something you
want to add
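The preload workaround described above can be sketched roughly as below; the MVAPICH2-GDR install prefix and the launcher/application names are assumptions, not the actual paths from this thread:

```shell
#!/bin/sh
# Hypothetical MVAPICH2-GDR install prefix -- substitute your module's path.
MV2_GDR_PREFIX=/opt/apps/mvapich2-gdr

# Prepend the GDR libmpi.so so the dynamic loader resolves MPI symbols from
# the GPU-aware build, even though the binary was linked against the
# (non GPU-aware) Intel MPI. Preserve any preexisting LD_PRELOAD entries.
preload="$MV2_GDR_PREFIX/lib64/libmpi.so${LD_PRELOAD:+:$LD_PRELOAD}"

# Launch under the preloaded library (launcher and binary are placeholders):
#   LD_PRELOAD="$preload" ibrun ./my_petsc_app
echo "$preload"
```

Setting `LD_PRELOAD` only at launch time, rather than exporting it in the login shell, avoids preloading the MPI library into every unrelated process.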
I looked at it before and checked again, and still see
Yes, I saw this paper
https://www.sciencedirect.com/science/article/abs/pii/S016781912100079X
that mentioned it, and I heard in Barry's talk at SIAM PP this
On Wed, Apr 17, 2024 at 7:51 AM Sreeram R Venkat
wrote:
> Do you know if there are plans for NCCL support in PETSc?
>
What is your need? Do you mean using NCCL for the MPI communication?
>
> On Tue, Apr 16, 2024, 10:41 PM Junchao Zhang
> wrote:
>
>> Glad to hear you found a way. Did you
Victor, through the SMART PETSc project, I do have access to Frontera and
Lonestar6.
--Junchao Zhang
On Wed, Apr 17, 2024 at 3:55 AM Victor Eijkhout
wrote:
>
>- Did you use Frontera at TACC? If yes, I could have a try.
>
> If you’re interested in access to other TACC machines
Do you know if there are plans for NCCL support in PETSc?
On Tue, Apr 16, 2024, 10:41 PM Junchao Zhang
wrote:
> Glad to hear you found a way. Did you use Frontera at TACC? If yes, I
> could have a try.
>
> --Junchao Zhang
>
>
> On Tue, Apr 16, 2024 at 8:35 PM Sreeram R Venkat
> wrote:
>
>>
> Did you use Frontera at TACC? If yes, I could have a try.
If you’re interested in access to other TACC machines, that can be arranged. I
once set up a project for PETSc access to TACC. I think that was for a GitHub
CI, but we never actually set it up.
Victor.
Glad to hear you found a way. Did you use Frontera at TACC? If yes, I
could have a try.
--Junchao Zhang
On Tue, Apr 16, 2024 at 8:35 PM Sreeram R Venkat
wrote:
> I finally figured out a way to make it work. I had to build PETSc and my
> application using the (non GPU-aware) Intel MPI.
I finally figured out a way to make it work. I had to build PETSc and my
application using the (non GPU-aware) Intel MPI. Then, before running, I
switch to the MVAPICH2-GDR.
I'm not sure why that works, but it's the only way I've found to compile
and run successfully without throwing any errors.
You may need to set some env variables. This can be system specific so you
might want to look at docs or ask TACC how to run with GPU-aware MPI.
Mark
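As a sketch of what those environment variables might look like for MVAPICH2-GDR: `MV2_USE_CUDA` appears later in this thread, and `MV2_USE_GPUDIRECT` comes from the MVAPICH2-GDR documentation, but the exact set is system specific, so verify with TACC before relying on it:

```shell
#!/bin/sh
# Job-script fragment (assumed names; check your site's MVAPICH2-GDR docs).
export MV2_USE_CUDA=1        # enable CUDA-aware point-to-point/collectives
export MV2_USE_GPUDIRECT=1   # enable GPUDirect RDMA where the fabric supports it

echo "MV2_USE_CUDA=$MV2_USE_CUDA MV2_USE_GPUDIRECT=$MV2_USE_GPUDIRECT"
```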
On Fri, Dec 8, 2023 at 5:17 PM Sreeram R Venkat wrote:
> Actually, when I compile my program with this build of PETSc and run, I
> still get the
Actually, when I compile my program with this build of PETSc and run, I
still get the error:
PETSC ERROR: PETSc is configured with GPU support, but your MPI is not
GPU-aware. For better performance, please use a GPU-aware MPI.
I have the mvapich2-gdr module loaded and MV2_USE_CUDA=1.
Is there
Thank you, changing to CUDA 11.4 fixed the issue. The mvapich2-gdr module
didn't require CUDA 11.4 as a dependency, so I was using 12.0.
On Fri, Dec 8, 2023 at 1:15 PM Satish Balay wrote:
> Executing: mpicc -show
> stdout: icc -I/opt/apps/cuda/11.4/include -I/opt/apps/cuda/11.4/include
> -lcuda
Executing: mpicc -show
stdout: icc -I/opt/apps/cuda/11.4/include -I/opt/apps/cuda/11.4/include -lcuda
-L/opt/apps/cuda/11.4/lib64/stubs -L/opt/apps/cuda/11.4/lib64 -lcudart -lrt
-Wl,-rpath,/opt/apps/cuda/11.4/lib64 -Wl,-rpath,XORIGIN/placeholder
-Wl,--build-id -L/opt/apps/cuda/11.4/lib64/ -lm
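The `mpicc -show` output above is how the version mismatch was caught: the wrapper hard-codes CUDA 11.4 paths, so any other loaded CUDA module conflicts. A quick way to inspect this (here scanning a saved copy of the wrapper output; normally you would pipe `mpicc -show` directly):

```shell
#!/bin/sh
# Saved mpicc -show output (abbreviated from the thread above).
show='icc -I/opt/apps/cuda/11.4/include -L/opt/apps/cuda/11.4/lib64 -lcudart'

# Pull out the CUDA version baked into the wrapper's include/library paths.
cuda_ver=$(printf '%s\n' "$show" | grep -o '/cuda/[0-9.]*' | head -n1)
echo "wrapper expects CUDA at: $cuda_ver"
```

Comparing this against `echo $TACC_CUDA_DIR` (or `nvcc --version`) makes the mismatch obvious before configure time.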
On Fri, Dec 8, 2023 at 1:54 PM Sreeram R Venkat wrote:
> I am trying to build PETSc with CUDA using the CUDA-Aware MVAPICH2-GDR.
>
> Here is my configure command:
>
> ./configure PETSC_ARCH=linux-c-debug-mvapich2-gdr --download-hypre
> --with-cuda=true --cuda-dir=$TACC_CUDA_DIR --with-hdf5=true
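Putting the pieces of this thread together, the configure line has roughly the shape below once the matching CUDA module (11.4) is loaded; the module names in the comment are assumptions, and `$TACC_CUDA_DIR` is the module-provided variable used above (a fallback path is supplied here only so the sketch is self-contained):

```shell
#!/bin/sh
# Assumed module setup (names are illustrative):
#   module load cuda/11.4 mvapich2-gdr
cuda_dir=${TACC_CUDA_DIR:-/opt/apps/cuda/11.4}

cmd="./configure PETSC_ARCH=linux-c-debug-mvapich2-gdr --download-hypre \
  --with-cuda=true --cuda-dir=$cuda_dir --with-hdf5=true"
echo "$cmd"
```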