Basically, you have to respect Fortran's pickiness about passing the correct
dimension (or lack of dimension) of arguments. In the error below, you need to
pass PETSC_NULL_INTEGER_ARRAY.
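For example, with MatCreateAIJ, whose d_nnz/o_nnz arguments are arrays (a minimal sketch, assuming #include <petsc/finclude/petscmat.h> and use petscmat; the sizes here are illustrative):

  PetscInt :: mloc, nloc, mglob, nglob
  Mat :: A
  PetscErrorCode :: ierr
  ! d_nnz/o_nnz are array arguments: with scalar preallocation you must
  ! pass the array-typed null, not the scalar PETSC_NULL_INTEGER
  call MatCreateAIJ(PETSC_COMM_WORLD, mloc, nloc, mglob, nglob, &
                    7, PETSC_NULL_INTEGER_ARRAY, 3, PETSC_NULL_INTEGER_ARRAY, A, ierr)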
On Jul 19, 2024, at 12:20 PM, Vanella, Marcos (Fed) via petsc-users <petsc-users@mcs.anl.gov> wrote:
Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When
compiling my code with PETSc I see this new error pop up:
Building mpich_gnu_frontier
ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics
-cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\"
to:
module load craype-accel-nvidia80
And then rebuild PETSc and your application,
And have the same list of modules loaded at runtime.
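i.e., something like (a sketch, assuming PrgEnv-gnu; exact module names on Polaris may differ):

  module use /soft/modulefiles
  module load craype-accel-nvidia80
  ./configure --with-cc=cc --with-cxx=CC --with-fc=ftn --with-cuda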
Satish
On Thu, 2 May 2024, Vanella, Marcos (Fed) via petsc-users wrote:
> Thank you Satish and Junchao! I was able to compile PETSc with your configure
> options.
> --Junchao Zhang
>
>
> On Thu, May 2, 2024 at 10:23 AM Satish Balay via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> > Try:
> >
> > module use /soft/modulefiles
> >
> > Satish
> >
> > On Thu, 2 May 2024, Vanella, Marcos (Fed) via petsc-users wrote:
Hi all, it seems the modules in Polaris have changed (can't find
cudatoolkit-standalone anymore).
Does anyone have recent experience compiling the library with gnu and cuda in
the machine?
Thank you!
Marcos
On Apr 29, 2024, at 12:05 PM, Vanella, Marcos (Fed) via petsc-users <petsc-users@mcs.anl.gov> wrote:
Hi Satish,
Ok, thank you for clarifying. I don't need to include Metis in the config phase then (I'm not using it anywhere else).
Is there a way
o *.so |grep METIS_PartGraphKway
libcholmod.so:0026e500 T SuiteSparse_metis_METIS_PartGraphKway
<<<
And metis routines are already in -lcholmod [with some namespace fixes]
Satish
On Mon, 29 Apr 2024, Vanella, Marcos (Fed) via petsc-users wrote:
Hi all, I'm wondering: is it possible to get SuiteSparse to use Metis at
configure time with PETSc? Using Metis for reordering at the symbolic
factorization phase gives lower fill in the factor matrices than AMD in some
cases (faster solution phase).
I tried this with gcc compilers and
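For reference, the configure option in question is --download-suitesparse (a sketch; per Satish's note above, CHOLMOD already carries its own namespaced METIS, so no separate --download-metis is needed for its reordering):

  ./configure --download-suitesparse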
Hi all, we are trying to compile PETSc in Frontier using the structured matrix
hierarchical solver strumpack, which uses GPU and might be a good candidate for
our Poisson discretization.
The list of libs I used for PETSc in this case is:
$./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3"
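A sketch of what the full line might look like (assumed flags, modeled on the Kokkos/HIP configure later in this digest plus --download-strumpack; not the poster's exact line, which is truncated above):

  ./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3" --with-debugging=0 \
    --with-cc=cc --with-cxx=CC --with-fc=ftn --with-hip --with-hipc=hipcc \
    --download-strumpack

STRUMPACK's dependencies (e.g. METIS/ParMETIS) may need their own --download flags.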
we can see what parameters you used.
The last 100 in this row:
KSPSolve 1197 0.0 2.0291e+02 0.0 2.55e+11 0.0 3.9e+04 8.0e+04 3.1e+04 12 100 100 100 49 12 100 100 100 98 2503 -nan 0 1.80e-05 0 0.00e+00 100
tells us that all the flops were logged on GPUs.
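That table comes from running with -log_view (a sketch; launcher, rank count, and executable name are placeholders):

  srun -n 8 ./app.exe -log_view

Recent PETSc also accepts -log_view_gpu_time for more accurate GPU kernel timings.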
You do need at least 100K equations per GPU to see speedup, so don't worry
about small problems.
Mark
On Tue, Mar 5, 2024 at 12:52 PM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi all, I compiled the latest PETSc source in Frontier using gcc+kokkos and hip
options:
./configure COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3" FCOPTFLAGS="-O3"
HIPOPTFLAGS="-O3" --with-debugging=0 --with-cc=cc --with-cxx=CC --with-fc=ftn
--with-hip --with-hipc=hipcc
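With a build like that, the GPU backends are selected at runtime (a sketch; the executable name is a placeholder):

  srun -n 8 ./app.exe -vec_type kokkos -mat_type aijkokkos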
From: Matthew Knepley
Sent: Monday, October 16, 2023 3:03 PM
To: Vanella, Marcos (Fed)
Cc: petsc-users@mcs.anl.gov ; Paul, Chandan
(IntlAssoc)
Subject: Re: [petsc-users] Using Sundials from PETSc
On Mon, Oct 16, 2023 at 2:29 PM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi, we were wondering if it would be possible to call the latest version of
Sundials from PETSc?
We are interested in doing chemistry using GPUs and already have interfaces to
PETSc from our code.
Thanks,
Marcos
-mat_type aijcusparse -vec_type cuda
Then, check again with nvidia-smi to see if GPU memory is evenly allocated.
--Junchao Zhang
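Putting that together (a sketch; launcher, rank count, and executable name are placeholders):

  mpiexec -n 4 ./app.exe -mat_type aijcusparse -vec_type cuda
  nvidia-smi    # memory use should now be spread across the GPUs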
On Tue, Aug 22, 2023 at 3:03 PM Matthew Knepley
<knep...@gmail.com> wrote:
On Tue, Aug 22, 2023 at 2:54 PM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi Junchao, both the slurm scontrol show job_id -dd and looking at
CUDA_VISIBLE_DEVICES fail to show which MPI process is associated with which
GPU in the node on our system. I can see this with nvidia-smi, but if you have
any other suggestion using slurm I would like to
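One slurm-independent way to get that mapping is nvidia-smi's query interface (a sketch):

  nvidia-smi --query-compute-apps=pid,gpu_uuid,used_gpu_memory --format=csv

which lists each compute PID alongside the UUID of the GPU it is running on.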
Ok thanks Junchao, so is GPU 0 actually allocating memory for the 8 MPI
processes' meshes but only working on 2 of them?
It says in the script it has allocated 2.4GB
Best,
Marcos
From: Junchao Zhang
Sent: Monday, August 21, 2023 3:29 PM
To: Vanella, Marcos (Fed)
Hi Junchao, something I'm noticing when running with CUDA-enabled linear
solvers (CG+HYPRE, CG+GAMG) is that for multi-CPU, multi-GPU calculations,
GPU 0 in the node is taking what seems to be all the sub-matrices corresponding
to all the MPI processes in the node. This is the result of the
Subject: Re: [petsc-users] CUDA error trying to run a job with two mpi
processes and 1 GPU
Hi, Marcos,
Could you build petsc in debug mode and then copy and paste the whole error
stack message?
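For example (a sketch; keep your existing options and switch the debugging flag):

  ./configure --with-debugging=1 <your other configure options>
  make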
Thanks
--Junchao Zhang
On Thu, Aug 10, 2023 at 5:51 PM Vanella, Marcos (Fed) via petsc-users wrote:
Hi, I'm trying to run a parallel matrix/vector build and linear solution with
PETSc on 2 MPI processes + one V100 GPU. I tested that the matrix build and
solution are successful on CPUs only. I'm using CUDA 11.5, CUDA-enabled
OpenMPI, and gcc 9.3. When I run the job with GPU enabled I get the
Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil
(unstructured) poisson solution
On Mon, Jun 26, 2023 at 12:08 PM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Thank you Matt and Mark, I'll try your suggestions. To configure with hypre can
I just use the --download-hypre configure line?
Yes,
Thanks,
Matt
That is what I did with suitesparse, ver
that also: -pc_type hypre
As Matt said MG should be faster. How many iterations was it taking?
Try a 100^3 and check that the iteration count does not change much, if at all.
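i.e., something like (a sketch; the executable name is a placeholder):

  ./app.exe -pc_type hypre -ksp_monitor

and watch whether the iteration count stays roughly constant from 50^3 to 100^3.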
Mark
On Mon, Jun 26, 2023 at 11:35 AM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi, I was wondering if anyone has experience with which combinations are more
efficient for solving a Poisson problem derived from a 7-point stencil on a
single mesh (serial).
I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type
mg takes about 75% more time than -pc_type
Subject: Re: [petsc-users] Compiling PETSC with Intel OneAPI compilers and
OpenMPI
Vanella, Marcos (Fed) via petsc-users wrote:
> Hi Satish, well turns out this is not an M1 Mac, it is an older Intel Mac
> (2019).
> I'm trying to get a local computer to do development and tests, but I also
> have access to linux clusters with GPU which we plan to go to ne
What do the Intel compilers provide you for this use case?
Why not use xcode/clang with gfortran here - i.e., native ARM binaries?
Satish
On Mon, 15 May 2023, Vanella, Marcos (Fed) via petsc-users wrote:
> Hello, I'm trying to compile the PETSc library version 3.19.1 with OpenMPI
> 4.1.4
On Mon, May 15, 2023 at 11:19 AM Vanella, Marcos (Fed) via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hello, I'm trying to compile the PETSc library version 3.19.1 with OpenMPI
4.1.4 and the OneAPI 2022 Update 2 Intel Compiler suite on a Mac with OSX
Ventura 13.3.1.
I can compile PETSc in debug mode with these configure and make lines. I can run
the PETSC tests, which seem fine.
When I compile
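For reference, a typical configure for an MPI-wrapper setup like this looks as follows (a sketch, not the poster's exact line, which is not shown in this excerpt):

  ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --with-debugging=1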