Correction! The MKL library mkl_rt.lib was mistakenly included in the configure
below. It is not needed to build PETSc, but the application needs to link against
it as well as petsc.lib.
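For example, an application link step could end up looking roughly like this
(the compiler driver, paths, and object name here are illustrative only, not
taken from the configure):

  cl app.obj /link /LIBPATH:C:\petsc\lib /LIBPATH:C:\path\to\mkl\lib\intel64 petsc.lib mkl_rt.lib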
Thuc
-----Original Message-----
From: petsc-users [mailto:petsc-users-boun...@mcs.anl.gov] On Behalf Of Thuc Bui
Sent: Tuesd
Hi Satish and Barry,
Thank you very much for getting back to me with the suggestion of the
OMP_NUM_THREADS environment variable and for how to set the option
-no_signal_handler per Barry's suggestion.
Yes, I am using threaded Intel MKL. So, I first set the environment variable
OMP_NUM_THREADS=1,
Thank you for your reply.
Let's call this matrix *M*:
(A B C D)
(E F G H)
(I J K L)
Now, instead of doing KSP with just *M*, what if I want *M^T M*? In this
case, the matvec implementation would be as follows (see the sketch after
this list):
- same partitioning of blocks A, B, ..., L among the 12 MPI ranks
- matvec look
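One option, assuming the shell matrix can also apply M^T, is to wrap it with
MatCreateNormal so KSP sees M^T M without it ever being formed. A minimal
sketch, where the UserMult/UserMultTranspose callbacks, local sizes, and
right-hand side are placeholders rather than code from this thread:

#include <petscksp.h>

/* Hypothetical user callbacks (not from this thread) implementing
   y = M x and y = M^T x with the 2D block partitioning. */
extern PetscErrorCode UserMult(Mat M, Vec x, Vec y);
extern PetscErrorCode UserMultTranspose(Mat M, Vec x, Vec y);

PetscErrorCode SolveWithNormalMatrix(MPI_Comm comm, PetscInt mlocal, PetscInt nlocal,
                                     void *ctx, Vec rhs, Vec sol)
{
  Mat M, N;
  KSP ksp;

  PetscFunctionBeginUser;
  PetscCall(MatCreateShell(comm, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, ctx, &M));
  PetscCall(MatShellSetOperation(M, MATOP_MULT, (void (*)(void))UserMult));
  PetscCall(MatShellSetOperation(M, MATOP_MULT_TRANSPOSE, (void (*)(void))UserMultTranspose));
  /* N applies M^T M matrix-free: each MatMult(N,...) calls the two shell operations */
  PetscCall(MatCreateNormal(M, &N));
  PetscCall(KSPCreate(comm, &ksp));
  PetscCall(KSPSetOperators(ksp, N, N));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, rhs, sol));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&N));
  PetscCall(MatDestroy(&M));
  PetscFunctionReturn(0);
}

Here rhs would need the column layout of M (i.e. something like M^T b). If the
underlying problem is least squares, KSPLSQR is an alternative that only needs
MatMult and MatMultTranspose of M itself.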
The PetscLayout local sizes for the PETSc (a,b,c) vector would be (0, 0, 0, number of rows
of D, 0, 0, 0, number of rows of H, 0, 0, 0, number of rows of L).
The PetscLayout local sizes for the PETSc (w,x,y,z) vector would be (number of columns of
A, number of columns of B, number of columns of C, number of columns of D, 0, 0, 0, 0, 0, 0, 0, 0).
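If that is the layout, a minimal sketch of creating the shell matrix and the
matching vectors could look like the following (the rank-to-block assignment
follows "rank 0 has A, ..., rank 11 has L" below; the rowsD/rowsH/rowsL and
colsA..colsD arguments are placeholder block dimensions):

#include <petscmat.h>

PetscErrorCode CreateShellWithLayout(MPI_Comm comm, PetscMPIInt rank,
                                     PetscInt rowsD, PetscInt rowsH, PetscInt rowsL,
                                     PetscInt colsA, PetscInt colsB, PetscInt colsC, PetscInt colsD,
                                     void *ctx, Mat *M, Vec *in, Vec *out)
{
  PetscInt mlocal = 0, nlocal = 0;

  PetscFunctionBeginUser;
  /* output vector (a,b,c): only ranks 3, 7, 11 own entries */
  if (rank == 3)  mlocal = rowsD;
  if (rank == 7)  mlocal = rowsH;
  if (rank == 11) mlocal = rowsL;
  /* input vector (w,x,y,z): only ranks 0..3 own entries */
  if (rank == 0) nlocal = colsA;
  if (rank == 1) nlocal = colsB;
  if (rank == 2) nlocal = colsC;
  if (rank == 3) nlocal = colsD;

  PetscCall(MatCreateShell(comm, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, ctx, M));
  /* vectors whose PetscLayout matches the shell's column and row layouts */
  PetscCall(MatCreateVecs(*M, in, out));
  PetscFunctionReturn(0);
}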
With the example you have given, here is what I would like to do:
- 12 MPI ranks
- Each rank has one block (rank 0 has A, rank 1 has B, ..., rank 11 has
L) - to make the rest of this easier I'll refer to the rank containing
block A as "rank A", and so on
- rank A, rank B, rank C, an
( a )   ( A B C D ) ( w )
( b ) = ( E F G H ) ( x )
( c )   ( I J K L ) ( y )
                    ( z )
I have no idea what "The input vector is partitioned across each row, and the
output vector is partitioned across each column" means.
I have a custom implementation of a matrix-vector product that inherently
relies on a 2D processor partitioning of the matrix. That is, if the matrix
looks like:
A B C D
E F G H
I J K L
in block form, we use 12 processors, each having one block. The input
vector is partitioned across each row, and the output vector is partitioned
across each column.
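As a rough illustration of that layout, here is a minimal sketch of such a
2D-partitioned matvec in plain MPI. The 3x4 grid, the dense row-major block
storage, and the choice of which ranks hold the vector pieces are all
assumptions made for the sketch, not details of the actual code:

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: y = A x on a 3x4 process grid. Rank r owns the dense block A_ij,
   i = r/4 (block row), j = r%4 (block column), stored row-major with
   dimensions mi x nj. For this sketch, the input piece x_j is assumed to
   live on the block-row-0 rank of each column, and the output piece y_i
   is collected on the block-column-0 rank of each row. */
void BlockMatVec(MPI_Comm comm, const double *Aij, int mi, int nj,
                 const double *xj_in, double *yi_out)
{
  MPI_Comm rowcomm, colcomm;
  double  *xj, *ypart;
  int      rank, i, j, r, c;

  MPI_Comm_rank(comm, &rank);
  i = rank / 4;
  j = rank % 4;
  MPI_Comm_split(comm, i, j, &rowcomm);   /* ranks sharing block row i    */
  MPI_Comm_split(comm, j, i, &colcomm);   /* ranks sharing block column j */

  /* 1. broadcast x_j down block column j from the block-row-0 rank */
  xj = (double *)malloc((size_t)nj * sizeof(double));
  if (i == 0) memcpy(xj, xj_in, (size_t)nj * sizeof(double));
  MPI_Bcast(xj, nj, MPI_DOUBLE, 0, colcomm);

  /* 2. local block multiply: ypart = A_ij * x_j */
  ypart = (double *)calloc((size_t)mi, sizeof(double));
  for (r = 0; r < mi; r++)
    for (c = 0; c < nj; c++) ypart[r] += Aij[r * nj + c] * xj[c];

  /* 3. sum partial products across block row i; the result lands in yi_out
        on the block-column-0 rank (yi_out is ignored on the other ranks) */
  MPI_Reduce(ypart, yi_out, mi, MPI_DOUBLE, MPI_SUM, 0, rowcomm);

  free(xj); free(ypart);
  MPI_Comm_free(&rowcomm); MPI_Comm_free(&colcomm);
}

Wrapping something like this in a MatShell mostly comes down to deciding which
ranks own the Vec entries, which is what the PetscLayout discussion above is about.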
It's a run-time option to the PETSc (application) binary.
So you can either specify it via the command line - at run time - or add it to the env
variable "PETSC_OPTIONS" - or add it to the $HOME/.petscrc file.
Satish
On Tue, 19 Sep 2023, Thuc Bui wrote:
> Hi Barry,
>
> Thanks for getting back to me. Th
Hi Barry,
Thanks for getting back to me. The diagnostics were generated when tracing
under the VS debugger. To use the option -no_signal_handler, I believe I will
need to reconfigure PETSc with this additional option. I will try it now.
Thuc
From: Barry Smith [mailto:bsm...@petsc.d
BTW: Can you check if you are using threaded MKL?
We default to:
Libraries: -L/cygdrive/c/PROGRA~2/Intel/oneAPI/mkl/2022.1.0/lib/intel64
mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib
If using threaded MKL - try using the env variable "OMP_NUM_THREADS=1" and see if that
makes a difference.
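For example, set it in the shell used to launch the application:

  export OMP_NUM_THREADS=1   (bash/cygwin)
  set OMP_NUM_THREADS=1      (Windows cmd)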
Can you run in the Microsoft Visual Studio debugger? Use the additional PETSc
option -no_signal_handler
It won't show exactly where the SEGV happens, but it might narrow it down a bit.
For example, it may be in ddot() or ddot_().
Barry
> On Sep 19, 2023, at 2:04 AM, Thuc Bui wrote:
>
> Hi
On Tue, 19 Sep 2023, Matthew Knepley wrote:
> On Tue, Sep 19, 2023 at 7:04 AM Thuc Bui wrote:
>
> > Hi Barry,
> >
> >
> >
> > Visual Studio 2022 is the problem! The code linked to Petsc 3.18.6 built
> > with VS 2022 also crashes at the same place. The same errors are shown
> > below. I don’t rem