Thank you so much, Matt, for getting back to me so quickly. Yes, using another
PC fixes the issue.
Many thanks again for your help,
Thuc
From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Monday, July 24, 2023 6:50 PM
To: Thuc Bui
Cc: petsc-users
Subject: Re: [petsc-users] 3D
On Mon, Jul 24, 2023 at 8:16 PM Thuc Bui wrote:
> Dear PETSc Users/Developers,
>
> I have been successfully using PETSc on Windows without MPI for a while
> now. I have now attempted to implement PETSc with MPI on Windows 10. I have
> built a release version of PETSc 3.18.6 with MS MPI 10.1.2, Intel MKL 3.279
> (2020), and Visual Studio 2019 as a static library.
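A minimal sanity check for an MPI-enabled PETSc build like the one described (illustrative, not from the thread); with MS MPI it would be launched with something like mpiexec -n 2:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscMPIInt rank, size;

      /* PetscInitialize also initializes MPI when PETSc was built with it */
      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
      PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
      /* synchronized output keeps the per-rank lines from interleaving */
      PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD, "rank %d of %d\n", rank, size));
      PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
      PetscCall(PetscFinalize());
      return 0;
    }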
I interpreted Daniel's question as referring to small block sizes for
multiple dof per point/vertex/cell in the mesh, as Satish did, but please
clarify.
With that assumption: as you know, PETSc does not support explicit variable
block sizes, as ML does for instance, but the conventional wisdom has ...
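For the fixed-block-size case, a minimal sketch of how a multi-dof-per-point problem picks up a block size in PETSc, assuming a structured DMDA grid with 3 dof per vertex (the grid dimensions and dof count are illustrative, not from the thread):

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM  da;
      Mat A;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* 8x8 structured grid with 3 unknowns per vertex -> block size 3 */
      PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                             DMDA_STENCIL_STAR, 8, 8, PETSC_DECIDE, PETSC_DECIDE,
                             3 /* dof */, 1 /* stencil width */, NULL, NULL, &da));
      PetscCall(DMSetUp(da));
      PetscCall(DMSetMatType(da, MATBAIJ));
      PetscCall(DMCreateMatrix(da, &A)); /* comes back preallocated with bs = dof = 3 */
      PetscCall(MatDestroy(&A));
      PetscCall(DMDestroy(&da));
      PetscCall(PetscFinalize());
      return 0;
    }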
Perhaps you need

  make PETSC_DIR=~/asd/petsc-3.19.3 PETSC_ARCH=arch-mswin-c-opt all

> On Jul 24, 2023, at 1:11 PM, Константин via petsc-users wrote:
>
> Good evening. After configuring PETSc I had to run this command on Cygwin64:
>
> $ make PETSC_DIR=/home/itugr/asd/petsc-3.19.3 PETSC_ARCH=arch-mswin-c-opt all
>
> But I get this problem:
>
> makefile:26: /home/itugr/asd/petsc-3.19.3/lib/petsc/conf/rules.utils: No such
> file or directory
> make[1]: *** No rule ...
On Mon, Jul 24, 2023 at 6:34 AM Daniel Stone wrote:
> Hello PETSc Users/Developers,
>
> A colleague of mine is looking into implementing an adaptive implicit
> method (AIM) on top of PETSc in our simulator. This has led to some
> interesting questions about what can be done with blocked matrices,
> which I'm not able to answer myself - does anyone have any insight?
One way to boost performance (of MatVec etc.) in sparse matrices with
blocks is to avoid loading the row/column indices for the entries inside a
block (from memory to CPU registers) when possible; the performance boost
comes from the reduced memory-bandwidth requirements. So we have BAIJ ...
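As a concrete illustration of the blocked storage being described, a minimal sketch that assembles a MATBAIJ matrix with block size 3; each MatSetValuesBlocked call stores a full 3x3 block under a single block row/column index pair (the matrix sizes and values are illustrative):

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat         A;
      PetscInt    bs = 3, nb = 4, i, j;  /* 4x4 block grid -> 12x12 matrix */
      PetscScalar vals[9];               /* one bs*bs block, row-major */

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* one nonzero block per block row, on the diagonal part */
      PetscCall(MatCreateBAIJ(PETSC_COMM_WORLD, bs, PETSC_DECIDE, PETSC_DECIDE,
                              nb * bs, nb * bs, 1, NULL, 0, NULL, &A));
      for (j = 0; j < bs * bs; j++) vals[j] = 1.0;
      for (i = 0; i < nb; i++) {
        /* indices are per block: one (i,i) pair covers all 9 scalar entries */
        PetscCall(MatSetValuesBlocked(A, 1, &i, 1, &i, vals, INSERT_VALUES));
      }
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }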
On the hypre versioning - aha. For this project I locked the PETSc version
a little while ago (3.19.1), but I've been using a fresh clone of hypre, so
clearly it's too modern a version. Using the appropriate version of hypre
(2.28.0, according to hypre.py) might fix some things.
I may have other ...
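On the PETSc side, once configure has built against a matching hypre (e.g. via --download-hypre, which fetches the version pinned in hypre.py), selecting BoomerAMG looks roughly like this sketch; the same choice is available at run time with -pc_type hypre -pc_hypre_type boomeramg:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      KSP ksp;
      PC  pc;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
      PetscCall(KSPGetPC(ksp, &pc));
      PetscCall(PCSetType(pc, PCHYPRE));           /* requires a hypre-enabled PETSc build */
      PetscCall(PCHYPRESetType(pc, "boomeramg"));  /* pick the BoomerAMG solver from hypre */
      PetscCall(KSPSetFromOptions(ksp));           /* command-line options can still override */
      PetscCall(KSPDestroy(&ksp));
      PetscCall(PetscFinalize());
      return 0;
    }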