Dear PETSc developers,
I installed OpenMPI 3 first and then built PETSc with that MPI. Currently,
I'm facing a scalability issue. In detail, I tested using OpenMPI to
calculate the addition of two distributed arrays and I get good scalability.
The problem is when I calculate the
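A test like that can be reproduced in a few lines of PETSc C. A minimal sketch, assuming the operation under test is VecAXPY on MPI vectors (the vector length below is an arbitrary illustration, not the poster's actual code):

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec      x, y;
    PetscInt n = 10000000; /* global length, split across ranks via PETSC_DECIDE */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, n, &x));
    PetscCall(VecDuplicate(x, &y));
    PetscCall(VecSet(x, 1.0));
    PetscCall(VecSet(y, 2.0));
    PetscCall(VecAXPY(y, 1.0, x)); /* y <- y + x: the distributed addition being timed */
    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&y));
    PetscCall(PetscFinalize());
    return 0;
  }

Note that an operation like this is memory-bandwidth bound, so its speedup saturates once the cores on a socket share the available bandwidth.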
On Tue, Oct 10, 2023 at 9:28 AM Gong Yujie wrote:
> Dear PETSc developers,
>
> I installed OpenMPI 3 first and then built PETSc with that MPI.
> Currently, I'm facing a scalability issue. In detail, I tested using
> OpenMPI to calculate the addition of two distributed arrays and I get
Take a look at
https://petsc.org/release/faq/#what-kind-of-parallel-computers-or-clusters-are-needed-to-use-petsc-or-why-do-i-get-little-speedup
Check the binding that OpenMPI is using (by the way, there are much more
recent OpenMPI versions; I suggest using one of them). Run STREAMS with
MPI_BINDING="-map-by socket --bind-to core --report-bindings" make mpistreams
and send the result.
Also run
lscpu
numactl -H
if they are available on your machine, and send the results.
> On Oct 10, 2023, at 10:17 AM, Gong Yujie wrote:
>
> Dear Barry,
>
> I tried to use the
This looks like a false positive or there is some subtle bug here that we
are not seeing.
Could this be the first time parallel PtAP has been used (and reported) in
petsc4py?
Mark
On Tue, Oct 10, 2023 at 8:27 PM Matthew Knepley wrote:
> On Tue, Oct 10, 2023 at 5:34 PM Thanasis Boutsikakis <
>
Good Evening,
I am looking to implement a form of the Navier-Stokes equations with SUPG
stabilization and shock capturing using PETSc's FEM infrastructure. In this
implementation, I need access to the cell's shape function gradients and
natural-coordinate gradients for calculations within the point-wise
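For reference, a point-wise residual callback as registered with PetscDSSetResidual() has the shape below. A minimal sketch with placeholder physics (not SUPG); note that it receives the solution u and its gradient u_x at each quadrature point, but not the individual shape function gradients, which is the crux of the question:

  #include <petscds.h>

  /* Placeholder point-wise residual; the body is illustrative only. */
  static void f0_example(PetscInt dim, PetscInt Nf, PetscInt NfAux,
                         const PetscInt uOff[], const PetscInt uOff_x[],
                         const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
                         const PetscInt aOff[], const PetscInt aOff_x[],
                         const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
                         PetscReal t, const PetscReal x[], PetscInt numConstants,
                         const PetscScalar constants[], PetscScalar f0[])
  {
    PetscInt d;
    f0[0] = u_t ? u_t[0] : 0.0;                /* time-derivative term, if present */
    for (d = 0; d < dim; ++d) f0[0] += u_x[d]; /* placeholder advection-like term */
  }

  /* Registered with, e.g., PetscDSSetResidual(ds, 0, f0_example, NULL); */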
I disagree with what Mark and Matt are saying: your code is fine, the error
message is fine, petsc4py is fine (in this instance).
It’s not a typical use case of MatPtAP(), which is mostly designed for MatAIJ,
not MatDense.
On the one hand, in the MatDense case, indeed there will be a mismatch
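For context, a typical MatPtAP() call with AIJ matrices looks like the following. A minimal runnable sketch; the sizes, sparsity, and entries are illustrative assumptions:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      A, P, C;
    PetscInt i, rstart, rend, N = 100, M = 20;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* A: N x N AIJ, here simply the identity */
    PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N, 1, NULL, 0, NULL, &A));
    PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
    for (i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 1.0, INSERT_VALUES));
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    /* P: N x M with one entry per row, a crude aggregation pattern */
    PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, M, 1, NULL, 1, NULL, &P));
    for (i = rstart; i < rend; i++) PetscCall(MatSetValue(P, i, i % M, 1.0, INSERT_VALUES));
    PetscCall(MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY));
    PetscCall(MatPtAP(A, P, MAT_INITIAL_MATRIX, 2.0, &C)); /* C = P^T A P, M x M */
    PetscCall(MatDestroy(&A));
    PetscCall(MatDestroy(&P));
    PetscCall(MatDestroy(&C));
    PetscCall(PetscFinalize());
    return 0;
  }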
Hi,
Sorry for my late response. I tried your suggestions and I think I made some
progress, but I still have issues. Let me explain my latest mesh routine (a
minimal sketch in code follows the list):
- DMPlexCreateBoxMesh
- DMSetFromOptions
- PetscSectionCreate
- PetscSectionSetNumFields
- PetscSectionSetFieldDof
- PetscSectionSetDof
-
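A minimal C sketch of that sequence follows; the single cell-centered field with one DOF per cell is an assumption, since the original list is truncated. Because the DMPlexCreateBoxMesh() signature has changed across releases, the options-driven creation path is used here instead (equivalent with, e.g., -dm_plex_box_faces 4,4,4 on the command line):

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM           dm;
    PetscSection s;
    PetscInt     pStart, pEnd, cStart, cEnd, p;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
    PetscCall(DMSetType(dm, DMPLEX));
    PetscCall(DMSetFromOptions(dm));

    PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &s));
    PetscCall(PetscSectionSetNumFields(s, 1));
    PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
    PetscCall(PetscSectionSetChart(s, pStart, pEnd));
    PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* height 0 = cells */
    for (p = cStart; p < cEnd; p++) {
      PetscCall(PetscSectionSetFieldDof(s, p, 0, 1));
      PetscCall(PetscSectionSetDof(s, p, 1));
    }
    PetscCall(PetscSectionSetUp(s));
    PetscCall(DMSetLocalSection(dm, s));
    PetscCall(PetscSectionDestroy(&s));
    PetscCall(DMDestroy(&dm));
    PetscCall(PetscFinalize());
    return 0;
  }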
Hi all,
Revisiting my code and the proposed solution from Pierre, I realized this only
works in serial. The reason is that PETSc partitions those matrices only
row-wise, which leads to an error due to the mismatch between the number of
columns of A (non-partitioned) and the number of rows of
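To make the row-wise layout concrete: each rank of a parallel Mat owns a contiguous block of complete rows, and columns are never split across ranks. An illustrative fragment, with a hypothetical helper name:

  #include <petscmat.h>

  /* Hypothetical helper: report the contiguous row block owned by each
     rank, illustrating PETSc's row-wise partitioning of parallel Mats. */
  static PetscErrorCode PrintRowOwnership(Mat A)
  {
    PetscInt rstart, rend;

    PetscFunctionBeginUser;
    PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD, "owns rows [%" PetscInt_FMT ", %" PetscInt_FMT ")\n", rstart, rend));
    PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
    PetscFunctionReturn(PETSC_SUCCESS);
  }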
On Tue, Oct 10, 2023 at 5:34 PM Thanasis Boutsikakis <
thanasis.boutsika...@corintis.com> wrote:
> Hi all,
>
> Revisiting my code and the proposed solution from Pierre, I realized this
> only works in serial. The reason is that PETSc partitions those
> matrices only row-wise, which leads to
My initial plan was to write a new code using only PETSc. However, I don't see
how to do what I want within the point-wise residual function. Am I missing
something?
Yes, I would be interested in collaborating on ceed-fluids. I took a quick
look at the links you provided and it looks
On Tue, Oct 10, 2023 at 7:01 PM erdemguer wrote:
>
> Hi,
> Sorry for my late response. I tried your suggestions and I think I made
> some progress, but I still have issues. Let me explain my latest mesh
> routine:
>
>
>1. DMPlexCreateBoxMesh
>2. DMSetFromOptions
>3.
Do you want to write a new code using only PETSc or would you be up for
collaborating on ceed-fluids, which is a high-performance compressible SUPG
solver based on DMPlex with good GPU support? It uses the metric to compute
covariant length for stabilization. We have YZβ shock capturing, though