How exactly does the aux data work? What is typically available there? Is it
something the user can populate?
From: Matthew Knepley
Sent: Wednesday, October 11, 2023 8:07 PM
To: Brandon Denton
Cc: Jed Brown ; petsc-users
Subject: Re: [petsc-users] FEM
On Wed, Oct 11, 2023 at 4:15 PM Brandon Denton wrote:
> By natural coordinates, I am referring to the reference element
> coordinates. Usually these are represented as (xi, eta, zeta) in the
> literature.
>
> Yes. I would like to have the Jacobian and the derivatives of the map
> available
Thank you! That's exactly what I need.
--- Original Message ---
On Wednesday, October 11th, 2023 at 4:17 PM, Matthew Knepley
wrote:
> On Wed, Oct 11, 2023 at 4:42 AM erdemguer wrote:
>
>> Hi again,
>
> I see the problem. FV
By natural coordinates, I am referring to the reference element coordinates.
Usually these are represented as (xi, eta, zeta) in the literature.
Yes. I would like to have the Jacobian and the derivatives of the map available
within PetscDSSetResidual() f0 and f1 functions. I believe
On Wed, Oct 11, 2023 at 2:09 PM Brandon Denton wrote:
> Thank you for the discussion.
>
> Are we agreed then that the derivatives of the natural coordinates are
> required for the described approach? If so, is this something PETSc can
> currently do within the point-wise residual functions?
>
I
Thank you for the discussion.
Are we agreed then that the derivatives of the natural coordinates are required
for the described approach? If so, is this something PETSc can currently do
within the point-wise residual functions?
Matt - Thank you for the command line option for the 2nd
Matthew Knepley writes:
> On Wed, Oct 11, 2023 at 1:03 PM Jed Brown wrote:
>
>> I don't see an attachment, but his thesis used conservative variables and
>> defined an effective length scale in a way that seemed to assume constant
>> shape function gradients. I'm not aware of systematic
On Wed, Oct 11, 2023 at 1:03 PM Jed Brown wrote:
> I don't see an attachment, but his thesis used conservative variables and
> defined an effective length scale in a way that seemed to assume constant
> shape function gradients. I'm not aware of systematic literature comparing
> the covariant
I don't see an attachment, but his thesis used conservative variables and
defined an effective length scale in a way that seemed to assume constant shape
function gradients. I'm not aware of systematic literature comparing the
covariant and contravariant length measures on anisotropic meshes,
I'm on it.
Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory
From: Junchao Zhang
Sent: Wednesday, October 11, 2023 10:14
To:
Jose,
Thanks very much for your help with this. Greatly appreciated. I will look at
the MR. Please let me know if you do get the Fortran example working.
Thanks, and best regards,
Kenneth
From: Jose E. Roman
Date: Wednesday, October 11, 2023 at 2:41 AM
To: Kenneth C Hall
Cc:
The same docs should be available in
https://web.cels.anl.gov/projects/petsc/download/release-snapshots/petsc-with-docs-3.20.0.tar.gz
Satish
On Wed, 11 Oct 2023, Richter, Roland wrote:
> Hei,
> Thank you very much for the answer! I looked it up, but petsc.org seems to
> be a bit unstable here,
On Wed, Oct 11, 2023 at 4:42 AM erdemguer wrote:
> Hi again,
>
I see the problem. FV ghosts mean extra boundary cells added in FV methods
using DMPlexCreateGhostCells() in order to impose boundary conditions. They
are not the "ghost" cells for overlapping parallel decompositions. I have
changed
I was thinking about trying to implement Ben Kirk's approach to Navier-Stokes
(see attached paper; Section 5). His approach uses these quantities to align
the orientation of the unstructured element/cell with the fluid velocity to
apply the stabilization/upwinding and to detect shocks.
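Kirk's exact construction is in the attached paper; as a rough, self-contained illustration of what a velocity-aligned element length looks like (using the common Tezduyar-style directional measure, which is an assumption here, not necessarily Kirk's formula), for a linear triangle the shape-function gradients are constant and the directional length along the flow is h(u) = 2 / sum_a |s . grad(N_a)| with s = u/|u|:

```python
# Illustrative sketch only (Tezduyar-style directional length, assumed here,
# not necessarily Kirk's exact measure). For a P1 triangle the basis
# gradients are constant per element, so h(u) is cheap to evaluate.

def shape_gradients(p0, p1, p2):
    """Constant gradients of the P1 basis on a triangle; twoA is twice the signed area."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    twoA = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return [((y1 - y2) / twoA, (x2 - x1) / twoA),
            ((y2 - y0) / twoA, (x0 - x2) / twoA),
            ((y0 - y1) / twoA, (x1 - x0) / twoA)]

def directional_length(grads, u):
    """h(u) = 2 / sum_a |s . grad N_a|, with s the unit velocity direction."""
    norm = (u[0] ** 2 + u[1] ** 2) ** 0.5
    s = (u[0] / norm, u[1] / norm)
    denom = sum(abs(s[0] * gx + s[1] * gy) for gx, gy in grads)
    return 2.0 / denom

# Unit right triangle, flow aligned with the x-axis.
grads = shape_gradients((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
h = directional_length(grads, (1.0, 0.0))
```

For the unit right triangle this gives h = 1 along the x-axis; on a stretched element the value changes with the flow direction, which is the property the stabilization exploits.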
If you
On Wed, Oct 11, 2023 at 4:22 AM Richter, Roland
wrote:
> Hei,
>
> following my last question I managed to configure PETSc with Intel MPI and
> Intel OneAPI using the following configure-line:
>
>
>
> *./configure --prefix=/media/storage/local_opt/petsc
> --with-scalar-type=complex
On Tue, Oct 10, 2023 at 9:34 PM Brandon Denton via petsc-users
<petsc-users@mcs.anl.gov> wrote:
> Good Evening,
>
> I am looking to implement a form of Navier-Stokes with SUPG Stabilization
> and shock capturing using PETSc's FEM infrastructure. In this
> implementation, I need access to the
Hi again,
Here is my code:
#include <petscdmplex.h>

static char help[] = "dmplex";

int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, help));
  DM           dm, dm_dist;
  PetscSection section;
  PetscInt     cStart, cEndInterior, cEnd, rank;
  PetscInt     nc[3]    = {3, 3, 3};
  PetscReal    upper[3] = {1, 1, 1};
> On 11 Oct 2023, at 9:13 AM, Thanasis Boutsikakis
> wrote:
>
> Very good catch Pierre, thanks a lot!
>
> This made everything work: the two-step process and the ptap(). I mistakenly
> thought that I should not let the local number of columns be None, since
> the matrix is only
Hei,
following my last question I managed to configure PETSc with Intel MPI and
Intel OneAPI using the following configure-line:
./configure --prefix=/media/storage/local_opt/petsc
--with-scalar-type=complex --with-cc=mpiicc --with-cxx=mpiicpc
--CPPFLAGS="-fPIC -march=native -mavx2"
Very good catch Pierre, thanks a lot!
This made everything work: the two-step process and the ptap(). I mistakenly
thought that I should not let the local number of columns be None, since the
matrix is only partitioned row-wise. Could you please explain what happened
because of my setting
That’s because:
size = ((None, global_rows), (global_cols, global_cols))
should be:
size = ((None, global_rows), (None, global_cols))
Then, it will work.
$ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py &&
echo $?
0
Thanks,
Pierre
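Pierre's fix can be illustrated without petsc4py at all. If I read PETSc's default ownership split correctly (PetscSplitOwnership: n = N/P plus one extra item on the first N mod P ranks), leaving the local size as None lets PETSc choose local column counts that are consistent with the row layout of the other operand, whereas forcing the local size to the global size on every rank cannot sum to the global size across ranks. A small sketch of that default split (the helper name is mine):

```python
# Sketch of PETSc's default ownership split (PetscSplitOwnership) for
# size=((None, M), (None, N)): each rank gets N//P items, and the first
# N % P ranks get one extra. The helper name `default_split` is
# illustrative, not a PETSc API.

def default_split(N, nprocs):
    """Return the per-rank local sizes PETSc would pick for N items on nprocs ranks."""
    return [N // nprocs + (1 if rank < N % nprocs else 0)
            for rank in range(nprocs)]

sizes = default_split(10, 4)
```

Note that the local sizes always sum to the global size, which is exactly the invariant that `(global_cols, global_cols)` on every rank violates.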
> On 11 Oct 2023, at 8:58
Furthermore, I tried to perform the Galerkin projection in two steps by
substituting
> A_prime = A.ptap(Phi)
With
AL = Phi.transposeMatMult(A)
A_prime = AL.matMult(Phi)
And running this with 3 procs results in the false creation of a matrix AL
that has 3 times bigger dimensions than it
Pierre, I see your point, but my experiment shows that it does not even run due
to size mismatch, so I don’t see how being sparse would change things here.
There must be some kind of problem with the parallel ptap(), because it does
run sequentially. In order to test that, I changed the flags
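Sequentially the two routes must agree: ptap(Phi) computes Phi^T A Phi, and the two-step route (Phi^T A) Phi is the same product by associativity, so any parallel discrepancy points at the layouts, not the math. A dense single-process sketch with plain nested lists (no petsc4py needed):

```python
# Single-process sketch of the Galerkin projection under discussion:
# ptap(Phi) computes Phi^T A Phi; the two-step route Phi^T A followed by
# a multiply with Phi must produce the same matrix when layouts match.

def matmul(X, Y):
    """Naive dense matrix product for nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
Phi = [[1.0, 0.0],
       [1.0, 1.0],
       [0.0, 1.0]]

one_shot = matmul(transpose(Phi), matmul(A, Phi))  # Phi^T (A Phi), what ptap gives
two_step = matmul(matmul(transpose(Phi), A), Phi)  # (Phi^T A) Phi, the two-step route
```

Both routes produce the same 2x2 projected matrix, so a parallel AL with inflated dimensions is a layout problem, as Pierre's size fix suggests.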
Hei,
Thank you very much for the answer! I looked it up, but petsc.org seems to
be a bit unstable here; quite often I can't reach petsc.org.
Regards,
Roland Richter
--- Original Message ---
From: Satish Balay
Sent: Monday, October 9, 2023 17:29
To: Barry Smith
Cc: Richter, Roland ;
Kenneth,
The MatDuplicate issue should be fixed in the following MR
https://gitlab.com/petsc/petsc/-/merge_requests/6912
Note that the NLEIGS solver internally uses MatDuplicate for creating multiple
copies of the shell matrix, each one with its own value of lambda. Hence your
implementation