Thanks, Matt.
Following your explanations, my understanding is this: "If we use multiple
MPI processes, the global numbering of the vertices (global domain) will
be different from that with only one process, right?" If this is the
case, will it be easy for us to check the assembled matrix?
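For context, by "check" I mean something like dumping the mapping and the
assembled matrix so the serial and parallel runs can be compared by hand.
A minimal sketch of what I have in mind (A and ltog are only placeholders
for my assembled matrix and my local-to-global mapping):

  /* Hypothetical check: view the local-to-global mapping and the assembled
     matrix; run with 1 process and with several, then compare the output. */
  PetscCall(ISLocalToGlobalMappingView(ltog, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));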
On Thu, May 18, 2023 at 8:47 PM neil liu wrote:
Thanks, Matt. I am using the following steps to build a local-to-global
mapping.
Step 1) PetscSectionCreate();
        PetscSectionSetNumFields();
        PetscSectionSetChart();
        // Set dof for each node
        PetscSectionSetUp(s);
Step 2) PetscCall(DMGetLocalToGlobalMapping(dm, ));
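In case it is useful, here is a minimal, self-contained sketch of what I mean
by Steps 1) and 2), assuming a scalar field with one dof per vertex (the
helper name and the dof layout are just placeholders for illustration, not my
actual setup):

  #include <petscdmplex.h>

  /* Build a PetscSection with one dof per vertex, attach it to the DM, and
     ask the DM for the resulting local-to-global mapping. */
  static PetscErrorCode BuildVertexLToG(DM dm, ISLocalToGlobalMapping *ltog)
  {
    PetscSection s;
    PetscInt     pStart, pEnd, vStart, vEnd, p;

    PetscFunctionBeginUser;
    PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s));
    PetscCall(PetscSectionSetNumFields(s, 1));
    PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
    PetscCall(PetscSectionSetChart(s, pStart, pEnd));
    PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd)); /* vertices */
    for (p = vStart; p < vEnd; ++p) {
      PetscCall(PetscSectionSetDof(s, p, 1));                /* one dof per vertex */
      PetscCall(PetscSectionSetFieldDof(s, p, 0, 1));
    }
    PetscCall(PetscSectionSetUp(s));
    PetscCall(DMSetLocalSection(dm, s));   /* the DM keeps its own reference */
    PetscCall(PetscSectionDestroy(&s));
    /* Step 2: the mapping is derived from the DM's local and global sections */
    PetscCall(DMGetLocalToGlobalMapping(dm, ltog));
    PetscFunctionReturn(PETSC_SUCCESS);
  }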
On Wed, May 17, 2023 at 6:58 PM neil liu wrote:
Dear PETSc developers,
I am writing my own code to calculate the FEM matrix. The following is my
general framework:
DMPlexCreateGmsh();
MPI_Comm_rank(PETSC_COMM_WORLD, );
DMPlexDistribute(.., .., );
dm = dmDist;
// This creates separate DMs for the different processors (with reordering).
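To make the framework concrete, here is a minimal sketch of that sequence as I
understand it (the mesh filename and the zero overlap are placeholders, and I
used the FromFile variant of the Gmsh reader here only for brevity):

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM          dm, dmDist = NULL;
    PetscMPIInt rank;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* "mesh.msh" is a placeholder; interpolate so edges/faces are created */
    PetscCall(DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    /* Distribute with 0 overlap; each rank then holds its own renumbered piece */
    PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
    if (dmDist) {            /* dmDist is NULL on a single process */
      PetscCall(DMDestroy(&dm));
      dm = dmDist;
    }
    /* ... build the PetscSection and assemble the FEM matrix here ... */
    PetscCall(DMDestroy(&dm));
    PetscCall(PetscFinalize());
    return 0;
  }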