Re: [petsc-users] Using DMPlexDistribute to do parallel FEM code.

2023-05-19 Thread neil liu
Thanks, Matt. Following your explanations, my understanding is this: "If we use multiple MPI processes, the global numbering of the vertices (global domain) will be different from that with only one process, right?" If this is the case, will it be easy for us to check the assembled matrix
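
One way to sanity-check the assembled matrix across different process counts is to compare quantities that do not depend on the parallel ordering. A minimal sketch, assuming the assembled matrix is called A (the name and the Frobenius-norm check are illustrative, not from the thread):

    /* View the matrix (printed in the parallel numbering) and compare an
       ordering-independent quantity between runs with 1 and N processes. */
    PetscReal nrm;
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));   /* entries follow the distributed numbering */
    PetscCall(MatNorm(A, NORM_FROBENIUS, &nrm));        /* norm does not depend on the numbering */
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "||A||_F = %g\n", (double)nrm));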

Re: [petsc-users] Using DMPlexDistribute to do parallel FEM code.

2023-05-18 Thread Matthew Knepley
On Thu, May 18, 2023 at 8:47 PM neil liu wrote: > Thanks, Matt. I am using the following steps to build a local-to-global > mapping. > Step 1) PetscSectionCreate(); > PetscSectionSetNumFields(); > PetscSectionSetChart(); > // Set dof for each node > PetscSectionSetUp(s); > Step
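
A minimal sketch of the section setup quoted above, assuming one field with one dof per vertex (the field/dof choices and variable names are illustrative, not from the thread):

    PetscSection s;
    PetscInt     pStart, pEnd, vStart, vEnd, p;

    PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &s));
    PetscCall(PetscSectionSetNumFields(s, 1));
    PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
    PetscCall(PetscSectionSetChart(s, pStart, pEnd));
    PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd));  /* depth 0 = vertices */
    for (p = vStart; p < vEnd; ++p) {
      PetscCall(PetscSectionSetDof(s, p, 1));                 /* one dof per vertex (assumed) */
      PetscCall(PetscSectionSetFieldDof(s, p, 0, 1));
    }
    PetscCall(PetscSectionSetUp(s));
    PetscCall(DMSetLocalSection(dm, s));   /* lets the DM build the global section and mapping */
    PetscCall(PetscSectionDestroy(&s));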

Re: [petsc-users] Using DMPlexDistribute to do parallel FEM code.

2023-05-18 Thread neil liu
Thanks, Matt. I am using the following steps to build a local-to-global mapping. Step 1) PetscSectionCreate(); PetscSectionSetNumFields(); PetscSectionSetChart(); // Set dof for each node PetscSectionSetUp(s); Step 2) PetscCall(DMGetLocalToGlobalMapping(dm, ));
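
A minimal sketch of Step 2, retrieving the local-to-global mapping built from the section above; the viewer call and the commented-out MatSetValuesLocal line (with placeholder arguments nr, local_rows, nc, local_cols, vals) are illustrative only:

    ISLocalToGlobalMapping ltog;
    Mat                    A;

    PetscCall(DMGetLocalToGlobalMapping(dm, &ltog));   /* owned by the DM; do not destroy */
    PetscCall(ISLocalToGlobalMappingView(ltog, PETSC_VIEWER_STDOUT_WORLD));

    /* A matrix created from the DM already carries this mapping, so element
       contributions can be inserted with local (per-process) indices: */
    PetscCall(DMCreateMatrix(dm, &A));
    /* PetscCall(MatSetValuesLocal(A, nr, local_rows, nc, local_cols, vals, ADD_VALUES)); */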

Re: [petsc-users] Using DMPlexDistribute to do parallel FEM code.

2023-05-17 Thread Matthew Knepley
On Wed, May 17, 2023 at 6:58 PM neil liu wrote: > Dear PETSc developers, > > I am writing my own code to calculate the FEM matrix. The following is my > general framework: > > DMPlexCreateGmsh(); > MPI_Comm_rank(PETSC_COMM_WORLD, ); > DMPlexDistribute(.., .., ); > > dm = dmDist; > // This can

[petsc-users] Using DMPlexDistribute to do parallel FEM code.

2023-05-17 Thread neil liu
Dear PETSc developers, I am writing my own code to calculate the FEM matrix. The following is my general framework: DMPlexCreateGmsh(); MPI_Comm_rank(PETSC_COMM_WORLD, ); DMPlexDistribute(.., .., ); dm = dmDist; // This can create a separate DM for each process (with reordering).
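
A minimal sketch of this framework, assuming the mesh is read from a Gmsh file; the file name "mesh.msh", the interpolate flag, and overlap = 0 are assumptions, not from the thread:

    DM          dm, dmDist = NULL;
    PetscMPIInt rank;

    PetscCall(DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));   /* overlap = 0, distribution SF not kept */
    if (dmDist) {                                        /* NULL when running on a single process */
      PetscCall(DMDestroy(&dm));
      dm = dmDist;
    }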