We would need more details about exactly what goes wrong to suggest a
specific fix. My guess is that the parallel layouts of the velocity and
temperature vectors differ slightly, since the DMDA is created with nelx+1
(etc.) grid points while the DMStag is created with nelx elements, so the two
DMs may split the domain across ranks slightly differently. You could try a
very small problem on, say, 2 or 4 ranks, put known values into the vectors,
and check the ghost-point update locations and the exact locations of entries
in the local and global vectors to make sure everything is where you expect it
to be.
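
  For example, here is a minimal sketch of that kind of check for the thermal
DMDA (assuming a recent PETSc with PetscCall() and a single temperature DOF
per node, as in your code below; the analogous check for the DMStag would use
DMStagVecGetArray() and DMStagGetLocationSlot()):

/* Fill the global temperature vector with a value that encodes the global
   (i,j,k) index, scatter to the local (ghosted) vector, and print what each
   rank sees, so the parallel layout can be checked by hand. */
Vec            Tg, Tl;
PetscScalar ***t;
PetscInt       i, j, k, xs, ys, zs, xm, ym, zm, gxs, gys, gzs, gxm, gym, gzm;
PetscMPIInt    rank;

PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
PetscCall(DMCreateGlobalVector(dmthermal, &Tg));
PetscCall(DMCreateLocalVector(dmthermal, &Tl));

PetscCall(DMDAGetCorners(dmthermal, &xs, &ys, &zs, &xm, &ym, &zm));
PetscCall(DMDAVecGetArray(dmthermal, Tg, &t));
for (k = zs; k < zs + zm; k++)
  for (j = ys; j < ys + ym; j++)
    for (i = xs; i < xs + xm; i++)
      t[k][j][i] = 10000 * k + 100 * j + i;  /* encode the global index */
PetscCall(DMDAVecRestoreArray(dmthermal, Tg, &t));

PetscCall(DMGlobalToLocalBegin(dmthermal, Tg, INSERT_VALUES, Tl));
PetscCall(DMGlobalToLocalEnd(dmthermal, Tg, INSERT_VALUES, Tl));

/* Print the ghosted local array; the ghost entries should hold the encoded
   indices of the neighboring ranks' points. */
PetscCall(DMDAGetGhostCorners(dmthermal, &gxs, &gys, &gzs, &gxm, &gym, &gzm));
PetscCall(DMDAVecGetArray(dmthermal, Tl, &t));
for (k = gzs; k < gzs + gzm; k++)
  for (j = gys; j < gys + gym; j++)
    for (i = gxs; i < gxs + gxm; i++)
      PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
                "[%d] (%" PetscInt_FMT ",%" PetscInt_FMT ",%" PetscInt_FMT ") = %g\n",
                rank, i, j, k, (double)PetscRealPart(t[k][j][i])));
PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
PetscCall(DMDAVecRestoreArray(dmthermal, Tl, &t));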

  There is also the possibility of using a DMStag for the temperature instead
of a DMDA, since DMDA provides essentially a subset of the functionality of
DMStag; that would give a more consistent layout of the unknowns in the two
vectors.
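
  A minimal sketch of that alternative (assuming dmflow is the DMStag already
created in your code; DMStagCreateCompatibleDMStag builds a second DMStag with
the same grid and parallel decomposition but a different number of DOFs, here a
single temperature DOF on the vertices):

DM dmtemp;
PetscCall(DMStagCreateCompatibleDMStag(dmflow, 1, 0, 0, 0, &dmtemp)); /* dof0 = 1 (vertices) */
PetscCall(DMSetUp(dmtemp));

You could then address the temperature with the same (i,j,k) vertex stencil
locations you already use for the velocity, with component 0, and the two
fields are guaranteed to be decomposed identically across ranks.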

  Finally, one could use a single DMStag holding all the unknowns but treat
subvectors of the unknowns differently in your discretization and solve
process: assign the velocity values (I guess) to the cell faces and the
temperature to the cell vertices. This gives a consistent parallel
decomposition of the values, but you will have to manage the fact that the
unknowns are interlaced into a single vector, so the solver portions of your
code may need to "pull out" the appropriate subvectors; a sketch follows.
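
  A sketch of this single-DMStag option (assumptions: one normal velocity
component per face, MAC-style, and one temperature DOF per vertex; the
variables bx, by, bz, nelx, nely, nelz, md, nd, pd, stencilWidth, Lx, Ly, Lz
are those from the code in your message, and DMStagVecSplitToDMDA is, I
believe, one way to copy a single field out of a DMStag vector into a DMDA
vector):

DM dmall;
PetscCall(DMStagCreate3d(PETSC_COMM_WORLD, bx, by, bz, nelx, nely, nelz,
                         md, nd, pd,
                         1,  /* dof0: temperature on the vertices  */
                         0,  /* dof1: nothing on the edges         */
                         1,  /* dof2: one normal velocity per face */
                         0,  /* dof3: nothing on the elements      */
                         DMSTAG_STENCIL_BOX, stencilWidth, Lx, Ly, Lz, &dmall));
PetscCall(DMSetUp(dmall));

/* One global vector now holds both fields, interlaced. To hand just the
   temperature (component 0 at the 3D vertex location DMSTAG_BACK_DOWN_LEFT)
   to a separate solver, it can be copied out into a DMDA vector: */
Vec xall, Tvec;
DM  daT;
PetscCall(DMCreateGlobalVector(dmall, &xall));
PetscCall(DMStagVecSplitToDMDA(dmall, xall, DMSTAG_BACK_DOWN_LEFT, 0, &daT, &Tvec));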

  Barry


> On Apr 22, 2022, at 9:45 AM, Carl-Johan Thore <carl-johan.th...@liu.se> wrote:
> 
> Hi!
>  
> I'm working on a convection-diffusion heat transfer problem. The temperature
> is discretized using standard Q1 elements and a DMDA. The flow is modelled
> using a stabilized Q1-Q0 method, for which DMStag seemed like a good choice.
> The codes for the temperature and flow work fine separately (both in serial
> and parallel), but when combined and running in parallel, a problem sometimes
> arises in the assembly of the thermal system matrix. Here’s a rough sketch of
> the combined code:
>  
> // Create dmda for thermal problem and dmstag for flow problem
> DMDACreate3d(PETSC_COMM_WORLD, bx, by, bz, DMDA_STENCIL_BOX,
>              nelx+1, nely+1, nelz+1, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
>              1, stencilWidth, 0, 0, 0, &dmthermal);
> …
> // A bit of code to adjust Lx, Ly, Lz so that dmthermal and dmflow are
> // compatible in the sense of having the same local elements
> …
> DMStagCreate3d(PETSC_COMM_WORLD, bx, by, bz, nelx, nely, nelz, md, nd, pd,
>                3, 0, 0, 0, DMSTAG_STENCIL_BOX, stencilWidth, Lx, Ly, Lz, &dmflow);
> PetscInt      edofT[8];   // 8-noded element with 1 temperature DOF per node
> DMStagStencil edofF[24];  // 8 nodes with 3 velocity DOFs each
>  
> // Assemble thermal system matrix K
> for (PetscInt e=0 ...)    // Loop over local elements
> {
>     // Populate edofF, edofT
>     // Get element velocities in ue from the local velocity vector uloc
>     DMStagVecGetValuesStencil(dmflow, uloc, 24, edofF, ue);
>     ...
>     Ke = Ke_diffusion + Ke_convection(ue)
>     ...
>     MatSetValuesLocal(K, 8, edofT, 8, edofT, Ke, ADD_VALUES);
> }
>  
> This always works fine in serial, but depending on the mesh and the number of
> ranks, we don't always get the correct values in the element velocity vector
> ue. I suspect this has something to do with the ordering of the elements
> and/or the DOFs, because the elements in the global velocity vector are
> always the same but their order may change (judging from the output of
> VecView at least).
>  
> Is it possible to ensure compatibility between the two DMs, or to find some
> kind of mapping between them, so that something along the lines of the code
> above always works?
>  
> Kind regards,
> Carl-Johan
