Hi Wolfgang,

Thanks for your feedback.  I have this two-triangulation problem working in 
serial, and I have other jobs that work with MPI.  When I put them 
together, I end up with something like the code below.  It seems like there 
will be some issues, though.


--- Reading in two triangulations

  std::ifstream in("input/solid.inp");
  GridIn<3> grid_in_sld;
  grid_in_sld.attach_triangulation(triangulation_sld);
  grid_in_sld.read_abaqus(in);

  std::ifstream in2("input/flat3x3.inp");
  GridIn<2,3> grid_in_sh;
  grid_in_sh.attach_triangulation(triangulation_sh);
  grid_in_sh.read_abaqus(in2);


--- Partitioning the two meshes

  GridTools::partition_triangulation_zorder(
    Utilities::MPI::n_mpi_processes(mpi_communicator),
    triangulation_sld); // maybe divide n_mpi by 2
  GridTools::partition_multigrid_levels(triangulation_sld);
  const auto construction_data_sld =
    TriangulationDescription::Utilities::create_description_from_triangulation(
      triangulation_sld,
      mpi_communicator,
      TriangulationDescription::Settings::construct_multigrid_hierarchy);
  tria_pft_sld.create_triangulation(construction_data_sld);

  GridTools::partition_triangulation_zorder(
    Utilities::MPI::n_mpi_processes(mpi_communicator),
    triangulation_sh); // maybe divide n_mpi by 2
  GridTools::partition_multigrid_levels(triangulation_sh);
  const auto construction_data_sh =
    TriangulationDescription::Utilities::create_description_from_triangulation(
      triangulation_sh,
      mpi_communicator,
      TriangulationDescription::Settings::construct_multigrid_hierarchy);
  tria_pft_sh.create_triangulation(construction_data_sh);


If I do this, then it seems that I'd end up with two conflicting 
partitions.  If I have n_mpi_processes = 8, for example, this approach would 
assign ~1/8th of the solid mesh to process 1 AND ~1/8th of the shell mesh 
to process 1.  This seems like a problem.

I was thinking that maybe I could set it up such that the larger solid mesh 
could be split into 6 partitions over processes 0-5.  Then the smaller 
shell mesh could be split into 2 partitions using processes 6 and 7.  
However, I didn't see any way to have the partitioning start at a non-zero 
process number.
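
One way I thought this might work is with a sub-communicator, since MPI's 
MPI_Comm_split can carve the 8 ranks into a group of 6 and a group of 2, and 
the partitioning would then number partitions from 0 *within* each group.  
Below is an untested sketch of what I mean; n_solid_ranks and 
sub_communicator are just placeholder names I chose by hand, and I haven't 
verified this against a running deal.II program:

```cpp
  // Assumption: 8 ranks total; give 6 to the solid mesh, 2 to the shell mesh.
  const unsigned int my_rank =
    Utilities::MPI::this_mpi_process(mpi_communicator);
  const unsigned int n_solid_ranks = 6; // hand-chosen split (placeholder)

  // color 0 = solid group (ranks 0-5), color 1 = shell group (ranks 6-7)
  const int color = (my_rank < n_solid_ranks) ? 0 : 1;
  MPI_Comm sub_communicator;
  MPI_Comm_split(mpi_communicator, color, my_rank, &sub_communicator);

  // Each group partitions only "its" triangulation over its sub-communicator:
  if (color == 0)
    {
      GridTools::partition_triangulation_zorder(
        Utilities::MPI::n_mpi_processes(sub_communicator), triangulation_sld);
      // ... then create_description_from_triangulation(triangulation_sld,
      //                                                sub_communicator, ...);
    }
  else
    {
      GridTools::partition_triangulation_zorder(
        Utilities::MPI::n_mpi_processes(sub_communicator), triangulation_sh);
      // ... then create_description_from_triangulation(triangulation_sh,
      //                                                sub_communicator, ...);
    }
```

Though I suspect this makes the coupling problem worse, not better: once the 
two meshes live on disjoint rank groups, every entry of the off-diagonal 
blocks involves data from both groups.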


As you pointed out, I will have an issue with the off-diagonal matrices and 
the coupling between the two triangulations.  Is it possible for me to 
build up all of the sparsity patterns and matrices *and then* perform the 
partitioning?


Thanks,
Alex
On Tuesday, January 16, 2024 at 10:04:52 PM UTC-5 Wolfgang Bangerth wrote:

> On 1/16/24 07:03, Alex Quinlan wrote:
> >
> > I've come up against another issue in my mixed-dimensional mesh.  I am now
> > trying to implement MPI, which requires partitioning the triangulation.
> >
> > In my case, I have two triangulations: one for the solid <3,3> elements and
> > one for the shell <2,3> elements.  These two triangulations are coupled at
> > multiple points.  I combine them into a block matrix with the form:
> >
> > [ A_solid   ,    B_coupling;
> >   C_coupling,    D_shell      ]
> >
> > I may also need to expand this in the future to a matrix with 3x3 or nxn blocks.
> >
> > Do you have any thoughts on how I can partition this problem?  The suite of
> > GridTools::partition_triangulation functions seems like it will not work for
> > my two-triangulation approach.
>
> Can you elaborate what the problem you are facing is? At least for the
> diagonal blocks of the matrix, you would simply use a matrix class such as
> PETScWrappers::MPI::SparseMatrix that knows how to partition itself onto
> different processes. Similarly, for your vectors, you'd use the appropriate
> block vector class in which each vector block is partitioned. You'd have to
> initialize these individually, but I suspect you're already doing that anyway.
> In each case, blocking (into the 2x2 structure you show above) and
> partitioning (among MPI processes) are orthogonal issues.
>
> The biggest issue I see is how you can build the off-diagonal blocks above
> because to do so, you need to know information about both meshes on the same
> process, and generally the partition of one triangulation owned by a specific
> process does not geometrically overlap with the partition of the other
> triangulation owned by the same process.
>
> Best
> W.
>
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>
