Hello everyone,

(The question is formulated at the bottom; for background, please read the following.)

I am working on an ALE-FSI code in deal.II, available here:
http://www.thomaswick.org/gallery_engl.html
The nice feature of this specific method is that, by pulling the fluid back
into the Lagrangian/undeformed/reference configuration
(see, e.g., the publications by Wick, Richter, Jodlbauer, Langer & Yang, etc.),
one can use globally defined function spaces for the (vector-valued) velocity
and displacement fields and for the scalar pressure.
Globally here means on both the fluid and the solid domain, i.e., we can use
the following:

// member declaration:
FESystem<dim> fe;

// and its initialization in the constructor's initializer list:
fe (FE_Q<dim>(pressDegree+1), dim,   // velocities
    FE_Q<dim>(pressDegree+1), dim,   // displacements
    FE_DGP<dim>(pressDegree), 1)     // pressure

Now, since we want to solve this system efficiently using separate
preconditioners for each block, we use an LDU decomposition (Jodlbauer et al.,
2019, IJNME). Therefore, we sort the degrees of freedom via cell->material_id(),
which indicates the fluid or solid subdomain. So we go from
[vel ; def ; press] to
[vel_fluid ; def_fluid ; press_fluid ; vel_solid ; def_solid ; press_solid].
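
Just to make the idea concrete: conceptually, the material-based sorting amounts
to something like the following sketch (the name sort_solid_dofs_back is made up
here for illustration; my actual helper renumber_dofs_material_id in the attached
setup.txt differs in the details, and DoFs on the fluid-solid interface as well
as the parallel case need extra care):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <vector>

using namespace dealii;

// Conceptual sketch only: mark every DoF that lives on a solid cell
// (material_id != 0) and push those DoFs behind the fluid DoFs.
template <int dim>
void sort_solid_dofs_back (DoFHandler<dim> &dof_handler)
{
  std::vector<bool> sort_back (dof_handler.n_dofs(), false);
  std::vector<types::global_dof_index>
    local_dof_indices (dof_handler.get_fe().dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() && cell->material_id() != 0)   // 0 == fluid
      {
        cell->get_dof_indices (local_dof_indices);
        for (const auto idx : local_dof_indices)
          sort_back[idx] = true;                                // "solid" DoF
      }

  // Move the marked (solid) DoFs behind the fluid DoFs; applied after
  // DoFRenumbering::component_wise, the global ordering then becomes
  // [v_f ; u_f ; p_f ; v_s ; u_s ; p_s].
  DoFRenumbering::sort_selected_dofs_back (dof_handler, sort_back);
}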

The setup() so far looks like this (full code in the attached setup.txt):
---------------------------------------
1.) Distribute DoFs.
2.) Renumber: Cuthill_McKee, component_wise, and finally depending on material_id().
3.) Count the sizes of the blocks (which gives the correct numbers).
4.) Create IndexSets for the locally owned & locally relevant DoFs.
5.) Set up the constraints.
6.) Create the BlockDynamicSparsityPattern:
    BlockDynamicSparsityPattern dsp (6, 6);
    std::vector<unsigned int> n_dof_per_block (6);
    n_dof_per_block[0] = n_v_f;
    n_dof_per_block[1] = n_u_f;
    n_dof_per_block[2] = n_p_f;
    n_dof_per_block[3] = n_v_s;
    n_dof_per_block[4] = n_u_s;
    n_dof_per_block[5] = n_p_s;

    for (unsigned int i=0; i<6; ++i)
      for (unsigned int j=0; j<6; ++j)
        if (true) // n_dof_per_block[i] > 0 && n_dof_per_block[j] > 0
          dsp.block(i,j).reinit (n_dof_per_block[i], n_dof_per_block[j]);

    dsp.compress();
    dsp.collect_sizes();

    DoFTools::make_sparsity_pattern (dof_handler, dsp,
                                     constraints, false,
                                     Utilities::MPI::this_mpi_process(mpi_communicator));

    SparsityTools::distribute_sparsity_pattern (dsp,
                                                dof_handler.locally_owned_dofs_per_processor(),
                                                mpi_communicator,
                                                locally_relevant_dofs);

7.) Now use this BlockDynamicSparsityPattern (dsp) to initialize the system_matrix (LA::MPI::BlockSparseMatrix):

    system_matrix.reinit (locally_owned_partitioning,
                          locally_owned_partitioning,
                          dsp,
                          mpi_communicator);
--------------------------------

The very last part (the reinit) works only if every processor owns parts of
BOTH the fluid AND the solid domain (which is so far not ensured).
This behaviour can be seen by running exactly the same problem with mpirun -n 2
or -n 3; the output of both runs is attached.

THE QUESTION IS NOW:
How can one reinit the blocks of a BlockSparseMatrix if not every processor
(MPI subdomain) has locally owned DoFs in all blocks?
Or what is the point I am missing here?

I added a chunk of code at the end (commented out in the attached setup.txt)
that shows that block(block_row, block_col).reinit(...) fails if the locally
owned IndexSet is empty.
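
Independently of my program, I believe the failure boils down to something like
the following minimal sketch (this is only my reading of the error below, not
the attached code; the header name is the one from deal.II 9.0.x, and the
two-rank setup is just meant to mimic one rank without locally owned DoFs in a
block):

// Minimal sketch, to be run with "mpirun -n 2": rank 1 owns all rows and
// columns, rank 0 owns none, which mimics a block in which one processor has
// no locally owned DoFs. In a debug build I would expect this to trip the
// same assertion (local_columns.n_elements()>0) quoted in the error output.
#include <deal.II/base/mpi.h>
#include <deal.II/base/index_set.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h> // header name in deal.II 9.0.x

int main (int argc, char **argv)
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi_init (argc, argv, 1);

  const unsigned int n    = 10;
  const unsigned int rank = Utilities::MPI::this_mpi_process (MPI_COMM_WORLD);

  IndexSet owned (n);
  if (rank == 1)
    owned.add_range (0, n);        // rank 0 keeps an empty locally owned set

  DynamicSparsityPattern dsp (n, n);
  for (unsigned int i = 0; i < n; ++i)
    dsp.add (i, i);

  PETScWrappers::MPI::SparseMatrix matrix;
  matrix.reinit (owned, owned, dsp, MPI_COMM_WORLD);   // fails on the empty rank
  return 0;
}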

Thanks for any help in advance, 
Richard

--------------------------------------------------------
[Attachment: setup.txt]
--------------------------------------------------------
  timer.enter_section("Setup system.");

  system_matrix.clear ();
  dof_handler.distribute_dofs (fe);  
  DoFRenumbering::Cuthill_McKee (dof_handler);

  // We are dealing with dim+dim+1 components for this
  // dim-dimensional fluid-structure interaction problem.
  // Precisely, we use:
  //   velocities:                0
  //   structure displacement:    1
  //   scalar pressure field:     2
  std::vector<unsigned int> block_component (dim+dim+1, 0);  // velocity
  for (unsigned int i=0; i<dim; ++i)
    block_component[dim+i] = 1;                              // displacement
  block_component[dim+dim] = 2;                              // pressure
 
  DoFRenumbering::component_wise (dof_handler, block_component);

  // Count DoFs per block.
  std::vector<unsigned int> dofs_per_block (3);
  DoFTools::count_dofs_per_block (dof_handler, dofs_per_block, block_component);
  const unsigned int n_v = dofs_per_block[0],
                     n_u = dofs_per_block[1],
                     n_p = dofs_per_block[2];

  // Now sort DoFs based on material_id, i.e., sort all DoFs NOT corresponding
  // to material_id == 0 (fluid) back and count material hits per block.
  unsigned int n_v_f = 0, n_u_f = 0, n_p_f = 0,
               n_v_s = 0, n_u_s = 0, n_p_s = 0;

  {
    std::vector<bool> sort_back_material (dof_handler.n_dofs(), false);
    const unsigned int sort_back_except_material_id = 0;
    renumber_dofs_material_id (sort_back_except_material_id, sort_back_material);

    // Count sub-components.
    for (unsigned int i = 0; i < dof_handler.n_dofs(); ++i)
      {
        if (i < n_v)
          {
            if (sort_back_material[i])
              n_v_s += 1;
            else
              n_v_f += 1;
          }
        else if (i >= n_v && i < n_v + n_u)
          {
            if (sort_back_material[i])
              n_u_s += 1;
            else
              n_u_f += 1;
          }
        else if (i >= n_v + n_u && i < n_v + n_u + n_p)
          {
            if (sort_back_material[i])
              n_p_s += 1;
            else
              n_p_f += 1;
          }
        else
          {
            std::cout << "Dimension mismatch" << std::endl;
            Assert (false, ExcInternalError());
          }
      }
  }

  pcout << "Cells:   " << triangulation.n_active_cells() << std::endl;
  pcout << "DoFs:    " << dof_handler.n_dofs() << " ( n_v = " << n_v << " = " 
<< n_v_f << "+" << n_v_s
                                                                                
 << " , " << "n_u = " << n_u << " = " << n_u_f << "+" << n_u_s
                                                                             << 
" , " << "n_p = " << n_p << " = " << n_p_f << "+" << n_p_s << " )" << std::endl;

  Assert(n_v_f > 0 && n_v_s > 0 && n_u_f > 0 && n_u_s > 0 && n_p_f > 0 && n_p_s 
> 0, ExcInternalError());

  // We split up the IndexSet for locally owned and locally relevant DoFs
  // into IndexSets based on how we want to create the block matrices
  // and vectors.

  IndexSet locally_owned_dofs, locally_relevant_dofs;
  locally_owned_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);

  unsigned int idx_end_v_f = n_v_f;
  unsigned int idx_end_u_f = idx_end_v_f + n_u_f;
  unsigned int idx_end_p_f = idx_end_u_f + n_p_f;
  unsigned int idx_end_v_s = idx_end_p_f + n_v_s;
  unsigned int idx_end_u_s = idx_end_v_s + n_u_s;
  unsigned int idx_end_p_s = idx_end_u_s + n_p_s;

  locally_owned_partitioning.resize(6);
  locally_owned_partitioning[0] = locally_owned_dofs.get_view(0,           idx_end_v_f);
  locally_owned_partitioning[1] = locally_owned_dofs.get_view(idx_end_v_f, idx_end_u_f);
  locally_owned_partitioning[2] = locally_owned_dofs.get_view(idx_end_u_f, idx_end_p_f);
  locally_owned_partitioning[3] = locally_owned_dofs.get_view(idx_end_p_f, idx_end_v_s);
  locally_owned_partitioning[4] = locally_owned_dofs.get_view(idx_end_v_s, idx_end_u_s);
  locally_owned_partitioning[5] = locally_owned_dofs.get_view(idx_end_u_s, idx_end_p_s);

  locally_relevant_partitioning.resize(6);
  locally_relevant_partitioning[0] = locally_relevant_dofs.get_view(0,           idx_end_v_f);
  locally_relevant_partitioning[1] = locally_relevant_dofs.get_view(idx_end_v_f, idx_end_u_f);
  locally_relevant_partitioning[2] = locally_relevant_dofs.get_view(idx_end_u_f, idx_end_p_f);
  locally_relevant_partitioning[3] = locally_relevant_dofs.get_view(idx_end_p_f, idx_end_v_s);
  locally_relevant_partitioning[4] = locally_relevant_dofs.get_view(idx_end_v_s, idx_end_u_s);
  locally_relevant_partitioning[5] = locally_relevant_dofs.get_view(idx_end_u_s, idx_end_p_s);

  // Check locally owned IndexSets.
  for (unsigned int i=0; i<Utilities::MPI::n_mpi_processes(mpi_communicator); ++i)
    if (Utilities::MPI::this_mpi_process(mpi_communicator) == i)
      for (unsigned int j=0; j<6; ++j)
        std::cout << " proc " << i
                  << " comp " << j
                  << " n_owned "    << Utilities::int_to_string (locally_owned_partitioning[j].n_elements(), 6)
                  << " n_relevant " << Utilities::int_to_string (locally_relevant_partitioning[j].n_elements(), 6)
                  << std::endl;

  // Set up constraints.
  {
    constraints.reinit (locally_relevant_dofs);
    DoFTools::make_hanging_node_constraints (dof_handler, constraints);
    set_newton_zero_bc ();
    constraints.close ();
  }

  // Construct sparsity pattern.
  {

    BlockDynamicSparsityPattern dsp (6, 6);
    std::vector<unsigned int> n_dof_per_block (6);
    n_dof_per_block[0] = n_v_f;
    n_dof_per_block[1] = n_u_f;
    n_dof_per_block[2] = n_p_f;
    n_dof_per_block[3] = n_v_s;
    n_dof_per_block[4] = n_u_s;
    n_dof_per_block[5] = n_p_s;

    for (unsigned int i=0; i<6; ++i)
      for (unsigned int j=0; j<6; ++j)
        if (true) // n_dof_per_block[i] > 0 && n_dof_per_block[j] > 0
          dsp.block(i,j).reinit (n_dof_per_block[i], n_dof_per_block[j]);

    dsp.compress();
    dsp.collect_sizes();

    DoFTools::make_sparsity_pattern (dof_handler, dsp,
                                     constraints, false,
                                     Utilities::MPI::this_mpi_process(mpi_communicator));

    SparsityTools::distribute_sparsity_pattern (dsp,
                                                dof_handler.locally_owned_dofs_per_processor(),
                                                mpi_communicator,
                                                locally_relevant_dofs);

//    // Print out the sparsity pattern.
//    if (Utilities::MPI::n_mpi_processes(mpi_communicator) == 1)
//      {
//        sparsity_pattern.copy_from (dsp);
//        std::ofstream out ("sparsity_pattern_blockwise.gpl");
//        sparsity_pattern.print_gnuplot (out);
//        // in terminal: gnuplot [ENTER]   plot "sparsity_pattern.gpl"
//      }

    // Init matrix with the sparsity pattern.
    system_matrix.reinit (locally_owned_partitioning,
                          locally_owned_partitioning,
                          dsp,
                          mpi_communicator);


//    // Alternative (the chunk of code mentioned above): init the matrix
//    // block by block and fall back to a "local size" reinit whenever a
//    // block has no locally owned DoFs on this processor.
//    system_matrix.reinit (6, 6);
//    for (unsigned int i=0; i<6; ++i)
//      for (unsigned int j=0; j<6; ++j)
//        {
//          pcout << "i " << Utilities::int_to_string(i, 2) << " "
//                << "j " << Utilities::int_to_string(j, 2) << std::endl;
//          if (locally_owned_partitioning[i].n_elements() > 0 &&
//              locally_owned_partitioning[j].n_elements() > 0)
//            system_matrix.block(i,j).reinit (locally_owned_partitioning[i],
//                                             locally_owned_partitioning[j],
//                                             dsp.block(i,j),
//                                             mpi_communicator);
//          else
//            system_matrix.block(i,j).reinit (mpi_communicator,
//                                             n_dof_per_block[i],
//                                             n_dof_per_block[j],
//                                             locally_owned_partitioning[i].n_elements(),
//                                             locally_owned_partitioning[j].n_elements(),
//                                             0, true);
//        }
//    system_matrix.collect_sizes();

  }
 
  // Init distributed vectors.
  solution.reinit              (locally_owned_partitioning, mpi_communicator); // solution at time t
  old_timestep_solution.reinit (locally_owned_partitioning, mpi_communicator); // solution at time t-dt
  system_rhs.reinit            (locally_owned_partitioning, mpi_communicator); // Newton residual

  timer.exit_section();

--------------------------------------------------------
[Attachment: console output of the two runs]
--------------------------------------------------------
Cells:   2640
DoFs:    51120 ( n_v = 21600 = 18408+3192 , n_u = 21600 = 18408+3192 , n_p = 7920 = 6684+1236 )
 proc 0 comp 0 n_owned 010258 n_relevant 010766
 proc 0 comp 1 n_owned 010258 n_relevant 010766
 proc 0 comp 2 n_owned 003693 n_relevant 003876
 proc 0 comp 3 n_owned 000752 n_relevant 001096
 proc 0 comp 4 n_owned 000752 n_relevant 001096
 proc 0 comp 5 n_owned 000267 n_relevant 000399
 proc 1 comp 0 n_owned 008150 n_relevant 008838
 proc 1 comp 1 n_owned 008150 n_relevant 008838
 proc 1 comp 2 n_owned 002991 n_relevant 003159
 proc 1 comp 3 n_owned 002440 n_relevant 002936
 proc 1 comp 4 n_owned 002440 n_relevant 002936
 proc 1 comp 5 n_owned 000969 n_relevant 001095

===================================================================
Parameters
==========
Density fluid:     1000
Density structure: 1000
Viscosity fluid:   0.001
alpha_u:           1
Lame coeff. mu:    2e+06

Timestep 0 (CN_shifted): 0 (0.001)
===================================================================


        0.000000e+00

------------------
Domain force in x1 @t= 1.000000e-03  :  0.000000e+00 (fluid domain int)
Domain force in x1 @t= 1.000000e-03  :  0.000000e+00 (solid domain int)
Domain force in x2 @t= 1.000000e-03  :  0.000000e+00 (fluid domain int)
Domain force in x2 @t= 1.000000e-03  :  0.000000e+00 (solid domain int)
Min J @t = 1.000000e-03 , min(J_f) = 1.000000e+00
------------------
Timestep 1 (CN_shifted): 1.000000e-03 (1.000000e-03)
===================================================================
Cells:   2640
DoFs:    51120 ( n_v = 21600 = 18408+3192 , n_u = 21600 = 18408+3192 , n_p = 7920 = 6684+1236 )
 proc 0 comp 0 n_owned 005664 n_relevant 006634
 proc 0 comp 1 n_owned 005664 n_relevant 006634
 proc 0 comp 2 n_owned 001980 n_relevant 002349
 proc 0 comp 3 n_owned 000000 n_relevant 000000
 proc 0 comp 4 n_owned 000000 n_relevant 000000
 proc 0 comp 5 n_owned 000000 n_relevant 000000
 proc 2 comp 0 n_owned 004708 n_relevant 005250
 proc 2 comp 1 n_owned 004708 n_relevant 005250
 proc 2 comp 2 n_owned 001677 n_relevant 001860
 proc 2 comp 3 n_owned 000814 n_relevant 001422
 proc 2 comp 4 n_owned 000814 n_relevant 001422
 proc 2 comp 5 n_owned 000303 n_relevant 000483
 proc 1 comp 0 n_owned 004594 n_relevant 005892
 proc 1 comp 1 n_owned 004594 n_relevant 005892
 proc 1 comp 2 n_owned 001713 n_relevant 002058
 proc 1 comp 3 n_owned 000752 n_relevant 001096
 proc 3 comp 0 n_owned 003442 n_relevant 004616
 proc 3 comp 1 n_owned 003442 n_relevant 004616
 proc 1 comp 4 n_owned 000752 n_relevant 001096
 proc 1 comp 5 n_owned 000267 n_relevant 000399
 proc 3 comp 2 n_owned 001314 n_relevant 001602
 proc 3 comp 3 n_owned 001626 n_relevant 002104
 proc 3 comp 4 n_owned 001626 n_relevant 002104
 proc 3 comp 5 n_owned 000666 n_relevant 000789

--------------------------------------------------------
An error occurred in line <385> of file 
</home/richardschu/deal.ii-candi/tmp/unpack/deal.II-v9.0.1/source/lac/petsc_parallel_sparse_matrix.cc>
 in function
    void dealii::PETScWrappers::MPI::SparseMatrix::do_reinit(const 
dealii::IndexSet&, const dealii::IndexSet&, const SparsityPatternType&) [with 
SparsityPatternType = dealii::DynamicSparsityPattern]
The violated condition was: 
    local_columns.n_elements()>0
Additional information: 
    This exception -- which is used in many places in the library -- usually 
indicates that some condition which the author of the code thought must be 
satisfied at a certain point in an algorithm, is not fulfilled. An example 
would be that the first part of an algorithm sorts elements of an array in 
ascending order, and a second part of the algorithm later encounters an element 
that is not larger than the previous one.

There is usually not very much you can do if you encounter such an exception 
since it indicates an error in deal.II, not in your own program. Try to come up 
with the smallest possible program that still demonstrates the error and 
contact the deal.II mailing lists with it to obtain help.

Stacktrace:
-----------
#0  /home/richardschu/deal.ii-candi/deal.II-v9.0.1/lib/libdeal_II.g.so.9.0.1: 
void 
dealii::PETScWrappers::MPI::SparseMatrix::do_reinit<dealii::DynamicSparsityPattern>(dealii::IndexSet
 const&, dealii::IndexSet const&, dealii::DynamicSparsityPattern const&)
#1  /home/richardschu/deal.ii-candi/deal.II-v9.0.1/lib/libdeal_II.g.so.9.0.1: 
void 
dealii::PETScWrappers::MPI::SparseMatrix::reinit<dealii::DynamicSparsityPattern>(dealii::IndexSet
 const&, dealii::IndexSet const&, dealii::DynamicSparsityPattern const&, 
ompi_communicator_t* const&)
#2  /home/richardschu/deal.ii-candi/deal.II-v9.0.1/lib/libdeal_II.g.so.9.0.1: 
dealii::PETScWrappers::MPI::BlockSparseMatrix::reinit(std::vector<dealii::IndexSet,
 std::allocator<dealii::IndexSet> > const&, std::vector<dealii::IndexSet, 
std::allocator<dealii::IndexSet> > const&, dealii::BlockDynamicSparsityPattern 
const&, ompi_communicator_t* const&)
#3  step-fsi-parallel-block: FSI_ALE_Problem<2>::setup_system()
#4  step-fsi-parallel-block: FSI_ALE_Problem<2>::run()
#5  step-fsi-parallel-block: main
--------------------------------------------------------

Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. 
You can also put the following into your ~/.gdbinit:
  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 255.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
