Hi Peter,

Thanks a lot for the suggestion. With it I think I managed to achieve the desired effect. First, I populated a LinearAlgebra::distributed::Vector<float> with the material ids and called the update_ghost_values() method afterwards. In a second loop over the active cells I used said vector and the global_active_cell_index() of each cell pair for the boolean comparison of the material ids.

I could not do a visual verification due to the assertion data_vector.size() == triangulation->n_active_cells() inside the add_data_vector() method when called with DataVectorType::type_cell_data as third argument, i.e.,
  data_out.add_data_vector(
    cell_is_at_interface,
    "cell_is_at_interface",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);

The assertion does not hold for a distributed vector, as n_global_active_cells > n_active_cells. Maybe this is not the correct method for a LinearAlgebra::distributed::Vector<float>?

Nevertheless, I instead counted the number of times the boolean comparison was true, and it coincides with what is expected from the number of global refinements of the unit square (with 3 global refinements there are a total of 16 cells at the interface). I also checked the x-coordinate of the cells' centers to verify that they are indeed at the interface.

It seems that, when working in parallel, out of the methods CellAccessor::material_id(), CellAccessor::active_fe_index() and CellAccessor::global_active_cell_index(), only the last one returns the correct value when called on a neighbor cell outside the locally owned subdomain. I observed this by printing the global_active_cell_index, active_fe_index and material_id pairs each time the boolean comparison was true. Here are the results in serial

  Global active cell index pair ( 5, 16) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair ( 7, 18) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (13, 24) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (15, 26) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (16,  5) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (18,  7) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (24, 13) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (26, 15) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (37, 48) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (39, 50) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (45, 56) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (47, 58) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (48, 37) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (50, 39) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (56, 45) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (58, 47) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)

and in parallel (-np 3)

  Global active cell index pair (24, 13) with active_fe_index pair ( 2, 0) and material_id pair ( 2, 0)
  Global active cell index pair (26, 15) with active_fe_index pair ( 2, 0) and material_id pair ( 2, 0)
  Global active cell index pair (37, 48) with active_fe_index pair ( 1, 0) and material_id pair ( 1, 0)
  Global active cell index pair (45, 56) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (47, 58) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (48, 37) with active_fe_index pair ( 2, 0) and material_id pair ( 2, 0)
  Global active cell index pair (50, 39) with active_fe_index pair ( 2, 0) and material_id pair ( 2, 0)
  Global active cell index pair (56, 45) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (58, 47) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair ( 5, 16) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair ( 7, 18) with active_fe_index pair ( 1, 2) and material_id pair ( 1, 2)
  Global active cell index pair (13, 24) with active_fe_index pair ( 1, 0) and material_id pair ( 1, 0)
  Global active cell index pair (15, 26) with active_fe_index pair ( 1, 0) and material_id pair ( 1, 0)
  Global active cell index pair (16,  5) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (18,  7) with active_fe_index pair ( 2, 1) and material_id pair ( 2, 1)
  Global active cell index pair (39, 50) with active_fe_index pair ( 1, 0) and material_id pair ( 1, 0)

If the neighbor cell is outside the locally owned subdomain, the material_id() and active_fe_index() methods return zero.

Attached is the updated MWE with which the above terminal output was obtained and which reproduces the assertion triggered by the add_data_vector() method.

Cheers,
Jose

On Tuesday, March 21, 2023 at 11:00:45 AM UTC+1 peterr...@gmail.com wrote:

> Hi Jose,
>
> not sure. You could use Triangulation::global_active_cell_index_partitioner() to initialize a distributed vector, access it via CellAccessor::global_active_cell_index(), and update the ghost values.
>
> Peter
>
> On Tuesday, March 21, 2023 at 10:47:14 AM UTC+1 jose.a...@gmail.com wrote:
>
>> Hello dealii community,
>>
>> I am working on a problem with several subdomains. At the interface between them a boundary integral is to be evaluated. I am identifying the interface by comparing the material_id of neighboring cells (or their active_fe_index, as I am using a different FESystem per subdomain). In order to speed up the search during assembly, a Vector<float> is previously filled with 1.0 at the cells where the material_id/active_fe_index differ. This approach works in serial, but in parallel the material_id() call on a neighbor cell outside the locally owned subdomain always returns 0 (an assertion is missing here). As such, not only the interface between the subdomains is marked but also the interface between the locally owned subdomains, as shown in the attached picture.
>>
>> My question is: is there an equivalent to locally owned and relevant dofs for the case of cells, i.e., locally owned and relevant cells, such that the material_id/active_fe_index of a neighboring cell outside the locally owned subdomain can be read? Alternatively, is there a built-in method/member which can be used for my purpose, or has someone already solved this through another approach?
>>
>> Attached is also the MWE used to obtain the attached screenshot.
>>
>> Cheers,
>> Jose
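P.S.: Regarding the visual verification, a possible workaround might be to copy the locally owned entries of the distributed cell-data vector into a serial dealii::Vector<float> of size triangulation.n_active_cells(), indexed by CellAccessor::active_cell_index(), since that is the size the assertion checks against. A minimal, untested sketch for the cell_is_at_interface vector (it additionally requires <deal.II/lac/vector.h>):

  // Untested sketch: gather the locally owned cell data into a serial
  // vector of size n_active_cells(), which is what add_data_vector()
  // with type_cell_data expects; entries of non-owned cells stay zero,
  // which is harmless since DataOut only patches locally owned cells.
  dealii::Vector<float> output_values(triangulation.n_active_cells());

  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      output_values[cell->active_cell_index()] =
        cell_is_at_interface[cell->global_active_cell_index()];

  data_out.add_data_vector(
    output_values,
    "cell_is_at_interface",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);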
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h> // Utilities::MPI::this_mpi_process(), ::sum()
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/data_out_dof_data.h>

#include <cmath>   // std::fabs
#include <iomanip> // std::setw

namespace Tests
{

template <int dim>
class MarkInterface
{
public:
  MarkInterface();

  void run();

private:
  dealii::ConditionalOStream pcout;

  dealii::parallel::distributed::Triangulation<dim> triangulation;

  dealii::DoFHandler<dim> dof_handler;

  // Cell-data vectors, all indexed by the global active cell index
  dealii::LinearAlgebra::distributed::Vector<float> locally_owned_subdomain;
  dealii::LinearAlgebra::distributed::Vector<float> material_id;
  dealii::LinearAlgebra::distributed::Vector<float> active_fe_index;
  dealii::LinearAlgebra::distributed::Vector<float> cell_is_at_interface;

  const double edge_length;

  void make_grid();

  void mark_grid();

  void assembly_test();

  void output();
};

template <int dim>
MarkInterface<dim>::MarkInterface()
  : pcout(std::cout,
          dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
  , triangulation(MPI_COMM_WORLD)
  , dof_handler(triangulation)
  , edge_length(1.0)
{}

template <int dim>
void MarkInterface<dim>::run()
{
  pcout << "Running in " + std::to_string(dim) + "-d:\n";

  make_grid();
  mark_grid();
  assembly_test();
  output();
}

template <int dim>
void MarkInterface<dim>::make_grid()
{
  // Grid
  dealii::GridGenerator::hyper_cube(triangulation, 0, edge_length, false);

  triangulation.refine_global(3);

  // Subdomains: material id 1 on the left half, 2 on the right half
  for (const auto &active_cell : triangulation.active_cell_iterators())
    if (active_cell->is_locally_owned())
      {
        if (std::fabs(active_cell->center()[0]) < edge_length / 2.0)
          active_cell->set_material_id(1);
        else
          active_cell->set_material_id(2);
      }

  // Match the active FE index to the material id
  for (const auto &active_cell : dof_handler.active_cell_iterators())
    if (active_cell->is_locally_owned())
      active_cell->set_active_fe_index(active_cell->material_id());
}

template <int dim>
void MarkInterface<dim>::mark_grid()
{
  pcout << " Marking\n";

  // Initialize the cell-data vectors with the partitioner of the global
  // active cell indices, as suggested by Peter
  cell_is_at_interface.reinit(
    triangulation.global_active_cell_index_partitioner().lock());
  locally_owned_subdomain.reinit(
    triangulation.global_active_cell_index_partitioner().lock());
  material_id.reinit(
    triangulation.global_active_cell_index_partitioner().lock());
  active_fe_index.reinit(
    triangulation.global_active_cell_index_partitioner().lock());

  cell_is_at_interface    = -1.0;
  locally_owned_subdomain = -1.0;
  material_id             = -1.0;
  active_fe_index         = -1.0;

  cell_is_at_interface.update_ghost_values();
  locally_owned_subdomain.update_ghost_values();
  material_id.update_ghost_values();
  active_fe_index.update_ghost_values();

  // Fill the vectors on the locally owned cells
  for (const auto &active_cell : dof_handler.active_cell_iterators())
    if (active_cell->is_locally_owned())
      {
        locally_owned_subdomain[active_cell->global_active_cell_index()] =
          triangulation.locally_owned_subdomain();

        material_id[active_cell->global_active_cell_index()] =
          active_cell->material_id();

        active_fe_index[active_cell->global_active_cell_index()] =
          active_cell->active_fe_index();
      }

  // Make the material ids of ghost cells visible to the comparison below
  material_id.update_ghost_values();

  unsigned int n_calls = 0;

  // Mark a cell when its material id differs from that of a neighbor
  for (const auto &active_cell : dof_handler.active_cell_iterators())
    if (active_cell->is_locally_owned())
      for (const auto &face_index : active_cell->face_indices())
        if (!active_cell->face(face_index)->at_boundary() &&
            material_id[active_cell->global_active_cell_index()] !=
              material_id[active_cell->neighbor(face_index)
                            ->global_active_cell_index()])
          {
            cell_is_at_interface[active_cell->global_active_cell_index()] =
              1.0;

            std::cout
              << " active_cell->center()[0] = " << active_cell->center()[0]
              << ", active_cell->neighbor(face_index)->center()[0] = "
              << active_cell->neighbor(face_index)->center()[0] << std::endl;

            n_calls++;

            break;
          }

  cell_is_at_interface.update_ghost_values();

  n_calls = dealii::Utilities::MPI::sum(n_calls, MPI_COMM_WORLD);

  pcout << "\n n_calls = " << n_calls << "\n\n";

  MPI_Barrier(MPI_COMM_WORLD);
}

template <int dim>
void MarkInterface<dim>::assembly_test()
{
  pcout << " Testing assembly loop template\n";

  for (const auto &active_cell : dof_handler.active_cell_iterators())
    // Cells away from the interface carry the initial value -1.0, so test
    // for a positive value instead of relying on the implicit float-to-bool
    // conversion (for which -1.0 would also count as true)
    if (active_cell->is_locally_owned() &&
        cell_is_at_interface[active_cell->global_active_cell_index()] > 0.0)
      for (const auto &face_index : active_cell->face_indices())
        if (!active_cell->face(face_index)->at_boundary() &&
            material_id[active_cell->global_active_cell_index()] !=
              material_id[active_cell->neighbor(face_index)
                            ->global_active_cell_index()])
          std::cout
            << " Global active cell index pair (" << std::setw(2)
            << active_cell->global_active_cell_index() << ", " << std::setw(2)
            << active_cell->neighbor(face_index)->global_active_cell_index()
            << ") with active_fe_index pair (" << std::setw(2)
            << active_cell->active_fe_index() << ", " << std::setw(2)
            << active_cell->neighbor(face_index)->active_fe_index()
            << ") and material_id pair (" << std::setw(2)
            << active_cell->material_id() << ", " << std::setw(2)
            << active_cell->neighbor(face_index)->material_id() << ")\n";
}

template <int dim>
void MarkInterface<dim>::output()
{
  dealii::DataOut<dim> data_out;

  data_out.attach_dof_handler(dof_handler);

  // These calls trigger the assertion
  //   data_vector.size() == triangulation->n_active_cells()
  // discussed above, since the distributed vectors have global size
  // n_global_active_cells
  data_out.add_data_vector(
    locally_owned_subdomain,
    "locally_owned_subdomain",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);

  data_out.add_data_vector(
    material_id,
    "material_id",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);

  data_out.add_data_vector(
    active_fe_index,
    "active_fe_index",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);

  data_out.add_data_vector(
    cell_is_at_interface,
    "cell_is_at_interface",
    dealii::DataOut_DoFData<dealii::DoFHandler<dim, dim>, dim, dim>::
      DataVectorType::type_cell_data);

  data_out.build_patches();

  data_out.write_vtu_in_parallel(
    "triangulation_" + std::to_string(dim) + ".vtu", MPI_COMM_WORLD);
}

} // namespace Tests

int main(int argc, char *argv[])
{
  try
    {
      dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(
        argc, argv, dealii::numbers::invalid_unsigned_int);

      {
        Tests::MarkInterface<2> test;
        test.run();
      }

      {
        Tests::MarkInterface<3> test;
        // test.run();
      }
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
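// Side note (not part of the original MWE; an untested sketch): instead of a
// distributed vector, deal.II's GridTools::exchange_cell_data_to_ghosts()
// from <deal.II/grid/grid_tools.h> can communicate per-cell data, such as the
// material id, to the ghost cells of the other processes directly. Roughly,
// with the exact pack/unpack signatures depending on the deal.II version:
//
//   std::map<dealii::CellId, dealii::types::material_id> neighbor_material_ids;
//
//   dealii::GridTools::exchange_cell_data_to_ghosts<
//     dealii::types::material_id, dealii::DoFHandler<dim>>(
//     dof_handler,
//     /*pack=*/[](const auto &cell) { return cell->material_id(); },
//     /*unpack=*/[&](const auto &cell, const auto id) {
//       neighbor_material_ids[cell->id()] = id;
//     });
//
// The map could then be queried for neighbor cells that are not locally
// owned, avoiding the float round-trip through the distributed vector.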