Hi Timo,

Thank you for your answer, this sounds really good!
I tried to use the class, but I still have some problems. In the example 
you are just transferring an integer variable. Is it also possible to 
transfer a whole CellDataStorage element (or at least some packed vector 
with its elements) from a parent cell to each child? Do you also have an 
example of how this works?

Thanks in advance,
Frederik

On Thursday, October 26, 2017 at 19:34:37 UTC+2, Timo Heister wrote:

> Frederik, 
>
> without looking at your code (sorry!), I think you can use 
> Triangulation::register_data_attach and notify_ready_to_unpack to work 
> with any kind of data that you want to transfer around (this is the 
> functionality that is used by SolutionTransfer and 
> ContinuousQuadratureDataTransfer). 
>
> See 
> https://github.com/dealii/dealii/blob/master/tests/mpi/attach_data_01.cc 
> for an example. 
>
> On Thu, Oct 26, 2017 at 12:50 PM, 'Frederik S.' via deal.II User Group 
> <dea...@googlegroups.com> wrote: 
> > Hello! 
> > 
> > I've got a question on the usage of CellDataStorage for refined cells 
> > in a parallelized simulation. Below you'll find a sample of the code I 
> > use for refinement. 
> > 
> > I'm transferring the solution as described in 
> > https://www.dealii.org/8.5.0/doxygen/deal.II/classparallel_1_1distributed_1_1SolutionTransfer.html 
> > and I sadly can't use ContinuousQuadratureDataTransfer, because my 
> > CellData is discontinuous. Therefore I iterate through all cells, look 
> > at each cell's parent, and copy the CellData of the parent to the child 
> > cells (the blue part below; in reality the copying process is much 
> > longer, but then the code would be unnecessarily complex for this 
> > example). If I run the code with just one CPU core, everything works 
> > perfectly fine, so in this example every child cell has its parent's 
> > CellData. But for simulations with more than one core, I just copy 
> > zeros into the child cells' CellData (and I do not get a runtime 
> > error). 
> > 
> > Where did I make a mistake? Or is it just not possible to access a 
> > parent cell's CellData for parallel meshes? 
> > 
> > Thanks in advance! 
> > 
> > -------------------------------------------------------- 
> > 
> > template <int dim> 
> > void TEST<dim>::refine_grid () 
> > { 
> >   const unsigned int n_q_points = quadrature_formula.size (); 
> > 
> >   std::vector<const PETScWrappers::MPI::Vector *> solution_vectors (2); 
> >   solution_vectors[0] = &solution; 
> >   solution_vectors[1] = &old_solution; 
> > 
> >   parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector> 
> >     soltrans (dof_handler); 
> > 
> >   // Flag the locally owned cells for refinement. 
> >   typename parallel::distributed::Triangulation<dim>::active_cell_iterator 
> >     cell = triangulation.begin_active (), 
> >     endc = triangulation.end (); 
> >   for (; cell != endc; ++cell) 
> >     if (cell->subdomain_id () == this_mpi_process) 
> >       cell->set_refine_flag (); 
> > 
> >   triangulation.prepare_coarsening_and_refinement (); 
> >   soltrans.prepare_for_coarsening_and_refinement (solution_vectors); 
> >   triangulation.execute_coarsening_and_refinement (); 
> >   setup_system (0); 
> > 
> >   // (Re-)initialize the quadrature-point storage on the new cells. 
> >   typename parallel::distributed::Triangulation<dim>::active_cell_iterator 
> >     cell2 = triangulation.begin_active (), 
> >     endc2 = triangulation.end (); 
> >   for (; cell2 != endc2; ++cell2) 
> >     point_history_accessor.initialize (cell2, n_q_points); 
> > 
> >   const int highest_level_after_refinement = triangulation.n_global_levels () - 1; 
> > 
> >   PETScWrappers::MPI::Vector interpolated_solution (system_rhs); 
> >   PETScWrappers::MPI::Vector interpolated_old_solution (system_rhs); 
> >   std::vector<PETScWrappers::MPI::Vector *> tmp (2); 
> >   tmp[0] = &interpolated_solution; 
> >   tmp[1] = &interpolated_old_solution; 
> >   soltrans.interpolate (tmp); 
> > 
> >   // Copy each parent's CellData to its child cells. 
> >   cell2 = triangulation.begin_active (); 
> >   endc2 = triangulation.end (); 
> >   for (; cell2 != endc2; ++cell2) 
> >     if (cell2->level () == highest_level_after_refinement) 
> >       { 
> >         const typename parallel::distributed::Triangulation<dim>::cell_iterator 
> >           cell_p = cell2->parent (); 
> >         const std::vector<std::shared_ptr<PointHistory<dim> > > point_history_p 
> >           = point_history_accessor.get_data (cell_p); 
> >         for (unsigned int child = 0; child < cell_p->n_children (); ++child) 
> >           { 
> >             std::vector<std::shared_ptr<PointHistory<dim> > > point_history 
> >               = point_history_accessor.get_data (cell_p->child (child)); 
> >             for (unsigned int q_point = 0; q_point < n_q_points; ++q_point) 
> >               point_history[q_point]->status = point_history_p[q_point]->status; 
> >           } 
> >       } 
> > 
> >   constraints.clear (); 
> >   constraints.reinit (locally_relevant_dofs); 
> >   DoFTools::make_hanging_node_constraints (dof_handler, constraints); 
> >   constraints.close (); 
> >   constraints.distribute (interpolated_solution); 
> >   constraints.distribute (interpolated_old_solution); 
> >   solution     = interpolated_solution; 
> >   old_solution = interpolated_old_solution; 
> >   dof_handler.distribute_dofs (fe); 
> > } 
> > 
>
> -- 
> Timo Heister 
> http://www.math.clemson.edu/~heister/ 
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
