Hello,

I had a few more questions and would be grateful for some clarification:

1. I tried out p4est_ghost_expand() as Wolfgang suggested, but it gives 
me an error: after the call to communicate_dof_indices_on_marked_cells(), 
I still find cells with cell->is_ghost() == true that also have 
cell->user_flag_set() == true, which triggers the check in 
dof_handler_policy.cc, line 3875. My understanding is that 
communicate_dof_indices_on_marked_cells() (line 3492 in the same file) 
exchanges the DoF indices, i.e., each process sends data for its locally 
owned cells that are ghosts elsewhere and receives data for its own ghost 
cells from their owners. Shouldn't the unpack function then identify all 
the cells marked as ghost and receive the corresponding indices from the 
pack function? Or have I misunderstood something here?
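
For reference, here is a minimal sketch of the kind of test program 
suggested in the quoted message below, roughly what I used to look at 
the ghost layer (the mesh and refinement level are arbitrary; the only 
interesting part is the classification loop):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <iostream>

int main(int argc, char *argv[])
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  parallel::distributed::Triangulation<2> tria(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(tria);
  tria.refine_global(4);

  const unsigned int rank =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // Classify every active cell as locally owned, ghost, or artificial.
  for (const auto &cell : tria.active_cell_iterators())
    std::cout << "rank " << rank << ": cell " << cell->id() << " is "
              << (cell->is_locally_owned() ? "locally owned" :
                  cell->is_ghost()         ? "ghost" :
                                             "artificial")
              << std::endl;
}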

2. Everywhere in step-40 (my reference), the matrix and right-hand-side 
assembly uses locally_owned_dofs and locally_relevant_dofs. As you say in 
the Glossary 
<http://dealii.org/developer/doxygen/deal.II/DEALGlossary.html#GlossParallelScaling>
about the locally active DoFs:

" Since degrees of freedom are owned by only one processor, degrees of 
freedom on interfaces between cells owned by different processors may be 
owned by one or the other, so not all degrees of freedom on a locally owned 
cell are also locally owned degrees of freedom." 

Hence you assemble the local matrices using the locally owned DoFs. For 
the overlapping decomposition, however, I need to assemble matrices that 
also include the "ghost cells". In my case they would not really be ghost 
cells, but locally owned cells that are duplicated between logically 
overlapping processes. I am not sure how to go about doing this.
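
To make this concrete, what I naively have in mind is step-40's assembly 
loop with a modified ownership test. This is only a sketch: fe_values, 
cell_matrix, cell_rhs, local_dof_indices, constraints, system_matrix and 
system_rhs are all as in step-40, and whether distribute_local_to_global() 
may be called for matrix rows the process does not own is exactly my 
question.

// As in step-40, but also assembling on ghost cells, since in the
// overlapping decomposition those cells carry duplicated local rows.
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost()) // step-40: only is_locally_owned()
    {
      cell_matrix = 0;
      cell_rhs    = 0;
      fe_values.reinit(cell);
      // ... local integration exactly as in step-40 ...
      cell->get_dof_indices(local_dof_indices);
      constraints.distribute_local_to_global(cell_matrix,
                                             cell_rhs,
                                             local_dof_indices,
                                             system_matrix,
                                             system_rhs);
    }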

Thank you,

Best Regards,
Pratik.

On Monday, April 2, 2018 at 10:01:54 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> > Thank you for your reply. I understand that DD methods are not quite 
> > that popular now. But I wanted to explore the avenue keeping in mind the 
> > issue of scalability as well. For scalability, as you mention 
> > communication is probably the bottleneck. 
>
> No, communication is not the issue. It's not a computational problem but 
> a mathematical one: in each DD iteration, you only transport information 
> from one subdomain to the next subdomain. If you split the domain into 
> too many subdomains, you will need a lot of iterations to transport 
> information across the entire domain. In other words, the reason why DD 
> methods are not good for large processor counts is because the number of 
> outer iterations grows with the number of processors you have -- it just 
> doesn't scale. 
>
> A different perspective is that DD methods lack some kind of coarse grid 
> correction in which you exchange information globally. You could 
> presumably use a non-overlapping mortar method and get good parallel 
> scalability if you had a good preconditioner for the Schur complement 
> posed on the skeleton (i.e., the DoFs that are located on the interfaces 
> between subdomains). Of course, constructing such preconditioners has 
> also proven to be difficult. 
>
>
>
> > It would be very helpful if you could provide me with some pointers for 
> > where and how to change the p4est functions in deal.II. 
>
> All of the code that interfaces with p4est is in 
> source/distributed/tria.cc. We build the ghost layer in line 2532 where 
> we call p4est_ghost_new (through its dimension independent alias). That 
> function is defined here: 
>
>
> http://p4est.github.io/api/p4est__ghost_8h.html#a34a0bfb7169437f6fc2382a67c47e89d
>  
>
> You'll probably have to call p4est_ghost_expand() one or more times: 
>
>
> http://p4est.github.io/api/p4est__ghost_8h.html#ab9750fa62cbc17285a0eb5cfe13a1e28
>  
>
> I would just stick a call to this function right after the one to 
> ghost_new as a trial and see what happens in a small test program that 
> on each processor just outputs information about what cells are locally 
> owned and which are ghosts. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>

