Dear Wolfgang,

Thank you for your reply! I only noticed it now, a while later, since I had 
assumed the topic was dead after I posted it. Thank you for your time and 
effort!
Remarks on your response below:

On Tuesday, October 30, 2018 at 17:26:21 UTC+1, Wolfgang Bangerth wrote:
>
>
> Pascal, 
> I don't think anyone responded to your email here: 
>
> > I will try to be as short as possible 
>
> That was only moderately successful ;-)) 
>
>
> > - if more details are required 
> > feel free to ask. Also I offer to submit all mesh generation code I 
> > create in the future, since others might have similar needs at some 
> point. 
>
> We would of course love to include this! 
>
Once I am satisfied with my solution, I will prepare some examples and make 
them available. 

>
>
> > I work on a 3d mesh with purely axis-parallel edges. The mesh is a 
> > 2d-mesh (say in x and y direction) extruded to 3d (z-direction). Due to 
> > the scheme I use it is required, that the distributed version of this 
> > mesh be distributed as intervals along the z-axis ( process 0 has all 
> > cells with cell->center()[2] in [0,1], process 1 has all cells with 
> > cell->center()[2] in (1,2] and so on.) 
> > What I did originally was simply generating a mesh consisting of 
> > n_processes cells, let that mesh be auto partitioned, then turning 
> > partitioning off 
>
> Out of curiosity, how do you do this? 
>
I start with a p::d::Triangulation created with 
parallel::distributed::Triangulation<3>::Settings::no_automatic_repartitioning 
and hand it to GridGenerator::subdivided_parallelepiped<3, 3>. In this call 
I use the overload that also takes a vector of subdivisions per dimension 
and pass it the vector [1, 1, n_processes]. I then manually call 
tria->repartition(), which gives me an ordered triangulation whose cells 
are partitioned according to their z-coordinates. I can then perform 
refinement, and the basic structure remains the same because automatic 
repartitioning is turned off. The only real downsides (apart from the 
approach feeling weird and hacky) are that I am now stuck with a 
p::d::Triangulation, which does not allow the anisotropic refinement that 
would be useful for me, and that I cannot use a starting mesh other than 
the one with one cell per process, because then the partitioning done by 
p4est is no longer easy to predict.
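
For concreteness, a minimal sketch of that construction (assuming deal.II 
9.x; the unit-thick layers match my description above, everything else is 
placeholder):

#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <array>
#include <vector>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  const unsigned int n_processes =
    Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);

  // Automatic repartitioning stays off so that later refinement does not
  // disturb the layer-per-process layout established below.
  parallel::distributed::Triangulation<3> tria(
    MPI_COMM_WORLD,
    Triangulation<3>::none,
    parallel::distributed::Triangulation<3>::no_automatic_repartitioning);

  // A box with unit cross section and one coarse cell per process along z.
  const Point<3>              origin(0, 0, 0);
  std::array<Tensor<1, 3>, 3> edges;
  edges[0][0] = 1.;
  edges[1][1] = 1.;
  edges[2][2] = n_processes;
  const std::vector<unsigned int> subdivisions = {1, 1, n_processes};

  GridGenerator::subdivided_parallelepiped(tria, origin, edges, subdivisions);

  // One manual repartition: the space-filling curve visits the coarse
  // cells in z-order here, so each process receives exactly one layer.
  tria.repartition();
}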

>
>
> > So my questions would be: 
> > 1. given a 2D-mesh, what is the easiest way to get a distributed 
> > 3d-extrusion with the partitioning described above?  (auto-partitioning 
> > generates balls in this mesh, not cuboids with z-orthogonal 
> > process-to-process interfaces) One suggestion for a function here would 
> > be a function to transform a shared to a distributed mesh because on a 
> > shared mesh I could set the subdomain Ids and then just call that 
> > function when I'm done 
>
> You can't do this for a parallel::distributed::Triangulation. That's 
> because the partitioning is done in p4est and p4est orders cells in the 
> depth-first tree ordering along a (space filling) curve and then just 
> subdivides it by however many processors you have. 
>
> The only control you have with p4est is how much weight each cell has in 
> this partition, but you can't say that cutting between cells A and B 
> should be considered substantially worse than cutting between cells C 
> and D -- which is what you are saying: Cutting cells into partitions is 
> totally ok if the partition boundary runs between copies of the base 2d 
> mesh, but should be prohibited if the cut is between cells within a copy. 
>
> Other partitioning algorithms allow this sort of thing. The partition 
> graphs (where each node is a cell, and an edge the connection between 
> cells). Their goal is to find partitions of roughly equal weight (where 
> the weight of a partition is the sum of its node weights) while trying 
> to minimize the cost (where the cost is the sum of the edge weights over 
> all cut edges). If you were to assign very small weights to all edges 
> between cells of different copies, and large weights to edges within a 
> copy, then you would get what you want. 
>
> It should not be terribly difficult to do this with 
> parallel::shared::Triangulation since there we use regular graph 
> partitioning algorithms. But I don't see how it is possible for 
> parallel::distributed::Triangulation. 
>
OK, that is what I feared. p::s::T would work, but I will keep looking for 
a solution with p::d::T since this application should eventually scale to 
very large meshes.
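
For reference, my understanding is that the p::s::T route would look 
roughly like this, using the partition_custom_signal setting (a sketch 
only; the mesh dimensions and the rank-equals-floor(z) rule are just my 
layer scheme):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/grid/grid_generator.h>

#include <vector>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  const unsigned int n_processes =
    Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);

  // partition_custom_signal delegates the partitioning to whatever the
  // user connects to the post_refinement signal.
  parallel::shared::Triangulation<3> tria(
    MPI_COMM_WORLD,
    Triangulation<3>::none,
    /*allow_artificial_cells=*/true,
    parallel::shared::Triangulation<3>::partition_custom_signal);

  tria.signals.post_refinement.connect(
    [&tria]()
    {
      // Rank p owns every cell whose center has z in [p, p+1).
      for (const auto &cell : tria.active_cell_iterators())
        cell->set_subdomain_id(
          static_cast<types::subdomain_id>(cell->center()[2]));
    });

  // Any mesh creation or refinement now triggers the custom partitioning.
  const std::vector<unsigned int> reps = {4, 4, 4 * n_processes};
  GridGenerator::subdivided_hyper_rectangle(
    tria, reps, Point<3>(0, 0, 0), Point<3>(1, 1, n_processes));
}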

>
> > 2. say I have a layer of faces in that mesh (in the interior) and I need 
> > nedelec elements and nodal elements on these faces (codimension 1) to 
> > evaluate their shape function and gradients in quadrature points of a 
> > 2D-quadrature on the faces. What is the best way to do this? (if it was 
> > a boundary I could give a boundary id and call 
> > GridGenerator::extract_boundary_mesh but it's not always on the 
> boundary. 
>
> I don't think it would be terribly difficult to write the equivalent of 
> that function that extracts a surface mesh based on faces having some 
> particular property (not necessarily on the boundary of the domain). 
> Take a look at the existing function and see whether you can easily 
> adapt it for your purposes. 
>
I have a candidate which I will start testing soon; I will report back if 
it works well. I only foresee major problems if the cutting surface is at 
an angle to the mesh edges. In that case, marking every edge or face 
intersected by the cutting surface would not yield a complete mesh without 
holes; i.e., cutting diagonally through a mesh composed of equal cubes 
would always require choosing one "side".
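
The core of my candidate is simply collecting faces by a geometric 
predicate instead of a boundary id, along these lines (the plane z = z_cut 
and the tolerance are placeholders):

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/tria.h>

#include <cmath>
#include <set>
#include <vector>

using namespace dealii;

// Collect all faces (interior or boundary) whose center lies on the plane
// z = z_cut; the result would then feed a custom surface extraction
// analogous to GridGenerator::extract_boundary_mesh. On a distributed
// mesh one would additionally restrict this to locally owned cells.
std::vector<Triangulation<3>::face_iterator>
collect_cut_faces(const Triangulation<3> &tria, const double z_cut)
{
  std::vector<Triangulation<3>::face_iterator> faces;
  std::set<int> seen; // interior faces are reached from both neighbors
  for (const auto &cell : tria.active_cell_iterators())
    for (unsigned int f = 0; f < GeometryInfo<3>::faces_per_cell; ++f)
      if (std::abs(cell->face(f)->center()[2] - z_cut) < 1e-12 &&
          seen.insert(cell->face(f)->index()).second)
        faces.push_back(cell->face(f));
  return faces;
}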

>
> > 3. There is a description on how to manually generate a mesh which seems 
> > easy enough in my case. How does this work for a distributed mesh? Is 
> > the only version to generate a mesh and then auto-partition or can I 
> > somehow define the partitioning in the generation phase similar to the 
> > way I could set subdomain Ids in a shared parallel mesh? 
>
> No, you can't. For both parallel::distributed::Triangulation and 
> parallel::shared::Triangulation, all processors store the entire coarse 
> mesh. There is of course a difference in how they are then partitioned 
> into locally owned, ghost, and artificial cells. p::d::Triangulation 
> doesn't give you much leverage in how this is done (other than the cell 
> weights) whereas for a p::s::Triangulation, you can basically partition 
> the entire mesh as you please. 
>
I have one more idea I will try out as a fun experiment to see if it 
works: it might be possible to circumvent the problem by writing a script 
that guesses cell weights such that the partitioning comes close to what I 
want, by locally minimizing the error. It would compute the difference 
between the domain a cell should be in and the domain it is currently 
placed in, and update the cell weights wherever that difference is not 
zero. Maybe this could give me a vector of cell weights that allows 
calling p4est's repartitioning, but that is a VERY long shot and the 
computation might be very slow (though assembly times don't really matter 
for my problem). I will report back. 
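
Roughly, the experiment I have in mind would look like this. A sketch 
only: desired_owner() is a hypothetical helper encoding my layer scheme, 
the base weight and increment are made-up knobs, and whether the loop 
converges at all is exactly the open question:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>

#include <map>

using namespace dealii;

// Hypothetical helper: the rank a cell *should* belong to (here: floor
// of the z-coordinate of its center, matching the layer scheme above).
unsigned int
desired_owner(const Triangulation<3>::active_cell_iterator &cell)
{
  return static_cast<unsigned int>(cell->center()[2]);
}

void tune_weights(parallel::distributed::Triangulation<3> &tria)
{
  // Extra weights handed to p4est whenever repartition() is called.
  std::map<CellId, unsigned int> extra_weight;
  auto conn = tria.signals.cell_weight.connect(
    [&extra_weight](const Triangulation<3>::cell_iterator &cell,
                    const Triangulation<3>::CellStatus) -> unsigned int
    {
      const auto it = extra_weight.find(cell->id());
      // Base weight of 1000 plus whatever the iteration has added so far.
      return 1000 + (it != extra_weight.end() ? it->second : 0u);
    });

  for (unsigned int iter = 0; iter < 100; ++iter)
    {
      tria.repartition();

      // Bump the weight of every misplaced cell, then try again.
      unsigned int n_misplaced = 0;
      for (const auto &cell : tria.active_cell_iterators())
        if (cell->is_locally_owned() &&
            desired_owner(cell) != cell->subdomain_id())
          {
            extra_weight[cell->id()] += 100; // made-up increment
            ++n_misplaced;
          }

      if (Utilities::MPI::sum(n_misplaced, MPI_COMM_WORLD) == 0)
        break;
    }

  conn.disconnect(); // 'extra_weight' goes out of scope below
}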

>
> Best 
>   W. 
>
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>

