Re: [deal.II] Overlapping decomposition using p4est and deal.ii

2018-04-11 Thread Wolfgang Bangerth


1. I tried out p4est_ghost_expand as Wolfgang suggested, but it gives me 
an error: for some reason, cells with cell->is_ghost() == true still have 
their user flag set after the calls to 
communicate_dof_indices_on_marked_cells() in dof_handler_policy.cc, line 3875. 
I believe the function communicate_dof_indices_on_marked_cells() at line 3492 
of the same file exchanges the DoF indices (sending the local cells that are 
ghosts somewhere else and receiving its own ghost cells from other 
processes), but the unpack function should identify all the cells marked as 
ghosts and receive the corresponding indices from the pack function, right? 
Or have I misunderstood something here?


Hm, good point. When we decide who each process needs to send the locally 
owned DoF indices to (so that they can be set on ghost cells on the receiving 
process), we determine the cells in question by asking whether one of their 
vertices is adjacent to a ghost cell -- in other words, whether a cell is at 
the boundary of the locally owned subdomain.


But if you have more than one layer of ghost cells, then this is not enough. I 
think you need to extend the logic in a way where each process sends a list of 
its ghost cells to the owning process of these cells, and then gets the 
information back for exactly those cells. That may be a little bit of work, 
but I don't think it's going to be terribly difficult to implement. Let us 
know if you need help with this.
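
In code, the exchange might look roughly like the following sketch. This is 
not what deal.II currently does; Utilities::MPI::some_to_some and 
string-encoded CellIds are just one convenient way to phrase the two phases, 
and the sketch is untested:

#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>

#include <map>
#include <string>
#include <vector>

// Sketch: fetch the DoF indices of *all* ghost cells, however many
// layers there are, directly from their owners.
template <int dim>
std::map<std::string, std::vector<dealii::types::global_dof_index>>
fetch_ghost_dof_indices(const dealii::DoFHandler<dim> &dof_handler,
                        const MPI_Comm                 comm)
{
  using dof_list = std::vector<dealii::types::global_dof_index>;

  // Phase 1: list the ghost cells we need, keyed by owning rank.
  std::map<unsigned int, std::vector<std::string>> requests;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_ghost())
      requests[cell->subdomain_id()].push_back(cell->id().to_string());

  const auto requests_for_us =
    dealii::Utilities::MPI::some_to_some(comm, requests);

  // Phase 2: answer each request with the DoF indices of the requested
  // (locally owned) cells.
  std::map<std::string,
           typename dealii::DoFHandler<dim>::active_cell_iterator> owned;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      owned[cell->id().to_string()] = cell;

  std::map<unsigned int, std::map<std::string, dof_list>> answers;
  for (const auto &request : requests_for_us)
    for (const auto &cell_id : request.second)
      {
        dof_list indices(dof_handler.get_fe().dofs_per_cell);
        owned.at(cell_id)->get_dof_indices(indices);
        answers[request.first][cell_id] = indices;
      }

  // Collect the answers addressed to us into one map.
  std::map<std::string, dof_list> ghost_dofs;
  for (const auto &answer :
       dealii::Utilities::MPI::some_to_some(comm, answers))
    ghost_dofs.insert(answer.second.begin(), answer.second.end());
  return ghost_dofs;
}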



2. In all places in step-40 (my reference) for the matrix and rhs assembly, 
you use the locally_relevant_dofs and locally_owned_dofs. As you say in the 
Glossary entry for locally_active_dofs:


" Since degrees of freedom are owned by only one processor, degrees of freedom 
on interfaces between cells owned by different processors may be owned by one 
or the other, so not all degrees of freedom on a locally owned cell are also 
locally owned degrees of freedom."


And hence you assemble your local matrices with the locally_owned_dofs. But 
for the overlapped decomposition, I need to assemble matrices including the 
"ghost cells". In my case, they wouldn't be ghost cells but locally owned 
cells that are duplicated between logically overlapping processes. I am not 
really sure how to go about doing this.


This is what I tried to explain in my previous email: on each process, you 
need a separate enumeration of all degrees of freedom that live on that 
process (either on locally owned or ghost cells). This enumeration must be 
independent of the "global" enumeration we typically use in deal.II which 
uniquely splits up all DoF indices among the processors.
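
A sketch of one way to build such a local enumeration (assuming fe and 
dof_handler as in step-40, and assuming the DoF indices on ghost cells have 
already been communicated):

// Map every DoF that lives on a locally owned or ghost cell to a
// consecutive process-local index, remembering the global-to-local map.
std::map<dealii::types::global_dof_index, unsigned int> global_to_local;
std::vector<dealii::types::global_dof_index> cell_dofs(fe.dofs_per_cell);
unsigned int n_local_dofs = 0;

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost())
    {
      cell->get_dof_indices(cell_dofs);
      for (const auto global_index : cell_dofs)
        if (global_to_local.find(global_index) == global_to_local.end())
          global_to_local[global_index] = n_local_dofs++;
    }
// n_local_dofs is now the size of the per-process (overlapping) system.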


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Re: Constraints on support points on Q2 elements and output

2018-04-11 Thread Jie Cheng
Hi Daniel

Thank you for your help!

> The code below suggests that you want to constrain all degrees of freedom. 
> Is this really what you intend? If so, it might be better to just 
> interpolate the values using VectorTools::interpolate.
>

Yes, I want to constrain the velocity of the fluid in a certain region. The 
problem arises in my fluid-structure interaction code using the immersed 
finite element method, where I want to interpolate the solid velocity, as 
Dirichlet BCs, onto the part of the fluid grid that overlaps with the solid. 
I find the to-be-constrained region on the fly as the solid moves. Since the 
dofs to be constrained are not on the actual boundary and are constantly 
changing, I can't use VectorTools::interpolate. 
 

> Don't rely on any particular order and use the whole finite element 
> instead of just the first base element in your code! To find out if a 
> degree of freedom corresponds to the velocity FiniteElement you can then 
> call system_to_component_index.
> The problem with the approach above is that there (probably?) is no 
> DoFHandler that is responsible for the velocity degrees of freedom alone. 
> Hence, a cell iterator (based on a DoFHandler) can only give you the 
> global degrees of freedom for all the components at the same time. By 
> using system_to_component_index you can then decide which to use.
>

Following the idea you described, I came up with the following code, but 
something went wrong...
//
// fe is defined as FESystem<dim>(FE_Q<dim>(degree+1), dim, FE_Q<dim>(degree), 1)
const std::vector<Point<dim>> &unit_points = fe.get_unit_support_points();
// the size should be equal to dofs_per_cell
Quadrature<dim> dummy_q(unit_points);
MappingQGeneric<dim> mapping(1);
FEValues<dim> dummy_fe_values(mapping, fe, dummy_q, update_quadrature_points);
// need this to find the global dof index:
std::vector<types::global_dof_index> dof_indices(fe.dofs_per_cell);

for (auto cell = dof_handler.begin_active(); cell != dof_handler.end(); ++cell)
{
  dummy_fe_values.reinit(cell);
  cell->get_dof_indices(dof_indices); // query all the global indices associated with this cell
  for (unsigned int i = 0; i < unit_points.size(); ++i)
  {
    // the component number and the index within that component of dof i:
    const auto index = fe.system_to_component_index(i);
    if (index.first < dim) // Is this the correct method to tell if dof i is
                           // a velocity dof rather than a pressure dof??
    {
      const Point<dim> mapped_point =
        mapping.transform_unit_to_real_cell(cell, unit_points[i]); // the real coordinates
      const Vector<double> value = f(mapped_point); // calculate (interpolate) the
                                                    // constrained value, which is a vector

      // set the inhomogeneity constraints on the velocity
      const auto line = dof_indices[i]; // the global dof number corresponding to dof i
      constraints.add_line(line);
      constraints.set_inhomogeneity(line, value[index.first]); // index.first should pick
                                 // the correct velocity component to constrain
    }
  }
}
///
I think the logic should be good, but I might have misunderstood the return 
value of system_to_component_index. Please let me know if you can see the error.
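
For reference, this is how I read the return value (assuming the FESystem 
above); if this reading is wrong, that is probably where my error is:

for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
{
  // (component, index within component); valid here because all base
  // elements of this FESystem (FE_Q) are primitive:
  const std::pair<unsigned int, unsigned int> index =
    fe.system_to_component_index(i);
  const bool is_velocity_dof = (index.first < dim);  // components 0..dim-1
  const bool is_pressure_dof = (index.first == dim); // component dim
}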
 

> No, as far as I know we don't support writing the results as 2nd order 
> elements. The approach we take instead is to split each cell into several 
> smaller cells (most of the time degree^dim of them) using 
> DataOut::build_patches(degree) to obtain a linear interpolation of the 
> correct order.
>

I can't believe I did not know DataOut::build_patches can do this trick! 
This is exactly what I need!
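
In other words, something like this sketch (with dof_handler and solution as 
in my code):

#include <deal.II/numerics/data_out.h>
#include <fstream>

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "solution");
// subdivide each cell into degree^dim patches so that the (piecewise
// linear) output format resolves the higher-order solution:
data_out.build_patches(fe.degree);
std::ofstream output("solution.vtu");
data_out.write_vtu(output);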

Thank you
Jie



[deal.II] Re: multiply constrained dofs (hanging nodes+periodic) fails a simple test case

2018-04-11 Thread Sambit Das
I understand my mistake now: there are extra hanging node constraints in 
the constraint matrix with (hanging node constraints + PBC) compared to the 
constraint matrix with only hanging node constraints.
That is why the minimal example fails.

Sambit

On Wednesday, April 11, 2018 at 1:00:25 PM UTC-5, Sambit Das wrote:
>
>
>> true, but I don't see why you would have the same norms if you 
>> distribute with constraints from hanging nodes only or constraints from 
>> hanging nodes + PBC.
>> I think we can agree that the two ConstraintMatrix objects should be 
>> different as in the case of PBC you additionally need to make sure that FE 
>> space on the refined boundary matches that on the opposite, non-refined 
>> side. 
>>
>
> Yes I agree the ConstraintMatrix objects are different but the 
> coefficients (a_{ij}) of the hanging node constraint equations 
>
> x_i = \sum_j a_{ij} x_j
>
>  would be the same in both cases, only x_{j}'s would be different in both 
> cases. Now x_{j}'s are nodes without any constraints which are set to the 
> correct values explicitly in both cases:
>
> if(!constraints.is_constrained(globalDofIndex))
>vec1[globalDofIndex]=nodalCoor.norm();
>
> if(!onlyHangingNodeConstraints.is_constrained(globalDofIndex))
>vec2[globalDofIndex]=nodalCoor.norm();
>
> So the hanging nodes in both cases should have the same value after 
> calling distribute. 
>  
>  
>
>> If you suspect that there is a bug in the constraints, you could check 
>> this by simply choosing some more-or-less random vector, distributing, 
>> and doing a plot-over-line in ParaView / VisIt. 
>> A more cumbersome comparison would be to evaluate a random field at 
>> periodically matching points.
>> You can use the FEFieldFunction class and then choose L/2-\delta and 
>> -L/2+\delta with \delta = 1e-8 or so for the X coordinate, and whatever 
>> you want for Y/Z. This should give you the same value at any two 
>> periodically matching points for a random input vector after the 
>> constraints are distributed.
>>
>>  I will try doing this. 
>
> Best,
> Sambit
>



[deal.II] Re: Problem with parallelization when using hyper_cube_slit

2018-04-11 Thread Roberto Porcù
Dear all,

  in the reply to my first post I forgot to attach the code.
I attach it to this reply.

Best

Roberto P.


On Tuesday, February 20, 2018 at 7:27:37 PM UTC-5, Roberto Porcù wrote:
>
> Dear all,
>
> I'm solving a linear elasticity problem on a cracked domain created by 
> means
> of the  GridGenerator::hyper_cube_slit  function.
> When I solve it in serial everything works fine.
> When I solve it in parallel I get a problem on a node which is on one
> of the slit boundaries as it is possible to see from the attached pictures.
> I don't understand why that is happening because I wrote other parallel 
> codes in deal.II
> and everything was always working fine. I've checked my code many times, 
> also comparing it to the tutorials. 
> I am supposing there is something related to the slit.
>
> Thanks in advance,
> Roberto
>

// (The header names of the original include directives were lost when the
// archive stripped everything between angle brackets. Judging from the code
// below, at least the following deal.II and standard headers are needed;
// the original list was considerably longer.)
#include <deal.II/base/exceptions.h>
#include <deal.II/base/function.h>
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>

#include <cmath>

namespace CrackedDomain { namespace aux {

  class AnalyticalSolution : public dealii::Function<2>
  {
public:
  AnalyticalSolution()
: dealii::Function<2>(1)
  {}

  virtual double
  value(const dealii::Point<2> & p, const unsigned int component=0) const;

  virtual dealii::Tensor<1,2>
  gradient(const dealii::Point<2> & p, const unsigned int component=0) const;

  virtual double
  laplacian(const dealii::Point<2> & p, const unsigned int component=0) const;

  virtual double
  forcing(const dealii::Point<2> & p, const unsigned int component=0) const;
  };

  class UpperCrack : public dealii::Function<2>
  {
public:
  UpperCrack()
: dealii::Function<2>(1)
  {}

  virtual double
  value(const dealii::Point<2> & p, const unsigned int component=0) const;
  };

  class LowerCrack : public dealii::Function<2>
  {
public:
  LowerCrack()
: dealii::Function<2>(1)
  {}

  virtual double
  value(const dealii::Point<2> & p, const unsigned int component=0) const;
  };

  class UpperEdge : public dealii::Function<2>
  {
public:
  UpperEdge()
: dealii::Function<2>(1)
  {}

  virtual double
  value(const dealii::Point<2> & p, const unsigned int component=0) const;
  };

  class LowerEdge : public dealii::Function<2>
  {
public:
  LowerEdge()
: dealii::Function<2>(1)
  {}

  virtual double
  value(const dealii::Point<2> & p, const unsigned int component=0) const;
  };

  double
  AnalyticalSolution::value(const dealii::Point<2> & p,
const unsigned int) const
  {
const double x = p[0];
const double y = p[1];

if((x > 0) or (std::abs(x) < 1.e-14))
  return 0;

if(y > 0)
  return x*x*std::exp(-1*x);

if(y < 0)
  return -1*x*x*std::exp(-1*x);

AssertThrow(false, dealii::ExcMessage("Error"));

return 0;
  }


  dealii::Tensor<1,2>
  AnalyticalSolution::gradient(const dealii::Point<2> & p,
   const unsigned int) const
  {
dealii::Tensor<1,2> result;
result = 0;

const double x = p[0];
const double y = p[1];

if((x > 0) or (std::abs(x) < 1.e-14)) {
  return result;
}

if(y > 0) {
  result[0] = 2*x*std::exp(-1*x) - x*x*std::exp(-1*x);
  return result;
}

if(y < 0) {
  result[0] = -2*x*std::exp(-1*x) + x*x*std::exp(-1*x);
  return result;
}

AssertThrow(false, dealii::ExcMessage("Error"));

return result;
  }


  double
  AnalyticalSolution::laplacian(const dealii::Point<2> & p,
const unsigned int) const
  {
const double x = p[0];
const double y = p[1];

if((x > 0) or (std::abs(x) < 1.e-14))
  return 0;

if(y > 0)
  return 2*std::exp(-1*x) - 4*x*std::exp(-1*x) + x*x*std::exp(-1*x);

if(y < 0)
  return -2*std::exp(-1*x) + 4*x*std::exp(-1*x) - x*x*std::exp(-1*x);

AssertThrow(false, dealii::ExcMessage("Error"));

return 0;
  }


  double
  AnalyticalSolution::forcing(const dealii::Point<2> & p,
  const unsigned int component) const
  {
return -1 * laplacian(p, component);
  }


  double
  UpperCrack::value(const dealii::Point<2> & p,
const unsigned int) const
  {
const double x = p[0];

return x*x*std::exp(-1*x);
  }


  double
  // [the remainder of the attachment is truncated in the archive]

[deal.II] Re: Problem with parallelization when using hyper_cube_slit

2018-04-11 Thread Roberto Porcù
Dear all,

  since I am still experiencing the same issue, I tried to solve a very 
simple and basic problem on a domain created with 
GridGenerator::hyper_cube_slit and then rotated by -90 degrees.
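
The grid setup is essentially the following sketch (the extents -1..1 and 
the serial Triangulation are just for illustration; in the attached code the 
triangulation is distributed as in step-40):

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

dealii::Triangulation<2> triangulation;
dealii::GridGenerator::hyper_cube_slit(triangulation, -1., 1.);
// rotate the whole mesh by -90 degrees:
dealii::GridTools::rotate(-dealii::numbers::PI / 2, triangulation);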

As you can see in the attached figures, I am experiencing the same issue: 
when I run the program in parallel there is something wrong on the fracture 
of the domain. In particular, there is a problem at the intersection of the 
different subdomains on the fracture, as you can see from the picture of the 
derivatives (I computed those derivatives in Paraview using the "Compute 
derivatives" filter).

I attach the code of my simulation. It is a very simple code with the same 
structure as tutorial step-40; I made it like this in order to make it more 
readable.

Looking forward to hearing from you.

Roberto


On Tuesday, February 20, 2018 at 7:27:37 PM UTC-5, Roberto Porcù wrote:
>
> Dear all,
>
> I'm solving a linear elasticity problem on a cracked domain created by 
> means
> of the  GridGenerator::hyper_cube_slit  function.
> When I solve it in serial everything works fine.
> When I solve it in parallel I get a problem on a node which is on one
> of the slit boundaries as it is possible to see from the attached pictures.
> I don't understand why that is happening because I wrote other parallel 
> codes in deal.II
> and everything was always working fine. I've checked my code many times, 
> also comparing it to the tutorials. 
> I am supposing there is something related to the slit.
>
> Thanks in advance,
> Roberto
>



[deal.II] Re: multiply constrained dofs (hanging nodes+periodic) fails a simple test case

2018-04-11 Thread Denis Davydov


On Wednesday, April 11, 2018 at 7:00:18 PM UTC+2, Sambit Das wrote:
>
> Hi Denis,
>
>
>> I don't think that's the case. The domain is indeed periodic, but this is 
>> completely detached from location of support/nodal points. 
>> Same applies to geometry, you will have different coordinates of vertices 
>> across the PBC so
>>
>>
> I agree, the location of nodal points is detached from the periodicity of 
> the domain, but in this case the origin is at the center of the hypercube. 
> This artificially enforces that the nodal_coordinate.norm() is periodic. 
>

true, but I don't see why you would have the same norms if you distribute 
with constraints from hanging nodes only or constraints from hanging 
nodes + PBC.
I think we can agree that the two ConstraintMatrix objects should be 
different as in the case of PBC you additionally need to make sure that FE 
space on the refined boundary matches that on the opposite, non-refined 
side. 

If you suspect that there is a bug in the constraints, you could check this 
by simply choosing some more-or-less random vector, distributing, and doing 
a plot-over-line in ParaView / VisIt. 
A more cumbersome comparison would be to evaluate a random field at 
periodically matching points.
You can use the FEFieldFunction class and then choose L/2-\delta and 
-L/2+\delta with \delta = 1e-8 or so for the X coordinate, and whatever you 
want for Y/Z. This should give you the same value at any two periodically 
matching points for a random input vector after the constraints are 
distributed.
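
Something along the lines of this sketch (here vec and dof_handler are 
yours, the cube (-L/2, L/2)^3 with L = 1 is an assumption, and 
constraints.distribute(vec) must already have been called):

#include <deal.II/base/exceptions.h>
#include <deal.II/numerics/fe_field_function.h>

dealii::Functions::FEFieldFunction<3> field(dof_handler, vec);

const double L = 1.0, delta = 1e-8; // assumed domain size
const dealii::Point<3> left (-L / 2 + delta, 0.123, 0.321);
const dealii::Point<3> right( L / 2 - delta, 0.123, 0.321);

// periodically matching points must carry (almost) the same value:
AssertThrow(std::abs(field.value(left) - field.value(right)) < 1e-6,
            dealii::ExcMessage("periodically matching points differ"));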
 
Denis.



[deal.II] Re: multiply constrained dofs (hanging nodes+periodic) fails a simple test case

2018-04-11 Thread Sambit Das
Hi Denis,


> I don't think that's the case. The domain is indeed periodic, but this is 
> completely detached from location of support/nodal points. 
> Same applies to geometry, you will have different coordinates of vertices 
> across the PBC so
>
>
I agree, the location of nodal points is detached from the periodicity of 
the domain, but in this case the origin is at the center of the hypercube. 
This artificially enforces that the nodal_coordinate.norm() is periodic. 

Best,
Sambit



[deal.II] Re: Use of 'VectorTools::interpolate_boundary_values' using numerical solution.

2018-04-11 Thread Jaekwang Kim
Thanks for giving me ideas for this!
I will take a look at the approaches you just mentioned; my reading of the 
first one is sketched below. 
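
A minimal sketch of the first suggestion, assuming the step-22-style fe, 
dof_handler, mapping and velocities extractor (with using namespace dealii); 
boundary id 1 and my own tangential_velocity(p) are placeholders:

#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/constraint_matrix.h>

ConstraintMatrix constraints;

// support point of every dof, so that my own function can be evaluated there:
std::map<types::global_dof_index, Point<dim>> support_points;
DoFTools::map_dofs_to_support_points(mapping, dof_handler, support_points);

// all velocity dofs on the target boundary:
const std::set<types::boundary_id> boundary_ids = {1};
IndexSet boundary_dofs(dof_handler.n_dofs());
DoFTools::extract_boundary_dofs(dof_handler,
                                fe.component_mask(velocities),
                                boundary_dofs,
                                boundary_ids);

for (unsigned int i = 0; i < boundary_dofs.n_elements(); ++i)
  {
    const types::global_dof_index dof = boundary_dofs.nth_index_in_set(i);
    constraints.add_line(dof);
    constraints.set_inhomogeneity(dof,
                                  tangential_velocity(support_points[dof]));
  }
constraints.close();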

Jaekwang

On Wednesday, April 11, 2018 at 12:24:26 AM UTC-5, Jie Cheng wrote:
>
> Hi Jaekwang
>
> One way I can think of is to construct your ConstraintMatrix for Part B 
> manually. Iterate over your triangulation, find all the dofs on the target 
> boundaries, then use ConstraintMatrix::add_line and 
> ConstraintMatrix::set_inhomogeneity to set the BC.
>
> Another method might be putting your calc_tangential_velocity function 
> inside New_VelBndValues. I assume this function needs to know the solution, 
> triangulation etc. Then you would have to create some references or dealii 
> SmartPointers in this class as well.
>
> I hope this helps.
>
> Jie
>
> On Monday, April 9, 2018 at 6:32:28 PM UTC-4, Jaekwang Kim wrote:
>>
>> Hi, all 
>>
>>
>> I am always thankful to all of you here in this community. 
>>
>>
>> I have a question on the use of the VectorTools::interpolate_boundary_values 
>> function.
>>
>>
>> Typically we use this function when we set up the system, in the form 
>>
>>
>>
>> VectorTools::interpolate_boundary_values (dof_handler,
>>                                           1,
>>                                           Original_VelBndValues(),
>>                                           constraints,
>>                                           fe.component_mask(velocities));
>>
>>
>>
>> The highlighted functions are usually declared outside of the main class. 
>>
>>
>> One problem I am interested in now is to (Part A) solve a Stokes flow 
>> with a no-slip boundary condition on every boundary one time (as in 
>> step-22), 
>>
>> and then to (Part B) solve the same system again with a slightly 
>> different boundary condition that comes from the problem just solved.
>>
>>
>> If I finish Part A, I have a numerical solution of (u,p) linked with the 
>> dof handler. 
>>
>>
>> In (Part B), I post-process the solution of Part A: e.g., I calculate the 
>> stress and the tangential velocity of a boundary using the Navier slip 
>> condition. 
>>
>>
>> So, I need to use 
>>
>>
>>
>> VectorTools::interpolate_boundary_values (dof_handler,
>>                                           1,
>>                                           New_VelBndValues(),
>>                                           constraints,
>>                                           fe.component_mask(velocities));
>>
>>
>> such that 
>>
>>  template <int dim>
>>  double
>>  New_VelBndValues<dim>::value (const Point<dim> &p,
>>                                const unsigned int component) const
>>  {
>>    Assert (component < this->n_components,
>>            ExcIndexRange (component, 0, this->n_components));
>>
>>    if (component == 0)
>>      return calc_tangential_velocity ();   // (Error Here!!)
>>    else
>>      return 0;
>>  }
>>
>>
>>
>>
>> where 'calc_tangential_velocity ();' is declared in the main class, since 
>> I need information from the dof_handler and the solution vectors.
>>
>>
>>   template <int dim>
>>   class StokesProblem
>>   {
>>   public:
>>     StokesProblem (const unsigned int degree);
>>     void run ();
>>   private:
>>
>>     ……
>>
>>     double calc_tangential_velocity ();
>>
>>     BlockVector<double> solution;
>>   };
>> 
>>
>>   
>>
>> Obviously, this method does not work: I cannot compile the code, and it 
>> shows 'use of undeclared identifier 'calc_tangential_velocity'' on the 
>> line marked (Error Here!!) above.
>>
>> I would appreciate anyone who can comment on it. It seems that the 
>> compiler has to know what that is beforehand. 
>>
>>
>> Will this be possible in a certain way? How can I get around this 
>> problem? 
>>
>>
>> Thanks, 
>>
>>
>> Jaekwang 
>>
>>
>>
>>



Re: [deal.II] Overlapping decomposition using p4est and deal.ii

2018-04-11 Thread Pratik
Hello,

I had a few more questions and would be grateful for some clarification:

1. I tried out p4est_ghost_expand as Wolfgang suggested, but it gives me an 
error: for some reason, cells with cell->is_ghost() == true still have their 
user flag set after the calls to communicate_dof_indices_on_marked_cells() 
in dof_handler_policy.cc, line 3875. I believe the function 
communicate_dof_indices_on_marked_cells() at line 3492 of the same file 
exchanges the dof indices (sending the local cells that are ghosts somewhere 
else and receiving its own ghost cells from other processes), but the unpack 
function should identify all the cells marked as ghosts and receive the 
corresponding indices from the pack function, right? Or have I misunderstood 
something here?
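
(What I tried is essentially the following, right after the ghost layer is 
built in source/distributed/tria.cc; the 2D p4est names are shown, the 3D 
analogues are p8est_*, and the variable names are my reading of that file:)

p4est_ghost_t *ghost = p4est_ghost_new (parallel_forest, P4EST_CONNECT_CORNER);
p4est_ghost_expand (parallel_forest, ghost); // one more ghost layer per call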

2. In all places in step-40 (my reference) for the matrix and rhs assembly, 
you use the locally_relevant_dofs and locally_owned_dofs. As you say in the 
Glossary entry for locally_active_dofs: 

" Since degrees of freedom are owned by only one processor, degrees of 
freedom on interfaces between cells owned by different processors may be 
owned by one or the other, so not all degrees of freedom on a locally owned 
cell are also locally owned degrees of freedom." 

And hence you assemble your local matrices with the locally_owned_dofs. But 
for the overlapped decomposition, I need to assemble matrices including the 
"ghost cells". In my case, they wouldn't be ghost cells but locally owned 
cells that are duplicated between logically overlapping processes. I am not 
really sure how to go about doing this.

Thank you,

Best Regards,
Pratik.

On Monday, April 2, 2018 at 10:01:54 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> > Thank you for your reply. I understand that DD methods are not quite 
> > that popular now. But I wanted to explore the avenue keeping in mind the 
> > issue of scalability as well. For scalability, as you mention 
> > communication is probably the bottleneck. 
>
> No, communication is not the issue. It's not a computational problem but 
> a mathematical one: in each DD iteration, you only transport information 
> from one subdomain to the next subdomain. If you split the domain into 
> too many subdomains, you will need a lot of iterations to transport 
> information across the entire domain. In other words, the reason why DD 
> methods are not good for large processor counts is that the number of 
> outer iterations grows with the number of processors you have -- it just 
> doesn't scale. 
>
> A different perspective is that DD methods lack some kind of coarse grid 
> correction in which you exchange information globally. You could 
> presumably use a non-overlapping mortar method and get good parallel 
> scalability if you had a good preconditioner for the Schur complement 
> posed on the skeleton (i.e., the DoFs that are located on the interfaces 
> between subdomains). Of course, constructing such preconditioners has 
> also proven to be difficult. 
>
>
>
> > It would be very helpful if you could provide me with some pointers for 
> > where and how to change the p4est functions in deal.II. 
>
> All of the code that interfaces with p4est is in 
> source/distributed/tria.cc. We build the ghost layer in line 2532 where 
> we call p4est_ghost_new (through its dimension independent alias). That 
> function is defined here: 
>
>
> http://p4est.github.io/api/p4est__ghost_8h.html#a34a0bfb7169437f6fc2382a67c47e89d
>  
>
> You'll probably have to call p4est_ghost_expand() one or more times: 
>
>
> http://p4est.github.io/api/p4est__ghost_8h.html#ab9750fa62cbc17285a0eb5cfe13a1e28
>  
>
> I would just stick a call to this function right after the one to 
> ghost_new as a trial and see what happens in a small test program that 
> on each processor just outputs information about what cells are locally 
> owned and which are ghosts. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
