Re: [deal.II] extrude triangulation with n_slices = 1

2023-04-06 Thread Greg Wang


Hi Wolfgang,

Thanks a lot for clarifying!

I decided to modify the code to adopt a p::shared::T model with METIS and
realized that anisotropic refinement doesn't seem to work for this case either.
The error comes from AffineConstraints rejecting any refinement case that is
not isotropic, and the same error can be reproduced with my serial code as well.
If I change the RefinementCase from, say, cut_yz to cut_xyz, hence going
back to isotropic refinement, both codes run without a problem.
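
For concreteness, here is a minimal sketch of what I mean by switching the
refinement case (the cell-selection criterion is only a placeholder):

  // Minimal sketch; which cells get flagged is hypothetical.
  for (const auto &cell : triangulation.active_cell_iterators())
    if (/* cell lies in the space-time region to be refined */ true)
      {
        cell->set_refine_flag(RefinementCase<3>::cut_yz);     // anisotropic: triggers the error below
        // cell->set_refine_flag(RefinementCase<3>::cut_xyz); // isotropic: works
      }
  triangulation.execute_coarsening_and_refinement();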

My code implements a space-time IP-HDG method. It would be nice to have
anisotropic refinement in order to separate local time stepping from spatial
refinement within a space-time slab. Here is the part of the code where
AffineConstraints gets involved; it’s pretty much copied verbatim from
step-51:
constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
std::map<types::boundary_id, const Function<dim> *> boundary_functions;
Solution solution_function(nu);
boundary_functions[0] = &solution_function;
VectorTools::interpolate_boundary_values(dof_handler,
                                         boundary_functions,
                                         constraints);
constraints.close();

And here is the runtime error message:

An error occurred in line <1102> of file
 
in function
  void dealii::DoFTools::internal::make_hp_hanging_node_constraints(const
  dealii::DoFHandler<dim, spacedim>&, dealii::AffineConstraints<number>&)
  [with int dim = 3; int spacedim = 3; number = double]
The violated condition was:
  cell->face(face)->refinement_case() ==
  RefinementCase<dim - 1>::isotropic_refinement
Additional information:
You are trying to use functionality in deal.II that is currently not
implemented. In many cases, this indicates that there simply didn’t
appear much of a need for it, or that the author of the original code
did not have the time to implement a particular case. If you hit this
exception, it is therefore worth the time to look into the code to
find out whether you may be able to implement the missing
functionality. If you do, please consider providing a patch to the
deal.II development sources (see the deal.II website on how to
contribute).

I’m wondering if it would be ill-advised to simply remove the assertion and
re-compile the library. If so, I’m thinking about going a bit deeper to see if
I can come up with a patch. In that case, I would really appreciate some
insight into any incompatibilities AffineConstraints may have with hanging
nodes created by anisotropic refinement.

Thanks,
Greg
​
On Tuesday, April 4, 2023 at 8:18:58 PM UTC Wolfgang Bangerth wrote:

>
> Greg:
>
> > We want to construct a 3D triangulation by extruding a 2D triangulation 
> > (one that potentially contains hanging nodes) and we only want one 
> > slice/layer of mesh on the extrusion direction.
> > 
> > Looking around in the GridGenerator namespace led me to the 
> > extrude_triangulation function. It’s doing everything we desire except 
> > that (a) the number of slices/layers in the extrusion direction has to be 
> > at least two
>
> I think this is poorly described in the documentation. The number of 
> slices = the number of cell layers plus one. Two slices => one layer of 
> cells. I've fixed this here:
> https://github.com/dealii/dealii/pull/15028
>
> > and (b) the 2D mesh must be a coarse mesh. I’m wondering if 
> > there are tips on getting around these two restrictions.
>
> This you can't get around.
>
>
> > Originally I was using GridGenerator::hyper_rectangle teamed with 
> > anisotropic refinement cut_xy. But this ceases to work after the code 
> > was re-implemented with parallel::distributed::Triangulation because 
> > “this class does not support anisotropic refinement, because it relies 
> > on the p4est library that does not support this” [1].
>
> Yes, and this is true regardless of how you generate the mesh. You can't 
> create an anisotropically refined mesh for p::d::T. This also means that 
> you cannot extrude a refined 2d mesh into 3d that has only one layer of 
> cells. You can extrude a coarse mesh, though, and then refine the 
> resulting mesh -- it will then have more than one layer in z-direction 
> in some locations, however.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
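
A rough sketch of the coarse-mesh-then-refine workflow described above
(assuming a parallel::distributed::Triangulation<3> named triangulation_3d,
a coarse 2d triangulation named triangulation_2d, and an illustrative
hyper_cube starting mesh):

  Triangulation<2>                         triangulation_2d;
  parallel::distributed::Triangulation<3>  triangulation_3d(mpi_communicator);

  GridGenerator::hyper_cube(triangulation_2d);            // coarse 2d mesh, unrefined
  GridGenerator::extrude_triangulation(triangulation_2d,
                                       /*n_slices=*/2,    // 2 slices = 1 layer of cells
                                       /*height=*/1.0,
                                       triangulation_3d);

  triangulation_3d.refine_global(2);  // isotropic, so extra layers appear in z as well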

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/8470ea57-c826-45fa-8784-3e8bc4b4a581n%40googlegroups.com.


Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wolfgang Bangerth

On 4/6/23 10:18, Wasim Niyaz Munshi ce21d400 wrote:

How do I get the no. of cells owned by the processor?


Triangulation::n_locally_owned_active_cells().
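
For example, a minimal sketch (assuming 8 Gauss points per cell and plain
per-process storage in a Vector<double> rather than an LA::MPI::Vector, since
each entry is only ever written by the process that owns the cell):

  Vector<double> H_vector(8 * triangulation.n_locally_owned_active_cells());

  unsigned int local_cell_index = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        for (unsigned int q = 0; q < 8; ++q)
          H_vector[8 * local_cell_index + q] = 0.;  // value at Gauss point q of this cell
        ++local_cell_index;
      }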

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/12bfea7e-6b92-9af1-d6fa-6d4db1e99f10%40colostate.edu.


Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
I appreciate the clarification. I thought that global indexing was no 
longer present as the solution vector is distributed.
I have one more doubt. I want to create a vector (H_vector) that stores 
some value for each Gauss point in the domain.
For a serial problem, I was doing something like this:

H_vector = Vector(8 * triangulation.n_active_cells());

(8 because the problem is in 3d, so I have 8 Gauss points per cell.)
Now, for an MPI code, this H_vector would also be an LA::MPI::Vector, and 
its size should be 8 * the number of cells owned by the processor.
How do I get the number of cells owned by the processor?

Thanks and regards
Wasim

On Thursday, April 6, 2023 at 9:24:14 PM UTC+5:30 Wolfgang Bangerth wrote:

> On 4/6/23 06:02, Wasim Niyaz Munshi ce21d400 wrote:
> > 
> > I don't have a solution_vector for a parallel code, but a 
> > locally_relevant_solution. I want to know that, given this 
> > locally_relevant_solution and the cell, how do I get the element_sol?
> > The global_dof will not be helpful here, as the solution_vector is 
> > distributed across a number of processors.
>
> Daniel's question is correct, but to this specific point: A distributed 
> vector (and its locally relevant incarnation) is still a global vector, 
> indexed by global degree of freedom numbers, and so the code remains 
> correct.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/afbbd383-46a9-40cc-9a02-d2b4347898b3n%40googlegroups.com.


Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wolfgang Bangerth

On 4/6/23 10:06, Wasim Niyaz Munshi ce21d400 wrote:

Yes, I also had the same feeling. But when I look at the plot in the tutorial 
of step-40 for 52M DoFs, I see that they have solved the problem using just 32 
processors as well. Can you kindly let me know how much memory is available when 
you run the problem on 32 processors? I get the memory error even when I 
use 80 processors (250 GB memory).


Wasim: Why don't you try a problem of intermediate size?
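
For instance (a minimal sketch, assuming the same 2d unit-square setup;
2500 x 2500 cells is roughly a quarter of the original cell count):

  std::vector<unsigned int> repetitions = {2500, 2500};
  GridGenerator::subdivided_hyper_rectangle(triangulation,
                                            repetitions,
                                            Point<2>(0., 0.),
                                            Point<2>(1., 1.));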

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/


--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/fbc32dad-3dae-7b4f-0710-36b461de105f%40colostate.edu.


Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
Yes, I also had the same feeling. But when I look at the plot in the
tutorial of step-40 for 52M DoFs, I see that they have solved the problem
using just 32 processors as well. Can you kindly let me know how much memory
is available when you run the problem on 32 processors? I get the
memory error even when I use 80 processors (250 GB memory).

Thanks and regards

Wasim Niyaz
Research scholar
CE Dept.
IITM

On Thu, 6 Apr, 2023, 9:21 pm Wolfgang Bangerth wrote:

> On 4/6/23 01:31, Wasim Niyaz Munshi ce21d400 wrote:
> > I tried to run step-40 with 52M DOFs on 32 processors. I am using
> > GridGenerator::subdivided_hyper_rectangle to create a mesh with
> > 5000*5000 elements. I have a single cycle in my simulation. However, I
> > am running into some memory issues.
> > I am getting the following error:
> >
> > Running with PETSc on 32 MPI rank(s)...
> > Cycle 0:
> > --
> > mpirun noticed that process rank 5 with PID 214402 on node tattva
> > exited on signal 9 (Killed).
> >
> > I tried with 40 processors (125 GB RAM) but I am getting the same error.
>
> I'm pretty sure you run out of memory. You need a smaller problem, a
> larger machine, or both.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAM8ps5A9%3DPumDHxQB6g%3D6DHbOMY7KoXay3fhrjCK51At%3DunYhw%40mail.gmail.com.


Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Wolfgang Bangerth

On 4/6/23 06:02, Wasim Niyaz Munshi ce21d400 wrote:


I don't have a solution_vector for a parallel code, but a 
locally_relevant_solution. I want to know that, given this 
locally_relevant_solution and the cell, how do I get the element_sol?
The global_dof will not be helpful here, as the solution_vector is 
distributed across a number of processors.


Daniel's question is correct, but to this specific point: A distributed 
vector (and its locally relevant incarnation) is still a global vector, 
indexed by global degree of freedom numbers, and so the code remains 
correct.
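
A minimal sketch of the extraction loop in the parallel case (assuming the
ghosted vector is called locally_relevant_solution and the loop is restricted
to locally owned cells):

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        unsigned int i = 0;
        for (const auto vertex : cell->vertex_indices())
          {
            const types::global_dof_index a = cell->vertex_dof_index(vertex, 0);
            element_sol[i] = locally_relevant_solution[a];  // global index, as in serial
            ++i;
          }
      }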


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/23dfeb3b-001e-7cf0-e56e-fdbeb52edd78%40colostate.edu.


Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wolfgang Bangerth

On 4/6/23 01:31, Wasim Niyaz Munshi ce21d400 wrote:

I tried to run step-40 with 52M DOFs on 32 processors. I am using 
GridGenerator::subdivided_hyper_rectangle to create a mesh with 
5000*5000 elements. I have a single cycle in my simulation. However, I 
am running into some memory issues.
I am getting the following error:

Running with PETSc on 32 MPI rank(s)...
Cycle 0:
--
mpirun noticed that process rank 5 with PID 214402 on node tattva 
exited on signal 9 (Killed).

I tried with 40 processors (125 GB RAM) but I am getting the same error.


I'm pretty sure you run out of memory. You need a smaller problem, a 
larger machine, or both.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/08b5e423-617a-3df5-9075-a609c819ebba%40colostate.edu.


Re: [deal.II] Extracting element solution in step-40

2023-04-06 Thread Daniel Arndt
Wasim,

The answer depends very much on what you actually want to do with that
solution vector.
Do you want a representation of the solution (assuming you are using Q1
nodal elements?) on a single process / on all processes, or are you just
interested in the partial solution on every process separately?
What you are doing looks quite similar to what DataOut would give you for
visualizing solutions.

Best,
Daniel

On Thu, Apr 6, 2023 at 8:02 AM Wasim Niyaz Munshi ce21d400 <
ce21d...@smail.iitm.ac.in> wrote:

> Hello everyone.
> I want to extract the element solution vector from the global solution
> once the problem is solved in step-40. For a serial code, I would do
> something like this:
>
> int i = 0;
> for (const auto vertex : cell->vertex_indices())
>   {
>     int a = cell->vertex_dof_index(vertex, 0);
>     element_sol[i] = solution_vector[a];
>     i = i + 1;
>   }
>
> I don't have a solution_vector for a parallel code, but a
> locally_relevant_solution. I want to know that, given this
> locally_relevant_solution and the cell, how do I get the element_sol?
> The global_dof will not be helpful here, as the solution_vector is
> distributed across a number of processors.
>
> Thanks and regards
> Wasim
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAOYDWb%2BCgOgk6%2BWr%2BMZBXaRwtK9OQgDFs-JkcuGeRCtRWb8nAg%40mail.gmail.com.


Re: [deal.II] Understanding MeshWorker::mesh_loop order with adaptive refinement

2023-04-06 Thread Corbin Foucart
I also think there may be a small typo in the documentation:

"If the flag AssembleFlags::assemble_own_cells is passed, then the default
behavior is to first loop over faces and do the work there, and then
compute the actual work on the cell. It is possible to perform the
integration on the cells after working on faces, by adding the flag
AssembleFlags::cells_after_faces."

To my eye, in both cases the work on the face is done, followed by the
work on the cell; I think, however, the default behavior is to work on the
cells first, followed by work on the faces.
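
For reference, a minimal sketch of the flag combination being discussed (the
worker, copier, and scratch/copy-data names are placeholders for the ones in
the attached program):

  MeshWorker::mesh_loop(dof_handler.active_cell_iterators(),
                        cell_worker,
                        copier,
                        scratch_data,
                        copy_data,
                        MeshWorker::assemble_own_cells |
                          MeshWorker::assemble_boundary_faces |
                          MeshWorker::assemble_own_interior_faces_both |
                          MeshWorker::cells_after_faces,  // faces first, then each cell
                        boundary_worker,
                        face_worker);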

On Mon, Apr 3, 2023 at 6:35 PM Corbin Foucart 
wrote:

> Hello everyone,
>
> I'm solving a 1D explicit DG-FEM problem and I've encountered behavior
> that I don't understand using MeshWorker::mesh_loop.
>
>- I'm using the MeshWorker::assemble_own_interior_faces_both flag
>since I want to do work on each face corresponding to the same cell 
> interior
>- If the grid is created using typical GridGenerator calls, the order
>is exactly as I'd expect: first, work is done on the cell, then the faces
>(boundary or interior), followed by the next cell (further, all mass
>matrices are the same)
>- However, if I manually adapt the mesh by refining some cells, the
>order seems to change; the face work is done without respect to the cell
>interior last worked on
>
> I've attached a stripped-down program that illustrates this behavior on a
> toy mesh in 1D, as well as the output.
>
>- Ultimately, my goal is to assemble an inverse mass matrix on each
>cell, and apply it to a residual vector containing interior and face
>contributions (which can be done element-wise since the elements are
>FE_DGQ)
>- I was attempting to store the inverse via CopyData and then apply it
>in the copy worker.
>- However, I'm finding that due to the order of execution, I can't
>rely on the face work being done immediately after the cell work, and the
>inverse mass matrix stored to CopyData is often from another cell than the
>faces being worked on.
>- I could do the assembly and application of the inverse mass matrices
>separately, or store the inverse mass matrices in a map to the cell
>iterators, but I'm curious why this ordering occurs.
>
> Am I misunderstanding how mesh_loop is supposed to work? Any guidance
> would be appreciated!
>
> Corbin
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAE%2BMPodXQ5tZWiOkhq1j-p4uuvQiWOg3n57R2z7cWCKMz1pKFQ%40mail.gmail.com.


[deal.II] Extracting element solution in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
Hello everyone.
I want to extract the element solution vector from the global solution once 
the problem is solved in step-40. For a serial code, I would do something 
like this:

int i = 0;
for (const auto vertex : cell->vertex_indices())
  {
    int a = cell->vertex_dof_index(vertex, 0);
    element_sol[i] = solution_vector[a];
    i = i + 1;
  }

I don't have a solution_vector for a parallel code, but a 
locally_relevant_solution. I want to know that, given this 
locally_relevant_solution and the cell, how do I get the element_sol?
The global_dof will not be helpful here, as the solution_vector is 
distributed across a number of processors.

Thanks and regards
Wasim


-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/b2834b5b-f4f2-4931-9c5e-3f40d91b0648n%40googlegroups.com.


[deal.II] Trouble installing p4est from candi M1 mac

2023-04-06 Thread Matteo Malvestiti
Good afternoon.
I’m truly sorry to bother you, but I’ve spent a lot of time trying to fix 
this problem, without any success.
I’m trying to install dealii on my M1 macbook air with MacOs Ventura.
I've been following the guide on 
https://github.com/dealii/dealii/wiki/Apple-ARM-M1-OSX

I installed brew
I verified I have mac developer tools updated
Using brew I installed cmake and openmpi.
Using brew, with much more effort and not complete certainty of success, I 
installed gcc11.

I cloned candi git repo.
I set the following env variables, to use clang instead of gcc11:

export CC=mpicc; export CXX=mpicxx; export FC=mpifort; export FF=mpifort; \
export OMPI_CC=clang; export OMPI_CXX=clang++; export OMPI_FC=gfortran-1
I began by installing all the packages together but ran into trouble.

So I proceeded to install them one by one.
hdf5 went fine, but p4est exits with the following error:







Build FAST version in /Users/matteom/dealii-candi/tmp/build/p4est-2.3.2/FAST
/Users/##/dealii-candi/tmp/unpack/p4est-2.3.2/configure: line 4056: test: argument expected
configure: error: in `/Users/##/dealii-candi/tmp/build/p4est-2.3.2/FAST':
configure: error: Fortran 77 compiler cannot create executables
See `config.log' for more details
Error: Error in configure
Note: I even tried switching to the master branch, but nothing changed.


Do you have any clue what I could try next?

Thanks for your cooperation.
Best wishes,
Matteo Malvestiti

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/7612a93f-66f0-45e1-96ec-ca6824d8efban%40googlegroups.com.


Re: [deal.II] Unable to match the performance in step-40

2023-04-06 Thread Wasim Niyaz Munshi ce21d400
I tried to run step-40 with 52M DOFs on 32 processors. I am using 
GridGenerator::subdivided_hyper_rectangle to create a mesh with 
5000*5000 elements. I have a single cycle in my simulation. However, I am 
running into some memory issues.
I am getting the following error:

Running with PETSc on 32 MPI rank(s)...
Cycle 0:
--
mpirun noticed that process rank 5 with PID 214402 on node tattva exited 
on signal 9 (Killed).

I tried with 40 processors (125 GB RAM) but I am getting the same error.
On Wednesday, April 5, 2023 at 11:07:25 PM UTC+5:30 Wolfgang Bangerth wrote:

> On 4/5/23 11:27, Wasim Niyaz Munshi ce21d400 wrote:
> > I am running in release mode. I am attaching the results for cycle 3 for 
> > both debug and release modes. I will try to reproduce the plot of wall 
> > time vs the number of processors for 52M DOFs as given in the tutorial 
> > problem. That would be a better way to compare the performances!
>
> Yes!
>
> As for why your output function is so slow, the only thing I can imagine 
> is that whatever disk you write to is rather slow -- but I don't know 
> for sure.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/6a7e7265-9330-4300-bb4c-bc4a31ae54bfn%40googlegroups.com.