Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-15 Thread Mark Ma
Hi Lucas,

I am currently studying this as well. If you run into problems or make 
exciting progress, we could discuss it if you would like to share. :)

Best,
Mark

On Friday, October 13, 2017 at 4:50:02 PM UTC+2, Lucas Campos wrote:
>
> Dear Wolfgang,
>
> Thank you for your explanation. Currently I am using a code that was not 
> written by me and that uses the MatrixTools::apply_boundary_values() 
> approach. I will try to change it to use the ConstraintMatrix one. For 
> that, step-40 seems to be the best starting point, as Mark did. 
>
> Thanks again,
> Lucas
>
>
>
> On 13 October 2017 at 16:43, Wolfgang Bangerth wrote:
>
>> On 10/13/2017 08:39 AM, Lucas Campos wrote:
>>
>>>
>>> In general, using MatrixTools::apply_boundary_values() is not the 
>>> way to go
>>> with MPI programs. Rather, use a ConstraintMatrix and incorporate the
>>> boundary
>>> values into the same object as you do with hanging node constraints.
>>>
>>>
>>> This is the way to go due to correctness, or in the sense of scalability?
>>>
>>
>> MatrixTools::apply_boundary_values() needs access to the elements of the 
>> matrix because it wants to modify elements after they have already been 
>> written into the matrix. That is already difficult if the matrix is owned 
>> by PETSc or Trilinos -- we can get access to these elements, but it is not 
>> efficient to do so.
>>
>> But the bigger issue is that the function wants to access elements not 
>> only for the rows of constrained DoFs, but also for the columns. That means 
>> that you may have to access elements that are actually stored on other 
>> processors -- something that can not be done efficiently. Consequently, 
>> MatrixTools::apply_boundary_values() does not attempt to eliminate columns 
>> of the matrix, and you will end up with a non-symmetric matrix even if your 
>> problem is symmetric.
>>
>> It is better to use the approach via ConstraintMatrix that deals with 
>> entries before they even get into the matrix (and therefore in particular 
>> before matrix entries are sent to other processors).
>>
>> Best
>>  W.
>>
>> -- 
>> 
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> 
>>
>>www: http://www.math.colostate.edu/~bangerth/
>>
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-15 Thread Mark Ma
Prof. Wolfgang Bangerth,

Now the projection of the initial values (I rewrote the code to assemble the 
matrix and system_rhs manually and solve) runs OK, but the time update of T 
is still not correct; the same phenomenon appears. I believe this may arise 
from the fact that using a matrix vmult directly, i.e.

mass_matrix_T.vmult (system_rhs, old_solution_T_cal);

instead of assembling and calling distribute_local_to_global again may skip 
eliminating the constrained entries in the matrix or vector when 
ConstraintMatrix and interpolate_boundary_values are used to apply the 
boundary condition. I am checking this now.

Is there a simple way to build the right hand side from the old solution 
using something simple like vmult?

Best,
Mark

//---
//Updating of T in time domain--the old way, working in non-mpi version
//---

  // time-dependent update

// assign the right hand side
mass_matrix_T.vmult (system_rhs, old_solution_T_cal);

laplace_matrix_T.vmult (tmp, old_solution_T_cal);
system_rhs.add (-time_step * (1-theta), tmp);

assemble_rhs_T (time);
forcing_terms = dynamic_rhs_T;
forcing_terms *= time_step * theta;

assemble_rhs_T (time - time_step); 
tmp = dynamic_rhs_T;
forcing_terms.add (time_step*(1-theta),tmp);
system_rhs.add (1,forcing_terms);

//assign system matrix
system_matrix.copy_from (mass_matrix_T);
system_matrix.add (time_step * theta, laplace_matrix_T);




On Friday, October 13, 2017 at 4:43:30 PM UTC+2, Wolfgang Bangerth wrote:
> [Wolfgang's reply, quoted in full earlier in this thread]



[deal.II] Re: Error when applying initial values to MPI::Vector in multiple dimensions

2017-10-15 Thread Mark Ma
Dear Maxi,

For projecting initial values in MPI (the project() function may fail with 
PETSc or Trilinos), it is more convenient to solve the equation directly:
Mass_matrix * Solution = (InitialValuesFunc, phi)
This part is not hard in MPI; I think you could do it very quickly.

Best,
Mark


On Friday, October 13, 2017 at 11:05:01 AM UTC+2, Maxi Miller wrote:
>
> I try to apply initial values to a vector defined as 
> LinearAlgebraTrilinos::MPI::Vector using 
> VectorTools::project (dof_handler, hanging_node_constraints,
>  QGauss<dim>(fe.degree+1),
>  InitialValues<dim>(),
>  local_solution);
>
>
>
> When initializing the variable fe (as FESystem) with one or two 
> components, it works fine. For more than two components I get the error
>
>  
> An error occurred in line <1366> of file 
> <~/Downloads/dealii/include/deal.II/numerics/vector_tools.templates.h> in 
> function 
> void dealii::VectorTools::{anonymous}::project(const 
> dealii::Mapping&, const dealii::DoFHandler&, const 
> dealii::ConstraintMatrix&, const dealii::Quadrature&, const 
> dealii::Function&, VectorType&, bool, 
> const dealii
> ::Quadrature<(dim - 1)>&, bool) [with VectorType = 
> dealii::TrilinosWrappers::MPI::Vector; int dim = 2; typename 
> VectorType::value_type = double] 
> The violated condition was:  
> (dynamic_cast* > 
> (&(dof.get_triangulation()))==nullptr) 
> Additional information:  
> You are trying to use functionality in deal.II that is currently not 
> implemented. In many cases, this indicates that there simply didn't appear 
> much of a need for it, or that the author of the original code did not have 
> the time to implement a particular case. If you
>  hit this exception, it is therefore worth the time to look into the code to 
> find out whether you may be able to implement the missing functionality. If 
> you do, please consider providing a patch to the deal.II development sources 
> (see the deal.II website on how to contri
> bute). 
>  
> Stacktrace: 
> --- 
> #0  /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre:  
> #1  /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre: void 
> dealii::VectorTools::project<2, dealii::TrilinosWrappers::MPI::Vector, 
> 2>(dealii::DoFHandler<2, 2> const&, dealii::ConstraintMatrix const&, 
> dealii::Quadrature<2> const&, dealii::Function<2, 
> dealii::TrilinosWrappers::MPI
> ::Vector::value_type> const&, dealii::TrilinosWrappers::MPI::Vector&, bool, 
> dealii::Quadrature<(2)-(1)> const&, bool) 
> #2  ./main: Step15::MinimalSurfaceProblem<2>::run() 
> #3  ./main: main 
>  
>  
> [linux-lb8c:15830] *** Process received signal *** 
> [linux-lb8c:15830] Signal: Aborted (6) 
> [linux-lb8c:15830] Signal code:  (-6) 
> [linux-lb8c:15830] [ 0] /lib64/libpthread.so.0(+0x12270)[0x7f294a477270] 
> [linux-lb8c:15830] [ 1] /lib64/libc.so.6(gsignal+0x110)[0x7f2946c1f0d0] 
> [linux-lb8c:15830] [ 2] /lib64/libc.so.6(abort+0x151)[0x7f2946c206b1] 
> [linux-lb8c:15830] [ 3] 
> /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre(+0x6b9e5d1)[0x7f295b49e5d1] 
> [linux-lb8c:15830] [ 4] 
> /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre(_ZN6dealii18deal_II_exceptions9internals5abortERKNS_13ExceptionBaseE+0x1a)[0x7f295b49edaf]
>  
> [linux-lb8c:15830] [ 5] 
> /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre(_ZN6dealii18deal_II_exceptions9internals11issue_errorINS_18StandardExceptions17ExcNotImplementedEEEvNS1_17ExceptionHandlingEPKciS7_S7_S7_T_+0x98)[0x7f2957373ea1]
>  
> [linux-lb8c:15830] [ 6] 
> /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre(+0x3f38e23)[0x7f2958838e23] 
> [linux-lb8c:15830] [ 7] 
> /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre(_ZN6dealii11VectorTools7projectILi2ENS_16TrilinosWrappers3MPI6VectorELi2EEEvRKNS_10DoFHandlerIXT_EXT1_EEERKNS_16ConstraintMatrixERKNS_10QuadratureIXT_EEERKNS_8FunctionIXT1_ENT0_10value_typeEEERSH_bRKNSC_IX
> miT_Li1b+0x2f)[0x7f295894906e] 
> [linux-lb8c:15830] [ 8] 
> ./main(_ZN6Step1521MinimalSurfaceProblemILi2EE3runEv+0xc08)[0x420d08] 
> [linux-lb8c:15830] [ 9] ./main(main+0x3c)[0x414ad0] 
> [linux-lb8c:15830] [10] 
> /lib64/libc.so.6(__libc_start_main+0xea)[0x7f2946c09f4a] 
> [linux-lb8c:15830] [11] ./main(_start+0x2a)[0x41477a] 
> [linux-lb8c:15830] *** End of error message *** 
> Abgebrochen (Speicherabzug geschrieben)
>
> when running in debug mode. It runs fine in release mode. Why does that 
> happen for more than two components, and how can I fix/circumvent that? Or 
> did I (again) forget something? 
>
> My minimal example is attached, the behaviour happens when setting 
> NUM_COMPONENTS via 
>
> #define NUM_COMPONENTS 100
>
> to a value larger than 2.
>
>
> Thank you!
>
>


Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-15 Thread Mark Ma
Prof. W. Bangerth,

Please find the source code attached. The projection part is OK now, but I 
believe the time update of the solution is incorrect (see the boundary 
results). This has confused me for a few weeks and I still have not found a 
solution.

Best,
Mark

On Friday, October 13, 2017 at 2:55:04 PM UTC+2, Wolfgang Bangerth wrote:
>
> On 10/13/2017 02:06 AM, Mark Ma wrote: 
> > 
> > later, I changed the control into 
> > SolverControl solver_control (5*system_rhs.size(), 1e-12*system_rhs.l2_norm());
> > this works well for structure sizes of um or nm. I think the previous 
> > setting may lead to a loss of precision, so that the results are always 
> > incorrect. 
>
> Yes, indeed -- using a tolerance relative to the size of the rhs vector is 
> important. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>

/* -
 *
 * Copyright (C) 1999 - 2016 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * -

 *
 * Author: Wolfgang Bangerth, University of Heidelberg, 1999
 */


// @sect3{Include files}

// The first few (many?) include files have already been used in the previous
// example, so we will not explain their meaning here again.
//#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
//#include 
#include 
//#include 
//#include 

/*
#include 
#include 
#include 
#include 
*/

#include 
#include 
#include 
#include 


#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 

// This is new, however: in the previous example we got some unwanted output
// from the linear solvers. If we want to suppress it, we have to include this
// file and add a single line somewhere to the program (see the main()
// function below for that):
#include 

#include 

#include 

// The final step, as in previous programs, is to import all the deal.II class
// and function names into the global namespace:
using namespace dealii;


// @sect3{The Step4 class template}

// This is again the same Step4 class as in the previous
// example. The only difference is that we have now declared it as a class
// with a template parameter, and the template parameter is of course the
// spatial dimension in which we would like to solve the Laplace equation. Of
// course, several of the member variables depend on this dimension as well,
// in particular the Triangulation class, which has to represent
// quadrilaterals or hexahedra, respectively. Apart from this, everything is
// as before.
template <int dim>
class Step4
{
public:
  Step4 ();
  void run ();

private:
  void make_grid ();
  void setup_system();
  void assemble_system ();

  void assemble_rhs_T(double rhs_time);
  void solve_T ();
  void solve_T_run ();
  void output_results (int output_num) const;
  
  MPI_Comm mpi_communicator;


  parallel::distributed::Triangulation<dim> triangulation;
  FE_Q<dim>        fe;
  DoFHandler<dim>  dof_handler;

  ConstraintMatrix constraints;

//  SparsityPattern  sparsity_pattern;

  // system_matrix is first used for the projection of init_T
  TrilinosWrappers::SparseMatrix system_matrix_T;
  TrilinosWrappers::SparseMatrix mass_matrix_T;
  TrilinosWrappers::SparseMatrix laplace_matrix_T;

  // with ghost cells
  //old_solution_T
  TrilinosWrappers::MPI::Vector   old_solution_T; 

  // only locally owned cells
  //old_solution_T_cal
  //system_rhs
  TrilinosWrappers::MPI::Vector   old_solution_T_cal;
  TrilinosWrappers::MPI::Vector   new_solution_T;
  TrilinosWrappers::MPI::Vector   system_rhs_T;

  // also stores the right hand side
  TrilinosWrappers::MPI::Vector   dynamic_rhs_T;


  IndexSet  locally_owned_dofs;
  IndexSet  locally_relevant_dofs;

Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-15 Thread Wolfgang Bangerth

On 10/15/2017 12:41 PM, Mark Ma wrote:


Now the projection of the initial values (I rewrote the code to assemble the 
matrix and system_rhs manually and solve) runs OK, but the time update of T is 
still not correct; the same phenomenon appears. I believe this may arise from 
the fact that using a matrix vmult directly, i.e.

mass_matrix_T.vmult (system_rhs, old_solution_T_cal);

instead of assembling and calling distribute_local_to_global again may skip 
eliminating the constrained entries in the matrix or vector when 
ConstraintMatrix and interpolate_boundary_values are used to apply the 
boundary condition. I am checking this now.


So when you visualize the solution, the error is at the boundary but it looks 
correct in the interior?



Is there a simple way to build the right hand side from the old solution 
using something simple like vmult?


What is the equation you are trying to implement?

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Error when applying initial values to MPI::Vector in multiple dimensions

2017-10-15 Thread Denis Davydov
Or you may want to use VectorTools::interpolate, if that is enough for your 
purposes:
https://www.dealii.org/developer/doxygen/deal.II/namespaceVectorTools.html#a05db6c8cebf924b417dd92f525efe3db

Regards,
Denis

On Friday, October 13, 2017 at 11:05:01 AM UTC+2, Maxi Miller wrote:
> [Maxi's message, quoted in full earlier in this thread]