Source code is attached.

On Monday, October 16, 2017 at 9:14:13 PM UTC+2, Wolfgang Bangerth wrote:
>
> On 10/16/2017 09:23 AM, Mark Ma wrote: 
> > 
> > It almost looks to me like you're applying both Dirichlet values (via 
> > the ConstraintMatrix) and Neumann boundary values (via a boundary 
> > integral). But then I haven't taken a look at the code, so I can't 
> > really say for sure. 
> > 
> > 
> > I think I only applied Dirichlet values via the ConstraintMatrix. There 
> > are no boundary integrals, so I believe no Neumann BC is 
> > implemented. 
>
> My guess came from the fact that a Neumann boundary condition would act 
> like a boundary heat source. It looks like you have a heat source in the 
> last layer of cells around the boundary. 
>
> I don't know whether that is true, but it's a place I'd try to 
> investigate in debugging. 
>
>
> > But maybe you are right, since in most cases the heat equation is 
> > solved assuming an adiabatic process, that is, dT/dx = 0 at the 
> > boundaries. This zero value is satisfied automatically. 
>
> That's a different question. Whether one may *physically* want a 
> different equation is beside the point -- you have chosen one equation 
> you'd like to solve, but the numerical solution is wrong. It's 
> worthwhile trying to understand why that is before thinking about 
> *other* equations. 
>
>
> > I think the problem comes from the way the boundary values are set 
> > via the constraint method, which in deal.II creates constraints at 
> > boundary vertices, e.g. x22 = x1/2 + x2/2, 
>
> Hm, that would be the constraint of a hanging node. Do you have hanging 
> nodes in your problem? If you do, may I suggest to simplify the problem 
> and see if you have the same problem on uniform meshes? 
>
> Best 
>   W. 
>
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
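For reference, a Neumann condition would enter the program as a boundary
(face) integral in the assembly, and no such integral appears in the
attached code. A minimal sketch of what that term could look like in
deal.II, assuming a hypothetical Function<dim> object neumann_g for the
prescribed flux and a hypothetical local right hand side vector local_rhs:

    // Hypothetical Neumann contribution (not part of the attached code):
    QGauss<dim-1>     face_quadrature (2);
    FEFaceValues<dim> fe_face_values (fe, face_quadrature,
                                      update_values |
                                      update_quadrature_points |
                                      update_JxW_values);
    for (; cell != endc; ++cell)
      for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
        if (cell->face(f)->at_boundary())
          {
            fe_face_values.reinit (cell, f);
            for (unsigned int q = 0; q < face_quadrature.size(); ++q)
              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                local_rhs(i) += neumann_g.value (fe_face_values.quadrature_point (q)) *
                                fe_face_values.shape_value (i, q) *
                                fe_face_values.JxW (q);
          }

Since nothing of this kind is assembled below, only the Dirichlet values
from the ConstraintMatrix act on the boundary.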

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 1999 - 2016 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * ---------------------------------------------------------------------

 *
 * Author: Wolfgang Bangerth, University of Heidelberg, 1999
 */


// @sect3{Include files}

// The first few (many?) include files have already been used in the previous
// example, so we will not explain their meaning here again.
//#include <deal.II/grid/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
//#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
//#include <deal.II/lac/solver_cg.h>
//#include <deal.II/lac/precondition.h>

/*
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>
*/

#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/trilinos_precondition.h>


#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/generic_linear_algebra.h>

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/numerics/data_out.h>
#include <fstream>
#include <iostream>

// This is new, however: in the previous example we got some unwanted output
// from the linear solvers. If we want to suppress it, we have to include this
// file and add a single line somewhere to the program (see the main()
// function below for that):
#include <deal.II/base/logstream.h>

#include <deal.II/lac/constraint_matrix.h>

#include <deal.II/base/timer.h>

// The final step, as in previous programs, is to import all the deal.II class
// and function names into the global namespace:
using namespace dealii;


// @sect3{The <code>Step4</code> class template}

// This is the Step4 class from the tutorial, extended here into a
// parallel, time-dependent heat equation solver: it stores an MPI
// communicator, a parallel::distributed::Triangulation, Trilinos matrices
// and vectors, the constraint object for the boundary values, and the
// index sets of locally owned and locally relevant degrees of freedom.
// The template parameter is the spatial dimension in which we solve.
template <int dim>
class Step4
{
public:
  Step4 ();
  void run ();

private:
  void make_grid ();
  void setup_system();
  void assemble_system ();

  void assemble_rhs_T(double rhs_time);
  void solve_T ();
  void solve_T_run ();
  void output_results (int output_num) const;
  
  MPI_Comm                                    mpi_communicator;


  parallel::distributed::Triangulation<dim>   triangulation;
  FE_Q<dim>            fe;
  DoFHandler<dim>      dof_handler;

  ConstraintMatrix     constraints;

//  SparsityPattern      sparsity_pattern;

  // system_matrix_T is first used for the L2 projection of the
  // initial temperature
  TrilinosWrappers::SparseMatrix system_matrix_T;
  TrilinosWrappers::SparseMatrix mass_matrix_T;
  TrilinosWrappers::SparseMatrix laplace_matrix_T;

  // with ghost cells
  //    old_solution_T
  TrilinosWrappers::MPI::Vector       old_solution_T; 

  // only locally owned cells
  //    old_solution_T_cal
  //    system_rhs
  TrilinosWrappers::MPI::Vector       old_solution_T_cal;
  TrilinosWrappers::MPI::Vector       new_solution_T;
  TrilinosWrappers::MPI::Vector       system_rhs_T;

        // also stores the right hand side
  TrilinosWrappers::MPI::Vector       dynamic_rhs_T;


  IndexSet              locally_owned_dofs;
  IndexSet              locally_relevant_dofs;

  ConditionalOStream    pcout;

  double theta;


};




//  BC and Initial values----------------------------------------------

template <int dim>
class BoundaryValues_T : public Function<dim>
{
public:
  BoundaryValues_T () : Function<dim>() {}

  virtual double value (const Point<dim>   &p,
                        const unsigned int  component = 0) const;
};

template <int dim>
class InitialValues_T : public Function<dim>
{
public:
  InitialValues_T () : Function<dim>() {}

  virtual double value (const Point<dim>   &p,
                        const unsigned int  component = 0) const;
};

template <int dim>
double BoundaryValues_T<dim>::value (const Point<dim> &/*p*/,
                                   const unsigned int /*component*/) const
{
//  return p.square();
  return 300;
}


template <int dim>
double InitialValues_T<dim>::value (const Point<dim> &/*p*/,
                                   const unsigned int /*component*/) const
{
//  return p.square();
  return 300;
}





//--------rho*C---------------------------------------------------------
template <int dim>
class RhoC_T : public Function<dim>
{
public:
  RhoC_T () : Function<dim>() {}

  virtual double value (const Point<dim>   &p,
                        const unsigned int  component = 0) const;
};

template <int dim>
double RhoC_T<dim>::value (const Point<dim> &/*p*/,
                                   const unsigned int /*component*/) const
{
//  return p.square();
  return 2200*1600;
}



// -----Kappa---------------------------------------------------------
template <int dim>
class Kappa_T : public Function<dim>
{
public:
  Kappa_T () : Function<dim>() {}

  virtual double value (const Point<dim>   &p,
                        const unsigned int  component = 0) const;
};

template <int dim>
double Kappa_T<dim>::value (const Point<dim> &/*p*/,
                                   const unsigned int /*component*/) const
{
//  return p.square();
  return 5;
}



//  -------------------------------------------------
//  ------------------------------------------------
//          RIGHT HAND SIDE
template <int dim>
class RightHandside_T : public Function<dim>
{
public:
  RightHandside_T () : Function<dim>() {}

  virtual double value (const Point<dim>   &p,
                        const unsigned int  component = 0) const;
};

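// The function below implements a moving Gaussian laser source,
//   q(x,y,z,t) = 1e6 * P00
//                * exp(-((x - v_x*t - x0)^2 + (y - v_y*t - y0)^2)/(2*c^2))
//                * exp(-1e6*|z|),
// with P00 = P/(2*pi*c^2), evaluated only where z >= -30e-6. The last
// exponential damps the deposited power away from the top surface.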
template <int dim>
double RightHandside_T<dim>::value (const Point<dim> &p,
                                   const unsigned int component) const
{
    Assert (component == 0, ExcInternalError());

    if (p[2] >= -30e-6)
    {
      const double global_Pow_laser     = 0.21;
      const double global_V_scan_x      = 80e-6;
      const double global_V_scan_y      = 80e-6;
      const double global_init_laser_x0 = 0;
      const double global_init_laser_y0 = 0;
      const double global_c_laser       = 5e-6;
      const double global_PI            = 3.1415927;

      const double P00 = global_Pow_laser / global_PI / global_c_laser / global_c_laser / 2.0;

      // distance of the point from the current laser spot center
      const double dx = p[0] - global_V_scan_x * this->get_time() - global_init_laser_x0;
      const double dy = p[1] - global_V_scan_y * this->get_time() - global_init_laser_y0;

      return 1e6 * P00
             * std::exp(-(dx * dx + dy * dy) / 2.0 / global_c_laser / global_c_laser)
             * std::exp(-1e6 * std::abs(p[2]));
    }
    else
      return 0;
}




//----Constructor-----
// (members are listed in declaration order, which is the order in which
// they are actually initialized)
template <int dim>
Step4<dim>::Step4 ()
  :
  mpi_communicator (MPI_COMM_WORLD),
  triangulation (mpi_communicator),
  fe (1),
  dof_handler (triangulation),
  pcout (std::cout,Utilities::MPI::this_mpi_process(mpi_communicator) == 0),
  theta(0.5)
{}
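// theta = 0.5 makes the time stepping in run() the Crank-Nicolson scheme.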


// -----grid------------ 
template <int dim>
void Step4<dim>::make_grid ()
{
  GridGenerator::hyper_cube (triangulation, -30e-6,30e-6);
  triangulation.refine_global (5);
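  // After five global refinements the cube (-30e-6, 30e-6)^dim consists
  // of 2^(5*dim) cells, i.e. 32 cells in each coordinate direction.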

  pcout << "   Number of active cells: "
            << triangulation.n_global_active_cells()
            << std::endl
            << "   Total number of cells: "
            << triangulation.n_cells()
            << std::endl;
}

//----setup_system------- 
template <int dim>
void Step4<dim>::setup_system ()
{
  dof_handler.distribute_dofs (fe);

  pcout << "   Number of degrees of freedom: "
            << dof_handler.n_dofs()
            << std::endl;

  locally_owned_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);

  // We want to output the solution, so this vector needs ghost entries:
  //    WITH GHOST ENTRIES
  old_solution_T.reinit (locally_owned_dofs,locally_relevant_dofs,mpi_communicator);

  //    LOCALLY OWNED CELLS
  old_solution_T_cal.reinit     (locally_owned_dofs,mpi_communicator);
  new_solution_T.reinit         (locally_owned_dofs,mpi_communicator);
  dynamic_rhs_T.reinit             (locally_owned_dofs,mpi_communicator);
  system_rhs_T.reinit             (locally_owned_dofs,mpi_communicator);

  constraints.clear();
  constraints.reinit (locally_relevant_dofs);
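  // Impose T = 300 (see BoundaryValues_T) as a Dirichlet condition on the
  // whole boundary; GridGenerator::hyper_cube() gives all boundary faces
  // the indicator 0, so this covers the entire boundary.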
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            BoundaryValues_T<dim>(),
                                            constraints);                                      
  constraints.close();


  DynamicSparsityPattern dsp(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern (dof_handler, dsp,constraints,false);
  SparsityTools::distribute_sparsity_pattern (dsp,
                                              dof_handler.n_locally_owned_dofs_per_processor(),
                                              mpi_communicator,
                                              locally_relevant_dofs);


  mass_matrix_T.reinit (locally_owned_dofs,locally_owned_dofs,dsp,mpi_communicator);
  laplace_matrix_T.reinit (locally_owned_dofs,locally_owned_dofs,dsp,mpi_communicator);
  system_matrix_T.reinit (locally_owned_dofs,locally_owned_dofs,dsp,mpi_communicator);


}


// @sect4{Step4::assemble_system}


template <int dim>
void Step4<dim>::assemble_system ()
{
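  // Three matrices are assembled in a single sweep over the cells:
  //   system_matrix_T  : plain mass matrix, used together with
  //                      system_rhs_T to L2-project the initial values;
  //   mass_matrix_T    : mass matrix weighted by rho*C;
  //   laplace_matrix_T : stiffness matrix weighted by kappa.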
  QGauss<dim>  quadrature_formula(2);


  const InitialValues_T<dim> initial_value_func;

  const RhoC_T<dim>             rho_C_fun_T;
  const Kappa_T<dim>            kappa_fun_T;

  FEValues<dim> fe_values (fe, quadrature_formula,
                           update_values   | update_gradients |
                           update_quadrature_points | update_JxW_values);

  const unsigned int   dofs_per_cell = fe.dofs_per_cell;
  const unsigned int   n_q_points    = quadrature_formula.size();

  FullMatrix<double>   local_init_T_matrix (dofs_per_cell, dofs_per_cell);
  Vector<double>       local_init_T_rhs (dofs_per_cell);

  FullMatrix<double>   local_rho_c_T_matrix (dofs_per_cell, dofs_per_cell);
  FullMatrix<double>   local_kappa_T_matrix (dofs_per_cell, dofs_per_cell);

  std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);


  typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();

  for (; cell!=endc; ++cell)
      if(cell->is_locally_owned())
    {
      fe_values.reinit (cell);
      local_init_T_matrix = 0;
      local_rho_c_T_matrix = 0;
      local_kappa_T_matrix = 0;
      local_init_T_rhs = 0;


      for (unsigned int q=0; q<n_q_points; ++q)
        for (unsigned int i=0; i<dofs_per_cell; ++i)
          {
            // shape_grad() returns the gradient of the shape function, so
            // name these grad_phi rather than div_phi.
            const Tensor<1,dim>     grad_phi_i_u = fe_values.shape_grad (i,q);
            const double            phi_i_u = fe_values.shape_value (i,q);

            for (unsigned int j=0; j<dofs_per_cell; ++j)
            {
              const Tensor<1,dim>       grad_phi_j_u = fe_values.shape_grad(j,q);
              const double              phi_j_u = fe_values.shape_value (j,q);

              local_init_T_matrix(i,j) += (phi_i_u *
                                           phi_j_u *
                                           fe_values.JxW (q));

              local_rho_c_T_matrix(i,j) += (rho_C_fun_T.value(fe_values.quadrature_point(q)) *
                                            phi_i_u *
                                            phi_j_u *
                                            fe_values.JxW (q));

              local_kappa_T_matrix(i,j) += (kappa_fun_T.value(fe_values.quadrature_point(q)) *
                                            grad_phi_i_u *
                                            grad_phi_j_u *
                                            fe_values.JxW (q));
            }

            local_init_T_rhs(i) += (phi_i_u *
                                    initial_value_func.value (fe_values.quadrature_point (q)) *
                                    fe_values.JxW (q));
          }

      cell->get_dof_indices (local_dof_indices);

      constraints.distribute_local_to_global(local_init_T_matrix,
                                             local_init_T_rhs,
                                             local_dof_indices,
                                             system_matrix_T,
                                             system_rhs_T);

      constraints.distribute_local_to_global(local_rho_c_T_matrix,
                                             local_dof_indices,
                                             mass_matrix_T);

      constraints.distribute_local_to_global(local_kappa_T_matrix,
                                             local_dof_indices,
                                             laplace_matrix_T);


    }

  system_matrix_T.compress(VectorOperation::add);
  mass_matrix_T.compress(VectorOperation::add);
  laplace_matrix_T.compress(VectorOperation::add);
  system_rhs_T.compress(VectorOperation::add);



  }


//----------ASSEMBLE_--------------RIGHT_HANDSIDE_T
template <int dim>
void Step4<dim>::assemble_rhs_T (double rhs_time)
{
  QGauss<dim>  quadrature_formula(2);


  RightHandside_T<dim> rhs_func_T;
  rhs_func_T.set_time(rhs_time);

  FEValues<dim> fe_values (fe, quadrature_formula,
                           update_values   |
                           update_quadrature_points | update_JxW_values);


  const unsigned int   dofs_per_cell = fe.dofs_per_cell;
  const unsigned int   n_q_points    = quadrature_formula.size();

  Vector<double>       local_rhs_vector_T (dofs_per_cell);


  std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);


  dynamic_rhs_T = 0 ;

  typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();

  for (; cell!=endc; ++cell)
      if(cell->is_locally_owned())
    {
      fe_values.reinit (cell);
      local_rhs_vector_T = 0;

      for (unsigned int q=0; q<n_q_points; ++q)
        for (unsigned int i=0; i<dofs_per_cell; ++i)
          {
            const double            phi_i_u = fe_values.shape_value (i,q);

            local_rhs_vector_T(i) +=    (phi_i_u *
                                         rhs_func_T.value (fe_values.quadrature_point (q)) *
                                         fe_values.JxW (q));
          }

      cell->get_dof_indices (local_dof_indices);


      constraints.distribute_local_to_global(local_rhs_vector_T,
                                             local_dof_indices,
                                             dynamic_rhs_T);


    }

      dynamic_rhs_T.compress(VectorOperation::add);


  }


// @sect4{Step4::solve}

// The linear system is solved with Trilinos' CG solver, preconditioned by
// algebraic multigrid (AMG). The solution is computed in a fully
// distributed vector (no ghost entries), the constraints are applied to
// it, and the result is copied into new_solution_T.
template <int dim>
void Step4<dim>::solve_T ()
{
  TrilinosWrappers::MPI::Vector   completely_distributed_solution (locally_owned_dofs,mpi_communicator);
//  SolverControl     solver_control (dof_handler.n_dofs(),1e-12);
//  SolverControl     solver_control (5*system_rhs_T.size(),1e-12*system_rhs_T.l2_norm());
  SolverControl     solver_control (5*system_rhs_T.size(),1e-12*system_rhs_T.l2_norm(),true);

  TrilinosWrappers::SolverCG      solver (solver_control);

  TrilinosWrappers::PreconditionAMG  preconditioner;
  TrilinosWrappers::PreconditionAMG::AdditionalData  data;

  preconditioner.initialize(system_matrix_T,data);

  solver.solve (system_matrix_T,completely_distributed_solution,system_rhs_T,preconditioner);


  // We have made one addition, though: since we suppress output from the
  // linear solvers, we have to print the number of iterations by hand.
  pcout     << "   " << solver_control.last_step()
            << " CG iterations needed to obtain convergence." << std::endl
            << "\t initial convergence value = " << solver_control.initial_value() << std::endl
            << "\t final convergence value = " << solver_control.last_value() << std::endl
            << std::endl;

  constraints.distribute (completely_distributed_solution);
  new_solution_T = completely_distributed_solution;

}

template <int dim>
void Step4<dim>::solve_T_run ()
{
  TrilinosWrappers::MPI::Vector   completely_distributed_solution (locally_owned_dofs,mpi_communicator);
//  SolverControl     solver_control (25*dof_handler.n_dofs(),1e-21,true);
  SolverControl     solver_control (5*system_rhs_T.size(),1e-12*system_rhs_T.l2_norm(),true);
//  SolverControl     solver_control (1e6,1e-18,true);

  TrilinosWrappers::SolverCG      solver (solver_control);

  TrilinosWrappers::PreconditionAMG  preconditioner;
  TrilinosWrappers::PreconditionAMG::AdditionalData  data;

  preconditioner.initialize(system_matrix_T,data);

  solver.solve (system_matrix_T,completely_distributed_solution,system_rhs_T,preconditioner);


  // We have made one addition, though: since we suppress output from the
  // linear solvers, we have to print the number of iterations by hand.
  pcout     << "   " << solver_control.last_step()
            << " CG iterations needed to obtain convergence." << std::endl
            << "\t initial convergence value = " << solver_control.initial_value() << std::endl
            << "\t final convergence value = " << solver_control.last_value() << std::endl
            << std::endl;

  constraints.distribute (completely_distributed_solution);
  new_solution_T = completely_distributed_solution;

}


// @sect4{Step4::output_results}

// This function writes the solution in parallel. Each MPI rank writes its
// locally owned part to a .vtu file whose name encodes the output number
// and the subdomain id, and rank 0 additionally writes a .pvtu master
// record that references all per-rank files. VTU is currently the most
// widely used format and is supported by visualization programs such as
// VisIt and ParaView (for ways to obtain these programs see the ReadMe
// file of deal.II).
template <int dim>
void Step4<dim>::output_results (int output_num) const
  {

    DataOut<dim> data_out;

    data_out.attach_dof_handler (dof_handler);
    data_out.add_data_vector (old_solution_T,       "T");


    data_out.build_patches ();

    const std::string filename = ("solution-" +
                                 Utilities::int_to_string (output_num, 3) +
                                 "." +
                                 Utilities::int_to_string (triangulation.locally_owned_subdomain(),4) +
                                 ".vtu");
    std::ofstream output (filename.c_str());
    data_out.write_vtu (output);


    // output the overall solution
    if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    {
        std::vector<std::string> filenames;
        for (unsigned int i=0;i<Utilities::MPI::n_mpi_processes(mpi_communicator);++i)
            filenames.push_back ("solution-" +
                                 Utilities::int_to_string (output_num, 3) +
                                 "." +
                                 Utilities::int_to_string (i,4) +
                                 ".vtu");
        std::ofstream master_output (("solution-" + 
                                     Utilities::int_to_string (output_num,3)+
                                     ".pvtu").c_str());
        data_out.write_pvtu_record (master_output,filenames);

    }

  }



// @sect4{Step4::run}

// This is the function which has the top-level control over everything. Apart
// from one line of additional output, it is the same as for the previous
// example.
template <int dim>
void Step4<dim>::run ()
{
  pcout << "Solving problem in " << dim << " space dimensions." << std::endl;

  make_grid();
  setup_system ();
  assemble_system ();
  //--------PROJECTION-INITIAL-VALUES--------
  solve_T ();           // solution stored in new_solution_T; 
  old_solution_T = new_solution_T;
  old_solution_T_cal = new_solution_T;
  output_results (0);    // output initial values; need ghost cells 


    TrilinosWrappers::MPI::Vector tmp (locally_owned_dofs,mpi_communicator);
    TrilinosWrappers::MPI::Vector forcing_terms (locally_owned_dofs,mpi_communicator);

    system_matrix_T = 0;
    system_rhs_T = 0;

    int timestep_number;
    double time_step = 1e-6;
    double global_simulation_end_time = 30 * time_step;
    double time;
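
    // Theta scheme: each time step solves
    //   (M + dt*theta*K) T^n = (M - dt*(1-theta)*K) T^{n-1}
    //                          + dt*(theta*F^n + (1-theta)*F^{n-1}),
    // with M the rho*C mass matrix and K the kappa stiffness matrix. Note
    // that the forcing-term block below is commented out, so as written no
    // heat source F enters the time loop.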

    for (timestep_number=1, time=time_step;
         time<= global_simulation_end_time;
         time+=time_step, ++timestep_number)
      {
        pcout << "Time step " << timestep_number
                  << " at t=" << time
                  << std::endl;

        
        mass_matrix_T.vmult (system_rhs_T, old_solution_T_cal);
        laplace_matrix_T.vmult (tmp,old_solution_T_cal);
        system_rhs_T.add(-time_step * (1-theta),tmp);
/*
        assemble_rhs_T (time);
        forcing_terms = dynamic_rhs_T;
        forcing_terms *= time_step * theta;

        assemble_rhs_T (time - time_step);
        tmp = dynamic_rhs_T;
        forcing_terms.add (time_step*(1-theta),tmp);

        system_rhs_T.add(1,forcing_terms);
*/
                // system_matrix_T
        system_matrix_T.copy_from (mass_matrix_T);
        system_matrix_T.add (time_step * theta, laplace_matrix_T);

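        // Zero out the constrained (Dirichlet) entries of the right hand
        // side; the actual boundary values are re-imposed on the solution
        // by constraints.distribute() inside solve_T_run().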
        constraints.set_zero(system_rhs_T);

        solve_T_run ();     // solve and store solution to new_solution_T

        // 
        old_solution_T = new_solution_T;        // used for output, GHOST CELLS
        old_solution_T_cal = new_solution_T;

        output_results (timestep_number);


      }


}


// @sect3{The <code>main</code> function}

// And this is the main function. After initializing MPI (limiting each
// process to a single thread), it creates one Step4 object and runs it.
//
// The block is enclosed in braces to make sure that the problem object
// goes out of scope (and releases the memory it holds, in particular its
// Trilinos objects) before MPI is finalized at the end of main().
int main (int argc, char *argv[])
{
//  deallog.depth_console (0);
/*
  const std::string filename = ("deallog.txt");
  std::ofstream output (filename.c_str());
  deallog.attach(output);
*/
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);


  {
    Step4<3> laplace_problem_3d;  // RightHandside_T accesses p[2], so the problem must be run in 3d
    laplace_problem_3d.run ();
  }

  return 0;
}
##
#  CMake script for the step-4 tutorial program:
##

# Set the name of the project and target:
SET(TARGET "step-4")
#SET(TARGET "step-4-rhs-constraints")

# Declare all source files the target consists of. Here, this is only
# the one step-X.cc file, but as you expand your project you may wish
# to add other source files as well. If your project becomes much larger,
# you may want to either replace the following statement by something like
#    FILE(GLOB_RECURSE TARGET_SRC  "source/*.cc")
#    FILE(GLOB_RECURSE TARGET_INC  "include/*.h")
#    SET(TARGET_SRC ${TARGET_SRC}  ${TARGET_INC}) 
# or switch altogether to the large project CMakeLists.txt file discussed
# in the "CMake in user projects" page accessible from the "User info"
# page of the documentation.
SET(TARGET_SRC
  ${TARGET}.cc
  )

# Usually, you will not need to modify anything beyond this point...

CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)

FIND_PACKAGE(deal.II 8.5.0 QUIET
  HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
  )
IF(NOT ${deal.II_FOUND})
  MESSAGE(FATAL_ERROR "\n"
    "*** Could not locate a (sufficiently recent) version of deal.II. ***\n\n"
    "You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to 
cmake\n"
    "or set an environment variable \"DEAL_II_DIR\" that contains this path."
    )
ENDIF()

DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()
