Hello deal.II community,
I hope you are all doing well. I am writing my code in parallel, following
step-55. I am working on the time-dependent, incompressible, parameterized NSE
with UQ. I can compile and run my code, but when I run it on 8 processors it
only runs on 1 processor and gives me an error on the other processors.
Could anyone please help me solve this issue? I have attached the code
and the parameter file.

Thanks for your time and help.

Number of active cells: 16
   Number of degrees of freedom: 187 (162+25)
Time step size = 0.02, Number of time steps = 50
   Solved in 36 iterations.   (this line is printed 20 times before the crash)


--------------------------------------------------------

An error occurred in line <529> of file
</Users/mdmahmudulislam/dealii-candi/tmp/unpack/deal.II-v9.7.0/include/deal.II/lac/vector.templates.h>
in function
    typename Vector<Number>::real_type dealii::Vector<float>::l2_norm() const [number = float]

The violated condition was:
    dealii::numbers::is_finite(scale * std::sqrt(sum))

Additional information:
    In a significant number of places, deal.II checks that some intermediate
    value is a finite number (as opposed to plus or minus infinity, or
    NaN/Not a Number). In the current function, we encountered a number that
    is not finite (its value is (nan,0) and therefore violates the current
    assertion).

    This may be due to the fact that some operation in this function created
    such a value, or because one of the arguments you passed to the function
    already had this value from some previous operation. In the latter case,
    this function only triggered the error but may not actually be
    responsible for the computation of the number that is not finite.

    There are two common cases where this situation happens. First, your
    code (or something in deal.II) divides by zero in a place where this
    should not happen. Or, you are trying to solve a linear system with an
    unsuitable solver (such as an indefinite or non-symmetric linear system
    using a Conjugate Gradient solver); such attempts oftentimes yield an
    operation somewhere that tries to divide by zero or take the square root
    of a negative value.

    In any case, when trying to find the source of the error, recall that
    the location where you are getting this error is simply the first place
    in the program where there is a check that a number (e.g., an element of
    a solution vector) is in fact finite, but that the actual error that
    computed the number may have happened far earlier. To find this
    location, you may want to add checks for finiteness in places of your
    program visited before the place where this error is produced. One way
    to check for finiteness is to use the 'AssertIsFinite' macro.

Stacktrace (demangled):
-----------
#0  libdeal_II.g.9.7.0.dylib  0x000000011778f500  dealii::Vector<float>::l2_norm() const + 636
#1  step-55                   0x0000000104a72300  SNSE::StochasticNavierStokesProblem<2>::Energy(double) + 400
#2  step-55                   0x0000000104a61fe8  SNSE::StochasticNavierStokesProblem<2>::run() + 3048
#3  step-55                   0x0000000104a611cc  main + 184
#4  dyld                      0x000000018bbe9d54  start + 7184

--------------------------------------------------------


Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before
running. You can also put the following into your ~/.gdbinit:

  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto


Best Regards,
Md Mahmudul Islam
Graduate Student,
University of Alabama at Birmingham

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.

Attachment: parameter-file.prm

// Standard deal.II base utilities
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/parameter_handler.h>
#include <deal.II/base/convergence_table.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/tensor_function.h>
#include <deal.II/base/work_stream.h>
#include <deal.II/lac/generic_linear_algebra.h>

// Linear Algebra backend selector
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

// Linear Algebra core utilities
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/solver_minres.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/linear_operator_tools.h>

// Backend-specific headers (only needed for the PETSc backend; guarded so
// that a Trilinos-only build still compiles)
#ifdef DEAL_II_WITH_PETSC
#  include <deal.II/lac/petsc_vector.h>
#  include <deal.II/lac/petsc_sparse_matrix.h>
#  include <deal.II/lac/petsc_solver.h>
#  include <deal.II/lac/petsc_precondition.h>
#endif

// DoF and FE utilities
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/manifold_lib.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_values.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>

// Parallel distributed triangulations
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>

// Numerics
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/matrix_tools.h>

// C++ headers
#include <fstream>
#include <sstream>
#include <iostream>
#include <cmath>

// As in all programs, the namespace dealii is included:
namespace SNSE
{
  using namespace dealii;

  namespace LinearSolvers
  {
    template <class Matrix, class Preconditioner>
    class InverseMatrix : public Subscriptor
    {
    public:
      InverseMatrix(const Matrix &m, const Preconditioner &preconditioner);
 
      template <typename VectorType>
      void vmult(VectorType &dst, const VectorType &src) const;
 
    private:
      const SmartPointer<const Matrix> matrix;
      const Preconditioner            &preconditioner;
    };
 
 
    template <class Matrix, class Preconditioner>
    InverseMatrix<Matrix, Preconditioner>::InverseMatrix(
      const Matrix         &m,
      const Preconditioner &preconditioner)
      : matrix(&m)
      , preconditioner(preconditioner)
    {}
 
 
 
    template <class Matrix, class Preconditioner>
    template <typename VectorType>
    void
    InverseMatrix<Matrix, Preconditioner>::vmult(VectorType       &dst,
                                                 const VectorType &src) const
    {
      SolverControl        solver_control(src.size(), 1e-8 * src.l2_norm());
      SolverCG<VectorType> cg(solver_control);
      dst = 0;
 
      try
        {
          cg.solve(*matrix, dst, src, preconditioner);
        }
      catch (std::exception &e)
        {
          Assert(false, ExcMessage(e.what()));
        }
    }
 
 
    template <class PreconditionerA, class PreconditionerS>
    class BlockDiagonalPreconditioner : public Subscriptor
    {
    public:
      BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
                                  const PreconditionerS &preconditioner_S);
 
      void vmult(LA::MPI::BlockVector       &dst,
                 const LA::MPI::BlockVector &src) const;
 
    private:
      const PreconditionerA &preconditioner_A;
      const PreconditionerS &preconditioner_S;
    };
 
    template <class PreconditionerA, class PreconditionerS>
    BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::
      BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
                                  const PreconditionerS &preconditioner_S)
      : preconditioner_A(preconditioner_A)
      , preconditioner_S(preconditioner_S)
    {}
 
 
    template <class PreconditionerA, class PreconditionerS>
    void BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::vmult(
      LA::MPI::BlockVector       &dst,
      const LA::MPI::BlockVector &src) const
    {
      preconditioner_A.vmult(dst.block(0), src.block(0));
      preconditioner_S.vmult(dst.block(1), src.block(1));
    }
 
  } // namespace LinearSolvers

  namespace RunTimeParameters
  {
     class Data_Storage
     {
     public:
       Data_Storage();
       ~Data_Storage();
       void read_data(const char *filename);

       double final_time,
              expected_viscosity,
              time_step,
              mu,
              epsilon,
              gamma;

       unsigned int Number_of_refinements,
                    pressure_degree,
                    numTimeSteps;

     protected:
       ParameterHandler prm;
     };


     Data_Storage::Data_Storage()
      {
         prm.enter_subsection ("Mesh & geometry parameters");
         {
           prm.declare_entry("Number_of_refinements", "1",
                        Patterns::Integer (0, 15),
                        "The number of global refines we do on the mesh.");

           prm.declare_entry("pressure_fe_degree","1",
                        Patterns::Integer (1, 5),
                        "The polynomial degree for the pressure space.");
          }

          prm.leave_subsection ();

          prm.enter_subsection ("Physical constants");
          {

            prm.declare_entry("final_time", "0.01",
                        Patterns::Double(0),
                        "Simulation Time");

            prm.declare_entry("expected_viscosity", "0.001",
                        Patterns::Double(0),
                        "expected_viscosity");



            //prm.declare_entry("number_of_time_steps", "10",
            //            Patterns::Double(0),
            //            "Total Number of Time Steps Used");

            //prm.declare_entry("time_step_size", "0.0001",
            //            Patterns::Double(0),
            //            "Time Step Size");

            prm.declare_entry("epsilon","0.1",
                        Patterns::Double(0),
                        "The Perturbation Parameter");

            prm.declare_entry("mu","0.1",
                        Patterns::Double(0),
                        "The Tuning Parameter");

            prm.declare_entry("gamma","1000.0",
                        Patterns::Double(0),
                        "Grad-div stabilization Parameter");

           }

           prm.leave_subsection ();

      }

     Data_Storage::~Data_Storage()
    {}

     void Data_Storage::read_data(const char *filename)
      {
         std::ifstream file (filename);
         AssertThrow (file, ExcFileNotOpen (filename));

         prm.parse_input (file);

         prm.enter_subsection ("Physical constants");
         {
            final_time = prm.get_double ("final_time");
            expected_viscosity = prm.get_double ("expected_viscosity");
            //time_step = prm.get_double("time_step_size");
            //numTimeSteps = prm.get_integer("number_of_time_steps");
            epsilon = prm.get_double("epsilon");
            mu = prm.get_double("mu");
            gamma = prm.get_double("gamma");

          }

         prm.leave_subsection ();

         prm.enter_subsection ("Mesh & geometry parameters");
          {
             Number_of_refinements = prm.get_integer("Number_of_refinements");
             pressure_degree = prm.get_integer("pressure_fe_degree");
           }

         prm.leave_subsection ();

        }

  } // namespace RunTimeParameters
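For reference, since the attached parameter file arrived as binary data: a .prm file consistent with the entries declared by Data_Storage above would look like the following (the values here are placeholders, not necessarily the ones actually used):

```
subsection Mesh & geometry parameters
  set Number_of_refinements = 2
  set pressure_fe_degree    = 1
end

subsection Physical constants
  set final_time         = 1.0
  set expected_viscosity = 0.001
  set epsilon            = 0.1
  set mu                 = 0.1
  set gamma              = 1000.0
end
```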


  namespace EquationData{

    template <int dim>
    class ZeroFxn : public Function<dim>
    {
      private:
double ep = 0.0; // default-initialize: previously indeterminate until set_parameter() was called
      public:
      ZeroFxn () : Function<dim>(dim+1) {}

      virtual double set_parameter(double epsilon = 0);
      virtual double value (const Point<dim>   &p,
                          const unsigned int  component = 0) const override;

      virtual void vector_value (const Point<dim> &p,
                               Vector<double>   &value) const override;

      };

    template <int dim>
    double ZeroFxn<dim>::set_parameter(double epsilon)
    {
     ep = epsilon;
     return ep;
    }


    template <int dim>
    double  ZeroFxn<dim>::value (const Point<dim>  &  p,
                              const unsigned int component) const
      {

       Assert (component < this->n_components,
            ExcIndexRange (component, 0, this->n_components));

       double x = p[0];
       double y = p[1];
       double t = this->get_time();

       if (component == 0)
         return (1.0+ep)*(cos(y)+(1.0+exp(t))*sin(y));
       if (component == 1)
         return (1.0+ep)*(sin(x)+(1.0+exp(t))*cos(x));
       return 0;
      }


     template <int dim>
    void  ZeroFxn<dim>::vector_value (const Point<dim> &p, Vector<double>   &values) const
    {
       for (unsigned int c=0; c<this->n_components; ++c)
        values(c) = ZeroFxn<dim>::value (p, c);
     }


  // We implement similar functions for the right hand side:

    template <int dim>
    class RightHandSide : public Function<dim>
    {
    private:
      double ep = 0.0; // default-initialize: previously indeterminate
      double nu = 0.0; // until set_parameter() was called
    public:
      RightHandSide () : Function<dim>(dim+1) {}

      virtual double set_parameter(double epsilon = 0, double viscosity = 0);

      virtual double value (const Point<dim>   &p,
                            const unsigned int  component = 0) const override;
      virtual void vector_value (const Point<dim> &p,
                                 Vector<double>   &value) const override;
    };

    template <int dim>
    double RightHandSide<dim>::set_parameter(double epsilon, double viscosity)
    {
     nu = viscosity;
     ep = epsilon;
     return ep;
    }

    template <int dim>
    double RightHandSide<dim>::value (const Point<dim>  &p,
                                      const unsigned int component) const
    {
      const double x = p[0];
      const double y = p[1];
      const double t = this->get_time();

      if (component == 0)
        return (1.0+ep)*exp(t)*sin(y) + cos(x+y)*(1.0+exp(t))
               + (1.0+ep)*nu*(cos(y)+(1.0+exp(t))*sin(y))
               + (1.0+ep)*(1.0+ep)*(sin(x)+(1.0+exp(t))*cos(x))*(-sin(y)+(1.0+exp(t))*cos(y));
      if (component == 1)
        return (1.0+ep)*exp(t)*cos(x) + cos(x+y)*(1.0+exp(t))
               + (1.0+ep)*nu*(sin(x)+(1.0+exp(t))*cos(x))
               + (1.0+ep)*(1.0+ep)*(cos(x)-(1.0+exp(t))*sin(x))*(cos(y)+(1.0+exp(t))*sin(y));
      return 0;
    }


    template <int dim>
    void  RightHandSide<dim>::vector_value (const Point<dim> &p, Vector<double>   &values) const
    {
       for (unsigned int c=0; c<this->n_components; ++c)
            values(c) = RightHandSide<dim>::value (p, c);
    }

    template <int dim>
    class InitialCondition : public Function<dim>
    {
      private:
double ep = 0.0; // default-initialize: previously indeterminate until set_parameter() was called
      public:
      InitialCondition () : Function<dim>(dim+1) {}

        virtual double set_parameter(double epsilon = 0);
        virtual void vector_value (const Point<dim> &p, Vector<double>   &value) const override;
        virtual void vector_gradient (const Point<dim>   &p, std::vector< Tensor < 1, dim > > & gradients) const override;
     };

    template <int dim>
    void InitialCondition<dim>::vector_gradient (const Point<dim>   &p, std::vector< Tensor < 1, dim > > & gradients) const
     {
        double x = p[0];
        double y = p[1];
        double t = this->get_time();

        gradients[0][0] = 0;
        gradients[0][1] = (1.0+ep)*(-sin(y)+(1+exp(t))*cos(y));
        gradients[1][0] = (1.0+ep)*(cos(x)-(1+exp(t))*sin(x));
        gradients[1][1] = 0;


     }

     template <int dim>
    double InitialCondition<dim>::set_parameter(double epsilon)
    {
     ep = epsilon;
     return ep;
    }

    template <int dim>
    void InitialCondition<dim>::vector_value (const Point<dim> &p, Vector<double>   &values) const
    {
      Assert (values.size() == dim+1, ExcDimensionMismatch (values.size(), dim+1));

      double x = p[0];
      double y = p[1];
      double t = this->get_time();

      values(0) = (1.0+ep)*(cos(y)+(1.0+exp(t))*sin(y));
      values(1) = (1.0+ep)*(sin(x)+(1.0+exp(t))*cos(x));

      values(2) = sin(x+y)*(1.0+exp(t));
     }

   } // namespace EquationData


template <int dim>
class StochasticNavierStokesProblem
{
public:
  StochasticNavierStokesProblem(const unsigned int timestep_number,
                                const RunTimeParameters::Data_Storage &data);

  void run();

private:
  const double       T;
  const unsigned int n_global_refines;
  const unsigned int degree;

  double viscosity;

  void setup_dofs();
  void assemble_system(double viscosity, double mean_viscosity);
  void implement_boundary_condition();

  void solve();
  void output_results(const unsigned int refinement_cycle) const;
  void refine_mesh();
  void Energy(double t);

  void print_mesh_info(const Triangulation<dim> &tria,
                       const std::string        &filename);

  std::vector<types::boundary_id> boundary_ids;

  MPI_Comm mpi_communicator;

  const FESystem<dim>                              fe;
  parallel::distributed::Triangulation<dim> triangulation;
  DoFHandler<dim>                            dof_handler;

  AffineConstraints<double> constraints;

  std::vector<IndexSet> owned_partitioning;
  std::vector<IndexSet> relevant_partitioning;

  LA::MPI::BlockSparseMatrix system_matrix;
  LA::MPI::BlockSparseMatrix preconditioner_matrix;

  LA::MPI::BlockVector old_timestep_solution;
  LA::MPI::BlockVector old_old_timestep_solution;
  LA::MPI::BlockVector solution;
  LA::MPI::BlockVector Interpolate_true_solution;

  // NOTE: these fixed-size arrays must hold at least Number_of_realizations
  // (= 20, see below) entries.
  LA::MPI::BlockVector Storage_nn[20], fluctuation_solution[20];
  LA::MPI::BlockVector Storage_n[20];

  LA::MPI::BlockVector present_mean_solution, old_mean_solution;

  LA::MPI::BlockVector system_rhs, exact, mean_solution;

  ConvergenceTable convergence_table;

  EquationData::RightHandSide<dim> right_hand_side;
  EquationData::ZeroFxn<dim>       zero_fxn;
  EquationData::InitialCondition<dim> initial_condition;

  double       time, time_step;
  unsigned int timestep_number;
  unsigned int Number_of_realizations = 20;
  double       epsilon, mu, expected_viscosity, gamma;

  ConditionalOStream pcout;
  TimerOutput        computing_timer;

  FILE *pFile;
};

template <int dim>
StochasticNavierStokesProblem<dim>::StochasticNavierStokesProblem(
  const unsigned int timestep_number,
  const RunTimeParameters::Data_Storage &data)
  :
  T(data.final_time),
  n_global_refines(data.Number_of_refinements),
  degree(data.pressure_degree),
  viscosity(0.0), // will be updated during simulation
  mpi_communicator(MPI_COMM_WORLD),
  fe(FE_Q<dim>(degree + 1), dim,
     FE_Q<dim>(degree), 1),
  triangulation(mpi_communicator,
                typename Triangulation<dim>::MeshSmoothing(
                  Triangulation<dim>::smoothing_on_refinement |
                  Triangulation<dim>::smoothing_on_coarsening)),
  dof_handler(triangulation),
  time_step(T / double(timestep_number)),
  timestep_number(timestep_number),
  epsilon(data.epsilon),
  mu(data.mu),
  expected_viscosity(data.expected_viscosity),
  gamma(data.gamma),
  pcout(std::cout,
        (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)),
  computing_timer(mpi_communicator, pcout, TimerOutput::never, TimerOutput::wall_times)
{}

template <int dim>
void StochasticNavierStokesProblem<dim>::setup_dofs()
{
  TimerOutput::Scope t(computing_timer, "setup");

  //system_matrix.clear();

  dof_handler.distribute_dofs(fe);

  std::vector<unsigned int> block_component(dim + 1, 0);
  block_component[dim] = 1;
  DoFRenumbering::component_wise(dof_handler, block_component);

  const std::vector<types::global_dof_index> dofs_per_block =
    DoFTools::count_dofs_per_fe_block(dof_handler, block_component);

  const unsigned int n_u = dofs_per_block[0];
  const unsigned int n_p = dofs_per_block[1];

  pcout << "   Number of active cells: " << triangulation.n_active_cells() << '\n'
        << "   Number of degrees of freedom: " << dof_handler.n_dofs()
        << " (" << n_u << '+' << n_p << ')' << std::endl;

  // Define partitioning for parallel data
  const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();

  owned_partitioning = {
    locally_owned_dofs.get_view(0, n_u),
    locally_owned_dofs.get_view(n_u, n_u + n_p)
  };

  const IndexSet locally_relevant_dofs =
    DoFTools::extract_locally_relevant_dofs(dof_handler);

  relevant_partitioning = {
    locally_relevant_dofs.get_view(0, n_u),
    locally_relevant_dofs.get_view(n_u, n_u + n_p)
};

  {
    constraints.reinit(locally_owned_dofs, locally_relevant_dofs);

    const FEValuesExtractors::Vector velocity(0);
    const FEValuesExtractors::Scalar pressure(dim);
    DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    VectorTools::interpolate_boundary_values(dof_handler,
                                             0,
                                             zero_fxn,
                                             constraints,
                                             fe.component_mask(velocity));
    constraints.close();
  }

  {
      system_matrix.clear();

      Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
      for (unsigned int c = 0; c < dim + 1; ++c)
        for (unsigned int d = 0; d < dim + 1; ++d)
          coupling[c][d] = DoFTools::always;

      BlockDynamicSparsityPattern dsp(relevant_partitioning);

      DoFTools::make_sparsity_pattern(
        dof_handler, coupling, dsp, constraints, false);

      SparsityTools::distribute_sparsity_pattern(
        dsp,
        dof_handler.locally_owned_dofs(),
        mpi_communicator,
        locally_relevant_dofs);

      system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
  }

  {
      preconditioner_matrix.clear();
 
      Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
      for (unsigned int c = 0; c < dim + 1; ++c)
        for (unsigned int d = 0; d < dim + 1; ++d)
          if (c == dim && d == dim)
            coupling[c][d] = DoFTools::always;
          else
            coupling[c][d] = DoFTools::none;
 
      BlockDynamicSparsityPattern dsp(relevant_partitioning);
 
      DoFTools::make_sparsity_pattern(
        dof_handler, coupling, dsp, constraints, false);
      SparsityTools::distribute_sparsity_pattern(
        dsp,
        Utilities::MPI::all_gather(mpi_communicator,
                                   dof_handler.locally_owned_dofs()),
        mpi_communicator,
        locally_relevant_dofs);
      preconditioner_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
    }

solution.reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
old_timestep_solution.reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
old_old_timestep_solution.reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
present_mean_solution.reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
//Interpolate_true_solution.reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
system_rhs.reinit(owned_partitioning, mpi_communicator);

for (unsigned int j = 0; j < Number_of_realizations; ++j)
{
  Storage_nn[j].reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
  Storage_n[j].reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
  fluctuation_solution[j].reinit(owned_partitioning, relevant_partitioning, mpi_communicator);
}
}

template <int dim>
  void StochasticNavierStokesProblem<dim>::assemble_system(double viscosity, double mean_viscosity)
  {
    TimerOutput::Scope t(computing_timer, "assembly");

    system_matrix = 0;
    system_rhs    = 0;
    preconditioner_matrix = 0;

    QGauss<dim> quadrature_formula(degree + 2);

    FEValues<dim> fe_values(fe,
                            quadrature_formula,
                            update_values | update_quadrature_points |
                            update_JxW_values | update_gradients);

    const unsigned int dofs_per_cell = fe.dofs_per_cell;
    const unsigned int n_q_points    = quadrature_formula.size();

    FullMatrix<double> local_matrix(dofs_per_cell, dofs_per_cell);
    FullMatrix<double> local_preconditioner_matrix(dofs_per_cell, dofs_per_cell);
    Vector<double>     local_rhs(dofs_per_cell);
    std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

    std::vector<Vector<double>> rhs_values(n_q_points, Vector<double>(dim + 1));

    const FEValuesExtractors::Vector velocity (0);
    const FEValuesExtractors::Scalar pressure (dim);

    std::vector<Tensor<2, dim>> grad_phi(dofs_per_cell);
    std::vector<double>         div_phi(dofs_per_cell);
    std::vector<double>         phi_p(dofs_per_cell);
    std::vector<Tensor<1, dim>> phi(dofs_per_cell);

    std::vector<Tensor<1, dim>> old_time_values(n_q_points);
    std::vector<Tensor<2, dim>> old_gradients_values(n_q_points);
    std::vector<Tensor<1, dim>> old_old_time_values(n_q_points);
    std::vector<Tensor<2, dim>> old_old_gradients_values(n_q_points);
    std::vector<Tensor<1, dim>> old_time_mean_values(n_q_points);
    std::vector<Tensor<1, dim>> old_time_fluc_u_values(n_q_points);

    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          fe_values.reinit(cell);

          local_matrix = 0;
          local_rhs    = 0;
          local_preconditioner_matrix = 0;

          fe_values[velocity].get_function_values(old_old_timestep_solution, old_old_time_values);
          fe_values[velocity].get_function_gradients(old_old_timestep_solution, old_old_gradients_values);
          fe_values[velocity].get_function_values(present_mean_solution, old_time_mean_values);
          fe_values[velocity].get_function_values(old_timestep_solution, old_time_values);
          fe_values[velocity].get_function_gradients(old_timestep_solution, old_gradients_values);

          right_hand_side.vector_value_list(fe_values.get_quadrature_points(), rhs_values);

          for (unsigned int q = 0; q < n_q_points; ++q)
            {
              for (unsigned int k = 0; k < dofs_per_cell; ++k)
                {
                  grad_phi[k] = fe_values[velocity].gradient(k, q);
                  div_phi[k]  = fe_values[velocity].divergence(k, q);
                  phi_p[k]    = fe_values[pressure].value(k, q);
                  phi[k]      = fe_values[velocity].value(k, q);
                }

              // NOTE: get_function_values() fills the values at *all*
              // quadrature points of the cell at once; calling it here,
              // inside the loop over q, repeats that work n_q_points times
              // per realization. Hoisting these calls out of the q-loop
              // (storing one value vector per realization) would avoid the
              // redundant work without changing the result.
              double prandtl_mixing_length = 0.0;
              for (unsigned int j = 0; j < Number_of_realizations; ++j)
                {
                  fe_values[velocity].get_function_values(fluctuation_solution[j], old_time_fluc_u_values);
                  prandtl_mixing_length += old_time_fluc_u_values[q] * old_time_fluc_u_values[q];
                }

              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                {
                  for (unsigned int j = 0; j < dofs_per_cell; ++j)
                    {
                      local_matrix(i, j) +=
                        (1.5 * phi[i] * phi[j] +
                         time_step * mean_viscosity * scalar_product(grad_phi[i], grad_phi[j]) -
                         time_step * div_phi[i] * phi_p[j] +
                         time_step * gamma * div_phi[i] * div_phi[j] +
                         0.5 * time_step * grad_phi[j] * old_time_mean_values[q] * phi[i] -
                         0.5 * time_step * grad_phi[i] * old_time_mean_values[q] * phi[j] -
                         time_step * phi_p[i] * div_phi[j] +
                         2.0 * mu * pow(time_step, 2) * prandtl_mixing_length * scalar_product(grad_phi[i], grad_phi[j])
                        )*fe_values.JxW(q);

                      local_preconditioner_matrix(i, j) += 1/ mean_viscosity *
                        phi_p[i] * phi_p[j] * fe_values.JxW(q);
                    }

                  const unsigned int component_i = fe.system_to_component_index(i).first;

                  local_rhs(i) +=
                    (time_step * fe_values.shape_value(i, q) * rhs_values[q](component_i) +
                     2.0 * old_time_values[q] * phi[i] -
                     0.5 * old_old_time_values[q] * phi[i] -
                     2.0 * time_step * (viscosity - mean_viscosity) * scalar_product(grad_phi[i], old_gradients_values[q]) +
                     time_step * (viscosity - mean_viscosity) * scalar_product(grad_phi[i], old_old_gradients_values[q]) -
                     2.0 * time_step * old_time_values[q] * old_gradients_values[q] * phi[i] +
                     time_step * old_old_time_values[q] * old_gradients_values[q] * phi[i] +
                     time_step * old_time_mean_values[q] * old_gradients_values[q] * phi[i] +
                     time_step * old_time_values[q] * old_old_gradients_values[q] * phi[i] -
                     0.5 * time_step * old_old_time_values[q] * old_old_gradients_values[q] * phi[i] -
                     0.5 * time_step * old_time_mean_values[q] * old_old_gradients_values[q] * phi[i] +
                     2.0 * time_step * old_time_values[q] * grad_phi[i] * old_time_values[q] -
                     time_step * old_time_values[q] * grad_phi[i] * old_old_time_values[q] -
                     time_step * old_time_mean_values[q] * grad_phi[i] * old_time_values[q] -
                     time_step * old_old_time_values[q] * grad_phi[i] * old_time_values[q] +
                     0.5 * time_step * old_old_time_values[q] * grad_phi[i] * old_old_time_values[q] +
                     0.5 * time_step * old_time_mean_values[q] * grad_phi[i] * old_old_time_values[q])
                    * fe_values.JxW(q);
                }
            }

          cell->get_dof_indices(local_dof_indices);
          constraints.distribute_local_to_global(local_matrix, local_rhs,
                                                 local_dof_indices,
                                                 system_matrix, system_rhs);
          constraints.distribute_local_to_global(local_preconditioner_matrix,
                                                 local_dof_indices,
                                                 preconditioner_matrix);
        }

    system_matrix.compress(VectorOperation::add);
    preconditioner_matrix.compress(VectorOperation::add);
    system_rhs.compress(VectorOperation::add);
    //AssertIsFinite(system_matrix);
}

template <int dim>
void StochasticNavierStokesProblem<dim>::solve()
{
  TimerOutput::Scope t(computing_timer, "solve");

  // AMG preconditioner for velocity block
  LA::MPI::PreconditionAMG prec_A;
  {
    LA::MPI::PreconditionAMG::AdditionalData data;
#ifdef USE_PETSC_LA
    data.symmetric_operator = true;
#endif
    prec_A.initialize(system_matrix.block(0, 0), data);
  }

  // AMG preconditioner for pressure block (for Schur complement)
  LA::MPI::PreconditionAMG prec_S;
  {
    LA::MPI::PreconditionAMG::AdditionalData data;
#ifdef USE_PETSC_LA
    data.symmetric_operator = true;
#endif
    prec_S.initialize(preconditioner_matrix.block(1, 1), data);
  }

  using mp_inverse_t = LinearSolvers::InverseMatrix<LA::MPI::SparseMatrix,
                                                    LA::MPI::PreconditionAMG>;
  const mp_inverse_t mp_inverse(preconditioner_matrix.block(1, 1), prec_S);

  const LinearSolvers::BlockDiagonalPreconditioner<LA::MPI::PreconditionAMG,
                                                   mp_inverse_t>
    preconditioner(prec_A, mp_inverse);

  SolverControl solver_control(system_matrix.m(),
                               1e-10 * system_rhs.l2_norm());

  SolverGMRES<LA::MPI::BlockVector>::AdditionalData gmres_data(100); // restart every 100
  SolverGMRES<LA::MPI::BlockVector> solver(solver_control, gmres_data);

  LA::MPI::BlockVector distributed_solution(owned_partitioning,
                                            mpi_communicator);

  constraints.set_zero(distributed_solution);

  solver.solve(system_matrix,
               distributed_solution,
               system_rhs,
               preconditioner);

  pcout << "   Solved in " << solver_control.last_step() << " iterations."
        << std::endl;

  constraints.distribute(distributed_solution);

  solution = distributed_solution;
}


template <int dim>
void StochasticNavierStokesProblem<dim>::output_results(const unsigned int refinement_cycle) const
{
  std::vector<std::string> solution_names(dim, "velocity");
  solution_names.emplace_back("pressure");

  std::vector<DataComponentInterpretation::DataComponentInterpretation>
    data_component_interpretation(dim, DataComponentInterpretation::component_is_part_of_vector);
  data_component_interpretation.push_back(DataComponentInterpretation::component_is_scalar);

  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(present_mean_solution,
                           solution_names,
                           DataOut<dim>::type_dof_data,
                           data_component_interpretation);

  data_out.build_patches();

  std::string filename = "solution-" + Utilities::int_to_string(refinement_cycle, 2) + ".vtu";

  // Write output in parallel
  data_out.write_vtu_in_parallel(filename, mpi_communicator);
}

template <int dim>
void StochasticNavierStokesProblem<dim>::Energy(double t)
{
  const ComponentSelectFunction<dim> velocities(std::make_pair(0, dim), dim + 1);
  Vector<float> difference_per_cell(triangulation.n_active_cells());

  VectorTools::integrate_difference(dof_handler,
                                    present_mean_solution,
                                    Functions::ZeroFunction<dim>(dim + 1),
                                    difference_per_cell,
                                    QGauss<dim>(degree + 2),
                                    VectorTools::H1_seminorm,
                                    &velocities);

  // Reduce across MPI ranks on the *squared* per-cell sums, not by summing
  // per-rank l2 norms (sum of norms != norm of the whole vector). This is
  // what VectorTools::compute_global_error() does, as in step-55. Note that
  // present_mean_solution must carry ghost entries here, otherwise
  // integrate_difference() reads invalid values on ranks > 0 in parallel.
  const double global_H1_seminorm =
    VectorTools::compute_global_error(triangulation,
                                      difference_per_cell,
                                      VectorTools::H1_seminorm);
  const double energy_mean = 0.5 * global_H1_seminorm;

  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0 && pFile)
    std::fprintf(pFile, "%f  %f\n", t, energy_mean);
}

template <int dim>
void StochasticNavierStokesProblem<dim>::print_mesh_info(const Triangulation<dim> &tria,
                                                         const std::string        &filename)
{
  ConditionalOStream pcout(std::cout, Utilities::MPI::this_mpi_process(mpi_communicator) == 0);

  pcout << "Mesh info:" << std::endl
        << " dimension: " << dim << std::endl
        << " no. of cells: " << tria.n_active_cells() << std::endl;

  {
    std::map<unsigned int, unsigned int> boundary_count;
    for (const auto &cell : tria.active_cell_iterators())
      {
        for (const auto &face : cell->face_iterators())
          {
            if (face->at_boundary())
              boundary_count[face->boundary_id()]++;
          }
      }

    pcout << " boundary indicators: ";
    for (const auto &pair : boundary_count)
    {
      pcout << pair.first << " (" << pair.second << " times) ";
    }
    pcout << std::endl;
  }

  // Only rank 0 writes the EPS file
  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    {
      std::ofstream out(filename);
      GridOut grid_out;
      grid_out.write_eps(tria, out);
      pcout << " written to " << filename << std::endl << std::endl;
    }
}

template <int dim>
void StochasticNavierStokesProblem<dim>::run ()
{
  //double viscosity;                                            
  double mean_viscosity = 0.0;

  pFile = fopen("energy.txt", "w");

  pcout << "Problem Data:" << std::endl;
  pcout << "--------------------------------------------------" << std::endl;
  pcout << "expected_viscosity = " << expected_viscosity << std::endl;
  pcout << "gamma = " << gamma << std::endl;
  pcout << "epsilon = " << epsilon << std::endl;
  pcout << "mu = " << mu << std::endl;
  pcout << "n_global_refines = " << n_global_refines << std::endl;
  pcout << "--------------------------------------------------" << std::endl;

  double random1[20] = {0.975712,-0.345349,-0.166474,-0.081588,0.088924,0.640705,-0.607515,-0.022931,0.635968,0.847934,0.242202,0.055026,-0.695076,0.886830,-0.724688,-0.162158,0.521596,-0.688013,0.453780,-0.325601};
  double random2[20] = {0.414552,-0.233426,0.786402,0.208497,-0.381272,0.370817,-0.265752,0.935250,-0.470776,-0.226254,-0.039371,0.277783,0.804871,0.365681,0.284943,0.551865,0.075924,-0.262991,-0.381761,0.284678};

  //double ep = 0.0;
  double factor = 1.0;
  unsigned int Number_of_realizations = 20;

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
  {
    mean_viscosity += expected_viscosity / Number_of_realizations + factor * random1[j] * expected_viscosity / 10.0 / Number_of_realizations;
  }
  GridGenerator::hyper_cube (triangulation, 0, 1);
  boundary_ids = triangulation.get_boundary_ids();
  triangulation.refine_global(n_global_refines);

  setup_dofs();

  time = 0.0;

  initial_condition.set_time(time);

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
  {
    double ep = epsilon * random2[j];
    initial_condition.set_parameter(ep);
    LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
    VectorTools::interpolate(dof_handler, initial_condition, tmp);
    Storage_nn[j] = tmp;
  }

  time = time_step;
  pcout << "Time step size = " << time_step << ", Number of time steps = " << timestep_number << std::endl;
  initial_condition.set_time(time);

  for (unsigned int j = 0; j < Number_of_realizations; ++j)
  {
    double ep = epsilon * random2[j];
    initial_condition.set_parameter(ep);
    LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
    VectorTools::interpolate(dof_handler, initial_condition, tmp);
    Storage_n[j] = tmp;
  }

  old_old_timestep_solution = 0;
  old_timestep_solution = 0;
  initial_condition.set_parameter(0.0);

  for (unsigned int n = 2; n <= timestep_number; ++n)
  {
    present_mean_solution = 0;
    time = time_step * n;

    right_hand_side.set_time(time);
    zero_fxn.set_time(time);
    initial_condition.set_time(time);
    LA::MPI::BlockVector mean_solution(owned_partitioning, mpi_communicator);
    mean_solution = 0;

    for (unsigned int j = 0; j < Number_of_realizations; ++j)
    {
      double viscosity = expected_viscosity + random1[j] * expected_viscosity / 10.0;
      double ep = epsilon * random2[j];

      right_hand_side.set_parameter(ep, viscosity);
      zero_fxn.set_parameter(ep);
      initial_condition.set_parameter(ep);

      old_old_timestep_solution = Storage_nn[j];
      old_timestep_solution = Storage_n[j];

      assemble_system(viscosity, mean_viscosity);
      solve();

      {
        LA::MPI::BlockVector tmp(owned_partitioning, mpi_communicator);
        tmp = Storage_n[j];
        Storage_nn[j] = tmp;

        tmp = solution;
        Storage_n[j] = tmp;
      }

      LA::MPI::BlockVector tmp_mean(owned_partitioning, mpi_communicator);
      tmp_mean.add(2.0 / Number_of_realizations, Storage_n[j],
                 -1.0 / Number_of_realizations, Storage_nn[j]);
      mean_solution += tmp_mean;
      LA::MPI::BlockVector tmp_fluct(owned_partitioning, mpi_communicator);
      tmp_fluct = 0;
      fluctuation_solution[j] = tmp_fluct;
    }

    for (unsigned int j = 0; j < Number_of_realizations; ++j)
      {
        LA::MPI::BlockVector tmp_fluct(owned_partitioning, mpi_communicator);
        tmp_fluct.add(2.0, Storage_n[j], -1.0, Storage_nn[j]);
        tmp_fluct.add(-1.0, mean_solution);
        fluctuation_solution[j] = tmp_fluct;
      }
    present_mean_solution = mean_solution;

    output_results(n);
    Energy(time);
  }

  pcout << "\nTotal number of timesteps: " << timestep_number << std::endl;
}
} // namespace SNSE

int main(int argc, char *argv[])
{
  try
    {
      using namespace dealii;
      using namespace SNSE;

      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
      const MPI_Comm mpi_communicator = MPI_COMM_WORLD;

      ConditionalOStream pcout(std::cout,
                               Utilities::MPI::this_mpi_process(mpi_communicator) == 0);

      deallog.depth_console(0);

      RunTimeParameters::Data_Storage data;
      data.read_data("parameter-file.prm");

      int n;
      for (unsigned int i = 4; i < 5; ++i)
        {
          // n = static_cast<int>(std::pow(2, i));
          n = 50;

          StochasticNavierStokesProblem<2> flow_problem(n, data);
          flow_problem.run();
        }
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl;
      std::cerr << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
