Ah, Daniel beat me to it. I took too long putting together another bit of 
demo code :-) 

In line with what Daniel suggested, I'd also bet that you're running in debug 
mode. Here's the output of the attached code run in release mode with 
1000x1000x1 repetitions:

$ make release
> [100%] Switch CMAKE_BUILD_TYPE to Release
> -- Autopilot invoked
> -- Run   $ make info  to print a detailed help message
> -- Configuring done
> -- Generating done
> -- Build files have been written to:$/build
> Scanning dependencies of target distib_tria_periodicity
> [ 50%] Building CXX object 
> CMakeFiles/distib_tria_periodicity.dir/distib_tria_periodicity.cc.o
> [100%] Linking CXX executable distib_tria_periodicity
> [100%] Built target distib_tria_periodicity
> Built target release
>
 

$ mpirun -np 2 ./distib_tria_periodicity
> Number of repetitions in coarse grid: 1000x1000x1
> Building grid...
> Number of cells: 1000000
> Collecting periodic face pairs...
> Adding periodicity...  done.
> Finished.
>
> +---------------------------------------------+------------+------------+
> | Total wallclock time elapsed since start    |      55.2s |            |
> |                                             |            |            |
> | Section                         | no. calls |  wall time | % of total |
> +---------------------------------+-----------+------------+------------+
> | Add periodicity                 |         1 |      6.65s |        12% |
> | Build grid                      |         1 |      48.5s |        88% |
> | Collect periodic face pairs     |         1 |    0.0433s |         0% |
> +---------------------------------+-----------+------------+------------+


As you can see, none of these functions takes particularly long once the 
initial grid is built. So it is, in fact, most likely the grid construction 
that takes so much time, and this has nothing to do with the periodicity 
functions.

Make sure that you build in release mode, and play around with the code a 
little more to convince yourself that there isn't actually a problem. I 
always find it useful to print status messages so that I can convince myself 
that my code isn't hanging in some unexpected place.
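
In case it helps, switching build types is a one-liner. With the deal.II 
autopilot makefile (the one that produced the output above) you can simply run

$ make release

and it will reconfigure and rebuild; with a plain CMake setup the equivalent 
should be something along the lines of

$ cmake -DCMAKE_BUILD_TYPE=Release .
$ make

(the exact invocation depends on how your project is set up).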

Jean-Paul

On Tuesday, May 16, 2017 at 10:10:47 PM UTC+2, Daniel Arndt wrote:
>
> Dear Hamed,
>
> However, I think it is not the production of the final refined mesh that 
>> hangs the code, but add_periodicity when the repetition parameter is 
>> relatively large (100, for instance). Since it has been mentioned in the 
>> documentation that refinement of the coarse mesh should be applied after 
>> adding periodicity, I think the problem is that, by using repetitions in 
>> subdivided_hyper_rectangle, I am refining the coarse mesh before 
>> add_periodicity.   
>>  
>>
>>> Can you confirm that the code I've attached reaches the end, i.e. that 
>>> you see this result:
>>>
>>> $ mpirun -np 2 ./distib_tria_periodicity
>>>> Adding periodicity...  done.
>>>> Refinement iteration: 1
>>>> Refinement iteration: 2
>>>> Refinement iteration: 3
>>>> Finished.
>>>
>>>  
>> The code reaches the end for small repetition parameter and hangs for 
>> large ones.
>>
> For me, your code takes about 4s for 30 iterations and 50s for 100 
> iterations.
> In debug mode the code runs in 170s for 30 iterations.
> Have you tried both debug and release mode?
>
> Best,
> Daniel
>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>

#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/timer.h>

#include <iostream>
#include <fstream>
/////////////////////////
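// A minimal timing demo, as described in the message above: build a large
// coarse grid with GridGenerator::subdivided_hyper_rectangle on a
// parallel::distributed::Triangulation, collect the periodic face pairs,
// add the periodicity, and report wall times for each step via TimerOutput.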
using namespace dealii;

template <int dim>
  class Solid
  {
  public:
    Solid();

    void    make_grid();

  private:
    MPI_Comm                         mpi_communicator;
    parallel::distributed::Triangulation<dim> triangulation;

    ConditionalOStream pcout;
    TimerOutput timer;
  };

template <int dim>
  Solid<dim>::Solid()
    :
    mpi_communicator (MPI_COMM_WORLD),
    triangulation (mpi_communicator,
                   typename Triangulation<dim>::MeshSmoothing
                   (Triangulation<dim>::smoothing_on_refinement |
                    Triangulation<dim>::smoothing_on_coarsening)),
    pcout(std::cout, Utilities::MPI::this_mpi_process(mpi_communicator) == 0),
    timer (pcout, TimerOutput::summary, TimerOutput::wall_times)
  {}

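// make_grid() builds the coarse grid, collects the periodic face pairs and
// adds the periodicity, timing each step in its own TimerOutput section.
// n_repetitions controls how finely subdivided_hyper_rectangle subdivides the
// coarse grid; n_refinements is the number of global refinement cycles run
// afterwards (0 here, so only the coarse grid is built).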
template <int dim>
  void Solid<dim>::make_grid()
  {
    const unsigned int n_repetitions = 1000;
    const unsigned int n_refinements = 0;

    std::vector<unsigned int> repetitions(dim, n_repetitions);
    if (dim == 3)
      repetitions[dim-1] = 1;
    pcout << "Number of repetitions in coarse grid: "
          << repetitions[0] << "x"
          << repetitions[1] << "x"
          << repetitions[2] << std::endl;

    timer.enter_subsection ("Build grid");
    pcout << "Building grid... " << std::endl;
    GridGenerator::subdivided_hyper_rectangle(triangulation,
                                              repetitions,
                                              Point<dim>(0.0, 0.0, -0.5),
                                              Point<dim>(20.0, 20.0, 0.5),
                                              true);
    timer.leave_subsection();

    // GridGenerator::hyper_rectangle(triangulation,
    //                                Point<dim>(0.0, 0.0, 0.0),
    //                                Point<dim>(20.0, 20.0, 20.0),
    //                                true);

    pcout << "Number of cells: " << triangulation.n_active_cells() << std::endl;

    std::vector<GridTools::PeriodicFacePair<typename parallel::distributed::Triangulation<dim>::cell_iterator> >
      periodicity_vector;

    timer.enter_subsection ("Collect periodic face pairs");
    pcout << "Collecting periodic face pairs... " << std::endl;
    GridTools::collect_periodic_faces(triangulation,
                                      /*b_id1*/ 0,
                                      /*b_id2*/ 1,
                                      /*direction*/ 0,
                                      periodicity_vector);
    timer.leave_subsection();


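    // As discussed above: add_periodicity() must be called while the
    // triangulation is still the coarse mesh, i.e. before any refinement.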
    timer.enter_subsection ("Add periodicity");
    pcout << "Adding periodicity... " << std::flush;
    triangulation.add_periodicity(periodicity_vector);
    pcout << " done. " << std::endl;
    timer.leave_subsection();

    for (unsigned int i=0; i<n_refinements; ++i)
      {
        const std::string section_name = "Refinement " + Utilities::int_to_string(i+1);
        timer.enter_subsection (section_name);
        pcout << "Refinement iteration: " << (i+1) << std::endl;
        triangulation.refine_global (1);
        pcout << "Number of cells: " << triangulation.n_active_cells() << std::endl;
        timer.leave_subsection();
      }

    pcout << "Finished. " << std::endl;
  }

int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  Solid<3> solid_3d;
  solid_3d.make_grid();
}
