[deal.II] Re: Error during checkpoint / restart using parallel distributed solution transfer

2016-07-07 Thread Jean-Paul Pelteret
Hi Rajat,

Great, thanks for marking the solution. Unless you want to do so yourself, 
I'll post an issue on GitHub tomorrow to indicate that the error message 
could be made more clear. 

Regards,
J-P

P.S. I'm not a Prof., but rather an ordinary PhD. But thanks for the ego 
boost anyway ;-)

On Thursday, July 7, 2016 at 9:14:52 PM UTC+2, RAJAT ARORA wrote:
>
> Hello,
>
> Yes professor. I thought this might be helpful to anyone who gets a 
> similar error and is looking for a solution.
>
> On Wednesday, July 6, 2016 at 3:58:05 AM UTC-4, Jean-Paul Pelteret wrote:
>>
>> Hi Rajat,
>>
>> Thanks for posting how you solved your problem! I'm glad to know that you 
>> found the issue, and this will be helpful to know. Perhaps we can 
>> mention/reinforce it in the documentation.
>>
>> Regards,
>> J-P
>>
>> On Sunday, July 3, 2016 at 9:22:06 PM UTC+2, RAJAT ARORA wrote:
>>>
>>> Hello,
>>>
>>> This was caused by the vector locally_owned_x.
>>> As per the documentation, the vector passed in for serialization should be
>>> ghosted.
>>> After making that change, the code works fine.
>>>
>>>
>>> On Sunday, July 3, 2016 at 2:52:57 PM UTC-4, RAJAT ARORA wrote:

 Hello all,

 I am trying to use the checkpoint / restart code with the help of the 
 example given in the documentation here 
 
 I am getting a long error while saving a parallel vector and parallel 
 distributed triangulation using the following code.

 Code Used:


 string triangulation_name = "mesh_restart";

 parallel::distributed::SolutionTransfer<3, LA::MPI::Vector>
 sol_trans_x(dof_handler);
 sol_trans_x.prepare_serialization(locally_owned_x);

 triangulation.save(triangulation_name.c_str());


 // LA::MPI::Vector corresponds to a PETSc MPI vector.
 // locally_owned_x is the solution vector, reinitialized as
 // locally_owned_x.reinit(locally_owned_dofs, MPI_COMM_WORLD);



 Error here:


 An error occurred in line <1253> of file
 </home/rajat/Documents/Code-Libraries/deal/dealii8.3/dealii-8.3.0/include/deal.II/lac/petsc_vector_base.h>
 in function
 void dealii::PETScWrappers::VectorBase::extract_subvector_to(
 ForwardIterator, ForwardIterator, OutputIterator) const [with
 ForwardIterator = const unsigned int*; OutputIterator = double*]
 The violated condition was:
 index>=static_cast<unsigned int>(begin) && index<static_cast<unsigned int>(end)
 The name and call sequence of the exception was:
 ExcInternalError()
 Additional Information:
 This exception -- which is used in many places in the library --
 usually indicates that some condition which the author of the code thought
 must be satisfied at a certain point in an algorithm, is not fulfilled.
 An example would be that the first part of an algorithm sorts elements
 of an array in ascending order, and a second part of the algorithm
 later encounters an element that is not larger than the previous one.


 There is usually not very much you can do if you encounter such an 
 exception since it indicates an error in deal.II, not in your own 
 program. Try to come up with the smallest possible program that still 
 demonstrates the error and contact the deal.II mailing lists with it 
 to obtain help.


 Stacktrace:
 ---
 #0  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
     void dealii::PETScWrappers::VectorBase::extract_subvector_to<unsigned int const*, double*>(unsigned int const*, unsigned int const*, double*) const
 #1  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
     void dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>::get_dof_values<dealii::PETScWrappers::MPI::Vector, double*>(dealii::PETScWrappers::MPI::Vector const&, double*, double*) const
 #2  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
     void dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>::get_interpolated_dof_values<dealii::PETScWrappers::MPI::Vector, double>(dealii::PETScWrappers::MPI::Vector const&, dealii::Vector<double>&, unsigned int) const
 #3  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
     dealii::parallel::distributed::SolutionTransfer<3, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<3, 3> >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<3, 3> > const&, dealii::parallel::distributed::Triangulation<3, 3>::CellStatus, void*)
 #4  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:

[deal.II] Re: Error during checkpoint / restart using parallel distributed solution transfer

2016-07-07 Thread RAJAT ARORA
Hello,

Yes professor. I thought this might be helpful to anyone who gets a similar 
error and is looking for a solution.
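
For reference, the corrected save step looks roughly like this (just a sketch;
locally_relevant_x and locally_relevant_dofs are placeholder names for the
ghosted vector and its index set, not code from my original post):

IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

LA::MPI::Vector locally_relevant_x;                 // ghosted copy of the solution
locally_relevant_x.reinit(locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);
locally_relevant_x = locally_owned_x;               // imports owned values and ghost entries

parallel::distributed::SolutionTransfer<3, LA::MPI::Vector> sol_trans_x(dof_handler);
sol_trans_x.prepare_serialization(locally_relevant_x);   // pass the ghosted vector here
triangulation.save("mesh_restart");

// On restart: triangulation.load("mesh_restart"), redistribute the dofs, and
// then call sol_trans_x.deserialize() into a fully distributed vector.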

On Wednesday, July 6, 2016 at 3:58:05 AM UTC-4, Jean-Paul Pelteret wrote:
>
> Hi Rajat,
>
> Thanks for posting how you solved your problem! I'm glad to know that you 
> found the issue, and this will be helpful to know. Perhaps we can 
> mention/reinforce it in the documentation.
>
> Regards,
> J-P
>
> On Sunday, July 3, 2016 at 9:22:06 PM UTC+2, RAJAT ARORA wrote:
>>
>> Hello,
>>
>> This was caused by the vector locally_owned_x.
>> As per the documentation, the vector passed in for serialization should be
>> ghosted.
>> After making that change, the code works fine.
>>
>>
>> On Sunday, July 3, 2016 at 2:52:57 PM UTC-4, RAJAT ARORA wrote:
>>>
>>> Hello all,
>>>
>>> I am trying to use the checkpoint / restart code with the help of the 
>>> example given in the documentation here 
>>> 
>>> I am getting a long error while saving a parallel vector and parallel 
>>> distributed triangulation using the following code.
>>>
>>> Code Used:
>>>
>>>
>>> string triangulation_name = "mesh_restart";
>>>
>>> parallel::distributed::SolutionTransfer<3, LA::MPI::Vector>
>>> sol_trans_x(dof_handler);
>>> sol_trans_x.prepare_serialization(locally_owned_x);
>>>
>>> triangulation.save(triangulation_name.c_str());
>>>
>>>
>>> // LA::MPI::Vector corresponds to a PETSc MPI vector.
>>> // locally_owned_x is the solution vector, reinitialized as
>>> // locally_owned_x.reinit(locally_owned_dofs, MPI_COMM_WORLD);
>>>
>>>
>>>
>>> Error here:
>>>
>>>
>>> An error occurred in line <1253> of file
>>> </home/rajat/Documents/Code-Libraries/deal/dealii8.3/dealii-8.3.0/include/deal.II/lac/petsc_vector_base.h>
>>> in function
>>> void dealii::PETScWrappers::VectorBase::extract_subvector_to(
>>> ForwardIterator, ForwardIterator, OutputIterator) const [with
>>> ForwardIterator = const unsigned int*; OutputIterator = double*]
>>> The violated condition was:
>>> index>=static_cast<unsigned int>(begin) && index<static_cast<unsigned int>(end)
>>> The name and call sequence of the exception was:
>>> ExcInternalError()
>>> Additional Information:
>>> This exception -- which is used in many places in the library --
>>> usually indicates that some condition which the author of the code thought
>>> must be satisfied at a certain point in an algorithm, is not fulfilled.
>>> An example would be that the first part of an algorithm sorts elements
>>> of an array in ascending order, and a second part of the algorithm
>>> later encounters an element that is not larger than the previous one.
>>>
>>>
>>> There is usually not very much you can do if you encounter such an 
>>> exception since it indicates an error in deal.II, not in your own 
>>> program. Try to come up with the smallest possible program that still 
>>> demonstrates the error and contact the deal.II mailing lists with it to 
>>> obtain help.
>>>
>>>
>>> Stacktrace:
>>> ---
>>> #0  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
>>>     void dealii::PETScWrappers::VectorBase::extract_subvector_to<unsigned int const*, double*>(unsigned int const*, unsigned int const*, double*) const
>>> #1  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
>>>     void dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>::get_dof_values<dealii::PETScWrappers::MPI::Vector, double*>(dealii::PETScWrappers::MPI::Vector const&, double*, double*) const
>>> #2  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
>>>     void dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>::get_interpolated_dof_values<dealii::PETScWrappers::MPI::Vector, double>(dealii::PETScWrappers::MPI::Vector const&, dealii::Vector<double>&, unsigned int) const
>>> #3  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
>>>     dealii::parallel::distributed::SolutionTransfer<3, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<3, 3> >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<3, 3> > const&, dealii::parallel::distributed::Triangulation<3, 3>::CellStatus, void*)
>>> #4  /home/rajat/Documents/Code-Libraries/deal/dealii8.3/installed-dealii/lib/libdeal_II.g.so.8.3.0:
>>> std::_Function_handler>> 3> > const&, dealii::parallel::distributed::Triangulation<3, 
>>> 3>::CellStatus, void*), std::_Bind> (dealii::parallel::distributed::SolutionTransfer<3, 
>>> dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<3, 3> 
>>> >::*)(dealii::TriaIterator > const&, 
>>> dealii::parallel::distributed::Triangulation<3, 3>::CellStatus, void*)> 
>>> (dealii::parallel::distributed::SolutionTransfer<3, 
>>> 

[deal.II] Re: Geometry and boundary conditions

2016-07-07 Thread Jean-Paul Pelteret
Hi Benhour,

Have you looked at any of the tutorials on geometry creation and solid 
mechanics? I believe that these specific points are covered there.
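
To give you a head start, here is a rough sketch of one way to do this in
deal.II (only a sketch: the radii, boundary-id variables, and extractor names
below are assumptions you would have to adapt; check the colorize
documentation of quarter_hyper_shell for the actual face indicators):

Triangulation<2> triangulation;
GridGenerator::quarter_hyper_shell(triangulation,
                                   Point<2>(),        // common center of both arcs
                                   inner_radius,
                                   outer_radius,
                                   0, /*colorize=*/true); // each face gets its own boundary id

// On an axis-aligned face, the symmetry condition n.u = 0 simply fixes the
// normal displacement component (u_x on the vertical side, u_y on the
// horizontal side).
ConstraintMatrix constraints;
const FEValuesExtractors::Scalar x_displacement(0), y_displacement(1);
VectorTools::interpolate_boundary_values(dof_handler, vertical_side_id,
                                         ZeroFunction<2>(2), constraints,
                                         fe.component_mask(x_displacement));
VectorTools::interpolate_boundary_values(dof_handler, horizontal_side_id,
                                         ZeroFunction<2>(2), constraints,
                                         fe.component_mask(y_displacement));
constraints.close();

// The load on the curved outer boundary enters as a surface integral during
// assembly, following the pattern of step-8 / step-18.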

Regards,
J-P

On Thursday, July 7, 2016 at 5:21:30 PM UTC+2, benhour.amiria...@gmail.com 
wrote:
>
> Dear Daniel,
> Thanks very much for your response. I should model a whole circle, but for 
> simplicity I want to model one quarter of it. In fact I have two 
> quarter-circles with different radii and the same center. I have a boundary 
> load on the curved (perimeter) side of the circle, axial symmetry on the 
> vertical side, and symmetry (n.u = 0, where n is the unit normal vector and 
> u is the displacement) on the horizontal side. My figure therefore has three 
> vertices. It should be noted that the center of the quarter is fixed. In 
> addition, the symmetry axis coincides with the vertical axis. I would really 
> appreciate your help in defining this geometry and these boundary conditions 
> for this problem.
>
> Thanks,
> Benhour
>
> On Wednesday, July 6, 2016 at 5:08:29 PM UTC-5, benhour@gmail.com 
> wrote:
>>
>> Dear All,
>> I want to define a new geometry (a quarter of a circle) and apply 
>> symmetry boundary conditions on two sides of the quarter. It would be very 
>> kind of you to help me with that.
>>
>> Best,
>> Benhour
>>
>



Re: [deal.II] Question concerning BlockSparsityPattern.copy_from() member function

2016-07-07 Thread Martin Kronbichler

Dear Dustin,



>> How is your computational setup, i.e., how many nonzero entries do
>> you have in your matrix?
>
> I'm not sure if I understand what you mean. Do you mean the number of
> nonzero entries in *SparseMatrix* or in the *BlockSparsityPattern* or
> in the dynamic one? How can I get this information?
You can call DynamicSparsityPattern::n_nonzero_elements() to get the 
number of nonzero entries in the dynamic sparsity pattern. This method 
also exists in BlockSparsityPattern (and in all sparsity patterns that 
inherit from BlockSparsityPatternBase):

https://dealii.org/developer/doxygen/deal.II/classBlockSparsityPatternBase.html
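
In code, that check is just the following (a sketch with placeholder names;
dsp, sparsity_pattern and n_blocks stand for whatever you build in your
setup_system):

BlockDynamicSparsityPattern dsp(n_blocks, n_blocks);
// ... set the block sizes, dsp.collect_sizes(),
//     DoFTools::make_sparsity_pattern(dof_handler, dsp), ...
std::cout << "nonzeros (dynamic): " << dsp.n_nonzero_elements() << std::endl;

BlockSparsityPattern sparsity_pattern;
sparsity_pattern.copy_from(dsp);   // the copy whose timing we are discussing
std::cout << "nonzeros (static):  " << sparsity_pattern.n_nonzero_elements()
          << std::endl;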

What I'm trying to understand here is what kind of properties your 
problem has - whether there are many nonzero entries per row and other 
special things that could explain your problems.


I just checked the 3D case of step-22 for the performance of 
BlockSparsityPattern::copy_from(BlockDynamicSparsityPattern), and the 
performance is where I would expect it to be. It takes 1.19 s to copy 
the sparsity pattern for a case with 1.6m DoFs (I have some 
modifications to the mesh compared to what you find online) on my 
laptop. Given that there are 275m nonzero entries in that matrix and I 
need to touch around 4.4 GB (= 4 x 275m x 4 bytes per unsigned int: once 
for clearing the data in the pattern, once for reading the dynamic 
pattern, once for writing into the fixed pattern, plus once for the 
write-allocate on that last operation) of memory here, I reach 26% of 
the theoretical peak on this machine (~14 GB/s memory transfer per 
core). While I would know how to reach more than 80% of peak memory 
bandwidth here, this function is nowhere near being relevant to the 
global run time in any of my performance profiles. And I'm likely the 
deal.II person with the most affinity for performance numbers.
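
(Spelled out: 4 passes x 275e6 entries x 4 bytes is about 4.4 GB of traffic; 
4.4 GB / 1.19 s is about 3.7 GB/s; and 3.7 / 14 is roughly 26% of the assumed 
~14 GB/s per-core bandwidth.)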


Thus my interest in what is particular about your setup.


>> Have you checked that you do not run out of memory and see a large
>> swap time?
>
> I'm quite sure that this is not the case/problem, since I used one of
> our compute servers with 64 GB of memory. Moreover, at the moment the
> program runs with an additional global refinement, i.e. about 16
> million dofs, and only 33% of the memory is used. Swap isn't used at all.
That's good to know, so we can exclude the memory issue. Does your 
program use multithreading? It probably does if you did not do 
anything special when configuring deal.II; the copy operation is not 
parallelized by threads, but neither are most other initialization 
functions, so it should not stand out with such a disproportionate timing. 
10h for 2.5m dofs looks insane. I would expect something between 0.5 and 
10 seconds, depending on the number of nonzeros in those blocks.


Is there anything else special about your configuration or problem 
compared to the cases presented in the tutorials? Which deal.II version 
are you using, and what is the finite element? Any special constraints on 
those systems?


> Unfortunately this cannot be done that easily. I have to reorganize
> things and remove a lot of superfluous code. But besides that, I have a
> lot of other work to do. Maybe I can provide you with an example file at
> the end of next week.
Let us know when you have a test case. I'm really curious what could 
cause this huge run time.


Best,
Martin



Re: [deal.II] Question concerning BlockSparsityPattern.copy_from() member function

2016-07-07 Thread Dustin Kumor
Dear Martin,

thank you for answering that quickly. 

> How is your computational setup, i.e., how many nonzero entries do you have 
> in your matrix? 
>
I'm not sure if I understand what you mean. Do you mean the number of 
nonzero entries in *SparseMatrix* or in the *BlockSparsityPattern* or in 
the dynamic one? How can I get this information?
 

> Have you checked that you do not run out of memory and see a large swap 
> time?
>
I'm quite sure that this is not the case/problem, since I used one of our 
compute servers with 64 GB of memory. Moreover, at the moment the program runs 
with an additional global refinement, i.e. about 16 million dofs, and only 
33% of the memory is used. Swap isn't used at all.

>  

> How do the run times behave when you choose a smaller problem size? (I 
> wonder if there is some higher than O(N) complexity somewhere.)
>
Even in these cases the time needed to copy is comparatively long. I 
attached a log file in which the times are listed for different numbers 
of dofs.

> It would be very helpful if you could provide us an example file that 
> only contains the setup phase so we can investigate the issue further.
>
Unfortunately this cannot be done that easily. I have to reorganize things 
and remove a lot of superfluous code. But besides that, I have a lot of other 
work to do. Maybe I can provide you with an example file at the end of next 
week.

Best regards,
Dustin

>

JobId alliance Thu Jun 23 13:22:53 2016

*
*  Starting Simulation on Thu, 23.06.2016 at 13:22:53
**

Cycle::0 (Initial mesh)
Setup:: ...
Setup:setup_intial_fe_distribution:: ...
Setup:setup_intial_fe_distribution:Time:: < 1 sec
setup_intial_fe_distribution, wall time: 0.000339985s.
Setup:setup_system:: ...
Setup:setup_system:setup_data:: ...
Setup:setup_system:setup_data:detect_contact_interface:: ...
Setup:setup_system:setup_data:detect_contact_interface:Time:: < 1 sec
Setup:setup_system:setup_data:setup_interface_q_point_data:: ...
Setup:setup_system:setup_data:setup_interface_q_point_data:Time:: < 1 sec
Setup:setup_system:setup_data:setup_interface_dof_connections:: ...
Setup:setup_system:setup_data:setup_interface_dof_connections:Time:: < 1 sec
Setup:setup_system:setup_data:Time:: < 1 sec
Setup:setup_system:hanging_node_constraints:: ...
Setup:setup_system:hanging_node_constraints:Time:: < 1 sec
Setup:setup_system:interpolate_boundary_values:: ...
Setup:setup_system:interpolate_boundary_values:Time:: < 1 sec
Setup:setup_system:reinit_sparsity_pattern:: ...
Setup:setup_system:reinit_sparsity_pattern:Time:: < 1 sec
Setup:setup_system:make_sparsity_pattern_deal:: ...
Setup:setup_system:make_sparsity_pattern_deal:Time:: < 1 sec
Setup:setup_system:make_biorthogonal_sparsity_pattern:: ...
Setup:setup_system:make_biorthogonal_sparsity_pattern:Time:: < 1 sec
Setup:setup_system:copy_sparsity_pattern:: ...
Setup:setup_system:copy_sparsity_pattern:Time:: < 1 sec
Setup:setup_system:merge_and_close_constraints:: ...
Setup:setup_system:merge_and_close_constraints:Time:: < 1 sec
Setup:setup_system:Time:: < 1 sec
Setup:setup_system:: n_dofs dofhandler_m:   81
 n_dofs dofhandler_s:   135
 n_dofs biorthogonal basis: 45
 n_dofs all:261

setup_system, wall time: 0.00929093s.
Setup:: Number of active cells:   24
Setup:: Number of degrees of freedom: 216
Setup:Time:: < 1 sec
Setup, wall time: 0.0101869s.
Assembling::...
Assembling:Time:: < 1 sec
Assembling, wall time: 0.00511193s.
Solving::...
Solving:cg::Starting value 0.130903
Solving:cg::Convergence step 7 value 1.40409e-08
Solving:cg::Starting value 0.103671
Solving:cg::Convergence step 7 value 1.58635e-08
Solving:cg::Starting value 0.0645919
Solving:cg:cg::Starting value 0.00513355
Solving:cg:cg::Convergence step 7 value 1.15629e-09
Solving:cg:cg::Starting value 0.00403430
Solving:cg:cg::Convergence step 7 value 2.42599e-09
Solving:cg:cg::Starting value 0.00101849
Solving:cg:cg::Convergence step 7 value 1.41922e-10
Solving:cg:cg::Starting value 0.000834389
Solving:cg:cg::Convergence step 7 value 5.36044e-10
Solving:cg:cg::Starting value 0.000228587
Solving:cg:cg::Convergence step 7 value 9.29881e-11
Solving:cg:cg::Starting value 0.000295453
Solving:cg:cg::Convergence step 7 value 2.47793e-10
Solving:cg:cg::Starting value 0.000123562
Solving:cg:cg::Convergence step 7 value 3.51006e-11
Solving:cg:cg::Starting value 0.000169932