[deal.II] Re: Application for the use in continuum mechanics

2019-08-16 Thread Lucas Campos
Dear Lukas,

I fully understand your point; no need to reinvent the wheel. I don't know 
of any such software. However, if this is the only change you need, 
tutorial-44 does seem to be a suitable starting point for you, even though it 
is a somewhat complicated program.

> where you can e.g. easily implement any desired material model but do not 
have to care about the whole assembly and solution process

In that tutorial, the whole material model is contained in four 
functions: get_tau(), get_Jc(), get_dPsi_vol_dJ() and get_d2Psi_vol_dJ2(). 
By creating other objects with the same interface plus some fun templating (or 
by using virtual functions), it should be easy to swap in various materials. 
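
To make that concrete, a minimal sketch of such an interface based on virtual 
functions could look as follows (this is not code from step-44; everything 
except the four getters is made up for illustration):

  // Hypothetical base class mirroring the four getters above; any concrete
  // material model (Neo-Hookean, a growth law, ...) would derive from it.
  template <int dim>
  class MaterialBase
  {
  public:
    virtual ~MaterialBase() = default;

    virtual dealii::SymmetricTensor<2, dim> get_tau() const           = 0;
    virtual dealii::SymmetricTensor<4, dim> get_Jc() const            = 0;
    virtual double                          get_dPsi_vol_dJ() const   = 0;
    virtual double                          get_d2Psi_vol_dJ2() const = 0;
  };

  // The assembly loop then only ever talks to the base class, e.g.
  //   std::unique_ptr<MaterialBase<dim>> material = ...;
  //   const dealii::SymmetricTensor<2, dim> tau = material->get_tau();

That way the assembly and solution process never needs to know which concrete 
material it is talking to.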

Incidentally, I noticed we are geographically close. Do you mind if I send 
you a direct email?

Bests,
Lucas

On Friday, 16 August 2019 10:57:06 UTC+2, Lukas Lamm wrote:
>
> Hey Lucas,
>
> thanks for your response. I was actually searching for some application 
> somehow similar to FE software like e.g. FEAP or Deadalon, where you can 
> e.g. easily implement any desired material model but do not have to care 
> about the whole assembly and solution process. At least not if you do not 
> explicitly want to. Writing it by myself would not be a problem I guess, 
> but would still be some effort. Therefore, I thought that maybe someone 
> else is already working on such a package and I could participate in the 
> development process.
>
> Best regards,
> Lukas
>
>
> Am Freitag, 16. August 2019 10:11:24 UTC+2 schrieb Lucas Campos:
>>
>> Dear Lukas,
>>
>> I am working with Continuum Mechanics, although I would not call my 
>> research "more advanced" than the stuff in tutorial-44, at least from a 
>> computational point of view. Mostly, growing materials. Could you be a bit 
>> more specific on your needs?
>>
>> Bests,
>> Lucas
>>
>> On Thursday, 15 August 2019 09:45:12 UTC+2, Lukas Lamm wrote:
>>>
>>> Dear dealii community,
>>>
>>>
>>> first of all I would like to thank you for this wonderful piece of work 
>>> you all have produced. It is by far the most well designed and documented 
>>> open source FE library I have seen so far.
>>>
>>>
>>> I am working in the field of computational continuum mechanics and was 
>>> wondering if anyone of you knows about (or is even working on) an 
>>> application built on top of dealii for the use in continuum mechanics. 
>>> Something like tutorial 44, but more sophisticated and advanced. If there 
>>> is anything out there, I would be really happy to receive a short feedback 
>>> with links to github etc.
>>>
>>>
>>> Thank you very much for your help.
>>>
>>>
>>> Mit freundlichen Grüßen / kind regards,
>>>
>>>
>>> *Lukas Lamm, M. Sc.* 
>>>
>>> Research Associate / Ph.D. candidate
>>>
>>> RWTH Aachen University
>>> Institute of Applied Mechanics
>>> Mies-van-der-Rohe-Str. 1 Room 308d
>>> D-52074 Aachen
>>>
>>> Phone: +49(0)241 80 25006
>>> Mail: luka...@ifam.rwth-aachen.de 
>>> Web:www.ifam.rwth-aachen.de
>>>
>>>



[deal.II] Re: Application for the use in continuum mechanics

2019-08-16 Thread Lucas Campos
Dear Lukas,

I am working with Continuum Mechanics, although I would not call my 
research "more advanced" than the stuff in tutorial-44, at least from a 
computational point of view. Mostly, growing materials. Could you be a bit 
more specific on your needs?

Bests,
Lucas

On Thursday, 15 August 2019 09:45:12 UTC+2, Lukas Lamm wrote:
>
> Dear dealii community,
>
>
> first of all I would like to thank you for this wonderful piece of work 
> you all have produced. It is by far the most well designed and documented 
> open source FE library I have seen so far.
>
>
> I am working in the field of computational continuum mechanics and was 
> wondering if anyone of you knows about (or is even working on) an 
> application built on top of dealii for the use in continuum mechanics. 
> Something like tutorial 44, but more sophisticated and advanced. If there 
> is anything out there, I would be really happy to receive a short feedback 
> with links to github etc.
>
>
> Thank you very much for your help.
>
>
> Mit freundlichen Grüßen / kind regards,
>
>
> *Lukas Lamm, M. Sc.* 
>
> Research Associate / Ph.D. candidate
>
> RWTH Aachen University
> Institute of Applied Mechanics
> Mies-van-der-Rohe-Str. 1 Room 308d
> D-52074 Aachen
>
> Phone: +49(0)241 80 25006
> Mail: luka...@ifam.rwth-aachen.de  
> Web:www.ifam.rwth-aachen.de
>
>



[deal.II] Re: Publications based on deal.II

2018-04-27 Thread Lucas Campos
There is a paper, not by me, that uses deal.II on 
arXiv: https://arxiv.org/abs/1709.05546

They mention on page 7

> This framework uses the deal.II open source finite element library [30, 31].

On Tuesday, 24 April 2018 19:41:14 UTC+2, Wolfgang Bangerth wrote:
>
>
> All, 
>
> as you may know, we try to list all publications based on deal.II at 
>   http://dealii.org/publications.html 
> We use this list to justify the effort we spend on writing this 
> software, both to our universities as well as the funding agencies that 
> support its development. 
>
> In anticipation of the next release, we would like to update this page. 
> If you have (or know of) a publication that uses deal.II for numerical 
> results and that isn't already listed, please let us know so we can put 
> it on there. For this purpose, publications also include MSc, Diploma or 
> PhD theses, or anything else that may seem appropriate. 
>
> Thanks! 
>Wolfgang 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Re: Problem with parallelization when using hyper_cube_slit

2018-04-13 Thread Lucas Campos
Dear Roberto,

I think the problem is that you are using GridTools::transform (via 
GridTools::rotate), in a distributed triangulation. This should not be 
done, as noted 
here: 
https://www.dealii.org/8.5.0/doxygen/deal.II/namespaceGridTools.html#a212e99cf0d923cebfa04f1d23fa60b04

If this is indeed the root of the issue, I see two options:

1. Rotate your problem, rather than your triangulation, 90 degrees. 
2. You can create a temporary local, non-distributed triangulation, rotate it, and 
finally copy it into the distributed triangulation. That is, something along the 
lines of


    Triangulation<2> tri_tmp;
    GridGenerator::hyper_cube_slit(tri_tmp, -1, 1, false);
    GridTools::rotate(-M_PI/2, tri_tmp);

    triangulation.copy_triangulation(tri_tmp);
    triangulation.refine_global(6);

Bests,
Lucas
On Thursday, 12 April 2018 00:54:05 UTC+2, Roberto Porcù wrote:
>
> Dear all,
>
>   in the reply to my first post I forgot the code.
> I attach it to this reply.
>
> Best
>
> Roberto P.
>
>
> On Tuesday, February 20, 2018 at 7:27:37 PM UTC-5, Roberto Porcù wrote:
>>
>> Dear all,
>>
>> I'm solving a linear elasticity problem on a cracked domain created by 
>> means
>> of the  GridGenerator::hyper_cube_slit  function.
>> When I solve it in serial everything works fine.
>> When I solve it in parallel I get a problem on a node which is on one
>> of the slit boundaries as it is possible to see from the attached 
>> pictures.
>> I don't understand why that is happening because I wrote other parallel 
>> codes in deal.II
>> and everything was always working fine. I've checked my code many times, 
>> also comparing It to the tutorials.
>> I am supposing there is something related to the slit.
>>
>> Thanks in advance,
>> Roberto
>>
>



Re: [deal.II] Improvement of solver documentation

2018-02-25 Thread Lucas Campos
One addendum to my previous post.

By Linear Solver module, I mean this page:
https://www.dealii.org/developer/doxygen/deal.II/group__Solvers.html
By Preconditioners and Relaxation operators, I mean this other page:
https://www.dealii.org/developer/doxygen/deal.II/group__Preconditioners.html

Lucas

On 25 February 2018 at 17:28, Lucas Campos <rmk...@gmail.com> wrote:

> Wolfgang,
>
> > Work on it whenever you have time -- there is certainly no urgency. In
> any case, always start with the current version from github since things
> change over time.
>
> Sure! I already have a fork of deal.ii on my github account.
>
> > Have you thought about where this kind of documentation should go?
>
> Initally, I thought about putting into two different parts. Namely, I
> think the Solvers conditions should go into the Linear Solver module, and
> the Preconditioners part on the Preconditioners and Relaxation operators. I
> am not sure which file generates these pages, but I am sure it is possible
> to find with some grepping.
>
> Do these choices make sense?
>
> > I'll send you the parts in question in separate mail. (The file with
> all slides for these lectures is now at around 20MB and has a total of 1169
> pages ;-) )
>
> Thanks, both for the documentation and for the filtering. 1169 would lead
> to a *lot* of searching around.
>
> Bests,
> Lucas
>
>
> On 25 February 2018 at 17:04, Wolfgang Bangerth <bange...@colostate.edu>
> wrote:
>
>>
>> Lucas,
>>
>> I will start writing it soon. I will post a draft in the next few days.
>>> Unfortunately, I cannot do it much earlier as I will be in a Spring School
>>> the next days.
>>>
>>
>> Work on it whenever you have time -- there is certainly no urgency. In
>> any case, always start with the current version from github since things
>> change over time.
>>
>> Have you thought about where this kind of documentation should go?
>>
>>
>> Regarding the slides, would be nice having them, if you are willing to
>>> share! I think the PDFs should suffice, but it is better to be safe than
>>> sorry, and extra documentation does not hurt!
>>>
>>
>> I'll send you the parts in question in separate mail. (The file with all
>> slides for these lectures is now at around 20MB and has a total of 1169
>> pages ;-) )
>>
>> Cheers
>>  W.
>>
>>
>> --
>> 
>> Wolfgang Bangerth  email: bange...@colostate.edu
>>
>>www: http://www.math.colostate.edu/~bangerth/
>>
>> --
>> The deal.II project is located at http://www.dealii.org/
>> For mailing list/forum options, see https://groups.google.com/d/fo
>> rum/dealii?hl=en
>> --- You received this message because you are subscribed to a topic in
>> the Google Groups "deal.II User Group" group.
>> To unsubscribe from this topic, visit https://groups.google.com/d/to
>> pic/dealii/gWJO4gFmdbA/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> dealii+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: [deal.II] Improvement of solver documentation

2018-02-25 Thread Lucas Campos
Wolfgang,

> Work on it whenever you have time -- there is certainly no urgency. In
any case, always start with the current version from github since things
change over time.

Sure! I already have a fork of deal.ii on my github account.

> Have you thought about where this kind of documentation should go?

Initially, I thought about putting it into two different parts. Namely, I think
the solver conditions should go into the Linear Solver module, and the
preconditioner part into the Preconditioners and Relaxation operators module. I am
not sure which file generates these pages, but I am sure it is possible to
find out with some grepping.

Do these choices make sense?

> I'll send you the parts in question in separate mail. (The file with all
slides for these lectures is now at around 20MB and has a total of 1169
pages ;-) )

Thanks, both for the documentation and for the filtering. 1169 would lead
to a *lot* of searching around.

Bests,
Lucas


On 25 February 2018 at 17:04, Wolfgang Bangerth 
wrote:

>
> Lucas,
>
> I will start writing it soon. I will post a draft in the next few days.
>> Unfortunately, I cannot do it much earlier as I will be in a Spring School
>> the next days.
>>
>
> Work on it whenever you have time -- there is certainly no urgency. In any
> case, always start with the current version from github since things change
> over time.
>
> Have you thought about where this kind of documentation should go?
>
>
> Regarding the slides, would be nice having them, if you are willing to
>> share! I think the PDFs should suffice, but it is better to be safe than
>> sorry, and extra documentation does not hurt!
>>
>
> I'll send you the parts in question in separate mail. (The file with all
> slides for these lectures is now at around 20MB and has a total of 1169
> pages ;-) )
>
> Cheers
>  W.
>
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
>
>www: http://www.math.colostate.edu/~bangerth/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see https://groups.google.com/d/fo
> rum/dealii?hl=en
> --- You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/to
> pic/dealii/gWJO4gFmdbA/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [deal.II] Improvement of solver documentation

2018-02-25 Thread Lucas Campos
Dear Wolfgang,

I will start writing it soon and will post a draft in the next few days. 
Unfortunately, I cannot do it much earlier, as I will be at a Spring School 
for the next few days. 

The first version will include only the required properties for the serial 
Solvers. As it is my first time writing documentation not only for deal.ii 
but also in Doxygen, I prefer to check somewhat often if I am using the 
correct style. BTW, Lecture 23 has been quite helpful in this regard.

Regarding the slides, it would be nice to have them, if you are willing to 
share! I think the PDFs should suffice, but it is better to be safe than 
sorry, and extra documentation does not hurt!

Bests,
Lucas

On Saturday, 24 February 2018 21:40:53 UTC+1, Wolfgang Bangerth wrote:
>
>
> Hi Lucas, 
>
> > While I was learning how to use the deal.ii library, one of the my major 
> > theoretical problems was the plethora of choices in 
> solvers+preconditioners 
> > available, as I mentioned in another post. For instance, most of the 
> Tutorial 
> > steps 
> > mention that you can use CG when the tangent matrix is symmetric, but 
> not often 
> > does it mention what to do if your matrix is asymmetric. I think having 
> a short 
> > summary or a table with the solvers in the documentation could help 
> solve 
> > (hehe) this issue. Naturally, something similar could be done regarding 
> > preconditioners. 
> > 
> > I am willing to write such documentation. 
>
> I think that would be fantastic! 
>
>
> > As I plan it right now, a lot of of the work would to summarise and 
> write 
> > down the contents of Prof. Bangerth's Lectures 34-38 and 41.75. As for 
> > the algebraic properties required for the solvers to work, I think using 
> the 
> > Templates book and PETSc documentation should suffice, but I am always 
> > open to more sources. 
>
> These are good resources that would be nice to summarize and link to in 
> the 
> documentation. 
>
>
> > That said all, I can see a few reasons why one would *not* want to have 
> this 
> > summary in the documentation, the most important being that it is 
> possible that 
> > a beginner could get a "good enough" idea and not look deeper into the 
> > documentation to something that would be more beneficial to them. 
> Therefore, 
> > before starting this side project, I would like to know if the 
> maintainers 
> > think it is a good idea to have such summary in the documentation, and 
> would be 
> > pleased to hear more thoughts on the issue. 
>
> I think that the main reason we don't have such an overview in the 
> documentation is that nobody wrote it so far. So if you're willing to do 
> this, 
> then that would be great and I'm sure something that the community would 
> very 
> much appreciate! 
>
> Would it help you if I sent you the original OpenOffice/LibreOffice slides 
> I 
> used for the lectures you mention? The PDFs are of course all online 
> already. 
>
> Best 
>   Wolfgang 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Improvement of solver documentation

2018-02-23 Thread Lucas Campos
Dear all,

While I was learning how to use the deal.ii library, one of my major
theoretical problems was the plethora of solver+preconditioner choices
available, as I mentioned in another post. For instance, most of the Tutorial steps
mention that you can use CG when the tangent matrix is symmetric, but they rarely
mention what to do if your matrix is asymmetric. I think having a short
summary or a table of the solvers in the documentation could help solve
(hehe) this issue. Naturally, something similar could be done regarding
preconditioners.
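
To give an idea of what I mean (just a sketch, not meant as final documentation 
text; system_matrix, solution, system_rhs and matrix_is_symmetric are 
placeholders, and the choice of SSOR is arbitrary):

  #include <deal.II/lac/solver_control.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/solver_gmres.h>
  #include <deal.II/lac/precondition.h>

  SolverControl solver_control(1000, 1e-12 * system_rhs.l2_norm());

  PreconditionSSOR<SparseMatrix<double>> preconditioner;
  preconditioner.initialize(system_matrix, 1.2);

  if (matrix_is_symmetric) // symmetric (positive definite): CG is applicable
    {
      SolverCG<Vector<double>> solver(solver_control);
      solver.solve(system_matrix, solution, system_rhs, preconditioner);
    }
  else                     // asymmetric: fall back to e.g. GMRES
    {
      SolverGMRES<Vector<double>> solver(solver_control);
      solver.solve(system_matrix, solution, system_rhs, preconditioner);
    }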

I am willing to write such documentation. 

As I plan it right now, a lot of the work would be to summarise and write
down the contents of Prof. Bangerth's Lectures 34-38 and 41.75. As for
the algebraic properties required for the solvers to work, I think using the
Templates book and the PETSc documentation should suffice, but I am always
open to more sources.

That all said, I can see a few reasons why one would *not* want to have this
summary in the documentation, the most important being that a beginner could
get a "good enough" idea and not look deeper into the documentation for
something that would be more beneficial to them. Therefore, before starting
this side project, I would like to know whether the maintainers think it is a
good idea to have such a summary in the documentation, and would be pleased to
hear more thoughts on the issue. 

Bests,
Lucas



[deal.II] Automatic refinement with minimal level of refinement

2018-02-19 Thread Lucas Campos
Dear all,

I am creating a mesh using GridGenerator::hyper_shell, and I want to divide 
this mesh, radially, into two distinct subdomains. In order to do that, I need 
to refine the mesh at least once. Each of these cells carries an internal 
state that depends on its kind.

The issue I have is that, while improbable, in theory it is possible for 
GridRefinement::refine_and_coarsen_fixed_number 
<http://www.dealii.org/developer/doxygen/deal.II/namespaceGridRefinement.html#a48e5395381ed87155942a61a1edd134d> 
to merge these cells back, which would be a mistake. Is there a way to 
configure the triangulation to consider the once-refined state the coarsest 
possible one?

The other solution I see is to "manually" input the once-refined mesh and 
then use create_triangulation 
<https://www.dealii.org/8.4.1/doxygen/deal.II/classTriangulation.html#ab926104144af9f9f5ca8c0798308c68c>, 
but that sounds cumbersome and error-prone.
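
One workaround I can think of (just a sketch with placeholder names; 
estimated_error_per_cell and the fractions are only examples) would be to 
clear the coarsen flags on the coarsest admissible level before executing the 
refinement:

  GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                  estimated_error_per_cell,
                                                  0.3, 0.03);

  // Never coarsen below the "once refined" state:
  const unsigned int min_level = 1;
  for (typename Triangulation<dim>::active_cell_iterator
         cell = triangulation.begin_active();
       cell != triangulation.end(); ++cell)
    if (cell->level() <= static_cast<int>(min_level))
      cell->clear_coarsen_flag();

  triangulation.execute_coarsening_and_refinement();

But I do not know whether this is the intended way, hence the question.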

Bests,
Lucas Campos



[deal.II] Re: Force and PK1

2018-02-16 Thread Lucas Campos
Dear David,

 
> So my first question is: Can I do it like this or is there a simpler way 
> of doing it?
I can't really comment on this. Hopefully someone with more experience than 
me will chip in.
 
> And the second question is: How can I access this sort of Information? 
> The area might for example be computed using the norm of a cross product. 
> But I am not sure how to get the two vectors for this operation.
You might want to check this function: 
http://dealii.org/developer/doxygen/deal.II/classTriaAccessor.html#a07891adb63b1629ef34bf285d1b70af2

I would recommend you read the description of that function carefully before 
using it, especially for curved domains.
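
Purely to illustrate what I mean, here is a rough sketch (the names 
applied_force and traction_id are made up, and you should double check that 
measure() does what you need here):

  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->face(f)->at_boundary() &&
        cell->face(f)->boundary_id() == traction_id)
      {
        // TriaAccessor::measure() gives the length (2d) / area (3d) of the face.
        const double         face_area = cell->face(f)->measure();
        const Tensor<1, dim> traction  = applied_force / face_area;

        // 'traction' would then enter the usual Neumann term
        //   N_i * traction * JxW
        // of the step-44 assembly loop.
      }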

Bests,
Lucas

On Friday, 16 February 2018 08:43:32 UTC+1, daschn...@googlemail.com wrote:
>
> Dear all,
>
> I studied the step- 44 Tutorial Program and read already a lot about how 
> to apply a Neumann contribution. But I have still two questions:
> In my case, I like to apply a force (not a force per unit reference area) 
> as a Neumann contribution on the RHS. But for the Integral, Ni * traction* 
> JxWI need the Piola- Kirchhoff traction.
> My idea would be the following: F= PN dA = T dA   <=> F/dA= T
> I would simply divide the force by the area of the cellsurface. 
> So my first question is: Can I do it like this or is there a simpler way 
> of doing it?
> And the second question is: How can I access this sort of Information? The 
> area might for example be computed using the norm of a cross product. But I 
> am not sure how to get the two vectors for this operation.
>
> Any help is appreciated.
> Thanks a lot with best regards
> David
>



Re: [deal.II] PETSc Sparse LU Preallocation

2017-12-08 Thread Lucas Campos
> Yes, you need good preconditioners!

I am investigating this now. I have to admit I was overwhelmed with the
plethora of options, but I am going through Barrett et al's Template Book
now, and I hope it will help.
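
(Side note: as Wolfgang points out below, the iteration cap is not hard-wired; 
it is whatever one passes to the SolverControl object. A sketch with 
illustrative numbers:)

  SolverControl solver_control(50000, 1e-10); // max. iterations, tolerance
  PETScWrappers::SolverGMRES solver(solver_control, mpi_communicator);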



On 7 December 2017 at 18:26, Wolfgang Bangerth 
wrote:

>
> Thanks for your thoughts on this. I have tried to use iterative solvers,
>> but they all seem to fail after at some point,
>>
>
> Yes, you need good preconditioners!
>
> and it seems that the maximum number of iterations in every one of them is
>> capped at 1.
>>
>
> No, you can select that in the SolverControl object. But if you need more
> than 10,000 iterations, then that really is no longer efficient.
>
> Best
>  W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
>
>www: http://www.math.colostate.edu/~bangerth/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see https://groups.google.com/d/fo
> rum/dealii?hl=en
> --- You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/to
> pic/dealii/yje1MbYdlfc/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [deal.II] PETSc Sparse LU Preallocation

2017-12-07 Thread Lucas Campos
Dear Wolfgang,

> I think there's little to gain from trying to tweak the memory allocation 
part of this. The fact that MUMPS takes a lot of memory is also nothing you 
can change -- that's just what you get for using a direct solver.

Thanks for your thoughts on this. I have tried to use iterative solvers, 
but they all seem to fail at some point, and it seems that the 
maximum number of iterations in every one of them is capped at 1. I 
will give them another try before accepting this RAM-devouring fate. Worst 
case scenario, I hope that when I distribute this to several nodes the 
"parallel RAM" will (even if only slightly) compensate for this effect.

> Nice graph, though. It clearly shows what's going on!

Thank you! I hope it makes up for the unclear first post!

Bests,
L

On Thursday, December 7, 2017 at 4:44:46 PM UTC+1, Wolfgang Bangerth wrote:
>
>
> Lucas, 
>
> > I mean that the LU Decomposition solver takes a huge amount of RAM, and 
> it 
> > seems to me that allocating that once and reusing the space would be 
> better. 
> > Attached you can find a simple graph* showing how the free memory in 
> time. I 
> > ran an instance of my program using around 164k cells, running on 7 
> threads. 
> > As you can see, the solving step consumes a lot of RAM, and then 
> deallocates 
> > it after the solver finishes. What I wonder is if it is useful and 
> possible to 
> > just do this allocation/freeing once, at the start of the program. 
>
> I don't think it matters. First, you can't know how much memory you're 
> going 
> to use if you do a sparse LU decomposition. It all depends on the sparsity 
> pattern of the matrix. Second, MUMPS does this internally -- I don't think 
> you 
> have control over it. Third, sparse decompositions are so expensive to 
> compute 
> that the cost of memory allocation is likely completely negligible. 
>
> I think there's little to gain from trying to tweak the memory allocation 
> part 
> of this. The fact that MUMPS takes a lot of memory is also nothing you can 
> change -- that's just what you get for using a direct solver. 
>
> Nice graph, though. It clearly shows what's going on! 
>
> Cheers 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] PETSc Sparse LU Preallocation

2017-12-07 Thread Lucas Campos
Dear Wolfgang,

I just noticed a minor mistake on the graph: two of the arrows are slightly
misplaced. While this does not change the main message, I am still sending
you the corrected figure.

Lucas

On 7 December 2017 at 16:28, Lucas Campos <rmk...@gmail.com> wrote:

> Dear Wolfgang,
>
> I am sorry for not being clear.
>
> > Lucas -- I don't think I entirely understand the question. Where is the
> time allocating memory spent? When you build the matrix, or when MUMPS
> works on it to compute the LU decomposition?
>
> I mean that the LU Decomposition solver takes a huge amount of RAM, and it
> seems to me that allocating that once and reusing the space would be
> better. Attached you can find a simple graph* showing how the free memory
> in time. I ran an instance of my program using around 164k cells, running
> on 7 threads. As you can see, the solving step consumes a lot of RAM, and
> then deallocates it after the solver finishes. What I wonder is if it is
> useful and possible to just do this allocation/freeing once, at the start
> of the program.
>
> > You have no control over what MUMPS does -- it's a black box from our
> perspective. Or do you know whether PETSc allows setting parameters that
> are then passed to MUMPS?
>
> I don't know if PETSc allows to change MUMPS' configuration. In fact, I
> only ever used PETSc via deal.II. What I understood from the documentation
> was that UMFPack would allow me to use a single allocation, but currently I
> am not 100% how to make it play nice with PETSc interface and I want to
> check if there is a simpler/more direct way to do before diving into it.
>
> *: This graph is less than scientific. I simply ran free every 5 seconds
> while my program ran. But I think it gets the point across.
>
> Bests,
> Lucas
>
> On 7 December 2017 at 15:40, Wolfgang Bangerth <bange...@colostate.edu>
> wrote:
>
>> On 12/07/2017 03:12 AM, Lucas Campos wrote:
>>
>>>
>>> Currently I am using a direct LU solver via PETSc/MUMPS to solve my
>>> matrix. However, I have noticed that I spend a lot of time in allocation,
>>> at every step. Is it possible (or useful) to preallocate the internal
>>> structures necessary to solve the matrix? According to [1], it is possible
>>> to do it if I use UMFPack, but it seems I would need to change a bit more
>>> code to still work with MPI, so would be simpler to do it while using
>>> PETSc/MUMPS.
>>>
>>
>> Lucas -- I don't think I entirely understand the question. Where is the
>> time allocating memory spent? When you build the matrix, or when MUMPS
>> works on it to compute the LU decomposition?
>>
>> You have no control over what MUMPS does -- it's a black box from our
>> perspective. Or do you know whether PETSc allows setting parameters that
>> are then passed to MUMPS?
>>
>> Best
>>  W.
>>
>> --
>> 
>> Wolfgang Bangerth  email: bange...@colostate.edu
>>www: http://www.math.colostate.edu/~bangerth/
>>
>>
>> --
>> The deal.II project is located at http://www.dealii.org/
>> For mailing list/forum options, see https://groups.google.com/d/fo
>> rum/dealii?hl=en
>> --- You received this message because you are subscribed to a topic in
>> the Google Groups "deal.II User Group" group.
>> To unsubscribe from this topic, visit https://groups.google.com/d/to
>> pic/dealii/yje1MbYdlfc/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> dealii+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



[deal.II] PETSc Sparse LU Preallocation

2017-12-07 Thread Lucas Campos

Dear all,

Currently I am using a direct LU solver via PETSc/MUMPS to solve my matrix. 
However, I have noticed that I spend a lot of time in allocation at every 
step. Is it possible (or useful) to preallocate the internal structures 
necessary to solve the matrix? According to [1], it is possible to do this if 
I use UMFPack, but it seems I would need to change a bit more code to still 
work with MPI, so it would be simpler to do it while using PETSc/MUMPS.
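
(For reference, what [1] describes boils down to something like the following 
serial sketch -- factorize once, then reuse the factorization for subsequent 
right-hand sides; system_matrix, solution and system_rhs are placeholders, and 
this does not answer the MPI question:)

  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize(system_matrix);   // the expensive LU factorization, done once
  direct_solver.vmult(solution, system_rhs); // applies the stored factorization; repeatable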

Bests,
Lucas

[1] https://www.dealii.org/developer/doxygen/deal.II/classSparseDirectUMFPACK.html



Re: [deal.II] Re: Errors when using MUMPS/PETSc LU

2017-11-30 Thread Lucas Campos
Dear all,

Sorry for the late reply. I had some urgent matters to solve, and had to
put this investigation to a halt for a while.

> Then you have to simplify your problem as much as possible until we
> can reproduce it.

I am not sure of the best way to do it, as I have a rather large program
and I need it to build the system matrix. In your experience, would saving the
matrices to files and then just loading them be a good path?

> Are you using the debug build of PETSc?

I think so. I built it from candi, and I am compiling my program in debug
mode, as created by deal.ii's auto pilot.

> That should be enough to find out whether anybody is overwriting
> memory.

Please, see the attachment.

> You can also use clang's address sanitizer.

I could not compile with clang due to an unknown flag, -fopenmp-simd. Do
you happen to know how to disable this one?

Bests,
Lucas

On 17 November 2017 at 19:50, Timo Heister <heis...@clemson.edu> wrote:

> > It shows the same issues, as expected. I will follow your advice and try
> to use a different solver.
>
> Then you have to simplify your problem as much as possible until we
> can reproduce it.
>
> > Still, would it be possible for you to comment a bit more on those
> MPI_Op_free errors?
>
> This can happen for many different reasons. It might be a bug inside
> one of the libraries or it might happen if you overwrite memory (the
> reason I suggested valgrind) or you are trying to delete a PETSc
> object more than once, etc..
>
> Are you using the debug build of PETSc?
>
> > Also, do you have recommendations for Valgrind flags other than
> --track-origins=yes --leak-check=full?
>
> That should be enough to find out whether anybody is overwriting
> memory. You can also use clang's address sanitizer.
>
>
> On Fri, Nov 17, 2017 at 11:07 AM, Wolfgang Bangerth
> <bange...@colostate.edu> wrote:
> > On 11/17/2017 07:37 AM, Lucas Campos wrote:
> >>
> >>
> >> Invalid MPI_Op, error stack:
> >>
> >> MPI_Op_free(111): MPI_Op_free(op=0x7ffcf6298dac) failed
> >>
> >> MPI_Op_free(75).: Null Op pointer
> >>
> >>
> >> If this error would lead the the issues I am having, is up to
> discussion.
> >>
> >> I tried using PETSc's SolverPreOnly
> >>
> >>  SolverControl solver_control;
> >>
> >>  PETScWrappers::SolverPreOnly solver(solver_control,
> >> mpi_communicator);
> >>
> >>  PETScWrappers::PreconditionLU
> >> preconditioner(system_matrix);
> >>
> >>  solver.solve(system_matrix, distributed_dU, system_rhs,
> >> preconditioner);
> >>
> >>
> >> It shows the same issues, as expected. I will follow your advice and try
> >> to use a different solver.
> >>
> >> Still, would it be possible for you to comment a bit more on those
> >> MPI_Op_free errors?
> >
> >
> > It means that an MPI_Op object is freed (like calling 'free' for memory)
> but
> > that the object that's being freed doesn't actually exist (is a NULL
> > pointer).
> >
> > There clearly is a bug here, but it's impossible to tell without a
> backtrace
> > where that might be.
> >
> > Best
> >  W.
> >
> >
> > --
> > 
> > Wolfgang Bangerth  email: bange...@colostate.edu
> >www: https://urldefense.proofpoint.
> com/v2/url?u=http-3A__www.math.colostate.edu_-7Ebangerth_=DwIBaQ=Ngd-
> ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmE
> jX38S7JmlS9Vw=IpF6CHQF4jujTvcadoPchJantOI8WT4okP-4h4ugYXU=
> oUUKpv7xLIryBSCggjeM-8LDP9MPpZTG7nlTwGXS7os=
> >
> >
> > --
> > The deal.II project is located at https://urldefense.proofpoint.
> com/v2/url?u=http-3A__www.dealii.org_=DwIBaQ=Ngd-
> ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmE
> jX38S7JmlS9Vw=IpF6CHQF4jujTvcadoPchJantOI8WT4okP-4h4ugYXU=
> 25xaQnWa3gLddZdQtfajI_ydxKLe6cHjjurmVW4hMn0=
> > For mailing list/forum options, see
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__groups.
> google.com_d_forum_dealii-3Fhl-3Den=DwIBaQ=Ngd-
> ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmE
> jX38S7JmlS9Vw=IpF6CHQF4jujTvcadoPchJantOI8WT4okP-4h4ugYXU=
> cjIApHTOyuurgKurhKBam3fbKFKCLlDjcodrdwGi30w=
> > --- You received this message because you are subscribed to the Google
> > Groups "deal.II User Group" group.
> >

Re: [deal.II] Re: Errors when using MUMPS/PETSc LU

2017-11-17 Thread Lucas Campos
Dear Timo,

Also, do you have recommendations for Valgrind flags other
than --track-origins=yes --leak-check=full?

Lucas

On 17 November 2017 at 15:37, Lucas Campos <rmk...@gmail.com> wrote:

> Dear Timo,
>
> Thanks for you advice. I am running the program in three different
> computers -- my notebook, my research group's server and the local cluster.
> In all of them I have this (small) change to suddenly find the nan.
>
> According to MUST, there is clearly a problem inside deal.II, MUMPS or
> PETSc, as can be noticed on the file I sent on the previous messages.
> Namely,
>
> Invalid MPI_Op, error stack:
>>
>> MPI_Op_free(111): MPI_Op_free(op=0x7ffcf6298dac) failed
>>
>> MPI_Op_free(75).: Null Op pointer
>>
>>
> If this error would lead the the issues I am having, is up to discussion.
>
> I tried using PETSc's SolverPreOnly
>
> SolverControl solver_control;
>>
>> PETScWrappers::SolverPreOnly solver(solver_control,
>>> mpi_communicator);
>>
>> PETScWrappers::PreconditionLU preconditioner(system_matrix);
>>
>> solver.solve(system_matrix, distributed_dU, system_rhs,
>>> preconditioner);
>>
>>
> It shows the same issues, as expected. I will follow your advice and try
> to use a different solver.
>
> Still, would it be possible for you to comment a bit more on those
> MPI_Op_free errors?
>
> Cheers,
> Lucas
>
> On 17 November 2017 at 15:14, Timo Heister <heis...@clemson.edu> wrote:
>
>> Lucas,
>>
>> those kind of bugs are hard to find. Honestly, the bug could still be
>> in your code, inside deal.II, inside PETSc, inside MUMPS, or related
>> to the software/hardware you are running on.
>>
>> I know this won't be of much help, but I would suggest you try a
>> different solver to see if MUMPS is the problematic part here. Maybe
>> they are doing some invalid operations (maybe one processor has no
>> DoFs?). Try to simplify your test problem as much as possible. If the
>> problem is small enough, test on a different machine
>> (workstation/laptop), run using valgrind, etc..
>>
>> Best,
>> Timo
>>
>>
>> On Fri, Nov 17, 2017 at 8:13 AM, Lucas Campos <rmk...@gmail.com> wrote:
>> > Sorry, I forgot to include the link to MUST. Here it is:
>> > https://doc.itc.rwth-aachen.de/display/CCP/Project+MUST
>> >
>> >
>> > On Friday, 17 November 2017 14:12:18 UTC+1, Lucas Campos wrote:
>> >>
>> >> Dear all,
>> >>
>> >> First of all, a bit of context:
>> >> I am trying to debug an error in my application where randomly I start
>> >> seeing nan's. The probability of this increases with the number of MPI
>> >> processors I use, so it looks like it is a data race of some sort. Any
>> >> advice on the best way to find the error?
>> >>
>> >> My current approach is to use project MUST[1] to help me find the
>> issues.
>> >> When I ran MUST with the debug version of my code on the local
>> cluster, it
>> >> returned a errors related to the MPI internalities of
>> dealii/petsc(/MUMPS?).
>> >> An exemplary output can be seen on errors.txt. The output stopping in
>> >> "Solving... " suggested that the error was in between the following
>> lines of
>> >> my code:
>> >>
>> >>>> PetscPrintf(mpi_communicator, "Solving... \n");
>> >>>>
>> >>>> computing_timer.enter_section("solve");
>> >>>>
>> >>>>
>> >>>> SolverControl cn;
>> >>>>
>> >>>> PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
>> >>>>
>> >>>> solver.set_symmetric_mode(false);
>> >>>>
>> >>>> solver.solve(system_matrix, distributed_dU, system_rhs);
>> >>>>
>> >>>>
>> >>>> computing_timer.exit_section("solve");
>> >>>>
>> >>>> PetscPrintf(mpi_communicator, "Solved! \n");
>> >>>
>> >>>
>> >>
>> >>  Indeed, when I comment out the "solver.solve(system_matrix,
>> >> distributed_dU, system_rhs);

Re: [deal.II] Re: Errors when using MUMPS/PETSc LU

2017-11-17 Thread Lucas Campos
Dear Timo,

Thanks for your advice. I am running the program on three different
computers -- my notebook, my research group's server and the local cluster.
On all of them I have this (small) chance of suddenly finding the nan.

According to MUST, there is clearly a problem inside deal.II, MUMPS or
PETSc, as can be seen in the file I sent in the previous messages.
Namely,

Invalid MPI_Op, error stack:
>
> MPI_Op_free(111): MPI_Op_free(op=0x7ffcf6298dac) failed
>
> MPI_Op_free(75).: Null Op pointer
>
>
Whether this error leads to the issues I am having is up for discussion.

I tried using PETSc's SolverPreOnly

    SolverControl solver_control;
    PETScWrappers::SolverPreOnly solver(solver_control, mpi_communicator);
    PETScWrappers::PreconditionLU preconditioner(system_matrix);

    solver.solve(system_matrix, distributed_dU, system_rhs, preconditioner);

It shows the same issues, as expected. I will follow your advice and try to
use a different solver.

Still, would it be possible for you to comment a bit more on those
MPI_Op_free errors?

Cheers,
Lucas

On 17 November 2017 at 15:14, Timo Heister <heis...@clemson.edu> wrote:

> Lucas,
>
> those kind of bugs are hard to find. Honestly, the bug could still be
> in your code, inside deal.II, inside PETSc, inside MUMPS, or related
> to the software/hardware you are running on.
>
> I know this won't be of much help, but I would suggest you try a
> different solver to see if MUMPS is the problematic part here. Maybe
> they are doing some invalid operations (maybe one processor has no
> DoFs?). Try to simplify your test problem as much as possible. If the
> problem is small enough, test on a different machine
> (workstation/laptop), run using valgrind, etc..
>
> Best,
> Timo
>
>
> On Fri, Nov 17, 2017 at 8:13 AM, Lucas Campos <rmk...@gmail.com> wrote:
> > Sorry, I forgot to include the link to MUST. Here it is:
> > https://doc.itc.rwth-aachen.de/display/CCP/Project+MUST
> >
> >
> > On Friday, 17 November 2017 14:12:18 UTC+1, Lucas Campos wrote:
> >>
> >> Dear all,
> >>
> >> First of all, a bit of context:
> >> I am trying to debug an error in my application where randomly I start
> >> seeing nan's. The probability of this increases with the number of MPI
> >> processors I use, so it looks like it is a data race of some sort. Any
> >> advice on the best way to find the error?
> >>
> >> My current approach is to use project MUST[1] to help me find the
> issues.
> >> When I ran MUST with the debug version of my code on the local cluster,
> it
> >> returned a errors related to the MPI internalities of
> dealii/petsc(/MUMPS?).
> >> An exemplary output can be seen on errors.txt. The output stopping in
> >> "Solving... " suggested that the error was in between the following
> lines of
> >> my code:
> >>
> >>>> PetscPrintf(mpi_communicator, "Solving... \n");
> >>>>
> >>>> computing_timer.enter_section("solve");
> >>>>
> >>>>
> >>>> SolverControl cn;
> >>>>
> >>>> PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
> >>>>
> >>>> solver.set_symmetric_mode(false);
> >>>>
> >>>> solver.solve(system_matrix, distributed_dU, system_rhs);
> >>>>
> >>>>
> >>>> computing_timer.exit_section("solve");
> >>>>
> >>>> PetscPrintf(mpi_communicator, "Solved! \n");
> >>>
> >>>
> >>
> >>  Indeed, when I comment out the "solver.solve(system_matrix,
> >> distributed_dU, system_rhs); " line, it runs with no errors at all.
> >>
> >> Could this be the source of my issues? Also, how can I solve this
> specific
> >> issue?
> >
> > --
> > The deal.II project is located at https://urldefense.proofpoint.
> com/v2/url?u=http-3A__www.dealii.org_=DwIBaQ=Ngd-
> ta5yRYsqeUsEDgxhcqsYYY1Xs5ogLxWPA_2Wlc4=4k7iKXbjGC8LfYxVJJXiaYVu6FRWmE
> jX38S7JmlS9Vw=31KxKzbiEhsoTggloCeUx5owuWvvNS
> O3OBJDaSNDJks=FlH-oR80VOUqL_lvynTH4ECrvEahwkkYt5AFNP2aunA=
> > For mailing list/forum options, see
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__groups.
> google.com_d_forum

[deal.II] Re: Errors when using MUMPS/PETSc LU

2017-11-17 Thread Lucas Campos
Sorry, I forgot to include the link to MUST. Here it 
is: https://doc.itc.rwth-aachen.de/display/CCP/Project+MUST

On Friday, 17 November 2017 14:12:18 UTC+1, Lucas Campos wrote:
>
> Dear all,
>
> First of all, a bit of context:
> I am trying to debug an error in my application where randomly I start 
> seeing nan's. The probability of this increases with the number of MPI 
> processors I use, so it looks like it is a data race of some sort. Any 
> advice on the best way to find the error?
>
> My current approach is to use project MUST[1] to help me find the issues. 
> When I ran MUST with the debug version of my code on the local cluster, it 
> returned a errors related to the MPI internalities of 
> dealii/petsc(/MUMPS?). An exemplary output can be seen on errors.txt. The 
> output stopping in "Solving... " suggested that the error was in between 
> the following lines of my code:
>
> PetscPrintf(mpi_communicator, "Solving... \n");
>>
>> computing_timer.enter_section("solve");
>>>
>>
>>> SolverControl cn;
>>
>> PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
>>
>> solver.set_symmetric_mode(false);
>>
>> solver.solve(system_matrix, distributed_dU, system_rhs); 
>>
>>
>>> computing_timer.exit_section("solve");
>>
>> PetscPrintf(mpi_communicator, "Solved! \n");
>>
>>
>>
>  Indeed, when I comment out the "solver.solve(system_matrix, 
> distributed_dU, system_rhs); " line, it runs with no errors at all.
>
> Could this be the source of my issues? Also, how can I solve this specific 
> issue?
>



[deal.II] Errors when using MUMPS/PETSc LU

2017-11-17 Thread Lucas Campos
Dear all,

First of all, a bit of context:
I am trying to debug an error in my application where randomly I start 
seeing nan's. The probability of this increases with the number of MPI 
processors I use, so it looks like it is a data race of some sort. Any 
advice on the best way to find the error?

My current approach is to use project MUST[1] to help me find the issues. 
When I ran MUST with the debug version of my code on the local cluster, it 
returned errors related to the MPI internals of 
dealii/petsc(/MUMPS?). An exemplary output can be seen in errors.txt. The 
output stopping at "Solving... " suggested that the error was in between 
the following lines of my code:

    PetscPrintf(mpi_communicator, "Solving... \n");

    computing_timer.enter_section("solve");

    SolverControl cn;
    PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
    solver.set_symmetric_mode(false);
    solver.solve(system_matrix, distributed_dU, system_rhs);

    computing_timer.exit_section("solve");

    PetscPrintf(mpi_communicator, "Solved! \n");

 Indeed, when I comment out the "solver.solve(system_matrix, 
distributed_dU, system_rhs); " line, it runs with no errors at all.

Could this be the source of my issues? Also, how can I solve this specific 
issue?

[MUST] MUST configuration ... centralized checks without application crash 
handling
[MUST] Information: overwritting old intermediate data in directory 
"/homeb/inm1/lcampos/JuFold/build/must_temp"!
[MUST] Using prebuilt infrastructure at 
/usr/local/software/jureca/Stages/2017b/software/MUST/1.5.0-gpsmpi-2017b-Python-2.7.14/modules//mode3-layer2
[MUST] Search for linked P^nMPI ... not found ... using LD_PRELOAD to load 
P^nMPI ... success
[MUST] Executing application:
Number of active cells:   512 (by partition: 
21+21+21+22+22+21+21+21+21+21+22+22+21+21+22+21+21+22+22+22+21+21+21+21)
Number of degrees of freedom: 2187 (by partition: 
83+114+86+88+73+93+69+113+90+90+86+84+72+133+79+91+100+91+72+93+84+106+95+102)
==APPLYING EXTERNAL 
FORCE==
Saving snapshot

Assembling system
Finished assembling
Inc:  1 (time:0.e+00, dt:1.e-02, rel:000%, growth:000%), Iter: 0. 
Residual norm:   5.99e+01. Relative norm:   1.00e+00 
Solving... 
rank 4 (of 24), pid 29208 catched MPI error nr 284282377
rank 7 (of 24), pid 29215 catched MPI error nr 284282377
rank 3 (of 24), pid 29205 catched MPI error nr 888262153
rank 1 (of 24), pid 29202 catched MPI error nr 552717833
rank 19 (of 24), pid 29255 catched MPI error nr 821153289
rank 12 (of 24), pid 29231 catched MPI error nr 82955785
rank 22 (of 24), pid 29260 catched MPI error nr 888262153
rank 8 (of 24), pid 29219 catched MPI error nr 1022479881
rank 5 (of 24), pid 29214 catched MPI error nr 686935561
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffcf6298dac) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7fff49aa3b2c) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffe0f21b53c) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffc0535c4cc) failed
MPI_Op_free(75).: Null Op pointer
rank 11 (of 24), pid 29227 catched MPI error nr 1022479881
rank 14 (of 24), pid 29239 catched MPI error nr 150064649
rank 16 (of 24), pid 29245 catched MPI error nr 418500105
rank 13 (of 24), pid 29235 catched MPI error nr 82955785
rank 10 (of 24), pid 29226 catched MPI error nr 1022479881
rank 17 (of 24), pid 29248 catched MPI error nr 552717833
rank 6 (of 24), pid 29211 catched MPI error nr 351391241
rank 20 (of 24), pid 29258 catched MPI error nr 418500105
rank 9 (of 24), pid 29221 catched MPI error nr 888262153
rank 21 (of 24), pid 29259 catched MPI error nr 15846921
rank 18 (of 24), pid 29251 catched MPI error nr 955371017
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffec3a67f0c) failed
MPI_Op_free(75).: Null Op pointer
rank 2 (of 24), pid 29201 catched MPI error nr 619826697
rank 0 (of 24), pid 29196 catched MPI error nr 754044425
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffece191b0c) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7fff8945c82c) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:
MPI_Op_free(111): MPI_Op_free(op=0x7ffe3e9c0ecc) failed
MPI_Op_free(75).: Null Op pointer
Invalid MPI_Op, error stack:

Re: [deal.II] LU Decomposition on multiple processors

2017-11-13 Thread Lucas Campos
Dear Jean-Paul,

Thanks for the pointer! I just modified the program to use MUMPS, and there
was no difference, neither in speed nor in the solution. Right now I am
interpreting this as a positive thing, as it means the old version was correct.
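
For reference, the MUMPS-based variant now looks roughly like this (same
objects as in my first post):

  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.set_symmetric_mode(false);
  solver.solve(system_matrix, distributed_dU, system_rhs);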

However, I still do not understand what that line in the documentation
means. Maybe it is a leftover?

Thanks a lot,
Lucas

On 13 November 2017 at 12:43, Jean-Paul Pelteret <jppelte...@gmail.com>
wrote:

> Dear Lucas,
>
> I don’t have much familiarity with our PETScWrappers, but after a quick
> look through the documentation I guess that the most convenient interface
> to a parallel direct solver would be that of PETScWrappers::
> SparseDirectMUMPS
> <http://dealii.org/8.5.1/doxygen/deal.II/classPETScWrappers_1_1SparseDirectMUMPS.html>
> .
>
> Best,
> Jean-Paul
>
> On 13 Nov 2017, at 11:19, Lucas Campos <rmk...@gmail.com> wrote:
>
> Dear all,
>
> I am using the PETSc interface to solve my system matrix, like so
>
>  SolverControl solver_control;
>>
>>  PETScWrappers::SolverPreOnly solver(solver_control, mpi_communicator);
>>
>>  PETScWrappers::PreconditionLU preconditioner(system_matrix);
>>
>>  solver.solve(system_matrix, distributed_dU, system_rhs, preconditioner);
>>
>>
> where all vectors and matrices are either PETScWrappers::MPI::
> Vector PETScWrappers::MPI::SparseMatrix.
>
> Today I was checking the documentation and was struck by this sentence[1]:
>
> A class that implements the interface to use the PETSc LU preconditioner.
>> The LU decomposition is only implemented for single processor machines. It
>> should provide a convenient interface to another direct solver.
>>
>>
> Is that correct? If so, what is the recommendation for direct solvers in
> parallel?
>
> Bests,
> Lucas
>
> [1]: https://dealii.org/8.4.1/doxygen/deal.II/classPETScWrappers_1_
> 1PreconditionLU.html
>



[deal.II] LU Decomposition on multiple processors

2017-11-13 Thread Lucas Campos
Dear all,

I am using the PETSc interface to solve my system matrix, like so

 SolverControl solver_control;
>
>  PETScWrappers::SolverPreOnly solver(solver_control, mpi_communicator);
>
>  PETScWrappers::PreconditionLU preconditioner(system_matrix);
>
>  solver.solve(system_matrix, distributed_dU, system_rhs, preconditioner);
>
>
where all vectors and matrices are 
either PETScWrappers::MPI::Vector or PETScWrappers::MPI::SparseMatrix.

Today I was checking the documentation and was struck by this sentence[1]:

A class that implements the interface to use the PETSc LU preconditioner. 
> The LU decomposition is only implemented for single processor machines. It 
> should provide a convenient interface to another direct solver.
>
>
Is that correct? If so, what is the recommendation for direct solvers in 
parallel?

Bests,
Lucas
 
[1]: 
https://dealii.org/8.4.1/doxygen/deal.II/classPETScWrappers_1_1PreconditionLU.html



Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-13 Thread Lucas Campos
Dear Wolfgang,

Thank you for your explanation. Currently, I am using a code that was not
written by me, and it uses the MatrixTools::apply_boundary_values() approach.
I will try to change it to use the ConstraintMatrix one. For that, step-40
seems to be the best starting point, like Mark did.

Thanks again,
Lucas



On 13 October 2017 at 16:43, Wolfgang Bangerth <bange...@colostate.edu>
wrote:

> On 10/13/2017 08:39 AM, Lucas Campos wrote:
>
>>
>> In general, using MatrixTools::apply_boundary_values() is not the
>> way to go
>> with MPI programs. Rather, use a ConstraintMatrix and incorporate the
>> boundary
>> values into the same object as you do with hanging node constraints.
>>
>>
>> This is the way to go due to correctness, or in the sense of scalability?
>>
>
> MatrixTools::apply_boundary_values() needs access to the elements of the
> matrix because it wants to modify elements after they have already been
> written into the matrix. That is already difficult if the matrix is owned
> by PETSc or Trilinos -- we can get access to these elements, but it is not
> efficient to do so.
>
> But the bigger issue is that the function wants to access elements not
> only for the rows of constrained DoFs, but also for the columns. That means
> that you may have to access elements that are actually stored on other
> processors -- something that can not be done efficiently. Consequently,
> MatrixTools::apply_boundary_values() does not attempt to eliminate
> columns of the matrix, and you will end up with a non-symmetric matrix even
> if your problem is symmetric.
>
> It is better to use the approach via ConstraintMatrix that deals with
> entries before they even get into the matrix (and therefore in particular
> before matrix entries are sent to other processors).
>
> Best
>  W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
>
>www: http://www.math.colostate.edu/~bangerth/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see https://groups.google.com/d/fo
> rum/dealii?hl=en
> --- You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/to
> pic/dealii/5hC7jODg-7k/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [deal.II] Solving time dependent heat equation with MPI (PETsc)

2017-10-13 Thread Lucas Campos
Dear Bangerth,

When you mention 

> In general, using MatrixTools::apply_boundary_values() is not the way to go 
> with MPI programs. Rather, use a ConstraintMatrix and incorporate the 
> boundary 
> values into the same object as you do with hanging node constraints.


is this the way to go for correctness reasons, or for the sake of scalability?

Bests,
Lucas

On Tuesday, 10 October 2017 17:48:57 UTC+2, Wolfgang Bangerth wrote:
>
> On 10/10/2017 08:40 AM, Mark Ma wrote: 
> > 
> > I want to solve a heat equation in the time domain with distributed 
> memory 
> > using MPI, but the results are incorrect. In order to do so, I reference 
> > tutorial step-23 for time updating method and step-40 for implementing 
> MPI. 
> > May I ask whether my boundary condition is right or not? Should we do 
> > compress() after apply_boundary_values()? Thanks in advance! 
>
> Jack -- how exactly is your solution wrong when you look at it? Do the 
> boundary values look wrong? Are they correct if you run your MPI program 
> with 
> just one MPI process? 
>
> In general, using MatrixTools::apply_boundary_values() is not the way to 
> go 
> with MPI programs. Rather, use a ConstraintMatrix and incorporate the 
> boundary 
> values into the same object as you do with hanging node constraints. 
> That's 
> what all of the parallel programs do, if I recall correctly. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Re: Refining a cylinder only in x

2017-09-18 Thread Lucas Campos
Also, here is the new program, with the added if and 
prepare_coarsening_and_refinement(). You will find the interesting part 
between lines 85 and 104.
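
In short, the flagging loop now looks roughly like this (just a sketch; the
actual selection criterion is in the attached file, the one below is only a
placeholder):

  for (auto cell : triangulation.active_cell_iterators())
    {
      if (cell->at_boundary())   // placeholder for the real selection criterion
        cell->set_refine_flag(RefinementCase<3>::cut_x);
    }
  triangulation.prepare_coarsening_and_refinement();
  triangulation.execute_coarsening_and_refinement();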

On Monday, 18 September 2017 16:29:09 UTC+2, Lucas Campos wrote:
>
> Dear Bruno,
>
> You will find attached the resulting grids. While I originally expected to 
> find new divisions only along the x-direction, your previous answer tells 
> me it actually cuts in some local coordinate (whatever that might be). In 
> this case, what is the best way to refine the cylinder in a single 
> direction? I tried adding an if(cell->boundary_id() == 0) before setting 
> the flag, but it did not really help. The mesh is still being refined in 
> all directions.
>
> I ask this because I plan to do some transformations on this cylinder, to 
> turn it into a coil. In this case, the final result is much better if there 
> are enough cuts in the x-axis.
>
> Bests
>
> On Monday, 18 September 2017 14:44:40 UTC+2, Bruno Turcksin wrote:
>>
>> Lucas,
>>
>> On Monday, September 18, 2017 at 3:50:20 AM UTC-4, Lucas Campos wrote:
>>>
>>> I am trying to refine a cylinder in the x direction only, but it seems 
>>> that when I use 
>>>
>>> > cell->set_refine_flag(RefinementCase<3>::cut_x);
>>>
>>> the whole cylinder is refined. Indeed, if I run the above line for every 
>>> active cell *once* I would expect the number of cells to double. However, 
>>> they quintuple!
>>>
>> It looks good, but what does the mesh look like? Here 
>> <http://dealii.org/developer/doxygen/deal.II/classCellAccessor.html#afb6cc537720a5b6381c237abe0887de2>,
>> you can see that cut_x is in the local coordinate system, not the global 
>> one. So I would not be surprised if the mesh doesn't look like you think it 
>> should. Also, if you do more than one refinement, you should use 
>> prepare_coarsening_and_refinement() before you refine your mesh.
>>
>> Best,
>>
>> Bruno
>>
>

/* -
 *
 * Copyright (C) 2013 - 2017 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * -

 *
 * Author: Timo Heister, Texas A&M University, 2013
 */

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/manifold_lib.h>

#include <iostream>
#include <fstream>

#include <map>

using namespace dealii;

template <int dim>
void print_mesh_info(const Triangulation<dim> &triangulation,
                     const std::string        &filename)
{
  std::cout << "Mesh info:" << std::endl
<< " dimension: " << dim << std::endl
<< " no. of cells: " << triangulation.n_active_cells() << std::endl;

  // Next loop over all faces of all cells and find how often each
  // boundary indicator is used (recall that if you access an element
  // of a std::map object that doesn't exist, it is implicitly created
  // and default initialized -- to zero, in the current case -- before
  // we then increment it):
  {
std::map<types::boundary_id, unsigned int> boundary_count;
typename Triangulation<dim>::active_cell_iterator
cell = triangulation.begin_active(),
endc = triangulation.end();
for (; cell!=endc; ++cell)
  {
for (unsigned int face=0; face<GeometryInfo<dim>::faces_per_cell; ++face)
  {
if (cell->face(face)->at_boundary())
  boundary_count[cell->face(face)->boundary_id()]++;
  }
  }

std::cout << " boundary indicators: ";
for (std::map<types::boundary_id, unsigned int>::iterator it=boundary_count.begin();
 it!=boundary_count.end();
 ++it)
  {
std::cout << it->first << "(" << it->second << " times) ";
  }
std::cout << std::endl;
  }

  // Finally, produce a graphical representation of the mesh to an output
  // file:
  {
  std::ofstream out (filename + ".vtk");

[deal.II] Re: Refining a cylinder only in x

2017-09-18 Thread Lucas Campos
Dear Bruno,

You will find attached the resulting grids. While I originally expected to 
find new divisions only along the x-direction, your previous answer tells 
me it actually cuts in some local coordinate (whatever that might be). In 
this case, what is the best way to refine the cylinder in a single 
direction? I tried adding an if(cell->boundary_id() == 0) before setting 
the flag, but it did not really help. The mesh is still being refined in 
all directions.

I ask this because I plan to do some transformations on this cylinder, to 
turn it into a coil. In this case, the final result is much better if there 
are enough cuts in the x-axis.

Bests

On Monday, 18 September 2017 14:44:40 UTC+2, Bruno Turcksin wrote:
>
> Lucas,
>
> On Monday, September 18, 2017 at 3:50:20 AM UTC-4, Lucas Campos wrote:
>>
>> I am trying to refine a cylinder in the x direction only, but it seems 
>> that when I use 
>>
>> > cell->set_refine_flag(RefinementCase<3>::cut_x);
>>
>> the whole cylinder is refined. Indeed, if I run the above line for every 
>> active cell *once* I would expect the number of cells to double. However, 
>> they quintuple!
>>
> It looks good, but what does the mesh look like? Here 
> <http://dealii.org/developer/doxygen/deal.II/classCellAccessor.html#afb6cc537720a5b6381c237abe0887de2>,
> you can see that cut_x is in the local coordinate system, not the global 
> one. So I would not be surprised if the mesh doesn't look like you think it 
> should. Also, if you do more than one refinement, you should use 
> prepare_coarsening_and_refinement() before you refine your mesh.
>
> Best,
>
> Bruno
>



[deal.II] Refining a cylinder only in x

2017-09-18 Thread Lucas Campos
Dear all,

I am trying to refine a cylinder in the x direction only, but it seems that 
when I use 

> cell->set_refine_flag(RefinementCase<3>::cut_x);

the whole cylinder is refined. Indeed, if I run the above line for every 
active cell *once* I would expect the number of cells to double. However, 
they quintuple!

Any ideas on how to fix this?

I am using an adapted version of step-49. The offending function is 
outlined below, and the full program is attached, together with its output.

void grid_cylinder()
>
> {
>
>   Triangulation<3> triangulation;
>
>   static const CylindricalManifold<3> boundary;
>
>
>>   GridGenerator::cylinder (triangulation, 0.05, 2);
>
>   triangulation.set_manifold (0, boundary);
>
>
>>   print_mesh_info (triangulation, "pre_grid");
>
>   for (int i=0; i<1; i++) {
>
>   for(auto cell: triangulation.active_cell_iterators()) {
>
>   cell->set_refine_flag(RefinementCase<3>::cut_x);
>
>   }
>
>   triangulation.execute_coarsening_and_refinement();
>
>   }
>
>
>>   print_mesh_info (triangulation, "grid");
>
> }
>
>

/* -
 *
 * Copyright (C) 2013 - 2017 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * -

 *
 * Author: Timo Heister, Texas A&M University, 2013
 */

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/manifold_lib.h>

#include <iostream>
#include <fstream>

#include <map>

using namespace dealii;

template <int dim>
void print_mesh_info(const Triangulation<dim> &triangulation,
                     const std::string        &filename)
{
  std::cout << "Mesh info:" << std::endl
<< " dimension: " << dim << std::endl
<< " no. of cells: " << triangulation.n_active_cells() << std::endl;

  // Next loop over all faces of all cells and find how often each
  // boundary indicator is used (recall that if you access an element
  // of a std::map object that doesn't exist, it is implicitly created
  // and default initialized -- to zero, in the current case -- before
  // we then increment it):
  {
std::map<types::boundary_id, unsigned int> boundary_count;
typename Triangulation<dim>::active_cell_iterator
cell = triangulation.begin_active(),
endc = triangulation.end();
for (; cell!=endc; ++cell)
  {
for (unsigned int face=0; face<GeometryInfo<dim>::faces_per_cell; ++face)
  {
if (cell->face(face)->at_boundary())
  boundary_count[cell->face(face)->boundary_id()]++;
  }
  }

std::cout << " boundary indicators: ";
for (std::map<types::boundary_id, unsigned int>::iterator it=boundary_count.begin();
 it!=boundary_count.end();
 ++it)
  {
std::cout << it->first << "(" << it->second << " times) ";
  }
std::cout << std::endl;
  }

  // Finally, produce a graphical representation of the mesh to an output
  // file:
  {
  std::ofstream out (filename + ".vtk");
  GridOut grid_out;
  grid_out.write_vtk (triangulation, out);
  }
  std::cout << " written to " << filename
<< std::endl
<< std::endl;
}

void grid_cylinder()
{
  Triangulation<3> triangulation;
  static const CylindricalManifold<3> boundary;

  GridGenerator::cylinder (triangulation, 0.05, 2);
  triangulation.set_manifold (0, boundary);

  print_mesh_info (triangulation, "pre_grid");
  for (int i=0; i<1; i++) {
  for(auto cell: triangulation.active_cell_iterators()) {
  cell->set_refine_flag(RefinementCase<3>::cut_x);
  }
  triangulation.execute_coarsening_and_refinement();
  }

  print_mesh_info (triangulation, "grid");
}

int main ()
{
  try
{
  grid_cylinder ();
}
  catch (std::exception &exc)
{
  std::cerr << std::endl << std::endl
<< ""
<< std::endl;
  std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< ""
<< std::endl;

  return 1;
}
  catch (...)
{
  std::cerr << std::endl << std::endl
  

Re: [deal.II] Modifying step-44 - Null pressure case

2017-09-05 Thread Lucas Campos
Dear Andrew,

I get it now! Thanks a lot.
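
For future readers, here is a stripped-down toy illustration of the point
(this is not the step-44 code itself; the struct below is my own
simplification): with zero loading the reference error norms are zero, the
guards in normalize() skip the division, and the "normalised" error simply
stays at its reset value of 1.0, so a relative convergence check can never
be satisfied.

  #include <iostream>

  struct Errors
  {
    double u = 1.0;                   // the reset value, as in step-44's reset()
    void normalize(const Errors &rhs)
    {
      if (rhs.u != 0.0)               // guard: skipped when the reference is zero
        u /= rhs.u;
    }
  };

  int main()
  {
    Errors error_residual;            // starts at its reset value of 1.0
    Errors error_residual_0;          // reference error from the first iteration
    error_residual_0.u = 0.0;         // zero external loading -> zero reference norm

    error_residual.normalize(error_residual_0);  // division is skipped
    std::cout << error_residual.u << std::endl;  // prints 1: never below tolerance
    return 0;
  }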

Lucas

On Tuesday, 5 September 2017 15:39:48 UTC+2, mac wrote:
>
> Hi Lucas
>
> Not quite - have a look at the struct Errors in step 44 where:
>
> void reset()
> {
> norm = 1.0;
> u = 1.0;
> p = 1.0;
> J = 1.0;
> }
>
> void normalize(const Errors &rhs)
> {
> if (rhs.norm != 0.0)
> norm /= rhs.norm;
> if (rhs.u != 0.0)
> u /= rhs.u;
> if (rhs.p != 0.0)
> p /= rhs.p;
> if (rhs.J != 0.0)
> J /= rhs.J;
> }
>
> So if the rhs is zero (the case with zero pressure loading) - then the 
> normalised error will also be 1 and we won’t converge.
>
> I think this is reasonable behaviour as we expect some loading to be 
> applied otherwise there is nothing to solve.
>
> A
>
>
> On 5 Sep 2017, at 09:28, Lucas Campos <rmk...@gmail.com > 
> wrote:
>
> Dear Andrew,
>
> I tested with small (but non-null) pressures and the system indeed 
> converges. If I understand your point correctly, I am essentially having a 
> zero-by-zero division when calculating the errors, and that is why the 
> system fails to converge. Is that so?
>
> Bests,
> Lucas
>
> On Monday, 4 September 2017 18:41:02 UTC+2, mac wrote:
>>
>> I think the problem arises from the fact that the error residual is 
>> essentially zero in the absence of loading. There is nothing wrong with the 
>> theory, it’s just that we assume loading will / should be applied. I think 
>> this is reasonable.
>>
>> Try a small value (positive or negative) for the pressure and see what 
>> happens
>>
>> A
>>
>>
>>
>> On 4 Sep 2017, at 16:38, Lucas Campos <rmk...@gmail.com> wrote:
>>
>> Dear all,
>>
>> First, I would like to apologize for posting the diff in the "wrong direction". 
>> I just attached the second version correctly.
>>
>> Also, I think I need to expand on a few things I already tried, none with 
>> success. First, I tried removing the boundary 6
>> altogether. Unsurprisingly, this alone did not work, as it is the same as 
>> simply setting p = 0, as done previously.
>>
>> I tried augmenting this approach by reinserting the boundary conditions, 
>> as I was suspecting the system could be under-
>> specified. This still did not work. At this point I started suspecting 
>> there might be something I was missing from the theory,
>> but I still tried adding BCs in +x and +z as well. You can guess the 
>> result. 
>>
>> I then read Simo et al. 1985, but even then, I don't see why having the 
>> +y side free would lead to any problems. My intuition
>> says we should simply get a cube, with no distortions at all. The most trivial 
>> of problems. But this is not what the program above 
>> outputs.
>>
>> On Monday, 4 September 2017 16:18:43 UTC+2, Lucas Campos wrote:
>>>
>>> Dear all,
>>>
>>> I am playing around with the step-44 code, and I have made a few (minor)
>>> modifications to it. Namely, I set the sides of the cube to be free (that
>>> is, with BC of null force) and added the whole top side to boundary
>>> 6. This results in exactly what I expect, that is, a squished cube, with 
>>> 90
>>> degrees symmetry. You can see the results in result_squish.png. 
>>>
>>> I could also modify the program to have a positive pressure. The result 
>>> was not
>>> what I had anticipated, but upon analysing the BCs, it seems correct. 
>>>  This can
>>> be found in result_pull.png.
>>>
>>> However, when I set the pressure to zero, the program stops converging. 
>>> I do
>>> not understand why that happens, as it seems that the analytical 
>>> formulation is
>>> independent from an external pressure. I spent quite a while looking at how 
>>> to fix
>>> this, but it seems I am missing something rather subtle. Any ideas how to proceed?
>>>
>>> I have added both the source code I am using - which, as I said, is 
>>> based in
>>> step-44 - and the diff between the original version and my version. But, 
>>> in short,
>>> I removed a few lines make_constraints related to the +- x and +- z 
>>> sides and
>>> changed make_grid such that the whole +y side is now part of the 
>>> boundary 6.
>>>
>

Re: [deal.II] Re: Modifying step-44 - Null pressure case

2017-09-05 Thread Lucas Campos
Dear Andrew,

I tested with small (but non-null) pressures and the system indeed 
converges. If I understand your point correctly, I am essentially having a 
zero-by-zero division when calculating the errors, and that is why the 
system fails to converge. Is that so?

Bests,
Lucas

On Monday, 4 September 2017 18:41:02 UTC+2, mac wrote:
>
> I think the problem arises from the fact that the error residual is 
> essentially zero in the absence of loading. There is nothing wrong with the 
> theory, it’s just that we assume loading will / should be applied. I think 
> this is reasonable.
>
> Try a small value (positive or negative) for the pressure and see what 
> happens
>
> A
>
>
>
> On 4 Sep 2017, at 16:38, Lucas Campos <rmk...@gmail.com > 
> wrote:
>
> Dear all,
>
> First, I would like to apologize for posting the diff in the "wrong direction". 
> I just attached the second version correctly.
>
> Also, I think I need to expand on a few things I already tried, none with 
> success. First, I tried removing the boundary 6
> altogether. Unsurprisingly, this alone did not work, as it is the same as 
> simply setting p = 0, as done previously.
>
> I tried augmenting this approach by reinserting the boundary conditions, 
> as I was suspecting the system could be under-
> specified. This still did not work. At this point I started suspecting 
> there might be something I was missing from the theory,
> but I still tried adding BCs in +x and +z as well. You can guess the 
> result. 
>
> I then read Simo et al. 1985, but even then, I don't see why having the +y 
> side free would lead to any problems. My intuition
> says we should simply get a cube, with no distortions at all. The most trivial 
> of problems. But this is not what the program above 
> outputs.
>
> On Monday, 4 September 2017 16:18:43 UTC+2, Lucas Campos wrote:
>>
>> Dear all,
>>
>> I am playing around with the step-44 code, and I have made a few (minor)
>> modifications to it. Namely, I set the sides of the cube to be free (that
>> is, with BC of null force) and added the whole top side to boundary
>> 6. This results in exactly what I expect, that is, a squished cube, with 
>> 90
>> degrees symmetry. You can see the results in result_squish.png. 
>>
>> I could also modify the program to have a positive pressure. The result 
>> was not
>> what I had anticipated, but upon analysing the BCs, it seems correct. 
>>  This can
>> be found in result_pull.png.
>>
>> However, when I set the pressure to zero, the program stops converging. I 
>> do
>> not understand why that happens, as it seems that the analytical 
>> formulation is
>> independent from an external pressure. I spent quite a while looking at how 
>> to fix
>> this, but it seems I am missing something rather subtle. Any ideas how to proceed?
>>
>> I have added both the source code I am using - which, as I said, is based 
>> in
>> step-44 - and the diff between the original version and my version. But, 
>> in short,
>> I removed a few lines make_constraints related to the +- x and +- z sides 
>> and
>> changed make_grid such that the whole +y side is now part of the boundary 
>> 6.
>>
>> There was no changes in parameters.prm.
>>
>> Bests
>>
>
> 
>
>
>



[deal.II] Re: Modifying step-44 - Null pressure case

2017-09-04 Thread Lucas Campos
Dear all,

First, I would like to apologize for posting the diff in the "wrong direction". I 
just attached the second version correctly.

Also, I think I need to expand on a few things I already tried, none with 
success. First, I tried removing the boundary 6
altogether. Unsurprisingly, this alone did not work, as it is the same as 
simply setting p = 0, as done previously.

I tried augmenting this approach by reinserting the boundary conditions, as 
I was suspecting the system could be under-
specified. This still did not work. At this point I started suspecting 
there might be something I was missing from the theory,
but I still tried adding BCs in +x and +z as well. You can guess the 
result. 

I then read Simo et al. 1985, but even then, I don't see why having the +y 
side free would lead to any problems. My intuition
says we should simply get a cube, with no distortions at all. The most trivial 
of problems. But this is not what the program above 
outputs.

On Monday, 4 September 2017 16:18:43 UTC+2, Lucas Campos wrote:
>
> Dear all,
>
> I am playing around with the step-44 code, and I have made a few (minor)
> modifications to it. Namely, I set the sides of the cube to be free (that
> is, with BC of null force) and added the whole top side to boundary
> 6. This results in exactly what I expect, that is, a squished cube, with 90
> degrees symmetry. You can see the results in result_squish.png. 
>
> I could also modify the program to have a positive pressure. The result 
> was not
> what I had anticipated, but upon analysing the BCs, it seems correct. 
>  This can
> be found in result_pull.png.
>
> However, when I set the pressure to zero, the program stops converging. I 
> do
> not understand why that happens, as it seems that the analytical 
> formulation is
> independent from an external pressure. I spent quite a while looking at how 
> to fix
> this, but it seems I am missing something rather subtle. Any ideas how to proceed?
>
> I have added both the source code I am using - which, as I said, is based 
> in
> step-44 - and the diff between the original version and my version. But, 
> in short,
> I removed a few lines make_constraints related to the +- x and +- z sides 
> and
> changed make_grid such that the whole +y side is now part of the boundary 
> 6.
>
> There was no changes in parameters.prm.
>
> Bests
>

1529,1544c1529,1533
<   &&
<   cell->face(face)->center()[1] == 1.0 * parameters.scale)
< {
<   if (dim==3)
< {
<   if (cell->face(face)->center()[0] < 0.5 * parameters.scale
<   &&
<   cell->face(face)->center()[2] < 0.5 * parameters.scale)
< cell->face(face)->set_boundary_id(6);
< }
<   else
< {
<   if (cell->face(face)->center()[0] < 0.5 * parameters.scale)
< cell->face(face)->set_boundary_id(6);
< }
< }
---
>   &&
>   cell->face(face)->center()[1] == 1.0 * parameters.scale)
>   {
>   cell->face(face)->set_boundary_id(6);
>   }
2422c2411
<   static const double  p0= -4.0
---
>   static const double  p0= 0.0
2493,2508d2481
<   const int boundary_id = 0;
< 
<   if (apply_dirichlet_bc == true)
< VectorTools::interpolate_boundary_values(dof_handler_ref,
<  boundary_id,
<  ZeroFunction(n_components),
<  constraints,
<  fe.component_mask(x_displacement));
<   else
< VectorTools::interpolate_boundary_values(dof_handler_ref,
<  boundary_id,
<  ZeroFunction(n_components),
<  constraints,
<  fe.component_mask(x_displacement));
< }
< {
2530,2566d2502
<   const int boundary_id = 3;
< 
&

Re: [deal.II] Re: if else leading to the same path in step 44

2017-08-28 Thread Lucas Campos
Dear Jean-Paul,

I can definitely do it.
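
Just so we are on the same page, this is roughly the condensed version I
have in mind (a sketch only, using the tutorial's variable names, with the
per-boundary calls collapsed into one; not a tested patch):

  // sketch: inside make_constraints(it_nr)
  if (it_nr > 1)
    return;               // constraints from the previous iteration are reused
  constraints.clear();
  // Since the "if" and "else" branches impose the same zero values, one call
  // per constrained boundary id is enough, e.g. for the x-displacement on id 0:
  VectorTools::interpolate_boundary_values(dof_handler_ref,
                                           0,
                                           Functions::ZeroFunction<dim>(n_components),
                                           constraints,
                                           fe.component_mask(x_displacement));
  // ... repeat for the remaining boundary ids / component masks ...
  constraints.close();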

Lucas

On 28 August 2017 at 17:40, Jean-Paul Pelteret <jppelte...@gmail.com> wrote:

> Dear Lucas,
>
> You're welcome. Thanks for the discussion and your feedback! I've made a
> note to myself <https://github.com/dealii/dealii/issues/4979> to improve
> the documentation in this area, but if you want a small side-project then
> feel free to propose the amendments yourself with a pull request to the
> GitHub repository :-) I'd be happy to help you out if you want / need it.
>
> J-P
>
> On Monday, August 28, 2017 at 5:35:10 PM UTC+2, Lucas Campos wrote:
>>
>> Dear Jean-Paul,
>>
>> Thanks again. Now I do understand and it all makes sense.
>>
>> Also, yes, I agree that the lack of the else could lead to errors that are
>> quite hard to track down. However, I think adding your comment to that
>> part of the tutorial could save someone from having the same issues I faced.
>>
>> Thanks for your time,
>> Lucas
>>
>> On 28 August 2017 at 17:26, Jean-Paul Pelteret <> wrote:
>>
>>> Hi Lucas,
>>>
>>> I see, now I understand to what you were referring.
>>>
>>> Yes, the specific example that you've shown here is an intentional
>>> redundancy. Previous experiences has shown that we all have a tendency to
>>> copy-paste the tutorials and work from them (that is their intended use
>>> after all :-). I put this in place to ensure that one doesn't accidentally
>>> forget the "else" case when refactoring this code. It would be very easy to
>>> do, and could be a tricky error to track down. Furthermore, it also
>>> reinforces what was discussed in the introductory paragraph above it.
>>>
>>> So I agree fully that this is not required. One could in fact reduce
>>> this entire implementation of the constraints to only a few lines. But in
>>> the interest of clarity it will remain in its current form for now :-)
>>>
>>> Regards,
>>> Jean-Paul
>>>
>>> On Monday, August 28, 2017 at 5:14:52 PM UTC+2, Lucas Campos wrote:
>>>>
>>>> Dear Jean-Paul,
>>>>
>>>> Thanks for your answer! Still, I am not sure if I understand the code.
>>>> The main issue seems to be that both blocks in every if-else path are
>>>> exactly the same. For instance,
>>>>
>>>> if (apply_dirichlet_bc == true)
>>>>> VectorTools::interpolate_boundary_values(dof_handler_ref,
>>>>> boundary_id,
>>>>> Functions::ZeroFunction<dim>(n_components),
>>>>> constraints,
>>>>> fe.component_mask(x_displacement));
>>>>> else
>>>>> VectorTools::interpolate_boundary_values(dof_handler_ref,
>>>>> boundary_id,
>>>>> Functions::ZeroFunction<dim>(n_components),
>>>>> constraints,
>>>>> fe.component_mask(x_displacement));
>>>>>
>>>>
>>>>  What is the use of having if-else in this situation?
>>>>
>>>> Also, in your answer you mentioned that in the second call (iteration
>>>> == 1), only homogeneous constraints will be built. I also cannot see this
>>>> from the code. Perhaps this part is interacting with some other part of the
>>>> code in a way I cannot see?
>>>>
>>>> Thanks for the help,
>>>> Lucas Campos
>>>>
>>>> On 28 August 2017 at 17:03, Jean-Paul Pelteret <> wrote:
>>>>
>>>>> Hi Lucas,
>>>>>
>>>>> So it sounds as if you've already worked out what's happening here. We
>>>>> consider 3 cases, namely what to do for newton iteration equal to 0, equal
>>>>> 

Re: [deal.II] Re: if else leading to the same path in step 44

2017-08-28 Thread Lucas Campos
Dear Jean-Paul,

Thanks for your answer! Still, I am not sure if I understand the code. The
main issue seems to be that both blocks in every if-else path are
exactly the same. For instance,

> if (apply_dirichlet_bc == true)
> VectorTools::interpolate_boundary_values(dof_handler_ref,
> boundary_id,
> Functions::ZeroFunction<dim>(n_components),
> constraints,
> fe.component_mask(x_displacement));
> else
> VectorTools::interpolate_boundary_values(dof_handler_ref,
> boundary_id,
> Functions::ZeroFunction<dim>(n_components),
> constraints,
> fe.component_mask(x_displacement));
>

 What is the use of having if-else in this situation?

Also, in your answer you mentioned that in the second call (iteration ==
1), only homogeneous constraints will be built. I also cannot see this from
the code. Perhaps this part is interacting with some other part of the code
in a way I cannot see?

Thanks for the help,
Lucas Campos

On 28 August 2017 at 17:03, Jean-Paul Pelteret <jppelte...@gmail.com> wrote:

> Hi Lucas,
>
> So it sounds as if you've already worked out what's happening here. We
> consider 3 cases, namely what to do for newton iteration equal to 0, equal
> to 1, and greater than 1.
>
> For iteration == 0 and iteration == 1, the early return on line 2468
> <https://github.com/dealii/dealii/blob/master/examples/step-44/step-44.cc?utf8=%E2%9C%93#L2468>
> is not satisfied and we have to build some dirichlet constraints. Which
> constraints are built is governed by the iteration number: if it is equal
> to zero, then apply_dirichlet_BC
> <https://github.com/dealii/dealii/blob/master/examples/step-44/step-44.cc?utf8=%E2%9C%93#L2471>
> is set true and we build non-homogeneous constraints (the "if" case). On
> the next visit to this function (iteration == 1) this flag is set to false
> and we build only homogeneous constraints (the "else" case). At subsequent
> iterations we use the early return that I previously mentioned to make no
> further changes to the existing constraint matrix. This is because it is
> already filled with the homogeneous constraints, which is what we want for
> all newton iterations > 0.
>
> Does this make sense?
>
> Regards,
> Jean-Paul
>
> On Monday, August 28, 2017 at 4:14:31 PM UTC+2, Lucas Campos wrote:
>>
>> > This calculation does not happen anyway, because of the early return in
>> line 2466.
>>
>> Just a minor correction for the previous sentence: The guard on line 2466
>> will be true only from the second iteration on. The else paths will still be
>> taken at least once.
>>
>> On Monday, 28 August 2017 16:02:01 UTC+2, Lucas Campos wrote:
>>>
>>> Dear all,
>>>
>>> First of all, thanks for the great library! I spent the last few days
>>> reading your (very!) extensive
>>> documentation and I think it will be really useful for me in the near
>>> future.
>>>
>>> Right now I am struggling a bit on understanding some things on step-44.
>>> Most pressing right
>>> now is that, in Section Solid::make_constraints of the tutorials, we
>>> have
>>>
>>> > However, since we are dealing with an iterative Newton method, it
>>> should be noted
>>> > that any displacement constraints should only be specified at the
>>> zeroth iteration
>>> > and subsequently no additional contributions are to be made since the
>>> constraints
>>> > are already exactly satisfied.
>>>
>>> However, reading the code underneath, the code under the if-else blocks
>>> are the same (around
>>> line 2490 of step-44.cc). That is to say, we would run the same
>>> computation whether the condition
>>> is true or false. This calculation does not happen anyway, because of
>>> the early return in line 2466.
>>> If we disregard this early return, would this code be wrong, or is there
>>> something I am not seeing?
>>>
>>> Bests,
>>> Lucas
>>>
>>

[deal.II] Re: if else leading to the same path in step 44

2017-08-28 Thread Lucas Campos
> This calculation does not happen anyway, because of the early return in 
line 2466. 

Just a minor correction for the previous sentence: The guard on line 2466 
will be true only from the second iteration on. The else paths will still be 
taken at least once.

On Monday, 28 August 2017 16:02:01 UTC+2, Lucas Campos wrote:
>
> Dear all,
>
> First of all, thanks for the great library! I spent the last few days 
> reading your (very!) extensive 
> documentation and I think it will be really useful for me in the near 
> future. 
>
> Right now I am struggling a bit on understanding some things on step-44. 
> Most pressing right 
> now is that, in Section Solid::make_constraints of the tutorials, we have 
>
> > However, since we are dealing with an iterative Newton method, it 
> should be noted 
> > that any displacement constraints should only be specified at the zeroth 
> iteration 
> > and subsequently no additional contributions are to be made since the 
> constraints 
> > are already exactly satisfied.
>
> However, reading the code underneath, the code under the if-else blocks 
> is the same (around 
> line 2490 of step-44.cc). That is to say, we would run the same 
> computation whether the condition 
> is true or false. This calculation does not happen anyway, because of the 
> early return in line 2466. 
> If we disregard this early return, would this code be wrong, or is there 
> something I am not seeing?
>
> Bests,
> Lucas 
>



[deal.II] if else leading to the same path in step 44

2017-08-28 Thread Lucas Campos
Dear all,

First of all, thanks for the great library! I spent the last few days 
reading your (very!) extensive 
documentation and I think it will be really useful for me in the near 
future. 

Right now I am struggling a bit on understanding some things on step-44. 
Most pressing right 
now is that, in Section Solid::make_constraints of the tutorials, we have 

> However, since we are dealing with an iterative Newton method, it should 
be noted 
> that any displacement constraints should only be specified at the zeroth 
iteration 
> and subsequently no additional contributions are to be made since the 
constraints 
> are already exactly satisfied.

However, reading the code underneath, the code under the if-else blocks is 
the same (around 
line 2490 of step-44.cc). That is to say, we would run the same computation 
whether the condition 
is true or false. This calculation does not happen anyway, because of the 
early return in line 2466. 
If we disregard this early return, would this code be wrong, or is there 
something I am not seeing?

Bests,
Lucas 
