Re: [deal.II] bug in program only with adaptive mesh refinement

2017-03-07 Thread Wolfgang Bangerth



Hi Wolfgang, a brief update -- before, I was using
ConstraintMatrix::distribute_local_to_global in the assembly of both the
derivative vector and the Hessian matrix. I can see how that would make
the computed vector no longer represent the derivative of a functional,
since the entries are modified to make the hanging node constraints
come out correctly. To see if this was the issue, I instead changed the
code to add up contributions to the global vector/matrix directly, and
to only reconcile the constraints in the nonlinear solver routine. All the
other unit tests still pass, so this change didn't break the nonlinear
solver for the unrefined case. Still, the errors in the local linear
approximation to P aren't decreasing for the refined mesh.
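
In code, the change amounts to something like the following sketch; the names
cell_matrix, cell_rhs, local_dof_indices, constraints, system_matrix and
system_rhs are placeholders rather than the actual program's objects, and the
deal.II 8.x-era ConstraintMatrix interface is assumed:

#include <deal.II/base/types.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

// Hypothetical per-cell copy routine contrasting the two strategies; the
// surrounding assembly loop is assumed to own all of these objects.
void copy_local_to_global(const FullMatrix<double> &cell_matrix,
                          const Vector<double>     &cell_rhs,
                          const std::vector<types::global_dof_index> &local_dof_indices,
                          const ConstraintMatrix   &constraints,
                          SparseMatrix<double>     &system_matrix,
                          Vector<double>           &system_rhs,
                          const bool                apply_constraints_now)
{
  if (apply_constraints_now)
    // Before: fold the hanging node constraints in during assembly. The
    // resulting global vector no longer holds the raw derivative entries.
    constraints.distribute_local_to_global(cell_matrix, cell_rhs,
                                           local_dof_indices,
                                           system_matrix, system_rhs);
  else
    {
      // After: accumulate the raw contributions and let the nonlinear
      // solver deal with the constraints later (e.g. via condense() or
      // distribute()).
      system_matrix.add(local_dof_indices, cell_matrix);
      system_rhs.add(local_dof_indices, cell_rhs);
    }
}

The second variant keeps the raw derivative entries intact until the solver
decides how to treat the constrained rows.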

I'll try your suggestion of assembling (f(u), v) directly and report
back. If you have other suggestions, I'm all ears; this is a pretty
serious roadblock for what I want to do next.


I imagine -- but you're doing the right thing trying to figure things out.

I don't have any other suggestions, except possibly to try to work out a 
little example that you can compute by hand on a piece of paper. E.g., 
start with two cells, refine one, and prescribe a "current solution" u 
that is convenient (e.g., constant, or linear). Then work out on a piece 
of paper how everything ought to look, and compare what you get out of 
your code. I've often used this as a last resort, because I know that 
I'm going to spend an afternoon writing stuff on a piece of paper and 
comparing with what I get, but it's also often helped me debug things 
with which I've banged my head against the wall for too long already.
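
As a rough, minimal sketch of such a test case -- assuming a 2d Q1
discretization and the ConstraintMatrix class, and not taken from your
program -- it could look like this:

#include <deal.II/base/function.h>
#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

#include <iostream>
#include <vector>

int main()
{
  using namespace dealii;

  // Two cells side by side; refine only the first one so that hanging
  // nodes appear on the shared face.
  Triangulation<2> triangulation;
  const std::vector<unsigned int> repetitions = {2, 1};
  GridGenerator::subdivided_hyper_rectangle(triangulation, repetitions,
                                            Point<2>(0, 0), Point<2>(2, 1));
  triangulation.begin_active()->set_refine_flag();
  triangulation.execute_coarsening_and_refinement();

  FE_Q<2>       fe(1);
  DoFHandler<2> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // These are the constraints you would also work out by hand.
  ConstraintMatrix constraints;
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  constraints.close();
  constraints.print(std::cout);

  // A convenient "current solution", here u = 1 everywhere, to plug into
  // the derivative/Hessian assembly and compare against paper and pencil.
  Vector<double> current_solution(dof_handler.n_dofs());
  VectorTools::interpolate(dof_handler, ConstantFunction<2>(1.0),
                           current_solution);
  current_solution.print(std::cout);
}

From the printed constraints and solution vector you can then check, entry
by entry, what the assembled derivative ought to look like.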


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Hi Michael,

I've opened an issue for this. The problem seems not to be with the actual
examples *per se*, but rather with the mechanism currently in place to build
the documentation: it will collect whichever files are in these
subdirectories and pass them along to doxygen. So, in the event that one has
built or run any of the examples, what's passed to doxygen includes any
CMakeFiles, visualisation output etc. We'll find a way to improve this, but
in the meantime you can completely purge these directories of any
superfluous files (i.e. run "make distclean" in those directories), nuke
your build directory, and build it all again. Hopefully you'll only have to
build the documentation once more :-)

Best,
Jean-Paul

On Tuesday, March 7, 2017 at 9:34:38 PM UTC+1, Michael Harmon wrote:
>
> Hi Jean-Paul,
>
> Yes, thanks!  I removed all the CG examples except for
> goal_oriented_elastoplasticity and my own, and it worked, so I guess it is
> one of the CG examples. It takes a painfully long time to write the HTML
> on my puny MacBook Air :)
>
> - Mike
>
> On Tuesday, March 7, 2017 at 11:54:42 AM UTC-5, Jean-Paul Pelteret wrote:
>>
>> Thanks Wolfgang, a slight permutation of that seemed to work! I'll submit 
>> a PR in a moment.
>>
>> Michael, can you tell me if you've built any of the code gallery 
>> examples? I think that this might be the issue. If you have, can you go 
>> into those examples' directories and run "make distclean", then try to 
>> build the documentation again? It looks like this has fixed the problem for 
>> me.
>>
>> FYI. Before cleaning the CG examples completely, this was the last line 
>> of doxygen.log:
>>
>>> input buffer overflow, can't enlarge buffer because scanner uses REJECT
>>>
>>
>> On Tuesday, March 7, 2017 at 5:34:39 PM UTC+1, Wolfgang Bangerth wrote:
>>>
>>> On 03/07/2017 09:30 AM, Jean-Paul Pelteret wrote: 
>>> > 
>>> > Matthias, is there any way to disable the deletion of doxygen.log when 
>>> a 
>>> > build of the documentation fails? 
>>>
>>> In doc/doxygen/CMakeLists.txt, line ~230, you have 
>>>
>>> ADD_CUSTOM_COMMAND( 
>>>OUTPUT 
>>>  ${CMAKE_BINARY_DIR}/doxygen.log 
>>>COMMAND ${DOXYGEN_EXECUTABLE} 
>>>  ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>>>  > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>>>... 
>>>
>>> Can you try to change that into something of the form 
>>>
>>> ADD_CUSTOM_COMMAND( 
>>>OUTPUT 
>>>  ${CMAKE_BINARY_DIR}/doxygen.log 
>>>COMMAND 
>>>  (${DOXYGEN_EXECUTABLE} 
>>>   ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>>>   > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>>>  ) 
>>>  || 
>>>  mv ${CMAKE_BINARY_DIR}/doxygen.log ${CMAKE_BINARY_DIR}/doxygen.err 
>>>... 
>>>
>>>
>>> The second branch of || is only executed if the first one fails, and 
>>> moves the output file to an error file. 
>>>
>>> If this happens to work, please submit this as a patch in general -- we 
>>> should try to preserve error messages. 
>>>
>>> Best 
>>>   W. 
>>>
>>> -- 
>>>  
>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>> www: 
>>> http://www.math.colostate.edu/~bangerth/ 
>>>
>>



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
Hi Jean-Paul,

Yes, thanks!  I removed all the CG examples except for
goal_oriented_elastoplasticity and my own, and it worked, so I guess it is
one of the CG examples. It takes a painfully long time to write the HTML
on my puny MacBook Air :)

- Mike

On Tuesday, March 7, 2017 at 11:54:42 AM UTC-5, Jean-Paul Pelteret wrote:
>
> Thanks Wolfgang, a slight permutation of that seemed to work! I'll submit 
> a PR in a moment.
>
> Michael, can you tell me if you've built any of the code gallery examples? 
> I think that this might be the issue. If you have, can you go into those 
> examples' directories and run "make distclean", then try to build the 
> documentation again? It looks like this has fixed the problem for me.
>
> FYI. Before cleaning the CG examples completely, this was the last line of 
> doxygen.log:
>
>> input buffer overflow, can't enlarge buffer because scanner uses REJECT
>>
>
> On Tuesday, March 7, 2017 at 5:34:39 PM UTC+1, Wolfgang Bangerth wrote:
>>
>> On 03/07/2017 09:30 AM, Jean-Paul Pelteret wrote: 
>> > 
>> > Matthias, is there any way to disable the deletion of doxygen.log when 
>> a 
>> > build of the documentation fails? 
>>
>> In doc/doxygen/CMakeLists.txt, line ~230, you have 
>>
>> ADD_CUSTOM_COMMAND( 
>>OUTPUT 
>>  ${CMAKE_BINARY_DIR}/doxygen.log 
>>COMMAND ${DOXYGEN_EXECUTABLE} 
>>  ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>>  > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>>... 
>>
>> Can you try to change that into something of the form 
>>
>> ADD_CUSTOM_COMMAND( 
>>OUTPUT 
>>  ${CMAKE_BINARY_DIR}/doxygen.log 
>>COMMAND 
>>  (${DOXYGEN_EXECUTABLE} 
>>   ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>>   > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>>  ) 
>>  || 
>>  mv ${CMAKE_BINARY_DIR}/doxygen.log ${CMAKE_BINARY_DIR}/doxygen.err 
>>... 
>>
>>
>> The second branch of || is only executed if the first one fails, and 
>> moves the output file to an error file. 
>>
>> If this happens to work, please submit this as a patch in general -- we 
>> should try to preserve error messages. 
>>
>> Best 
>>   W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>  
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Thanks Wolfgang, a slight permutation of that seemed to work! I'll submit a 
PR in a moment.

Michael, can you tell me if you've built any of the code gallery examples? 
I think that this might be the issue. If you have, can you go into those 
examples' directories and run "make distclean", then try to build the 
documentation again? It looks like this has fixed the problem for me.

FYI. Before cleaning the CG examples completely, this was the last line of 
doxygen.log:

> input buffer overflow, can't enlarge buffer because scanner uses REJECT
>

On Tuesday, March 7, 2017 at 5:34:39 PM UTC+1, Wolfgang Bangerth wrote:
>
> On 03/07/2017 09:30 AM, Jean-Paul Pelteret wrote: 
> > 
> > Matthias, is there any way to disable the deletion of doxygen.log when a 
> > build of the documentation fails? 
>
> In doc/doxygen/CMakeLists.txt, line ~230, you have 
>
> ADD_CUSTOM_COMMAND( 
>OUTPUT 
>  ${CMAKE_BINARY_DIR}/doxygen.log 
>COMMAND ${DOXYGEN_EXECUTABLE} 
>  ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>  > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>... 
>
> Can you try to change that into something of the form 
>
> ADD_CUSTOM_COMMAND( 
>OUTPUT 
>  ${CMAKE_BINARY_DIR}/doxygen.log 
>COMMAND 
>  (${DOXYGEN_EXECUTABLE} 
>   ${CMAKE_CURRENT_BINARY_DIR}/options.dox 
>   > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst* 
>  ) 
>  || 
>  mv ${CMAKE_BINARY_DIR}/doxygen.log ${CMAKE_BINARY_DIR}/doxygen.err 
>... 
>
>
> The second branch of || is only executed if the first one fails, and 
> moves the output file to an error file. 
>
> If this happens to work, please submit this as a patch in general -- we 
> should try to preserve error messages. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bange...@colostate.edu 
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Daniel Arndt
Seyed,

After just a quick glance over your approach, there seem to be some
more issues you can easily stumble over:
- n_vertices() only gives you the number of vertices for one process, not 
the global one. In particular, you can't rely on the fact that this is the 
same for all processes. Furthermore, you (probably) don't initialize all 
the values of your struct. This might be the reason for the large numbers 
you are observing.
- You seem to rely on a specific numbering of the degrees of freedom in
FEValues. In particular, you are assuming that you only have nodal values
at the cell vertices, which can only be true for Q1 elements. Furthermore,
you rely on the fact that the local dof for the y-component is stored right
after the one for the x-component. This is an unsafe assumption. The proper
way of doing this is to use an appropriate Quadrature object that is
initialized with support points at the points you are interested in; see
the sketch after this list. Have a look at the FAQ [1,2].
- You seem to first calculate the maximum for each vertex across all the 
processes and after that the maximum over all vertices. It would probably 
be easier to first find the maximum value and location on each process and 
after that just compare this one value across all the processes. This would 
probably also avoid your problems with large numbers.
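
A hypothetical fragment of what [2] boils down to here -- it assumes that
dim, fe, dof_handler and locally_relevant_solution exist in the surrounding
program, and it is not a complete example:

// Evaluate the vector-valued solution at the element's support points
// instead of guessing the local dof ordering.
Quadrature<dim> support_quadrature(fe.get_unit_support_points());
FEValues<dim>   fe_values(fe, support_quadrature,
                          update_values | update_quadrature_points);

const FEValuesExtractors::Vector displacements(0);
std::vector<Tensor<1, dim>> u_values(support_quadrature.size());

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      fe_values.reinit(cell);
      fe_values[displacements].get_function_values(locally_relevant_solution,
                                                   u_values);

      for (unsigned int q = 0; q < support_quadrature.size(); ++q)
        {
          const double disp_norm = u_values[q].norm();
          // ...keep track of the maximum together with
          //    fe_values.quadrature_point(q) and the current rank...
        }
    }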

Otherwise, I can just agree with Wolfgang: learning how to debug is a
crucial skill if you ever want to build working software.

Best,
Daniel

[1] 
https://github.com/dealii/dealii/wiki/Frequently-Asked-Questions#how-do-i-access-values-of-discontinuous-elements-at-vertices
 

[2] 
https://github.com/dealii/dealii/wiki/Frequently-Asked-Questions#how-to-get-the-mapped-position-of-support-points-of-my-element

Am Dienstag, 7. März 2017 16:55:47 UTC+1 schrieb Seyed Ali Mohseni:
>
> Dear all,
>
> With MPI_Allreduce it works like a charm. Thank you very much everyone.
>
> Especially Prof. Bangerth and Daniel.
>
> Kind regards,
> S. A. Mohseni
>



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Wolfgang Bangerth

On 03/07/2017 09:30 AM, Jean-Paul Pelteret wrote:


Matthias, is there any way to disable the deletion of doxygen.log when a
build of the documentation fails?


In doc/doxygen/CMakeLists.txt, line ~230, you have

ADD_CUSTOM_COMMAND(
  OUTPUT
${CMAKE_BINARY_DIR}/doxygen.log
  COMMAND ${DOXYGEN_EXECUTABLE}
${CMAKE_CURRENT_BINARY_DIR}/options.dox
> ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst*
  ...

Can you try to change that into something of the form

ADD_CUSTOM_COMMAND(
  OUTPUT
${CMAKE_BINARY_DIR}/doxygen.log
  COMMAND
(${DOXYGEN_EXECUTABLE}
 ${CMAKE_CURRENT_BINARY_DIR}/options.dox
 > ${CMAKE_BINARY_DIR}/doxygen.log 2>&1 # *pssst*
)
||
mv ${CMAKE_BINARY_DIR}/doxygen.log ${CMAKE_BINARY_DIR}/doxygen.err
  ...


The second branch of || is only executed if the first one fails, and 
moves the output file to an error file.


If this happens to work, please submit this as a patch in general -- we 
should try to preserve error messages.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Dear all,

Ok, so it looks as though at least one of the code gallery examples is
being problematic. I nuked my build directory and moved all but one of the 
CG examples out of their subdirectory. For added precaution, I disabled 
MathJax via
-DDEAL_II_DOXYGEN_USE_MATHJAX=OFF \
-DDEAL_II_DOXYGEN_USE_ONLINE_MATHJAX=OFF \
and have left only "goal_oriented_elastoplasticity" to be built. This 
worked successfully. 

I'll add the other examples back in one-by-one to see which one is the
problem child (please don't let it be either of mine... :-)

Matthias, is there any way to disable the deletion of doxygen.log when a 
build of the documentation fails?

J-P





On Tuesday, March 7, 2017 at 4:28:00 PM UTC+1, Michael Harmon wrote:
>
> Thanks! I am glad it wasn't just me!!
>
> Mike
>
> On Tuesday, March 7, 2017 at 10:26:43 AM UTC-5, Jean-Paul Pelteret wrote:
>>
>> Hi Michael,
>>
>> I've just tried to build the documentation with the code gallery and have 
>> run into similar problems. I'm going to fiddle around to see if I can work 
>> out what the issue might be. I'll post an update if I get anywhere with 
>> this.
>>
>> Best,
>> Jean-Paul
>>
>> On Tuesday, March 7, 2017 at 3:50:46 PM UTC+1, Michael Harmon wrote:
>>>
>>> I ran "make" again and attached the outputs fm the terminal into make.log
>>>
>>> I also ran "make install" and attached the outputs from the into 
>>> make_install.log
>>>
>>> It seems they are failing at different points... but I'm not sure whats 
>>> going wrong..
>>>
>>> Thanks,
>>>
>>> Mike
>>>
>>> On Monday, March 6, 2017 at 10:58:28 PM UTC-5, Wolfgang Bangerth wrote:

>>>> On 03/06/2017 08:14 PM, Michael Harmon wrote:
>>>> > The doxygen.log file seems to get deleted... but here's the copy of it
>>>> > I took right before the error gets thrown and it is deleted.
>>>>
>>>> Hm, yes, that is not helpful. Do you get to see more in terms of errors
>>>> if you do
>>>>    make VERBOSE=1
>>>> ? At the least, you would get to see which command is being executed,
>>>> and you could do so by hand on the command line to possibly get the full
>>>> doxygen.log.
>>>>
>>>> > I'm not sure if it makes a difference, but I'm building this on a mac
>>>> > where I am using a terminal that is launched by the deal.II app.
>>>>
>>>> That doesn't trigger anything for me :-(
>>>>
>>>> Best
>>>>   W.
>>>>
>>>> --
>>>> Wolfgang Bangerth  email: bang...@colostate.edu
>>>> www: http://www.math.colostate.edu/~bangerth/





[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear all,

With MPI_Allreduce it works like a charm. Thank you very much everyone.

Especially Prof. Bangerth and Daniel.

Kind regards,
S. A. Mohseni



[deal.II] Re: MeshWorker clarifications

2017-03-07 Thread Franco Milicchio
I have implemented, as suggested by Daniel, another MeshWorker callback for
the right-hand side.

The code is at the bottom, but given the status of the MeshWorker
documentation, I don't know if this is correct. I cannot tell by looking
at the solutions, since the matrices, as I said, differ greatly between the
serial and parallel versions.

Any hints on what I'm doing wrong with my implementation?

Cheers!


static void rhs_integrate_boundary_term (MeshWorker::DoFInfo<dim> &dinfo,
                                         MeshWorker::IntegrationInfo<dim> &info)
{
  if (dumpall)
  {
    std::lock_guard<std::mutex> guard(lock);

    std::cout << "> RHS boundary " << dinfo.cell->id() << " face "
              << dinfo.face_number << std::endl;
  }

  loopb++;

  // FE values on the current face
  const FEValuesBase<dim> &fe_v = info.fe_values();

  // Local RHS vector
  Vector<double> &local_vector = dinfo.vector(0).block(0);

  // Local values of the prescribed boundary data
  std::vector<Vector<double>> boundary_values(fe_v.n_quadrature_points,
                                              Vector<double>(dim));

  // Compute values for this boundary face
  right_face_side.vector_value_list(fe_v.get_quadrature_points(),
                                    boundary_values);

  // Accumulate (f, v) on the face: for each quadrature point and each local
  // shape function, add its value times the matching data component.
  for (unsigned int k = 0; k < fe_v.n_quadrature_points; ++k)
    for (unsigned int i = 0; i < fe_v.dofs_per_cell; ++i)
    {
      const unsigned int component_i =
        info.fe_values().get_fe().system_to_component_index(i).first;

      local_vector(i) += (fe_v.shape_value(i, k)
                          * boundary_values[k][component_i]) * fe_v.JxW(k);
    }
}



Re: [deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Wolfgang Bangerth


Seyed,


MPI_Reduce computes exactly what I want. I checked the values and they are
correct. The value higher than 1e10 is probably an MPI bug while trying to
find the maximum value. But this problem has nothing to do with my max_rank
problem.

I thought maybe you are MPI experts and could give me a hint.

For instance this MPI_MAXLOC idea was a great suggestion from Daniel. I
also wonder why you haven't included it in deal.II yet.

Debugging MPI problems is quite cumbersome, you know that yourself. Setting
up such a debugger environment is described in one of your video tutorials,
but it takes more time I assume than solving this small problem. I agree,
if I want to dive deeply into software design I should improve my MPI
skills, but I am merely trying to use tools you offer me in deal.II. As an
inventor of deal.II, I think it should be in your interest to help your
users find a way to solve their problems in your software.


Let me call you out here: It is, without a doubt, in our interest to help our
users. And we do every day: We have on the order of maybe a thousand questions
per year on the mailing list, and a group of half a dozen of us answer the
vast majority of those. I am certain that over the past 17 years, I have
written more than 10,000 emails to the mailing lists, and many others have
written a large number as well. To claim that we do not try to be helpful is
factually not correct.

That said, there is a limit to what we can do on a mailing list. We can not,
for example, debug every user's programs if they do not work. We have too few
volunteers for this, and it is also not something our employers pay us for. 
You have received significant help on this mailing list already in the past 
few weeks, but I don't think you can expect that we continue to debug your 
programs indefinitely. So what we try to do -- and what I tried to do in my 
previous mail -- is to teach *skills* so that users learn to debug their own 
programs.


That is what I tried to do in my last email. It may not have been worded
particularly well, and I apologize if it came over as rude, but I will try
again. Let me state again what I said:


Seyed -- you need to learn to debug these things. Just ignoring values
greater than 1e10 means that you don't understand why these values are
there -- but then how can you be sure that the *other* values are correct?

You need to learn strategies to figure these things out. Run the program in
a debugger. Print the values that you send to MPI_Reduce and compare what
that function returns with what you *expect* it to return, etc. You cannot
write software that does what it is supposed to do if you don't understand
what it computes. Learning strategies to debug software is the only way you
can learn to write good software.


What I mean here is this:

1/ If your program gives you values of 1e10, and you don't understand where 
they are coming from, then you cannot be sure that the rest is correct. You 
can, of course, ignore everything that is bigger than 1e9, but how do you know 
that that is the correct threshold? You claim that this is a bug in MPI -- 
something I am 99.99% sure is wrong -- but assuming that that is the case, 
what do you do if MPI gives you wrong values that are 1e8? What if they are 
1e2 or 3.141 and in the middle of all of the values that are correct? How will 
you filter out the wrong values?


The point I'm making is that you are in the *wrong mindset* if you think you 
can ignore values that are wrong and that you don't understand where they are 
from. You can say it's rude of me to point that out, but I have been 
programming since 1987 and I have learned a few things since then. One of them 
is that if there are wrong values, you *need to find out where they come 
from*. So what I was trying to say in that statement above is to *teach you a 
skill*, namely to not ignore wrong values but to follow their trail and find 
out what is happening.


2/ I think I actually was being specific in how to debug things, and didn't 
just say "you need to debug". I did say, for example, that you ought to print 
out the things you send to MPI_Reduce, and print the things you get back, and 
make sure these are all correct, on all processors. You can do this in a 
debugger, or just via printf. My choice for these things is to use a debugger 
(there is a video lecture on how to do this with MPI). Of course it takes a 
bit of time to set it up once, but that is again a skill that you will need 
one way or the other at one point as your programs become more complicated.
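
Purely as an illustration -- reusing the names in, out, disp_size and
mpi_com from the code you posted, so not something you can compile on its
own -- such a printout could be as simple as:

const unsigned int my_rank = Utilities::MPI::this_mpi_process(mpi_com);

// What goes into the reduction, on every rank:
for (int i = 0; i < disp_size; ++i)
  std::cout << "[rank " << my_rank << "] in[" << i << "].value = "
            << in[i].value << std::endl;

MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT, MPI_MAXLOC, 0, mpi_com);

// What comes back, on the root rank:
if (my_rank == 0)
  for (int i = 0; i < disp_size; ++i)
    std::cout << "[rank 0] out[" << i << "].value = " << out[i].value
              << "  from rank " << out[i].rank << std::endl;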



Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/


Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
Thanks! I am glad it wasn't just me!!

Mike

On Tuesday, March 7, 2017 at 10:26:43 AM UTC-5, Jean-Paul Pelteret wrote:
>
> Hi Michael,
>
> I've just tried to build the documentation with the code gallery and have 
> run into similar problems. I'm going to fiddle around to see if I can work 
> out what the issue might be. I'll post an update if I get anywhere with 
> this.
>
> Best,
> Jean-Paul
>
> On Tuesday, March 7, 2017 at 3:50:46 PM UTC+1, Michael Harmon wrote:
>>
>> I ran "make" again and attached the outputs fm the terminal into make.log
>>
>> I also ran "make install" and attached the outputs from the into 
>> make_install.log
>>
>> It seems they are failing at different points... but I'm not sure whats 
>> going wrong..
>>
>> Thanks,
>>
>> Mike
>>
>> On Monday, March 6, 2017 at 10:58:28 PM UTC-5, Wolfgang Bangerth wrote:
>>>
>>> On 03/06/2017 08:14 PM, Michael Harmon wrote: 
>>> > The doxygen.log file seems to get deleted... but here's the copy of it 
>>> I took 
>>> > right before the error gets thrown and it is deleted. 
>>>
>>> Hm, yes, that is not helpful. Do you get to see more in terms of errors 
>>> if you do 
>>>make VERBOSE=1 
>>> ? At the least, you would get to see which command is being executed, 
>>> and you 
>>> could do so by hand on the command line to possibly get the full 
>>> doxygen.log. 
>>>
>>>
>>> > I'm not sure if it makes a difference, but I'm building this on a mac 
>>> where I 
>>> > am using a terminal that is launched by the deal.II app. 
>>>
>>> That doesn't trigger anything for me :-( 
>>>
>>> Best 
>>>   W. 
>>>
>>>
>>> -- 
>>>  
>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>> www: 
>>> http://www.math.colostate.edu/~bangerth/ 
>>>
>>>



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Hi Michael,

I've just tried to build the documentation with the code gallery and have 
run into similar problems. I'm going to fiddle around to see if I can work 
out what the issue might be. I'll post an update if I get anywhere with 
this.

Best,
Jean-Paul

On Tuesday, March 7, 2017 at 3:50:46 PM UTC+1, Michael Harmon wrote:
>
> I ran "make" again and attached the outputs fm the terminal into make.log
>
> I also ran "make install" and attached the outputs from the into 
> make_install.log
>
> It seems they are failing at different points... but I'm not sure whats 
> going wrong..
>
> Thanks,
>
> Mike
>
> On Monday, March 6, 2017 at 10:58:28 PM UTC-5, Wolfgang Bangerth wrote:
>>
>> On 03/06/2017 08:14 PM, Michael Harmon wrote: 
>> > The doxygen.log file seems to get deleted... but here's the copy of it 
>> I took 
>> > right before the error gets thrown and it is deleted. 
>>
>> Hm, yes, that is not helpful. Do you get to see more in terms of errors 
>> if you do 
>>make VERBOSE=1 
>> ? At the least, you would get to see which command is being executed, and 
>> you 
>> could do so by hand on the command line to possibly get the full 
>> doxygen.log. 
>>
>>
>> > I'm not sure if it makes a difference, but I'm building this on a mac 
>> where I 
>> > am using a terminal that is launched by the deal.II app. 
>>
>> That doesn't trigger anything for me :-( 
>>
>> Best 
>>   W. 
>>
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>



Re: [deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Wolfgang Bangerth

On 03/07/2017 07:46 AM, 'Seyed Ali Mohseni' via deal.II User Group wrote:


Now my question is: how can I store the data on all processors?
Or how am I able to at least store my max_rank variable on all processors?


The function you're looking for is called MPI_Allreduce.
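
A minimal, self-contained sketch (plain MPI, nothing deal.II-specific) of
how that gives every process the maximum value together with the rank that
owns it:

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // One (value, owner) pair per process; the value is a stand-in for, e.g.,
  // the largest displacement norm found on the locally owned cells.
  struct { double value; int owner; } in, out;
  in.value = 10.0 * rank;
  in.owner = rank;

  // Unlike MPI_Reduce, MPI_Allreduce leaves the result on *every* process.
  MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, MPI_COMM_WORLD);

  std::printf("rank %d: global max %g is owned by rank %d\n",
              rank, out.value, out.owner);

  MPI_Finalize();
  return 0;
}
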
Best
 W.


--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
I ran "make" again and attached the outputs fm the terminal into make.log

I also ran "make install" and attached the outputs from the into 
make_install.log

It seems they are failing at different points... but I'm not sure whats 
going wrong..

Thanks,

Mike

On Monday, March 6, 2017 at 10:58:28 PM UTC-5, Wolfgang Bangerth wrote:
>
> On 03/06/2017 08:14 PM, Michael Harmon wrote: 
> > The doxygen.log file seems to get deleted... but here's the copy of it I 
> took 
> > right before the error gets thrown and it is deleted. 
>
> Hm, yes, that is not helpful. Do you get to see more in terms of errors if 
> you do 
>make VERBOSE=1 
> ? At the least, you would get to see which command is being executed, and 
> you 
> could do so by hand on the command line to possibly get the full 
> doxygen.log. 
>
>
> > I'm not sure if it makes a difference, but I'm building this on a mac 
> where I 
> > am using a terminal that is launched by the deal.II app. 
>
> That doesn't trigger anything for me :-( 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>

/usr/local/Cellar/cmake/3.6.0_1/bin/cmake -H/Users/Mike/Desktop/dealii-8.4.1 
-B/Users/Mike/Desktop/dealii-8.4.1/build --check-build-system 
CMakeFiles/Makefile.cmake 0
/usr/local/Cellar/cmake/3.6.0_1/bin/cmake -E cmake_progress_start 
/Users/Mike/Desktop/dealii-8.4.1/build/CMakeFiles 
/Users/Mike/Desktop/dealii-8.4.1/build/CMakeFiles/progress.marks
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/Makefile2 
all
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/build.make 
cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/depend
cd /Users/Mike/Desktop/dealii-8.4.1/build && 
/usr/local/Cellar/cmake/3.6.0_1/bin/cmake -E cmake_depends "Unix Makefiles" 
/Users/Mike/Desktop/dealii-8.4.1 /Users/Mike/Desktop/dealii-8.4.1/cmake/scripts 
/Users/Mike/Desktop/dealii-8.4.1/build 
/Users/Mike/Desktop/dealii-8.4.1/build/cmake/scripts 
/Users/Mike/Desktop/dealii-8.4.1/build/cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/DependInfo.cmake
 --color=
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/build.make 
cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/build
make[2]: Nothing to be done for 
`cmake/scripts/CMakeFiles/expand_instantiations_exe.dir/build'.
[  0%] Built target expand_instantiations_exe
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
doc/CMakeFiles/doxygen_headers.dir/build.make 
doc/CMakeFiles/doxygen_headers.dir/depend
cd /Users/Mike/Desktop/dealii-8.4.1/build && 
/usr/local/Cellar/cmake/3.6.0_1/bin/cmake -E cmake_depends "Unix Makefiles" 
/Users/Mike/Desktop/dealii-8.4.1 /Users/Mike/Desktop/dealii-8.4.1/doc 
/Users/Mike/Desktop/dealii-8.4.1/build 
/Users/Mike/Desktop/dealii-8.4.1/build/doc 
/Users/Mike/Desktop/dealii-8.4.1/build/doc/CMakeFiles/doxygen_headers.dir/DependInfo.cmake
 --color=
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
doc/CMakeFiles/doxygen_headers.dir/build.make 
doc/CMakeFiles/doxygen_headers.dir/build
make[2]: Nothing to be done for `doc/CMakeFiles/doxygen_headers.dir/build'.
[  0%] Built target doxygen_headers
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/build.make 
doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/depend
cd /Users/Mike/Desktop/dealii-8.4.1/build && 
/usr/local/Cellar/cmake/3.6.0_1/bin/cmake -E cmake_depends "Unix Makefiles" 
/Users/Mike/Desktop/dealii-8.4.1 
/Users/Mike/Desktop/dealii-8.4.1/doc/doxygen/code-gallery 
/Users/Mike/Desktop/dealii-8.4.1/build 
/Users/Mike/Desktop/dealii-8.4.1/build/doc/doxygen/code-gallery 
/Users/Mike/Desktop/dealii-8.4.1/build/doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/DependInfo.cmake
 --color=
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/build.make 
doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/build
make[2]: Nothing to be done for 
`doc/doxygen/code-gallery/CMakeFiles/build_code-gallery_h.dir/build'.
[  0%] Built target build_code-gallery_h
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f 
doc/doxygen/code-gallery/CMakeFiles/code-gallery_Quasi_static_Finite_strain_

[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
I think I figured it out ;)
After thinking about your suggestion, Prof. Bangerth, it came to my mind
that the result of my max_rank variable is stored on a specific processor
due to MPI_Reduce. That means I cannot access these data without currently
being on the corresponding processor rank. For instance, I store my results
on rank 0; if I want to check the max_rank data on processor rank 3, it is
empty. That is why I cannot get any output: when rank 3 owns everything,
the output works fine.

Now my question is: how can I store the data on all processors?
Or how am I able to at least store my max_rank variable on all processors?

BR,
Seyed Ali 



[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear Prof. Bangerth,

MPI_Reduce computes exactly what I want. I checked the values and they are 
correct. The value higher than 1e10 is probably an MPI bug while trying to
find the maximum value. 
But this problem has nothing to do with my max_rank problem.

I thought maybe you are MPI experts and could give me a hint.

For instance this MPI_MAXLOC idea was a great suggestion from Daniel. I 
also wonder why you haven't included it in deal.II yet. 

Debugging MPI problems is quite cumbersome, you know that yourself. Setting 
up such a debugger environment is described in one of your video tutorials, 
but it takes more time I assume than solving this small problem.
I agree, if I want to dive deeply into software design I should improve my 
MPI skills, but I am merely trying to use tools you offer me in deal.II.
As an inventor of deal.II, I think it should be in your interest to help 
your users find a way to solve their problems in your software.

Until now, I haven't yet felt the benefits of deal.II since the programming 
effort is so high it doesn't make up for the computational speed you would 
gain, "if" you are able to achieve your goals.
Hence, telling your interested PhD student to go and "learn debugging" is a 
bit disappointing.

Kind regards,
Seyed Ali 



Re: [deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Wolfgang Bangerth

On 03/07/2017 05:31 AM, 'Seyed Ali Mohseni' via deal.II User Group wrote:

Now the funny part is, if I set max_rank manually such as max_rank = 3 for
instance, it works and for the currently owned rank I receive an output within
terminal. Another thing is that MPI_Reduce creates somewhere some big values,
which is why I fixed it by erasing values greater than 1e10.


Seyed -- you need to learn to debug these things. Just ignoring values greater 
than 1e10 means that you don't understand why these values are there -- but 
then how can you be sure that the *other* values are correct?


You need to learn strategies to figure these things out. Run the program in a 
debugger. Print the values that you send to MPI_Reduce and compare what that 
function returns with what you *expect* it to return, etc. You cannot write 
software that does what it is supposed to do if you don't understand what it 
computes. Learning strategies to debug software is the only way you can learn 
to write good software.


Best
 WB

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear Daniel,

Thanks a lot. I tried the MPI_MAXLOC approach and it works.
Unfortunately though, there is a slight problem I haven't yet been able to
fully understand.

int disp_size = triangulation.n_vertices();

struct
{
  double value;
  int    rank;
} in[disp_size], out[disp_size];

typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
std::vector<types::global_dof_index> local_dof_indices(fe.dofs_per_cell);
unsigned int cell_number = 0;

for (; cell != endc; ++cell, ++cell_number)
{
  if ( cell->is_locally_owned() )
  {
    cell->get_dof_indices(local_dof_indices);

    for (unsigned int i = 0, j = 1; i < fe.dofs_per_cell; i += 2, j += 2)
    {
      unsigned int v = i / dim;

      double displacement_x = locally_relevant_solution(local_dof_indices[i]);
      double displacement_y = locally_relevant_solution(local_dof_indices[j]);

      double displacement_norm = sqrt(displacement_x * displacement_x
                                      + displacement_y * displacement_y);

      in[cell->vertex_index(v)].value = displacement_norm;
      in[cell->vertex_index(v)].rank  = Utilities::MPI::this_mpi_process(mpi_com);
    }
  }
}

MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT, MPI_MAXLOC, 0, mpi_com);

std::vector<double> maximum_disp_norms(disp_size);
std::vector<int>    disp_ranks(disp_size), disp_nodes(disp_size);
double max_displacement;
int    max_index, max_rank, max_node_id;

if ( Utilities::MPI::this_mpi_process(mpi_com) == 0 )
{
  for (int i = 0; i < disp_size; ++i)
  {
    maximum_disp_norms[i] = out[i].value;
    disp_ranks[i]         = out[i].rank;
    disp_nodes[i]         = i;

    if ( out[i].value > 1e10 )
    {
      maximum_disp_norms[i] = 0;
      disp_ranks[i]         = 0;
    }
  }

  // Maximum norm of displacements
  max_displacement = *max_element(maximum_disp_norms.begin(),
                                  maximum_disp_norms.end());
  max_index        = distance(maximum_disp_norms.begin(),
                              max_element(maximum_disp_norms.begin(),
                                          maximum_disp_norms.end()));
  max_rank         = disp_ranks[max_index];
  max_node_id      = disp_nodes[max_index];
}

// Maximum displacement position
cell        = dof_handler.begin_active(), endc = dof_handler.end();
cell_number = 0;

for (; cell != endc; ++cell, ++cell_number)
{
  if ( cell->is_locally_owned() )
  {
    // This is just testwise
    if ( Utilities::MPI::this_mpi_process(mpi_com) == max_rank )
    {
      std::cout << Utilities::MPI::this_mpi_process(mpi_com) << std::endl;
    }

    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
    {
      if ( cell->vertex_index(v) == max_node_id )
      {
        // std::cout << "VERTEX ID: " << cell->vertex_index(v)
        //           << " WITH COORDINATES: " << cell->vertex(v) << std::endl;
      }
    }
  }
}

Now the funny part is: if I set max_rank manually, e.g. max_rank = 3, it
works and I receive output in the terminal for the currently owned rank.
Another thing is that MPI_Reduce creates some big values somewhere, which
is why I fixed it by erasing values greater than 1e10. Finding the maximum
value is put inside the if ( Utilities::MPI::this_mpi_process(mpi_com) == 0 )
part, so I can check whether it depends on the processor I compute it for.
Surprisingly, this works only in the first step; for higher loading steps I
receive no output afterwards. If I instead use MPI_Reduce with the 3rd
processor as root, i.e. MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT,
MPI_MAXLOC, 3, mpi_com), and then check max_rank again, it works. I am a
bit confused...

What could be the error here?


Kind regards,
Seyed Ali




[deal.II] Re: MeshWorker clarifications

2017-03-07 Thread Franco Milicchio
Thanks for the answer, Daniel.

On Monday, March 6, 2017 at 7:49:53 PM UTC+1, Daniel Arndt wrote:

>> So my first question is, should I avoid using this class and implement
>> parallel loops by hand (via TBB or other means)?
>>
>

> "amandus"[1] is in fact based on MeshWorker. If you are trying to do 
> something similar to what is in the examples there, you have a good chance 
> to be successful.
> The problem really is the documentation, but with working code you should 
> be able to understand the concepts.
>  
>

Yes, I understand the concept, and I appreciate the code you posted, it 
should be really useful.
 

>> Then, my doubts about the use of MeshWorkers. I cannot really grasp it
>> even after reading tutorials 12, 16, and 39.
>>
>> What is the purpose of the cell, boundary, and face functions? The cell 
>> is called exactly on each n-dimensional cell, I've checked that number, 
>> what about the other two? Boundary is called, as I read, for each 
>> n-dimensional cell on the boundary (numbers again match). Face function is 
>> called on each internal (n-1)-dimensional face (not on the boundary), as 
>> far as I understand. Is this correct?
>>
>  

> In MeshWorker, you are basically only writing the assembly loops for your 
> bilinear forms. You don't have to care about how to handle hanging nodes 
> and the like.
>

That's good, but in the same loop I could assemble one or more bilinear
forms and one or more linear forms for the RHS.
 

> In a typical DG formulation your bilinear form consists of a cell term, a 
> boundary term and face term. These relate to the cell, boundary and face 
> functions you are talking about. You should be able to see this in step-39. 
>

I have implemented both serial and parallel (MeshWorker) versions of the
same bilinear form, and I got two different matrices. And if I see it
correctly, it's not just numerical noise. In Mathematica, with
matpar/matser imported from text files produced by
system_matrix.print(std::cout):

In[17]:= Norm[matpar - matser]

Out[17]= 2.1304 * 10^10

In[20]:= {Eigenvalues[matpar, 10], Eigenvalues[matser, 10]}

Out[20]= {
{1.14866 * 10^11, 1.14329 * 10^11, 
  1.13441 * 10^11, 1.1221 * 10^11, 
  1.1065 * 10^11, 1.1055 * 10^11, 
  1.10036 * 10^11, 1.09185 * 10^11, 
  1.08777 * 10^11, 
  1.08005 * 10^11}, 
{1.35992 * 10^11, 
  1.35304 * 10^11, 1.34166 * 10^11, 
  1.32589 * 10^11, 1.30589 * 10^11, 
  1.30502 * 10^11, 1.29842 * 10^11, 
  1.28749 * 10^11, 1.28188 * 10^11, 
  1.27236 * 10^11}}

In[21]:= {Eigenvalues[matpar, -10], Eigenvalues[matser, -10]}

Out[21]= {
{2.17343 * 10^9, 2.17343 * 10^9, 
  2.16945 * 10^9, 2.16945 * 10^9, 
  2.16728 * 10^9, 2.16728 * 10^9, 
  1.34634 * 10^9, 1.34634 * 10^9, 
  1.25769 * 10^9, 
  1.25769 * 10^9}, 
{4.5406 * 10^7, 
  3.14171 * 10^7, 1.95205 * 10^7, 
  1.69433 * 10^7, 1.00199 * 10^7, 
  5.97384 * 10^6, 1.747 * 10^6, 
  1.0382 * 10^6, 581593., 28031.9}}

The last two commands give the first 10 and last 10 eigenvalues. Shouldn't 
the two assemblers yield the very same matrix?

>> When assigning a Neumann condition on faces, for instance to apply a load
>> in elasticity, where is the correct place? I have tried to get the code into
>> the cell function, but I didn't find a way to get the faces, boundary
>> markers, and the face values. My code is at the bottom, if you need it.
>>
>  

> In general, your code fragment looks good, but the right-hand side should 
> be handled in a different integrator object similar to what is done in 
> step-39.
>

Uh, ok, I'll use that. This could have an impact on performance,
though.
 

> Neumann boundary conditions are normally considered in a weak sense and 
> incorporated as boundary term into the bilinear form. So this should be in 
> the boundary method of your MatrixIntegrator.
> There is an elasticity example in amandus. Maybe this is useful.
>

Thanks!
 

>> Is there an analogue of the FullMatrix for the right-hand side vector?
>> The only thing similar to a vector in the DoFInfo is a BlockVector...
>>
>

> What exactly do you mean? In general, all vectors are considered to be
> block vectors with only one block that you can access via
> "dinfo.vector(0).block(0)".
>

Good info, thanks.
 

> If you initialize the "MeshWorker::DoFInfo" object with a "BlockInfo" 
> object, you also have access to individual blocks. The structure then 
> resembles the structure of your FiniteElement.
>

I will try this as soon as possible.
 

> [1] https://bitbucket.org/guidokanschat/amandus 
>

Thanks for the code, I'm reading it right now.

Cheers! 
