Re: [deal.II] HDF5-function set_attribute fails for data type

2021-03-08 Thread Daniel Garcia-Sanchez
Currently, only the function `get_attribute` supports bool.

I added a draft PR to support bool for the whole HDF5 interface; I still
have to do some testing:
https://github.com/dealii/dealii/pull/11869
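
For reference, this is the kind of usage the pull request is meant to enable
(a hedged sketch of the deal.II HDF5 wrapper; the exact constructor and enum
names should be double-checked against the documentation):

#include <deal.II/base/hdf5.h>

int main()
{
  dealii::HDF5::File file("data.h5",
                          dealii::HDF5::File::FileAccessMode::create);

  // Writing a bool attribute is what currently fails ...
  file.set_attribute("converged", true);

  // ... while reading one back is already supported.
  const bool converged = file.get_attribute<bool>("converged");
  (void)converged;
}
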
On Monday, March 8, 2021 at 7:04:33 PM UTC+1 Daniel Garcia-Sanchez wrote:

> Currently only the function `get_attribute` supports bool.
>
> I added a draft PR to support bool for the whole HDF5 interface. I still 
> have to do some testing
>
> On Tuesday, February 23, 2021 at 8:07:36 PM UTC+1 Wolfgang Bangerth wrote:
>
>> On 2/23/21 10:13 AM, 'develo...@googlemail.com' via deal.II User Group 
>> wrote:
>> > Can do that next week, am currently busy with other things. 
>> > Nevertheless, why is that option mentioned explicitly in the 
>> > documentation, even though it fails when using it?
>>
>> I'd say it's a mistake.
>>
>> Best
>> W.
>>
>> -- 
>> 
>> Wolfgang Bangerth email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>
>>




Re: [deal.II] Collapse vertex and apply constraint

2019-08-02 Thread Daniel Garcia-Sanchez
On Tuesday, July 30, 2019 at 10:06:59 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> You can't compute the determinant for anything but quite small matrices. 
> It's not a numerically stable operation. The only way to determine 
> whether a matrix is singular is to compute all eigenvalues and see 
> whether one or more are "small" compared to the size of the other 
> eigenvalues. 
>
> Can you do that for the matrices you have, and plot the eigenvalues in 
> the same plot? Is there a qualitative difference? 
>
>
>
Hi Wolfgang,

I calculated the eigenvalues and the condition number. It seems that it is
ok to collapse one vertex and transform the quad into a triangle.

Interestingly, applying the constraints with AffineConstraints does not
change the condition number. Even if I fix all the dofs with add_line(), the
condition number does not change.

I collapsed one vertex in a 3D simulation with NedelecSZ, and the solution
and the condition number are ok. I still have to do more tests with 3D
simulations with NedelecSZ.

In the simple simulation from step-6, I collapsed one vertex and then moved
another vertex to squeeze the quad into almost a line; the surface becomes
very small and the condition number becomes very large. Note that I didn't
completely squeeze the quad into a line: if the surface is zero, then, as
expected, the simulation breaks.

Does this mean that it is ok to collapse a vertex as long as the
surface/volume does not become too small? According to this publication, it
is ok to collapse vertices in quads/hexes:
https://www.sciencedirect.com/science/article/pii/S1877705814016713

Enclosed you can find my program and the Python script to analyze the data.
To run it: ./collapse_vertex > matrix.txt

You can also find the meshes enclosed. I added the condition number to the
image for each mesh.

Below you can find the detailed results:

- Original grid:
* surface_ratio = 1
* eigenvalues = [0.67, 0.67, 0.67, 0.67, 1.17200912, 1.3, 1.3, 1.3, 1.3, 1.3,
  1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 9.13881996, 9.13881996, 9.42425,
  12.79450083, 21.95213004, 21.95213004, 31.83336, 58.59408006]
* condition_number = 80

- Collapse vertex a
* surface_ratio = 0.5
* eigenvalues = [0.67, 0.67, 0.67, 0.67, 1.17122833, 1.3, 1.3, 1.3, 1.3, 1.3,
  1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 8.92744864, 10.04655013, 12.5454928,
  13.98137723, 20.47958898, 31.17725707, 45.95448961, 185.24586722]
* condition_number = 3e+02

- Collapse vertex a, move vertex b
* surface_ratio = 0.1
* eigenvalues = [0.67, 0.67, 0.67, 0.67, 1.16902972, 1.3, 1.3, 1.3, 1.3, 1.3,
  1.3, 1.3, 1.3, 1.3, 1.33939, 1.52754, 1.5336, 5.11586123, 10.46272767,
  12.09280555, 14.54544349, 22.42497592, 31.57412136, 50.19072911, 264.78640594]
* condition_number = 4e+02

- Collapse vertex a, almost collapse vertex b
* surface_ratio = 1e-8
* eigenvalues = [0.67, 0.67, 0.67, 0.67, 1.21307648, 1.3, 1.3, 1.3, 1.3, 1.3,
  1.3, 1.3, 1.3, 1.3, 1.35897, 1.6, 1.69231, 8.83186491, 11.0689839,
  13.7682911, 19.8164659, 28.4090528, 47.9480606, 2.92923117e+05, 1.70718688e+06]
* condition_number = 3e+06

Best,
Dani

[image: original_grid.png][image: collapse_a.png][image: 
collapse_a_move_b.png][image: collapse_a_close_to_collapse_b.png]

#!/usr/bin/env python3

import numpy as np
import numpy.linalg


# The system matrix printed by ./collapse_vertex has 25 dofs.
matrix_size = 25

matrix = np.zeros((matrix_size, matrix_size))

# matrix.txt contains the matrix entries as lines of the form "(i,j) value".
hola_txt = open('build/matrix.txt', 'r')

for line in hola_txt:
    if line[0] == '(':
        value = float(line.split(' ')[1])
        coordinates = line.split(' ')[0]
        x = int(coordinates.split(',')[0][1:])
        y = int(coordinates.split(',')[1][:-1])
        matrix[x, y] = value

determinant = np.linalg.det(matrix)

eigenvalues, eigenvectors = np.linalg.eig(matrix)

condition_number = np.linalg.cond(matrix)

print('{:.2e}'.format(determinant))

print(np.sort(eigenvalues))

print('{:.2e}'.format(condition_number))



[deal.II] Re: Collapse vertex and apply constraint

2019-07-30 Thread Daniel Garcia-Sanchez
Hi,
I realized that the mesh is not clear in the solution images. Below you can 
find the meshes: with and without a vertex collapse.

Thanks!
Daniel

[image: solution_1.png]

[image: solution_2.png]



On Tuesday, July 30, 2019 at 9:28:47 PM UTC+2, Daniel Garcia-Sanchez wrote:
>
> Hi,
>
> I'm doing 3D electromagnetic simulations with Nelelec and NedelecSZ 
> elements. I would like to collapse some vertices in other to follow the 
> geometry.
>
> The matrix should be singular if you collapse a vertex but according 
> to Luca Heltai:
>
> You can get away with it if you add enough constraints to your system so 
> that each collapsed degree of freedom is identical (you can do this by 
> using the ConstraintMatrix class). This should be enough to remove the 
> singularity from the matrix (and it is often used as a trick to deal with 
> singular integrals).
>
> https://groups.google.com/forum/#!searchin/dealii/degenerate%7Csort:relevance/dealii/VLXWrUVh_18/ohiblJOlCtUJ
>
> I gave it a try with a simple simulation (step-6) with 25 dofs and a 
> rectangular mesh.
>
> I used add_line() and add_entry() from AffineConstraints
>
> If I collapse the vertex and don't apply the constraint, I still get the 
> right solution and the determinant is non zero. I expected the determinant 
> to be zero for this particular case. The matrix should be singular.
>
> I did 3 test which gave the same solution:
>
>- Original mesh. determinant(system_matrix) = 6.6e10
>- Move vertex and do not apply constraint. determinant(system_matrix) 
>= 6e11
>- Move vert and apply constraint. det(system_matrix) = 1.2e12
>
> Below you can find the simulations as you can see I always get the right 
> solution for all the cases.
>
> Why the determinant when I collapse the vertex and don't apply the 
> constraint is non-zero? The matrix should be singular.
>
> How do I find the dof asociated to a vertex? I just did try and error for 
> this particular test.
>
> Can I do the same trick in 3D for the Nedelec and NedelecSZ elements?
>
> How do I check the quality of a mesh and that the matrix is not singular? 
> I can't do the determinant for a large matrix.
>
> Thanks!
> Daniel
>
> [image: solution_1.png]
>
> [image: solution_2.png]
>
>



[deal.II] Collapse vertex and apply constraint

2019-07-30 Thread Daniel Garcia-Sanchez
Hi,

I'm doing 3D electromagnetic simulations with Nedelec and NedelecSZ
elements. I would like to collapse some vertices in order to follow the
geometry.

The matrix should be singular if you collapse a vertex, but according
to Luca Heltai:

You can get away with it if you add enough constraints to your system so 
that each collapsed degree of freedom is identical (you can do this by 
using the ConstraintMatrix class). This should be enough to remove the 
singularity from the matrix (and it is often used as a trick to deal with 
singular integrals).
https://groups.google.com/forum/#!searchin/dealii/degenerate%7Csort:relevance/dealii/VLXWrUVh_18/ohiblJOlCtUJ

I gave it a try with a simple simulation (step-6) with 25 dofs and a 
rectangular mesh.

I used add_line() and add_entry() from AffineConstraints.
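
A minimal sketch of that trick (hedged; collapsed_dof and kept_dof are
placeholder indices, the real ones come from the collapsed vertex):

#include <deal.II/base/types.h>
#include <deal.II/lac/affine_constraints.h>

using namespace dealii;

// Constrain the DoF sitting on the collapsed vertex to be identical to the
// DoF of the vertex it was collapsed onto. constraints.close() should be
// called once all constraints have been added.
void constrain_collapsed_dof(AffineConstraints<double> &constraints,
                             const types::global_dof_index collapsed_dof,
                             const types::global_dof_index kept_dof)
{
  constraints.add_line(collapsed_dof);
  constraints.add_entry(collapsed_dof, kept_dof, 1.0);
}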

If I collapse the vertex and don't apply the constraint, I still get the
right solution and the determinant is non-zero. I expected the determinant
to be zero in this particular case; the matrix should be singular.

I did 3 tests, which gave the same solution:

   - Original mesh: determinant(system_matrix) = 6.6e10
   - Move vertex and do not apply constraint: determinant(system_matrix) = 6e11
   - Move vertex and apply constraint: determinant(system_matrix) = 1.2e12

Below you can find the simulations; as you can see, I always get the right
solution in all cases.

Why is the determinant non-zero when I collapse the vertex and don't apply
the constraint? The matrix should be singular.

How do I find the dof associated with a vertex? I just used trial and error
for this particular test.
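
For a scalar element like the one in step-6, presumably something like the
following would list the DoF attached to each vertex (a hedged sketch, not
what I actually ran):

#include <deal.II/base/geometry_info.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_handler.h>

#include <iostream>

using namespace dealii;

// Print, for every cell, the global DoF index attached to each of its
// vertices (component 0 only, i.e. a scalar element such as FE_Q).
template <int dim>
void print_vertex_dofs(const DoFHandler<dim> &dof_handler)
{
  for (const auto &cell : dof_handler.active_cell_iterators())
    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
      std::cout << "vertex " << cell->vertex_index(v) << " at "
                << cell->vertex(v) << " -> dof "
                << cell->vertex_dof_index(v, 0) << std::endl;
}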

Can I do the same trick in 3D for the Nedelec and NedelecSZ elements?

How do I check the quality of a mesh and verify that the matrix is not
singular? I can't compute the determinant of a large matrix.

Thanks!
Daniel

[image: solution_1.png]

[image: solution_2.png]



Re: [deal.II] Re: Eigenproblem and creating a preconditioner out of a linear operator, Eigensolver Selection

2019-07-08 Thread Daniel Garcia-Sanchez
Hi Andreas,

On Sunday, July 7, 2019 at 4:26:39 PM UTC+2, Andreas Hegendörfer wrote:
>
> I also tried a spectral transformation with the Arpack solver with the 
> same result as without spectral transformation. I am interested in the 
> smallest real eigenvalues. I know from previous calculations with the 
> Krylov Schur solver form SLEPc that using or not using a spectral 
> transformation makes a very big difference here. 
>

Yes, using a spectral transformation makes a huge difference if your
eigenvalues are from the interior of the spectrum, although if you are
interested in the smallest real eigenvalue the difference might not be
that important.

The drawback of a spectral transformation is that you have to use a direct
solver.

For very large problems you can use an iterative linear solver with the
spectral transformation instead. However, doing so makes the overall
solution process less robust.

Although the direct solver approach may seem too costly, the factorization 
is only carried out at the beginning of the eigenvalue calculation and this 
cost is amortized in each subsequent application of the operator.
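
As an illustration, the shift-and-invert setup with the deal.II SLEPc
wrappers looks roughly like this (a hedged sketch; sigma is the shift, and
the solve() call with the actual matrices is left out):

#include <deal.II/lac/slepc_solver.h>
#include <deal.II/lac/slepc_spectral_transformation.h>
#include <deal.II/lac/solver_control.h>

using namespace dealii;

// Shift-and-invert around sigma so that the eigenvalues closest to sigma
// converge first. Internally SLEPc then has to solve linear systems with
// (A - sigma B), which is where a direct solver such as MUMPS comes in.
void setup_shift_and_invert(const double sigma)
{
  SolverControl solver_control(1000, 1e-9);
  SLEPcWrappers::SolverKrylovSchur eigensolver(solver_control);

  SLEPcWrappers::TransformationShiftInvert::AdditionalData data(sigma);
  SLEPcWrappers::TransformationShiftInvert shift_invert(PETSC_COMM_WORLD, data);
  eigensolver.set_transformation(shift_invert);

  // eigensolver.solve(A, B, eigenvalues, eigenvectors, n_eigenvalues);
}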

I think that the drawback of an iterative solver is that it requires a lot
of memory and does not scale very well. This figure can give you an idea of
the scalability of MUMPS (a parallel direct solver):
https://www.researchgate.net/figure/Strong-scalability-of-an-iterative-solver-with-different-preconditioners-versus-MUMPS_fig2_282172435
 

> I do not want to use SLEPc here because I think handling the PETSc 
> Matrices and vectors is too uncomfortable for my application. Am I right at 
> this point? What do you think about using SLEPc here?
>

I'm writing a tutorial about how to calculate the eigenmodes of an
electromagnetic cavity. So far I've written the code; I'm now writing the
documentation. You can take a look; the code is in this pull request:

https://github.com/dealii/dealii/pull/8345

The tutorial uses MPI, and SLEPc with std::complex.

The eigenvalues in this tutorial are from the interior of the spectrum;
therefore I have to use a spectral transformation with a direct solver.

Note that PETSc has to be compiled with MUMPS if you want to run that 
tutorial.

Best,
Daniel



[deal.II] Re: Eigenproblem and creating a preconditioner out of a linear operator, Eigensolver Selection

2019-07-05 Thread Daniel Garcia-Sanchez
Hi Andreas,

On Thursday, July 4, 2019 at 7:41:31 PM UTC+2, Andreas Hegendörfer wrote:
>
>  Hello,
>
> 2. Are there better ways to solve this Eigenproblem, maybe with another 
> solver?
>
>
Have you tried a spectral transformation? I'm not familiar with Arpack; I
use SLEPc. I think that you could use the function ArpackSolver::set_shift(),
where sigma is your guess for the eigenvalue.

Note that if you use a spectral transformation, you cannot use an iterative
solver; you have to use a direct solver. But it considerably accelerates the
eigenvalue calculation.

If your eigenvalue is from the interior of the spectrum and you don't do a
spectral transformation, the calculation can take very long.
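
A minimal sketch of that suggestion (hedged; the exact signature of
set_shift(), and the rest of the ARPACK setup with the matrices, the inverse
operator and the solve() call, should be checked in the documentation):

#include <deal.II/lac/arpack_solver.h>
#include <deal.II/lac/solver_control.h>

#include <complex>

using namespace dealii;

// Set the shift sigma close to the eigenvalue you are after; ARPACK then
// works in shift-and-invert mode around that value.
void setup_shifted_arpack(const double sigma)
{
  SolverControl solver_control(1000, 1e-9);
  ArpackSolver eigensolver(solver_control);
  eigensolver.set_shift(std::complex<double>(sigma, 0.));
}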

Best,
Daniel



Re: [deal.II] Re: NedelecSZ with adaptive mesh refinement

2019-06-26 Thread Daniel Garcia-Sanchez

Wolfgang,

On Wednesday, June 26, 2019 at 3:50:22 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> OK, attached is some code that builds the constraints that correspond to 
> this 
> rather esoteric paper (but also one I enjoyed working on the most because 
> it's 
> just so weird!): 
>
> https://www.math.colostate.edu/~bangerth/publications/2016-nonconforming.pdf 
>
> The paper identifies 3 different ways how one could define hanging node 
> constraints for this element, and they are implemented in the function I 
> attach. The code is a couple of years old, but if something doesn't 
> compile 
> any more, it should be easy to update. 
>
>
>  
Thanks for the code. I'll work on this. 

Alexander,
On Wednesday, June 26, 2019 at 3:38:34 PM UTC+2, Alexander wrote:
>
>
> Please feel free to take a leading role here. I'm happy to help and look 
> at your code, but I won't be able to lead this myself in the next months (I 
> have to write funding proposals...) 
>
That sounds good.
Good luck with your proposals!

Best,
Dani 



Re: [deal.II] Re: NedelecSZ with adaptive mesh refinement

2019-06-26 Thread Daniel Garcia-Sanchez
Hi Wolfgang,

On Tuesday, June 25, 2019 at 9:49:15 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> But one doesn't have to go through this function. One could just as well 
> write 
> a function that is specific to the FE_NedelecSZ element and that computes 
> these kinds of constraints and puts them into AffineConstraints. I've done 
> that for other elements before, though I'm not sure there is an example in 
> deal.II itself. I'd be happy to provide my code as an example, though. 
>
>
>
It would be great if you could provide your code :) You can post it or email
it.


Alexander,

On Wednesday, June 26, 2019 at 11:59:24 AM UTC+2, Alexander wrote:
>
> indeed, this should be possible to do it that way. Would you mind putting 
> an example to the mentioned github issue? I might give it a thought in the 
> background if I see more details. 
>
 
Let me know if you are going to take care of this or need help!
If you take care of it, that would be great; otherwise I can give it a try.

Best,
Daniel



Re: [deal.II] Re: NedelecSZ with adaptive mesh refinement

2019-06-25 Thread Daniel Garcia-Sanchez
Bruno,

On Tuesday, June 25, 2019 at 8:06:09 PM UTC+2, Bruno Turcksin wrote:
>
>
> Yes, that's correct. The old Nedelec can be refined but I would trust 
> it only if you have rectangles/regular hex . 
>
>
In that case, I think that the old FE_Nedelec should be good for my 
application.

Best,
Daniel
 



[deal.II] Re: NedelecSZ with adaptive mesh refinement

2019-06-25 Thread Daniel Garcia-Sanchez
Bruno,

On Tuesday, June 25, 2019 at 7:04:07 PM UTC+2, Bruno Turcksin wrote:
>
>
> This post 
>  
> briefly discuss the problem. Apparently it is non-trivial but I can't 
> really comment on it (I've never used Nedelec element)
>
>
>
Thanks for the reference. Yes, it seems that this is a non-trivial problem.
This is what I understand from the post:

   - The new FE_NedelecSZ element can be used with complex meshes, but
   cannot be refined.
   - The old FE_Nedelec element can only be used with a simple grid, but it
   can be refined.

According to Alexander Grayver, the deal.II interface does not permit
dynamic constraints (that is, constraints which depend on the cell
orientation). This problem does not seem easy to solve.

Best,
Daniel



[deal.II] NedelecSZ with adaptive mesh refinement

2019-06-25 Thread Daniel Garcia-Sanchez
Hi,

I read in the documentation that the NedelecSZ element does not support
non-conforming meshes at this time.

Does that mean that I cannot do adaptive mesh refinement with NedelecSZ?

If that is the case, I would be happy to contribute and make this element
compatible with adaptive mesh refinement.

Could you tell me what is left to do and which direction I should take
to add adaptive mesh refinement to NedelecSZ?

I understand that I have to add the capability to treat hanging nodes to
the class; suggestions/indications are welcome :)

Thanks!
Dani



Re: [deal.II] Fwd: Regarding Step 60 tutorial of deal.ii

2019-06-25 Thread Daniel Garcia-Sanchez


On Monday, June 24, 2019 at 11:36:32 PM UTC+2, Ramprasad R wrote:
>
> Hello Bangerth,
>
> The terms N_* are the resultant forces in the * direction and these forces 
> are calculated using the strains which in turn are calculated using the 
> displacements u. So the terms N_* are not constants, rather change with 
> each element. And these values directly depend on u.
>
>
>>
Hi Ramprasad,

I think that you want to do an eigenvalue calculation (step-36).

I think that you first have to do a static calculation before the
eigenvalue calculation in order to obtain the values of N_* (step-8).

As discussed in your paper, the typical eigenvalue problem for the elastic
equation takes this form, where omega^2 is the eigenvalue and Phi the
eigenvector:
(K - omega^2 M) * Phi = 0

For the buckling case, lambda is the eigenvalue:
(K - lambda G) * Phi = 0

The calculation of K can be found in step-8 and step-62 (or other tutorials).

I think that in order to calculate G you need the strain resulting from a
static calculation. You can do the static calculation, store the strain in
a temporary buffer and use that data to calculate G. step-18 shows you how
to do this.

Once you have K and G, you can do an eigenvalue calculation. step-36 shows
you how to do an eigenvalue calculation. Note that in step-36 the stiffness
matrix is called A.

The equation in step-36 is
(A - epsilon M) * Phi = 0
which is very similar to the buckling equation.
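
Once K and G are assembled, the eigenvalue solve itself can follow step-36
quite closely; a hedged sketch with placeholder names (check the exact
headers and signatures for your deal.II version):

#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/slepc_solver.h>

#include <vector>

using namespace dealii;

// Solve (K - lambda G) * Phi = 0 for the smallest real eigenvalues, as in
// step-36 (where the matrices are called A and M). The problem type is the
// one used in step-36 and may need to change if G is indefinite.
void solve_buckling_modes(const PETScWrappers::MPI::SparseMatrix &K,
                          const PETScWrappers::MPI::SparseMatrix &G,
                          std::vector<double> &eigenvalues,
                          std::vector<PETScWrappers::MPI::Vector> &eigenvectors)
{
  SolverControl solver_control(1000, 1e-9);
  SLEPcWrappers::SolverKrylovSchur eigensolver(solver_control);
  eigensolver.set_which_eigenpairs(EPS_SMALLEST_REAL);
  eigensolver.set_problem_type(EPS_GHEP);
  eigensolver.solve(K, G, eigenvalues, eigenvectors, eigenvectors.size());
}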

Best,
Daniel 



Re: [deal.II] Boundary conditions with the NedelecSZ element

2019-06-14 Thread Daniel Garcia-Sanchez
Thanks Jean-Paul,

This is what I was looking for!
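
For the record, a minimal sketch of that approach (hedged; the argument
order and the mapping argument of project_boundary_values_curl_conforming_l2()
should be checked against the documentation of your deal.II version):

#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/numerics/vector_tools.h>

using namespace dealii;

// Impose n x E = 0 on boundary id 1 for a 3D curl-conforming field
// (e.g. FE_NedelecSZ); dof_handler and constraints are set up elsewhere.
void apply_metallic_boundary(const DoFHandler<3> &dof_handler,
                             AffineConstraints<double> &constraints)
{
  VectorTools::project_boundary_values_curl_conforming_l2(
    dof_handler,
    0,                              // first vector component of E
    Functions::ZeroFunction<3>(3),  // boundary function: n x E = 0
    1,                              // boundary id
    constraints,
    StaticMappingQ1<3>::mapping);
}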

Best,
Daniel

On Friday, June 14, 2019 at 3:17:26 PM UTC+2, Jean-Paul Pelteret wrote:
>
> Dear Daniel,
>
> For non-primitive elements I believe its is necessary to use 
> the project_boundary_values_() functions. So, maybe 
> project_boundary_values_curl_conforming() 
> <https://dealii.org/current/doxygen/deal.II/group__constraints.html#ga1c6685360c01c9c46eeb7575e8ef68ac>
>  or project_boundary_values_curl_conforming_l2() 
> <https://dealii.org/current/doxygen/deal.II/group__constraints.html#ga6fca3672ae63b249402460a6ed4538b4>
>  might 
> be one of the functions that you are looking for.
>
> Best,
> Jean-Paul
>

[deal.II] Re: Boundary conditions with the NedelecSZ element

2019-06-14 Thread Daniel Garcia-Sanchez
Hi Dhananjay,

Thanks for your message. I removed the dof_handler by mistake when I did 
the copy/paste.

I still get the error when I run the following code:

 
constraints.clear();
constraints.reinit(locally_relevant_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
VectorTools::interpolate_boundary_values(dof_handler, 1, ZeroFunction(components), constraints, ComponentMask({false, true, true}));

On Friday, June 14, 2019 at 2:36:13 PM UTC+2, Dhananjay Phansalkar wrote:
>
> Hello Daniel,
>I think you are missing first argument for the function "
> VectorTools::interpolate_boundary_values" I guess it requires dofhandler 
> object . Have a look at step 6  (
> https://www.dealii.org/current/doxygen/deal.II/step_6.html#Step6setup_system
> )
>
> Cheers
>
> Dhananjay
>
> On Friday, June 14, 2019 at 1:57:42 PM UTC+2, Daniel Garcia-Sanchez wrote:
>>
>> Hi,
>>
>> I'm starting to use the NedelecSZ element to simulate the maxwell 
>> equations in the frequency/harmonic domain. I started to simulate the 
>> resonant modes of a cube that has metallic boundaries. The boundary 
>> condition for a metal is n x E = 0. That means that the tangential 
>> components of the electrical field are zero.
>>
>> If I try to use the following code:
>>
>>  
>> constraints.clear();  
>> 
>>  
>> constraints.reinit(locally_relevant_dofs);
>> 
>>  
>> DoFTools::make_hanging_node_constraints(dof_handler, constraints); 
>> VectorTools::interpolate_boundary_values(1, ZeroFunction> number>(components), constraints, ComponentMask({false, true, true}));
>>  
>>
>> I obtain the following error:
>>
>>   
>> 
>>
>> An error occurred in line <2967> of file 
>>  in 
>> function
>>  
>> void 
>> dealii::VectorTools::internal::do_interpolate_boundary_values(const 
>> M_or_MC &, const DoFHandlerType &, const 
>> std::map *> 
>> &, std::map &, const 
>> dealii::ComponentMask &) [dim = 3, spacedim = 3, number = 
>> std::complex, DoFHandlerType = DoFHandler, M_or_MC = Mapping]  
>>   
>> The violated condition was:  
>> 
>> 
>> cell->get_fe().is_primitive(i)
>> 
>>
>> Additional information:  
>> 
>> 
>> This function can only deal with requested boundary values that 
>> correspond to primitive (scalar) base elements. You may want to look up in 
>> the deal.II glossary what the term 'primitive' means. 
>>   
>> 
>>
>> There are alternative boundary value interpolation functions in namespace 
>> 'VectorTools' that you can use for non-primitive finite elements.
>> 

[deal.II] Boundary conditions with the NedelecSZ element

2019-06-14 Thread Daniel Garcia-Sanchez
Hi,

I'm starting to use the NedelecSZ element to simulate the Maxwell equations
in the frequency/harmonic domain. I started by simulating the resonant modes
of a cube that has metallic boundaries. The boundary condition for a metal
is n x E = 0; that means that the tangential components of the electric
field are zero.

If I try to use the following code:

 
constraints.clear();
constraints.reinit(locally_relevant_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
VectorTools::interpolate_boundary_values(1, ZeroFunction(components), constraints, ComponentMask({false, true, true}));

I obtain the following error:

An error occurred in line <2967> of file  in function
void dealii::VectorTools::internal::do_interpolate_boundary_values(const M_or_MC &, const DoFHandlerType &, const std::map *> &, std::map &, const dealii::ComponentMask &) [dim = 3, spacedim = 3, number = std::complex, DoFHandlerType = DoFHandler, M_or_MC = Mapping]

The violated condition was:
cell->get_fe().is_primitive(i)

Additional information:
This function can only deal with requested boundary values that correspond
to primitive (scalar) base elements. You may want to look up in the deal.II
glossary what the term 'primitive' means.

There are alternative boundary value interpolation functions in namespace
'VectorTools' that you can use for non-primitive finite elements.

How can I set the boundary condition n x E = 0 for the NedelecSZ element in 
3D?

Thanks!
Daniel



Re: [deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez



> There shouldn't be a difference between running a job via 
>./executable 
> and 
>mpirun -n 1 ./executable. 
>
> Are your timing results reproducible? 
>
>
>
My timing results are reproducible. As expected, there is no difference
between ./executable and mpirun -n 1 ./executable.

However, if I run on the same 16-core machine:

   - 16 instances of ./executable
   - 16 instances of mpirun -n 1 ./executable

then the results are different. I think that the reason is that mpirun
(OpenMPI) will by default bind all the instances to the same cores, e.g.
all the MPI jobs will be bound to the first core. That means that all the
instances of mpirun will run on the same core(s), while the other cores of
the machine are idle. OpenMPI does not know which cores you want to use; you
have to tell OpenMPI. I think that the workload distribution is the job of
Slurm/PBS. In order to tune the processor affinity, these options have to
be used:
https://www.open-mpi.org/faq/?category=tuning#paffinity-defs
https://www.open-mpi.org/faq/?category=tuning#using-paffinity-v1.3

It turns out that Slurm and probably PBS manage MPI affinity for you. If
you use srun (the Slurm command) instead of mpiexec, the workload is
distributed properly.

You will only encounter this issue if you want to run several MPI jobs on
the same machine at the same time. Probably most people don't do this. If
you run a single instance of mpiexec you will not encounter these issues.



[deal.II] Re: Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez
Hi,

I think that I found the explanation/solution. But I would be happy if 
somebody with experience with OpenMPI and Slurm could comment on this!

On Tuesday, December 4, 2018 at 12:30:15 PM UTC+1, Daniel Garcia-Sanchez 
wrote:
>
> But it is striking that If I run 16 MPI jobs of one process per MPI job in 
> the same machine, then I get an important decrease in performance. Is there 
> an MPI overhead per MPI job? I thought that a single MPI process takes only 
> one single core. Am I missing something?
>
> mpiexec -n 1 ./membrane membrane_step_0.h5 & mpiexec -n 1 ./membrane 
> membrane_step_1.h5 & mpiexec -n 1 ./membrane membrane_step_2.h5 & mpiexec 
> -n 1 ./membrane membrane_step_3.h5 & mpiexec -n 1 ./membrane 
> membrane_step_4.h5 & mpiexec -n 1 ./membrane membrane_step_5.h5 & mpiexec 
> -n 1 ./membrane membrane_step_6.h5 & mpiexec -n 1 ./membrane 
> membrane_step_7.h5 & mpiexec -n 1 ./membrane membrane_step_8.h5 & mpiexec 
> -n 1 ./membrane membrane_step_9.h5 & mpiexec -n 1 ./membrane 
> membrane_step_10.h5 & mpiexec -n 1 ./membrane membrane_step_11.h5 & mpiexec 
> -n 1 ./membrane membrane_step_12.h5 & mpiexec -n 1 ./membrane 
> membrane_step_13.h5 & mpiexec -n 1 ./membrane membrane_step_14.h5 & mpiexec 
> -n 1 ./membrane membrane_step_15.h5 &
>
> +-+++  
> 
> | Total wallclock time elapsed since start|  25.8s || 
> | ||| 
> | Section | no. calls |  wall time | % of total |  
> 
> +-+---+++ 
> | assembly| 1 |  24.9s |96% | 
> | output  | 1 |  3.14e-06s | 0% |  
> 
> | setup   | 1 | 0.343s |   1.3% | 
> | solve   | 1 | 0.539s |   2.1% |
> +-+---+++ 
>
>
I have a 16-core machine with 32 hyper-threads.

When I run 16 MPI jobs with one process per job using mpiexec from OpenMPI,
the machine does not use all the cores. I think that this is related to
"MPI process affinity".

Now, if I use Slurm instead of mpiexec, the process load is well distributed
over the machine and I get the expected "total wallclock time".
Interestingly, Slurm seems to balance the load even better than running
several non-MPI processes in parallel using bash.

srun -n 1 ./membrane membrane_step_0.h5 & srun -n 1 ./membrane 
membrane_step_1.h5 & srun -n 1 ./membrane membrane_step_2.h5 & srun -n 1 
./membrane membrane_step_3.h5 & srun -n 1 ./membrane membrane_step_4.h5 & 
srun -n 1 ./membrane membrane_step_5.h5 & srun -n 1 ./membrane 
membrane_step_6.h5 & srun -n 1 ./membrane membrane_step_7.h5 & srun -n 1 
./membrane membrane_step_8.h5 & srun -n 1 ./membrane membrane_step_9.h5 & 
srun -n 1 ./membrane membrane_step_10.h5 & srun -n 1 ./membrane 
membrane_step_11.h5 & srun -n 1 ./membrane membrane_step_12.h5 & srun -n 1 
./membrane membrane_step_13.h5 & srun -n 1 ./membrane membrane_step_14.h5 & 
srun -n 1 ./membrane membrane_step_15.h5 &

+-+++
| Total wallclock time elapsed since start|  12.9s ||
| |||
| Section | no. calls |  wall time | % of total |
+-+---+++
| assembly| 1 |  12.3s |95% |
| output  | 1 |  2.17e-06s | 0% |
| setup   | 1 | 0.256s | 2% |
| solve   | 1 | 0.314s |   2.4% |
+-+---+++

I would be happy to hear your comments about this.

Thanks!
Daniel



[deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez
Hi,

I'm fine-tuning a simulation in the frequency domain (similar to
https://github.com/dealii/dealii/pull/6747). Because it is in the frequency
domain I can parallelize:

   1. Using MPI
   2. Running several simulations in parallel for different independent 
   frequency ranges.
   3. A combination of the previous options (MPI + independent frequency 
   ranges)

I found a curious behavior, that I can not explain.

Note that ./membrane is a deal.II simulation and the HDF5 file contains the
simulation parameters. If I run the command below, I obtain:

./membrane membrane_step_0.h5

+-+++
| Total wallclock time elapsed since start|  12.3s ||
| |||
| Section | no. calls |  wall time | % of total |
+-+---+++
| assembly| 1 |  11.7s |95% |
| output  | 1 |  2.03e-06s | 0% |
| setup   | 1 | 0.263s |   2.1% |
| solve   | 1 | 0.303s |   2.5% |
+-+---+++

If I run the command below, as expected I obtain a similar result:

mpiexec -n 1 ./membrane membrane_step_0.h5

+-+++
| Total wallclock time elapsed since start|  12.3s ||
| |||
| Section | no. calls |  wall time | % of total |
+-+---+++
| assembly| 1 |  11.7s |95% |
| output  | 1 |  1.95e-06s | 0% |
| setup   | 1 | 0.259s |   2.1% |
| solve   | 1 | 0.282s |   2.3% |
+-+---+++

If I run 16 independent processes in parallel on a 16-core machine, I obtain
a similar result. It is slightly slower, but I think that is probably
normal:

./membrane membrane_step_0.h5 & ./membrane membrane_step_1.h5 & ./membrane 
membrane_step_2.h5 & ./membrane membrane_step_3.h5 & ./membrane 
membrane_step_4.h5 & ./membrane membrane_step_5.h5 & ./membrane 
membrane_step_6.h5 & ./membrane membrane_step_7.h5 & ./membrane 
membrane_step_8.h5 & ./membrane membrane_step_9.h5 & ./membrane 
membrane_step_10.h5 & ./membrane membrane_step_11.h5 & ./membrane 
membrane_step_12.h5 & ./membrane membrane_step_13.h5 & ./membrane 
membrane_step_14.h5 & ./membrane membrane_step_15.h5 & ./membrane 
membrane_step_16.h5 &

+-+++
| Total wallclock time elapsed since start|  15.1s ||
| |||
| Section | no. calls |  wall time | % of total |
+-+---+++
| assembly| 1 |  14.5s |96% |
| output  | 1 |  2.24e-06s | 0% |
| setup   | 1 |  0.25s |   1.7% |
| solve   | 1 | 0.341s |   2.3% |
+-+---+++
 

But it is striking that if I run 16 MPI jobs of one process per MPI job on
the same machine, then I get a significant decrease in performance. Is there
an MPI overhead per MPI job? I thought that a single MPI process takes only
a single core. Am I missing something?

mpiexec -n 1 ./membrane membrane_step_0.h5 & mpiexec -n 1 ./membrane 
membrane_step_1.h5 & mpiexec -n 1 ./membrane membrane_step_2.h5 & mpiexec 
-n 1 ./membrane membrane_step_3.h5 & mpiexec -n 1 ./membrane 
membrane_step_4.h5 & mpiexec -n 1 ./membrane membrane_step_5.h5 & mpiexec 
-n 1 ./membrane membrane_step_6.h5 & mpiexec -n 1 ./membrane 
membrane_step_7.h5 & mpiexec -n 1 ./membrane membrane_step_8.h5 & mpiexec 
-n 1 ./membrane membrane_step_9.h5 & mpiexec -n 1 ./membrane 
membrane_step_10.h5 & mpiexec -n 1 ./membrane membrane_step_11.h5 & mpiexec 
-n 1 ./membrane membrane_step_12.h5 & mpiexec -n 1 ./membrane 
membrane_step_13.h5 & mpiexec -n 1 ./membrane membrane_step_14.h5 & mpiexec 
-n 1 ./membrane membrane_step_15.h5 &

+-+++  

| Total wallclock time elapsed since start|  25.8s || 
| ||| 
| Section | no. calls |  wall time | % of total |