Re: [petsc-users] HDF5 vec output with DMPlex

2015-10-26 Thread Adrian Croucher

hi

When I read the HDF5 file back in again using PetscViewerHDF5Open() and 
VecLoad(), is there any easy way to navigate to the last time step in 
the results?


Or, equivalently, to find how many timesteps there are in there, so I 
could then use PetscViewerHDF5SetTimestep() before doing VecLoad() ?
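
A minimal sketch of one way to do that, assuming the Vec was written with timestepping active so its HDF5 dataset carries the timestep as its leading (outermost) dimension, which appears to be how the PETSc HDF5 viewer lays out time series. The dataset name "u" and the helper are placeholders; adjust the dataset path (it may sit inside a group) to match your output:

  #include <petscvec.h>
  #include <petscviewerhdf5.h>
  #include <hdf5.h>

  /* Peek at the dataset extent with the raw HDF5 API, then load the last step
     through the normal PETSc viewer.  HDF5 error checking is omitted for
     brevity; every rank does the read-only metadata peek, which is harmless. */
  PetscErrorCode LoadLastTimestep(MPI_Comm comm, const char *fname, Vec u)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;
    hid_t          file, dset, space;
    hsize_t        dims[8];
    int            ndims;
    PetscInt       nsteps;

    PetscFunctionBeginUser;
    file  = H5Fopen(fname, H5F_ACC_RDONLY, H5P_DEFAULT);
    dset  = H5Dopen2(file, "u", H5P_DEFAULT);            /* placeholder dataset name/path */
    space = H5Dget_space(dset);
    ndims = H5Sget_simple_extent_dims(space, dims, NULL);
    nsteps = (ndims > 1) ? (PetscInt)dims[0] : 1;        /* leading dim = timesteps (assumed) */
    H5Sclose(space); H5Dclose(dset); H5Fclose(file);

    ierr = PetscViewerHDF5Open(comm, fname, FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = PetscViewerHDF5SetTimestep(viewer, nsteps - 1);CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject)u, "u");CHKERRQ(ierr); /* VecLoad finds the dataset by name */
    ierr = VecLoad(u, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }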


Cheers, Adrian

On 17/10/15 03:01, Matthew Knepley wrote:


Now I remember. I did not want the output to depend on the viewer.

Does your example work if you replace PetscViewerHDF5SetTimestep() 
with DMSetOutputSequenceNumber()?
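
For reference, a hedged sketch of what that swap might look like on the writing side (dm, u, step, time, and hdf5_viewer are placeholders):

  ierr = DMSetOutputSequenceNumber(dm, step, time);CHKERRQ(ierr); /* sequence number plus time value */
  ierr = VecView(u, hdf5_viewer);CHKERRQ(ierr);                   /* instead of PetscViewerHDF5SetTimestep(hdf5_viewer, step) */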


  Thanks,

Matt


--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 84611



Re: [petsc-users] DM question.

2015-10-26 Thread Afanasiev Michael
Hi Matt,

Re: a chat with Dave, I’ve constructed a minimal working example of the problem 
here: https://github.com/michael-afanasiev/salvus2. The relevant stuff is in 
example/main.c. There’s a little note in there detailing what we’d like to 
accomplish. It should compile fine with a quick modification of the CMakeLists 
to tell the makefile where your petsc is. This is all detailed in the Readme, 
and there’s a small test exodus file hanging out in there as well. A hint would 
be great.

Best,
Mike.
--
Michael Afanasiev
Ph.D. Candidate
Computational Seismology
Institut für Geophysik
ETH Zürich

Sonneggstrasse 5, NO H 39.2
CH 8092 Zürich
michael.afanas...@erdw.ethz.ch

On Oct 26, 2015, at 12:52 PM, Matthew Knepley <knep...@gmail.com> wrote:

On Mon, Oct 26, 2015 at 6:45 AM, Afanasiev Michael <michael.afanas...@erdw.ethz.ch> wrote:
Hi Matthew,

No. There are (k-1)^2 unknowns in the interior of the cell, so that we have

  4 + 4 * (k-1) + (k-1)^2 = (k+1)^2
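
(A quick check of that count for the k = 4 case discussed below: 4 + 4*3 + 3^2 = 4 + 12 + 9 = 25 = 5^2, i.e. the 25 GLL points per quad.)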

Right, makes sense. With some insight from Dave, I’m getting what I expect for 
a simple 4-cell mesh distributed amongst 4 processors (length of 91 for the 
global vector, length of 25 for the local vectors, for 4th order polynomial and 
the GLL basis). Now I imagine what’s left is to figure out how these vectors 
are globally numbered.

I usually do this by having a consistent way of numbering the dofs on the 
reference element (i.e. given a (u, v)-space loop through v, then u), and then 
getting the local->global map via finding coincident points in a kd-tree. My 
question is: in which order are the local vectors now defined? If I knew this, 
I could create my local->global map as before. From the way the section was 
created, I guess it might be something like loop vertices, edges, and then 
interior?

The numbering is cell unknowns, vertex unknowns, face unknowns, and then edge 
unknowns. This is, of course, arbitrary and therefore
you can change the order using

  
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/PetscSectionSetPermutation.html#PetscSectionSetPermutation

which renumbers all the mesh points in the PetscSection defining the space. You 
should be able to reproduce your old ordering using
this.
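
A minimal sketch of using that permutation, assuming a DMPlex dm; the identity ordering below is just a placeholder for whatever point order reproduces the old numbering, and DMSetDefaultSection is this era's name for attaching the section (later renamed DMSetLocalSection):

  PetscSection   s;
  IS             perm;
  PetscInt       pStart, pEnd, *idx, p;
  PetscErrorCode ierr;

  ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);
  ierr = PetscMalloc1(pEnd - pStart, &idx);CHKERRQ(ierr);
  for (p = pStart; p < pEnd; ++p) idx[p - pStart] = p;   /* identity: replace with your desired point order */
  ierr = ISCreateGeneral(PETSC_COMM_SELF, pEnd - pStart, idx, PETSC_OWN_POINTER, &perm);CHKERRQ(ierr);

  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
  ierr = PetscSectionSetPermutation(s, perm);CHKERRQ(ierr);
  /* ... PetscSectionSetDof() per point, PetscSectionSetUp(s), DMSetDefaultSection(dm, s) ... */
  ierr = ISDestroy(&perm);CHKERRQ(ierr);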

  Thanks,

Matt

Thanks for all your help,
Mike.
--
Michael Afanasiev
Ph.D. Candidate
Computational Seismology
Institut für Geophysik
ETH Zürich

Sonneggstrasse 5, NO H 39.2
CH 8092 Zürich
michael.afanas...@erdw.ethz.ch

On Oct 23, 2015, at 5:11 PM, Matthew Knepley <knep...@gmail.com> wrote:

On Fri, Oct 23, 2015 at 9:03 AM, Afanasiev Michael <michael.afanas...@erdw.ethz.ch> wrote:
Hi Matthew,

Yes, however I have some questions. Starting out, I think the GLL points 
include the endpoints, so
that means for polynomial degree k that numDof[] should really have 4*1 dofs on 
each vertex,
4*(k-1) dofs on each edge, and 4*(k-1)^2 on each quad. It looks like below you 
have numDof[] for
a 1D mesh with DG element.

For the GLL basis we have (k+1) points in each dimension, including the 
endpoints. So for example a 2D element with 4th-order polynomials will have 25 
locally numbered points per element. I should also mention that the velocity, 
displacement, and acceleration fields are vectors, with d=dim components at 
each integration point, whereas strain has (2^dim)-1 components. In what you’ve 
written above, does this then become (sum over the 4 fields):

(sum(compPerField*dofPerField)) -> vertex
(sum((compPerField*dofPerField)*(k+1))) -> edge
(sum((compPerField*dofPerField)*(k+1)^2)) -> quad

No. There are (k-1)^2 unknowns in the interior of the cell, so that we have

  4 + 4 * (k-1) + (k-1)^2 = (k+1)^2

With elements like these, it is common (I think) to eliminate the cell unknown 
explicitly, since the
system is dense and not connected to other cells. For this, you would retain 
the vertex and edge
unknowns, but not the cell unknowns. I have not tried to do this myself, so I 
do not know if there
are any pitfalls.
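
For concreteness, a rough sketch of a PetscSection carrying the counts Matt describes above (taking the 4*1 per vertex, 4*(k-1) per edge, 4*(k-1)^2 per quad at face value), assuming a 2D interpolated DMPlex dm; DMSetDefaultSection is the name in this PETSc era:

  PetscSection   s;
  PetscInt       pStart, pEnd, vStart, vEnd, eStart, eEnd, cStart, cEnd, p, k = 4;
  PetscErrorCode ierr;

  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s);CHKERRQ(ierr);
  ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);   /* vertices */
  ierr = DMPlexGetDepthStratum(dm, 1, &eStart, &eEnd);CHKERRQ(ierr);   /* edges    */
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);  /* cells    */
  for (p = vStart; p < vEnd; ++p) {ierr = PetscSectionSetDof(s, p, 4*1);CHKERRQ(ierr);}
  for (p = eStart; p < eEnd; ++p) {ierr = PetscSectionSetDof(s, p, 4*(k-1));CHKERRQ(ierr);}
  for (p = cStart; p < cEnd; ++p) {ierr = PetscSectionSetDof(s, p, 4*(k-1)*(k-1));CHKERRQ(ierr);}
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetDefaultSection(dm, s);CHKERRQ(ierr);
  /* the closure of one quad then holds 4*[4 + 4*(k-1) + (k-1)^2] = 4*(k+1)^2 dofs */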

I believe I don’t need to worry about this. Everything is solved in a matrix-free 
sense, on each element. The relevant physical quantities are calculated on each 
element, and then summed into the global degrees of freedom. The summed global 
dof are then brought back to the element level, where updates to the 
acceleration, velocity, and displacement are calculated via a Newmark time 
step. So no global system of equations is ever formed.

You accumulate into the global dofs, so you would need more storage if you do 
not do static condensation. It is just good to know this, 
but you do not have to do it.

This being said, all the routines to a) integrate the element matrices, b) 
calculate the gll point locations, c) construct the local->global dof mapping, 
and d) do the element-wise matrix free time stepping are already written.  My 
problem is really that I do my mesh decomposition by hand (poorly), and I’d 
like to t

Re: [petsc-users] PetscBool in PETSc 3.1? (Ubuntu Precice)

2015-10-26 Thread Dominic Meiser



On 10/25/2015 10:40 AM, Florian Lindner wrote:

Hello,

our build system, Travis, uses the old Ubuntu Precise Pangolin release, which comes with PETSc 3.1.


FWIW, petsc master or maint builds relatively quickly on Travis' build machines (2-5 minutes, depending on how many third-party packages you need). If you're able to use Travis' containerized infrastructure you can cache the PETSc builds, reducing the PETSc build time to near zero. I've done that for a couple of my projects and it turned out to be really easy; see for example:


https://github.com/d-meiser/SuperContinuum

Have a look at .travis.yml and utilities/get_petsc.sh.

Cheers,
Dominic



While trying to activate petsc for our CI tests, I get the message that the 
type PetscBool wasn't found. I downloaded 
http://packages.ubuntu.com/precise/libpetsc3.1-dev and grepped for PetscBool, 
but found none.

Just to make sure I didn't screw it up: is it true that there is no PetscBool 
in version 3.1?
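
For the archive: PETSc 3.1 still used PetscTruth; the rename to PetscBool came with petsc-3.2. If building against 3.1 is unavoidable, a small shim along these lines (a sketch, not something any package ships) can bridge the gap:

  #include <petscversion.h>
  #if (PETSC_VERSION_MAJOR == 3) && (PETSC_VERSION_MINOR < 2)
  typedef PetscTruth PetscBool;   /* PetscBool only exists from petsc-3.2 onwards */
  #endif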

Is there a known PPA that provides newer PETSc versions for Precise Pangolin?

Thanks,
Florian



--
Dominic Meiser
Tech-X Corporation
5621 Arapahoe Avenue
Boulder, CO 80303
USA
Telephone: 303-996-2036
Fax: 303-448-7756
www.txcorp.com


Re: [petsc-users] DM question.

2015-10-26 Thread Matthew Knepley
On Mon, Oct 26, 2015 at 6:45 AM, Afanasiev Michael <
michael.afanas...@erdw.ethz.ch> wrote:

> Hi Matthew,
>
> No. There are (k-1)^2 unknowns in the interior of the cell, so that we have
>
>   4 + 4 * (k-1) + (k-1)^2 = (k+1)^2
>
>
> Right, makes sense. With some insight from Dave, I’m getting what I expect
> for a simple 4-cell mesh distributed amongst 4 processors (length of 91 for
> the global vector, length of 25 for the local vectors, for 4th order
> polynomial and the GLL basis). Now I imagine what’s left is to figure out
> how these vectors are globally numbered.
>
> I usually do this by having a consistent way of numbering the dofs on the
> reference element (i.e. given a (u, v)-space loop through v, then u), and
> then getting the local->global map via finding coincident points in a
> kd-tree. My question is: in which order are the local vectors now defined?
> If I knew this, I could create my local->global map as before. From the way
> the section was created, I guess it might be something like loop vertices,
> edges, and then interior?
>

The numbering is cell unknowns, vertex unknowns, face unknowns, and then
edge unknowns. This is, of course, arbitrary and therefore
you can change the order using


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/PetscSectionSetPermutation.html#PetscSectionSetPermutation

which renumbers all the mesh points in the PetscSection defining the space.
You should be able to reproduce your old ordering using
this.

  Thanks,

Matt


> Thanks for all your help,
> Mike.
> --
> Michael Afanasiev
> Ph.D. Candidate
> Computational Seismology
> Institut für Geophysik
> ETH Zürich
>
> Sonneggstrasse 5, NO H 39.2
> CH 8092 Zürich
> michael.afanas...@erdw.ethz.ch
>
> On Oct 23, 2015, at 5:11 PM, Matthew Knepley  wrote:
>
> On Fri, Oct 23, 2015 at 9:03 AM, Afanasiev Michael <
> michael.afanas...@erdw.ethz.ch> wrote:
>
>> Hi Matthew,
>>
>> Yes, however I have some questions. Starting out, I think the GLL points
>> include the endpoints, so
>> that means for polynomial degree k that numDof[] should really have 4*1
>> dofs on each vertex,
>> 4*(k-1) dofs on each edge, and 4*(k-1)^2 on each quad. It looks like
>> below you have numDof[] for
>> a 1D mesh with DG element.
>>
>>
>> For the GLL basis we have (k+1) points in each dimension, including the
>> endpoints. So for example a 2D element with 4th-order polynomials will have
>> 25 locally numbered points per element. I should also mention that the
>> velocity, displacement, and acceleration fields are vectors, with d=dim
>> components at each integration point, whereas strain has (2^dim)-1
>> components. In what you’ve written above, does this then become (sum over
>> the 4 fields):
>>
>> (sum(compPerField*dofPerField)) -> vertex
>> (sum((compPerField*dofPerField)*(k+1))) -> edge
>> (sum((compPerField*dofPerField)*(k+1)^2)) -> quad
>>
>
> No. There are (k-1)^2 unknowns in the interior of the cell, so that we have
>
>   4 + 4 * (k-1) + (k-1)^2 = (k+1)^2
>
>
>> With elements like these, it is common (I think) to eliminate the cell
>> unknown explicitly, since the
>> system is dense and not connected to other cells. For this, you would
>> retain the vertex and edge
>> unknowns, but not the cell unknowns. I have not tried to do this myself,
>> so I do not know if there
>> are any pitfalls.
>>
>>
>> I believe I don’t need to worry about this. Everything is solved in a matrix-free
>> sense, on each element. The relevant physical quantities are calculated on
>> each element, and then summed into the global degrees of freedom. The
>> summed global dof are then brought back to the element level, where updates
>> to the acceleration, velocity, and displacement are calculated via a
>> Newmark time step. So no global system of equations is ever formed.
>>
>
> You accumulate into the global dofs, so you would need more storage if you
> do not do static condensation. It is just good to know this,
> but you do not have to do it.
>
>
>> This being said, all the routines to a) integrate the element matrices,
>> b) calculate the gll point locations, c) construct the local->global dof
>> mapping, and d) do the element-wise matrix free time stepping are already
>> written.  My problem is really that I do my mesh decomposition by hand
>> (poorly), and I’d like to transfer this over to Petsc. Of course if I do
>> this, I might as well use DM's LocalToGlobal vector routines to sum my
>> field vectors across mesh partitions.
>>
>
> Yes.
>
>
>> Perhaps a better question to ask would be in the form of a workflow:
>>
>> 1) Read in exodus mesh and partition (done)
>> 2) Set up local and global degrees of freedom on each mesh partition
>> (done)
>>
>
> Here you just have to setup PetscSections that mirror your local and
> global layout. Then the LocalToGlobal and its inverse will work.
>
>Matt
>
>
>> 3) Integrate element matrices (done)
>>
>> for i 1, nTimeSteps
>> 3) Step fields forward on each partition (done)
>> 4) Sum to global degrees of freedom 

Re: [petsc-users] DM question.

2015-10-26 Thread Afanasiev Michael
Hi Matthew,

No. There are (k-1)^2 unknowns in the interior of the cell, so that we have

  4 + 4 * (k-1) + (k-1)^2 = (k+1)^2

Right, makes sense. With some insight from Dave, I’m getting what I expect for 
a simple 4-cell mesh distributed amongst 4 processors (length of 91 for the 
global vector, length of 25 for the local vectors, for 4th order polynomial and 
the GLL basis). Now I imagine what’s left is to figure out how these vectors 
are globally numbered.

I usually do this by having a consistent way of numbering the dofs on the 
reference element (i.e. given a (u, v)-space loop through v, then u), and then 
getting the local->global map via finding coincident points in a kd-tree. My 
question is: in which order are the local vectors now defined? If I knew this, 
I could create my local->global map as before. From the way the section was 
created, I guess it might be something like loop vertices, edges, and then 
interior?

Thanks for all your help,
Mike.
--
Michael Afanasiev
Ph.D. Candidate
Computational Seismology
Institut für Geophysik
ETH Zürich

Sonneggstrasse 5, NO H 39.2
CH 8092 Zürich
michael.afanas...@erdw.ethz.ch

On Oct 23, 2015, at 5:11 PM, Matthew Knepley <knep...@gmail.com> wrote:

On Fri, Oct 23, 2015 at 9:03 AM, Afanasiev Michael <michael.afanas...@erdw.ethz.ch> wrote:
Hi Matthew,

Yes, however I have some questions. Starting out, I think the GLL points 
include the endpoints, so
that means for polynomial degree k that numDof[] should really have 4*1 dofs on 
each vertex,
4*(k-1) dofs on each edge, and 4*(k-1)^2 on each quad. It looks like below you 
have numDof[] for
a 1D mesh with DG element.

For the GLL basis we have (k+1) points in each dimension, including the 
endpoints. So for example a 2D element with 4th-order polynomials will have 25 
locally numbered points per element. I should also mention that the velocity, 
displacement, and acceleration fields are vectors, with d=dim components at 
each integration point, whereas strain has (2^dim)-1 components. In what you’ve 
written above, does this then become (sum over the 4 fields):

(sum(compPerField*dofPerField)) -> vertex
(sum((compPerField*dofPerField)*(k+1))) -> edge
(sum((compPerField*dofPerField)*(k+1)^2)) -> quad

No. There are (k-1)^2 unknowns in the interior of the cell, so that we have

  4 + 4 * (k-1) + (k-1)^2 = (k+1)^2

With elements like these, it is common (I think) to eliminate the cell unknown 
explicitly, since the
system is dense and not connected to other cells. For this, you would retain 
the vertex and edge
unknowns, but not the cell unknowns. I have not tried to do this myself, so I 
do not know if there
are any pitfalls.

I believe I don’t need to worry about this. Everything is solved in a matrix-free 
sense, on each element. The relevant physical quantities are calculated on each 
element, and then summed into the global degrees of freedom. The summed global 
dof are then brought back to the element level, where updates to the 
acceleration, velocity, and displacement are calculated via a Newmark time 
step. So no global system of equations is ever formed.

You accumulate into the global dofs, so you would need more storage if you do 
not do static condensation. It is just good to know this, 
but you do not have to do it.

This being said, all the routines to a) integrate the element matrices, b) 
calculate the gll point locations, c) construct the local->global dof mapping, 
and d) do the element-wise matrix free time stepping are already written.  My 
problem is really that I do my mesh decomposition by hand (poorly), and I’d 
like to transfer this over to Petsc. Of course if I do this, I might as well 
use DM's LocalToGlobal vector routines to sum my field vectors across mesh 
partitions.

Yes.

Perhaps a better question to ask would be in the form of a workflow:

1) Read in exodus mesh and partition (done)
2) Set up local and global degrees of freedom on each mesh partition (done)

Here you just have to setup PetscSections that mirror your local and global 
layout. Then the LocalToGlobal and its inverse will work.

   Matt

3) Integrate element matrices (done)

for i = 1, nTimeSteps
3) Step fields forward on each partition (done)
4) Sum to global degrees of freedom on each partition (done)
5) Sum to global degrees of freedom across partitions ()
6) Retrieve summed global degrees of freedom across partitions ()
7) Continue

So really what I hope Petsc will help with is just steps 5 and 6. I guess this 
involves, given a partitioned DMPlex object, which has been partitioned 
according to the geometry and topology defined in an exodus file, adding the 
degrees of freedom to each partition-local vector (via a DMPlexSection?). Then, 
ensuring that the dof added along the partition boundaries sum together 
properly when a LocalToGlobal vector operation is defined.
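
A minimal sketch of steps 5 and 6 under that setup (dm with a default PetscSection already attached; names are placeholders): summing shared dofs across partitions and pulling the result back is the local/global pair with the appropriate insert modes.

  Vec            loc, glob;
  PetscErrorCode ierr;

  ierr = DMCreateLocalVector(dm, &loc);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(dm, &glob);CHKERRQ(ierr);
  /* ... fill loc element by element during the time step ... */

  /* 5) sum contributions on shared (partition-boundary) dofs */
  ierr = VecSet(glob, 0.0);CHKERRQ(ierr);
  ierr = DMLocalToGlobalBegin(dm, loc, ADD_VALUES, glob);CHKERRQ(ierr);
  ierr = DMLocalToGlobalEnd(dm, loc, ADD_VALUES, glob);CHKERRQ(ierr);

  /* 6) bring the summed values back to each partition's local vector */
  ierr = DMGlobalToLocalBegin(dm, glob, INSERT_VALUES, loc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(dm, glob, INSERT_VALUES, loc);CHKERRQ(ierr);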

Does this make sense? If so, is this something that DMPlex is designed for?
--
Michael Afanasiev
