Re: [petsc-dev] PETScSection writ large

2014-05-05 Thread Andrs, David
Barry,

on page 7, in function kInner, there seems to be an extra * at the end of
the sum statement.

Otherwise, very interesting reading.

--
David




On Sat, May 3, 2014 at 8:26 PM, Barry Smith  wrote:

>
>So after I finally understood Matt’s PetscSection object I stole the
> idea and converted it to an entire parallel programming model.
>
>Any feedback appreciated.
>
>Barry


Re: [petsc-dev] PetscSection

2013-09-11 Thread Barry Smith

On Sep 11, 2013, at 7:54 AM, Jed Brown  wrote:

> Barry Smith  writes:
>>   Sometimes it is more convenient to use "global indices" than
>> rank/local offset. Thus we need a "scalable" utility to convert back
>> and forth between them. Possibly having interfaces to supply the
>> information in either format.
> 
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/PetscSFSetGraphLayout.html

   Fine, but there really shouldn't be two SFSetGraph() functions. Rather, there 
should be two ways of constructing the "abstract indices" that are passed to 
one SFSetGraph() function.
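
   For concreteness, a minimal sketch of such a conversion (a hypothetical 
helper; it assumes an rstart[] ownership array as in PetscLayout, with size+1 
entries and rstart[size] equal to the global length):

static void GlobalToRankOffset(PetscInt g, PetscMPIInt size, const PetscInt *rstart,
                               PetscMPIInt *rank, PetscInt *offset)
{
  PetscMPIInt lo = 0, hi = size;
  while (hi - lo > 1) {                  /* binary search for the owning rank */
    PetscMPIInt mid = lo + (hi - lo)/2;
    if (g < rstart[mid]) hi = mid;
    else                 lo = mid;
  }
  *rank   = lo;
  *offset = g - rstart[lo];              /* reverse: global = rstart[rank] + offset */
}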





Re: [petsc-dev] PetscSection

2013-09-11 Thread Jed Brown
Barry Smith  writes:
>Sometimes it is more convenient to use "global indices" than
> rank/local offset. Thus we need a "scalable" utility to convert back
> and forth between them. Possibly having interfaces to supply the
> information in either format.

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/PetscSFSetGraphLayout.html




Re: [petsc-dev] PetscSection

2013-09-11 Thread Barry Smith

On Sep 11, 2013, at 3:47 AM, Matthew Knepley  wrote:

> On Tue, Sep 10, 2013 at 11:25 PM, Barry Smith  wrote:
> 
> On Sep 10, 2013, at 8:55 PM, Jed Brown  wrote:
> 
> > Matthew Knepley  writes:
> >>> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
> >>> even if it could, it's a horribly inefficient representation (4 or 8
> >>> bytes of metadata for each byte of payload).  The quick fix at that
> >>> moment was to send in units of size PetscInt (the struct was always
> >>> going to be divisible by that size).
> 
>Since when are PETSc people interested in quick fixes? We are only 
> concerned with doing things right; quick fixes lead to bad code and bad ideas.
> 
> >>>
> >>
> >> The way I understand it, he never had a single MPI_BYTE, but always a 
> >> bunch.
> >> Shouldn't that packing handle that case?
> >
> > In DMPlexDistributeData, your fieldSF is blown up so that every unit has
> > its own metadata (rank, offset).  That's a terrible mechanism for moving
> > structs with hundreds of bytes.
> >
> > I would rather not make PetscSF deal with fully-heterogeneous data
> > (where each node can have a different size) because it makes indexing a
> > nightmare (you need a PetscSection or similar just to get in the door; I
> > don't want PetscSF to depend on PetscSection).
> 
>Having PetscSF depend on PetscSection would be insane (after all 
> PetscSection has information about constrained variables and other silly 
> nonsense) but having PetscSF depend on XX makes perfect sense.
> 
> A review:
> 
> (current situation; with unimportant stuff removed)
> 
> +  sf - star forest
> .  nroots - number of root vertices on the current process (these are 
> possible targets for other processes to attach leaves)
> .  nleaves - number of leaf vertices on the current process, each of these 
> references a root on any process
> .  ilocal - locations of leaves in leafdata buffers, pass NULL for contiguous 
> storage
> .  iremote - remote locations of root vertices for each leaf on the current 
> process
> 
> PetscSFSetGraph(PetscSF sf,PetscInt nroots,PetscInt nleaves,PetscInt 
> *ilocal,,PetscSFNode *iremote,)
> 
> typedef struct {
>   PetscInt rank;/* Rank of owner */  (why is this not 
> PetscMPIInt?)
>   PetscInt index;   /* Index of node on rank */
> } PetscSFNode;
> 
> abstractly nleaves, ilocal  are the indices (into an array) you are copying 
> to; iremote are the indices (into an array) you are copying from
> 
> Now if PETSc had an abstract concept of indexing (funny no one ever put that 
> in PETSc decades ago) it would look like
> 
> PetscSFSetGraph(PetscSF sf,PetscInt nroots,  toIndices  , fromIndices)
> 
> but wait; PETSc does have an abstraction for indexing (into regular 
> arrays) called IS. Come to think of it, PETSc has another abstraction for 
> indexing (into arrays with different-sized items) called XX.  Actually 
> thinking a tiny bit more one realizes that there is a third: PetscSFNode is a 
> way of indexing (into regular arrays) on a bunch of different processes. If we 
> come up with a couple more related, but inconsistent, syntaxes for indexing, 
> we'll have code as good as Trilinos.
> 
>So let's fix this up! One abstraction for indexing that handles both 
> regular arrays, arrays with different-sized items, and both kinds of arrays on 
> a bunch of different processes; with simple enough syntax so it can be used 
> whenever indexing is needed including passing to PetscSF! (We can ignore the 
> need for MPI datatypes in communication for now).
> 
>Concrete examples to demonstrate this need not be so difficult. A 
> completely general set of indices for this situation could be handled by
> 
> typedef struct {
>PetscInt nindices;
>PetscMPIInt   *ranks;
>PetscInt  *offsets;
>PetscInt  *sizes;
> }  XXData;  Hardly more than PetscSFNode.  Or could be handled as an array of 
> structs.
> 
> special cases would include:
> all on one process so ranks is not needed
> all items the same size so sizes not needed
> contiguous memory (or strided) so offsets not needed
> 
> If we decided not to go with an abstract object but instead just a concrete 
> data structure that could handle all cases it could look something like
> 
> typedef struct {
>PetscInt nindices;
>PetscMPIInt   *ranks;
>PetscInt  *offsets;
>PetscInt  *sizes;
>PetscInt  chunksize,strideoffset;
> }  XXData;
> 
> For completeness I note that with IS (for example when creating a VecScatter 
> with parallel vectors) we don't use rank,offset notation; we use global offset 
> == rstart[rank] + offset, but that is something I'm sure we can deal with.
> 
> 
> QED
> 
> Comments?
> 
> This sounds like a good idea. So
> 
>   - we always store indices in blocks (get rid of regular IS)
>   - we can optimize for strided (get rid of strided IS)
>   - they can be local or nonlocal (get rid of PetscSFNode)

Re: [petsc-dev] PetscSection

2013-09-11 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 11:25 PM, Barry Smith  wrote:

>
> On Sep 10, 2013, at 8:55 PM, Jed Brown  wrote:
>
> > Matthew Knepley  writes:
> >>> PetscSF cannot currently scatter individual bytes (4 bytes minimum),
> and
> >>> even if it could, it's a horribly inefficient representation (4 or 8
> >>> bytes of metadata for each byte of payload).  The quick fix at that
> >>> moment was to send in units of size PetscInt (the struct was always
> >>> going to be divisible by that size).
>
>Since when are PETSc people interested in quick fixes? We are only
> concerned with doing things right; quick fixes lead to bad code and bad
> ideas.
>
> >>>
> >>
> >> The way I understand it, he never had a single MPI_BYTE, but always a
> bunch.
> >> Shouldn't that packing handle that case?
> >
> > In DMPlexDistributeData, your fieldSF is blown up so that every unit has
> > its own metadata (rank, offset).  That's a terrible mechanism for moving
> > structs with hundreds of bytes.
> >
> > I would rather not make PetscSF deal with fully-heterogeneous data
> > (where each node can have a different size) because it makes indexing a
> > nightmare (you need a PetscSection or similar just to get in the door; I
> > don't want PetscSF to depend on PetscSection).
>
>Having PetscSF depend on PetscSection would be insane (after all
> PetscSection has information about constrained variables and other silly
> nonsense) but having PetscSF depend on XX makes perfect sense.
>
> A review:
>
> (current situation; with unimportant stuff removed)
>
> +  sf - star forest
> .  nroots - number of root vertices on the current process (these are
> possible targets for other processes to attach leaves)
> .  nleaves - number of leaf vertices on the current process, each of these
> references a root on any process
> .  ilocal - locations of leaves in leafdata buffers, pass NULL for
> contiguous storage
> .  iremote - remote locations of root vertices for each leaf on the
> current process
>
> PetscSFSetGraph(PetscSF sf,PetscInt nroots,PetscInt nleaves,PetscInt
> *ilocal,,PetscSFNode *iremote,)
>
> typedef struct {
>   PetscInt rank;/* Rank of owner */  (why is this not
> PetscMPIInt?)
>   PetscInt index;   /* Index of node on rank */
> } PetscSFNode;
>
> abstractly nleaves, ilocal  are the indices (into an array) you are
> copying to; iremote are the indices (into an array) you are copying from
>
> Now if PETSc had an abstract concept of indexing (funny no one ever put
> that in PETSc decades ago) it would look like
>
> PetscSFSetGraph(PetscSF sf,PetscInt nroots,  toIndices  , fromIndices)
>
> but wait; PETSc does have an abstraction for indexing (into regular
> arrays) called IS. Come to think of it, PETSc has another abstraction for
> indexing (into arrays with different-sized items) called XX.  Actually
> thinking a tiny bit more one realizes that there is a third: PetscSFNode is
> a way of indexing (into regular arrays) on a bunch of different processes.
> If we come up with a couple more related, but inconsistent, syntaxes for
> indexing, we'll have code as good as Trilinos.
>
>So let's fix this up! One abstraction for indexing that handles both
> regular arrays, arrays with different-sized items, and both kinds of arrays
> on a bunch of different processes; with simple enough syntax so it can be
> used whenever indexing is needed including passing to PetscSF! (We can
> ignore the need for MPI datatypes in communication for now).
>
>Concrete examples to demonstrate this need not be so difficult. A
> completely general set of indices for this situation could be handled by
>
> typedef struct {
>PetscInt nindices;
>PetscMPIInt   *ranks;
>PetscInt  *offsets;
>PetscInt  *sizes;
> }  XXData;  Hardly more than PetscSFNode.  Or could be handled as an array
> of structs.
>
> special cases would include:
> all on one process so ranks is not needed
> all items the same size so sizes not needed
> contiguous memory (or strided) so offsets not needed
>
> If we decided not to go with an abstract object but instead just a
> concrete data structure that could handle all cases it could look something
> like
>
> typedef struct {
>PetscInt nindices;
>PetscMPIInt   *ranks;
>PetscInt  *offsets;
>PetscInt  *sizes;
>PetscInt  chunksize,strideoffset;
> }  XXData;
>
> For completeness I note that with IS (for example when creating a VecScatter
> with parallel vectors) we don't use rank,offset notation; we use global offset
> == rstart[rank] + offset, but that is something I'm sure we can deal with.
>
>
> QED
>
> Comments?
>

This sounds like a good idea. So

  - we always store indices in blocks (get rid of regular IS)
  - we can optimize for strided (get rid of strided IS)
  - they can be local or nonlocal (get rid of PetscSFNode)

Right now, I have a GlobalSection, which uses XX but puts in different
values. It has negative offsets

Re: [petsc-dev] PetscSection

2013-09-10 Thread Barry Smith

On Sep 10, 2013, at 8:55 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
>>> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
>>> even if it could, it's a horribly inefficient representation (4 or 8
>>> bytes of metadata for each byte of payload).  The quick fix at that
>>> moment was to send in units of size PetscInt (the struct was always
>>> going to be divisible by that size).

   Since when are PETSc people interested in quick fixes? We are only concerned 
with doing things right; quick fixes lead to bad code and bad ideas.

>>> 
>> 
>> The way I understand it, he never had a single MPI_BYTE, but always a bunch.
>> Shouldn't that packing handle that case?
> 
> In DMPlexDistributeData, your fieldSF is blown up so that every unit has
> its own metadata (rank, offset).  That's a terrible mechanism for moving
> structs with hundreds of bytes.
> 
> I would rather not make PetscSF deal with fully-heterogeneous data
> (where each node can have a different size) because it makes indexing a
> nightmare (you need a PetscSection or similar just to get in the door; I
> don't want PetscSF to depend on PetscSection).

   Having PetscSF depend on PetscSection would be insane (after all 
PetscSection has information about constrained variables and other silly 
nonsense) but having PetscSF depend on XX makes perfect sense.

A review:

(current situation; with unimportant stuff removed)

+  sf - star forest
.  nroots - number of root vertices on the current process (these are possible 
targets for other processes to attach leaves)
.  nleaves - number of leaf vertices on the current process, each of these 
references a root on any process
.  ilocal - locations of leaves in leafdata buffers, pass NULL for contiguous 
storage
.  iremote - remote locations of root vertices for each leaf on the current 
process

PetscSFSetGraph(PetscSF sf,PetscInt nroots,PetscInt nleaves,PetscInt 
*ilocal,,PetscSFNode *iremote,)

typedef struct {
  PetscInt rank;/* Rank of owner */  (why is this not 
PetscMPIInt?)
  PetscInt index;   /* Index of node on rank */
} PetscSFNode;

abstractly nleaves, ilocal  are the indices (into an array) you are copying 
to; iremote are the indices (into an array) you are copying from 
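
For concreteness, a minimal sketch of the current usage, with the copy-mode 
arguments elided above restored (the two-process graph is purely illustrative; 
newer PETSc versions also pass an MPI_Op to the bcast calls):

PetscSF     sf;
PetscSFNode remote;
PetscMPIInt rank, size;
PetscInt    rootdata[2], leafdata[1];

MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
MPI_Comm_size(PETSC_COMM_WORLD, &size);
rootdata[0] = 10*rank; rootdata[1] = 10*rank + 1;
remote.rank  = (rank + 1) % size;    /* one leaf, referencing root 0 on the neighbor */
remote.index = 0;
PetscSFCreate(PETSC_COMM_WORLD, &sf);
PetscSFSetGraph(sf, 2, 1, NULL, PETSC_COPY_VALUES, &remote, PETSC_COPY_VALUES);
                                     /* ilocal = NULL: leaves stored contiguously */
PetscSFBcastBegin(sf, MPIU_INT, rootdata, leafdata);
PetscSFBcastEnd(sf, MPIU_INT, rootdata, leafdata);  /* leafdata[0] <- neighbor's root 0 */
PetscSFDestroy(&sf);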

Now if PETSc had an abstract concept of indexing (funny no one ever put that in 
PETSc decades ago) it would look like

PetscSFSetGraph(PetscSF sf,PetscInt nroots,  toIndices  , fromIndices)

but wait; PETSc does have an abstraction for indexing (into regular 
arrays) called IS. Come to think of it, PETSc has another abstraction for 
indexing (into arrays with different-sized items) called XX.  Actually thinking 
a tiny bit more one realizes that there is a third: PetscSFNode is a way of 
indexing (into regular arrays) on a bunch of different processes. If we come up 
with a couple more related, but inconsistent, syntaxes for indexing, we'll have 
code as good as Trilinos.

   So let's fix this up! One abstraction for indexing that handles both regular 
arrays, arrays with different-sized items, and both kinds of arrays on a bunch of 
different processes; with simple enough syntax so it can be used whenever 
indexing is needed including passing to PetscSF! (We can ignore the need for 
MPI datatypes in communication for now). 

   Concrete examples to demonstrate this need not be so difficult. A completely 
general set of indices for this situation could be handled by 

typedef struct {
   PetscInt nindices;
   PetscMPIInt   *ranks;
   PetscInt  *offsets;
   PetscInt  *sizes;
}  XXData;  Hardly more than PetscSFNode.  Or could be handled as an array of 
structs.
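
The array-of-structs variant would be roughly:

typedef struct {
  PetscMPIInt rank;     /* owner of the item */
  PetscInt    offset;   /* where the item starts on that rank */
  PetscInt    size;     /* extent of the item */
} XXIndex;              /* one entry per index, instead of parallel arrays */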

special cases would include: 
all on one process so ranks is not needed
all items the same size so sizes not needed
contiguous memory (or strided) so offsets not needed

If we decided not to go with an abstract object but instead just a concrete 
data structure that could handle all cases it could look something like

typedef struct {
   PetscInt nindices;
   PetscMPIInt   *ranks;
   PetscInt  *offsets;
   PetscInt  *sizes;
   PetscInt  chunksize,strideoffset;
}  XXData;

For completeness I note that with IS (for example when creating a VecScatter 
with parallel vectors) we don't use rank,offset notation; we use global offset == 
rstart[rank] + offset, but that is something I'm sure we can deal with.


QED

Comments?






Re: [petsc-dev] PetscSection

2013-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 8:55 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
> >> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
> >> even if it could, it's a horribly inefficient representation (4 or 8
> >> bytes of metadata for each byte of payload).  The quick fix at that
> >> moment was to send in units of size PetscInt (the struct was always
> >> going to be divisible by that size).
> >>
> >
> > The way I understand it, he never had a single MPI_BYTE, but always a
> bunch.
> > Shouldn't that packing handle that case?
>
> In DMPlexDistributeData, your fieldSF is blown up so that every unit has
> its own metadata (rank, offset).  That's a terrible mechanism for moving
> structs with hundreds of bytes.
>

Hmm, okay we can probably handle some of the packing there. I will think
about it.

   Matt


> I would rather not make PetscSF deal with fully-heterogeneous data
> (where each node can have a different size) because it makes indexing a
> nightmare (you need a PetscSection or similar just to get in the door; I
> don't want PetscSF to depend on PetscSection).
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Shri  writes:
>For my toy example, using four sections is fine. But in general I think 
> there needs to be a mechanism by which the user can easily pack the data, 
> and I don't think the user would want to create tens of sections. 

Presumably they wouldn't be the ones creating all those sections.

> I'll take a look at it in the next few days and get back to you. One
> thing that I am struggling to understand, and this is in reference to
> having one chunk of memory holding all the different parameters for
> vertices, edges etc, is how to manage heterogeneous data, i.e., for
> example one vertex has a struct of type G1 and another has a type
> G2. Currently from PetscSection, we can only get the offset and the
> size.

This is the problem you have to solve.  If you don't know how to index
it locally, there's no sense going on about how to communicate it.

One option is a tagged union (can use a Label as the tag if you like),
the other is to partition the index space by node type.

Note that if you can have multiple structs at one node, you're asking
for two levels of nesting: number-of-units and number-of-bytes-per-unit.
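
A minimal sketch of the tagged-union option (G1/G2 and their payloads are
hypothetical):

typedef enum {NODE_G1, NODE_G2} NodeTag;

typedef struct {
  NodeTag tag;                      /* which struct type lives at this node */
  union {
    struct { double a, b;    } g1;  /* hypothetical per-node payloads */
    struct { double x, y, z; } g2;
  } data;
} NodeData;

Every unit is then sizeof(NodeData) bytes, so indexing stays homogeneous, at
the cost of padding the smaller types up to the largest one.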




Re: [petsc-dev] PetscSection

2013-09-10 Thread Shri

On Sep 10, 2013, at 8:47 PM, Matthew Knepley wrote:

> On Tue, Sep 10, 2013 at 8:43 PM, Jed Brown  wrote:
> Matthew Knepley  writes:
> >>   I was unclear.  The offset value should be in bytes from the beginning
> >> of the array; currently it is in ints from the beginning of the array. Ints
> >> is just plain confusing; Shri has to have this strange size sizeof(his
> >> struct)/sizeof(PetscInt) to indicate how big his chunk is.
> >
> >
> > No, it has nothing to do with Int. Where did this come from? In fact, I
> > use it all the time to mean Scalar offsets. It is
> > interpreted by the creator of the Section. I told him to put in bytes for
> > the sizes. Then you do not need any of that crap.
> 
> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
> even if it could, it's a horribly inefficient representation (4 or 8
> bytes of metadata for each byte of payload).  The quick fix at that
> moment was to send in units of size PetscInt (the struct was always
> going to be divisible by that size).
> 
> The way I understand it, he never had a single MPI_BYTE, but always a bunch.
> Shouldn't that packing handle that case?
> 
>Matt
>  
> After looking at code with him yesterday, it looks like he just needs a
> separate Section for each of four different types of node.  Within each
> node type, the sizes are homogeneous (though there may be multiple
> structs at a vertex of the graph).  In that case, he can make an
> MPI_Datatype for sizeof(struct Node) bytes and communicate those.
> PetscSF should do fine with that.
   For my toy example, using four sections is fine. But in general I think 
there needs to be a mechanism by which the user can easily pack the data, and 
I don't think the user would want to create tens of sections. 

> 
> The indexing still needs the size of a unit, but there would be no stray
> sizeof(Int) running around.
> 
> 
> In that discussion, we also concluded that having the user add code to
> one big struct definition was a terrible approach and we should
> dynamically build structures by registration.  I pointed him to the
> monitor infrastructure in Parmod (like in TS ex11).

I'll take a look at it in the next few days and get back to you. One thing that 
I am struggling to understand, and this is in reference to having one chunk of 
memory holding all the different parameters for vertices, edges etc, is how to 
manage heterogeneous data, i.e., for example one vertex has a struct of type G1 
and another has a type G2. Currently from PetscSection, we can only get the 
offset and the size.

Shri
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener



Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Dmitry Karpeyev  writes:

> Any chance of a more intuitive name in place of PetscSF?

"star forest" exactly describes this communication graph and is totally
unambiguous.  If you have a better name, let's hear it.




Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Matthew Knepley  writes:
>> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
>> even if it could, it's a horribly inefficient representation (4 or 8
>> bytes of metadata for each byte of payload).  The quick fix at that
>> moment was to send in units of size PetscInt (the struct was always
>> going to be divisible by that size).
>>
>
> The way I understand it, he never had a single MPI_BYTE, but always a bunch.
> Shouldn't that packing handle that case?

In DMPlexDistributeData, your fieldSF is blown up so that every unit has
its own metadata (rank, offset).  That's a terrible mechanism for moving
structs with hundreds of bytes.

I would rather not make PetscSF deal with fully-heterogeneous data
(where each node can have a different size) because it makes indexing a
nightmare (you need a PetscSection or similar just to get in the door; I
don't want PetscSF to depend on PetscSection).




Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Shri  writes:
> After speaking with Barry today, I realized that I was abusing
> PetscSection and doing roundabout indexing to access the desired
> data. I was using four sections to distribute data for four different
> structures. I understand now that I need to
>
> i) Create only one section to manage the parameters and use
> PetscSectionAddDof() to add the dofs (sizes of structs) for the
> different structs required for residual evaluation.

This is not what I remember from our discussion yesterday.

> I tried to use MPI_BYTE as the MPI_DataType in DMPlexDistributeData()
> but got the following error. Jed explained that it had something to do
> with how the data is packed and suggested dividing the sizes of the
> structs in (ii) by sizeof(PetscInt) and using MPI_INT for DMPlexDistributeData()
> as a workaround.

That was a hack, but with separate sections, you have homogeneous-sized units.

You said that you have some nodes with an array of structs.  How do you
distinguish four structs of 25 bytes each from one struct of 100 bytes?
What would the section look like and what would you tell PetscSF?
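
With only an (offset, dof) pair per point, the section itself cannot express
the difference:

PetscInt dof, off;
PetscSectionGetDof(sec, p, &dof);     /* dof = 100: one 100-byte struct, or     */
PetscSectionGetOffset(sec, p, &off);  /* four 25-byte ones?  a second level of  */
                                      /* nesting (units x bytes-per-unit) would */
                                      /* be needed to tell them apart           */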

> Btw, DMPlexDistributeData() is missing an extern declaration in petscdmplex.h.

I fixed this yesterday.  It is in 'knepley/feature-plex-generic-distribute' and 
'next'.




Re: [petsc-dev] PetscSection

2013-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 8:43 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
> >>   I was unclear.  The offset value should be in bytes from the beginning
> >> of the array; currently it is in ints from the beginning of the array.
> Ints
> >> is just plain confusing; Shri has to have this strange size sizeof(his
> >> struct)/sizeof(PetscInt) to indicate how big his chunk is.
> >
> >
> > No, it has nothing to do with Int. Where did this come from? In fact, I
> > use it all the time to mean Scalar offsets. It is
> > interpreted by the creator of the Section. I told him to put in bytes for
> > the sizes. Then you do not need any of that crap.
>
> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
> even if it could, it's a horribly inefficient representation (4 or 8
> bytes of metadata for each byte of payload).  The quick fix at that
> moment was to send in units of size PetscInt (the struct was always
> going to be divisible by that size).
>

The way I understand it, he never had a single MPI_BYTE, but always a bunch.
Shouldn't that packing handle that case?

   Matt


> After looking at code with him yesterday, it looks like he just needs a
> separate Section for each of four different types of node.  Within each
> node type, the sizes are homogeneous (though there may be multiple
> structs at a vertex of the graph).  In that case, he can make an
> MPI_Datatype for sizeof(struct Node) bytes and communicate those.
> PetscSF should do fine with that.
>
> The indexing still needs the size of a unit, but there would be no stray
> sizeof(Int) running around.
>
>
> In that discussion, we also concluded that having the user add code to
> one big struct definition was a terrible approach and we should
> dynamically build structures by registration.  I pointed him to the
> monitor infrastructure in Parmod (like in TS ex11).
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-dev] PetscSection

2013-09-10 Thread Dmitry Karpeyev
Any chance of a more intuitive name in place of PetscSF?


On Tue, Sep 10, 2013 at 8:43 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
> >>   I was unclear.  The offset value should be in bytes from the beginning
> >> of the array; currently it is in ints from the beginning of the array.
> Ints
> >> is just plain confusing; Shri has to have this strange size sizeof(his
> >> struct)/sizeof(PetscInt) to indicate how big his chunk is.
> >
> >
> > No, it has nothing to do with Int. Where did this come from? In fact, I
> > use it all the time to mean Scalar offsets. It is
> > interpreted by the creator of the Section. I told him to put in bytes for
> > the sizes. Then you do not need any of that crap.
>
> PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
> even if it could, it's a horribly inefficient representation (4 or 8
> bytes of metadata for each byte of payload).  The quick fix at that
> moment was to send in units of size PetscInt (the struct was always
> going to be divisible by that size).
>
> After looking at code with him yesterday, it looks like he just needs a
> separate Section for each of four different types of node.  Within each
> node type, the sizes are homogeneous (though there may be multiple
> structs at a vertex of the graph).  In that case, he can make an
> MPI_Datatype for sizeof(struct Node) bytes and communicate those.
> PetscSF should do fine with that.
>
> The indexing still needs the size of a unit, but there would be no stray
> sizeof(Int) running around.
>
>
> In that discussion, we also concluded that having the user add code to
> one big struct definition was a terrible approach and we should
> dynamically build structures by registration.  I pointed him to the
> monitor infrastructure in Parmod (like in TS ex11).
>


Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Matthew Knepley  writes:
> I wish the guy who wrote PetscSF believed in types smaller than int. Just
> because they are small does not mean they
> cannot achieve great things. MPI_BYTE is the Little Type That Could.

You have a (rank, offset) per unit.  If unit is one byte, then you have
inflated the metadata by at least 8x (16x when 64-bit-indices).  That
is, you're using 8 bytes of metadata to move each byte of payload.
That's silly; we need a different model.

It looks like Shri would be just fine with 4 separate sections, each
with homogeneous-sized units.




Re: [petsc-dev] PetscSection

2013-09-10 Thread Jed Brown
Matthew Knepley  writes:
>>   I was unclear.  The offset value should be in bytes from the beginning
>> of the array; currently it is in ints from the beginning of the array. Ints
>> is just plain confusing; Shri has to have this strange size sizeof(his
>> struct)/sizeof(PetscInt) to indicate how big his chunk is.
>
>
> No, it has nothing to do with Int. Where did this come from? In fact, I
> use it all the time to mean Scalar offsets. It is
> interpreted by the creator of the Section. I told him to put in bytes for
> the sizes. Then you do not need any of that crap.

PetscSF cannot currently scatter individual bytes (4 bytes minimum), and
even if it could, it's a horribly inefficient representation (4 or 8
bytes of metadata for each byte of payload).  The quick fix at that
moment was to send in units of size PetscInt (the struct was always
going to be divisible by that size).

After looking at code with him yesterday, it looks like he just needs a
separate Section for each of four different types of node.  Within each
node type, the sizes are homogeneous (though there may be multiple
structs at a vertex of the graph).  In that case, he can make an
MPI_Datatype for sizeof(struct Node) bytes and communicate those.
PetscSF should do fine with that.
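
For example, something like (struct Node standing in for one of the four
types):

MPI_Datatype nodetype;
MPI_Type_contiguous((int)sizeof(struct Node), MPI_BYTE, &nodetype);
MPI_Type_commit(&nodetype);
/* ... pass nodetype as the unit to the PetscSF communication calls ... */
MPI_Type_free(&nodetype);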

The indexing still needs the size of a unit, but there would be no stray
sizeof(Int) running around.


In that discussion, we also concluded that having the user add code to
one big struct definition was a terrible approach and we should
dynamically build structures by registration.  I pointed him to the
monitor infrastructure in Parmod (like in TS ex11).




Re: [petsc-dev] PetscSection

2013-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 8:36 PM, Shri  wrote:

>
> On Sep 10, 2013, at 6:57 PM, Matthew Knepley wrote:
>
> On Tue, Sep 10, 2013 at 6:53 PM, Barry Smith  wrote:
>
>>
>> On Sep 10, 2013, at 6:11 PM, Matthew Knepley  wrote:
>>
>> > On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith 
>> wrote:
>> >
>> >We had this discussion last November. Now that DMPlex has further
>> matured and is likely to play a big role in some major applications I'd
>> like to revisit the issue and see if PetscSection can be refactored to be
>> easier to understand and more useful and more "compatible" with how DMPlex
>> is developing.
>> >
>> > PetscSection seems to have two distinct roles:
>> >
>> >   Role 1   (I'll call this object XX)  Provides a way of indexing into
>> an array of "items" where different items can be of different sizes and
>> there can be spaces in the memory between items.
>> >
>> > -
>> >   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>> >   PetscInt *atlasDof; /* Describes layout of
>> storage, point --> # of values */
>> >   PetscInt *atlasOff; /* Describes layout of
>> storage, point --> offset into storage */
>> >   PetscInt  maxDof;   /* Maximum dof on any
>> point */
>> >   PetscBool setup;
>> >
>> > So if we have a regular C array pointer to type int, double etc we do
>>  addr = array + p == &array[p] to get the address of the pth item (where p
>> may only be valid between pstart and pend.) With PetscSection instead one
>> would do XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while
>> XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.
>> >
>> > For historical reasons the offsets are in terms of int* instead of
>> char*;  (this should be fixed)
>> >
>> > I do not understand this. The offsets are integers, and I think they
>> should be.
>>
>>   I was unclear.  The offset value should be in bytes from the beginning
>> of the array; currently it is in ints from the beginning of the array. Ints
>> is just plain confusing; Shri has to have this strange size sizeof(his
>> struct)/sizeof(PetscInt) to indicate how big his chunk is.
>
>
> No, it has nothing to do with Int. Where did this come from? In fact, I
> use it all the time to mean Scalar offsets. It is
> interpreted by the creator of the Section. I told him to put in bytes for
> the sizes. Then you do not need any of that crap.
>
>
> After speaking with Barry today, I realized that I was abusing
> PetscSection and doing roundabout indexing to access the desired data. I
> was using four sections to distribute data for four different structures. I
> understand now that I need to
> i) Create only one section to manage the parameters and use
> PetscSectionAddDof() to add the dofs (sizes of structs) for the different
> structs required for residual evaluation.
> ii) Allocate a chunk of memory equal to the size returned from
> PetscSectionGetStorageSize(). Note that the user can do (i) and (ii)
> directly while reading the circuit parameters file. I am doing this after
> the file has been read and the different structs have been created, hence
> the need for allocating the chunk of memory.
> iii) Loop over the edges, vertices and copy over the parameters using
> PetscSectionGetOffset().
> iv) Use DMPlexDistributeData() with this section and the chunk of memory.
>
> I tried to use MPI_BYTE as the MPI_DataType in DMPlexDistributeData() but
> got the following error. Jed explained that it had something to do with how
> the data is packed and suggested dividing the sizes of the structs in (ii)
> by sizeof(PetscInt) and using MPI_INT for DMPlexDistributeData() as a workaround.
>

I wish the guy who wrote PetscSF believed in types smaller than int. Just
because they are small does not mean they
cannot achieve great things. MPI_BYTE is the Little Type That Could.

   Matt


> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: No support for this operation for this object type!
> [0]PETSC ERROR: No support for type size not divisible by 4!
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Development GIT revision: d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05 -0500
> [1]PETSC ERROR: --------------------- Error Message ------------------------------------
> [1]PETSC ERROR: No support for this operation for this object type!
> [1]PETSC ERROR: No support for type size not divisible by 4!
> [1]PETSC ERROR: ------------------------------------------------------------------------
> [1]PETSC ERROR: Petsc Development GIT revision: d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05 -0500
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: ------------------------------------------------------------------------

Re: [petsc-dev] PetscSection

2013-09-10 Thread Shri

On Sep 10, 2013, at 6:57 PM, Matthew Knepley wrote:

> On Tue, Sep 10, 2013 at 6:53 PM, Barry Smith  wrote:
> 
> On Sep 10, 2013, at 6:11 PM, Matthew Knepley  wrote:
> 
> > On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith  wrote:
> >
> >We had this discussion last November. Now that DMPlex has further 
> > matured and is likely to play a big role in some major applications I'd 
> > like to revisit the issue and see if PetscSection can be refactored to be 
> > easier to understand and more useful and more "compatible" with how DMPlex 
> > is developing.
> >
> > PetscSection seems to have two distinct roles:
> >
> >   Role 1   (I'll call this object XX)  Provides a way of indexing into an 
> > array of "items" where different items can be of different sizes and there 
> > can be spaces in the memory between items.
> >
> > -
> >   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
> >   PetscInt *atlasDof; /* Describes layout of 
> > storage, point --> # of values */
> >   PetscInt *atlasOff; /* Describes layout of 
> > storage, point --> offset into storage */
> >   PetscInt  maxDof;   /* Maximum dof on any point */
> >   PetscBool setup;
> >
> > So if we have a regular C array pointer to type int, double etc we do  addr 
> > = array + p == &array[p] to get the address of the pth item (where p may 
> > only be valid between pstart and pend.) With PetscSection instead one would 
> > do XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while 
> > XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.
> >
> > For historical reasons the offsets are in terms of int* instead of 
> > char*;  (this should be fixed)
> >
> > I do not understand this. The offsets are integers, and I think they should 
> > be.
> 
>   I was unclear.  The offset value should be in bytes from the beginning of 
> the array; currently it is in ints from the beginning of the array. Ints is 
> just plain confusing; Shri has to have this strange size sizeof(his 
> struct)/sizeof(PetscInt) to indicate how big his chunk is.
> 
> No, it has nothing to do with Int. Where did this come from? In fact, I use 
> it all the time to mean Scalar offsets. It is
> interpreted by the creator of the Section. I told him to put in bytes for the 
> sizes. Then you do not need any of that crap.

After speaking with Barry today, I realized that I was abusing PetscSection and 
doing roundabout indexing to access the desired data. I was using four 
sections to distribute data for four different structures. I understand now 
that I need to: 
i) Create only one section to manage the parameters and use 
PetscSectionAddDof() to add the dofs (sizes of structs) for the different 
structs required for residual evaluation.
ii) Allocate a chunk of memory equal to the size returned from 
PetscSectionGetStorageSize(). Note that the user can do (i) and (ii) directly 
while reading the circuit parameters file. I am doing this after the file has 
been read and the different structs have been created, hence the need for 
allocating the chunk of memory.
iii) Loop over the edges, vertices and copy over the parameters using 
PetscSectionGetOffset().
iv) Use DMPlexDistributeData() with this section and the chunk of memory.
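
A rough sketch of (i)-(iv), with error checking omitted; EdgeParams/VertexParams, 
the point ranges, and the parameter arrays below are stand-ins, not my actual code:

PetscSection sec, newSec;
PetscInt     p, off, size;
char        *data;
void        *newData;

PetscSectionCreate(PETSC_COMM_WORLD, &sec);
PetscSectionSetChart(sec, pStart, pEnd);   /* chart covering the edges and vertices */
for (p = eStart; p < eEnd; ++p) PetscSectionAddDof(sec, p, sizeof(EdgeParams));
for (p = vStart; p < vEnd; ++p) PetscSectionAddDof(sec, p, sizeof(VertexParams));
PetscSectionSetUp(sec);
PetscSectionGetStorageSize(sec, &size);
PetscMalloc(size, &data);
for (p = eStart; p < eEnd; ++p) {          /* copy edge parameters into the chunk */
  PetscSectionGetOffset(sec, p, &off);
  memcpy(data + off, &edgeparams[p - eStart], sizeof(EdgeParams));
}
/* ... likewise for the vertex structs ... */
PetscSectionCreate(PETSC_COMM_WORLD, &newSec);
DMPlexDistributeData(dm, pointSF, sec, MPI_BYTE, data, newSec, &newData);
/* MPI_BYTE is what triggers the error below; the workaround divides the dofs by
   sizeof(PetscInt) and passes MPI_INT instead */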

I tried to use MPI_BYTE as the MPI_DataType in DMPlexDistributeData() but got 
the following error. Jed explained that it had something to do with how the 
data is packed and suggested dividing the sizes of the structs in (ii) by 
sizeof(PetscInt) and using MPI_INT for DMPlexDistributeData() as a workaround.

[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: No support for this operation for this object type!
[0]PETSC ERROR: No support for type size not divisible by 4!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development GIT revision: d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05 -0500
[1]PETSC ERROR: --------------------- Error Message ------------------------------------
[1]PETSC ERROR: No support for this operation for this object type!
[1]PETSC ERROR: No support for type size not divisible by 4!
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Petsc Development GIT revision: d45619dec29bfb59cf96225a84e0a74106da50ca  GIT Date: 2013-08-17 23:45:05 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./PF2 on a debug-mode named Shrirangs-MacBook-Pro.local by Shri Tue Sep 10 20:22:44 2013
[0]PETSC ERROR: Libraries linked from /Users/Shri/Documents/petsc/debug-mode/lib

Re: [petsc-dev] PetscSection

2013-09-10 Thread Barry Smith

On Sep 10, 2013, at 6:11 PM, Matthew Knepley  wrote:

> On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith  wrote:
> 
>We had this discussion last November. Now that DMPlex has further matured 
> and is likely to play a big role in some major applications I'd like to 
> revisit the issue and see if PetscSection can be refactored to be easier to 
> understand and more useful and more "compatible" with how DMPlex is 
> developing.
> 
> PetscSection seems to have two distinct roles:
> 
>   Role 1   (I'll call this object XX)  Provides a way of indexing into an 
> array of "items" where different items can be of different sizes and there 
> can be spaces in the memory between items.
> 
> -
>   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>   PetscInt *atlasDof; /* Describes layout of storage, 
> point --> # of values */
>   PetscInt *atlasOff; /* Describes layout of storage, 
> point --> offset into storage */
>   PetscInt  maxDof;   /* Maximum dof on any point */
>   PetscBool setup;
> 
> So if we have a regular C array pointer to type int, double etc we do  addr = 
> array + p == &array[p] to get the address of the pth item (where p may only 
> be valid between pstart and pend.) With PetscSection instead one would do 
> XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while 
> XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.
> 
> For historical reasons the offsets are in terms of int* instead of char*; 
>  (this should be fixed)
> 
> I do not understand this. The offsets are integers, and I think they should 
> be.

  I was unclear.  The offset value should be in bytes from the beginning of the 
array; currently it is in ints from the beginning of the array. Ints is just 
plain confusing; Shri has to have this strange size sizeof(his 
struct)/sizeof(PetscInt) to indicate how big his chunk is.

>  
> 
> This is a useful thing but in itself not a killer app.  What makes it a 
> killer app are
> 
>  DMPlexDistributeField(DM dm, PetscSF pointSF, XX originalSection, Vec 
> originalVec, XX newSection, Vec newVec)
>  DMPlexDistributeData(DM dm, PetscSF pointSF, XX originalSection, 
> MPI_Datatype datatype, void *originalData, XX newSection, void **newData)
> 
> Note that these do not actually even use the DM (the code should be fixed 
> by moving it out of DMPlex into the "lower level" code.) Note also these 
> really create newSection internally (the code should be fixed by making these 
> take PetscSection *newSection).
> 
> I can move them over to vsectionis.c. I do not agree that the signature 
> should change. I thought we were going to
> the VecLoad() paradigm where an empty object comes in.
>  
> What this means is that if I have a set of "points" distributed across 
> MPI processes and for each point a "bunch of data" (in an "array" offsetted 
> by a XX) and I redistribute the points across MPI processes including adding 
> ghost points then I can redistribute all the "bunch of data" (including that 
> needed for ghost points) trivially and conveniently access it using the 
> newSection.
> 
> Yep, this should be very useful, an extension of the very useful VecScatter.
>  
> 
>Role 2 (I'll call this object CC)  Provides a container that tells you 
> which parts of "bunches of data" in an array or Vec are associated with 
> different "physical simulation fields" and are possibly constrained (for 
> example by boundary conditions). Uses XX to indicate each field etc.
> 
>   
>   PetscSection  bc;   /* Describes constraints, point 
> --> # local dofs which are constrained */
>   PetscInt *bcIndices;/* Local indices for 
> constrained dofs */
> 
> 
>   PetscInt  numFields;/* The number of fields making 
> up the degrees of freedom */
>   const char  **fieldNames;   /* The field names */
>   PetscInt *numFieldComponents; /* The number of 
> components in each field */
>   PetscSection *field;/* A section describing the 
> layout and constraints for each field */
> 
> 
>Suggestion:
> 
> We split PetscSection into XX and CC, reorganize, and clean them up.
> 
> Okay. The reason I did it this way was that any FEM code is going to need CC, 
> and it needs to contain XX, so
> I just made XX the container. We can make CC the container and have the split 
> that you want. It makes sense.
>  
>Relationship of XX to IS.
> 
> An IS defines a subset of entries in a Vec or possibly in an array of 
> type int. Generally individual entries in an IS do not have particular 
> meaning relative to other entries. That is one does not iterate over entries 
> in an IS, one just uses the IS to get a new Vec and then does whatever on 
> that Vec.
> 
>An XX is something you often iterate over

Re: [petsc-dev] PetscSection

2013-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 6:53 PM, Barry Smith  wrote:

>
> On Sep 10, 2013, at 6:11 PM, Matthew Knepley  wrote:
>
> > On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith  wrote:
> >
> >We had this discussion last November. Now that DMPlex has further
> matured and is likely to play a big role in some major applications I'd
> like to revisit the issue and see if PetscSection can be refactored to be
> easier to understand and more useful and more "compatible" with how DMPlex
> is developing.
> >
> > PetscSection seems to have two distinct roles:
> >
> >   Role 1   (I'll call this object XX)  Provides a way of indexing into
> an array of "items" where different items can be of different sizes and
> there can be spaces in the memory between items.
> >
> > -
> >   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
> >   PetscInt *atlasDof; /* Describes layout of
> storage, point --> # of values */
> >   PetscInt *atlasOff; /* Describes layout of
> storage, point --> offset into storage */
> >   PetscInt  maxDof;   /* Maximum dof on any
> point */
> >   PetscBool setup;
> >
> > So if we have a regular C array pointer to type int, double etc we do
>  addr = array + p == &array[p] to get the address of the pth item (where p
> may only be valid between pstart and pend.) With PetscSection instead one
> would do XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while
> XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.
> >
> > For historical reasons the offsets are in terms of int* instead of
> char*;  (this should be fixed)
> >
> > I do not understand this. The offsets are integers, and I think they
> should be.
>
>   I was unclear.  The offset value should be in bytes from the beginning
> of the array; currently it is in ints from the beginning of the array. Ints
> is just plain confusing; Shri has to have this strange size sizeof(his
> struct)/sizeof(PetscInt) to indicate how big his chunk is.


No, it has nothing to do with Int. Where did this come from? In fact, I
use it all the time to mean Scalar offsets. It is
interpreted by the creator of the Section. I told him to put in bytes for
the sizes. Then you do not need any of that crap.



> >
> >
> > This is a useful thing but in itself not a killer app.  What makes
> it a killer app are
> >
> >  DMPlexDistributeField(DM dm, PetscSF pointSF, XX originalSection, Vec
> originalVec, XX newSection, Vec newVec)
> >  DMPlexDistributeData(DM dm, PetscSF pointSF, XX originalSection,
> MPI_Datatype datatype, void *originalData, XX newSection, void **newData)
> >
> > Note that these do not actually even use the DM (the code should be
> fixed by moving it out of DMPlex into the "lower level" code.) Note also
> these really create newSection internally (the code should be fixed by
> making these take PetscSection *newSection).
> >
> > I can move them over to vsectionis.c. I do not agree that the signature
> should change. I thought we were going to
> > the VecLoad() paradigm where an empty object comes in.
> >
> > What this means is that if I have a set of "points" distributed
> across MPI processes and for each point a "bunch of data" (in an "array"
> offsetted by a XX) and I redistribute the points across MPI processes
> including adding ghost points then I can redistribute all the "bunch of
> data" (including that needed for ghost points) trivially and conveniently
> access it using the newSection.
> >
> > Yep, this should be very useful, an extension of the very useful
> VecScatter.
> >
> >
> >Role 2 (I'll call this object CC)  Provides a container that tells
> you which parts of "bunches of data" in an array or Vec are associated with
> different "physical simulation fields" and are possibly constrained (for
> example by boundary conditions). Uses XX to indicate each field etc.
> >
> >   
> >   PetscSection  bc;   /* Describes constraints,
> point --> # local dofs which are constrained */
> >   PetscInt *bcIndices;/* Local indices for
> constrained dofs */
> >
> >
> >   PetscInt  numFields;/* The number of fields
> making up the degrees of freedom */
> >   const char  **fieldNames;   /* The field names */
> >   PetscInt *numFieldComponents; /* The number of
> components in each field */
> >   PetscSection *field;/* A section describing
> the layout and constraints for each field */
> >
> >
> >Suggestion:
> >
> > We split PetscSection into XX and CC, reorganize, and clean them
> up.
> >
> > Okay. The reason I did it this way was that any FEM code is going to
> need CC, and it needs to contain XX, so
> > I just made XX the container. We can make CC the container and have the
> split that you want. It makes sense.
> >
> >Relationship of XX to IS.
> >
>

Re: [petsc-dev] PetscSection

2013-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2013 at 5:14 PM, Barry Smith  wrote:

>
>We had this discussion last November. Now that DMPlex has further
> matured and is likely to play a big role in some major applications I'd
> like to revisit the issue and see if PetscSection can be refactored to be
> easier to understand and more useful and more "compatible" with how DMPlex
> is developing.
>
> PetscSection seems to have two distinct roles:
>
>   Role 1   (I'll call this object XX)  Provides a way of indexing into an
> array of "items" where different items can be of different sizes and there
> can be spaces in the memory between items.
>
> -
>   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>   PetscInt *atlasDof; /* Describes layout of
> storage, point --> # of values */
>   PetscInt *atlasOff; /* Describes layout of
> storage, point --> offset into storage */
>   PetscInt  maxDof;   /* Maximum dof on any point
> */
>   PetscBool setup;
>
> So if we have a regular C array pointer to type int, double etc we do
>  addr = array + p == &array[p] to get the address of the pth item (where p
> may only be valid between pstart and pend.) With PetscSection instead one
> would do XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while
> XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.
>
> For historical reasons the offsets are in terms of int* instead of
> char*;  (this should be fixed)
>

I do not understand this. The offsets are integers, and I think they should
be.


>
> This is a useful thing but in itself not a killer app.  What makes it
> a killer app are
>
>  DMPlexDistributeField(DM dm, PetscSF pointSF, XX originalSection, Vec
> originalVec, XX newSection, Vec newVec)
>  DMPlexDistributeData(DM dm, PetscSF pointSF, XX originalSection,
> MPI_Datatype datatype, void *originalData, XX newSection, void **newData)
>
> Note that these do not actually even use the DM (the code should be
> fixed by moving it out of DMPlex into the "lower level" code.) Note also
> these really create newSection internally (the code should be fixed by
> making these take PetscSection *newSection).
>

I can move them over to vsectionis.c. I do not agree that the signature
should change. I thought we were going to
the VecLoad() paradigm where an empty object comes in.


> What this means is that if I have a set of "points" distributed across
> MPI processes and for each point a "bunch of data" (in an "array" offsetted
> by a XX) and I redistribute the points across MPI processes including
> adding ghost points then I can redistribute all the "bunch of data"
> (including that needed for ghost points) trivially and conveniently access
> it using the newSection.
>

Yep, this should be very useful, an extension of the very useful VecScatter.


>
>Role 2 (I'll call this object CC)  Provides a container that tells you
> which parts of "bunches of data" in an array or Vec are associated with
> different "physical simulation fields" and are possibly constrained (for
> example by boundary conditions). Uses XX to indicate each field etc.
>
>   
>   PetscSection  bc;   /* Describes constraints,
> point --> # local dofs which are constrained */
>   PetscInt *bcIndices;/* Local indices for
> constrained dofs */
>
>
>   PetscInt  numFields;/* The number of fields
> making up the degrees of freedom */
>   const char  **fieldNames;   /* The field names */
>   PetscInt *numFieldComponents; /* The number of
> components in each field */
>   PetscSection *field;/* A section describing the
> layout and constraints for each field */
>
>
>Suggestion:
>
> We split PetscSection into XX and CC, reorganize, and clean them up.
>

Okay. The reason I did it this way was that any FEM code is going to need
CC, and it needs to contain XX, so
I just made XX the container. We can make CC the container and have the
split that you want. It makes sense.


>Relationship of XX to IS.
>
> An IS defines a subset of entries in a Vec or possibly in an array
> of type int. Generally individual entries in an IS do not have particular
> meaning relative to other entries. That is one does not iterate over
> entries in an IS, one just uses the IS to get a new Vec and then does
> whatever on that Vec.
>
>An XX is something you often iterate over, for example
>
> for (p = pStart; p < pEnd; ++p) {
>   PetscInt dof, off, s;
>   ierr = PetscSectionGetDof(mesh->supportSection, p,
> &dof);CHKERRQ(ierr);
>   ierr = PetscSectionGetOffset(mesh->supportSection, p,
> &off);CHKERRQ(ierr);
>   do something
>
> that is you get a single entry of XX and do something with it (though you
> can also get a bunch of entries and do something on the whole bunch

Re: [petsc-dev] PetscSection

2013-09-10 Thread Barry Smith

   We had this discussion last November. Now that DMPlex has further matured 
and is likely to play a big role in some major applications I'd like to revisit 
the issue and see if PetscSection can be refactored to be easier to understand 
and more useful and more "compatible" with how DMPlex is developing.

PetscSection seems to have two distinct roles: 

  Role 1   (I'll call this object XX)  Provides a way of indexing into an array 
of "items" where different items can be of different sizes and there can be 
spaces in the memory between items. 

-
  struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
  PetscInt *atlasDof; /* Describes layout of storage, 
point --> # of values */
  PetscInt *atlasOff; /* Describes layout of storage, 
point --> offset into storage */
  PetscInt  maxDof;   /* Maximum dof on any point */
  PetscBool setup;

So if we have a regular C array pointer to type int, double etc we do  addr = 
array + p == &array[p] to get the address of the pth item (where p may only be 
valid between pstart and pend.) With PetscSection instead one would do 
XXGetOffset(xx,p,&offset);  addr = (type*) (array + offset); while 
XXGetDOF(xx,p,dof); gives you the "size" of the memory at location addr.

For historical reasons the offsets are in terms of int* instead of char*;  
(this should be fixed)
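
In code the access pattern is roughly this (MyItem is hypothetical, and I write 
it as if the offsets were already in bytes):

char    *array;                               /* storage laid out by xx */
PetscInt p, pStart, pEnd, dof, off;

PetscSectionGetChart(xx, &pStart, &pEnd);
for (p = pStart; p < pEnd; ++p) {
  PetscSectionGetDof(xx, p, &dof);            /* size of item p */
  PetscSectionGetOffset(xx, p, &off);         /* where item p starts */
  MyItem *item = (MyItem *)(array + off);
  /* item occupies dof bytes starting at array + off */
}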

This is a useful thing but in itself not a killer app.  What makes it a 
killer app are

 DMPlexDistributeField(DM dm, PetscSF pointSF, XX originalSection, Vec 
originalVec, XX newSection, Vec newVec)
 DMPlexDistributeData(DM dm, PetscSF pointSF, XX originalSection, MPI_Datatype 
datatype, void *originalData, XX newSection, void **newData)

Note that these do not actually even use the DM (the code should be fixed 
by moving it out of DMPlex into the "lower level" code.) Note also these really 
create newSection internally (the code should be fixed by making these take 
PetscSection *newSection).

What this means is that if I have a set of "points" distributed across MPI 
processes and for each point a "bunch of data" (in an "array" offsetted by a 
XX) and I redistribute the points across MPI processes including adding ghost 
points then I can redistribute all the "bunch of data" (including that needed 
for ghost points) trivially and conveniently access it using the newSection.


   Role 2 (I'll call this object CC)  Provides a container that tells you which 
parts of "bunches of data" in an array or Vec are associated with different 
"physical simulation fields" and are possibly constrained (for example by 
boundary conditions). Uses XX to indicate each field etc.

  
  PetscSection  bc;        /* Describes constraints, point --> # local dofs which are constrained */
  PetscInt     *bcIndices; /* Local indices for constrained dofs */

  PetscInt      numFields;          /* The number of fields making up the degrees of freedom */
  const char  **fieldNames;         /* The field names */
  PetscInt     *numFieldComponents; /* The number of components in each field */
  PetscSection *field;              /* A section describing the layout and constraints for each field */


   Suggestion:  

We split PetscSection into XX and CC, and reorganize and clean them up.

   Relationship of XX to IS.

An IS defines a subset of entries in a Vec or possibly in an array of 
type int. Generally individual entries in an IS do not have particular meaning 
relative to other entries. That is one does not iterate over entries in an IS, 
one just uses the IS to get a new Vec and then does whatever on that Vec.

   An XX is something you often iterate over, for example

for (p = pStart; p < pEnd; ++p) {
  PetscInt dof, off, s;
  ierr = PetscSectionGetDof(mesh->supportSection, p, &dof);CHKERRQ(ierr);
  ierr = PetscSectionGetOffset(mesh->supportSection, p, &off);CHKERRQ(ierr);
  do something

that is you get a single entry of XX and do something with it (though you can 
also get a bunch of entries and do something on the whole bunch).


   Question: 

Is XX a powerful and useful thing? Do we have the correct API for it? Can 
it be used for other purposes? 

Can it be made more powerful? For example, the "chunk" that it gives you is 
always a contiguous set of bytes in the "array". Would we be better off with a 
YY object that allowed them to be non-contiguous? Or should YY be a higher 
level object implemented with XX?

   Looking for a vigorous discussion and then code that I can better understand.

   Barry



On Nov 8, 2012, at 10:18 PM, Jed Brown  wrote:

> This is inevitable, but may still not resolve anything. I'm sure Matt will 
> correct whatever below is incorrect. The main question is whether to promote 
> PetscSection to a PetscObject or to hide/re

[petsc-dev] PetscSection

2012-11-12 Thread Tim Tautges


On 11/10/2012 06:33 PM, Matthew Knepley wrote:
> On Sat, Nov 10, 2012 at 7:14 PM, Jed Brown  wrote:
>> On Sat, Nov 10, 2012 at 5:31 PM, Matthew Knepley  
>> wrote:
>>>
>>> I was being too pessimistic. You can assign an intermediate node to any
>>> set
>>> of shared vertices, but I am not sure whether you start from the top or
>>> bottom,
>>> and the hard part is preallocation.
>>
>>
>> Hmm, I'm not sure how to do hanging nodes, but for a conforming mesh
>> starting with cell-to-vertex c2v[...]
>>
>> # build inverse map
>> v2c = defaultdict(lambda v:[])
>> for c in cells:
>>for v in c2v[c]:
>>  v2c[v].append(c)
>>
>> # find common subsets
>> edges = set()
>> c2e = defaultdict(lambda c:set())
>> for v in vertices:
>>for (v1,c1,v2,c2) in sorted_vertices_appearing_in_exactly_2_cells(v2c[v]);
>>  edges.add((v1,v2))
>>  c2e[c1].add((v1,v2))
>>  c2e[c2].add((v1,v2))
>>
>> # repeat using edges to bound cell-to-face
>> ...
>>
>>
>>
>> The problem with this is that you don't automatically get consistent
>> orientation. If you need to work with basis functions, you need to orient,
>> in which case the ordering produced by the procedure above needs to be
>> fixed.
>>
>> In my opinion, that orientation is critical, such that connectivity without
>> topology is of limited value. The topological dimension and number of
>> vertices is just not sufficient to identify the topology of an element. For
>> example, a hex can be collapsed to 7 vertices via an edge or across the
>> middle of a face.
>
> Definitely, that is much harder than the topology, and where I need specific
> stuff for non-simplices, but it might be possible to use the initial 
> orientation
> of vertices.
>
>  Matt
>

This is a classic compute vs. store problem.  I solve that with the CN 
(Canonical Numbering) class in MOAB, 
http://trac.mcs.anl.gov/projects/ITAPS/browser/MOAB/trunk/src/moab/CN.hpp.

- tim

> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>


-- 

"You will keep in perfect peace him whose mind is
   steadfast, because he trusts in you."   Isaiah 26:3

  Tim Tautges                Argonne National Laboratory
  (tautges at mcs.anl.gov)  (telecommuting from UW-Madison)
  phone (gvoice): (608) 354-1459  1500 Engineering Dr.
 fax: (608) 263-4499  Madison, WI 53706



[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin

On Nov 10, 2012, at 2:29 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

On Sat, Nov 10, 2012 at 2:09 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
Nope... and please, don't break that...

Here is an example of a mesh with 4 element blocks. In an analysis code, for 
each block, I would specify the element type, type of physics, coefficients, 
etc. in a PetscBag. That said, I may be doing it wrong...

block 10 is made of tri
block 22 is made of quad
blocks 100 and 102 are made of bars

I'm not suggesting any demotion of this ability. I'm trying to understand what 
is the advantage of having a stratum with unsorted mixed dimension. If the 
blocks were sorted by dimension, you would still access them using a label 
(which gives you an index set, perhaps contiguous). You would still be able to 
do all the normal operations with those points. If you want to have one loop 
over all elements of the various types, you would union or concatenate the 
index sets of each type. Am I missing something?

No, you are absolutely right. I get your point now.

Blaise



--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin










[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin

On Nov 10, 2012, at 1:54 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

On Sat, Nov 10, 2012 at 1:44 PM, Matthew Knepley <knepley at gmail.com> wrote:
Yes, I agree that they are different geometrically, and in PyLith we
make Labels to
distinguish, which I think is the right way.

Cool. Of course you have to create the labels somehow. Do you currently make 
labels for all the "sidesets" in the mesh you load (e.g., from exodus)?

For exodus, I don't read side sets. The exodus interface for side sets is really 
crappy, and does not allow a side set _inside_ the domain. Instead, I do not 
assume that all elements have the same codimension.


Labels in their current form seem to do two things (denote sets and associate 
values with points in those sets). Don't you frequently want one or the other?

> Fine, I care much more about conceptual simplicity in the interface. I think
> adoption will be higher if 95% of users don't see the word "stratum".

Note that this will have to be set explicitly somehow, since you cannot "tell"
from the DAG that something has a given dimension/co-dim.

Are your points not sorted by dimension? (Most of the code I've seen in the 
examples assumes homogeneous strata.)

Nope... and please, don't break that...

Here is an example of a mesh with 4 element blocks. In an analysis code, for 
each block, I would specify the element type, type of physics, coefficients, 
etc. in a PetscBag. That said, I may be doing it wrong...

block 10 is made of tri
block 22 is made of quad
blocks 100 and 102 are made of bars
iMac:Exodus blaise$ ./test1 -i Square-mixed2.gen

dm:
Mesh in 2 dimensions:
  0-cells: 9
  2-cells: 12
[0]: Number of vertices in mesh: 9
[0]: Number of cells in mesh: 12
viewing Mesh 'mesh'
Max sizes cone: 4 support: 5
viewing IFSieve 'mesh sieve'
cap --> base:
[0]: 12 > 0
[0]: 12 > 1
[0]: 12 > 4
[0]: 12 > 3
[0]: 12 > 5
[0]: 13 > 6
[0]: 13 > 0
[0]: 13 > 7
[0]: 13 > 5
[0]: 14 > 1
[0]: 14 > 6
[0]: 14 > 0
[0]: 15 > 2
[0]: 15 > 3
[0]: 15 > 1
[0]: 16 > 11
[0]: 16 > 2
[0]: 17 > 10
[0]: 17 > 3
[0]: 17 > 4
[0]: 17 > 11
[0]: 17 > 2
[0]: 18 > 8
[0]: 18 > 10
[0]: 18 > 4
[0]: 19 > 5
[0]: 19 > 9
[0]: 19 > 8
[0]: 19 > 4
[0]: 20 > 7
[0]: 20 > 5
[0]: 20 > 9
base <-- cap:
[0]: 0<12
[0]: 0<13
[0]: 0<14
[0]: 1<12
[0]: 1<14
[0]: 1<15
[0]: 2<15
[0]: 2<16
[0]: 2<17
[0]: 3<15
[0]: 3<17
[0]: 3<12
[0]: 4<12
[0]: 4<17
[0]: 4<18
[0]: 4<19
[0]: 5<19
[0]: 5<20
[0]: 5<13
[0]: 5<12
[0]: 6<13
[0]: 6<14
[0]: 7<20
[0]: 7<13
[0]: 8<18
[0]: 8<19
[0]: 9<19
[0]: 9<20
[0]: 10<17
[0]: 10<18
[0]: 11<16
[0]: 11<17
Orientation:
[0]: 0<12: 0
[0]: 0<13: 0
[0]: 0<14: 0
[0]: 1<12: 0
[0]: 1<14: 0
[0]: 1<15: 0
[0]: 2<15: 0
[0]: 2<16: 0
[0]: 2<17: 0
[0]: 3<15: 0
[0]: 3<17: 0
[0]: 3<12: 0
[0]: 4<12: 0
[0]: 4<17: 0
[0]: 4<18: 0
[0]: 4<19: 0
[0]: 5<19: 0
[0]: 5<20: 0
[0]: 5<13: 0
[0]: 5<12: 0
[0]: 6<13: 0
[0]: 6<14: 0
[0]: 7<20: 0
[0]: 7<13: 0
[0]: 8<18: 0
[0]: 8<19: 0
[0]: 9<19: 0
[0]: 9<20: 0
[0]: 10<17: 0
[0]: 10<18: 0
[0]: 11<16: 0
[0]: 11<17: 0
viewing GeneralSection 'coordinates'
  Fields: 0
[0]:   12 dim 2 offset 0   0 0
[0]:   13 dim 2 offset 2   -0.5 0
[0]:   14 dim 2 offset 4   -0.5 -0.5
[0]:   15 dim 2 offset 6   0 -0.5
[0]:   16 dim 2 offset 8   0.5 -0.5
[0]:   17 dim 2 offset 10   0.5 0
[0]:   18 dim 2 offset 12   0.5 0.5
[0]:   19 dim 2 offset 14   0 0.5
[0]:   20 dim 2 offset 16   -0.5 0.5
viewing LabelSifter: 'Cell sets'
cap --> base:
[0]: 10>0
[0]: 10>1
[0]: 10>2
[0]: 10>3
[0]: 22>4
[0]: 22>5
[0]: 100>6
[0]: 100>7
[0]: 102>8
[0]: 102>9
[0]: 102>10
[0]: 102>11
viewing LabelSifter: 'Vertex sets'
cap --> base:
[0]: 1>16
[0]: 1>17
[0]: 1>18


--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin







[Attachments scrubbed: test1.c (application/octet-stream, 2711 bytes),
Square-mixed2.gen (application/octet-stream, 2572 bytes)]



[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin

On Nov 10, 2012, at 1:44 PM, Matthew Knepley 
 wrote:

> On Sat, Nov 10, 2012 at 2:33 PM, Jed Brown  wrote:
>> On Sat, Nov 10, 2012 at 1:26 PM, Matthew Knepley  
>> wrote:
>>> 
>>> I do not understand. Codim 0 and 1 are handled in an identical way.
>> 
>> 
>> The user doesn't do the same thing with them. Your data structure treats
>> them the same way (though I'm not sure how you distinguish between a tet and
>> a quad if both are "height 0" and you don't store intermediate levels), but
>> that seems like an implementation detail to me.
> 
> Yes, I agree that they are different geometrically, and in PyLith we
> make Labels to
> distinguish, which I think is the right way.

I actually disagree: they may or may not be treated differently. It all depends on 
what you want to do. If all you are interested in is computing integrals, 
elements are elements are elements. There is no fundamental difference between 
computing integrals over surfaces or volumes. And the user already needs to 
keep track of the polynomial order, interpolation type, integration type etc on 
a per element or per block of elements basis, right? If you cache the values of 
your FE basis functions at integration points and integration weights, 
integrating over a surface or an element is _exactly_ the same thing.

> 
>>> 
>>> I am not against some convenience interface for dim/co-dim, however it
>>> would
>>> just be implemented by making a label, since we have many equivalent
>>> structural
>>> things which would map to it.
>> 
>> 
>> Fine, I care much more about conceptual simplicity in the interface. I think
>> adoption will be higher if 95% of users don't see the word "stratum".
> 
> Note that this will have to be set explicitly somehow, since you cannot "tell"
> from the DAG that something has a given dimension/co-dim.

You also cannot say anything about the element type from the DAG. Is 
this a problem? Of course not. My point is that if a mesh has cells with 
different codimensions, this information is in the description from 
which the DM was created, and was probably imported somewhere in an IS, right?

Blaise 


-- 
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin










[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 7:14 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 5:31 PM, Matthew Knepley  wrote:
>>
>> I was being too pessimistic. You can assign an intermediate node to any
>> set
>> of shared vertices, but I am not sure whether you start from the top or
>> bottom,
>> and the hard part is preallocation.
>
>
> Hmm, I'm not sure how to do hanging nodes, but for a conforming mesh
> starting with cell-to-vertex c2v[...]
>
> # build inverse map
> v2c = defaultdict(lambda v:[])
> for c in cells:
>   for v in c2v[c]:
> v2c[v].append(c)
>
> # find common subsets
> edges = set()
> c2e = defaultdict(lambda c:set())
> for v in vertices:
>   for (v1,c1,v2,c2) in sorted_vertices_appearing_in_exactly_2_cells(v2c[v]);
> edges.add((v1,v2))
> c2e[c1].add((v1,v2))
> c2e[c2].add((v1,v2))
>
> # repeat using edges to bound cell-to-face
> ...
>
>
>
> The problem with this is that you don't automatically get consistent
> orientation. If you need to work with basis functions, you need to orient,
> in which case the ordering produced by the procedure above needs to be
> fixed.
>
> In my opinion, that orientation is critical, such that connectivity without
> topology is of limited value. The topological dimension and number of
> vertices is just not sufficient to identify the topology of an element. For
> example, a hex can be collapsed to 7 vertices via an edge or across the
> middle of a face.

Definitely, that is much harder than the topology, and where I need specific
stuff for non-simplices, but it might be possible to use the initial orientation
of vertices.

Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 6:11 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 5:03 PM, Matthew Knepley  wrote:
>>
>> Interpolation. I do not have a good, graph theoretic way to phrase it.
>
>
> So if you know the topology of the n-cell, it implies a structure to its
> cone-closure. (You could store that as a cache of single-cell-rooted
> complex.) To interpolate the mesh, you graft in the single-cell, matching
> the intermediate levels (fast query with a vertex index). I've written some
> code for this, though not in full generality (I didn't explicitly store the
> reference single-cells).

I was being too pessimistic. You can assign an intermediate node to any set
of shared vertices, but I am not sure whether you start from the top or bottom,
and the hard part is preallocation.

  Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 5:31 PM, Matthew Knepley  wrote:

> I was being too pessimistic. You can assign an intermediate node to any set
> of shared vertices, but I am not sure whether you start from the top or
> bottom,
> and the hard part is preallocation.
>

Hmm, I'm not sure how to do hanging nodes, but for a conforming mesh
starting with cell-to-vertex c2v[...]

from collections import defaultdict

# build inverse map
v2c = defaultdict(list)
for c in cells:
  for v in c2v[c]:
    v2c[v].append(c)

# find common subsets (the sorted_vertices_... helper is a placeholder)
edges = set()
c2e = defaultdict(set)
for v in vertices:
  for (v1, c1, v2, c2) in sorted_vertices_appearing_in_exactly_2_cells(v2c[v]):
    edges.add((v1, v2))
    c2e[c1].add((v1, v2))
    c2e[c2].add((v1, v2))

# repeat using edges to bound cell-to-face
...



The problem with this is that you don't automatically get consistent
orientation. If you need to work with basis functions, you need to orient,
in which case the ordering produced by the procedure above needs to be
fixed.

In my opinion, that orientation is critical, such that connectivity without
topology is of limited value. The topological dimension and number of
vertices is just not sufficient to identify the topology of an element. For
example, a hex can be collapsed to 7 vertices via an edge or across the
middle of a face.



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 5:55 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 4:51 PM, Matthew Knepley  wrote:
>>
>>
>> There is one operation that I do not yet know how to do outside of this.
>> This may
>> be ignorance.
>
>
> What operation?

Interpolation. I do not have a good, graph theoretic way to phrase it.

  Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin

Ah, okay. To confirm, you have a DM that you are solving for, and in its user 
context, you have several other DMs, each with a Vec, describing the "problem 
data" like coefficients, forcing terms, and internal discontinuities?
Yes, but because of the structure of my problem, the application context also 
contains a link to the original DM.
Think solving F(u,v) = 0 where u and v don't have the same layout, or the same 
dimensionality, without field split.
My iterate is
   u_{n+1} = argmin_w F(w, v_n)
   v_{n+1} = argmin_w F(u_{n+1}, w)
The DM associated with u is DMu and the one with v is DMv
For the application context, I can either create an AppCtx with components 
(pointer to u, pointer to v, pointer to DMu, pointer to DMv), or 
AppCtxu with components (pointer to v, pointer to DMv) and AppCtxv with 
components (pointer to u, pointer to DMu).
In this setting, is calling SNESSolve with Vec u and Application Context AppCtx 
legal, or do I have to use AppCtxu / AppCtxv?


That is completely fine, and not "aliasing", but it does not play well with 
geometric multigrid because coarse grids reference the same application 
context. We have a system of hooks for managing such resolution-dependent data, 
though only with a C interface so far. (We needed this to get geometric 
multigrid and FAS to work with TS. Most non-toy applications need it too.)

Can you point me to an example? Are the interfaces C-only because nobody has ever 
needed Fortran versions, or is there a technical reason?


I'm not sure if there is a way to make this easier. We have been using 
PetscObjectCompose() to attach things to the DM on different levels. We could 
have a slightly friendlier "user" interface for that.
I debated AppCtx vs. ObjectCompose for quite a bit. As far as I understand, 
PetscObjectCompose/Query uses object names to reference objects, so I would have 
ended up passing the names of the coefficient DMs and Vecs in the appctx. I opted 
to put pointers to them in the appctx instead of names. It looked a bit simpler 
at the time. Now I have two good reasons I should have gone the PetscObject 
way. Oh well...
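
For reference, a minimal sketch of the compose/query alternative; the key
string "coeffDM" is purely illustrative:

  /* attach once, near setup; composition holds a reference on the object */
  ierr = PetscObjectCompose((PetscObject) dm, "coeffDM", (PetscObject) dmCoeff);CHKERRQ(ierr);

  /* retrieve later, anywhere the DM is available */
  DM dmC;
  ierr = PetscObjectQuery((PetscObject) dm, "coeffDM", (PetscObject *) &dmC);CHKERRQ(ierr);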


So keeping those things in the app context is just fine, but if you want to use 
geometric multigrid, you'll have to take them out of the app context and put 
them in a different structure attached to the DM that is not transparently 
propagated under coarsening and refinement. If you think you might do this 
eventually, I recommend commenting/organizing your app context so that 
resolution-dependent stuff is easily identifiable.

I had not thought about it, but it is quite feasible. I store all coefficients 
that are constant per block of cells or Dof in PetscBags, and everything that 
has variation at the scale of the finite element space in Vecs.  How would the 
creation of the coarse DMs be handled, though? The geometry part is trivial to 
propagate using DMClone, but you may need to user feedback for the data layout 
part, unless you have a scheme that describes it (i.e. for this IS of cells, n 
dof at the vertices, p at the faces etc)


I don't mind that, but can't you have an index set describing the codim 0 
elements (maybe all of them) and another index set for the codim 1 elements on 
the features you care about? You can take their union (or concatenate) for your 
assembly loop if you like. Is there something wrong with this approach?

That's a very good point. In the end it doesn't really matter. As far as I 
remember, the main reason I ended with my current scheme is that DMMesh did not 
play well with partially interpolated meshes. I don't know what the current 
status of DMComplex is.

Okay, I think it's important to eventually support partially interpolated 
meshes to avoid using a lot of memory when used with low-order discretizations. 
I see no reason why there can't also be a direct cache for closure. For a P1 
basis, that amounts to a point range

[cells, boundary faces, vertices]

closure: [cells -> vertices, faces -> vertices]

So cell -> face need not be stored anywhere. Presumably there is a reason why 
Matt didn't write it this way. Is it just uniformity of data structure?

Do we mean the same by partially interpolated mesh? What I mean is a mesh where 
the only faces that are explicitly stored are the ones we are interested in 
(typically the boundary mesh). For P1 elements, we need only to know of [cells, 
some faces, vertices]. I tried to manually build partially interpolated sieves 
with only some of the faces, and distribution would fail. That's how I ended up 
with a mix of cells of co-dimension 0 and 1. If one wants access to ALL faces / 
edges or none of them, there is no problem in the current implementation.
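
For full interpolation the call is a one-liner; a sketch, using the later
DMPlex spelling (DMComplexInterpolate at the time):

  DM idm;
  ierr = DMPlexInterpolate(dm, &idm);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  dm   = idm;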

Blaise


--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin









[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 5:01 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 3:49 PM, Matthew Knepley  wrote:
>>
>> I am not sure I understand. Labels just define point sets. If you want
>> to segregate point
>> sets by dimension, that is fine. Nothing in the interface is sensitive
>> to this, nor should it be.
>
>
> I can think of only one way in which numbering points by stratum versus
> dimension makes a tangible difference: whether contiguous ranges can contain
> unsorted mixed-dimensional points. I want to understand whether this is
> actually important and whether there is a different use case that is
> important. If not, then we could entirely discard the concept of stratum,
> sort points by co/dimension, and query based on labels.
>
> Where is stratum as a concept irreplaceable?

I cannot say off the top of my head. It is very convenient to have the
breadth-first
levels of the graph done this way, and it is also nice for loops, but
I know you are
arguing that people more often write loops over dimension.

>>
>> >>
>> >> You don't need coordinates for interpolation?
>> >
>> >
>> > Are we talking about the same interpolation? If I have cell-to-vertex
>> > connectivity, I can create the faces without coordinates, yes.
>>
>> Ah, we are not. Mesh interpolation.
>>
>> The way I have done interpolation, you have to know the mesh dimension,
>> DMComplexGetDimension(), and then it assumes that height 0 stuff is cells
>> of that dimension.
>
>
> Then we're basically identifying stratum with dimension, suggesting that we
> should be able to remove stratum from the API in favor of co/dimension.

There is one operation that I do not yet know how to do outside of this. This may
be ignorance.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 5:03 PM, Matthew Knepley  wrote:

> Interpolation. I do not have a good, graph theoretic way to phrase it.


So if you know the topology of the n-cell, it implies a structure to its
cone-closure. (You could store that as a cache of single-cell-rooted
complex.) To interpolate the mesh, you graft in the single-cell, matching
the intermediate levels (fast query with a vertex index). I've written some
code for this, though not in full generality (I didn't explicitly store the
reference single-cells).



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 4:51 PM, Matthew Knepley  wrote:

>
> There is one operation that I do not yet know how to do outside of this.
> This may
> be ignorance.
>

What operation?



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 4:30 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 3:25 PM, Matthew Knepley  wrote:
>>
>> Everything is just a point, so I get back mixed dimension things all the
>> time. I
>> usually separate by stratum.
>
>
> Okay, but what is the compelling use case for DMComplexGetStratumIS()
> returning mixed-dimension that cannot be reasonably accomplished by getting
> dimensional ISs and union/concatenating them?

I am not sure I understand. Labels just define point sets. If you want
to segregate point
sets by dimension, that is fine. Nothing in the interface is sensitive
to this, nor should it be.

>>
>> You don't need coordinates for interpolation?
>
>
> Are we talking about the same interpolation? If I have cell-to-vertex
> connectivity, I can create the faces without coordinates, yes.

Ah, we are not. Mesh interpolation.

The way I have done interpolation, you have to know the mesh dimension,
DMComplexGetDimension(), and then it assumes that height 0 stuff is cells
of that dimension.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin
Sorry for the empty message earlier. Fat fingers on a small keyboard...


On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
A DM does double duty by describing the geometry of the mesh, and the data 
layout associated with the finite element space. I liked the model where the 
mesh geometry and the data layout on the mesh was split in two objects, but I 
understand the convenience of having everything in the DM, and DMClone works 
just fine. Since I may have to handle scalar, vector, 2nd and 4th order tensor 
on 2 different finite element spaces, in an assembly loop, I may end up dealing 
with as many as 8 DM. I stick all these DM's in the user context of each DM 
associated with a unknown (a Vec on which I may have to call SNESSolve or 
TSSolve), hoping that this is not creating some aliasing problem which as  a 
fortran programmer I can not possibly understand.

;-)

Actually, I am really not sure if passing a dm and a pointer to a user context 
containing this dm is legit or not...

Everything is an IS. I suspect that this means that the Set/GetCone,

I think these just return a range or (offset,size).

Set/GetClosure operations

Closure needs to do orientation, so it's not direct access from the Vec. 
Regardless, I don't think the plan would be to create an IS for that 
granularity.
Fine.  It should be done from the DM anyways.



would return an IS that could be used in VecSetValuesIS, but may not even be 
needed if an equivalent of DMMesh's SectionRealRestrict / RestrictClosure 
/ Update / UpdateClosure is implemented

I think these are DMComplexVecGet/SetClosure.

Great




Setting Values is good, but getting values is needed too!
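
A sketch of the get/restore pairing I have in mind, using the later DMPlex
spelling (DMComplexVecGetClosure at the time):

  PetscScalar *vals = NULL;
  PetscInt     csize;
  ierr = DMPlexVecGetClosure(dm, section, v, cell, &csize, &vals);CHKERRQ(ierr);
  /* read the csize values covering this cell's closure */
  ierr = DMPlexVecRestoreClosure(dm, section, v, cell, &csize, &vals);CHKERRQ(ierr);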

Also, in addition to Barry's simple pseudo-code, I need to be able to get 
an IS for the i^th dof of a field at a given point (think of applying a 
Dirichlet boundary condition to only one component of a field, for instance). 
It's always been the messy part. How would that fit within the proposed model?

I think you would get an index set for those points on the feature you wanted 
to do something special for.

That's pretty much what I do now. With DMMesh, it required quite a bit of work. 
It looks much easier with DMComplex

Finally, just a comment on stratum vs topological dimension. There is no reason 
why all elements in a mesh should have the same topological dimension. I 
actually find it much easier to have a single mesh where some sets of elements 
have codimension 0 and others codimension 1, and to dispatch my assembly 
depending on the physics associated with each block and the codimension of the 
element, rather than having to deal with side sets / boundary mesh / whatever 
you want to call it.

I don't mind that, but can't you have an index set describing the codim 0 
elements (maybe all of them) and another index set for the codim 1 elements on 
the features you care about? You can take their union (or concatenate) for your 
assembly loop if you like. Is there something wrong with this approach?

That's a very good point. In the end it doesn't really matter. As far as I 
remember, the main reason I ended with my current scheme is that DMMesh did not 
play well with partially interpolated meshes. I don't know what the current 
status of DMComplex is.

Blaise



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 4:13 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 3:02 PM, Matthew Knepley  wrote:
>>
>>
>> > How does one get an IS containing _only_ the labeled points?
>>
>> DMComplexGetStratumIS()
>
> Great. Under what circumstances do you call this routine and get back an IS
> with mixed dimension?

Everything is just a point, so I get back mixed dimension things all the time. I
usually separate by stratum.

>>
>>
>> >>
>> >> > Can you answer how you can distinguish a quad from a tet in a
>> >> > non-interpolated mesh?
>> >>
>> >> You can't, but that is the point. You are not supposed to distinguish
>> >> them.
>> >
>> >
>> > Hmm, so these are the same at the category-theory level, but definitely
>> > not
>> > at the mesh topology level. It doesn't seem right to not distinguish
>> > them at
>> > all. How do you write DMComplexInterpolate for mixed-dim strata?
>>
>> Interpolation is analysis. You need to know what dimension you are in,
>> etc.
>
>
> But it's still topological rather than geometric. You don't need
> coordinates, for example.

You don't need coordinates for interpolation?

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin


Sent from a handheld device.

On Nov 10, 2012, at 12:38 AM, "Jed Brown" mailto:jedbrown at mcs.anl.gov>> wrote:

On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin mailto:bourdin at lsu.edu>> wrote:
A DM does double duty by describing the geometry of the mesh, and the data 
layout associated with the finite element space. I liked the model where the 
mesh geometry and the data layout on the mesh was split in two objects, but I 
understand the convenience of having everything in the DM, and DMClone works 
just fine. Since I may have to handle scalar, vector, 2nd and 4th order tensor 
on 2 different finite element spaces, in an assembly loop, I may end up dealing 
with as many as 8 DM. I stick all these DM's in the user context of each DM 
associated with a unknown (a Vec on which I may have to call SNESSolve or 
TSSolve), hoping that this is not creating some aliasing problem which as  a 
fortran programmer I can not possibly understand.

;-)

Everything is an IS. I suspect that this means that the Set/GetCone,

I think these just return a range or (offset,size).

Set/GetClosure operations

Closure needs to do orientation, so it's not direct access from the Vec. 
Regardless, I don't think the plan would be to create an IS for that 
granularity.

would return an IS that could be used in VecSetValuesIS, but may not even be 
needed if an equivalent of DMMesh's SectionRealRestrict / RestrictClosure 
/ Update / UpdateClosure is implemented

I think these are DMComplexVecGet/SetClosure.


Setting Values is good, but getting values is needed too!

Also, in addition to Barry's simple pseudo-code, I need to be able to get 
an IS for the i^th dof of a field at a given point (think of applying a 
Dirichlet boundary condition to only one component of a field, for instance). 
It's always been the messy part. How would that fit within the proposed model?

I think you would get an index set for those points on the feature you wanted 
to do something special for.

Finally, just a comment on stratum vs topological dimension. There is no reason 
why all elements in a mesh should have the same topological dimension. I 
actually find it much easier to have a single mesh where some sets of elements 
have codimension 0 and others codimension 1, and to dispatch my assembly 
depending on the physics associated with each block and the codimension of the 
element, rather than having to deal with side sets / boundary mesh / whatever 
you want to call it.

I don't mind that, but can't you have an index set describing the codim 0 
elements (maybe all of them) and another index set for the codim 1 elements on 
the features you care about? You can take their union (or concatenate) for your 
assembly loop if you like. Is there something wrong with this approach?



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 4:09 PM, Karl Rupp  wrote:
> On 11/10/2012 02:29 PM, Jed Brown wrote:
>>
>> On Sat, Nov 10, 2012 at 2:09 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
>>
>> Nope... and please, don't break that...
>>
>> Here is an example of a mesh with 4 element blocks. In an analysis
>> code, for each block , I would specify the element type, type of
>> physics, coefficients etc in a petscBag. That said, I may be doing
>> it wrong...
>>
>> block 10 is made of tri
>> block 22 is made of quad
>> blocks 100 and 102 are made of bars
>>
>>
>> I'm not suggesting any demotion of this ability. I'm trying to
>> understand what is the advantage of having a stratum with unsorted mixed
>> dimension. If the blocks were sorted by dimension, you would still
>> access them using a label (which gives you an index set, perhaps
>> contiguous). You would still be able to do all the normal operations
>> with those points. If you want to have one loop over all elements of the
>> various types, you would union or concatenate the index sets of each
>> type. Am I missing something?
>
>
> May I add something that is not specific to the current implementation in
> PETSc, yet still backs up the 'arrays of elements of the same type'
> approach:
>  a) When thinking of accelerators, having, say, all tets stored/referenced
> contiguously in one array and all hexahedra stored contiguously in another
> (separate) array, makes it a lot easier to move all tets to a kernel working
> on tets, and to feed all hexes to a second kernel working on hexes.
>
>  b) In a strongly typed setting (my experience here is in C++), one can do
> pretty nice things by keeping the two arrays separate rather than trying to
> unify them in some base class. For simplicity, suppose you have a mesh and
> two containers holding the tets and hexes. A FEM assembly could then look
> like this:

Yes, I agree completely with batching of this kind, which is why my integration
functions look like they do. However, the topology information is not
really needed
in these kernels. Instead you are packing discretization information, like
FEM coefficients, or geometric information like Jacobians, which are
all controlled
by PetscSection, not by the DMComplex.

   Matt

>   for_each(mesh_cells.begin(), mesh_cells.end(), assembly_functor());
>
> where assembly_functor statically dispatches into the cell type:
>   struct assembly_functor{
> void operator()(tetrahedron const & t) { ... }
> void operator()(hexahedron const & t) { ... }
>   }
> and for_each is implemented such that it internally expands to a for_each
> over each of the two containers for tets and hexes. Thus, instead of
> dispatching for each element over and over again, the dispatch is done once
> for the whole array.
>
> Just my 2 cents,
> Karli
>
>



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 3:35 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 2:28 PM, Matthew Knepley  wrote:
>>
>> > point that is a member. What about if a given level happens to be empty?
>>
>> DMComplexGetLabelIdIS(). It's not a problem if a level is empty.
>
>
> Ah, I misinterpreted what that routine did. Aren't these trivially short
> arrays (max length of 4 for meshes in 3D) that are expected to be identical
> on every process?

Not identical on every process.

> How does one get an IS containing _only_ the labeled points?

DMComplexGetStratumIS()

>>
>> > Can you answer how you can distinguish a quad from a tet in a
>> > non-interpolated mesh?
>>
>> You can't, but that is the point. You are not supposed to distinguish
>> them.
>
>
> Hmm, so these are the same at the category-theory level, but definitely not
> at the mesh topology level. It doesn't seem right to not distinguish them at
> all. How do you write DMComplexInterpolate for mixed-dim strata?

Interpolation is analysis. You need to know what dimension you are in, etc.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 3:49 PM, Matthew Knepley  wrote:

> I am not sure I understand. Labels just define point sets. If you want
> to segregate point
> sets by dimension, that is fine. Nothing in the interface is sensitive
> to this, nor should it be.
>

I can think of only one way in which numbering points by stratum versus
dimension makes a tangible difference: whether contiguous ranges can
contain unsorted mixed-dimensional points. I want to understand whether
this is actually important and whether there is a different use case that
is important. If not, then we could entirely discard the concept of
stratum, sort points by co/dimension, and query based on labels.

Where is stratum as a concept irreplaceable?


> >>
> >> You don't need coordinates for interpolation?
> >
> >
> > Are we talking about the same interpolation? If I have cell-to-vertex
> > connectivity, I can create the faces without coordinates, yes.
>
> Ah, we are not. Mesh interpolation.
>
> The way I have done interpolation, you have to know the mesh dimension,
> DMComplexGetDimension(), and then it assumes that height 0 stuff is cells
> of that dimension.
>

Then we're basically identifying stratum with dimension, suggesting that we
should be able to remove stratum from the API in favor of co/dimension.



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 3:25 PM, Matthew Knepley  wrote:

> Everything is just a point, so I get back mixed dimension things all the
> time. I
> usually separate by stratum.
>

Okay, but what is the compelling use case for DMComplexGetStratumIS()
returning mixed-dimension that cannot be reasonably accomplished by getting
dimensional ISs and union/concatenating them?


> You don't need coordinates for interpolation?
>

Are we talking about the same interpolation? If I have cell-to-vertex
connectivity, I can create the faces without coordinates, yes.



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 3:19 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 2:05 PM, Matthew Knepley  wrote:
>>
>> Well, you have to designate the set with something. The easiest thing to
>> use is
>> an integer, so I use that. This is the only value associated with the set,
>> so I
>> would say Labels really only make sets.
>
>
> I was looking at DMComplexGetLabelValue() and didn't realize it's just one
> value per stratum. So you get the value associated with a point by searching
> to find which stratum it resides in. In the implementation, if point is not

Yes.

> in the set specified by the label, it does not set *value. Is that

It gives back -1 (complex.c:2103)

> intentional? And there is no way to query stratum values without getting a
> point that is a member. What about if a given level happens to be empty?

DMComplexGetLabelIdIS(). It's not a problem if a level is empty.


> Also, linear search, oh my. ;-)

Was waiting to fix. Fixed.

>>
>>
>> Exactly, they are sorted by strata, not dimension.
>
>
> Okay, then dimensional query would really need to return an IS, provided it
> didn't switch to sorting by dimension.

Yes. Dimensions are just labels, so they can reuse the Label interface.
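
A sketch of what that reuse looks like, using the later DMPlex spelling of the
call (DMComplexGetStratumIS at the time); "depth" 0 is the vertex stratum:

  IS pointIS;
  ierr = DMPlexGetStratumIS(dm, "depth", 0, &pointIS);CHKERRQ(ierr);
  /* ... iterate over the vertices in pointIS ... */
  ierr = ISDestroy(&pointIS);CHKERRQ(ierr);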

> Can you answer how you can distinguish a quad from a tet in a
> non-interpolated mesh?

You can't, but that is the point. You are not supposed to distinguish them.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 3:12 PM, Matthew Knepley  wrote:

> Yes, I agree completely with batching of this kind, which is why my
> integration
> functions look like they do. However, the topology information is not
> really needed
> in these kernels. Instead you are packing discretization information, like
> FEM coefficients, or geometric information like Jacobians, which are
> all controlled
> by PetscSection, not by the DMComplex.
>

At the time you call an element residual kernel, you definitely need that
information. Now the question is how much should bubble up to higher levels.

Because I think topological dimension is an important attribute (I think
the library should be able to distinguish tets from quads), I was only
proposing using topological dimension more explicitly. Karl suggests one
step further in delineating the specific topology. My concern with that is
keeping a contiguous index space. If there is sorting by cell topology, you
no longer have to store size explicitly for each point in the coneSection.



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 3:02 PM, Matthew Knepley  wrote:

>
> > How does one get an IS containing _only_ the labeled points?
>
> DMComplexGetStratumIS()
>

Great. Under what circumstances do you call this routine and get back an IS
with mixed dimension?


>
> >>
> >> > Can you answer how you can distinguish a quad from a tet in a
> >> > non-interpolated mesh?
> >>
> >> You can't, but that is the point. You are not supposed to distinguish
> >> them.
> >
> >
> > Hmm, so these are the same at the category-theory level, but definitely
> not
> > at the mesh topology level. It doesn't seem right to not distinguish
> them at
> > all. How do you write DMComplexInterpolate for mixed-dim strata?
>
> Interpolation is analysis. You need to know what dimension you are in, etc.
>

But it's still topological rather than geometric. You don't need
coordinates, for example.



[petsc-dev] PetscSection

2012-11-10 Thread Karl Rupp
On 11/10/2012 02:29 PM, Jed Brown wrote:
> On Sat, Nov 10, 2012 at 2:09 PM, Blaise A Bourdin <bourdin at lsu.edu> wrote:
>
> Nope... and please, don't break that...
>
> Here is an example of a mesh with 4 element blocks. In an analysis
> code, for each block , I would specify the element type, type of
> physics, coefficients etc in a petscBag. That said, I may be doing
> it wrong...
>
> block 10 is made of tri
> block 22 is made of quad
> blocks 100 and 102 are made of bars
>
>
> I'm not suggesting any demotion of this ability. I'm trying to
> understand what is the advantage of having a stratum with unsorted mixed
> dimension. If the blocks were sorted by dimension, you would still
> access them using a label (which gives you an index set, perhaps
> contiguous). You would still be able to do all the normal operations
> with those points. If you want to have one loop over all elements of the
> various types, you would union or concatenate the index sets of each
> type. Am I missing something?

May I add something that is not specific to the current implementation 
in PETSc, yet still backs up the 'arrays of elements of the same type' 
approach:
  a) When thinking of accelerators, having, say, all tets 
stored/referenced contiguously in one array and all hexahedra stored 
contiguously in another (separate) array, makes it a lot easier to move 
all tets to a kernel working on tets, and to feed all hexes to a second 
kernel working on hexes.

  b) In a strongly typed setting (my experience here is in C++), one can 
do pretty nice things by keeping the two arrays separate rather than 
trying to unify them in some base class. For simplicity, suppose you 
have a mesh and two containers holding the tets and hexes. A FEM 
assembly could then look like this:

   for_each(mesh_cells.begin(), mesh_cells.end(), assembly_functor());

where assembly_functor statically dispatches on the cell type:
   struct assembly_functor {
     void operator()(tetrahedron const & t) { ... }
     void operator()(hexahedron const & t) { ... }
   };
and for_each is implemented such that it internally expands to a 
for_each over each of the two containers for tets and hexes. Thus, 
instead of dispatching for each element over and over again, the 
dispatch is done once for the whole array.
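
The same idea in plain C is one batched loop per element type, so the
"dispatch" happens once per array instead of once per element; a sketch with
illustrative cell records and kernel names:

   typedef struct { PetscInt v[4]; } Tet;
   typedef struct { PetscInt v[8]; } Hex;

   static void AssembleTets(const Tet *t, PetscInt n)  { /* tet kernel over all n */ }
   static void AssembleHexes(const Hex *h, PetscInt n) { /* hex kernel over all n */ }

   static void AssembleAll(const Tet *tets, PetscInt nTet,
                           const Hex *hexes, PetscInt nHex)
   {
     AssembleTets(tets, nTet);    /* one call for the whole tet array */
     AssembleHexes(hexes, nHex);  /* one call for the whole hex array */
   }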

Just my 2 cents,
Karli




[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 2:54 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 1:44 PM, Matthew Knepley  wrote:
>>
>> Yes, I agree that they are different geometrically, and in PyLith we
>> make Labels to
>> distinguish, which I think is the right way.
>
>
> Cool. Of course you have to create the labels somehow. Do you currently make
> labels for all the "sidesets" in the mesh you load (e.g., from exodus)?

Yep.

> Labels in their current form seem to do two things (denote sets and
> associate values with points in those sets). Don't you frequently want one
> or the other?

Well, you have to designate the set with something. The easiest thing to use is
an integer, so I use that. This is the only value associated with the set, so I
would say Labels really only make sets.

>>
>> > Fine, I care much more about conceptual simplicity in the interface. I
>> > think
>> > adoption will be higher if 95% of users don't see the word "stratum".
>>
>> Note that this will have to be set explicitly somehow, since you cannot
>> "tell"
>> from the DAG that something has a given dimension/co-dim.
>
>
> Are your points not sorted by dimension? (Most of the code I've seen in the
> examples assumes homogeneous strata.)

Exactly, they are sorted by strata, not dimension.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 2:33 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 1:26 PM, Matthew Knepley  wrote:
>>
>> I do not understand. Codim 0 and 1 are handled in an identical way.
>
>
> The user doesn't do the same thing with them. Your data structure treats
> them the same way (though I'm not sure how you distinguish between a tet and
> a quad if both are "height 0" and you don't store intermediate levels), but
> that seems like an implementation detail to me.

Yes, I agree that they are different geometrically, and in PyLith we
make Labels to
distinguish, which I think is the right way.

>>
>> I am not against some convenience interface for dim/co-dim, however it
>> would
>> just be implemented by making a label, since we have many equivalent
>> structural
>> things which would map to it.
>
>
> Fine, I care much more about conceptual simplicity in the interface. I think
> adoption will be higher if 95% of users don't see the word "stratum".

Note that this will have to be set explicitly somehow, since you cannot "tell"
from the DAG that something has a given dimension/co-dim.

  Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 2:28 PM, Matthew Knepley  wrote:

> > point that is a member. What about if a given level happens to be empty?
>
> DMComplexGetLabelIdIS(). It's not a problem if a level is empty.
>

Ah, I misinterpreted what that routine did. Aren't these trivially short
arrays (max length of 4 for meshes in 3D) that are expected to be identical
on every process?

How does one get an IS containing _only_ the labeled points?


> > Can you answer how you can distinguish a quad from a tet in a
> > non-interpolated mesh?
>
> You can't, but that is the point. You are not supposed to distinguish them.


Hmm, so these are the same at the category-theory level, but definitely not
at the mesh topology level. It doesn't seem right to not distinguish them
at all. How do you write DMComplexInterpolate for mixed-dim strata?



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 2:09 PM, Blaise A Bourdin  wrote:

> Nope... and please, don't break that...
>
>  Here is an example of a mesh with 4 element blocks. In an analysis code,
> for each block , I would specify the element type, type of physics,
> coefficients etc in a petscBag. That said, I may be doing it wrong...
>
>  block 10 is made of tri
> block 22 is made of quad
> blocks 100 and 102 are made of bars
>

I'm not suggesting any demotion of this ability. I'm trying to understand
what is the advantage of having a stratum with unsorted mixed dimension. If
the blocks were sorted by dimension, you would still access them using a
label (which gives you an index set, perhaps contiguous). You would still
be able to do all the normal operations with those points. If you want to
have one loop over all elements of the various types, you would union or
concatenate the index sets of each type. Am I missing something?



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 2:16 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 12:45 PM, Matthew Knepley  
> wrote:
>>
>> It just depends on what you want your assumptions to be. I am all for
>> experimenting. The
>> foregoing strategy is good because closure continues to work as
>> advertised, and all our
>> integration code is still dim-independent, but the assumption that
>> height 0 things are all
>> cells breaks down. This is not so bad, since we usually want to group
>> cells by material
>> anyway, using a Label, so I am fine with this.
>
>
> Okay, my complaint is that "stratum" is an additional concept that has no
> established use in the current lexicon and the generality of which doesn't
> really help the user because they have to do different things for codim 0
> and codim 1 anyway. If we are not causing extreme hardship or irreparably
> crippling the flexibility of their code, I think the interface should give
> access to by codim instead of by stratum.

I do not understand. Codim 0 and 1 are handled in an identical way.

I am not against some convenience interface for dim/co-dim, however it would
just be implemented by making a label, since we have many equivalent structural
things which would map to it.

   Matt

> Now in the P1 case, the faces do not store any data, therefore
> closure(some_cell) returns the same thing going directly to vertices as if
> the intermediate dimensions were populated. I think that by definition, this
> sort of mesh does not support getting faces of a cell, therefore it's
> correct to not store the cell -> boundary-face relation. The user can ask to
> interpolate the mesh if they want.
>
> I'm not sure about the multi-domain case where the user wants a high-order
> discretization in one domain and a P1 with non-interpolated mesh in another.
> The benefit of non-interpolated doesn't seem so clear in this case.



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 2:05 PM, Matthew Knepley  wrote:

> Well, you have to designate the set with something. The easiest thing to
> use is
> an integer, so I use that. This is the only value associated with the set,
> so I
> would say Labels really only make sets.
>

I was looking at DMComplexGetLabelValue() and didn't realize it's just one
value per stratum. So you get the value associated with a point by
searching to find which stratum it resides in. In the implementation, if
point is not in the set specified by the label, it does not set *value. Is
that intentional? And there is no way to query stratum values without
getting a point that is a member. What about if a given level happens to be
empty?

Also, linear search, oh my. ;-)


>
> Exactly, they are sorted by strata, not dimension.
>

Okay, then dimensional query would really need to return an IS, provided it
didn't switch to sorting by dimension.


Can you answer how you can distinguish a quad from a tet in a
non-interpolated mesh?



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 1:44 PM, Matthew Knepley  wrote:

> Yes, I agree that they are different geometrically, and in PyLith we
> make Labels to
> distinguish, which I think is the right way.
>

Cool. Of course you have to create the labels somehow. Do you currently
make labels for all the "sidesets" in the mesh you load (e.g., from exodus)?

Labels in their current form seem to do two things (denote sets and
associate values with points in those sets). Don't you frequently want one
or the other?


> > Fine, I care much more about conceptual simplicity in the interface. I
> think
> > adoption will be higher if 95% of users don't see the word "stratum".
>
> Note that this will have to be set explicitly somehow, since you cannot
> "tell"
> from the DAG that something has a given dimension/co-dim.
>

Are your points not sorted by dimension? (Most of the code I've seen in the
examples assumes homogeneous strata.)



[petsc-dev] PetscSection

2012-11-10 Thread Matthew Knepley
On Sat, Nov 10, 2012 at 12:04 PM, Jed Brown  wrote:
> On Sat, Nov 10, 2012 at 10:40 AM, Blaise A Bourdin  wrote:
>>
>> On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin  wrote:
>>>
>>> A DM does double duty by describing the geometry of the mesh, and the
>>> data layout associated with the finite element space. I liked the model
>>> where the mesh geometry and the data layout on the mesh was split in two
>>> objects, but I understand the convenience of having everything in the DM,
>>> and DMClone works just fine. Since I may have to handle scalar, vector, 2nd
>>> and 4th order tensor on 2 different finite element spaces, in an assembly
>>> loop, I may end up dealing with as many as 8 DM. I stick all these DM's in
>>> the user context of each DM associated with a unknown (a Vec on which I may
>>> have to call SNESSolve or TSSolve), hoping that this is not creating some
>>> aliasing problem which as  a fortran programmer I can not possibly
>>> understand.
>>
>>
>> ;-)
>>
>>
>> Actually, I am really not sure if passing a dm and a pointer to a user
>> context containing this dm is legit or not...
>
>
> Ah, okay. To confirm, you have a DM that you are solving for, and in its
> user context, you have several other DMs, each with a Vec, describing the
> "problem data" like coefficients, forcing terms, and internal
> discontinuities? That is completely fine, and not "aliasing", but it does
> not play well with geometric multigrid because coarse grids reference the
> same application context. We have a system of hooks for managing such
> resolution-dependent data, though only with a C interface so far. (We needed
> this to get geometric multigrid and FAS to work with TS. Most non-toy
> applications need it too.)
>
> I'm not sure if there is a way to make this easier. We have been using
> PetscObjectCompose() to attach things to the DM on different levels. We
> could have a slightly friendlier "user" interface for that.
>
> So keeping those things in the app context is just fine, but if you want to
> use geometric multigrid, you'll have to take them out of the app context and
> put them in a different structure attached to the DM that is not
> transparently propagated under coarsening and refinement. If you think you
> might do this eventually, I recommend commenting/organizing your app context
> so that resolution-dependent stuff is easily identifiable.
>
>>
>> I don't mind that, but can't you have an index set describing the codim 0
>> elements (maybe all of them) and another index set for the codim 1 elements
>> on the features you care about? You can take their union (or concatenate)
>> for your assembly loop if you like. Is there something wrong with this
>> approach?
>>
>>
>> That's a very good point. In the end it doesn't really matter. As far as I
>> remember, the main reason I ended with my current scheme is that DMMesh did
>> not play well with partially interpolated meshes. I don't know what the
>> current status of DMComplex is.
>
>
> Okay, I think it's important to eventually support partially interpolated
> meshes to avoid using a lot of memory when used with low-order
> discretizations. I see no reason why there can't also be a direct cache for
> closure. For a P1 basis, that amounts to a point range
>
> [cells, boundary faces, vertices]
>
> closure: [cells -> vertices, faces -> vertices]
>
> So cell -> face need not be stored anywhere. Presumably there is a reason
> why Matt didn't write it this way. Is it just uniformity of data structure?

It just depends on what you want your assumptions to be. I am all for
experimenting. The foregoing strategy is good because closure continues to
work as advertised, and all our integration code is still dim-independent,
but the assumption that height 0 things are all cells breaks down. This is
not so bad, since we usually want to group cells by material anyway, using
a Label, so I am fine with this.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 1:26 PM, Matthew Knepley  wrote:

> I do not understand. Codim 0 and 1 are handled in an identical way.
>

The user doesn't do the same thing with them. Your data structure treats
them the same way (though I'm not sure how you distinguish between a tet
and a quad if both are "height 0" and you don't store intermediate levels),
but that seems like an implementation detail to me.


> I am not against some convenience interface for dim/co-dim, however it
> would
> just be implemented by making a label, since we have many equivalent
> structural
> things which would map to it.
>

Fine, I care much more about conceptual simplicity in the interface. I
think adoption will be higher if 95% of users don't see the word "stratum".



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 12:45 PM, Matthew Knepley  wrote:

> It just depends on what you want your assumptions to be. I am all for
> experimenting. The
> foregoing strategy is good because closure continues to work as
> advertised, and all our
> integration code is still dim-independent, but the assumption that
> height 0 things are all
> cells breaks down. This is not so bad, since we usually want to group
> cells by material
> anyway, using a Label, so I am fine with this.
>

Okay, my complaint is that "stratum" is an additional concept that has no
established use in the current lexicon and the generality of which doesn't
really help the user because they have to do different things for codim 0
and codim 1 anyway. If we are not causing extreme hardship or irreparably
crippling the flexibility of their code, I think the interface should give
access by codim instead of by stratum.
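
A convenience layer need not be deep; a hypothetical wrapper, valid only
when every stratum is populated (i.e., an interpolated mesh, where height
coincides with codimension), could be as thin as:

PetscErrorCode DMComplexGetCodimStratum(DM dm, PetscInt codim, PetscInt *pStart, PetscInt *pEnd)
{
  /* hypothetical: height == codim only when no dimensions are skipped */
  return DMComplexGetHeightStratum(dm, codim, pStart, pEnd);
}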

Now in the P1 case, the faces do not store any data, therefore
closure(some_cell) returns the same thing going directly to vertices as if
the intermediate dimensions were populated. I think that by definition,
this sort of mesh does not support getting faces of a cell, therefore it's
correct to not store the cell -> boundary-face relation. The user can ask
to interpolate the mesh if they want.

I'm not sure about the multi-domain case where the user wants a high-order
discretization in one domain and a P1 with non-interpolated mesh in
another. The benefit of non-interpolated doesn't seem so clear in this case.



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 11:57 AM, Blaise A Bourdin  wrote:

> Yes, but because of the structure of my problem, the application context
> also contains a link to the original DM.
> Think solving F(u,v) = 0 where and v don't have the same layout, or the
> same dimensionality, without field split.
> My iterate is
> u_{n+1} = argmin_w F(w,v_n)
>v_{n+1} = argmin_w F(u_{n+1},w)
> The DM associated with u is DMu and the one with v is DMv
> For the application context, I can either a create AppCtx with components
> (pointer to u, pointer to v, pointed to DMu, pointer to DMu), or
> AppCtxu with components (pointer to v, pointer to DMv) and AppCtxv with
> components (pointer to u, pointer to DMu)
> In this setting, is calling SNESSolve with Vec u and Application Context
> AppCtx legal, or do I have to use AppCtxu / AppCtxv?
>

Either is okay (and neither is okay for FAS).


>  Can you point me to an example? Are interfaces C only because nobody has
> ever needed Fortran versions, or is there a technical reason?
>

SNES ex48 does this, also look at any use of DMCoarsenHookAdd(). There is
no fundamental issue, it just requires custom function pointer juggling
which is messy and really needs to be tested.


>
>
>  I had not thought about it, but it is quite feasible. I store all
> coefficients that are constant per block of cells or Dof in PetscBags, and
> everything that has variation at the scale of the finite element space in
> Vecs.  How would the creation of the coarse DMs be handled, though? The
> geometry part is trivial to propagate using DMClone, but you may need
> user feedback for the data layout part, unless you have a scheme that
> describes it (i.e. for this IS of cells, n dof at the vertices, p at the
> faces etc)
>

You get to implement these two callbacks to create and update data for the
coarse grid problem:

http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DM/DMCoarsenHookAdd.html
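
For reference, a minimal sketch of the two callbacks with the signatures
from that page (the hook names and the coefficient handling are
hypothetical):

static PetscErrorCode CoarsenCoefficients(DM fine, DM coarse, void *ctx)
{
  /* create the coarse coefficient Vec and attach it to 'coarse',
     e.g. with PetscObjectCompose(), so the level solver can find it */
  return 0;
}

static PetscErrorCode RestrictCoefficients(DM fine, Mat restrct, Vec rscale, Mat inject, DM coarse, void *ctx)
{
  /* move the fine-grid coefficient data down to the coarse grid */
  return 0;
}

ierr = DMCoarsenHookAdd(dm, CoarsenCoefficients, RestrictCoefficients, user);CHKERRQ(ierr);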


> Do we mean the same by partially interpolated mesh? What I mean is a mesh
> the only faces that are explicitly stored are the one we are interested in
> (typically the boundary mesh). For P1 elements, we need only to know of
> [cells, some faces, vertices].
>

Yup, same thing.


>  I tried to manually build partially interpolated sieves with only some of
> the faces, and distribution would fail. That's how I ended up with a mix of
> cells of co-dimension 0 and 1. If one wants access to ALL faces / edges or
> none of them, there is no problem in the current implementation.
>

Right, I think of this as having a bunch of codim 0 and a few codim 1. As a
user, we call Closure, not Cone, so we go directly to the points from
either. But we would get the points to loop over by requesting codim 0 and
codim 1 separately. I'd expect that in principle, you likely have multiple
groups of codim 1 things that get treated differently. Of course you can
lump them all in one bigger index set and have an if statement, though it's
not as vectorization-friendly so Matt doesn't like that. I would not mind
having an interface for requesting points of multiple dimensions at once
(codim in {0,1}).



[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Sat, Nov 10, 2012 at 10:40 AM, Blaise A Bourdin  wrote:

> On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin  wrote:
>
>> A DM does double duty by describing the geometry of the mesh, and the
>> data layout associated with the finite element space. I liked the model
>> where the mesh geometry and the data layout on the mesh was split in two
>> objects, but I understand the convenience of having everything in the DM,
>> and DMClone works just fine. Since I may have to handle scalar, vector, 2nd
>> and 4th order tensor on 2 different finite element spaces, in an assembly
>> loop, I may end up dealing with as many as 8 DM. I stick all these DM's in
>> the user context of each DM associated with a unknown (a Vec on which I may
>> have to call SNESSolve or TSSolve), hoping that this is not creating some
>> aliasing problem which as  a fortran programmer I can not possibly
>> understand.
>>
>
>  ;-)
>
>
>  Actually, I am really not sure if passing a dm and a pointer to a user
> context containing this dm is legit or not...
>

Ah, okay. To confirm, you have a DM that you are solving for, and in its
user context, you have several other DMs, each with a Vec, describing the
"problem data" like coefficients, forcing terms, and internal
discontinuities? That is completely fine, and not "aliasing", but it does
not play well with *geometric* multigrid because coarse grids reference the
same application context. We have a system of hooks for managing such
resolution-dependent data, though only with a C interface so far. (We
needed this to get geometric multigrid and FAS to work with TS. Most
non-toy applications need it too.)

I'm not sure if there is a way to make this easier. We have been using
PetscObjectCompose() to attach things to the DM on different levels. We
could have a slightly friendlier "user" interface for that.

So keeping those things in the app context is just fine, but if you want to
use geometric multigrid, you'll have to take them out of the app context
and put them in a different structure attached to the DM that is not
transparently propagated under coarsening and refinement. If you think you
might do this eventually, I recommend commenting/organizing your app
context so that resolution-dependent stuff is easily identifiable.
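
The compose/query pattern itself is short; a sketch, assuming a per-level
coefficient Vec named coeff (the key string is arbitrary):

/* attach, once per level (e.g. inside a coarsen hook) */
ierr = PetscObjectCompose((PetscObject)dm, "coefficients", (PetscObject)coeff);CHKERRQ(ierr);

/* retrieve, inside the residual/Jacobian routine for that level */
Vec coeff = NULL;
ierr = PetscObjectQuery((PetscObject)dm, "coefficients", (PetscObject*)&coeff);CHKERRQ(ierr);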


> I don't mind that, but can't you have an index set describing the codim 0
> elements (maybe all of them) and another index set for the codim 1 elements
> on the features you care about? You can take their union (or concatenate)
> for your assembly loop if you like. Is there something wrong with this
> approach?
>
>
>   That's a very good point. In the end it doesn't really matter. As far as
> I remember, the main reason I ended with my current scheme is that DMMesh
> did not play well with partially interpolated meshes. I don't know what the
> current status of DMComplex is.
>

Okay, I think it's important to eventually support partially interpolated
meshes to avoid using a lot of memory when used with low-order
discretizations. I see no reason why there can't also be a direct cache for
closure. For a P1 basis, that amounts to a point range

[cells, boundary faces, vertices]

closure: [cells -> vertices, faces -> vertices]

So cell -> face need not be stored anywhere. Presumably there is a reason
why Matt didn't write it this way. Is it just uniformity of data structure?
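
As a concrete illustration, a P1 assembly loop over such a point range
never needs the cell -> face arrows at all; a sketch, assuming the
Get/RestoreTransitiveClosure pair returns (point, orientation) pairs:

PetscInt cStart, cEnd, c;
ierr = DMComplexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
for (c = cStart; c < cEnd; ++c) {
  PetscInt *closure = PETSC_NULL, closureSize, i;
  ierr = DMComplexGetTransitiveClosure(dm, c, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
  for (i = 0; i < closureSize*2; i += 2) {
    /* closure[i] is the cell itself or a vertex; faces are never touched */
  }
  ierr = DMComplexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
}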



[petsc-dev] PetscSection

2012-11-10 Thread Blaise A Bourdin
Hi,

Let me make sure I understand the consensus, since I have a vested interest in 
the unstructured mesh business:

A DM does double duty by describing the geometry of the mesh, and the data 
layout associated with the finite element space. I liked the model where the 
mesh geometry and the data layout on the mesh was split in two objects, but I 
understand the convenience of having everything in the DM, and DMClone works 
just fine. Since I may have to handle scalar, vector, 2nd and 4th order tensor 
on 2 different finite element spaces, in an assembly loop, I may end up dealing 
with as many as 8 DM. I stick all these DM's in the user context of each DM 
associated with a unknown (a Vec on which I may have to call SNESSolve or 
TSSolve), hoping that this is not creating some aliasing problem which as  a 
fortran programmer I can not possibly understand. 

Everything is an IS. I suspect that this means that the Set/GetCone, 
Set/GetClosure operations would return an IS that could be used in 
VecSetValuesIS, but may not even be needed if an equivalent of DMMesh's 
SectionRealRestrict / RestrictClosure / Update / UpdateClosure is implemented.

Setting Values is good, but getting values is needed too! 

Also, in addition to Barry's simple pseudo-code, I need to be able to get 
an IS for the i^th dof of a field at a given point (think of applying a 
Dirichlet boundary condition to only one component of a field, for instance). 
It's always been the messy part. How would that fit within the proposed model?
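
One sketch of how that could look, assuming a per-field offset query in the
style of PetscSectionGetFieldOffset() (pStart/pEnd, field, and comp are
hypothetical names here): gather the index of component comp at each
constrained point into an IS,

PetscInt *indices, n = 0, p;
IS bcIS;
ierr = PetscMalloc((pEnd-pStart)*sizeof(PetscInt), &indices);CHKERRQ(ierr);
for (p = pStart; p < pEnd; ++p) { /* points on the Dirichlet feature */
  PetscInt off;
  ierr = PetscSectionGetFieldOffset(section, p, field, &off);CHKERRQ(ierr);
  indices[n++] = off + comp;      /* component 'comp' of 'field' at p */
}
ierr = ISCreateGeneral(PETSC_COMM_SELF, n, indices, PETSC_OWN_POINTER, &bcIS);CHKERRQ(ierr);

and then hand bcIS to whatever applies the boundary condition.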

Finally, just a comment on stratum vs topological dimension. There is no reason 
why all elements in a mesh should have the same topological dimension. I 
actually find it much easier to have a single mesh where some sets of elements 
have codimension 0 and others codimension 1 and dispatching my assembly 
depending on the physics associated with each block, and the codimension of the 
element rather than having to deal with side sets / boundary mesh / whatever 
you want to call it.

Blaise

On Nov 9, 2012, at 9:02 PM, Barry Smith wrote:

> 
>  We have a users manual?
> 
> On Nov 9, 2012, at 8:56 PM, Matthew Knepley  wrote:
> 
>> On Fri, Nov 9, 2012 at 9:46 PM, Barry Smith  wrote:
>>> 
>>> On Nov 9, 2012, at 8:42 PM, Jed Brown  wrote:
>>> 
 On Fri, Nov 9, 2012 at 8:39 PM, Matthew Knepley  
 wrote:
 You are both right :) I don't care where we put them, but I want BCs
 and fields. However, this is no problem.
 Right now I do BCs and fields as PetscSections held inside the primary
 section. If IS can do what PetscSection
 can do, I can just use another IS for each one.
 
 Convergence!
 
>>> 
>>>  Write it up!   :-)
>> 
>> In the manual section on Unstructured Grids :)
>> 
>>  Matt
>> 
 
 I actually like it where it is. Stick with DMComplexVecGetClosure()
 with no section argument, and then
 the DM holds an IS++.
 
 Cool, now the "normal" user doesn't even see the ++IS.
>>> 
>> 
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which
>> their experiments lead.
>> -- Norbert Wiener
> 
> 

-- 
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin










[petsc-dev] PetscSection

2012-11-10 Thread Jed Brown
On Fri, Nov 9, 2012 at 10:20 PM, Blaise A Bourdin  wrote:

> A DM does double duty by describing the geometry of the mesh, and the data
> layout associated with the finite element space. I liked the model where
> the mesh geometry and the data layout on the mesh was split in two objects,
> but I understand the convenience of having everything in the DM, and
> DMClone works just fine. Since I may have to handle scalar, vector, 2nd and
> 4th order tensor on 2 different finite element spaces, in an assembly loop,
> I may end up dealing with as many as 8 DM. I stick all these DM's in the
> user context of each DM associated with a unknown (a Vec on which I may
> have to call SNESSolve or TSSolve), hoping that this is not creating some
> aliasing problem which as  a fortran programmer I can not possibly
> understand.
>

;-)

Everything is an IS. I suspect that this means that the Set/GetCone,
>

I think these just return a range or (offset,size).


> Set/GetClosure operations
>

Closure needs to do orientation, so it's not direct access from the Vec.
Regardless, I don't think the plan would be to create an IS for that
granularity.


> would return an IS that could be used in VecSetValuesIS, but may not even
> be needed if an equivalent of DMMesh's SectionRealRestrict /
> RestrictClosure / Update / UpdateClosure is implemented
>

I think these are DMComplexVecGet/SetClosure.


>
> Setting Values is good, but getting values is needed too!
>
> Also, in addition to the Barry's simple pseudo-code, I need to be able to
> get an IS for the i^th dof of a field at a given point (think of applying a
> Dirichlet boundary conditions to only one component of a field, for
> instance). It's always been the messy part. How would that fit within the
> proposed model?
>

I think you would get an index set for those points on the feature you
wanted to do something special for.


> Finally, just a comment on stratum vs topological dimension. There is no
> reason why all elements in a mesh should have the same topological
> dimension. I actually find it much easier to have a single mesh where some
> sets of elements have codimension 0 and others codimension 1 and
> dispatching my assembly depending on the physics associated with each
> block, and the codimension of the element rather than having to deal with
> side sets / boundary mesh / whatever you want to call it.
>

I don't mind that, but can't you have an index set describing the codim 0
elements (maybe all of them) and another index set for the codim 1 elements
on the features you care about? You can take their union (or concatenate)
for your assembly loop if you like. Is there something wrong with this
approach?
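
For concreteness, a sketch of the concatenation, assuming ISConcatenate()
with its current signature (codim0IS and codim1IS are hypothetical):

IS parts[2], loopIS;
parts[0] = codim0IS; /* the cells */
parts[1] = codim1IS; /* the codim 1 features you care about */
ierr = ISConcatenate(PETSC_COMM_WORLD, 2, parts, &loopIS);CHKERRQ(ierr);
/* assemble by looping over loopIS, dispatching on each point's codimension */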



[petsc-dev] PetscSection

2012-11-09 Thread Matthew Knepley
On Fri, Nov 9, 2012 at 9:46 PM, Barry Smith  wrote:
>
> On Nov 9, 2012, at 8:42 PM, Jed Brown  wrote:
>
>> On Fri, Nov 9, 2012 at 8:39 PM, Matthew Knepley  wrote:
>> You are both right :) I don't care where we put them, but I want BCs
>> and fields. However, this is no problem.
>> Right now I do BCs and fields as PetscSections held inside the primary
>> section. If IS can do what PetscSection
>> can do, I can just use another IS for each one.
>>
>> Convergence!
>>
>
>Write it up!   :-)

In the manual section on Unstructured Grids :)

   Matt

>>
>> I actually like it where it is. Stick with DMComplexVecGetClosure()
>> with no section argument, and then
>> the DM holds an IS++.
>>
>> Cool, now the "normal" user doesn't even see the ++IS.
>



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-09 Thread Matthew Knepley
On Fri, Nov 9, 2012 at 9:34 PM, Barry Smith  wrote:
>
> On Nov 9, 2012, at 8:12 PM, Jed Brown  wrote:
>
>> On Fri, Nov 9, 2012 at 7:58 PM, Barry Smith  wrote:
>> Just an implementation issue. To me the correct abstract model is "indices 
>> of an element" etc and whether this is done via a "closure" or listing them 
>> etc is just an implementation issue.
>>
>> Right, I don't think it should be visible in the access interface whether 
>> the implementation has built an Index (in the DB sense) providing the 
>> closure directly or whether it's constructed on the fly from whatever 
>> indirect information. There is a meaningful distinction between getting the 
>> dofs associated with the element or face _interior_ versus its closure.
>>
>>I realize this is different than Matt's abstract model. I am not saying 
>> my model is better than Matt's (from an abstract mathematicians Matt's is 
>> probably better) but I believe my model is much more sellable to the masses 
>> (remember very few abstract mathematicians use PETSc :-)).
>>
>> Just a few months ago, you were arguing that (u,v) = \int u conj(v) since 
>> that was the Math Convention and PETSc was for Mathematicians.
>
>   touche
>
>>
>>So we are set? We design and implement a better, more general IS concept?
>>
>> I think so. I think Matt wants still to pull "fields" in, as well as perhaps 
>> constraints (for BCs).
>
>He is wrong, that all belongs in DMs.

You are both right :) I don't care where we put them, but I want BCs
and fields. However, this is no problem.
Right now I do BCs and fields as PetscSections held inside the primary
section. If IS can do what PetscSection
can do, I can just use another IS for each one.

>>
>> Now where do we put the convenience interface of accessing a chunk of data 
>> in a Vec or array? Or does the new IS interface only give use the index 
>> range associated with the block and we as the caller must do the indexing? 
>> (I'm not opposed to the latter. It's slight clutter but removes more from 
>> the inner loop.)
>
>Accessing from an array? Maybe
>
> Accessing from a vector? Unlikely since that destroys the hierarchy of IS 
> below Vec. So that part may need to be in the Vec,  maybe 
> VecGetSection(vec,index tag,is,&values)
>

I actually like it where it is. Stick with DMComplexVecGetClosure()
with no section argument, and then
the DM holds an IS++.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-09 Thread Barry Smith

  We have a users manual?

On Nov 9, 2012, at 8:56 PM, Matthew Knepley  wrote:

> On Fri, Nov 9, 2012 at 9:46 PM, Barry Smith  wrote:
>> 
>> On Nov 9, 2012, at 8:42 PM, Jed Brown  wrote:
>> 
>>> On Fri, Nov 9, 2012 at 8:39 PM, Matthew Knepley  
>>> wrote:
>>> You are both right :) I don't care where we put them, but I want BCs
>>> and fields. However, this is no problem.
>>> Right now I do BCs and fields as PetscSections held inside the primary
>>> section. If IS can do what PetscSection
>>> can do, I can just use another IS for each one.
>>> 
>>> Convergence!
>>> 
>> 
>>   Write it up!   :-)
> 
> In the manual section on Unstructured Grids :)
> 
>   Matt
> 
>>> 
>>> I actually like it where it is. Stick with DMComplexVecGetClosure()
>>> with no section argument, and then
>>> the DM holds an IS++.
>>> 
>>> Cool, now the "normal" user doesn't even see the ++IS.
>> 
> 
> 
> 
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener



[petsc-dev] PetscSection

2012-11-09 Thread Barry Smith

On Nov 9, 2012, at 8:42 PM, Jed Brown  wrote:

> On Fri, Nov 9, 2012 at 8:39 PM, Matthew Knepley  wrote:
> You are both right :) I don't care where we put them, but I want BCs
> and fields. However, this is no problem.
> Right now I do BCs and fields as PetscSections held inside the primary
> section. If IS can do what PetscSection
> can do, I can just use another IS for each one.
> 
> Convergence!
>  

   Write it up!   :-)

> 
> I actually like it where it is. Stick with DMComplexVecGetClosure()
> with no section argument, and then
> the DM holds an IS++.
> 
> Cool, now the "normal" user doesn't even see the ++IS.



[petsc-dev] PetscSection

2012-11-09 Thread Jed Brown
On Fri, Nov 9, 2012 at 8:39 PM, Matthew Knepley  wrote:

> You are both right :) I don't care where we put them, but I want BCs
> and fields. However, this is no problem.
> Right now I do BCs and fields as PetscSections held inside the primary
> section. If IS can do what PetscSection
> can do, I can just use another IS for each one.
>

Convergence!


>
> I actually like it where it is. Stick with DMComplexVecGetClosure()
> with no section argument, and then
> the DM holds an IS++.
>

Cool, now the "normal" user doesn't even see the ++IS.



[petsc-dev] PetscSection

2012-11-09 Thread Jed Brown
On Fri, Nov 9, 2012 at 8:34 PM, Barry Smith  wrote:

> > I think so. I think Matt wants still to pull "fields" in, as well as
> perhaps constraints (for BCs).
>
>He is wrong, that all belongs in DMs.
>

I agree, though it may be reasonable to return an IS describing one field,
then be able to get the closure with respect to that IS.


> >
> > Now where do we put the convenience interface of accessing a chunk of
> data in a Vec or array? Or does the new IS interface only give use the
> index range associated with the block and we as the caller must do the
> indexing? (I'm not opposed to the latter. It's slight clutter but removes
> more from the inner loop.)
>
>Accessing from an array? Maybe
>
> Accessing from a vector? Unlikely since that destroys the hierarchy of
> IS below Vec. So that part may need to be in the Vec,  maybe
> VecGetSection(vec,index tag,is,&values)


Good, that also removes dependence on VecGetArray, which I hate.



[petsc-dev] PetscSection

2012-11-09 Thread Barry Smith

On Nov 9, 2012, at 8:12 PM, Jed Brown  wrote:

> On Fri, Nov 9, 2012 at 7:58 PM, Barry Smith  wrote:
> Just an implementation issue. To me the correct abstract model is "indices of 
> an element" etc and whether this is done via a "closure" or listing them etc 
> is just an implementation issue.
> 
> Right, I don't think it should be visible in the access interface whether the 
> implementation has built an Index (in the DB sense) providing the closure 
> directly or whether it's constructed on the fly from whatever indirect 
> information. There is a meaningful distinction between getting the dofs 
> associated with the element or face _interior_ versus its closure.
>  
>I realize this is different than Matt's abstract model. I am not saying my 
> model is better than Matt's (from an abstract mathematicians Matt's is 
> probably better) but I believe my model is much more sellable to the masses 
> (remember very few abstract mathematicians use PETSc :-)).
> 
> Just a few months ago, you were arguing that (u,v) = \int u conj(v) since 
> that was the Math Convention and PETSc was for Mathematicians.

  touche

>  
>So we are set? We design and implement a better, more general IS concept?
> 
> I think so. I think Matt wants still to pull "fields" in, as well as perhaps 
> constraints (for BCs).

   He is wrong, that all belongs in DMs.

> 
> Now where do we put the convenience interface of accessing a chunk of data in 
> a Vec or array? Or does the new IS interface only give use the index range 
> associated with the block and we as the caller must do the indexing? (I'm not 
> opposed to the latter. It's slight clutter but removes more from the inner 
> loop.)

   Accessing from an array? Maybe

Accessing from a vector? Unlikely since that destroys the hierarchy of IS 
below Vec. So that part may need to be in the Vec,  maybe 
VecGetSection(vec,index tag,is,&values) 



[petsc-dev] PetscSection

2012-11-09 Thread Jed Brown
On Fri, Nov 9, 2012 at 7:58 PM, Barry Smith  wrote:

> Just an implementation issue. To me the correct abstract model is "indices
> of an element" etc and whether this is done via a "closure" or listing them
> etc is just an implementation issue.
>

Right, I don't think it should be visible in the access interface whether
the implementation has built an Index (in the DB sense) providing the
closure directly or whether it's constructed on the fly from whatever
indirect information. There is a meaningful distinction between getting the
dofs associated with the element or face _interior_ versus its closure.


>I realize this is different than Matt's abstract model. I am not saying
> my model is better than Matt's (from an abstract mathematicians Matt's is
> probably better) but I believe my model is much more sellable to the masses
> (remember very few abstract mathematicians use PETSc :-)).
>

Just a few months ago, you were arguing that (u,v) = \int u conj(v) since
that was the Math Convention and PETSc was for Mathematicians.


>So we are set? We design and implement a better, more general IS
> concept?
>

I think so. I think Matt wants still to pull "fields" in, as well as
perhaps constraints (for BCs).

Now where do we put the convenience interface of accessing a chunk of data
in a Vec or array? Or does the new IS interface only give use the index
range associated with the block and we as the caller must do the indexing?
(I'm not opposed to the latter. It's slight clutter but removes more from
the inner loop.)



[petsc-dev] PetscSection

2012-11-09 Thread Barry Smith

On Nov 9, 2012, at 7:17 PM, Jed Brown  wrote:

> On Fri, Nov 9, 2012 at 5:12 PM, Barry Smith  wrote:
> You are "down defining" IS so you can say it doesn't support doing these 
> things. I am proposing "up defining" IS to be able to do these things.
> 
>   I consider IS to be _THE_  PETSc tool for "indexing  stuff" (and want 
> it to be the only tool, having two tools is redundant and confusing). When IS 
> doesn't have a capability for some "kind of indexing" we wish to do then I 
> think the right thing to do is generalize IS as needed, not introduce a 
> different class.
> 
> Matt and I have been debating a similar point independently. I think we 
> concluded that we could make do with index sets, DMImpl-based vector access, 
> and sub-DMs (for splitting out a piece of a larger problem, e.g., to 
> solve an auxiliary problem along a fault).
>  
> 
>   Consider the normal scenario for finite elements:  (simple case)
> 
>   Loop over elements:
>Get the vector entries Ue from U for the element
>Compute with Ue producing Fe
>Put the Fe into the output vector F.
> 
> this is an example of indexing.  Plain old ISGeneral doesn't directly 
> provide a way to do this.  Having a concept of a collection of groups of 
> indices (of varying sizes for different element types) makes this kind of 
> finite element scenario straightforward.  Your PetscSection seems like yet a 
> different type of useful indexing.
> 
> What I propose is we "start at the beginning" using all the ideas we've 
> gained over the years and design a new IS (which we can call PetscIS (or 
> something else)) using good abstractions of "indexing stuff". Once we have 
> this good design
> IS will quickly be removed and replaced with the new IS construct. I am open 
> to any idea on what the model (set of abstractions, whatever ..) we use for 
> the new IS.
> 
> The key feature is accessing a range by block number. In the case of Section, 
> the linear scalar indices (from concatenating the scalar entries for each 
> block) are not used directly.
>  
> 
> Though you use fancy abstract words with PetscSection like Section and 
> Atlas you actually have only one concrete specific implementation that is 
> merely block size and ranges. Is that really all we want/need for a very 
> general and useful concept of efficient indexing?  Perhaps it is, but let's 
> explore it. For example in pseudo code like above show me how you would do 
> the normal scenario for finite elements using your PetscSection?
> 
> Matt does not explicitly store the vector indices with support on the 
> element. Instead he has the "closure" operation which, given an cell number, 
> gathers all the lower-dimensional topological "points" (faces, edges, and 
> vertices) using a breadth-first traversal. (This is something that might make 
> sense to cache explicitly for each element. It uses more memory, but it makes 
> the access pattern simpler so might pay off. Most FEM codes do that.)

   Just an implementation issue. To me the correct abstract model is "indices 
of an element" etc and whether this is done via a "closure" or listing them etc 
is just an implementation issue. 

   I realize this is different than Matt's abstract model. I am not saying my 
model is better than Matt's (from an abstract mathematicians Matt's is probably 
better) but I believe my model is much more sellable to the masses (remember 
very few abstract mathematicians use PETSc :-)).

   So we are set? We design and implement a better, more general IS concept?


   Barry

> 
> Each of those "points" in the closure has zero or more dofs in the Vec, as 
> indicated by the "default" Section.
> 
> for each element e:
>   PetscScalar *cu,*cf;
>   // need workspace for cf
>   DMPlexVecGetClosure(dm,U,e,&csize,&cu); // I'm just going to pretend
>   // that he has abandoned the colliding name
>   ComputeElementResidual(csize,cu,cf); // geometry and function space
>   // conveniently tucked under the rug
>   DMPlexVecRestoreClosure(dm,U,e,&csize,&cu);
>   DMPlexVecSetClosure(dm,F,e,csize,cf,ADD_VALUES);
> 
> 
> 
> Note that there is no PetscSection above because its use is tucked inside the 
> convenience routines which internally rely on DMComplexGetTransitiveClosure() 
> and then figure out what part of the Vec to access using 
> PetscSectionGetOffset() and PetscSectionGetDof().



[petsc-dev] PetscSection

2012-11-09 Thread Jed Brown
On Fri, Nov 9, 2012 at 5:12 PM, Barry Smith  wrote:

> You are "down defining" IS so you can say it doesn't support doing these
> things. I am proposing "up defining" IS to be able to do these things.
>
>   I consider IS to be _THE_  PETSc tool for "indexing  stuff" (and
> want it to be the only tool, having two tools is redundant and confusing).
> When IS doesn't have a capability for some "kind of indexing" we wish to do
> then I think the right thing to do is generalize IS as needed, not
> introduce a different class.
>

Matt and I have been debating a similar point independently. I think we
concluded that we could make do with index sets, DMImpl-based vector
access, and sub-DMs (for splitting out a piece of a larger problem,
e.g., to solve an auxiliary problem along a fault).


>
>   Consider the normal scenario for finite elements:  (simple case)
>
>   Loop over elements:
>    Get the vector entries Ue from U for the element
>    Compute with Ue producing Fe
>    Put the Fe into the output vector F.
>
> this is an example of indexing.  Plain old ISGeneral doesn't
> directly provide a way to do this.  Having a concept of a collection of
> groups of indices (of varying sizes for different element types) makes this
> kind of finite element scenario straightforward.  Your PetscSection seems
> like yet a different type of useful indexing.
>
> What I propose is we "start at the beginning" using all the ideas
> we've gained over the years and design a new IS (which we can call PetscIS
> (or something else)) using good abstractions of "indexing stuff". Once we
> have this good design
> IS will quickly be removed and replaced with the new IS construct. I am
> open to any idea on what the model (set of abstractions, whatever ..) we
> use for the new IS.
>

The key feature is accessing a range by block number. In the case of
Section, the linear scalar indices (from concatenating the scalar entries
for each block) are not used directly.


>
> Though you use fancy abstract words with PetscSection like Section and
> Atlas you actually have only one concrete specific implementation that is
> merely block size and ranges. Is that really all we want/need for a very
> general and useful concept of efficient indexing?  Perhaps it is, but let's
> explore it. For example in pseudo code like above show me how you would do
> the normal scenario for finite elements using your PetscSection?
>

Matt does not explicitly store the vector indices with support on the
element. Instead he has the "closure" operation which, given an cell
number, gathers all the lower-dimensional topological "points" (faces,
edges, and vertices) using a breadth-first traversal. (This is something
that might make sense to cache explicitly for each element. It uses more
memory, but it makes the access pattern simpler so might pay off. Most FEM
codes do that.)

Each of those "points" in the closure has zero or more dofs in the Vec, as
indicated by the "default" Section.

for each element e:
  PetscScalar *cu,*cf;
  // need workspace for cf
  DMPlexVecGetClosure(dm,U,e,&csize,&cu); // I'm just going to pretend
  // that he has abandoned the colliding name
  ComputeElementResidual(csize,cu,cf); // geometry and function space
  // conveniently tucked under the rug
  DMPlexVecRestoreClosure(dm,U,e,&csize,&cu);
  DMPlexVecSetClosure(dm,F,e,csize,cf,ADD_VALUES);



Note that there is no PetscSection above because its use is tucked inside
the convenience routines which internally rely on
DMComplexGetTransitiveClosure() and then figure out what part of the Vec to
access using PetscSectionGetOffset() and PetscSectionGetDof().
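
Spelled out, that composition is roughly the following (a sketch of the
idea, not the actual implementation; orientation handling is omitted):

PetscInt *closure = PETSC_NULL, closureSize, i, n = 0;
ierr = DMComplexGetTransitiveClosure(dm, e, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
for (i = 0; i < closureSize*2; i += 2) { /* (point, orientation) pairs */
  PetscInt p = closure[i], dof, off, d;
  ierr = PetscSectionGetDof(section, p, &dof);CHKERRQ(ierr);
  ierr = PetscSectionGetOffset(section, p, &off);CHKERRQ(ierr);
  for (d = 0; d < dof; ++d) cu[n++] = u[off+d]; /* gather into the closure buffer */
}
ierr = DMComplexRestoreTransitiveClosure(dm, e, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);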



[petsc-dev] PetscSection

2012-11-09 Thread Barry Smith

On Nov 8, 2012, at 10:51 PM, Matthew Knepley  wrote:

> On Thu, Nov 8, 2012 at 11:46 PM, Barry Smith  wrote:
>> 
>>  Why isn't it
>>> 
>>> struct _n_PetscSection {
>>>  PetscInt  firstIndex,lastIndex;   /* and why not just have this start at 
>>> zero and go to n? */
>>>  PetscInt *dofs;
>>>  PetscInt *offsets;
>> };
>> 
>>   all that other clutter in there makes it mighty confusing as to the 
>> purpose of the beast.
> 
> I would not fight that.
> 
>>   Also why is the name PetscSection? If I have an array of numbers this 
>> just seems to organize them  into a series of blocks (and allow traversals 
>> through the blocks), for example,
>> 
>>0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
>>X XY Y Y  Z Z   Z
>> 
>>   Now a block IS allows traversals through a series of blocks (but all 
>> blocks must be the same size).  It is important to note that  though you can 
>> think of a block IS as simply an implementation of a IS (that is a 
>> collection of integers) that is more efficient than listing all the integers 
>> in an ISGeneral, it actually has additional information (each block as its 
>> own fundamental unit).
>> 
>>  With this point of view, how is a PetscSection more than a block IS with 
>> variable block size? And why not implement it that way?
> 
> Because you can do more general things. The offsets do not have to be
> prefix sums of the sizes. This allows
> me to make views into larger vectors, make "global" sections with
> negative offsets for off-process values,
> etc. IS is not flexible enough even for the things I can dream up, let
> alone users.

   Matt,

  You are "down defining" IS so you can say it doesn't support doing these 
things. I am proposing "up defining" IS to be able to do these things. 

  I consider IS to be _THE_  PETSc tool for "indexing  stuff" (and want it 
to be the only tool, having two tools is redundant and confusing). When IS 
doesn't have a capability for some "kind of indexing" we wish to do then I 
think the right thing to do is generalize IS as needed, not introduce a 
different class. 

  Consider the normal scenario for finite elements:  (simple case)
  
  Loop over elements:
   Get the vector entries Ue from U for the element
   Compute with Ue producing Fe
   Put the Fe into the output vector F.

this is an example of indexing.  Plain old ISGeneral doesn't directly 
provide a way to do this.  Having a concept of a collection of groups of 
indices (of varying sizes for different element types) makes this kind of 
finite element scenario straightforward.  Your PetscSection seems like yet a 
different type of useful indexing.

What I propose is we "start at the beginning" using all the ideas we've 
gained over the years and design a new IS (which we can call PetscIS (or 
something else)) using good abstractions of "indexing stuff". Once we have this 
good design
IS will quickly be removed and replaced with the new IS construct. I am open to 
any idea on what the model (set of abstractions, whatever ..) we use for the 
new IS. 

Though you use fancy abstract words with PetscSection like Section and 
Atlas you actually have only one concrete specific implementation that is 
merely block size and ranges. Is that really all we want/need for a very 
general and useful concept of efficient indexing?  Perhaps it is, but let's 
explore it. For example in pseudo code like above show me how you would do the 
normal scenario for finite elements using your PetscSection?

  Barry




> 
>   Matt
> 
>>   My hatred of more than a tiny number of abstract classes in PETSc has 
>> served me well over the years, and though I cheat sometimes to maintain this 
>> small number I believe that having excessive classes merely shows you don't 
>> understand your domain well enough.
>> 
>>   Barry
>> 
>> 
>> 
>> On Nov 8, 2012, at 10:18 PM, Jed Brown  wrote:
>> 
>>> This is inevitable, but may still not resolve anything. I'm sure Matt will 
>>> correct whatever below is incorrect. The main question is whether to 
>>> promote PetscSection to a PetscObject or to hide/replace it with something 
>>> else.
>>> 
>>> So PetscSection is an unloved (by most) beastie that is sort of 
>>> intermediate between an IS and a DM. Its core functionality is to provide 
>>> indirect indexing with variable block sizes. It consists of two parts, the 
>>> first of which is less than a PetscLayout.
>>> 
>>> struct _n_PetscUniformSection {
>>>  MPI_Comm comm;
>>>  PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart, 
>>> pEnd) */
>>>  PetscInt numDof;   /* Describes layout of storage, point --> (constant 
>>> # of values, (p - pStart)*constant # of values) */
>>> };
>>> 
>>> numDof is always 1 and PetscUniformSection is never referenced from a *.c 
>>> file so this is really just a range of valid "point" values [pStart...pEnd).
>>> 
>>> struct _n_PetscSection {
>>>  struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */

[petsc-dev] PetscSection

2012-11-09 Thread Matthew Knepley
On Fri, Nov 9, 2012 at 12:19 AM, Jed Brown  wrote:
> On Thu, Nov 8, 2012 at 10:58 PM, Matthew Knepley  wrote:
>>
>> I don't care about expensive VecGetArray(). We do it absolutely
>> everywhere in PETSc.
>> We recommend that users do it. This is not a real objection.
>
>
> We normally do one VecGetArray(), then iterate over the whole local part.
> That's different from doing it in an inner loop. Many other packages do it
> in inner loops (overloading operator[] is popular among some), but that
> restricts the Vec types that can perform well with that package.

These are only intended for local vectors, for which I see no reason
to use anything but contiguous.

>>
>> > What other sections do users need? A trace space?
>>
>> Blaise uses a couple for every problem. We have several in PyLith
>> (ground surface,
>> fault).
>
>
> What is the difference between using a PetscSection for each thing and using
> an IS for those points on the feature? I guess just the extra indirection
> (IS) versus more duplication (a PetscSection is heavier than an IS).

They actually have topology, and we do integrals on them, and they get
output. Using IS, as I said to Barry, is not good enough.

>>
>> >
>> > How do you intend to support many threads calling
>> > DMComplexVecGetClosure(),
>> > perhaps each with multiple buffers, without expensive instructions?
>>
>> GetClosure() is not a problem. All threads are reading from the same
>> topology, and writing to separate output buffers. SetClosure() needs
>> to be locked.
>
>
> You need separate buffers per thread, but we can make DMGetWorkArray() have
> a separate cache per thread. VecGetArray() is still not trivially cheap
> because we need atomics. Maybe I'm being overly paranoid, but I don't like
> the granularity that imposes.
>
>>
>> >>
>> >>
>> >> These are for multi-field splitting/
>> >
>> >
>> > Clearly, but why do they need to be in PetscSection? We already need DMs
>> > associated with fields.
>
>
> Answering my own question: the distinction between DM and Section is that
> the latter can address a discontinuous part of a Vec so addressing by field
> does not imply a copy.
>
>>
>>
>> Someone needs to know the splitting. If you had a DM, this info is
>> still stuck down somewhere, so that DM is holding other DMs.
>
>
> It's still not clear to me why they are needed. If PetscSection is
> user-visible, can't they just be different Sections?
> DMComplexGetFieldSection(dm,field,&fsection);

My point is that it has to go somewhere. Right now, that somewhere is inside the
PetscSection. If we take it out, they must end up in the DM, which is
alright, but involves a little rewriting. This is not a big issue for me.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-08 Thread Matthew Knepley
On Thu, Nov 8, 2012 at 11:52 PM, Jed Brown  wrote:
> On Thu, Nov 8, 2012 at 10:31 PM, Matthew Knepley  wrote:
>>
>> This has been redone to remove Jed's objection (see new manual section).
>> It now
>> uses DM temp arrays,
>>
>> for(c = cStart; c < cEnd; ++c) {
>>PetscInt numVals;
>>PetscScalar *vals;
>>
>>DMComplexVecGetClosure(dm, section, vec, c, &numVals, &vals);
>>/* Compute residual */
>>DMComplexVecRestoreClosure(dm, section, vec, c, &numVals, &vals);
>>DMComplexVecSetClosure(dm, section, resvec, c, vals, ADD_VALUES);
>> }
>
>
> My other problem here is that VecGetArray() is potentially expensive (need
> to check coherence with other vectors, might not be contiguous for some Vec
> implementations).

I don't care about expensive VecGetArray(). We do it absolutely
everywhere in PETSc.
We recommend that users do it. This is not a real objection.

>>
>> > 2. Add DMComplex-specific access routines so the user does not need to
>> > see
>> > the PetscSection. Presumably this would be something like
>> > DMComplexGetPointOffset(dm,c,&offset); // offset into owned part of
>> > global
>> > vector?
>> > DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector
>>
>> This is cumbersome because you must make another DM for every PetscSection
>> you want to use.
>
>
> What other sections do users need? A trace space?

Blaise uses a couple for every problem. We have several in PyLith
(ground surface, fault).

>>
>> However, I am not completely against this now because
>> DMComplexClone()
>> is easy and accomplishes this.
>>
> [...]
>>
>>
>> I hate cursors. I had the same experience with them no matter what
>> they are called (iterators, etc.). You need so much information that
>> you end up with the whole object in this "external" thing. I think
>> they never pay off.
>
>
> How do you intend to support many threads calling DMComplexVecGetClosure(),
> perhaps each with multiple buffers, without expensive instructions?

GetClosure() is not a problem. All threads are reading from the same
topology, and writing to separate output buffers. SetClosure() needs to
be locked.

>>
>>
>> These are for multi-field splitting/
>
>
> Clearly, but why do they need to be in PetscSection? We already need DMs
> associated with fields.

Someone needs to know the splitting. If you had a DM, this info is
still stuck down somewhere, so that DM is holding other DMs.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-dev] PetscSection

2012-11-08 Thread Matthew Knepley
On Thu, Nov 8, 2012 at 11:46 PM, Barry Smith  wrote:
>
>   Why isn't it
>>
>> struct _n_PetscSection {
>>   PetscInt  firstIndex,lastIndex;   /* and why not just have this start at 
>> zero and go to n? */
>>   PetscInt *dofs;
>>   PetscInt *offsets;
> };
>
>all that other clutter in there makes it mighty confusing as to the 
> purpose of the beast.

I would not fight that.

> Also why is the name PetscSection? If I have an array of numbers this 
> just seems to organize them  into a series of blocks (and allow traversals 
> through the blocks), for example,
>
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
> X XY Y Y  Z Z   Z
>
>Now a block IS allows traversals through a series of blocks (but all 
> blocks must be the same size).  It is important to note that  though you can 
> think of a block IS as simply an implementation of a IS (that is a collection 
> of integers) that is more efficient than listing all the integers in an 
> ISGeneral, it actually has additional information (each block as its own 
> fundamental unit).
>
>   With this point of view, how is a PetscSection more than a block IS with 
> variable block size? And why not implement it that way?

Because you can do more general things. The offsets do not have to be
prefix sums of the sizes. This allows me to make views into larger
vectors, make "global" sections with negative offsets for off-process
values, etc. IS is not flexible enough even for the things I can dream
up, let alone users.
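
To make that concrete, a sketch of a section whose offsets are set by hand
to point into scattered pieces of a larger array (dofAt and offAt are
hypothetical per-point arrays):

PetscSection s;
PetscInt p;
ierr = PetscSectionCreate(PETSC_COMM_SELF, &s);CHKERRQ(ierr);
ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
for (p = pStart; p < pEnd; ++p) {
  ierr = PetscSectionSetDof(s, p, dofAt[p-pStart]);CHKERRQ(ierr);
  ierr = PetscSectionSetOffset(s, p, offAt[p-pStart]);CHKERRQ(ierr); /* arbitrary, not a prefix sum */
}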

   Matt

>My hatred of more than a tiny number of abstract classes in PETSc has 
> served me well over the years, and though I cheat sometimes to maintain this 
> small number I believe that having excessive classes merely shows you don't 
> understand your domain well enough.
>
>Barry
>
>
>
> On Nov 8, 2012, at 10:18 PM, Jed Brown  wrote:
>
>> This is inevitable, but may still not resolve anything. I'm sure Matt will 
>> correct whatever below is incorrect. The main question is whether to promote 
>> PetscSection to a PetscObject or to hide/replace it with something else.
>>
>> So PetscSection is an unloved (by most) beastie that is sort of intermediate 
>> between an IS and a DM. Its core functionality is to provide indirect 
>> indexing with variable block sizes. It consists of two parts, the first of 
>> which is less than a PetscLayout.
>>
>> struct _n_PetscUniformSection {
>>   MPI_Comm comm;
>>   PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart, 
>> pEnd) */
>>   PetscInt numDof;   /* Describes layout of storage, point --> (constant 
>> # of values, (p - pStart)*constant # of values) */
>> };
>>
>> numDof is always 1 and PetscUniformSection is never referenced from a *.c 
>> file so this is really just a range of valid "point" values [pStart...pEnd).
>>
>> struct _n_PetscSection {
>>   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>>   PetscInt *atlasDof; /* Describes layout of 
>> storage, point --> # of values */
>>   PetscInt *atlasOff; /* Describes layout of 
>> storage, point --> offset into storage */
>>
>> ^^ These two fields are the essential part. Each index in [pStart,pEnd) is 
>> mapped to a block of size atlasDof[p-pStart] located at atlasOff[p-pStart]. 
>> Matt composes these indirect maps for storing meshes and other indexing 
>> tasks. This "unsorted offset mapping" is indeed used frequently in PETSc, 
>> though outside of Matt's code, it's just raw arrays.
>>
>> User's primarily encounter PetscSection when accessing unstructured meshes 
>> or any time the number of degrees of freedom or locations of each "point" 
>> (topological entity in the mesh) is not uniform. DMs do a similar thing 
>> (provide more logical access to a Vec), but DMDAVecGetArray() is of course 
>> only natural for a structured grid.
>>
>> At the user interface level, provided the user doesn't need low-level access 
>> to topology, I think there are three possibilities.
>>
>> 1. To compute a residual at a cell, user does
>>
>> VecGetArray(F,&f); // normal access to the Vec
>> DMComplexGetHeightStratum(dm,0,&cstart,&cend); // "height 0" is like 
>> codimension 0 with respect to the maximum topological dimension. Matt and I 
>> have have a philosophical debate over whether users prefer to think in 
>> "strata" (that can skip dimensions, so "height 1" could be a face, an edge, 
>> or a vertex depending on what is stored in the mesh) or in "topological 
>> dimensions" (where codim=1 is a "face" for a 3D mesh and an edge for a 2D 
>> mesh).
>>
>> DMGetDefaultSection(dm,&section);
>> for (c=cstart; c<cend; c++) {
>>   PetscInt off;
>>   PetscSectionGetOffset(section,c,&off);
>>   f[off+0] = residual of first equation at this cell;
>>   f[off+1] = residual of second equation;
>>   // brushing under the rug how we access neighbor values
>> }
>>
>> The above is basically the current model. Matt has 
>> DMComplexGet/SetClosure(), but that interface carries a sleeper reference 
>> to the Vec's array and he says he plans to remove it.

[petsc-dev] PetscSection

2012-11-08 Thread Jed Brown
On Thu, Nov 8, 2012 at 11:32 PM, Matthew Knepley  wrote:

> They actually have topology, and we do integrals on them, and they get
> output. Using IS, as I said to Barry, is not good enough.
>

Because you want to restrict the contributions back only to the feature.



[petsc-dev] PetscSection

2012-11-08 Thread Matthew Knepley
On Thu, Nov 8, 2012 at 11:18 PM, Jed Brown  wrote:
> This is inevitable, but may still not resolve anything. I'm sure Matt will
> correct whatever below is incorrect. The main question is whether to promote
> PetscSection to a PetscObject or to hide/replace it with something else.
>
> So PetscSection is an unloved (by most) beastie that is sort of intermediate
> between an IS and a DM. Its core functionality is to provide indirect
> indexing with variable block sizes. It consists of two parts, the first of
> which is less than a PetscLayout.
>
> struct _n_PetscUniformSection {
>   MPI_Comm comm;
>   PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart,
> pEnd) */
>   PetscInt numDof;   /* Describes layout of storage, point --> (constant
> # of values, (p - pStart)*constant # of values) */
> };
>
> numDof is always 1 and PetscUniformSection is never referenced from a *.c
> file so this is really just a range of valid "point" values [pStart...pEnd).

This allows blocks, but as Jed said I have not done anything with blocks yet.

> struct _n_PetscSection {
>   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>   PetscInt *atlasDof; /* Describes layout of
> storage, point --> # of values */
>   PetscInt *atlasOff; /* Describes layout of
> storage, point --> offset into storage */
>
> ^^ These two fields are the essential part. Each index in [pStart,pEnd) is
> mapped to a block of size atlasDof[p-pStart] located at atlasOff[p-pStart].
> Matt composes these indirect maps for storing meshes and other indexing
> tasks. This "unsorted offset mapping" is indeed used frequently in PETSc,
> though outside of Matt's code, it's just raw arrays.
>
> User's primarily encounter PetscSection when accessing unstructured meshes
> or any time the number of degrees of freedom or locations of each "point"
> (topological entity in the mesh) is not uniform. DMs do a similar thing
> (provide more logical access to a Vec), but DMDAVecGetArray() is of course
> only natural for a structured grid.
>
> At the user interface level, provided the user doesn't need low-level access
> to topology, I think there are three possibilities.
>
> 1. To compute a residual at a cell, user does
>
> VecGetArray(F,&f); // normal access to the Vec
> DMComplexGetHeightStratum(dm,0,&cstart,&cend); // "height 0" is like
> codimension 0 with respect to the maximum topological dimension. Matt and I
> have a philosophical debate over whether users prefer to think in
> "strata" (that can skip dimensions, so "height 1" could be a face, an edge,
> or a vertex depending on what is stored in the mesh) or in "topological
> dimensions" (where codim=1 is a "face" for a 3D mesh and an edge for a 2D
> mesh).
>
> DMGetDefaultSection(dm,&section);
> for (c=cstart; c<cend; c++) {
>   PetscInt off;
>   PetscSectionGetOffset(section,c,&off);
>   f[off+0] = residual of first equation at this cell;
>   f[off+1] = residual of second equation;
>   // brushing under the rug how we access neighbor values
> }
>
> The above is basically the current model. Matt has
> DMComplexGet/SetClosure(), but that interface carries a sleeper reference to
> the Vec's array and he says he plans to remove it.

This has been redone to remove Jed's objection (see new manual section). It now
uses DM temp arrays,

for(c = cStart; c < cEnd; ++c) {
   PetscInt numVals;
   PetscScalar *vals;

   DMComplexVecGetClosure(dm, section, vec, c, &numVals, &vals);
   /* Compute residual */
   DMComplexVecRestoreClosure(dm, section, vec, c, &numVals, &vals);
   DMComplexVecSetClosure(dm, section, resvec, c, vals, ADD_VALUES);
}

> 2. Add DMComplex-specific access routines so the user does not need to see
> the PetscSection. Presumably this would be something like
> DMComplexGetPointOffset(dm,c,&offset); // offset into owned part of global
> vector?
> DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector

This is cumbersome because you must make another DM for every PetscSection
you want to use. However, I am not completely against this now because
DMComplexClone()
is easy and accomplishes this.

> 3. Access cursors (another object).
> DMComplexGetCursors(dm,Vec U,3,&uface,&ucellL,&ucellR);
> // similar for F
> DMComplexGetHeightStratum(dm,1,&fstart,&fend);
> for (f=fstart; f<fend; f++) {
>   const PetscInt *c;
>   const PetscScalar *uL,*uR,*uF;
>   DMComplexGetSupport(dm,f,&c); // left and right cells
>   DMCursorGetValues(uface,f,&uF); // only used for staggered/mixed
> discretization
>   DMCursorGetValues(ucellL,c[0],&uL);
>   DMCursorGetValues(ucellR,c[1],&uR);
>   // Solve Riemann problem with "face" values uF[] (if using a staggered
> discretization), uL[], and uR[].
>   DMCursorRestoreValues(uface,f,&uF); // ...
>
> The "advantage" of the cursors here is that DMComplex doesn't need to hold
> the context manager itself (perhaps for multiple threads), the code dealing
> with the number of concurrent accesses is outside the inner

[petsc-dev] PetscSection

2012-11-08 Thread Jed Brown
On Thu, Nov 8, 2012 at 10:58 PM, Matthew Knepley  wrote:

> I don't care about expensive VecGetArray(). We do it absolutely
> everywhere in PETSc.
> We recommend that users do it. This is not a real objection.
>

We normally do one VecGetArray(), then iterate over the whole local part.
That's different from doing it in an inner loop. Many other packages do it
in inner loops (overloading operator[] is popular among some), but that
restricts the Vec types that can perform well with that package.
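
A minimal sketch of the hoisted pattern, assuming a standard contiguous
Vec (error checking omitted):

PetscScalar *f;
PetscInt     i,n;
VecGetLocalSize(F,&n);
VecGetArray(F,&f);              /* one coherence check for the whole sweep */
for (i=0; i<n; i++) f[i] = 0.0; /* touch the entire local part in one pass */
VecRestoreArray(F,&f);

Doing the Get/Restore pair inside an inner loop instead pays that check
once per iteration.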


> > What other sections do users need? A trace space?
>
> Blaise uses a couple for every problem. We have several in PyLith
> (ground surface,
> fault).
>

What is the difference between using a PetscSection for each thing and
using an IS for those points on the feature? I guess just the extra
indirection (IS) versus more duplication (a PetscSection is heavier than an
IS).
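
To make the comparison concrete, a sketch of the two representations for a
hypothetical fault feature on points 3, 7, and 9 (names illustrative):

/* IS: just the set of points on the feature */
IS       faultIS;
PetscInt faultPoints[3] = {3, 7, 9};
ISCreateGeneral(PETSC_COMM_SELF,3,faultPoints,PETSC_COPY_VALUES,&faultIS);

/* PetscSection: the same points, plus a dof count and offset per point */
PetscSection faultSec;
PetscSectionCreate(PETSC_COMM_SELF,&faultSec);
PetscSectionSetChart(faultSec,0,10);
PetscSectionSetDof(faultSec,3,2);   /* 2 dofs live on point 3 */
PetscSectionSetDof(faultSec,7,2);
PetscSectionSetDof(faultSec,9,1);
PetscSectionSetUp(faultSec);        /* computes the offsets */

The IS costs one integer per point; the Section also carries dof and
offset arrays over its whole chart, which is the "heavier" part.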


> >
> > How do you intend to support many threads calling
> DMComplexVecGetClosure(),
> > perhaps each with multiple buffers, without expensive instructions?
>
> GetClosure() is not a problem. All threads are reading from the same
> topology, and
> writing to separate output buffers. SetClosure() needs to be locked.
>

You need separate buffers per thread, but we can make DMGetWorkArray() have
a separate cache per thread. VecGetArray() is still not trivially cheap
because we need atomics. Maybe I'm being overly paranoid, but I don't like
the granularity that imposes.
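
A sketch of the per-thread cache idea being debated (purely illustrative,
not PETSc code; the thread-id indexing and MAX_THREADS bound are
assumptions):

#define MAX_THREADS 64 /* hypothetical bound */
typedef struct {
  PetscScalar *buf;
  PetscInt     len;
} WorkCache;
static WorkCache cache[MAX_THREADS]; /* one slot per thread */
/* DMGetWorkArray() would index cache[thread_id], so the fast path needs
   no atomics; only growing a too-small buffer would allocate */

The atomics concern is specific to VecGetArray(), which must update
shared state visible to all threads.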


> >>
> >>
> >> These are for multi-field splitting/
> >
> >
> > Clearly, but why do they need to be in PetscSection? We already need DMs
> > associated with fields.
>

Answering my own question: the distinction between DM and Section is that
the latter can address a discontinuous part of a Vec so addressing by field
does not imply a copy.
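
A sketch of what "no copy" means here, assuming the section defines a
field 1 (names illustrative):

PetscScalar *a;
PetscInt     off;
VecGetArray(U,&a);
PetscSectionGetFieldOffset(section,p,1,&off); /* field 1's dofs at point p */
/* a[off], a[off+1], ... address field 1 in place; nothing is copied */
VecRestoreArray(U,&a);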


>
> Someone needs to know the splitting. If you had a DM, this info is
> still stuck down
> somewhere, so that DM is holding other DMs.
>

It's still not clear to me why they are needed. If PetscSection is
user-visible, can't they just be different Sections?
DMComplexGetFieldSection(dm,field,&fsection);



[petsc-dev] PetscSection

2012-11-08 Thread Jed Brown
On Thu, Nov 8, 2012 at 10:31 PM, Matthew Knepley  wrote:

> This has been redone to remove Jed's objection (see new manual section).
> It now
> uses DM temp arrays,
>
> for(c = cStart; c < cEnd; ++c) {
>PetscInt numVals;
>PetscScalar *vals;
>
>DMComplexVecGetClosure(dm, section, vec, c, &numVals, &vals);
>/* Compute residual */
>DMComplexVecRestoreClosure(dm, section, vec, c, &numVals, &vals);
>DMComplexVecSetClosure(dm, section, resvec, c, vals, ADD_VALUES);
> }
>

My other problem here is that VecGetArray() is potentially expensive (need
to check coherence with other vectors, might not be contiguous for some Vec
implementations).


> > 2. Add DMComplex-specific access routines so the user does not need to
> see
> > the PetscSection. Presumably this would be something like
> > DMComplexGetPointOffset(dm,c,&offset); // offset into owned part of
> global
> > vector?
> > DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector
>
> This is cumbersome because you must make another DM for every PetscSection
> you want to use.
>

What other sections do users need? A trace space?


> However, I am not completely against this now because
> DMComplexClone()
> is easy and accomplishes this.
>
> [...]

>
> I hate cursors. I had the same experience with them no matter what
> they are called
> (iterators, etc.). You need so much information that you end up with
> the whole object
> in this "external" thing. I think they never pay off.
>

How do you intend to support many threads calling DMComplexVecGetClosure(),
perhaps each with multiple buffers, without expensive instructions?


>
> These are for multi-field splitting/
>

Clearly, but why do they need to be in PetscSection? We already need DMs
associated with fields.



[petsc-dev] PetscSection

2012-11-08 Thread Barry Smith

  Why isn't it
> 
> struct _n_PetscSection {
>   PetscInt  firstIndex,lastIndex;   /* and why not just have this start at 
> zero and go to n? */
>   PetscInt *dofs;
>   PetscInt *offsets;
> };

   all that other clutter in there makes it mighty confusing as to the purpose 
of the beast. 

   Also why is the name PetscSection? If I have an array of numbers this 
just seems to organize them into a series of blocks (and allow traversals 
through the blocks), for example,

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
X X Y Y Y       Z Z       Z

   Now a block IS allows traversals through a series of blocks (but all blocks 
must be the same size). It is important to note that though you can think of 
a block IS as simply an implementation of an IS (that is, a collection of 
integers) that is more efficient than listing all the integers in an ISGeneral, 
it actually has additional information (each block is its own fundamental 
unit). 

  With this point of view, how is a PetscSection more than a block IS with 
variable block size? And why not implement it that way? 
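
   For concreteness, the picture above is (a sketch) three points whose 
blocks have sizes 2, 3, and 3:

PetscSection s;
PetscSectionCreate(PETSC_COMM_SELF,&s);
PetscSectionSetChart(s,0,3);
PetscSectionSetDof(s,0,2); /* X block: 2 values */
PetscSectionSetDof(s,1,3); /* Y block: 3 values */
PetscSectionSetDof(s,2,3); /* Z block: 3 values */
PetscSectionSetUp(s);      /* fills in the offsets */

which is exactly a block IS except that each point carries its own block 
size.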

   My hatred of more than a tiny number of abstract classes in PETSc has served 
me well over the years, and though I cheat sometimes to maintain this small 
number, I believe that having excessive classes merely shows you don't 
understand your domain well enough. 

   Barry



On Nov 8, 2012, at 10:18 PM, Jed Brown  wrote:

> This is inevitable, but may still not resolve anything. I'm sure Matt will 
> correct whatever below is incorrect. The main question is whether to promote 
> PetscSection to a PetscObject or to hide/replace it with something else.
> 
> So PetscSection is an unloved (by most) beastie that is sort of intermediate 
> between an IS and a DM. Its core functionality is to provide indirect 
> indexing with variable block sizes. It consists of two parts, the first of 
> which is less than a PetscLayout.
> 
> struct _n_PetscUniformSection {
>   MPI_Comm comm;
>   PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart, 
> pEnd) */
>   PetscInt numDof;   /* Describes layout of storage, point --> (constant 
> # of values, (p - pStart)*constant # of values) */
> };
> 
> numDof is always 1 and PetscUniformSection is never referenced from a *.c 
> file so this is really just a range of valid "point" values [pStart...pEnd).
> 
> struct _n_PetscSection {
>   struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
>   PetscInt *atlasDof; /* Describes layout of storage, 
> point --> # of values */
>   PetscInt *atlasOff; /* Describes layout of storage, 
> point --> offset into storage */
> 
> ^^ These two fields are the essential part. Each index in [pStart,pEnd) is 
> mapped to a block of size atlasDof[p-pStart] located at atlasOff[p-pStart]. 
> Matt composes these indirect maps for storing meshes and other indexing 
> tasks. This "unsorted offset mapping" is indeed used frequently in PETSc, 
> though outside of Matt's code, it's just raw arrays.
> 
> Users primarily encounter PetscSection when accessing unstructured meshes or 
> any time the number of degrees of freedom or locations of each "point" 
> (topological entity in the mesh) is not uniform. DMs do a similar thing 
> (provide more logical access to a Vec), but DMDAVecGetArray() is of course 
> only natural for a structured grid.
> 
> At the user interface level, provided the user doesn't need low-level access 
> to topology, I think there are three possibilities.
> 
> 1. To compute a residual at a cell, user does
> 
> VecGetArray(F,&f); // normal access to the Vec
> DMComplexGetHeightStratum(dm,0,&cstart,&cend); // "height 0" is like 
> codimension 0 with respect to the maximum topological dimension. Matt and I 
> have a philosophical debate over whether users prefer to think in 
> "strata" (that can skip dimensions, so "height 1" could be a face, an edge, 
> or a vertex depending on what is stored in the mesh) or in "topological 
> dimensions" (where codim=1 is a "face" for a 3D mesh and an edge for a 2D 
> mesh).
> 
> DMGetDefaultSection(dm,&section);
> for (c=cstart; c<cend; c++) {
>   PetscInt off;
>   PetscSectionGetOffset(section,c,&off);
>   f[off+0] = residual of first equation at this cell;
>   f[off+1] = residual of second equation;
>   // brushing under the rug how we access neighbor values
> }
> 
> The above is basically the current model. Matt has DMComplexGet/SetClosure(), 
> but that interface carries a sleeper reference to the Vec's array and he says 
> he plans to remove it.
> 
> 2. Add DMComplex-specific access routines so the user does not need to see 
> the PetscSection. Presumably this would be something like
> DMComplexGetPointOffset(dm,c,&offset); // offset into owned part of global 
> vector?
> DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector
> 
> 3. Access cursors (another object).
> DMComplexGetCursors(dm,Vec U,3,&uface,&ucellL,&ucellR);
> // similar for F
> DMComplexGetHeightStratum(dm,1,&fstart,&fend);

[petsc-dev] PetscSection

2012-11-08 Thread Jed Brown
This is inevitable, but may still not resolve anything. I'm sure Matt will
correct whatever below is incorrect. The main question is whether to
promote PetscSection to a PetscObject or to hide/replace it with something
else.

So PetscSection is an unloved (by most) beastie that is sort of
intermediate between an IS and a DM. Its core functionality is to provide
indirect indexing with variable block sizes. It consists of two parts, the
first of which is less than a PetscLayout.

struct _n_PetscUniformSection {
  MPI_Comm comm;
  PetscInt pStart, pEnd; /* The chart: all points are contained in [pStart,
pEnd) */
  PetscInt numDof;   /* Describes layout of storage, point -->
(constant # of values, (p - pStart)*constant # of values) */
};

numDof is always 1 and PetscUniformSection is never referenced from a *.c
file so this is really just a range of valid "point" values [pStart...pEnd).

struct _n_PetscSection {
  struct _n_PetscUniformSection atlasLayout;  /* Layout for the atlas */
  PetscInt *atlasDof; /* Describes layout of
storage, point --> # of values */
  PetscInt *atlasOff; /* Describes layout of
storage, point --> offset into storage */

^^ These two fields are the essential part. Each index in [pStart,pEnd) is
mapped to a block of size atlasDof[p-pStart] located at atlasOff[p-pStart].
Matt composes these indirect maps for storing meshes and other indexing
tasks. This "unsorted offset mapping" is indeed used frequently in PETSc,
though outside of Matt's code, it's just raw arrays.
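
In raw-array terms the atlas amounts to (a sketch; array stands for
whatever storage is being indexed):

PetscInt     dof   = atlasDof[p - pStart]; /* block size for point p */
PetscInt     off   = atlasOff[p - pStart]; /* block location for point p */
PetscScalar *block = &array[off];          /* the dof values for point p */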

Users primarily encounter PetscSection when accessing unstructured meshes
or any time the number of degrees of freedom or locations of each "point"
(topological entity in the mesh) is not uniform. DMs do a similar thing
(provide more logical access to a Vec), but DMDAVecGetArray() is of course
only natural for a structured grid.

At the user interface level, provided the user doesn't need low-level
access to topology, I think there are three possibilities.

1. To compute a residual at a cell, user does

VecGetArray(F,&f); // normal access to the Vec
DMComplexGetHeightStratum(dm,0,&cstart,&cend); // "height 0" is like
codimension 0 with respect to the maximum topological dimension. Matt and I
have a philosophical debate over whether users prefer to think in
"strata" (that can skip dimensions, so "height 1" could be a face, an edge,
or a vertex depending on what is stored in the mesh) or in "topological
dimensions" (where codim=1 is a "face" for a 3D mesh and an edge for a 2D
mesh).

DMGetDefaultSection(dm,&section);
for (c=cstart; c<cend; c++) {
  PetscInt off;
  PetscSectionGetOffset(section,c,&off);
  f[off+0] = residual of first equation at this cell;
  f[off+1] = residual of second equation;
  // brushing under the rug how we access neighbor values
}

[...]

  PetscSection  bc;           /* Describes constrained dofs, point --> #
local dofs which are constrained */
  PetscInt     *bcIndices;    /* Local indices for constrained dofs */

These BC fields are for doing masked updates, such as VecSetValuesSection()
which ignores constrained dofs.
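
A sketch of how that constraint metadata gets filled in, assuming point p
carries 3 dofs of which the first is constrained (and assuming the
constraint-setting calls match this snapshot of the API):

PetscInt cind[1] = {0};
PetscSectionSetDof(s,p,3);
PetscSectionSetConstraintDof(s,p,1);        /* 1 of the 3 dofs is constrained */
PetscSectionSetUp(s);
PetscSectionSetConstraintIndices(s,p,cind); /* it is local dof 0 */

VecSetValuesSection() can then skip that slot when updating point p.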

  PetscInt  refcnt;   /* Vecs obtained with
VecDuplicate() and from MatGetVecs() reuse map of input object */
  PetscBool setup;

  PetscInt  numFields;/* The number of fields
making up the degrees of freedom */
  const char  **fieldNames;   /* The field names */
  PetscInt *numFieldComponents; /* The number of
components in each field */
  PetscSection *field;/* A section describing the
layout and constraints for each field */
};
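
A sketch of how the field metadata above would be populated, assuming a
two-field problem (illustrative names and sizes):

PetscSectionSetNumFields(s,2);
PetscSectionSetFieldName(s,0,"velocity");
PetscSectionSetFieldComponents(s,0,3);  /* 3 components in field 0 */
PetscSectionSetFieldName(s,1,"pressure");
PetscSectionSetFieldComponents(s,1,1);
PetscSectionSetFieldDof(s,p,0,3);       /* field 0 dofs at point p */
PetscSectionSetFieldDof(s,p,1,1);       /* field 1 dofs at point p */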