Re: [petsc-users] DMPlex Distribution

2019-09-19 Thread Mark Adams via petsc-users
I think you are fine with DMForest. I just mentioned ForestClaw for
background. It has a bunch of hyperbolic stuff in there that is specialized.

On Thu, Sep 19, 2019 at 8:16 AM Mohammad Hassan wrote:

> In fact, I would create my base mesh in DMPlex and use DMForest to
> construct the non-conformal meshing obtained by my own block-based AMR
>  functions. ForestClaw is based on p4est. However, I may need to implement
> the AMR algorithm on DM in p4est library and then convert it to DMPlex. Do
> you think DMForest alone will not allow me to create the AMR for DMPlex?
>
>
>
> Thanks
>
> Amir
>
>
>
> *From:* Mark Adams [mailto:mfad...@lbl.gov]
> *Sent:* Thursday, September 19, 2019 7:23 PM
> *To:* Mohammad Hassan 
> *Cc:* Matthew Knepley ; PETSc users list <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> Note, Forest gives you individual elements at the leaves. Donna Calhoun, a
> former Chombo user, has developed a block structured solver on p4est (
> https://math.boisestate.edu/~calhoun/ForestClaw/index.html), but I would
> imagine that you could just take the Plex that DMForest creates and just
> call DMRefine(...) on it to get a block structured AMR mesh.
>
>
>
> On Wed, Sep 18, 2019 at 11:02 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Thanks for your suggestion, Matthew. I will certainly look into DMForest
> for refining of my base DMPlex dm.
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 10:35 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Wed, Sep 18, 2019 at 10:27 AM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> I want to implement block-based AMR, which turns my base conformal mesh to
> non-conformal.  My question is how DMPlex renders a mesh that it cannot
> support non-conformal meshes.
>
>
>
> Mark misspoke. Plex _does_ support geometrically non-conforming meshing,
> e.g. "hanging nodes". The easiest way to
>
> use Plex this way is to use DMForest, which uses Plex underneath.
>
>
>
> There are excellent p4est tutorials. What you would do is create your
> conformal mesh, using Plex if you want, and
>
> use that for the p4est base mesh (you would have the base mesh be the
> forest roots).
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> If DMPlex does not work, I will try to use DMForest.
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 9:50 PM
> *To:* Mohammad Hassan 
> *Cc:* Mark Adams ; PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> If DMPlex does not support, I may need to use PARAMESH or CHOMBO. Is there
> any way that we can construct non-conformal layout for DM in petsc?
>
>
>
> Lets see. Plex does support geometrically non-conforming meshes. This is
> how we support p4est. However, if
>
> you want that, you can just use DMForest I think. So you jsut want
> structured AMR?
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
>
>
> *From:* Mark Adams [mailto:mfad...@lbl.gov]
> *Sent:* Wednesday, September 18, 2019 9:23 PM
> *To:* Mohammad Hassan 
> *Cc:* Matthew Knepley ; PETSc users list <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> I'm puzzled. It sounds like you are doing non-conforming AMR (structured
> block AMR), but Plex does not support that.
>
>
>
> On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Mark is  right. The functionality of AMR does not relate to
> parallelization of that. The vector size (global or local) does not
> conflict with AMR functions.
>
> Thanks
>
>
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 12:59 AM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> Thanks for suggestion. I am going to use a block-based amr. I think I need
> to know exactly the mesh distribution of blocks across different processors
> for implementation of amr.
>
>
>
> Hi Amir,
>
>
>
> How are you using Plex if the block-AMR is coming from somewhere else?
> This will help
>
> me tell you what would be best.

Re: [petsc-users] DMPlex Distribution

2019-09-19 Thread Mohammad Hassan via petsc-users
In fact, I would create my base mesh in DMPlex and use DMForest to construct 
the non-conformal mesh produced by my own block-based AMR functions. 
ForestClaw is based on p4est. However, I may need to implement the AMR 
algorithm on the DM in the p4est library and then convert it to DMPlex. Do you think 
DMForest alone will not let me do the AMR for DMPlex?
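Roughly, the setup I have in mind would look something like this (untested sketch; it assumes dmBase is my conformal DMPlex and that PETSc was configured with p4est):

  DM             forest;
  PetscErrorCode ierr;

  /* dmBase is the conformal base mesh, e.g. built with DMPlexCreateFromDAG() */
  ierr = DMCreate(PETSC_COMM_WORLD, &forest);CHKERRQ(ierr);
  ierr = DMSetType(forest, DMP4EST);CHKERRQ(ierr);          /* DMP8EST for a 3D base mesh */
  ierr = DMForestSetBaseDM(forest, dmBase);CHKERRQ(ierr);   /* base cells become the forest roots */
  ierr = DMForestSetInitialRefinement(forest, 0);CHKERRQ(ierr);
  ierr = DMSetUp(forest);CHKERRQ(ierr);
  /* after marking cells for refinement, DMConvert(forest, DMPLEX, &plex)
     would give the non-conformal mesh back as a DMPlex */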

 

Thanks

Amir

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Thursday, September 19, 2019 7:23 PM
To: Mohammad Hassan 
Cc: Matthew Knepley ; PETSc users list 

Subject: Re: [petsc-users] DMPlex Distribution

 

Note, Forest gives you individual elements at the leaves. Donna Calhoun, a 
former Chombo user, has developed a block structured solver on p4est 
(https://math.boisestate.edu/~calhoun/ForestClaw/index.html), but I would 
imagine that you could just take the Plex that DMForest creates and just call 
DMRefine(...) on it to get a block structured AMR mesh.

 

On Wed, Sep 18, 2019 at 11:02 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Thanks for your suggestion, Matthew. I will certainly look into DMForest for 
refining of my base DMPlex dm.

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 10:35 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 10:27 AM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

I want to implement block-based AMR, which turns my base conformal mesh to 
non-conformal.  My question is how DMPlex renders a mesh that it cannot support 
non-conformal meshes. 

 

Mark misspoke. Plex _does_ support geometrically non-conforming meshing, e.g. 
"hanging nodes". The easiest way to

use Plex this way is to use DMForest, which uses Plex underneath.

 

There are excellent p4est tutorials. What you would do is create your conformal 
mesh, using Plex if you want, and

use that for the p4est base mesh (you would have the base mesh be the forest 
roots).

 

  Thanks,

 

 Matt

 

If DMPlex does not work, I will try to use DMForest.  

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 9:50 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Mark Adams <mfad...@lbl.gov>; PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

If DMPlex does not support, I may need to use PARAMESH or CHOMBO. Is there any 
way that we can construct non-conformal layout for DM in petsc?

 

Let's see. Plex does support geometrically non-conforming meshes. This is how we 
support p4est. However, if you want that, you can just use DMForest, I think. 
So you just want structured AMR?

 

  Thanks,

 

Matt

 

 

From: Mark Adams [mailto:mfad...@lbl.gov]
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Matthew Knepley <knep...@gmail.com>; PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Mark is  right. The functionality of AMR does not relate to parallelization of 
that. The vector size (global or local) does not conflict with AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help

me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think, as Mark says, that you are using "block size" in a different way than 
PETSc does.

 

  Thanks,

 

Matt

 

Thanks

Amir

 


Re: [petsc-users] DMPlex Distribution

2019-09-19 Thread Mark Adams via petsc-users
Note, Forest gives you individual elements at the leaves. Donna Calhoun, a
former Chombo user, has developed a block structured solver on p4est (
https://math.boisestate.edu/~calhoun/ForestClaw/index.html), but I would
imagine that you could just take the Plex that DMForest creates and just
call DMRefine(...) on it to get a block structured AMR mesh.
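Something along these lines is what I mean (untested sketch; forest stands for an already set-up DMForest):

  DM             plex, plexFine;
  PetscErrorCode ierr;

  /* pull a Plex view out of the forest, then refine it regularly */
  ierr = DMConvert(forest, DMPLEX, &plex);CHKERRQ(ierr);
  ierr = DMRefine(plex, PETSC_COMM_WORLD, &plexFine);CHKERRQ(ierr);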

On Wed, Sep 18, 2019 at 11:02 AM Mohammad Hassan via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Thanks for your suggestion, Matthew. I will certainly look into DMForest
> for refining of my base DMPlex dm.
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 10:35 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Wed, Sep 18, 2019 at 10:27 AM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> I want to implement block-based AMR, which turns my base conformal mesh to
> non-conformal.  My question is how DMPlex renders a mesh that it cannot
> support non-conformal meshes.
>
>
>
> Mark misspoke. Plex _does_ support geometrically non-conforming meshing,
> e.g. "hanging nodes". The easiest way to
>
> use Plex this way is to use DMForest, which uses Plex underneath.
>
>
>
> There are excellent p4est tutorials. What you would do is create your
> conformal mesh, using Plex if you want, and
>
> use that for the p4est base mesh (you would have the base mesh be the
> forest roots).
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> If DMPlex does not work, I will try to use DMForest.
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 9:50 PM
> *To:* Mohammad Hassan 
> *Cc:* Mark Adams ; PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> If DMPlex does not support, I may need to use PARAMESH or CHOMBO. Is there
> any way that we can construct non-conformal layout for DM in petsc?
>
>
>
> Lets see. Plex does support geometrically non-conforming meshes. This is
> how we support p4est. However, if
>
> you want that, you can just use DMForest I think. So you jsut want
> structured AMR?
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
>
>
> *From:* Mark Adams [mailto:mfad...@lbl.gov]
> *Sent:* Wednesday, September 18, 2019 9:23 PM
> *To:* Mohammad Hassan 
> *Cc:* Matthew Knepley ; PETSc users list <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> I'm puzzled. It sounds like you are doing non-conforming AMR (structured
> block AMR), but Plex does not support that.
>
>
>
> On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Mark is  right. The functionality of AMR does not relate to
> parallelization of that. The vector size (global or local) does not
> conflict with AMR functions.
>
> Thanks
>
>
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 12:59 AM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> Thanks for suggestion. I am going to use a block-based amr. I think I need
> to know exactly the mesh distribution of blocks across different processors
> for implementation of amr.
>
>
>
> Hi Amir,
>
>
>
> How are you using Plex if the block-AMR is coming from somewhere else?
> This will help
>
> me tell you what would be best.
>
>
>
> And as a general question, can we set block size of vector on each rank?
>
>
>
> I think as Mark says that you are using "blocksize" is a different way
> than PETSc.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Tuesday, September 17, 2019 11:04 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi
>
> I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set
> the distribution across processors manually. I mean, how can I set the
> share of dm on each rank (local)?
>
>
>
> You could make a Shell partitioner and tell it the entire partition:
>
>

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
Thanks for your suggestion, Matthew. I will certainly look into DMForest for 
refining my base DMPlex DM.

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 10:35 PM
To: Mohammad Hassan 
Cc: PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 10:27 AM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

I want to implement block-based AMR, which turns my base conformal mesh to 
non-conformal.  My question is how DMPlex renders a mesh that it cannot support 
non-conformal meshes. 

 

Mark misspoke. Plex _does_ support geometrically non-conforming meshing, e.g. 
"hanging nodes". The easiest way to

use Plex this way is to use DMForest, which uses Plex underneath.

 

There are excellent p4est tutorials. What you would do is create your conformal 
mesh, using Plex if you want, and

use that for the p4est base mesh (you would have the base mesh be the forest 
roots).

 

  Thanks,

 

 Matt

 

If DMPlex does not work, I will try to use DMForest.  

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 9:50 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Mark Adams <mfad...@lbl.gov>; PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

If DMPlex does not support, I may need to use PARAMESH or CHOMBO. Is there any 
way that we can construct non-conformal layout for DM in petsc?

 

Lets see. Plex does support geometrically non-conforming meshes. This is how we 
support p4est. However, if

you want that, you can just use DMForest I think. So you jsut want structured 
AMR?

 

  Thanks,

 

Matt

 

 

From: Mark Adams [mailto:mfad...@lbl.gov]
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Matthew Knepley <knep...@gmail.com>; PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Mark is  right. The functionality of AMR does not relate to parallelization of 
that. The vector size (global or local) does not conflict with AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help

me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think as Mark says that you are using "blocksize" is a different way than 
PETSc.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
I want to implement block-based AMR, which turns my base conformal mesh into a 
non-conformal one. My question was how DMPlex can represent such a mesh if it cannot 
support non-conformal meshes. If DMPlex does not work, I will try to use DMForest.

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 9:50 PM
To: Mohammad Hassan 
Cc: Mark Adams ; PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

If DMPlex does not support, I may need to use PARAMESH or CHOMBO. Is there any 
way that we can construct non-conformal layout for DM in petsc?

 

Lets see. Plex does support geometrically non-conforming meshes. This is how we 
support p4est. However, if

you want that, you can just use DMForest I think. So you jsut want structured 
AMR?

 

  Thanks,

 

Matt

 

 

From: Mark Adams [mailto:mfad...@lbl.gov]
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Matthew Knepley <knep...@gmail.com>; PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Mark is  right. The functionality of AMR does not relate to parallelization of 
that. The vector size (global or local) does not conflict with AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help

me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think as Mark says that you are using "blocksize" is a different way than 
PETSc.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 



Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
If DMPlex does not support this, I may need to use PARAMESH or Chombo. Is there any 
way that we can construct a non-conformal layout for a DM in PETSc?

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan 
Cc: Matthew Knepley ; PETSc users list 

Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Mark is  right. The functionality of AMR does not relate to parallelization of 
that. The vector size (global or local) does not conflict with AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help

me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think as Mark says that you are using "blocksize" is a different way than 
PETSc.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 



Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mark Adams via petsc-users
I'm puzzled. It sounds like you are doing non-conforming AMR (structured
block AMR), but Plex does not support that.

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Mark is  right. The functionality of AMR does not relate to
> parallelization of that. The vector size (global or local) does not
> conflict with AMR functions.
>
> Thanks
>
>
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 12:59 AM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> Thanks for suggestion. I am going to use a block-based amr. I think I need
> to know exactly the mesh distribution of blocks across different processors
> for implementation of amr.
>
>
>
> Hi Amir,
>
>
>
> How are you using Plex if the block-AMR is coming from somewhere else?
> This will help
>
> me tell you what would be best.
>
>
>
> And as a general question, can we set block size of vector on each rank?
>
>
>
> I think as Mark says that you are using "blocksize" is a different way
> than PETSc.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Tuesday, September 17, 2019 11:04 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi
>
> I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set
> the distribution across processors manually. I mean, how can I set the
> share of dm on each rank (local)?
>
>
>
> You could make a Shell partitioner and tell it the entire partition:
>
>
>
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html
>
>
>
> However, I would be surprised if you could do this. It is likely that you
> just want to mess with the weights in ParMetis.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] DMPlex Distribution

2019-09-17 Thread Mohammad Hassan via petsc-users
Mark is right. The functionality of the AMR does not depend on how it is parallelized. 
The vector size (global or local) does not conflict with the AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan 
Cc: PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help

me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think as Mark says that you are using "blocksize" is a different way than 
PETSc.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 



Re: [petsc-users] DMPlex Distribution

2019-09-17 Thread Mohammad Hassan via petsc-users
Sorry if I confused you. 

In fact, I want to adapt the grid using a block-based AMR technique. By that, I mean 
I will have the same stencil for all points inside a block. For the AMR and its 
parallelization to work well, I need to know the location of points both in my 
working vectors and in the vectors obtained from DMPlex. That's why I think I need 
to specify the AMR blocks across processors.

 

Thanks

Amir

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Wednesday, September 18, 2019 12:43 AM
To: Mohammad Hassan 
Cc: Matthew Knepley ; PETSc users list 

Subject: Re: [petsc-users] DMPlex Distribution

 

 

 

On Tue, Sep 17, 2019 at 12:07 PM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Thanks for suggestion. I am going to use a block-based amr. I think I need to 
know exactly the mesh distribution of blocks across different processors for 
implementation of amr.

And as a general question, can we set block size of vector on each rank?

 

I don't understand what you mean by AMR in this context exactly. And I'm not 
sure what you mean by blocks size. Block size is the number of dof per vertex 
(eg, 3) and it is a constant for a vector.

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 



Re: [petsc-users] DMPlex Distribution

2019-09-17 Thread Mark Adams via petsc-users
On Tue, Sep 17, 2019 at 12:07 PM Mohammad Hassan via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Thanks for suggestion. I am going to use a block-based amr. I think I need
> to know exactly the mesh distribution of blocks across different processors
> for implementation of amr.
>
> And as a general question, can we set block size of vector on each rank?
>

I don't understand what you mean by AMR in this context exactly. And I'm
not sure what you mean by block size. The block size is the number of dofs per
vertex (e.g., 3), and it is constant for a vector.
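For example (sizes here are made up, just to illustrate the PETSc meaning of the term):

  Vec            v;
  PetscErrorCode ierr;

  ierr = VecCreate(PETSC_COMM_WORLD, &v);CHKERRQ(ierr);
  ierr = VecSetSizes(v, 300, PETSC_DECIDE);CHKERRQ(ierr);  /* e.g. 100 vertices x 3 dof on this rank */
  ierr = VecSetBlockSize(v, 3);CHKERRQ(ierr);              /* one block size for the whole Vec, not per rank */
  ierr = VecSetFromOptions(v);CHKERRQ(ierr);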


> Thanks
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Tuesday, September 17, 2019 11:04 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi
>
> I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set
> the distribution across processors manually. I mean, how can I set the
> share of dm on each rank (local)?
>
>
>
> You could make a Shell partitioner and tell it the entire partition:
>
>
>
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html
>
>
>
> However, I would be surprised if you could do this. It is likely that you
> just want to mess with the weights in ParMetis.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] DMPlex Distribution

2019-09-17 Thread Mohammad Hassan via petsc-users
Thanks for the suggestion. I am going to use block-based AMR. I think I need to 
know exactly how the blocks are distributed across the different processors to 
implement the AMR.

And as a general question, can we set the block size of a vector on each rank?

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan 
Cc: PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually. I mean, how can I set the share of dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.
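A bare-bones sketch of the shell-partitioner route (untested; the two-rank sizes/points arrays below are made-up example values, and the exact calling convention is in the man page above):

  PetscPartitioner part;
  PetscInt         sizes[2]  = {4, 4};                    /* number of cells to give to rank 0 and rank 1 */
  PetscInt         points[8] = {0, 1, 2, 3, 4, 5, 6, 7};  /* cell numbers, grouped by target rank */
  DM               dmDist    = NULL;
  PetscErrorCode   ierr;

  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);CHKERRQ(ierr);
  ierr = PetscPartitionerShellSetPartition(part, 2, sizes, points);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);

The overlap argument of DMPlexDistribute can then be raised if ghost cells are wanted on top of the prescribed partition.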

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/> 



Re: [petsc-users] DMPlex distribution and loops over cells/faces for finite volumes

2017-11-09 Thread Matthew Knepley
On Thu, Nov 9, 2017 at 3:08 AM, Matteo Semplice wrote:

> Hi.
> I am using a DMPLex to store a grid (at present 2d symplicial but will
> need to work more in general), but after distributing it, I am finding it
> hard to organize my loops over the cells/faces.
>
> First things first: the mesh distribution. I do
>
>   DMPlexSetAdjacencyUseCone(dm, PETSC_TRUE);
>   DMPlexSetAdjacencyUseClosure(dm, PETSC_FALSE);
>

This gives you adjacent cells, but not their boundary. Usually what you
want for FV.


>   DMPlexDistribute(dm, 1, NULL, );
>
> The 1 in DMPlexDistribute is correct to give me 1 layer of ghost cells,
> right?
>

Yes.


> Using 2 instead of 1 should add more internal ghosts, right? Actually,
> this errors out with petsc3.7.7 from debian/stable... If you deem it worth,
> I will test again with petsc3.8.
>

It should not error. If possible, send me the mesh please. I have some
tests where 2 is successful.


> Secondly, looping over cells. I do
>
>   DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);
>   DMPlexGetHybridBounds(dm, &cPhysical, NULL, NULL, NULL);
>   for (int c = cStart; c < cEnd; ++c) {
>   if ((cPhysical>=0)&&(c>=cPhysical))
> {code for ghost cells outside physical domain boundary}
>   else if ( ??? )
> {code for ghost cells}
>

I am not sure why you want to draw this distinction, but you can check to
see if the cell is present in the pointSF. If it is, then it's not owned by
the process.
Here is the kind of FV loop I do (for example in TS ex11):


https://bitbucket.org/petsc/petsc/src/f9fed41fd6c28d171f9c644f97b15591be9df7d6/src/snes/utils/dmplexsnes.c?at=master=file-view-default#dmplexsnes.c-1634
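Paraphrased from memory (not the actual ex11 source), the pattern is roughly:

  DMLabel        ghostLabel;
  PetscInt       fStart, fEnd, f;
  PetscErrorCode ierr;

  /* dm is the distributed DMPlex with ghost cells */
  ierr = DMGetLabel(dm, "ghost", &ghostLabel);CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd);CHKERRQ(ierr);  /* height 1 = faces */
  for (f = fStart; f < fEnd; ++f) {
    PetscInt ghost = -1;

    if (ghostLabel) {ierr = DMLabelGetValue(ghostLabel, f, &ghost);CHKERRQ(ierr);}
    if (ghost >= 0) continue;  /* skip faces this rank should not integrate */
    /* compute the flux through face f and add it to the cells in its support */
  }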


>   else
> {code for cells owned by this process}
>   }
>
>   What should I use as a test for ghost cells outside processor boundary?
> I have seen that (on a 2d symplicial mesh) reading the ghost label with
> DMGetLabelValue(dm, "ghost", c, ) gives the value -1 for inner cells
> and domain boundary ghosts but 2 for processor boundary ghosts. Is this the
> correct test? What is the meaning of ghost=2? In general, should I test for
> any value >=0?
>

Yes, see the link I gave.


> Lastly, since I will do finite volumes, I'd like to loop over the faces to
> compute fluxes.
> DMPlexGetHeightStratum(dm, 1, , )
> gives me the start/endpoints for faces, but they include also faces
> between ghost cells that are therefore external to the cells owned by the
> processor. I see that for faces on processor boundary, ghost=-1 on one
> process and ghost=2 on the other process, which I like. However ghost=-1
> also on faces between ghosts cells and therefore to compute fluxes I should
> test for the ghost value of the face and for the ghost value of the cells
> on both sides... Is there a better way to loop over faces?
>

See the link.

 Thanks,

   Matt


> Thanks in advance for any suggestion.
>
> Matteo
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] DMPlex distribution with custom adjacency

2017-08-04 Thread Karl Rupp



Coming back to this after a while.  It would still be good if I
could provide a callback for user-defined adjacency.  Does

https://bitbucket.org/petsc/petsc/pull-requests/690/plex-support-user-defined-adjacencies-via/diff


look suitable?


Yes. Karl, could you merge?


Now in next.

Best regards,
Karli


Re: [petsc-users] DMPlex distribution with custom adjacency

2017-08-04 Thread Matthew Knepley
On Fri, Aug 4, 2017 at 9:37 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 6 Jun 2017, at 09:01, Lawrence Mitchell wrote:
> >
> >> If you do really want to prune them, then I guess overriding the
> DMPlexGetAdjacency() as you propose is probably the best way. I would
> >> be willing to put it in. Please send me a reminder email since this
> week is pretty heinous for me.
>
> Coming back to this after a while.  It would still be good if I could
> provide a callback for user-defined adjacency.  Does
> https://bitbucket.org/petsc/petsc/pull-requests/690/plex-support-user-defined-adjacencies-via/diff look suitable?
>

Yes. Karl, could you merge?

  Thanks,

Matt


> Cheers,
>
> Lawrence




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with custom adjacency

2017-08-04 Thread Lawrence Mitchell

> On 6 Jun 2017, at 09:01, Lawrence Mitchell wrote:
> 
>> If you do really want to prune them, then I guess overriding the 
>> DMPlexGetAdjacency() as you propose is probably the best way. I would
>> be willing to put it in. Please send me a reminder email since this week is 
>> pretty heinous for me.

Coming back to this after a while.  It would still be good if I could provide a 
callback for user-defined adjacency.  Does 
https://bitbucket.org/petsc/petsc/pull-requests/690/plex-support-user-defined-adjacencies-via/diff
 look suitable?
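For anyone finding this later, a sketch of the kind of callback the PR is about (the registration call is named from memory, so please check it against your PETSc version; duplicate removal is omitted):

  static PetscErrorCode UserAdjacency(DM dm, PetscInt p, PetscInt *adjSize, PetscInt adj[], void *ctx)
  {
    const PetscInt *support;
    PetscInt        supportSize, s, n = 0;
    PetscErrorCode  ierr;

    /* adjacency of p = closure(support(p)) */
    ierr = DMPlexGetSupportSize(dm, p, &supportSize);CHKERRQ(ierr);
    ierr = DMPlexGetSupport(dm, p, &support);CHKERRQ(ierr);
    for (s = 0; s < supportSize; ++s) {
      PetscInt *closure = NULL, closureSize, c;

      ierr = DMPlexGetTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
      for (c = 0; c < closureSize; ++c) adj[n++] = closure[2*c];  /* closure holds (point, orientation) pairs */
      ierr = DMPlexRestoreTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
    }
    *adjSize = n;  /* a real version would also unique-ify and respect the incoming *adjSize bound */
    return 0;
  }

  /* and then, once the DM exists: */
  ierr = DMPlexSetAdjacencyUser(dm, UserAdjacency, NULL);CHKERRQ(ierr);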

Cheers,

Lawrence

Re: [petsc-users] DMPlex distribution with custom adjacency

2017-06-06 Thread Lawrence Mitchell

> On 5 Jun 2017, at 23:01, Matthew Knepley  wrote:
> 
> To do a FEM integral on a facet f I need:
> 
> i) to evaluate coefficients at quadrature points (on the facet)
> ii) to evaluate basis functions at quadrature points (on the facet)
> 
> for (i), I need all the dofs in closure(support(f)).
> 
> So this is a jump term, since I need both sides. Is it nonlinear? If its 
> linear, things are easy
> and just compute from both sides and add, but of course that does not work 
> for nonlinear things.

I can't guarantee what downstream users will write.  Most of the time it will 
probably be nonlinear in some way.  Hence wanting to compute on the facet 
(getting both sides of the contribution at once).  Rather than spinning over 
cells, computing the one-sided facet contribution and adding in.


> 
> for (ii), I just need the definition of the finite element.
> 
> So now, my model for how I want to global assembly of a facet integral is:
> 
> loop over all facets:
>gather from global coefficient to local data
>evaluate coefficient at quad points
>perform integral
>local to global
> 
> In parallel, I just make a partition of the facets (so that each facet is 
> integrated exactly once).
> 
> OK, so what data do I need in parallel?
> 
> Exactly the dofs that correspond to closure(support(facet)) for all owned 
> facets in my partition.
> 
> So I was hoping to be able to grow a distributed cell partition by exactly 
> that means: add in those remote cells which are in the support of owned 
> facets (I'm happy if this is symmetrically grown, although I think I can do 
> it with one-sided growth).
> 
> So that's my rationale for wanting this "strange" adjacency.  I can get 
> enough adjacency by using the current FEM adjacency and filtering which 
> entities I iterate over, but it seems a bit wasteful.
> 
> I see that you have edges you do not necessarily want, but do they mess up 
> your loop? It seems like you will not encounter them looping over facets.
> This is exactly what happens to me in FV, where I just ignore them.

Remember in 2D the edges are the facets.  I haven't checked what happens in 3D 
(but I expect it will be similar), because I can't draw 3D meshes.

> If you do really want to prune them, then I guess overriding the 
> DMPlexGetAdjacency() as you propose is probably the best way. I would
> be willing to put it in. Please send me a reminder email since this week is 
> pretty heinous for me.

Sure.  I think this is quite a cute usage because I can make the ghost region 
"one-sided" quite easily by only growing the adjacency through the facets on 
the ranks that I need.  So the halo exchange between two processes is not 
symmetric.  The code I sketched that did this seems to work properly, once the 
adjacency computation is right.

Lawrence

Re: [petsc-users] DMPlex distribution with custom adjacency

2017-06-05 Thread Matthew Knepley
On Mon, Jun 5, 2017 at 12:18 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 2 Jun 2017, at 05:09, Matthew Knepley  wrote:
> >
> >
> > Coming back to this, I think I understand the problem a little better.
> >
> > Consider this mesh:
> >
> > +----+
> > |\ 3 |
> > | \  |
> > |2 \ |
> > |   \|
> > +----+
> > |\ 1 |
> > | \  |
> > |0 \ |
> > |   \|
> > +----+
> >
> > Let's say I run on 3 processes and the initial (non-overlapped) cell
> > partition is:
> >
> > rank 0: cell 0
> > rank 1: cell 1 & 2
> > rank 2: cell 3
> >
> > Now I'd like to grow the overlap such that any cell I can see through
> > a facet (and its closure) lives in the overlap.
> >
> > So great, I just need a new adjacency relation that gathers
> > closure(support(point))
> >
> > But, that isn't right, because now on rank 0, I will get a mesh that
> > looks like:
> >
> > I do not understand why you think its not right. Toby and I are trying
> to push a formalism for
> > this understanding, in https://arxiv.org/abs/1508.02470. So you say
> that if sigma is a dual
> > basis function associated with point p, then the support of its matching
> psi, sigma(psi) = 1
> > in the biorthogonal bases, is exactly star(p).
> >
> > So, if you have no sigma sitting on your vertex, who cares if you put
> that extra edge and node
> > in. It will not affect the communication pattern for dofs. If you do,
> then shouldn't you be including
> > that edge?
>
> Hmm, I think we are talking at cross-purposes.  Let me try and explain
> again where I am coming from:
>
> To do a FEM integral on some cell c I need:
>
> i) to evaluate coefficients at quadrature points (on the cell)
> ii) to evaluate basis functions at quadrature points (on the cell)
>
> for (i), I need all the dofs in closure(c).
> for (ii), I just need the definition of the finite element.
>
> To do a FEM integral on a facet f I need:
>
> i) to evaluate coefficients at quadrature points (on the facet)
> ii) to evaluate basis functions at quadrature points (on the facet)
>
> for (i), I need all the dofs in closure(support(f)).
>

So this is a jump term, since I need both sides. Is it nonlinear? If it's
linear, things are easy: just compute from both sides and add, but of course
that does not work for nonlinear things.


> for (ii), I just need the definition of the finite element.
>
> So now, my model for how I want to global assembly of a facet integral is:
>
> loop over all facets:
>gather from global coefficient to local data
>evaluate coefficient at quad points
>perform integral
>local to global
>
> In parallel, I just make a partition of the facets (so that each facet is
> integrated exactly once).
>
> OK, so what data do I need in parallel?
>
> Exactly the dofs that correspond to closure(support(facet)) for all owned
> facets in my partition.
>
> So I was hoping to be able to grow a distributed cell partition by exactly
> that means: add in those remote cells which are in the support of owned
> facets (I'm happy if this is symmetrically grown, although I think I can do
> it with one-sided growth).
>
> So that's my rationale for wanting this "strange" adjacency.  I can get
> enough adjacency by using the current FEM adjacency and filtering which
> entities I iterate over, but it seems a bit wasteful.
>

I see that you have edges you do not necessarily want, but do they mess up
your loop? It seems like you will not encounter them looping over facets.
This is exactly what happens to me in FV, where I just ignore them.

If you do really want to prune them, then I guess overriding the
DMPlexGetAdjacency() as you propose is probably the best way. I would
be willing to put it in. Please send me a reminder email since this week is
pretty heinous for me.

  Thanks,

Matt


> For the currently implemented adjacencies, however, the code definitely
> assumes in various places that the closure of the cells on a partition
> covers all the points.  And doing, say, 
> DMPlexSetAdjacencyUseCone/Closure(PETSC_FALSE)
> results in meshes where that is not true.
>
> See the below:
>
> Lawrence
>
> $ mpiexec -n 3 ./bork
> [0] face 7 has support: 0
> [0] face 8 has support: 0
> [0] face 9 has support: 0 1
> [0] face 10 has support: 1
> [0] face 11 has support: 1
> [0] face 12 has support:
> [1] face 10 has support: 0 2
> [1] face 11 has support: 0 1
> [1] face 12 has support: 0
> [1] face 13 has support: 1
> [1] face 14 has support: 2
> [1] face 15 has support: 2
> [1] face 16 has support: 1 3
> [1] face 17 has support: 3
> [1] face 18 has support: 3
> [2] face 7 has support: 0 1
> [2] face 8 has support: 0
> [2] face 9 has support: 0
> [2] face 10 has support: 1
> [2] face 11 has support:
> [2] face 12 has support: 1[2]PETSC ERROR: - Error
> Message --
> [2]PETSC ERROR: Petsc has generated inconsistent data
> [2]PETSC ERROR: Number of depth 2 faces 6 does not match permuted 

Re: [petsc-users] DMPlex distribution with custom adjacency

2017-06-05 Thread Lawrence Mitchell

> On 2 Jun 2017, at 05:09, Matthew Knepley  wrote:
> 
> 
> Coming back to this, I think I understand the problem a little better.
> 
> Consider this mesh:
> 
> +----+
> |\ 3 |
> | \  |
> |2 \ |
> |   \|
> +----+
> |\ 1 |
> | \  |
> |0 \ |
> |   \|
> +----+
> 
> Let's say I run on 3 processes and the initial (non-overlapped) cell
> partition is:
> 
> rank 0: cell 0
> rank 1: cell 1 & 2
> rank 2: cell 3
> 
> Now I'd like to grow the overlap such that any cell I can see through
> a facet (and its closure) lives in the overlap.
> 
> So great, I just need a new adjacency relation that gathers
> closure(support(point))
> 
> But, that isn't right, because now on rank 0, I will get a mesh that
> looks like:
> 
> I do not understand why you think its not right. Toby and I are trying to 
> push a formalism for
> this understanding, in https://arxiv.org/abs/1508.02470. So you say that if 
> sigma is a dual
> basis function associated with point p, then the support of its matching psi, 
> sigma(psi) = 1
> in the biorthogonal bases, is exactly star(p).
> 
> So, if you have no sigma sitting on your vertex, who cares if you put that 
> extra edge and node
> in. It will not affect the communication pattern for dofs. If you do, then 
> shouldn't you be including
> that edge?

Hmm, I think we are talking at cross-purposes.  Let me try and explain again 
where I am coming from:

To do a FEM integral on some cell c I need:

i) to evaluate coefficients at quadrature points (on the cell)
ii) to evaluate basis functions at quadrature points (on the cell)

for (i), I need all the dofs in closure(c).
for (ii), I just need the definition of the finite element.

To do a FEM integral on a facet f I need: 

i) to evaluate coefficients at quadrature points (on the facet)
ii) to evaluate basis functions at quadrature points (on the facet)

for (i), I need all the dofs in closure(support(f)).
for (ii), I just need the definition of the finite element.

So now, my model for how I want to do the global assembly of a facet integral is:

loop over all facets:
   gather from global coefficient to local data
   evaluate coefficient at quad points
   perform integral
   local to global

In parallel, I just make a partition of the facets (so that each facet is 
integrated exactly once).

OK, so what data do I need in parallel?

Exactly the dofs that correspond to closure(support(facet)) for all owned 
facets in my partition.
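Concretely, the per-facet gather I have in mind is something like this sketch (sec is the PetscSection describing the dof layout, f the facet; untested):

  const PetscInt *support;
  PetscInt        supportSize, s;
  PetscErrorCode  ierr;

  ierr = DMPlexGetSupportSize(dm, f, &supportSize);CHKERRQ(ierr);  /* one or two cells */
  ierr = DMPlexGetSupport(dm, f, &support);CHKERRQ(ierr);
  for (s = 0; s < supportSize; ++s) {
    PetscInt *closure = NULL, closureSize, c;

    ierr = DMPlexGetTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
    for (c = 0; c < closureSize; ++c) {
      PetscInt point = closure[2*c], dof, off;

      ierr = PetscSectionGetDof(sec, point, &dof);CHKERRQ(ierr);
      ierr = PetscSectionGetOffset(sec, point, &off);CHKERRQ(ierr);
      /* the coefficient values for this point live at [off, off+dof) in the local Vec array */
    }
    ierr = DMPlexRestoreTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
  }

(In practice DMPlexVecGetClosure on each cell in the support does the gather part in one call.)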

So I was hoping to be able to grow a distributed cell partition by exactly that 
means: add in those remote cells which are in the support of owned facets (I'm 
happy if this is symmetrically grown, although I think I can do it with 
one-sided growth).

So that's my rationale for wanting this "strange" adjacency.  I can get enough 
adjacency by using the current FEM adjacency and filtering which entities I 
iterate over, but it seems a bit wasteful.

For the currently implemented adjacencies, however, the code definitely assumes 
in various places that the closure of the cells on a partition covers all the 
points.  And doing, say, DMPlexSetAdjacencyUseCone/Closure(PETSC_FALSE) results 
in meshes where that is not true.

See the below:

Lawrence

$ mpiexec -n 3 ./bork
[0] face 7 has support: 0
[0] face 8 has support: 0
[0] face 9 has support: 0 1
[0] face 10 has support: 1
[0] face 11 has support: 1
[0] face 12 has support:
[1] face 10 has support: 0 2
[1] face 11 has support: 0 1
[1] face 12 has support: 0
[1] face 13 has support: 1
[1] face 14 has support: 2
[1] face 15 has support: 2
[1] face 16 has support: 1 3
[1] face 17 has support: 3
[1] face 18 has support: 3
[2] face 7 has support: 0 1
[2] face 8 has support: 0
[2] face 9 has support: 0
[2] face 10 has support: 1
[2] face 11 has support:
[2] face 12 has support: 1[2]PETSC ERROR: - Error Message 
--
[2]PETSC ERROR: Petsc has generated inconsistent data
[2]PETSC ERROR: Number of depth 2 faces 6 does not match permuted nubmer 5
[2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[2]PETSC ERROR: Petsc Development GIT revision: v3.7.6-3857-gda66ab19e3  GIT 
Date: 2017-05-10 09:02:09 -0500
[2]PETSC ERROR: ./bork on a arch-darwin-c-opt named yam-laptop.local by 
lmitche1 Mon Jun  5 18:15:31 2017
[2]PETSC ERROR: Configure options --download-chaco=1 --download-ctetgen=1 
--download-eigen --download-exodusii=1 --download-hdf5=1 --download-hypre=1 
--download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1 
--download-parmetis=1 --download-ptscotch=1 --download-scalapack=1 
--download-triangle=1 --with-c2html=0 --with-debugging=0 
--with-shared-libraries=1 PETSC_ARCH=arch-darwin-c-opt
[2]PETSC ERROR: #1 DMPlexCreateOrderingClosure_Static() line 41 in 
/Users/lmitche1/Documents/work/src/deps/petsc/src/dm/impls/plex/plexreorder.c

[0]PETSC ERROR: - Error Message 

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-06-01 Thread Matthew Knepley
On Thu, Jun 1, 2017 at 10:51 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

> On 25/05/17 21:00, Matthew Knepley wrote:
> > On Thu, May 25, 2017 at 2:22 PM, Lawrence Mitchell wrote:
> >
> >
> > > On 25 May 2017, at 20:03, Matthew Knepley wrote:
> > >
> > >
> > > Hmm, I thought I made adjacency per field. I have to look. That
> way, no problem with the Stokes example. DG is still weird.
> >
> > You might, we don't right now.  We just make the topological
> > adjacency that is "large enough", and then make fields on that.
> >
> > >
> > > That seems baroque. So this is just another adjacency pattern. You
> should be able to easily define it, or if you are a patient person,
> > > wait for me to do it. Its here
> > >
> > > https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master=file-view-default#plexdistribute.c-239
> > >
> > > I am more than willing to make this overridable by the user
> through function composition or another mechanism.
> >
> > Hmm, that naive thing of just modifying the XXX_Support_Internal
> > to compute with DMPlexGetTransitiveClosure rather than
> > DMPlexGetCone didn't do what I expected, but I don't understand
> > the way this bootstrapping is done very well.
> >
> >
> > It should do the right thing. Notice that you have to be careful about
> > the arrays that you use since I reuse them for efficiency here.
> > What is going wrong?
>
> Coming back to this, I think I understand the problem a little better.
>
> Consider this mesh:
>
> ++
> |\ 3 |
> | \  |
> |2 \ |
> |   \|
> ++
> |\ 1 |
> | \  |
> |0 \ |
> |   \|
> ++
>
> Let's say I run on 3 processes and the initial (non-overlapped) cell
> partition is:
>
> rank 0: cell 0
> rank 1: cell 1 & 2
> rank 2: cell 3
>
> Now I'd like to grow the overlap such that any cell I can see through
> a facet (and its closure) lives in the overlap.
>
> So great, I just need a new adjacency relation that gathers
> closure(support(point))
>
> But, that isn't right, because now on rank 0, I will get a mesh that
> looks like:
>

I do not understand why you think it's not right. Toby and I are trying to
push a formalism for this understanding in https://arxiv.org/abs/1508.02470.
So, say that sigma is a dual basis function associated with point p; then the
support of its matching basis function psi (the one with sigma(psi) = 1 in
the biorthogonal bases) is exactly star(p).

So, if you have no sigma sitting on your vertex, who cares if you put that
extra edge and node
in. It will not affect the communication pattern for dofs. If you do, then
shouldn't you be including
that edge?
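
Stated compactly (a loose LaTeX rendering of the argument above; the notation
only roughly follows the paper):

\[
  \sigma_p(\psi_q) = \delta_{pq}
  \quad\Longrightarrow\quad
  \operatorname{supp}(\psi_p) = \operatorname{star}(p),
\]

so a point p that carries no dual functional contributes nothing to the dof
communication pattern, and the extra edge and vertex are harmless for the
dof exchange.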

   Matt


> +
> |
> |
> |
> |
> ++
> |\ 1 |
> | \  |
> |0 \ |
> |   \|
> ++
>
> Because I grab all the mesh points in the adjacency of the initial cell:
>
> +
> |\
> | \
> |0 \
> |   \
> ++
>
> And on the top vertex that pulls in the facet (but not the cell).
>
> So I can write a "DMPlexGetAdjacency" information that only returns
> non-empty adjacencies for facets.  But it's sort of lying about what
> it does now.
>
> Thoughts?
>
> Lawrence
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-06-01 Thread Lawrence Mitchell


On 1 Jun 2017, at 19:12, Stefano Zampini  wrote:

>> Now I'd like to grow the overlap such that any cell I can see through
>> a facet (and its closure) lives in the overlap.
>> 
> 
> Lawrence, why do you need the closure here? Why is facet adjacency not enough?

Sorry, bad punctuation. The closure subclause is attached to the cell. So the 
adjacency I want is "go from the facets to the cells, gather all the points in 
the closure of those cells". 

Lawrence
> 



Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-06-01 Thread Stefano Zampini

> On Jun 1, 2017, at 5:51 PM, Lawrence Mitchell wrote:
> 
> On 25/05/17 21:00, Matthew Knepley wrote:
>> On Thu, May 25, 2017 at 2:22 PM, Lawrence Mitchell wrote:
>> 
>> 
>>> On 25 May 2017, at 20:03, Matthew Knepley wrote:
>>> 
>>> 
>>> Hmm, I thought I made adjacency per field. I have to look. That way, no 
>>> problem with the Stokes example. DG is still weird.
>> 
>>You might, we don't right now.  We just make the topological
>>adjacency that is "large enough", and then make fields on that.
>> 
>>> 
>>> That seems baroque. So this is just another adjacency pattern. You should 
>>> be able to easily define it, or if you are a patient person,
>>> wait for me to do it. Its here
>>> 
>>> https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239
>>
>> 
>>> 
>>> I am more than willing to make this overridable by the user through 
>>> function composition or another mechanism.
>> 
>>Hmm, that naive thing of just modifying the XXX_Support_Internal
>>to compute with DMPlexGetTransitiveClosure rather than
>>DMPlexGetCone didn't do what I expected, but I don't understand
>>the way this bootstrapping is done very well.
>> 
>> 
>> It should do the right thing. Notice that you have to be careful about
>> the arrays that you use since I reuse them for efficiency here.
>> What is going wrong?
> 
> Coming back to this, I think I understand the problem a little better.
> 
> Consider this mesh:
> 
> ++
> |\ 3 |
> | \  |
> |2 \ |
> |   \|
> ++
> |\ 1 |
> | \  |
> |0 \ |
> |   \|
> ++
> 
> Let's say I run on 3 processes and the initial (non-overlapped) cell
> partition is:
> 
> rank 0: cell 0
> rank 1: cell 1 & 2
> rank 2: cell 3
> 
> Now I'd like to grow the overlap such that any cell I can see through
> a facet (and its closure) lives in the overlap.
> 

Lawrence, why do you need the closure here? Why is facet adjacency not enough?

> So great, I just need a new adjacency relation that gathers
> closure(support(point))
> 
> But, that isn't right, because now on rank 0, I will get a mesh that
> looks like:
> 
> +
> |
> |
> |
> |
> ++
> |\ 1 |
> | \  |
> |0 \ |
> |   \|
> ++
> 
> Because I grab all the mesh points in the adjacency of the initial cell:
> 
> +
> |\
> | \
> |0 \
> |   \
> ++
> 
> And on the top vertex that pulls in the facet (but not the cell).
> 
> So I can write a "DMPlexGetAdjacency" information that only returns
> non-empty adjacencies for facets.  But it's sort of lying about what
> it does now.
> 
> Thoughts?
> 
> Lawrence
> 



Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-06-01 Thread Lawrence Mitchell
On 01/06/17 16:51, Lawrence Mitchell wrote:
...

> So I can write a "DMPlexGetAdjacency" information that only returns
> non-empty adjacencies for facets.  But it's sort of lying about what
> it does now.

Proposed a PR that allows the user to specify what they want adjacency
to mean by providing a callback function.  That might be best?
https://bitbucket.org/petsc/petsc/pull-requests/690/plex-support-user-defined-adjacencies-via/diff
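
The rule such a callback would encode for the facet-integral case is roughly
the following petsc4py-flavoured sketch (illustrative only; the helper name
is made up and this is not the PR's actual interface): return
closure(support(p)) for facets, and nothing for every other point.

def fvm_closure_adjacency(dm, p):
    fStart, fEnd = dm.getHeightStratum(1)
    if not (fStart <= p < fEnd):
        return []                          # non-facet points report nothing
    adj = set()
    for cell in dm.getSupport(p):          # the cell(s) either side of the facet
        closure, _ = dm.getTransitiveClosure(cell, useCone=True)
        adj.update(closure.tolist())
    return sorted(adj)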

Lawrence





Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-06-01 Thread Lawrence Mitchell
On 25/05/17 21:00, Matthew Knepley wrote:
> On Thu, May 25, 2017 at 2:22 PM, Lawrence Mitchell wrote:
> 
> 
> > On 25 May 2017, at 20:03, Matthew Knepley wrote:
> >
> >
> > Hmm, I thought I made adjacency per field. I have to look. That way, no 
> problem with the Stokes example. DG is still weird.
> 
> You might, we don't right now.  We just make the topological
> adjacency that is "large enough", and then make fields on that.
> 
> >
> > That seems baroque. So this is just another adjacency pattern. You 
> should be able to easily define it, or if you are a patient person,
> > wait for me to do it. Its here
> >
> > 
> https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239
> 
> 
> >
> > I am more than willing to make this overridable by the user through 
> function composition or another mechanism.
> 
> Hmm, that naive thing of just modifying the XXX_Support_Internal
> to compute with DMPlexGetTransitiveClosure rather than
> DMPlexGetCone didn't do what I expected, but I don't understand
> the way this bootstrapping is done very well.
> 
> 
> It should do the right thing. Notice that you have to be careful about
> the arrays that you use since I reuse them for efficiency here.
> What is going wrong?

Coming back to this, I think I understand the problem a little better.

Consider this mesh:

++
|\ 3 |
| \  |
|2 \ |
|   \|
++
|\ 1 |
| \  |
|0 \ |
|   \|
++

Let's say I run on 3 processes and the initial (non-overlapped) cell
partition is:

rank 0: cell 0
rank 1: cell 1 & 2
rank 2: cell 3

Now I'd like to grow the overlap such that any cell I can see through
a facet (and its closure) lives in the overlap.

So great, I just need a new adjacency relation that gathers
closure(support(point))

But, that isn't right, because now on rank 0, I will get a mesh that
looks like:

+
|
|
|
|
++
|\ 1 |
| \  |
|0 \ |
|   \|
++

Because I grab all the mesh points in the adjacency of the initial cell:

+
|\
| \
|0 \
|   \
++

And on the top vertex that pulls in the facet (but not the cell).

So I can write a "DMPlexGetAdjacency" information that only returns
non-empty adjacencies for facets.  But it's sort of lying about what
it does now.

Thoughts?

Lawrence





Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 2:22 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 20:03, Matthew Knepley  wrote:
> >
> >
> > Hmm, I thought I made adjacency per field. I have to look. That way, no
> problem with the Stokes example. DG is still weird.
>
> You might, we don't right now.  We just make the topological adjacency
> that is "large enough", and then make fields on that.
>
> >
> > That seems baroque. So this is just another adjacency pattern. You
> should be able to easily define it, or if you are a patient person,
> > wait for me to do it. Its here
> >
> > https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c
> 1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&
> fileviewer=file-view-default#plexdistribute.c-239
> >
> > I am more than willing to make this overridable by the user through
> function composition or another mechanism.
>
> Hmm, that naive thing of just modifying the XXX_Support_Internal to
> compute with DMPlexGetTransitiveClosure rather than DMPlexGetCone didn't do
> what I expected, but I don't understand the way this bootstrapping is done
> very well.
>

It should do the right thing. Notice that you have to be careful about the
arrays that you use since I reuse them for efficiency here.
What is going wrong?

   Matt


> Cheers,
>
> Lawrence
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 20:03, Matthew Knepley  wrote:
> 
> 
> Hmm, I thought I made adjacency per field. I have to look. That way, no 
> problem with the Stokes example. DG is still weird.

You might, we don't right now.  We just make the topological adjacency that is 
"large enough", and then make fields on that.

> 
> That seems baroque. So this is just another adjacency pattern. You should be 
> able to easily define it, or if you are a patient person,
> wait for me to do it. Its here
> 
> https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239
> 
> I am more than willing to make this overridable by the user through function 
> composition or another mechanism.

Hmm, that naive thing of just modifying the XXX_Support_Internal to compute 
with DMPlexGetTransitiveClosure rather than DMPlexGetCone didn't do what I 
expected, but I don't understand the way this bootstrapping is done very well.

Cheers,

Lawrence




Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:58 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 19:46, Matthew Knepley  wrote:
> >
> > Sounds like DG. I will get out my dead chicken for the incantation
>
> Actually no!  Mixed H(div)-L2 for Stokes.  Which has facet integrals for
> partially discontinuous fields.  If you do redundant compute for such
> terms, you need a depth-2 FEM adjacency, which is just grim.  Equally we
> have some strange users who have jump terms in CG formulations.


Hmm, I thought I made adjacency per field. I have to look. That way, no
problem with the Stokes example. DG is still weird.

  Matt


>
> Lawrence




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 19:46, Matthew Knepley  wrote:
> 
> Sounds like DG. I will get out my dead chicken for the incantation

Actually no!  Mixed H(div)-L2 for Stokes.  Which has facet integrals for 
partially discontinuous fields.  If you do redundant compute for such terms, 
you need a depth-2 FEM adjacency, which is just grim.  Equally we have some 
strange users who have jump terms in CG formulations.

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:38 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 19:23, Matthew Knepley  wrote:
> >
> > Ok, let me clarify.
> >
> > Given shared facets, I'd like closure(support(facet)) this is a subset
> of the fem adjacency. "Add in the cell and its closure from the remote
> rank". This doesn't include remote cells I can only see through vertices.
> Without sending data evaluated at facet quad points, I think this is the
> adjacency I need to compute facet integrals: all the dofs in
> closure(support(facet)).
> >
> > This seems incoherent to me. For FV, dofs reside in the cells, so you
> should only need the cell for adjacency. If you
> > need dofs defined at vertices, then you should also need cells which are
> only attached by vertices. How could this
> > scheme be consistent without this?
>
> OK, so what I think is this:
>
> I need to compute integrals over cells and facets.
>

Sounds like DG. I will get out my dead chicken for the incantation.


> So I do:
>
> GlobalToLocal(INSERT_VALUES)
> ComputeIntegralsOnOwnedEntities
> LocalToGlobal(ADD_VALUES)
>
> That way, an integration is performed on every entity exactly once, and
> LocalToGlobal ensures that I get a consistent assembled Vec.
>
> OK, so if I only compute cell integrals, then the zero overlap
> distribution with all the points in the closure of the cell (including some
> remote points) is sufficient.
>

Yep.


> If I compute facet integrals, I need both cells (and their closure) in the
> support of the facet.  Again, each facet is only integrated by one process,
> and the LocalToGlobal adds in contributions to remote dofs.  This is the
> same as cell integrals, just I need a bit more data, no?
>
> The other option is to notice that what I actually need when I compute a
> facet integral is the test function and/or any coefficients evaluated at
> quadrature points on the facet.  So if I don't want the extra overlapped
> halo, then what I need to do is for the remote process to evaluate any
> coefficients at the quad points, then send the evaluated data to the facet
> owner.  Now the facet owner can compute the integral, and again
> LocalToGlobal adds in contributions to remote dofs.


That seems baroque. So this is just another adjacency pattern. You should
be able to easily define it, or if you are a patient person,
wait for me to do it. It's here


https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239

I am more than willing to make this overridable by the user through
function composition or another mechanism.

  Thanks,

 Matt


>
> Lawrence




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 19:23, Matthew Knepley  wrote:
> 
> Ok, let me clarify. 
> 
> Given shared facets, I'd like closure(support(facet)) this is a subset of the 
> fem adjacency. "Add in the cell and its closure from the remote rank". This 
> doesn't include remote cells I can only see through vertices. Without sending 
> data evaluated at facet quad points, I think this is the adjacency I need to 
> compute facet integrals: all the dofs in closure(support(facet)).
> 
> This seems incoherent to me. For FV, dofs reside in the cells, so you should 
> only need the cell for adjacency. If you
> need dofs defined at vertices, then you should also need cells which are only 
> attached by vertices. How could this
> scheme be consistent without this?

OK, so what I think is this:

I need to compute integrals over cells and facets.

So I do:

GlobalToLocal(INSERT_VALUES)
ComputeIntegralsOnOwnedEntities
LocalToGlobal(ADD_VALUES)

That way, an integration is performed on every entity exactly once, and 
LocalToGlobal ensures that I get a consistent assembled Vec.
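
In petsc4py that pattern looks roughly like the sketch below (assuming dm
already carries a default section so that vectors can be created;
compute_local stands in for the per-entity integration loop and is not a
real routine):

from petsc4py import PETSc

def assemble_residual(dm, u_global, compute_local):
    # pull ghosted coefficient values onto each rank
    u_local = dm.createLocalVec()
    dm.globalToLocal(u_global, u_local, addv=PETSc.InsertMode.INSERT_VALUES)
    # integrate each owned cell/facet exactly once into a local output vector
    r_local = dm.createLocalVec()
    r_local.set(0.0)
    compute_local(dm, u_local, r_local)
    # add the per-rank partial sums so shared dofs end up consistent
    r_global = dm.createGlobalVec()
    r_global.set(0.0)
    dm.localToGlobal(r_local, r_global, addv=PETSc.InsertMode.ADD_VALUES)
    return r_global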

OK, so if I only compute cell integrals, then the zero overlap distribution 
with all the points in the closure of the cell (including some remote points) 
is sufficient.

If I compute facet integrals, I need both cells (and their closure) in the 
support of the facet.  Again, each facet is only integrated by one process, and 
the LocalToGlobal adds in contributions to remote dofs.  This is the same as 
cell integrals, just I need a bit more data, no?

The other option is to notice that what I actually need when I compute a facet 
integral is the test function and/or any coefficients evaluated at quadrature 
points on the facet.  So if I don't want the extra overlapped halo, then what I 
need to do is for the remote process to evaluate any coefficients at the quad 
points, then send the evaluated data to the facet owner.  Now the facet owner 
can compute the integral, and again LocalToGlobal adds in contributions to 
remote dofs.

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:10 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
>
> On 25 May 2017, at 18:05, Matthew Knepley  wrote:
>
> If you want that, is there a reason you cannot use the FEM style
> FALSE+TRUE?
> If you already want the closure, usually the star is not really adding
> anything new.
>
>
> Ok, let me clarify.
>
> Given shared facets, I'd like closure(support(facet)) this is a subset of
> the fem adjacency. "Add in the cell and its closure from the remote rank".
> This doesn't include remote cells I can only see through vertices. Without
> sending data evaluated at facet quad points, I think this is the adjacency
> I need to compute facet integrals: all the dofs in closure(support(facet)).
>

This seems incoherent to me. For FV, dofs reside in the cells, so you
should only need the cell for adjacency. If you
need dofs defined at vertices, then you should also need cells which are
only attached by vertices. How could this
scheme be consistent without this?

  Thanks,

Matt


> I thought this was what the fv adjacency was, but I think I was mistaken.
> That is support(cone(p)) for all p that I have.
> Now I do a rendezvous to gather everything in the closure of these new
> points. But I think that means I still don't have some cells?
>
> Make sense?
>
> Lawrence
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell


> On 25 May 2017, at 18:05, Matthew Knepley  wrote:
> 
> If you want that, is there a reason you cannot use the FEM style FALSE+TRUE?
> If you already want the closure, usually the star is not really adding 
> anything new.

Ok, let me clarify. 

Given shared facets, I'd like closure(support(facet)) this is a subset of the 
fem adjacency. "Add in the cell and its closure from the remote rank". This 
doesn't include remote cells I can only see through vertices. Without sending 
data evaluated at facet quad points, I think this is the adjacency I need to 
compute facet integrals: all the dofs in closure(support(facet)).

I thought this was what the fv adjacency was, but I think I was mistaken. That 
is support(cone(p)) for all p that I have.
Now I do a rendezvous to gather everything in the closure of these new points. 
But I think that means I still don't have some cells?

Make sense?

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 11:27 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

> On 25/05/17 16:25, Matthew Knepley wrote:
> > On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell wrote:
> >
> > Dear petsc-users,
> >
> > I am trying to distribute a triangle mesh with a cell halo defined by
> > FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> > distribution, I want the cell on the other side of it).
> >
> > Reading the documentation, I think I do:
> >
> > DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> > DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
> >
> > and then
> > DMPlexDistribute(..., overlap=1)
> >
> > If I do this for a simple mesh and then try and do anything on it, I
> > run into all sorts of problems because I have a plex where I have
> some
> > facets, but not even one cell in the support of the facet.  Is this
> to
> > be expected?
> >
> >
> > Hmm. I don't think so. You should have at least one cell in the
> > support of every facet.
> > TS ex11 works exactly this way.
> >
> > When using that adjacency, the overlap cells you get will not have
> > anything but the
> > facet connecting them to that partition. Although, if you have
> > adjacent cells in that overlap layer,
> > you can get ghost faces between those.
> >
> > With the code below, do you get an interpolated mesh when you create
> > it there. That call in C
> > has another argument
> >
> >   http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/
> DMPlexCreateFromCellList.html
>
> The mesh is interpolated.
>
>
> OK, so let's see if I can understand what the different adjacency
> relations are:
>
> usecone=False, useclosure=False:
>
> adj(p) => cone(p) + cone(support(p))
>
> usecone=True, useclosure=False:
>
> adj(p) => support(p) + support(cone(p))
>
> usecone=False, useclosure=True
>
> adj(p) => closure(star(p))
>
> usecone=True, useclosure=True
>
> adj(p) => star(closure(p))
>
> So let's imagine I have a facet f, the adjacent points are the
> support(cone(f)) so the support of the vertices in 2D, so those are
> some new facets.
>

If you want that, is there a reason you cannot use the FEM style FALSE+TRUE?
If you already want the closure, usually the star is not really adding
anything new.

   Matt


> So now, following https://arxiv.org/pdf/1506.06194.pdf, I need to
> complete this new mesh, so I ask for the closure of these new facets.
> But that might mean I won't ask for cells, right?  So I think I would
> end up with some facets that don't have any support.  And empirically
> I observe that:
>
> e.g. the code attached:
>
> $ mpiexec -n 3 python bar.py
> [0] 7 [0]
> [0] 8 [0]
> [0] 9 [0 1]
> [0] 10 [1]
> [0] 11 [1]
> [0] 12 []
> [1] 10 [0 2]
> [1] 11 [0 1]
> [1] 12 [0]
> [1] 13 [1]
> [1] 14 [2]
> [1] 15 [2]
> [1] 16 [1 3]
> [1] 17 [3]
> [1] 18 [3]
> [2] 7 [0 1]
> [2] 8 [0]
> [2] 9 [0]
> [2] 10 [1]
> [2] 11 []
> [2] 12 [1]
>
>
> What I would like (although I'm not sure if this is supported right
> now), is the overlap to contain closure(support(facet)) for all shared
> facets.  I think that's equivalent to closure(support(p)) \forall p.
>
> That way on any shared facets, I have both cells and their closure.
>
> Is that easy to do?
>
> Lawrence
>
> import sys, petsc4py
> petsc4py.init(sys.argv)
> from petsc4py import PETSc
> import numpy as np
> Lx = Ly = 1
> nx = 1
> ny = 2
>
> xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
> ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
> coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0,
> 2).reshape(-1, 2)
>
> # cell vertices
> i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType), np.arange(ny,
> dtype=PETSc.IntType))
> cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1,
> (i+1)*(ny+1) + j]
> cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0,
> 2).reshape(-1, 4)
> idx = [0, 1, 3, 1, 2, 3]
> cells = cells[:, idx].reshape(-1, 3)
>
> comm = PETSc.COMM_WORLD
> if comm.rank == 0:
> dm = PETSc.DMPlex().createFromCellList(2, cells, coords,
> interpolate=True, comm=comm)
> else:
> dm = PETSc.DMPlex().createFromCellList(2, np.zeros((0,
> cells.shape[1]), dtype=PETSc.IntType),
>np.zeros((0, 2),
> dtype=PETSc.RealType),
>interpolate=True,
>comm=comm)
>
> dm.setAdjacencyUseClosure(False)
> dm.setAdjacencyUseCone(True)
>
> dm.distribute(overlap=1)
> sf = dm.getPointSF()
>
> for p in range(*dm.getDepthStratum(dm.getDepth()-1)):
> PETSc.Sys.syncPrint("[%d] %d %s" % (comm.rank, p, dm.getSupport(p)))
>
> PETSc.Sys.syncFlush()
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell


On 25/05/17 16:25, Matthew Knepley wrote:
> On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell wrote:
> 
> Dear petsc-users,
> 
> I am trying to distribute a triangle mesh with a cell halo defined by
> FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> distribution, I want the cell on the other side of it).
> 
> Reading the documentation, I think I do:
> 
> DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
> 
> and then
> DMPlexDistribute(..., overlap=1)
> 
> If I do this for a simple mesh and then try and do anything on it, I
> run into all sorts of problems because I have a plex where I have some
> facets, but not even one cell in the support of the facet.  Is this to
> be expected?
> 
> 
> Hmm. I don't think so. You should have at least one cell in the
> support of every facet.
> TS ex11 works exactly this way.
> 
> When using that adjacency, the overlap cells you get will not have
> anything but the
> facet connecting them to that partition. Although, if you have
> adjacent cells in that overlap layer,
> you can get ghost faces between those.
> 
> With the code below, do you get an interpolated mesh when you create
> it there. That call in C
> has another argument
> 
>   
> http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexCreateFromCellList.html

The mesh is interpolated.


OK, so let's see if I can understand what the different adjacency
relations are:

usecone=False, useclosure=False:

adj(p) => cone(p) + cone(support(p))

usecone=True, useclosure=False:

adj(p) => support(p) + support(cone(p))

usecone=False, useclosure=True

adj(p) => closure(star(p))

usecone=True, useclosure=True

adj(p) => star(closure(p))
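
Spelled out point-by-point in petsc4py, those four rules are roughly the
following (an illustrative sketch only, not how Plex builds the adjacency
internally):

def adjacency(dm, p, usecone, useclosure):
    adj = set()
    if useclosure:
        # usecone=False: closure(star(p)); usecone=True: star(closure(p))
        first, _ = dm.getTransitiveClosure(p, useCone=usecone)
        for q in first:
            second, _ = dm.getTransitiveClosure(q, useCone=not usecone)
            adj.update(second.tolist())
    else:
        # usecone=False: cone(p) + cone(support(p))
        # usecone=True:  support(p) + support(cone(p))
        adj.update((dm.getSupport(p) if usecone else dm.getCone(p)).tolist())
        for q in (dm.getCone(p) if usecone else dm.getSupport(p)):
            adj.update((dm.getSupport(q) if usecone else dm.getCone(q)).tolist())
    return sorted(adj)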

So let's imagine I have a facet f, the adjacent points are the
support(cone(f)) so the support of the vertices in 2D, so those are
some new facets.

So now, following https://arxiv.org/pdf/1506.06194.pdf, I need to
complete this new mesh, so I ask for the closure of these new facets.
But that might mean I won't ask for cells, right?  So I think I would
end up with some facets that don't have any support.  And empirically
I observe that:

e.g. the code attached:

$ mpiexec -n 3 python bar.py
[0] 7 [0]
[0] 8 [0]
[0] 9 [0 1]
[0] 10 [1]
[0] 11 [1]
[0] 12 []
[1] 10 [0 2]
[1] 11 [0 1]
[1] 12 [0]
[1] 13 [1]
[1] 14 [2]
[1] 15 [2]
[1] 16 [1 3]
[1] 17 [3]
[1] 18 [3]
[2] 7 [0 1]
[2] 8 [0]
[2] 9 [0]
[2] 10 [1]
[2] 11 []
[2] 12 [1]


What I would like (although I'm not sure if this is supported right
now), is the overlap to contain closure(support(facet)) for all shared
facets.  I think that's equivalent to closure(support(p)) \forall p.

That way on any shared facets, I have both cells and their closure.

Is that easy to do?

Lawrence

import sys, petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc
import numpy as np
Lx = Ly = 1
nx = 1
ny = 2

xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0, 2).reshape(-1, 2)

# cell vertices
i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType),
                   np.arange(ny, dtype=PETSc.IntType))
cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1, (i+1)*(ny+1) + j]
cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0, 2).reshape(-1, 4)
idx = [0, 1, 3, 1, 2, 3]
cells = cells[:, idx].reshape(-1, 3)

comm = PETSc.COMM_WORLD
if comm.rank == 0:
    dm = PETSc.DMPlex().createFromCellList(2, cells, coords,
                                           interpolate=True, comm=comm)
else:
    dm = PETSc.DMPlex().createFromCellList(2,
                                           np.zeros((0, cells.shape[1]), dtype=PETSc.IntType),
                                           np.zeros((0, 2), dtype=PETSc.RealType),
                                           interpolate=True,
                                           comm=comm)

dm.setAdjacencyUseClosure(False)
dm.setAdjacencyUseCone(True)

dm.distribute(overlap=1)
sf = dm.getPointSF()

for p in range(*dm.getDepthStratum(dm.getDepth()-1)):
    PETSc.Sys.syncPrint("[%d] %d %s" % (comm.rank, p, dm.getSupport(p)))

PETSc.Sys.syncFlush()





Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

> Dear petsc-users,
>
> I am trying to distribute a triangle mesh with a cell halo defined by
> FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> distribution, I want the cell on the other side of it).
>
> Reading the documentation, I think I do:
>
> DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
>
> and then
> DMPlexDistribute(..., overlap=1)
>
> If I do this for a simple mesh and then try and do anything on it, I
> run into all sorts of problems because I have a plex where I have some
> facets, but not even one cell in the support of the facet.  Is this to
> be expected?
>

Hmm. I don't think so. You should have at least one cell in the support of
every facet.
TS ex11 works exactly this way.

When using that adjacency, the overlap cells you get will not have anything
but the
facet connecting them to that partition. Although, if you have adjacent
cells in that overlap layer,
you can get ghost faces between those.

With the code below, do you get an interpolated mesh when you create it
there. That call in C
has another argument


http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexCreateFromCellList.html

If it's just cells and vertices, you could get some bizarre things like you
see.

Matt


> For example the following petsc4py code breaks when run on 3 processes:
>
> $ mpiexec -n 3 python bork.py
> [1] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [1] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [1] Petsc has generated inconsistent data
> [1] Number of depth 2 faces 34 does not match permuted nubmer 29
> : error code 77
> [2] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [2] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [2] Petsc has generated inconsistent data
> [2] Number of depth 2 faces 33 does not match permuted nubmer 28
> : error code 77
> [0] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [0] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [0] Petsc has generated inconsistent data
> [0] Number of depth 2 faces 33 does not match permuted nubmer 31
>
> $ cat > bork.py <<\EOF
> from petsc4py import PETSc
> import numpy as np
> Lx = Ly = 1
> nx = ny = 4
>
> xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
> ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
> coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0, 2).reshape(-1, 2)
>
> # cell vertices
> i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType),
>                    np.arange(ny, dtype=PETSc.IntType))
> cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1, (i+1)*(ny+1) + j]
> cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0, 2).reshape(-1, 4)
> idx = [0, 1, 3, 1, 2, 3]
> cells = cells[:, idx].reshape(-1, 3)
>
> comm = PETSc.COMM_WORLD
> if comm.rank == 0:
>     dm = PETSc.DMPlex().createFromCellList(2, cells, coords, comm=comm)
> else:
>     dm = PETSc.DMPlex().createFromCellList(2,
>                                            np.zeros((0, cells.shape[1]), dtype=PETSc.IntType),
>                                            np.zeros((0, 2), dtype=PETSc.RealType),
>                                            comm=comm)
>
> dm.setAdjacencyUseClosure(False)
> dm.setAdjacencyUseCone(True)
>
> dm.distribute(overlap=1)
>
> dm.getOrdering(PETSc.Mat.OrderingType.RCM)
>
> dm.view()
> EOF
>
> Am I doing something wrong?  Is this not expected to work?
>
> Cheers,
>
> Lawrence
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/