Re: [petsc-users] question about MatCreateRedundantMatrix

2019-09-18 Thread hong--- via petsc-users
Michael,
We have support for MatCreateRedundantMatrix with dense matrices. For
example, see petsc/src/mat/examples/tests/ex9.c:
mpiexec -n 4 ./ex9 -mat_type dense -view_mat -nsubcomms 2
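A minimal sketch of the call itself (assuming a parallel dense matrix A has
already been assembled; error checking abbreviated):

  Mat      A, Ared;      /* A: parallel MATDENSE matrix, already assembled */
  PetscInt nsubcomm = 2; /* number of redundant copies / subcommunicators  */

  /* Create nsubcomm redundant copies of A. Passing MPI_COMM_NULL lets PETSc
     split the subcommunicators itself based on nsubcomm. */
  ierr = MatCreateRedundantMatrix(A, nsubcomm, MPI_COMM_NULL, MAT_INITIAL_MATRIX, &Ared);CHKERRQ(ierr);

  /* ... work with Ared on its subcommunicator ... */
  ierr = MatDestroy(&Ared);CHKERRQ(ierr);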

Hong

On Wed, Sep 18, 2019 at 5:40 PM Povolotskyi, Mykhailo via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear Petsc developers,
>
> I found that MatCreateRedundantMatrix does not support dense matrices.
>
> This causes the following problem: I cannot use CISS eigensolver from
> SLEPC with dense matrices with parallelization over quadrature points.
>
> Is it possible for you to add this support?
>
> Thank you,
>
> Michael.
>
>
> p.s. I apologize if you received this e-mail twice, I sent it first from
> a different address.
>
>


Re: [petsc-users] MKL_PARDISO question

2019-09-18 Thread Smith, Barry F. via petsc-users


   This is easy thanks to the additional debugging I added recently. Your 
install of MKL does not have CPardiso support. When you install MKL you have to 
make sure you select the "extra" cluster option, otherwise it doesn't install 
some of the library.  I only learned this myself recently from another PETSc 
user. 

   So please try again after you install the full MKL, and send configure.log if 
it fails. (By the way, just use --with-mkl_cpardiso, not --with-mkl_cpardiso-dir, 
since configure always has to find CPardiso in the MKL BLAS/LAPACK directory.) 
After you install the full MKL you will see that directory also has files with 
*blacs* in them.
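   A configure line along those lines (with the MPI and MKL paths being whatever 
they are on your system) would be:

  ./configure --with-mpi-dir=$MPI_ROOT --with-blaslapack-dir=$MKL_ROOT --with-mkl_cpardiso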

   Barry




Executing: ls /home/epscodes/MyLocal/intel/mkl/lib/intel64
stdout:
libmkl_avx2.so
libmkl_avx512_mic.so
libmkl_avx512.so
libmkl_avx.so
libmkl_blas95_ilp64.a
libmkl_blas95_lp64.a
libmkl_core.a
libmkl_core.so
libmkl_def.so
libmkl_gf_ilp64.a
libmkl_gf_ilp64.so
libmkl_gf_lp64.a
libmkl_gf_lp64.so
libmkl_gnu_thread.a
libmkl_gnu_thread.so
libmkl_intel_ilp64.a
libmkl_intel_ilp64.so
libmkl_intel_lp64.a
libmkl_intel_lp64.so
libmkl_intel_thread.a
libmkl_intel_thread.so
libmkl_lapack95_ilp64.a
libmkl_lapack95_lp64.a
libmkl_mc3.so
libmkl_mc.so
libmkl_rt.so
libmkl_sequential.a
libmkl_sequential.so
libmkl_tbb_thread.a
libmkl_tbb_thread.so
libmkl_vml_avx2.so
libmkl_vml_avx512_mic.so
libmkl_vml_avx512.so
libmkl_vml_avx.so
libmkl_vml_cmpt.so
libmkl_vml_def.so
libmkl_vml_mc2.so
libmkl_vml_mc3.so
libmkl_vml_mc.so



> On Sep 18, 2019, at 9:40 PM, Xiangdong  wrote:
> 
> Thank you very much for your information. I pulled the master branch but got 
> an error when configuring it.
> 
> When I run configure without mkl_cpardiso (configure.log_nocpardiso):  
> ./configure PETSC_ARCH=arch-debug  --with-debugging=1 
> --with-mpi-dir=$MPI_ROOT --with-blaslapack-dir=${MKL_ROOT} , it works fine.
> 
> However, when I add mkl_cpardiso (configure.log_withcpardiso): ./configure 
> PETSC_ARCH=arch-debug  --with-debugging=1 --with-mpi-dir=$MPI_ROOT  
> -with-blaslapack-dir=${MKL_ROOT} --with-mkl_cpardiso-dir=${MKL_ROOT} , it 
> complains about "Could not find a functional BLAS.", even though BLAS is 
> provided through MKL, the same as in the previous configuration. 
> 
> Can you help me on the configuration? Thank you.
> 
> Xiangdong
> 
> On Wed, Sep 18, 2019 at 2:39 PM Smith, Barry F.  wrote:
> 
> 
> > On Sep 18, 2019, at 9:15 AM, Xiangdong via petsc-users 
> >  wrote:
> > 
> > Hello everyone,
> > 
> > From here,
> > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html
> > 
> > It seems that MKL_PARDISO only works for seqaij. I am curious whether 
> > one can use mkl_pardiso in PETSc with multiple threads.
> 
>You can use mkl_pardiso for multi-threaded parallelism and mkl_cpardiso for MPI 
> parallelism.
> 
>In both cases you must use the master branch of PETSc (or the next release 
> of PETSc) to do this easily.
> 
>Note that when you use mkl_pardiso with multiple threads, the matrix is 
> coming from a single MPI process (or the single program if not running with 
> MPI). So it is not MPI-parallel in a way that matches the rest of the parallelism 
> in PETSc, and one must be a little careful: for example, if one has 4 cores and 
> uses them all with mpiexec -n 4 and then uses mkl_pardiso with 4 threads 
> (each), then you have 16 threads fighting over 4 cores. So you need to select 
> the number of MPI processes and the number of threads wisely.
> 
> > 
> > Is there any reason that MKL_PARDISO is not listed in the linear solver 
> > table?
> > https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
> > 
> 
>Just an oversight, thanks for letting us know, I have added it.
> 
> 
> > Thank you.
> > 
> > Best,
> > Xiangdong
> 
> 



[petsc-users] question about MatCreateRedundantMatrix

2019-09-18 Thread Povolotskyi, Mykhailo via petsc-users
Dear Petsc developers,

I found that MatCreateRedundantMatrix does not support dense matrices.

This causes the following problem: I cannot use CISS eigensolver from 
SLEPC with dense matrices with parallelization over quadrature points.

Is it possible for you to add this support?

Thank you,

Michael.


p.s. I apologize if you received this e-mail twice, I sent it first from 
a different address.



Re: [petsc-users] MKL_PARDISO question

2019-09-18 Thread Smith, Barry F. via petsc-users



> On Sep 18, 2019, at 9:15 AM, Xiangdong via petsc-users 
>  wrote:
> 
> Hello everyone,
> 
> From here,
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMKL_PARDISO.html
> 
> It seems that MKL_PARDISO only works for seqaij. I am curious whether one 
> can use mkl_pardiso in PETSc with multiple threads.

   You can use mkl_pardiso for multi-threaded parallelism and mkl_cpardiso for MPI 
parallelism.

   In both cases you must use the master branch of PETSc (or the next release 
of PETSc) to do this easily.

   Note that when you use mkl_pardiso with multiple threads, the matrix is 
coming from a single MPI process (or the single program if not running with 
MPI). So it is not MPI-parallel in a way that matches the rest of the parallelism 
in PETSc, and one must be a little careful: for example, if one has 4 cores and uses 
them all with mpiexec -n 4 and then uses mkl_pardiso with 4 threads (each), then 
you have 16 threads fighting over 4 cores. So you need to select the number of 
MPI processes and the number of threads wisely.
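   For reference, the runtime selection looks roughly like this (option names 
taken from the master-branch documentation; ./ex stands in for your own 
application, and MKL_NUM_THREADS is the standard MKL environment variable):

  # threaded Pardiso on one MPI rank, letting MKL use 4 threads
  MKL_NUM_THREADS=4 ./ex -pc_type lu -pc_factor_mat_solver_type mkl_pardiso

  # MPI-parallel CPardiso on 4 ranks, 1 thread each
  MKL_NUM_THREADS=1 mpiexec -n 4 ./ex -pc_type lu -pc_factor_mat_solver_type mkl_cpardiso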

> 
> Is there any reason that MKL_PARDISO is not listed in the linear solver table?
> https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
> 

   Just an oversight, thanks for letting us know, I have added it.


> Thank you.
> 
> Best,
> Xiangdong



Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-18 Thread Smith, Barry F. via petsc-users



> On Sep 18, 2019, at 12:25 PM, Mark Lohry via petsc-users 
>  wrote:
> 
> Mark,
>  

Mark,

  Good point. This has been a big headache forever.

  Note that this has been "fixed" in the master version of PETSc and will 
be in its next release. If you use --download-parmetis in the future it will 
use the same random numbers on all machines and thus should produce the same 
partitions on all machines. 
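   That is, configuring with something like

  ./configure --download-metis --download-parmetis

(together with your usual configure options) should pick up the patched ParMetis.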

   I think that metis has always used the same random numbers on all 
machines and thus has always produced the same results.

Barry


> The machine, compiler and MPI version should not matter.
> 
> I might have missed something earlier in the thread, but parmetis has a 
> dependency on the machine's glibc srand, and it can (and does) create 
> different partitions with different srand versions. The same mesh on the same 
> code on the same process count can and will give different partitions 
> (possibly bad ones) on different machines.
> 
> On Tue, Sep 17, 2019 at 1:05 PM Mark Adams via petsc-users 
>  wrote:
> 
> 
> On Tue, Sep 17, 2019 at 12:53 PM Danyang Su  wrote:
> Hi Mark,
> 
> Thanks for your follow-up. 
> 
> The unstructured grid code has been verified and there is no problem in the 
> results. The convergence rate is also good. The 3D mesh is not good, it is 
> based on the original stratum which I haven't refined, but it is good for an initial 
> test as it is relatively small and the results obtained from this mesh still 
> make sense.
> 
> The 2D meshes are just for testing purposes, as I want to reproduce the 
> partition problem on a cluster using PETSc3.11.3 and Intel2019. 
> Unfortunately, I didn't find the problem using this example. 
> 
> The code has no problem in using different PETSc versions (PETSc V3.4 to 
> V3.11)
> 
> OK, it is the same code. I thought I saw something about your code changing.
> 
> Just to be clear, v3.11 never gives you good partitions. It is not just a 
> problem on this Intel cluster.
> 
> The machine, compiler and MPI version should not matter.
>  
> and MPI distribution (MPICH, OpenMPI, IntelMPI), except for one simulation 
> case (the mesh I attached) on a cluster with PETSc3.11.3 and Intel2019u4, due 
> to the very different partition compared to PETSc3.9.3. Yet the simulation 
> results are the same except for the efficiency problem, because the strange 
> partition results in much more communication (ghost nodes).
> 
> I am still trying different compiler and mpi with PETSc3.11.3 on that cluster 
> to trace the problem. Will get back to you guys when there is update.
> 
> 
> This is very strange. You might want to use 'git bisect'. You set a good and 
> a bad SHA1 (we can give you these for 3.9 and 3.11 and the exact commands). 
> Git will then check out a version in the middle. You then reconfigure, remake, 
> rebuild your code, and run your test. Git will ask you, as I recall, if the 
> version is good or bad. Once you get this workflow going it is not too bad, 
> depending on how hard this loop is of course.
>  
> Thanks,
> 
> danyang
> 
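   As a concrete sketch of the bisect workflow described above (the release tag 
names are assumptions; check git tag in your PETSc clone):

  git bisect start
  git bisect bad v3.11    # first version known to show the strange partition
  git bisect good v3.9    # last version known to be fine
  # git checks out a commit in between; reconfigure, rebuild, run the test, then:
  git bisect good         # or: git bisect bad
  # repeat until git reports the first bad commit, then: git bisect reset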



Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-18 Thread Mark Lohry via petsc-users
Mark,


> The machine, compiler and MPI version should not matter.


I might have missed something earlier in the thread, but parmetis has a
dependency on the machine's glibc srand, and it can (and does) create
different partitions with different srand versions. The same mesh on the
same code on the same process count can and will give different partitions
(possibly bad ones) on different machines.

On Tue, Sep 17, 2019 at 1:05 PM Mark Adams via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
>
> On Tue, Sep 17, 2019 at 12:53 PM Danyang Su  wrote:
>
>> Hi Mark,
>>
>> Thanks for your follow-up.
>>
>> The unstructured grid code has been verified and there is no problem in
>> the results. The convergence rate is also good. The 3D mesh is not good, it
>> is based on the original stratum which I haven't refined, but it is good for
>> an initial test as it is relatively small and the results obtained from this
>> mesh still make sense.
>>
>> The 2D meshes are just for testing purposes, as I want to reproduce the
>> partition problem on a cluster using PETSc3.11.3 and Intel2019.
>> Unfortunately, I didn't find the problem using this example.
>>
>> The code has no problem in using different PETSc versions (PETSc V3.4 to
>> V3.11)
>>
> OK, it is the same code. I thought I saw something about your code
> changing.
>
> Just to be clear, v3.11 never gives you good partitions. It is not just a
> problem on this Intel cluster.
>
> The machine, compiler and MPI version should not matter.
>
>
>> and MPI distribution (MPICH, OpenMPI, IntelMPI), except for one
>> simulation case (the mesh I attached) on a cluster with PETSc3.11.3 and
>> Intel2019u4, due to the very different partition compared to PETSc3.9.3. Yet
>> the simulation results are the same except for the efficiency problem,
>> because the strange partition results in much more communication (ghost
>> nodes).
>>
>> I am still trying different compiler and mpi with PETSc3.11.3 on that
>> cluster to trace the problem. Will get back to you guys when there is
>> update.
>>
>
> This is very strange. You might want to use 'git bisect'. You set a good
> and a bad SHA1 (we can give you these for 3.9 and 3.11 and the exact
> commands). Git will then check out a version in the middle. You then
> reconfigure, remake, rebuild your code, and run your test. Git will ask you, as
> I recall, if the version is good or bad. Once you get this workflow going
> it is not too bad, depending on how hard this loop is of course.
>
>
>> Thanks,
>>
>> danyang
>>
>


Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
Thanks for your suggestion, Matthew. I will certainly look into DMForest for 
refining my base DMPlex dm.

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 10:35 PM
To: Mohammad Hassan 
Cc: PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 10:27 AM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

I want to implement block-based AMR, which turns my base conformal mesh into a 
non-conformal one. My question is how DMPlex handles such a mesh, given that (as I 
understood) it cannot support non-conformal meshes. 

 

Mark misspoke. Plex _does_ support geometrically non-conforming meshing, e.g. 
"hanging nodes". The easiest way to use Plex this way is to use DMForest, which 
uses Plex underneath.

There are excellent p4est tutorials. What you would do is create your conformal 
mesh, using Plex if you want, and use that for the p4est base mesh (you would 
have the base mesh be the forest roots).
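A rough sketch of that setup (treat the DMForest call names as assumptions to 
check against the man pages; base is the conformal Plex mesh you already build):

  DM base, forest;   /* base: your existing conformal DMPlex mesh */

  ierr = DMCreate(PETSC_COMM_WORLD, &forest);CHKERRQ(ierr);
  ierr = DMSetType(forest, DMP4EST);CHKERRQ(ierr);             /* DMP8EST in 3D            */
  ierr = DMForestSetBaseDM(forest, base);CHKERRQ(ierr);        /* Plex cells = forest roots */
  ierr = DMForestSetInitialRefinement(forest, 0);CHKERRQ(ierr);
  ierr = DMSetUp(forest);CHKERRQ(ierr);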

 

  Thanks,

 

 Matt

 

If DMPlex does not work, I will try to use DMForest.  

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 9:50 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Mark Adams <mfad...@lbl.gov>; PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

If DMPlex does not support this, I may need to use PARAMESH or CHOMBO. Is there any 
way we can construct a non-conformal layout for a DM in PETSc?

 

Let's see. Plex does support geometrically non-conforming meshes. This is how we 
support p4est. However, if you want that, you can just use DMForest I think. So 
you just want structured AMR?

 

  Thanks,

 

Matt

 

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Matthew Knepley <knep...@gmail.com>; PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Mark is right. The functionality of the AMR does not relate to its parallelization. 
The vector size (global or local) does not conflict with the AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for the suggestion. I am going to use block-based AMR. I think I need to 
know exactly the mesh distribution of blocks across the different processors to 
implement the AMR.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think, as Mark says, that you are using "blocksize" in a different way than 
PETSc does.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually? I mean, how can I set the share of the dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.
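For completeness, a rough sketch of the Shell-partitioner route (the sizes and 
points arrays are example numbers standing in for whatever partition you compute 
yourself):

  PetscPartitioner part;
  DM               dmDist;
  PetscInt         sizes[4]  = {2, 2, 2, 2};         /* #cells owned by each of 4 ranks */
  PetscInt         points[8] = {0,1, 2,3, 4,5, 6,7}; /* cell numbers, grouped by rank   */

  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);CHKERRQ(ierr);
  ierr = PetscPartitionerShellSetPartition(part, 4, sizes, points);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);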

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
I want to implement block-based AMR, which turns my base conformal mesh into a 
non-conformal one. My question is how DMPlex handles such a mesh, given that (as I 
understood) it cannot support non-conformal meshes. If DMPlex does not work, I will 
try to use DMForest.

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 9:50 PM
To: Mohammad Hassan 
Cc: Mark Adams ; PETSc 
Subject: Re: [petsc-users] DMPlex Distribution

 

On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

If DMPlex does not support this, I may need to use PARAMESH or CHOMBO. Is there any 
way we can construct a non-conformal layout for a DM in PETSc?

 

Let's see. Plex does support geometrically non-conforming meshes. This is how we 
support p4est. However, if you want that, you can just use DMForest I think. So 
you just want structured AMR?

 

  Thanks,

 

Matt

 

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: Matthew Knepley <knep...@gmail.com>; PETSc users list <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Mark is right. The functionality of the AMR does not relate to its parallelization. 
The vector size (global or local) does not conflict with the AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for the suggestion. I am going to use block-based AMR. I think I need to 
know exactly the mesh distribution of blocks across the different processors to 
implement the AMR.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think, as Mark says, that you are using "blocksize" in a different way than 
PETSc does.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually? I mean, how can I set the share of the dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  



Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mohammad Hassan via petsc-users
If DMPlex does not support this, I may need to use PARAMESH or CHOMBO. Is there any 
way we can construct a non-conformal layout for a DM in PETSc?

 

From: Mark Adams [mailto:mfad...@lbl.gov] 
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan 
Cc: Matthew Knepley ; PETSc users list 

Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block 
AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Mark is right. The functionality of the AMR does not relate to its parallelization. 
The vector size (global or local) does not conflict with the AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-ma...@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn> wrote:

Thanks for the suggestion. I am going to use block-based AMR. I think I need to 
know exactly the mesh distribution of blocks across the different processors to 
implement the AMR.

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This 
will help me tell you what would be best.

 

And as a general question, can we set block size of vector on each rank?

 

I think, as Mark says, that you are using "blocksize" in a different way than 
PETSc does.

 

  Thanks,

 

Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbagh...@mail.sjtu.edu.cn>
Cc: PETSc <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the 
distribution across processors manually? I mean, how can I set the share of the dm 
on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just 
want to mess with the weights in ParMetis.

 

  Thanks,

 

Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  




 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/  



Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mark Adams via petsc-users
I'm puzzled. It sounds like you are doing non-conforming AMR (structured
block AMR), but Plex does not support that.

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Mark is right. The functionality of the AMR does not relate to
> its parallelization. The vector size (global or local) does not
> conflict with the AMR functions.
>
> Thanks
>
>
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Wednesday, September 18, 2019 12:59 AM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <
> mhbagh...@mail.sjtu.edu.cn> wrote:
>
> Thanks for the suggestion. I am going to use block-based AMR. I think I need
> to know exactly the mesh distribution of blocks across the different processors
> to implement the AMR.
>
>
>
> Hi Amir,
>
>
>
> How are you using Plex if the block-AMR is coming from somewhere else?
> This will help
>
> me tell you what would be best.
>
>
>
> And as a general question, can we set block size of vector on each rank?
>
>
>
> I think, as Mark says, that you are using "blocksize" in a different way
> than PETSc does.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knep...@gmail.com]
> *Sent:* Tuesday, September 17, 2019 11:04 PM
> *To:* Mohammad Hassan 
> *Cc:* PETSc 
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi
>
> I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set
> the distribution across processors manually? I mean, how can I set the
> share of the dm on each rank (local)?
>
>
>
> You could make a Shell partitioner and tell it the entire partition:
>
>
>
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html
>
>
>
> However, I would be surprised if you could do this. It is likely that you
> just want to mess with the weights in ParMetis.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/
> 
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] TS scheme with different DAs

2019-09-18 Thread Matthew Knepley via petsc-users
On Tue, Sep 17, 2019 at 8:27 PM Smith, Barry F.  wrote:

>
>   Don't be too quick to dismiss switching to DMStag; you may find that
> it actually takes little time to convert, and then you have a much less
> cumbersome process for managing the staggered grid. Take a look at
> src/dm/impls/stag/examples/tutorials/ex2.c where
>
> const PetscInt dof0 = 0, dof1 = 1,dof2 = 1; /* 1 dof on each edge and
> element center */
> const PetscInt stencilWidth = 1;
> ierr =
> DMStagCreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,7,9,PETSC_DECIDE,PETSC_DECIDE,dof0,dof1,dof2,DMSTAG_STENCIL_BOX,stencilWidth,NULL,NULL,&dm);CHKERRQ(ierr);
>
> BOOM, it has set up a staggered grid with 1 cell-centered variable and 1
> on each edge. Adding more to the cell centers, vertices, or edges is trivial.
>
>   If you want to stick to DMDA you
>
> "cheat". Depending on exactly what staggering you have you make the DMDA
> for the "smaller problem" as large as the other ones and just track zeros
> in those locations. For example if velocities are "edges" and T, S are on
> cells, make your "cells" DMDA one extra grid width wide in all three
> dimensions. You may need to be careful on the boundaries deepening on the
> types of boundary conditions.
>

Yes, SNES ex30 does exactly this. However, I still recommend looking at
DMStag. Patrick created it because managing the DMDA
became such a headache.
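If you do stay with several DMDAs, one way to force them onto the same process
decomposition is to reuse the ownership ranges of the first DA when creating the
second (a sketch; Mx, My, Mz, dof2, and stencilWidth are placeholders, and the two
DAs must have the same global sizes for the ranges to apply):

  const PetscInt *lx, *ly, *lz;
  PetscInt        m, n, p;
  DM              da2;

  /* process grid and per-process ownership ranges of the existing da1 */
  ierr = DMDAGetInfo(da1, NULL, NULL, NULL, NULL, &m, &n, &p,
                     NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAGetOwnershipRanges(da1, &lx, &ly, &lz);CHKERRQ(ierr);

  /* create da2 with the same process grid and the same ranges */
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX, Mx, My, Mz, m, n, p, dof2, stencilWidth,
                      lx, ly, lz, &da2);CHKERRQ(ierr);
  ierr = DMSetUp(da2);CHKERRQ(ierr);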

  Thanks,

Matt


> > On Sep 17, 2019, at 7:04 PM, Manuel Valera via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Thanks Matthew, but my code is too complicated to be redone on DMStag
> now after spending a long time using DMDAs,
> >
> > Is there a way to ensure PETSc distributes several DAs in the same way?
> besides manually distributing the points,
> >
> > Thanks,
> >
> > On Tue, Sep 17, 2019 at 3:28 PM Matthew Knepley 
> wrote:
> > On Tue, Sep 17, 2019 at 6:15 PM Manuel Valera via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> > Hello, petsc users,
> >
> > I have integrated the TS routines in my code, but I just noticed I
> didn't do it optimally. I was using 3 different TS objects to integrate
> velocities, temperature and salinity, and it works but only for small DTs.
> I suspect the intermediate Runge-Kutta stages are out of sync and this creates
> the discrepancy for broader time steps, so I need to integrate the 3
> quantities in the same routine.
> >
> > I tried to do this by using a 5 DOF distributed array for the RHS, where
> I store the velocities in the first 3 and then Temperature and Salinity in
> the rest. The problem is that I use a staggered grid and T,S are located in
> a different DA layout than the velocities. This is creating problems for me
> since I can't find a way to communicate the information from the result of
> the TS integration back to the respective DAs of each variable.
> >
> > Is there a way to communicate across DAs? or can you suggest an
> alternative solution to this problem?
> >
> > If you have a staggered discretization on a structured grid, I would
> recommend checking out DMStag.
> >
> >   Thanks,
> >
> >  MAtt
> >
> > Thanks,
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/