Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks. I think I have found the right way. Wayne On Fri, Sep 16, 2016 at 11:33 AM, Ji Zhang wrote: > Thanks for your kind help. Could you please show me some necessary > functions or a simple demo code? > > > Wayne > > On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith

Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks for your kind help. Could you please show me some necessary functions or a simple demo code? Wayne On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith wrote: > > You should create your small m_ij matrices as just dense two-dimensional > arrays and then set them into the

Re: [petsc-users] (no subject)

2016-09-15 Thread Barry Smith
You should create your small m_ij matrices as just dense two-dimensional arrays and then set them into the big M matrix. Do not create the small dense matrices as PETSc matrices. Barry > On Sep 15, 2016, at 9:21 PM, Ji Zhang wrote: > > I apologize for the
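A minimal petsc4py sketch of the approach Barry describes, assuming a dense global matrix M and illustrative body counts, block sizes, and placeholder values (petsc4py is what the original poster reports using later in this thread); only M is a PETSc matrix, each m_ij stays a plain NumPy array:

    # Keep each m_ij as a plain NumPy array and insert it into the big PETSc
    # matrix M with setValues(); do not wrap the small blocks as PETSc matrices.
    import numpy as np
    from petsc4py import PETSc

    n_bodies, nb = 3, 4                  # number of bodies and block size (assumed)
    n = n_bodies * nb                    # global size of M

    M = PETSc.Mat().createDense([n, n], comm=PETSc.COMM_SELF)
    M.setUp()

    for i in range(n_bodies):
        for j in range(n_bodies):
            m_ij = np.ones((nb, nb))     # placeholder for the computed interaction block
            rows = np.arange(i * nb, (i + 1) * nb, dtype=PETSc.IntType)
            cols = np.arange(j * nb, (j + 1) * nb, dtype=PETSc.IntType)
            M.setValues(rows, cols, m_ij)

    M.assemble()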

Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
I apologize for the ambiguity. Let me clarify. I'm trying to simulate interactions among different bodies. I have calculated the interaction between two of them and stored it in the sub-matrix m_ij. What I want to do is to consider the whole interaction and construct all sub-matrices

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Barry Smith
> On Sep 15, 2016, at 1:10 PM, Dave May wrote: > > > > On Thursday, 15 September 2016, Barry Smith wrote: > >Should we have some simple selection of default algorithms based on > problem size/number of processes? For example if using more

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Dave May
On Thursday, 15 September 2016, Barry Smith wrote: > >Should we have some simple selection of default algorithms based on > problem size/number of processes? For example if using more than 1000 > processes then use scalable version etc? How would we decide on the >

Re: [petsc-users] fieldsplit preconditioner for indefinite matrix

2016-09-15 Thread Barry Smith
> On Sep 15, 2016, at 4:11 AM, Hoang Giang Bui wrote: > > Dear Barry > > > > It seems the zero pivot does not happen, but why does the solver for Schur take 13 > steps if the preconditioner is a direct solver? Because if you use KSPSetOperators(ksp_S,A,B) it is NOT a direct
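A hedged petsc4py illustration of the point about KSPSetOperators: the second matrix argument is the one the preconditioner is built from, so even an LU preconditioner does not make the Krylov solve direct unless B equals A. The matrices below are made up purely for illustration:

    from petsc4py import PETSc

    n = 10
    A = PETSc.Mat().createAIJ([n, n]); A.setUp()
    B = PETSc.Mat().createAIJ([n, n]); B.setUp()
    for i in range(n):
        A.setValue(i, i, 2.0)            # operator actually being solved
        if i > 0:
            A.setValue(i, i - 1, -1.0)
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)
        B.setValue(i, i, 2.0)            # diagonal approximation: PC is built from this
    A.assemble(); B.assemble()

    ksp = PETSc.KSP().create()
    ksp.setType(PETSc.KSP.Type.GMRES)
    ksp.setOperators(A, B)               # analogous to KSPSetOperators(ksp_S, A, B)
    ksp.getPC().setType(PETSc.PC.Type.LU)

    b = A.createVecLeft(); b.set(1.0)
    x = A.createVecRight()
    ksp.solve(b, x)
    print(ksp.getIterationNumber())      # > 1: LU of B does not solve A in one step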

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Barry Smith
Should we have some simple selection of default algorithms based on problem size/number of processes? For example, if using more than 1000 processes, then use the scalable version, etc.? How would we decide on the parameter values? Barry > On Sep 15, 2016, at 5:35 AM, Dave May

Re: [petsc-users] (no subject)

2016-09-15 Thread Matthew Knepley
On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang wrote: > Thanks Matt. It works well for a single core. But is there any solution if I > need an MPI program? > It is unclear what the stuff below would mean in parallel. If you want to assemble several blocks of a parallel matrix that looks
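A hedged sketch of one way to do the same block assembly in parallel with petsc4py: create M on PETSc.COMM_WORLD and let each rank set the block rows it owns using global indices; values that land on other ranks are stashed and communicated during assembly. The sizes and block layout are illustrative assumptions, not the code from this thread:

    import numpy as np
    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD
    n_bodies, nb = 4, 3
    n = n_bodies * nb

    M = PETSc.Mat().createDense([n, n], comm=comm)
    M.setUp()
    rstart, rend = M.getOwnershipRange()

    for i in range(n_bodies):
        row0 = i * nb
        if row0 < rend and row0 + nb > rstart:        # this rank owns (part of) block row i
            for j in range(n_bodies):
                m_ij = np.full((nb, nb), 1.0)         # placeholder interaction block
                rows = np.arange(row0, row0 + nb, dtype=PETSc.IntType)
                cols = np.arange(j * nb, (j + 1) * nb, dtype=PETSc.IntType)
                M.setValues(rows, cols, m_ij)

    M.assemble()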

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Dave May
Hi all, the only unexpected memory usage I can see is associated with the call to MatPtAP(). Here is something you can try immediately. Run your code with the additional options -matrap 0 -matptap_scalable I didn't realize this before, but the default behaviour of MatPtAP in parallel is
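For a petsc4py driver like the one discussed elsewhere in this thread, one way to pass the options Dave names (besides appending them to the run command line) is at initialization; the sketch below only shows that mechanism and assumes nothing else about the code:

    import sys
    import petsc4py
    # Inject the suggested options before PETSc initializes; equivalently,
    # append "-matrap 0 -matptap_scalable" to the mpiexec command line.
    petsc4py.init(sys.argv + ['-matrap', '0', '-matptap_scalable'])
    from petsc4py import PETSc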

Re: [petsc-users] (no subject)

2016-09-15 Thread Ji Zhang
Thanks Matt. It works well for a single core. But is there any solution if I need an MPI program? Thanks. Wayne On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley wrote: > On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang wrote: > >> Dear all, >> >> I'm using petsc4py

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Dave May
On Thursday, 15 September 2016, Hengjie Wang wrote: > Hi Dave, > > Sorry, I should have added more comments to explain the code. > No problem. I was looking at the code after only 3 hrs of sleep > > The number of processes in each dimension is the same: Px = Py = Pz = P. So is >

Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-09-15 Thread Hengjie Wang
Hi Dave, Sorry, I should have added more comments to explain the code. The number of processes in each dimension is the same: Px = Py = Pz = P. So is the domain size. So if you want to run the code for 512^3 grid points on 16^3 cores, you need to set "-N 512 -P 16" on the command line. I add
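A minimal sketch (not the poster's code) of how such -N and -P options can be read from the PETSc options database in a petsc4py script; the default values here are assumptions:

    from petsc4py import PETSc

    opts = PETSc.Options()
    N = opts.getInt('N', 256)   # grid points per dimension, e.g. -N 512
    P = opts.getInt('P', 8)     # processes per dimension,  e.g. -P 16
    PETSc.Sys.Print("grid %d^3 on %d^3 = %d processes" % (N, P, P**3))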