Thanks. I think I have found the right way.
Wayne
On Fri, Sep 16, 2016 at 11:33 AM, Ji Zhang wrote:
Thanks for your kind help. Could you please show me the necessary
functions or a simple demo code?
Wayne
On Fri, Sep 16, 2016 at 10:32 AM, Barry Smith wrote:
You should create your small m_ij matrices as plain dense two-dimensional
arrays and then set them into the big M matrix. Do not create the small dense
matrices as PETSc matrices.
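For example, a minimal petsc4py sketch of this approach (the number of bodies, the block size, and the compute_block() helper below are hypothetical placeholders, not taken from the actual code):

import numpy as np
from petsc4py import PETSc

n_bodies, bs = 10, 3   # hypothetical: 10 bodies, 3x3 interaction blocks
M = PETSc.Mat().createAIJ([n_bodies * bs, n_bodies * bs])
M.setUp()
# fine for a small example; preallocate properly for realistic sizes
M.setOption(PETSc.Mat.Option.NEW_NONZERO_ALLOCATION_ERR, False)

def compute_block(i, j):
    # hypothetical stand-in for the physics that produces m_ij
    return np.full((bs, bs), float(i + j))

for i in range(n_bodies):
    for j in range(n_bodies):
        m_ij = compute_block(i, j)   # plain dense numpy array, not a PETSc Mat
        rows = np.arange(i * bs, (i + 1) * bs, dtype=PETSc.IntType)
        cols = np.arange(j * bs, (j + 1) * bs, dtype=PETSc.IntType)
        M.setValues(rows, cols, m_ij)   # insert the whole block at once

M.assemble()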
Barry
On Sep 15, 2016, at 9:21 PM, Ji Zhang wrote:
I apologize for the ambiguity; let me clarify. I am trying to simulate
interactions among different bodies. I have already calculated the interaction
between two of them and stored it in the sub-matrix m_ij. What I want to do is
to account for the whole interaction and construct all the sub-matrices ...
> On Sep 15, 2016, at 4:11 AM, Hoang Giang Bui wrote:
>
> Dear Barry,
>
> It seems that a zero pivot does not happen, but why does the solver for the
> Schur complement take 13 steps if the preconditioner is a direct solver?

Because if you use KSPSetOperators(ksp_S,A,B) it is NOT a direct solver for A:
the exact factorization is built only from B, the preconditioning matrix, so
the Krylov method still has to iterate on A.
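To illustrate the point, here is a small petsc4py sketch; the 1-D Laplacian A and the shifted matrix B are made-up stand-ins, not the actual Schur operator or its approximation:

from petsc4py import PETSc

n = 50
A = PETSc.Mat().createAIJ([n, n], nnz=3)   # operator actually being solved: 1-D Laplacian
for i in range(n):
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    A.setValue(i, i, 2.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

B = A.copy()
B.shift(0.1)   # B != A, so an exact LU of B is only a preconditioner

ksp = PETSc.KSP().create()
ksp.setOperators(A, B)        # iterate on A, build the PC from B
ksp.setType('gmres')
ksp.getPC().setType('lu')     # "direct solver", but only of B
ksp.setFromOptions()          # (run on one rank; PETSc's native LU is sequential)

b = A.createVecRight()
b.set(1.0)
x = A.createVecLeft()
ksp.solve(b, x)
print('iterations:', ksp.getIterationNumber())   # > 1 because the PC only inverts B exactly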
Should we have some simple selection of default algorithms based on problem
size/number of processes? For example, if using more than 1000 processes then
use the scalable version, etc.? How would we decide on the parameter values?
Barry
On Sep 15, 2016, at 5:35 AM, Dave May wrote:
On Thu, Sep 15, 2016 at 4:23 AM, Ji Zhang wrote:
> Thanks Matt. It works well on a single core. But is there any solution if I
> need an MPI program?
>
It is unclear what the approach below would mean in parallel.
If you want to assemble several blocks of a parallel matrix that looks ...
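One possible way to do the block assembly with an MPI-distributed matrix in petsc4py, sketched with made-up sizes (each rank inserts only the blocks whose rows it owns; PETSc can also accept off-rank insertions and communicate them during assembly):

import numpy as np
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
n_bodies, bs = 8, 3        # hypothetical sizes
N = n_bodies * bs

M = PETSc.Mat().createAIJ([N, N], comm=comm)
M.setUp()
M.setOption(PETSc.Mat.Option.NEW_NONZERO_ALLOCATION_ERR, False)  # toy example; preallocate for real runs

rstart, rend = M.getOwnershipRange()   # global rows owned by this rank

for i in range(n_bodies):
    # skip blocks whose rows live on another rank
    # (assumes the ownership range is a multiple of bs)
    if i * bs < rstart or (i + 1) * bs > rend:
        continue
    for j in range(n_bodies):
        m_ij = np.full((bs, bs), float(i + j))   # placeholder dense block
        rows = np.arange(i * bs, (i + 1) * bs, dtype=PETSc.IntType)
        cols = np.arange(j * bs, (j + 1) * bs, dtype=PETSc.IntType)
        M.setValues(rows, cols, m_ij)

M.assemble()

This could be launched with, e.g., "mpiexec -n 4 python <script>.py" (script name hypothetical).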
Hi all,

The only unexpected memory usage I can see is associated with the call to
MatPtAP().

Here is something you can try immediately: run your code with the additional
options

  -matrap 0 -matptap_scalable

I didn't realize this before, but the default behaviour of MatPtAP in
parallel is ...
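If the application happens to be driven from petsc4py, one way to forward these options programmatically is sketched below; the option names are copied verbatim from the suggestion above, and whether they are recognized depends on the PETSc version in use:

import sys
import petsc4py

# Append the suggested MatPtAP options to whatever was given on the
# command line, before PETSc is initialized.
petsc4py.init(sys.argv + ['-matrap', '0', '-matptap_scalable'])
from petsc4py import PETSc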
Thanks Matt. It works well on a single core. But is there any solution if I
need an MPI program?
Thanks.
Wayne
On Tue, Sep 13, 2016 at 9:30 AM, Matthew Knepley wrote:
> On Mon, Sep 12, 2016 at 8:24 PM, Ji Zhang wrote:
>
>> Dear all,
>>
>> I'm using petsc4py
On Thursday, 15 September 2016, Hengjie Wang wrote:
> Hi Dave,
>
> Sorry, I should have put more comments in to explain the code.

No problem. I was looking at the code after only 3 hrs of sleep ...
Hi Dave,

Sorry, I should have put more comments in to explain the code.

The number of processes in each dimension is the same: Px = Py = Pz = P, and so
is the domain size in each dimension. So if you want to run the code on a 512^3
grid with 16^3 cores, you need to set "-N 512 -P 16" on the command line.

I add ...
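Purely as a reading aid, here is a guess at what this option handling could look like in petsc4py; the -N and -P names come from the message above, while the DMDA usage and everything else is assumed rather than taken from the actual code:

from petsc4py import PETSc

opts = PETSc.Options()
N = opts.getInt('N', 512)   # grid points per dimension
P = opts.getInt('P', 16)    # processes per dimension

comm = PETSc.COMM_WORLD
if comm.getSize() != P ** 3:
    raise RuntimeError('expected %d MPI ranks, got %d' % (P ** 3, comm.getSize()))

# N^3 structured grid distributed over a P x P x P process grid
da = PETSc.DMDA().create(dim=3, sizes=(N, N, N), proc_sizes=(P, P, P),
                         stencil_width=1, comm=comm)
da.setUp()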