Thank you, Pierre!
On Wed, Apr 13, 2022 at 10:05 PM Pierre Jolivet wrote:
You can also use the uncommented option -pc_asm_print_subdomains which will, as
Matt told you, show you that it is exactly the same algorithm.
Thanks,
Pierre
On 13 Apr 2022, at 3:58 PM, Zhuo Chen wrote:
Thank you, Matt! I will do that.
On Wed, Apr 13, 2022 at 9:55 PM Matthew Knepley wrote:
On Wed, Apr 13, 2022 at 9:53 AM Zhuo Chen wrote:
Dear Pierre,
Thank you! I looked into the webpage you sent me and I think it is not the
situation that I am talking about.
I think I need to attach a figure for illustration. This figure
is Figure 14.5 of "Iterative Methods for Sparse Linear Systems" by Saad.
[Image: Figure 14.5 from Saad's "Iterative Methods for Sparse Linear Systems"]
On 13 Apr 2022, at 3:30 PM, Zhuo Chen wrote:
Dear Matthew and Mark,
Thank you very much for the reply! Much appreciated!
The question was about a 1D problem. I think I should say core 1 has row
1:32 instead of 1:32, 1:32 as it might be confusing.
So the overlap is extended in both directions for a middle processor but
only toward the interior for the end processors.
On Wed, Apr 13, 2022 at 9:11 AM Mark Adams wrote:
On Wed, Apr 13, 2022 at 8:56 AM Matthew Knepley wrote:
On Wed, Apr 13, 2022 at 6:42 AM Mark Adams wrote:
> No, without overlap you have, let's say:
> core 1: 1:32, 1:32
> core 2: 33:64, 33:64
>
> Overlap will increase the size of each domain so you get:
> core 1: 1:33, 1:33
> core 2: 32:65, 32:65
>
I do not think this is correct. Here is
Yes, it's automatic when you use methods that have triangular solves, such as
the default (incomplete LU).
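The triangular solves that ILU's apply phase performs are plain sparse forward/back substitutions. As a rough illustration (not PETSc code; the CSR layout and function name here are my own), a forward substitution with a unit-diagonal lower factor looks like:

```python
# Illustrative sketch only -- not PETSc's implementation.
# Forward substitution L x = b for a unit-diagonal lower-triangular
# factor whose strictly-lower entries are stored in CSR form
# (indptr/indices/data).
def csr_lower_solve(indptr, indices, data, b):
    x = list(b)
    for i in range(len(b)):
        # Subtract contributions from unknowns already computed.
        for k in range(indptr[i], indptr[i + 1]):
            x[i] -= data[k] * x[indices[k]]
    return x

# L = [[1, 0], [2, 1]], b = [1, 4]  =>  x = [1, 2]
print(csr_lower_solve([0, 0, 1], [0], [2.0], [1.0, 4.0]))  # [1.0, 2.0]
```

This row-by-row, memory-bound traversal is the kind of kernel whose data layout the paper revisits.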
Peter Kavran writes:
No, without overlap you have, let's say:
core 1: 1:32, 1:32
core 2: 33:64, 33:64
Overlap will increase the size of each domain so you get:
core 1: 1:33, 1:33
core 2: 32:65, 32:65
What you want is reasonable but requires PETSc to pick a separator set,
which is not well defined.
You need
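For the concrete 128-row, 4-core case from the original question, the index arithmetic above can be sketched like this (my own illustration, using the thread's 1-based inclusive ranges; this is not PETSc's actual partitioning code, and real ASM overlap follows matrix connectivity, not just index ranges):

```python
# Illustration only (not PETSc code): grow each core's contiguous
# 1D block by `overlap` rows on each side, clipped at the matrix ends.
def asm_blocks(n, ncores, overlap):
    size = n // ncores  # assume n divisible by ncores for simplicity
    blocks = []
    for c in range(ncores):
        lo, hi = c * size + 1, (c + 1) * size  # 1-based, inclusive
        blocks.append((max(1, lo - overlap), min(n, hi + overlap)))
    return blocks

print(asm_blocks(64, 2, 0))   # [(1, 32), (33, 64)]
print(asm_blocks(128, 4, 1))  # [(1, 33), (32, 65), (64, 97), (96, 128)]
```

So with overlap=1 a middle block picks up one row on each side, while the first and last blocks only grow inward.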
Dear team,
Could you tell me whether the method described in the paper
Barry Smith and Hong Zhang, "Sparse Triangular Solve Revisited: Data Layout
Crucial to Better Performance"
has been implemented in PETSc?
Kind regards,
Pete Kavran
Hi,
I hope that everything is going well with everybody.
I have a question about PCASMSetOverlap. If I have a 128x128 matrix and
use 4 cores with overlap=1, does it mean that from core 1 to core 4 the
block ranges are (starting from 1):
core 1: 1:33, 1:33
core 2: 33:65, 33:65
core