It is essentially doing AMR at each time-step, and for the given application I 
don't think that is pathological. It is an app built on Randy LeVeque's 
Clawpack stuff. The linear solver totally dominates the run time, which makes 
users of Clawpack very hesitant to consider methods with implicit steps, even 
when they need them. One linear solve takes far longer than the entire 
explicit step, including its sub-grid cycling and the adaptive changes to the 
grid at each time-step.

  There is sub-grid cycling where similar solves are done on the same mesh 
(hence the same nonzero pattern) two or three times, so thanks, we will try 
-pc_gamg_reuse_interpolation true; it could potentially help a good amount.
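
  For reference, a minimal sketch of turning this on from code rather than 
the command line (using PCGAMGSetReuseInterpolation(); the matrix A and 
vectors b, x stand in for whatever the app already has):

    #include <petscksp.h>

    /* Reuse the GAMG interpolation across the two or three sub-grid-cycle
       solves that share the same nonzero pattern. */
    KSP ksp;
    PC  pc;

    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCGAMG));
    /* same effect as -pc_gamg_reuse_interpolation true */
    PetscCall(PCGAMGSetReuseInterpolation(pc, PETSC_TRUE));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, b, x));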

  There could be a mode where -pc_gamg_reuse_interpolation true is on by 
default, and the KSP/PC monitors the performance (convergence rate) of the 
solve following the reuse to decide whether sticking with the old 
interpolation is ok or a new interpolation should be computed after the next 
change in the matrix values. That would not require user knowledge and tuning 
of this option, which most users who just want to get on with their work 
would not want to mess with.
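
  To make that concrete, here is a hypothetical user-level version of the 
idea (the 1.5x iteration-count threshold and the rebuild trigger are 
assumptions for illustration, not existing PETSc behavior):

    /* Hypothetical sketch: keep the reused interpolation until the
       iteration count degrades noticeably, then force one rebuild. */
    PetscInt its, baseline = -1;
    PetscInt nsteps = 3; /* stands in for the app's sub-grid cycles */

    PetscCall(PCGAMGSetReuseInterpolation(pc, PETSC_TRUE));
    for (PetscInt step = 0; step < nsteps; step++) {
      /* ... update matrix values here (same nonzero pattern) ... */
      PetscCall(KSPSolve(ksp, b, x));
      PetscCall(KSPGetIterationNumber(ksp, &its));
      if (baseline < 0) baseline = its;
      if (2 * its > 3 * baseline) { /* convergence degraded ~1.5x */
        /* rebuild the interpolation at the next setup, then re-baseline */
        PetscCall(PCGAMGSetReuseInterpolation(pc, PETSC_FALSE));
        baseline = -1;
      } else {
        PetscCall(PCGAMGSetReuseInterpolation(pc, PETSC_TRUE));
      }
    }

Doing this inside the PC itself, rather than in user code, is of course the 
point; the sketch just shows one possible decision rule.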

> On Sep 17, 2022, at 1:43 PM, Mark Adams <mfad...@lbl.gov> wrote:
> 
> I don't see a problem here other than the network looks bad relative to the 
> problem size.
> 
> All the graph methods (PCGAMGCreateG and MIS) are 2x slower.
>   - The symmetrization must be in PCGAMGCreateG.
>   - MIS is pretty old code (the algorithm and original code are 25 years old)
> RAPs are about the same.
> KSPGMRESOrthog and MatMult are nowhere near perfect.
> 
> The graph setup work gets amortized by (most) applications and by 
> benchmarkers that know how to benchmark, so it is not as highly engineered 
> as the RAP and MatMult.
> Note that this application is rebuilding the graph for every linear solve.
> I am guessing they want '-pc_gamg_reuse_interpolation true' or are doing a 
> single step/stage TS with a linear problem and AMR every time step, which 
> would be pretty pathological.
> I'm doing an MR right now; maybe I should change the default for 
> -pc_gamg_reuse_interpolation?
> 
> Mark
> 
> 
> 
> On Sat, Sep 17, 2022 at 10:12 AM Barry Smith <bsm...@petsc.dev 
> <mailto:bsm...@petsc.dev>> wrote:
> 
>   Sure, but have you ever seen such a large jump in time in going from one to 
> two MPI ranks, and are there any algorithms to do the aggregation that would 
> not require this very expensive parallel symmetrization?
> 
>> On Sep 17, 2022, at 9:07 AM, Mark Adams <mfad...@lbl.gov 
>> <mailto:mfad...@lbl.gov>> wrote:
>> 
>> Symmetrizing the graph makes a transpose and then adds the two matrices.
>> I imagine adding two different matrices is expensive.
>> 
>> On Fri, Sep 16, 2022 at 8:30 PM Barry Smith <bsm...@petsc.dev 
>> <mailto:bsm...@petsc.dev>> wrote:
>> 
>> Mark,
>> 
>>    I have runs of GAMG on one and two ranks with -pc_gamg_symmetrize_graph 
>> because the matrix is far from symmetric, and some of GAMG is taking far 
>> more time with 2 ranks than with one (while other things like VecNorm show 
>> improvement with two ranks). I've attached the two files.
>> 
>>   Have you seen this before, and is there anything that can be done about 
>> it? If going to two ranks causes almost a doubling in GAMG setup time, that 
>> makes using parallelism not useful.
>> 
>>   Barry
>> 
> 
