> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <pierre.joli...@lip6.fr> wrote:
> 
> 
>> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <alexlindsay...@gmail.com> 
>> wrote:
>> 
>> Ah, I see that if I use Pierre's new 'full' option for 
>> -mat_schur_complement_ainv_type
> 
> That was not initially done by me

Oops, sorry for the noise; it looks like it was indeed done by me, in 
9399e4fd88c6621aad8fe9558ce84df37bd6fada…

Thanks,
Pierre

> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit 
> to use KSPMatSolve(), so that if you have a small Schur complement — which is 
> not really the case for NS — this could be a viable option; it was previously 
> painfully slow).
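> 
> For context, a minimal sketch of that code path (the block matrices here 
> are illustrative placeholders, assumed already assembled):
> 
>   #include <petscksp.h>
>   /* assume A00, A01, A10, A11 are assembled Mats of compatible sizes */
>   Mat S, Sexplicit;
>   /* wrap the blocks as S = A11 - A10 inv(A00) A01; the inner A00 solve
>      is accessible via MatSchurComplementGetKSP() */
>   PetscCall(MatCreateSchurComplement(A00, A00, A01, A10, A11, &S));
>   /* assemble S explicitly; with KSPMatSolve() all columns of A01 are
>      handled in one block solve rather than one KSPSolve() per column */
>   PetscCall(MatSchurComplementComputeExplicitOperator(S, &Sexplicit));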
> 
> Thanks,
> Pierre
> 
>> that I get a single iteration for the Schur complement solve with LU. That's 
>> a nice testing option
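>> 
>> For reference, a sketch of the option set this corresponds to (prefixes 
>> depend on how the outer solver is set up):
>> 
>>   -pc_type fieldsplit -pc_fieldsplit_type schur
>>   -pc_fieldsplit_schur_precondition selfp
>>   -mat_schur_complement_ainv_type full
>>   -fieldsplit_1_pc_type lu
>> 
>> With ainv_type full the assembled approximation coincides with the true 
>> Schur complement, so LU on it converges in a single iteration.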
>> 
>> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay <alexlindsay...@gmail.com 
>> <mailto:alexlindsay...@gmail.com>> wrote:
>>> I guess it is because the inverse of the diagonal form of A00 becomes a 
>>> poor approximation of the inverse of A00? Naively, I would have thought 
>>> that the blockdiag form of A00 is A00 itself.
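>>> 
>>> (For context: the diag variant assembles Sp = A11 - A10 inv(diag(A00)) A01, 
>>> while blockdiag inverts only the point-block diagonal of A00, not A00 
>>> itself.)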
>>> 
>>> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay 
>>> <alexlindsay...@gmail.com <mailto:alexlindsay...@gmail.com>> wrote:
>>>> Hi Jed, I will come back with answers to all of your questions at some 
>>>> point. I mostly just deal with MOOSE users who come to me and tell me 
>>>> their solve is converging slowly, asking me how to fix it. So I generally 
>>>> assume they have built an appropriate mesh and problem size for the 
>>>> problem they want to solve and added appropriate turbulence modeling 
>>>> (although my general assumption is often violated).
>>>> 
>>>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure 
>>>> > solve?
>>>> 
>>>> Yes, this is our default.
>>>> 
>>>> A general question: it seems to be well known that the quality of selfp 
>>>> degrades with increasing advection. Why is that?
>>>> 
>>>> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown <j...@jedbrown.org 
>>>> <mailto:j...@jedbrown.org>> wrote:
>>>>> Alexander Lindsay <alexlindsay...@gmail.com 
>>>>> <mailto:alexlindsay...@gmail.com>> writes:
>>>>> 
>>>>> > This has been a great discussion to follow. Regarding
>>>>> >
>>>>> >> when time stepping, you have enough mass matrix that cheaper 
>>>>> >> preconditioners are good enough
>>>>> >
>>>>> > I'm curious what some algebraic recommendations might be for high Re in
>>>>> > transients. 
>>>>> 
>>>>> What mesh aspect ratio and streamline CFL number? Assuming your model is 
>>>>> turbulent, can you say anything about momentum thickness Reynolds number 
>>>>> Re_θ? What is your wall normal spacing in plus units? (Wall resolved or 
>>>>> wall modeled?)
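>>>>> 
>>>>> (For reference, plus units here means y+ = y u_τ / ν with friction 
>>>>> velocity u_τ = sqrt(τ_w / ρ); wall-resolved typically puts the first 
>>>>> cell at y+ ≈ 1, wall-modeled at tens to hundreds.)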
>>>>> 
>>>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure 
>>>>> solve?
>>>>> 
>>>>> > I've found one-level DD to be ineffective when applied monolithically 
>>>>> > or to the momentum block of a split, as it scales with the mesh size. 
>>>>> 
>>>>> I wouldn't put too much weight on "scaling with mesh size" per se. You 
>>>>> want an efficient solver for the coarsest mesh that delivers sufficient 
>>>>> accuracy in your flow regime. Constants matter.
>>>>> 
>>>>> Refining the mesh while holding time steps constant changes the advective 
>>>>> CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful 
>>>>> scaling study is to increase Reynolds number (e.g., by growing the 
>>>>> domain) while keeping mesh size matched in terms of plus units in the 
>>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That 
>>>>> turns out to not be a very automatic study to do, but it's what matters 
>>>>> and you can spend a lot of time chasing ghosts with naive scaling studies.
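>>>>> 
>>>>> (Concretely, with the usual definitions advective CFL = u Δt / h and 
>>>>> cell Péclet = u h / (2 ν): refining h at fixed Δt raises the CFL while 
>>>>> lowering the cell Péclet, so a naive refinement study shifts both 
>>>>> regimes at once.)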
> 
