Also regarding feature/colored-smp-assembly: I noticed that the more
threads I use, the larger the fraction of the running time taken by
update (in the assemble/solve/update trio, I mean). It would be very
nice to have the update parallelized, too.
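If the per-element work is independent, I imagine even a plain TBB loop
over the elements could work; a minimal sketch (hypothetical, not DuMux
code):

#include <tbb/parallel_for.h>

// Hypothetical sketch: run the per-element part of the update stage in
// parallel, assuming the elements can be processed independently.
void parallelUpdate(int numElements)
{
    tbb::parallel_for(0, numElements, [](int eIdx)
    {
        // e.g. recompute the cached quantities of element eIdx here
    });
}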
Best regards,
Dmitry
On 22.10.2021 01:39, Dmitry Pavlov wrote:
Timo,
Sorry, I got distracted from this.
I had to fix NoPrimaryVariableSwitch to use it, but you already know
that since you saw the merge request.
EnableGridFluxVariablesCache totally helped! Now I see only 4*1 calls to
update() after the Newton step and 4*5 calls to update() during
assemble(). Here, 4 is the number of scvs connected to a node, and 5 is
the number of primary variables. So that makes 4*6 = 24 calls in total
per step, which is what I expected.
I still want just 6 calls instead of 4*6, but for that, I will have to
switch to TPFA or implement my own GridVolumeVariables as you suggested.
The feature/colored-smp-assembly works, thank you! I think it scales
better than MPI for my purposes. But what is "oneapi"? Is it supposed
to come from some Intel-packaged distribution? I am not sure I want
that dependency, so I installed the usual libtbb-dev from the Ubuntu
repo and removed the oneapi prefixes from your source. This came at a
cost: there is no oneapi::tbb::info::default_concurrency() in the
version I installed, so I have replaced it with a hard-coded value for now.
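A portable fallback I could use instead of the hard-coded value might
look like this (a sketch; it only assumes that the oneTBB info header
is missing in older TBB versions):

#include <thread>
#if __has_include(<oneapi/tbb/info.h>)
#include <oneapi/tbb/info.h>
#endif

// Sketch: use the oneTBB query when its header exists, otherwise fall
// back to the standard library's best guess at the core count.
inline int defaultConcurrency()
{
#if __has_include(<oneapi/tbb/info.h>)
    return oneapi::tbb::info::default_concurrency();
#else
    const unsigned n = std::thread::hardware_concurrency();
    return n > 0 ? static_cast<int>(n) : 1; // may report 0 if unknown
#endif
}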
Best regards,
Dmitry
On 14.10.2021 12:39, Timo Koch wrote:
On 14. Oct 2021, at 00:12, Dmitry Pavlov <dmitry.pav...@outlook.com> wrote:
Hello,
I am writing a compositional oil-gas simulator in DuMux. An important
part of it, and arguably the most CPU-demanding, is determining whether
a given composition will stay in one phase (the stability test) and, if
the stability test returns false, the two-phase flash.
VolumeVariables::update seems to be the most appropriate place to do
this kind of calculation. Please correct me if I am wrong.
I defined my own MyVolumeVariables and I call completeFluidState() in
its update() method every time. Now, the fewer
MyVolumeVariables::update() calls, the better. I found out that my
program performs far more calls than I anticipated.
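For context, the structure is roughly this (a simplified sketch; the
base class and the update() signature follow the usual DuMux
volume-variables pattern, and completeFluidState() is my own code):

#include <dumux/porousmediumflow/volumevariables.hh>

// Simplified sketch of the custom volume variables described above;
// completeFluidState() (stability test + flash) is my own code.
template<class Traits>
class MyVolumeVariables
: public Dumux::PorousMediumFlowVolumeVariables<Traits>
{
    using ParentType = Dumux::PorousMediumFlowVolumeVariables<Traits>;
public:
    template<class ElemSol, class Problem, class Element, class Scv>
    void update(const ElemSol& elemSol, const Problem& problem,
                const Element& element, const Scv& scv)
    {
        ParentType::update(elemSol, problem, element, scv);
        // the expensive part runs on every call:
        completeFluidState(elemSol, problem, element, scv);
    }
};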
1. Some of the extra calls were coming from the (unmerged) fix
<https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/merge_requests/2146>
that allowed relative permeabilities of an scv to depend on neighbor
dofs. The fix was needed for another problem (surfactant simulation),
so I reverted it for now. OK.
2. Some of the extra calls were coming from the PrimaryVariableSwitch
machinery, which is not required at all for this task. I set my own
PrimaryVariableSwitch implementation via VolumeVariables, with an
empty update(). OK.
Hi Dmitry,
I would have to look into this a bit closer to give better answers but
some quick answers for now.
For 2. that's the right thing to do. There is also the
NoPrimaryVariableSwitch which should do exactly that, i.e. nothing.
3. I am using the box method. Each node has its primary variables
that go into MyVolumeVariables::update() and then into
completeFluidState(). This is good. What is bad is that this is done
independently for every scv connected to the node. So on a
rectangular 2D grid I get four completeFluidState() calls with the
same primary variables, i.e. I am repeating the same calculation four
times. Can this be avoided at present? Can it be avoided in principle?
DuMux chooses the most general case here (although, arguably
inconsistently, this is not done for point 1 above, as you already
noticed). The volvars might indeed be different for every scv at the
node, so this has to be done in the general case.
On a rectangular 2D grid you should probably get 8 calls, no? 4 for
the residual and 4 for the deflected residual to compute the derivative.
Avoiding this is not possible out of the box, but you can of course
achieve it by implementing a caching mechanism, e.g. in the
GridVolumeVariables (which then probably needs to be accessible through
the problem, since that's what you get in volvars.update).
(Also see below if you haven't enabled the out-of-the-box caching
already.)
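For instance, one could memoize the flash result per dof, so that scvs
sharing a node reuse one computation. A hypothetical sketch (FlashCache
and all names are illustrative, not DuMux API; it assumes the primary
variables type is comparable with !=):

#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

// Hypothetical per-dof cache: scvs sharing a node reuse one flash
// result as long as the primary variables at that dof are unchanged.
template<class PrimaryVariables, class FlashResult>
class FlashCache
{
public:
    explicit FlashCache(std::size_t numDofs) : entries_(numDofs) {}

    template<class FlashFunc>
    const FlashResult& getOrCompute(std::size_t dofIdx,
                                    const PrimaryVariables& priVars,
                                    FlashFunc&& flash)
    {
        auto& entry = entries_[dofIdx];
        if (!entry || entry->first != priVars) // recompute only on change
            entry.emplace(priVars, flash(priVars));
        return entry->second;
    }

private:
    std::vector<std::optional<std::pair<PrimaryVariables, FlashResult>>> entries_;
};

With the box method, something like this would reduce the four
identical per-scv calls at a shared node to a single flash per dof and
assembly pass.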
4. As soon as one step of the Newton method is completed,
computeResidualReduction_ is called, which in turn calls
assembleResidual; after that, MyVolumeVariables::update() is called
twice for every scv, without any change in primary variables: first via
bindLocalViews -> curElemVolVars.bind -> bindElement
and then via
bindLocalViews -> prevElemVolVars.bindElement
I honestly do not understand what is going on here.
I'm not 100% sure if this is the case here. But there is a caching
mechanism already implemented. So if volvar updates are expensive and
you don't have this enabled already, you definitely want to use

namespace Dumux::Properties {

template<class TypeTag>
struct EnableGridVolumeVariablesCache<TypeTag, TTag::YOURTYPETAG>
{ static constexpr bool value = true; };

template<class TypeTag>
struct EnableGridFluxVariablesCache<TypeTag, TTag::YOURTYPETAG>
{ static constexpr bool value = true; };

} // end namespace Dumux::Properties

This will require more memory but avoids unnecessary updates.
But usually, for computing a residual criterion for the Newton scheme,
you definitely need to do at least one more update with the final
solution. With caching enabled, you can then avoid this for the next
Newton step, and the bindLocalViews -> prevElemVolVars.bindElement
should also be a no-op.
5. Then the next step is started, during which the system is
assembled. There, MyVolumeVariables::update() is called 7 times for
each scv. I have 5 components, so 5 primary variables (pressure + 4
mole fractions). Why 7 calls and not 5? I figure that 5 calls should
be needed to calculate the numerical derivatives w.r.t. the current
state, which has already been assembled at the end of the previous
step. Even if we reassemble it, that makes only 6 calls (5 deflected
residuals + 1 base residual), not 7.
Sorry, I also can't answer that without looking into it more.
Maybe some of the suggestions above already help?
Best wishes,
Timo
Please bear with me, as I may have misconceptions about the box method
and the DuMux way of working. The above is not criticism but a request
for advice.
Best regards,
Dmitry
_______________________________________________
DuMux mailing list
DuMux@listserv.uni-stuttgart.de
https://listserv.uni-stuttgart.de/mailman/listinfo/dumux