At Thu, 1 Oct 2020 04:20:27 +0000, "tsunakawa.ta...@fujitsu.com" 
<tsunakawa.ta...@fujitsu.com> wrote in 
> From: Kyotaro Horiguchi <horikyota....@gmail.com>
> > In more detail, if smgrcachednblocks() returned InvalidBlockNumber for
> > any of the forks, we should give up the optimization altogether, since
> > we need to run a full scan anyway.  On the other hand, if any of the
> > forks is smaller than the threshold, we can still use the optimization
> > when we know the accurate block number of all the forks.
> 
> Ah, I got your point (many eyes in open source development is nice.)  Still, 
> I feel it's better to treat each fork separately, because the inner loop in 
> the traditional path may be able to skip forks that have been already 
> processed in the optimization path.  For example, if the forks[] array 
> contains {fsm, vm, main} in this order (I know main is usually put at the 
> beginning), fsm and vm are processed in the optimization path and the inner 
> loop in the traditional path can skip fsm and vm.

I thought that the advantage of this optimization is that we don't
need to visit all buffers?  If we need to run a full scan for any
reason, there's no point in looking up already-visited buffers
again. That's just wasteful cycles.  Am I missing something?


> > Still, I prefer to use the total block number of all forks, since we
> > visit all the forks anyway.  Is there any reason to exclude forks
> > other than the main fork while we visit all of them already?
> 
> When the number of cached blocks for a main fork is below the threshold but 
> the total cached blocks of all forks exceeds the threshold, the optimization 
> is skipped.  I think that's a waste.

I don't understand. If we choose the optimized dropping, the reason
is that the number of buffer lookups is below a certain threshold. Why
do you think that the kind of fork a buffer belongs to is relevant to
that criterion?

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
