On 2/15/26 23:13, Andres Freund wrote:
> Hi,
> 
> On 2026-02-15 22:35:35 +0100, Tomas Vondra wrote:
>> On 2/15/26 21:59, Andres Freund wrote:
>>> Hi,
>>>
>>> On 2026-02-15 14:34:07 -0500, Andres Freund wrote:
>>>> debug_io_direct=data, enable_indexscan_prefetch=1, w/ stream->distance * 2 + 1
>>>>
>>>>  Index Scan using idx_periodic_100000 on prefetch_test_data_100000  (cost=0.29..15351.09 rows=100000 width=8) (actual time=0.316..176.703 rows=100000.00 loops=1)
>>>>    Index Searches: 1
>>>>    Prefetch: distance=707.476 count=11158 stalls=88503 skipped=0 resets=0 pauses=26 ungets=0 forwarded=0
>>>>              histogram [2,4) => 5, [4,8) => 11, [8,16) => 26, [16,32) => 30, [32,64) => 54, [64,128) => 109, [128,256) => 221, [256,512) => 428, [512,1024) => 10274
>>>>    Buffers: shared hit=96875 read=3400
>>>>    I/O Timings: shared read=33.874
>>>>  Planning:
>>>>    Buffers: shared hit=78 read=21
>>>>    I/O Timings: shared read=2.772
>>>>  Planning Time: 3.065 ms
>>>>  Execution Time: 182.959 ms
>>>>
>>>> The stall stats are bogus, because they get increased even when we 
>>>> correctly
>>>> are not prefetching due to everything being in shared buffers. I think the
>>>>   if (distance == 1) stats.nstalls++
>>>> would need to be just before the WaitReadBuffers().
>>>
>>> The histogram and distance are also somewhat misleading: They measure what 
>>> the
>>> distance is at the time the next block is determined, but that's not really
>>> informative, as the distance can be much bigger than what we are actually
>>> doing IO wise (to allow for IO combining etc).  The limit for the number of
>>> in-flight IOs will be the limiting factor in a case with random-ish IOs and
>>> it's also really what matters for performance.
>>>
>>
>> This EXPLAIN part was hacked together as something to help us during
>> development, and a lot of the information is wonky and not well defined.
>> Which is why we chose not to include it in the patches posted to
>> hackers, so I'm a bit confused which patch / branch you're looking at.
> 
> I was looking at Peter's git tree and unreverted the stats, after seeing some
> odd performance.  I found the stats quite valuable; without them it'd have
> been quite hard to figure out why larger sequential or periodic tables
> (from Alexandre's workload) currently have very subpar performance.
> 

OK, I was just confused as it wasn't in the published patch. I agree the
info is useful/valuable, which is why I wrote the patch initially. I'll
try to improve it to make it less misleading.

> 
>> For stalls you're probably right. I'll think about it.
> 
> Thx.
> 
> 
>> I'm not sure about the distance shown. What do you mean by "the distance
>> can be much bigger than what we are actually doing IO wise"?
> 
> stream->distance is just a cap of how far we *may* look ahead, not how far we
> are currently looking ahead.
> 
> E.g. if you have a stream that full tilt blazes ahead with 1 block random IOs,
> none of them in s_b, you'll soon have a distance that's large, as it gets
> doubled for every miss until hitting the cap (of io_combine_limit *
> effective_io_concurrency, capped by the buffer pin limit).  But because you're
> doing random IO, you're just doing effective_io_concurrency IOs, not
> effective_io_concurrency * io_combine_limit.
> 
> This gets even more extreme if you yield often, because that will lead to the
> distance staying relatively high, while preventing actually issuing much
> concurrent IO.
> 

I wonder if this might be hurting us. Peter was working on adding what
he calls "adaptive yielding", so maybe that could be preventing us from
issuing enough concurrent IOs.

> 
>> IIRC in that particular case we needed to know how far ahead is the
>> "prefetch position" (I mean, how many index entries are in between).
> 
> Right - but that's not what looking at ->distance tells you :).  I think you
> could use ->pinned_buffers for it, if you want to look at the number of
> blocks, not the number of IOs.
> 

Sure, but it's the one thing that's easily accessible, even if it's
imperfect. It can definitely tell us when the distance "collapses" close
to 1.0 (in which case we can't be issuing any concurrent IOs).

Ideally we'd have a way to look at the "distance" in the actual batch,
but that's invisible to the read_stream code. Maybe we could track that
within indexam.c/indexbatch.c - I'll give it a try. The patch predates
the refactoring from a couple weeks ago, so it might work much nicer now.

> 
>>> FWIW, if I change the batchdistance <= 2 check to <= 8, I get good perf even
>>> with io_combine_limit=16:
>>>
>>> stats using stream->ios_in_progress:
>>>    Prefetch: distance=2.605 count=315526 stalls=3 skipped=9687128 resets=0 
>>> pauses=3035 ungets=0 forwarded=50
>>>              histogram [1,2) => 72679, [2,4) => 170115, [4,8) => 72682
>>>    Buffers: shared hit=27325 read=312500
>>>    I/O Timings: shared read=125.902
>>>
>>> but that was just an experiment.
>>>
>>

I missed this comment about batchdistance before. I'm sure there are
cases where a higher threshold would work better, but IIRC it's meant as
a safety measure for short queries, so they don't run away looking for
the next block to prefetch. And the block may not even be needed in some
cases, e.g. with LIMIT.

So increasing the value to 8 would help this particular query, but could
easily hurt various other queries - like index-only scans with LIMIT.

Maybe we should gradually ramp up the threshold, instead of keeping it
at 2 forever.

>> I'll take a close look tomorrow, but AFAICS we really aim to measure two
>> rather different things. I've been interested in "how far ahead are we
>> looking" and you're more interested in the number of I/Os initiated by
>> the stream. Which both seem interesting and necessary to understand
>> what's going on.
> 
> When do you care about the distance purely in blocks, rather than IOs? If you
> can't actually have IO concurrency, due to io combining and yielding / low pin
> limits never actually allowing multiple IOs, you'll have no gain from AIO.
> 

If we're doing 1000 IOs, but each IO is either an 8K or a 128K chunk,
those seem like rather different situations, no? The bandwidth will be
vastly different, and it also saturates the other operators differently.
But I haven't thought about that too deeply. Just knowing the number of
IOs seems incomplete.


regards

-- 
Tomas Vondra


