On Fri, Jul 24, 2020 at 12:55 PM Peter Geoghegan <p...@bowt.ie> wrote:
> Could that be caused by clustering in the data?
>
> If the input data is in totally random order then we have a good
> chance of never having to spill skewed "common" values. That is, we're
> bound to encounter common values before entering spill mode, and so
> those common values will continue to be usefully aggregated until
> we're done with the initial groups (i.e. until the in-memory hash
> table is cleared in order to process spilled input tuples). This is
> great because the common values get aggregated without ever spilling,
> and most of the work is done before we even begin with spilled tuples.
>
> If, on the other hand, the common values are concentrated together in
> the input...
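To make that scenario concrete, a test along the following lines
exercises it. This is only a sketch -- the table names, row counts,
and skewed distribution are invented for illustration and aren't
necessarily what the attached test-agg-sorted.sql does, and the
constants may need tuning so that the full set of groups lands near
the work_mem limit:

-- random() * random() gives a skewed grouping key: small values are
-- common, large values are rare
create table agg_random as
select floor(random() * random() * 10000000)::int as val
from generate_series(1, 10000000) g;

-- the same rows rewritten in sorted order, so that all duplicates of
-- any given value are physically clustered together
create table agg_sorted as
select val from agg_random order by val;

analyze agg_random;
analyze agg_sorted;

set work_mem = '200MB';

-- clustered input: many groups first appear only after the hash table
-- has filled and spill mode has begun
explain (analyze, costs off)
select val, count(*) from agg_sorted group by val;

-- random input: the common values are all established in the hash
-- table early, so most tuples are aggregated in memory
explain (analyze, costs off)
select val, count(*) from agg_random group by val;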
I still don't know if that was a factor in your example, but I can
clearly demonstrate that the clustering of data can matter a lot to
hash aggs in Postgres 13. I attach a contrived example where it makes
a *huge* difference. I find that the sorted version of the aggregate
query takes significantly longer to finish, and has the following
spill characteristics:

"Peak Memory Usage: 205086kB  Disk Usage: 2353920kB  HashAgg Batches: 2424"

Note that the planner doesn't expect any partitions here, but we still
get 2424 batches -- so the planner seems to get it totally wrong.
OTOH, the same query against a randomized version of the same data (no
longer in sorted order, no clustering) works perfectly with the same
work_mem (200MB):

"Peak Memory Usage: 160534kB"

Hash agg avoids spilling entirely (so the planner gets it right this
time around). It even uses notably less memory.

--
Peter Geoghegan
test-agg-sorted.sql