Robert Bedell wrote on Thu, 18.12.2003 at 01:55:
> > I guess that by adding hash aggregates Tom solved most of the problems
> > with adding ROLLUP, CUBE and GROUPING SETS.
> > 
> > OTOH, I'm not sure if hash aggregates can already spill to disk if not
> > enough memory is available for keeping them all. If not, then adding
> > this capability would be a great push towards their general use for
> > GROUPING SETS.
> > 
> > Also, a mix of scan-over-sorted-group-by + hash aggregates for
> > out-of-order extra groups would be a great way to use less memory for
> > hash aggregates.
> 
> The other issue is that in a scan-over-sorted-group-by without out-of-order
> grouping sets you can return tuples as you reset the aggregators.  With
> out-of-order grouping sets you would have to wait until the whole table was
> scanned - at least for those grouping sets - to return the resulting tuples.
> Since this could get rather large, the spill-to-disk functionality is
> necessary.  It should probably mimic how the sort does it...
> 
> Another point is selecting the best way to sort a given collection of
> grouping sets for minimal memory usage. 
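
For the in-order grouping sets the aggregate can indeed be returned as
soon as the group key changes; it is only the out-of-order sets that have
to be buffered (and possibly spilled) until the scan ends. A rough sketch
of the difference, in Python with made-up data, purely to illustrate the
idea:

from itertools import groupby
from collections import defaultdict

# rows already sorted on (country, city)
rows = [("de", "berlin", 10), ("de", "munich", 20),
        ("ee", "tallinn", 5), ("ee", "tartu", 7)]

# in-order set (country, city): each group can be emitted the moment
# its key changes, so memory stays bounded by one group's state
def sorted_scan(rows):
    for key, grp in groupby(rows, key=lambda r: (r[0], r[1])):
        yield key, sum(r[2] for r in grp)

# out-of-order set (city): all groups have to be kept in memory
# (or spilled) until the whole input has been read
def hash_agg(rows):
    acc = defaultdict(int)
    for country, city, amount in rows:
        acc[city] += amount
    return acc

print(list(sorted_scan(rows)))   # groups stream out in key order
print(dict(hash_agg(rows)))      # available only after the full scan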

It seems that the longest GROUPING SET and all of its left-continuous
subsets (i.e. its prefixes) could be collected from the sorted scan, and
the rest from hash aggregates.

GROUPING SET () will always need a "hash" ;)
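
A small sketch of that split - given the sort order of the scan, the
grouping sets that are left-continuous prefixes of it come straight from
the sorted scan, everything else (including the empty set) goes to the
hash side. Python, with hypothetical names, just to illustrate:

def split_grouping_sets(sort_order, grouping_sets):
    # prefixes of the sort order are covered by the sorted scan
    prefixes = {tuple(sort_order[:i]) for i in range(1, len(sort_order) + 1)}
    from_scan, from_hash = [], []
    for gs in grouping_sets:
        (from_scan if tuple(gs) in prefixes else from_hash).append(tuple(gs))
    return from_scan, from_hash

# GROUPING SETS ((a,b,c), (a,b), (b,c), (a), ()) with a scan sorted on (a,b,c)
scan, hashed = split_grouping_sets(
    ("a", "b", "c"),
    [("a", "b", "c"), ("a", "b"), ("b", "c"), ("a",), ()])
# scan   -> [('a', 'b', 'c'), ('a', 'b'), ('a',)]
# hashed -> [('b', 'c'), ()]     <- () always lands here, a one-bucket "hash"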

Optimising any further would require the use of statistics data, and is
probably not a good idea before the simpler approach is implemented.

> Any ORDER BY in the query should
> really be applied after the grouping operation.
> 
> The CUBE and ROLLUP operators should really be applied by expanding them
> into the equivalent collections of grouping sets.

For pure ROLLUP one could shortcut the split-into-groups and
put-together-again process, as ROLLUP is already doable from a single
sorted scan.
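
The expansion itself is mechanical - ROLLUP(a, b, c) becomes exactly the
prefix chain that a single sorted scan covers, while CUBE(a, b) produces
every subset. A quick Python sketch of the standard rewrite, nothing
PostgreSQL-specific:

from itertools import combinations

def rollup(cols):
    # ROLLUP(a, b, c) -> (a, b, c), (a, b), (a), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(cols):
    # CUBE(a, b) -> (a, b), (a), (b), ()
    return [s for n in range(len(cols), -1, -1)
              for s in combinations(cols, n)]

print(rollup(("a", "b", "c")))  # the prefix chain - one sorted scan is enough
print(cube(("a", "b")))         # all 2^n subsets - most of them need hashing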

-------------
Hannu
