On 6 December 2012 17:21, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> On 5 December 2012 23:37, David Rowley <dgrowle...@gmail.com> wrote:
>>> Though this plan might not be quite as optimal as it could be as it
>>> performs the grouping after the join.
>
>> PostgreSQL always calculates aggregation as the last step.
>
>> It's a well known optimisation to push-down GROUP BY clauses to the
>> lowest level, but we don't do that, yet.
>
>> You're right that it can make a massive difference to many queries.
>
> In the case being presented here, it's not apparent to me that there's
> any advantage to be had at all. You still need to aggregate over the
> rows joining to each uniquely-keyed row. So how exactly are you going
> to "push down the GROUP BY", and where does the savings come from?
David presents SQL that shows how that is possible. In terms of operators,
after the push-down we aggregate 1 million rows and then join 450 rows,
which seems cheaper than joining 1 million rows and then aggregating
1 million. So we pass nearly 1 million fewer rows into the join.

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
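[Editor's note: the transformation under discussion can be sketched with two
equivalent queries. The schema below (`products` uniquely keyed, `sales` the
many-row side) is hypothetical, not David's original tables, and sqlite3 is
used only to make the example self-contained; the point is that aggregating
the many-row side *before* the join yields the same answer while feeding far
fewer rows into the join.]

```python
import sqlite3

# Hypothetical schema: "products" is uniquely keyed; "sales" has many
# rows per product. Stand-ins for the 450-row and 1-million-row tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE sales (product_id INTEGER, amount INTEGER)")
cur.executemany("INSERT INTO products VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 5), (2, 5), (3, 7)])

# Plan 1: join first, then aggregate -- every sales row passes
# through the join before grouping.
join_then_group = cur.execute("""
    SELECT p.id, p.name, SUM(s.amount)
    FROM products p JOIN sales s ON s.product_id = p.id
    GROUP BY p.id, p.name
    ORDER BY p.id
""").fetchall()

# Plan 2: push the GROUP BY below the join -- aggregate sales first,
# so the join sees only one row per product key.
group_then_join = cur.execute("""
    SELECT p.id, p.name, agg.total
    FROM products p
    JOIN (SELECT product_id, SUM(amount) AS total
          FROM sales
          GROUP BY product_id) agg
      ON agg.product_id = p.id
    ORDER BY p.id
""").fetchall()

# Both plans produce identical results; only the row counts flowing
# into the join differ.
assert join_then_group == group_then_join
```

The rewrite is valid here because the join key on the outer side is unique,
so grouping by `product_id` before the join partitions the sales rows exactly
as the post-join GROUP BY would.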