Andres Freund <and...@anarazel.de> writes:
> It's quite easy to change iteration so we start with the latest item,
> and iterate till the first, rather than the other way round.  In
> benchmarks with somewhat wide columns and aggregation, this yields
> speedups of over 30%, before hitting other bottlenecks.
> I do wonder however if it's acceptable to change the result order of
> sequential scans.

I think there will be a lot of howls.  People expect that creating a
table by inserting a bunch of rows, and then reading back those rows,
will not change the order.  We already futzed with that guarantee a bit
with syncscans, but that only affects quite large tables --- and even
there, we were forced to provide a way to turn it off.

If you were talking about 3X then maybe it would be worth it, but for
30% (on a subset of queries) I am not excited.

I wonder whether we could instead adjust the rules for insertion so
that tuples tend to be physically in order by itemid.  I'm imagining
leaving two "holes" in a page and sometimes (hopefully not often)
having to shift data during insert to preserve the ordering.

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
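[Editor's illustration] The reverse-iteration idea quoted above rests on how a heap page is laid out: line pointers grow from the front of the page while tuple data is packed backward from the end, so item 1's data sits at the highest offset. Walking items in reverse line-pointer order therefore touches tuple bytes in ascending address order, which is friendlier to hardware prefetching. A minimal sketch of that layout (plain Python, not PostgreSQL source; `PAGE_SIZE` and `TUPLE_LEN` are made-up constants for illustration):

```python
# Hedged sketch, not PostgreSQL code: simulate tuple data packed
# downward from the end of a fixed-size page, as in a heap page.
PAGE_SIZE = 8192   # assumed page size for illustration
TUPLE_LEN = 100    # assumed fixed tuple width for illustration

def insert_tuples(n):
    """Return line-pointer offsets for n tuples packed from the page end."""
    offsets = []
    upper = PAGE_SIZE
    for _ in range(n):
        upper -= TUPLE_LEN          # data area grows toward lower addresses
        offsets.append(upper)       # line pointer records where the tuple landed
    return offsets

offsets = insert_tuples(4)
# Forward itemid order visits descending addresses...
assert all(offsets[i] < offsets[i - 1] for i in range(1, len(offsets)))
# ...so reverse itemid order visits ascending addresses.
assert list(reversed(offsets)) == sorted(offsets)
```

Ascending-address access is what lets the prefetcher stream the tuple data, which is consistent with the benchmark gains Andres reports on wide rows.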
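[Editor's illustration] The alternative floated above, keeping tuples physically ordered by itemid at insertion time, can be sketched abstractly. This is a toy model, not a proposal-quality design: the list `insert` stands in for the memmove a real page would need, and the "two holes" slack that would make most inserts shift-free is deliberately left out:

```python
# Hedged sketch: keep a page's tuples physically ordered by itemid,
# shifting later tuples when an insert would break the ordering.
def insert_ordered(page, itemid, tup):
    """Insert tup so page stays sorted by itemid.

    page is a list of (itemid, tuple_data) pairs; the list shift here
    models the data movement a real page would occasionally pay.
    """
    pos = 0
    while pos < len(page) and page[pos][0] < itemid:
        pos += 1
    page.insert(pos, (itemid, tup))   # shifts everything after pos

page = []
for i in (3, 1, 2):                   # out-of-order arrivals
    insert_ordered(page, i, "row%d" % i)
assert [i for i, _ in page] == [1, 2, 3]
```

With the physical order guaranteed, a plain forward scan would get the same prefetch-friendly access pattern without changing the result order users see.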