Alvaro Herrera wrote:
No amount of tinkering is going to change the fact that a seqscan is the
fastest way to execute these queries.  Even if you got it to be all in
memory, it would still be much slower than the other systems which, I
gather, are using columnar storage and thus are perfectly suited to this
problem (unlike Postgres).  The talk about "compression ratios" caught
me by surprise until I realized it was columnar stuff.  There's no way
you can get such high ratios with regular, row-oriented storage.

One of the "good tricks" with Postgres is to convert a very wide table into a 
set of narrow tables, then use a view to create something that looks like the original 
table.  It requires you to modify the write portions of your app, but the read portions 
can stay the same.
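
A minimal sketch of what that can look like (the table and column names here
are made up for illustration, not from any real schema):

    -- Hypothetical split: "measurements" was one wide table; pull the
    -- frequently-scanned column into its own narrow table, keyed the same way.
    CREATE TABLE measurements_hot (
        id      bigint PRIMARY KEY,
        reading double precision
    );

    CREATE TABLE measurements_rest (
        id         bigint PRIMARY KEY REFERENCES measurements_hot (id),
        label      text,
        notes      text,
        created_at timestamptz
        -- ... the remaining wide columns
    );

    -- A view that looks like the original wide table, so the read side of
    -- the app doesn't have to change.
    CREATE VIEW measurements AS
    SELECT h.id, h.reading, r.label, r.notes, r.created_at
    FROM measurements_hot h
    JOIN measurements_rest r USING (id);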

A seq scan on one column will be *much* faster when you rearrange your database 
this way, since it's only scanning relevant data.  You pay the price of an extra 
join on the primary keys, though.
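
Using the same hypothetical tables as above, a query that only needs the hot
column can go straight to the narrow table, while anything going through the
view pays the join:

    -- Touches only the narrow table: far fewer pages to scan.
    SELECT avg(reading) FROM measurements_hot;

    -- Same answer through the view, but planned as a join on the primary key.
    SELECT avg(reading) FROM measurements;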

If you have just a few columns in a very wide table that are seq-scanned a lot, 
you can pull out just those columns and leave the rest in the wide table.

The same trick is also useful if you have one or a few columns that are updated 
frequently: pull them out, and use a view to recreate the original appearance.  
Since each UPDATE then rewrites only the narrow row instead of the whole wide one, 
it saves a lot on garbage collection (vacuum).
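
The same idea sketched for a frequently-updated counter column (again, the
names are hypothetical): each update creates a dead row version only in the
small table, so there is much less for vacuum to clean up.

    CREATE TABLE items_static (
        id    bigint PRIMARY KEY,
        name  text,
        descr text
    );

    CREATE TABLE items_counters (
        id   bigint PRIMARY KEY REFERENCES items_static (id),
        hits bigint NOT NULL DEFAULT 0
    );

    CREATE VIEW items AS
    SELECT s.id, s.name, s.descr, c.hits
    FROM items_static s
    JOIN items_counters c USING (id);

    -- Only the narrow counters row is rewritten; the wide static row is untouched.
    UPDATE items_counters SET hits = hits + 1 WHERE id = 42;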

Craig

