Our ~600,000,000-row table is changed very infrequently and, for historical reasons, sits on a 12-disk software RAID-6 attached to an LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS controller. Our ~50,000,000-row staging table is on a 12-disk hardware RAID-10 behind a Dell PowerEdge Expandable RAID Controller 5 (PERC 5).

So, to my disk IO and index question. When I issue a query on the big table like this:
SELECT    column, count(*)
FROM    bigtable
GROUP BY column
ORDER BY count DESC
When I run dstat to watch my disk IO, I see the software RAID-6 consistently holding over 70M/sec. This is fine with me, but I generally don't like running queries that table-scan 600,000,000 rows, so instead I filter by date (the second query, further below).

Note that RAID-5 or RAID-6 is fine when reading; it's the small random writes that kill it. Is the table being inserted into while you run this query (which would generate small random writes for the index updates), or is it only inserted into during the nightly cron job?

70 MB/s seems to me quite close to what a single SATA disk can do these days. My software RAID-5 saturates the PCI bus in my machine and pushes more than 120 MB/s. You have PCI-X and 12 disks, so you should be getting huge disk throughput, really mind-boggling figures, not 70 MB/s.
        Since this seems to be a high-budget system, perhaps a good fast hardware RAID controller?
Or perhaps this test was performed under heavy load and it is actually a good result.


Some background on how the tables are used: all of the rows in the staging table are changed at least once and then deleted and re-created in the bigger table. All of the staging table's indexes are on the RAID-10; the Postgres data directory itself is on the RAID-6. I think all the disks are 10K RPM SATA. The setup is kind of a beast.

The date-filtered query:

SELECT    column, count(*)
FROM    bigtable
WHERE date > '4-24-08'
GROUP BY column
ORDER BY count DESC
When I run dstat on this one, I see only around 2M/sec, and it is not consistent at all.

So my question is, why do I see such low IO load on the index scan version?

First, it is probably choosing a bitmap index scan, which means it needs to grab lots of pages from the index. If your index is fragmented, just scanning the index could take a long time.
        Then, it is probably taking lots of small random bites out of the table data. If this is an archive table, the dates should be increasing sequentially on disk. If that is not the case, you will get random IO, which is rather bad on huge data sets.

        So.

If you need the rows to be grouped on-disk by date (or perhaps by another field, if you more frequently run other types of query, such as grouping by category; you decide):

        The painful thing will be to reorder the table. Either:
        - use CLUSTER, or
        - recreate a table and INSERT INTO it ORDER BY the field you chose. This is going to take a while, so set sort_mem (called work_mem from 8.0 on) to a large value first. Then create the indexes.
Both options are sketched below.
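
        A minimal sketch of both options, assuming the date column and date_idx index that appear in the plans below; the other names are illustrative:

-- Option 1: cluster the existing table on its date index.
-- Pre-8.3 syntax; 8.3 spells it CLUSTER bigtable USING date_idx.
-- Takes an exclusive lock and rewrites the whole table.
CLUSTER date_idx ON bigtable;

-- Option 2: rewrite into a fresh, sorted copy, then index it.
SET work_mem = 1048576;  -- in KB; this setting was called sort_mem before 8.0
CREATE TABLE bigtable_sorted (LIKE bigtable);
INSERT INTO bigtable_sorted SELECT * FROM bigtable ORDER BY date;
CREATE INDEX bigtable_sorted_date_idx ON bigtable_sorted (date);
ANALYZE bigtable_sorted;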

Then, every time you insert data into the archive, be sure to insert it in big batches, ORDER BY the field you chose. That way new inserts will still be in the order you want.

While you're at it, you might think about partitioning the monster on a useful criterion (which one depends on your query patterns).

If I could tweak some setting to make more aggressive use of IO, would it
actually make the query faster? The field I'm scanning has a 0.960858
correlation, but I haven't vacuumed since importing any of the data that

        You have ANALYZEd, at least?
Because if you didn't, and an index scan (not bitmap) comes up on this kind of query and does a million index hits, you have a problem.
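
        For reference, this is all it takes, and the standard pg_stats system view then shows what the planner believes about the column (table and column names follow the examples in this thread):

ANALYZE bigtable;

SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'bigtable' AND attname = 'date';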

I'm scanning, though the correlation should remain very high. When I do a similar set of queries on the hardware RAID, I see similar behavior, except that both numbers are more than doubled.

Here is the explain output for the queries:
SELECT    column, count(*)
FROM    bigtable
GROUP BY column
ORDER BY count DESC
"Sort  (cost=74404440.58..74404444.53 rows=1581 width=10)"
"  Sort Key: count(*)"
"  ->  HashAggregate  (cost=74404336.81..74404356.58 rows=1581 width=10)"
"        ->  Seq Scan on bigtable (cost=0.00..71422407.21 rows=596385921
width=10)"

        Plan is OK (nothing else to do here, really).

---------------
SELECT    column, count(*)
FROM    bigtable
WHERE date > '4-24-08'
GROUP BY column
ORDER BY count DESC
"Sort  (cost=16948.80..16948.81 rows=1 width=10)"
"  Sort Key: count(*)"
"  ->  HashAggregate  (cost=16948.78..16948.79 rows=1 width=10)"
"        ->  Index Scan using date_idx on bigtable (cost=0.00..16652.77
rows=59201 width=10)"
" Index Cond: (date > '2008-04-21 00:00:00'::timestamp without
time zone)"

        Argh.
        So you got a plain index scan after all, not a bitmap one.
Is the 59201-row estimate right? If the real number is ten times that, you really have a problem.
        Is it ANALYZEd?
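
        An easy way to check: EXPLAIN ANALYZE runs the query for real and prints the actual row counts next to the estimates (this repeats the placeholder names from the query above):

EXPLAIN ANALYZE
SELECT    column, count(*)
FROM    bigtable
WHERE date > '4-24-08'
GROUP BY column
ORDER BY count DESC;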

Now for the asking-for-advice part. I have two questions. First: what is the fastest way to copy data from the smaller table to the larger table?

        INSERT INTO ... SELECT ... FROM ... (add ORDER BY to taste).
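
        Spelled out, it would look something like this (the staging table name is made up here):

INSERT INTO bigtable
SELECT * FROM staging_table
ORDER BY date;  -- sorted, so the new rows land on disk in date order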

Second: we plan to rearrange the setup when we move to Postgres 8.3. We'll probably move all the storage over to a SAN and slice the larger table into monthly or weekly tables. Can someone point me to a good page on partitioning? My gut tells me it should be better, but I'd like to learn more about why.

        Because in your case, records having the dates you want will be in one partition (or two), so you get a kind of automatic CLUSTER. For instance, if you run your query on last week's data, it will seq-scan last week's partition (which is a much more manageable size) and not even look at the others.
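
        A minimal monthly-partition sketch in the inheritance style described in the "Partitioning" chapter of the PostgreSQL manual (all names here are illustrative):

-- child table holds one month; the CHECK constraint tells the
-- planner what date range lives here
CREATE TABLE bigtable_2008_04 (
    CHECK (date >= '2008-04-01' AND date < '2008-05-01')
) INHERITS (bigtable);

CREATE INDEX bigtable_2008_04_date_idx ON bigtable_2008_04 (date);

-- lets the planner skip partitions whose CHECK constraint
-- contradicts the query's WHERE clause (available since 8.1)
SET constraint_exclusion = on;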

Matthew said:
You could possibly not bother with a staging table, and replace the mass copy with making a new partition. Not sure of the details myself, though.

        Yes, you could do that.
When a partition ceases to be actively updated, though, you should CLUSTER it so it is really tight and fast. CLUSTERing a partition that holds a week's worth of data will obviously be much faster than CLUSTERing your monster archive.
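
        For example, on a closed-out April partition (names as in the sketch above):

-- pre-8.3 syntax; 8.3 spells it
-- CLUSTER bigtable_2008_04 USING bigtable_2008_04_date_idx
CLUSTER bigtable_2008_04_date_idx ON bigtable_2008_04;
ANALYZE bigtable_2008_04;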








