Our usage pattern has recently left me with some very bloated database clusters. In the past I have scheduled downtime to run VACUUM FULL, and have tried CLUSTER as well, followed by a REINDEX on all tables. This works, but the exclusive lock has become a real thorn in my side. As our system grows, I am having trouble scheduling enough downtime for either of these operations, or for a full dump/reload. I do run VACUUM regularly; the problem is that sometimes we need to go back and update a huge percentage of rows in a single batch due to changing customer requirements, leaving us with significant table bloat.
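For reference, the maintenance pass described above looks roughly like this (the table and index names here are placeholders, not from my actual schema):

```sql
-- Rewrite the table to reclaim dead space. Takes an ACCESS EXCLUSIVE
-- lock for the duration, blocking even SELECTs.
VACUUM FULL my_table;

-- Alternative: physically reorder the table by one of its indexes.
-- Also rewrites the whole table under an ACCESS EXCLUSIVE lock.
CLUSTER my_table USING my_table_pkey;

-- Rebuild the indexes afterward, since they can end up bloated too.
REINDEX TABLE my_table;
```

Either path holds the exclusive lock for the whole rewrite, which is exactly what I can no longer afford.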

Within the last few days my database cluster has grown from 290GB to 370GB, and because of some other major data updates on my TO-DO list, I expect it to double, which will put me up against my storage capacity.

The root of my question is that I don't understand why the tables can't be left in read-only mode while one of these operations runs. Since most of our usage is OLAP, this really wouldn't matter much as long as the users could still query their data in the meantime. Is there some way I can allow users read-only access to this data while things are cleaned up in the background? INSERTs can wait; SELECTs cannot.

So how do other people handle such a problem when downtime is heavily frowned upon? We have 24/7 access (but again, the users only read data).
