On 2012-02-15, Asher Hoskins as...@piceur.com wrote:
Hello.
I've got a database with a very large table (currently holding 23.5
billion rows, the output of various data loggers over the course of my
PhD so far). The table itself has a trivial structure (see below) and is
partitioned by data time/date and has quite acceptable INSERT/SELECT
performance.
On Sat, Mar 24, 2012 at 9:40 PM, Jasen Betts ja...@xnet.co.nz wrote:
have you tried using COPY instead of INSERT (you'll have to insert
into the correct partition)
Triggers fire on COPY, but rules do not. So if he has partitioning
triggers, they'll fire on the parent table etc.
HOWEVER,
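A minimal sketch of Jasen's COPY suggestion (the table, columns, and file path here are hypothetical): loading directly into the child partition sidesteps the parent's routing trigger altogether, and COPY loads all rows under a single transaction.

```sql
-- Hypothetical monthly child partition for February 2012 data.
-- COPY'ing into the child directly means no parent-table routing
-- trigger fires, and the whole load consumes only one transaction ID.
COPY sample_2012_02 (logger_id, sample_time, value)
FROM '/data/logger/2012-02.csv'
WITH CSV;
```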
Hello.
I've got a database with a very large table (currently holding 23.5
billion rows, the output of various data loggers over the course of my
PhD so far). The table itself has a trivial structure (see below) and is
partitioned by data time/date and has quite acceptable INSERT/SELECT
performance.
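Asher's actual table definition didn't survive in this digest; purely for illustration, a data-logger table of that shape, partitioned by time using the inheritance mechanism current in the 8.4/9.0 era, might look like this (all names are assumptions, not the real schema):

```sql
-- Illustrative parent table (the thread's real schema is not shown).
CREATE TABLE sample (
    logger_id   integer     NOT NULL,
    sample_time timestamptz NOT NULL,
    value       real        NOT NULL
);

-- One child per month; the CHECK constraint lets the planner
-- exclude partitions that can't match a query's time range.
CREATE TABLE sample_2012_02 (
    CHECK (sample_time >= '2012-02-01' AND sample_time < '2012-03-01')
) INHERITS (sample);
```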
On Wed, Feb 15, 2012 at 18:46, Asher Hoskins as...@piceur.com wrote:
My problem is that the autovacuum system isn't keeping up with INSERTs and I
keep running out of transaction IDs.
This is usually not a problem with vacuum, but a problem with
consuming too many transaction IDs. I suspect
On Wed, Feb 15, 2012 at 19:25, Marti Raudsepp ma...@juffo.org wrote:
VACUUM FULL is extremely inefficient in PostgreSQL 8.4 and older.
Oh, a word of warning: PostgreSQL 9.0+ has a faster VACUUM FULL
implementation, but it now requires twice your table's size in disk
space during the vacuum.
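Before running the 9.0-style VACUUM FULL, it may be worth checking whether that headroom exists; the rewrite keeps the old and new copies of the table on disk at the same time. A quick size check (relation name hypothetical):

```sql
-- Report the table's total on-disk size (heap, indexes, TOAST).
-- Roughly this much free space is needed while VACUUM FULL
-- rewrites the table in 9.0+.
SELECT pg_size_pretty(pg_total_relation_size('sample_2012_02'));
```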
On 02/15/12 8:46 AM, Asher Hoskins wrote:
I've got a database with a very large table (currently holding 23.5
billion rows,
a table that large should probably be partitioned, likely by time;
maybe a partition for each month. as each partition is filled, it can
be VACUUM FREEZE'd, since it will never be modified again.
On Wed, Feb 15, 2012 at 12:38 PM, John R Pierce pie...@hogranch.com wrote:
so, your ~ monthly batch run could be something like...
create new partition table
copy/insert your 1-2 billion rows
vacuum analyze (NOT full) new table
vacuum freeze new table
update master partition
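John's monthly batch could be sketched roughly as follows, using the inheritance-based partitioning of the 8.4/9.0 era (all table, column, and file names are illustrative, not Asher's actual schema). Creating the child standalone and attaching it last means the load and vacuums finish before the data becomes visible through the parent:

```sql
-- 1. create the new partition, standalone for now
CREATE TABLE sample_2012_03 (
    LIKE sample INCLUDING DEFAULTS,
    CHECK (sample_time >= '2012-03-01' AND sample_time < '2012-04-01')
);

-- 2. bulk-load the month's rows in one transaction
COPY sample_2012_03 FROM '/data/logger/2012-03.csv' WITH CSV;

-- 3. plain vacuum (NOT FULL) plus fresh planner statistics
VACUUM ANALYZE sample_2012_03;

-- 4. freeze: the partition will never change again, so autovacuum
--    never has to revisit it for wraparound protection
VACUUM FREEZE sample_2012_03;

-- 5. "update master partition": attach the child to the parent
ALTER TABLE sample_2012_03 INHERIT sample;
```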