Hi Tom, Greg,
Thanks for your helpful suggestions - switching the BIGINT to FLOAT
and fixing the postgresql.conf to better match my server configuration
gave me about 30% speedup on the queries.
Because my data insert order was almost never the
data retrieval order, I also got a
If you want to partition your huge data set by "time", and the data
isn't already ordered by "time" on disk, you could do this :
SET work_mem to something very large like 10GB, since you have 32GB of RAM
(check your shared_buffers etc. first), then:
CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time";
Indexes are on the partitions, my bad.
If you need to insert lots of data, it is faster to create the indexes
afterwards (and then you can also create them in parallel, since you have
lots of RAM and cores).
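Concretely, the reorder-then-index sequence could look something like the
following sketch. The partition and index names, and the date bounds, are
made up for illustration; only bigTable and the "time" column come from
this thread:

```sql
-- Rewrite the data ordered by "time" (needs plenty of work_mem and
-- enough free disk space for a second copy of the table)
CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time";

-- Route one month's rows into its partition; repeat per month
INSERT INTO bigTable_2009_08
SELECT * FROM tmp
WHERE "time" >= '2009-08-01' AND "time" < '2009-09-01';

-- Create indexes only after the bulk load; separate sessions can
-- build indexes on different partitions in parallel
CREATE INDEX bigTable_2009_08_time_idx ON bigTable_2009_08 ("time");
```

Bulk-loading into an unindexed partition avoids maintaining the index
row by row, which is usually much slower than building it once at the end.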
The explain plan looks like this:
explain SELECT * from bigTable where "time" >
On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:
> > We have a table that's > 2billion rows big and growing fast. We've setup
> > monthly partitions for it. Upon running the first of many select * from
> > bigTable insert into partition statements (330million rows per month) the
> > entire box eventually goes out to lunch.
Hi all;
We have a table that's > 2billion rows big and growing fast. We've setup
monthly partitions for it. Upon running the first of many select * from
bigTable insert into partition statements (330million rows per month) the
entire box eventually goes out to lunch.
Any thoughts/suggestions?
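(For context, the per-month statements referred to above would presumably
be of roughly this shape; the partition name and date bounds here are
guesses, not taken from the original post:

```sql
INSERT INTO bigTable_2009_08
SELECT * FROM bigTable
WHERE "time" >= '2009-08-01' AND "time" < '2009-09-01';
```

Each such statement scans the parent table, so running many of them
back to back is a heavy workload.)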