On Sun, 22 Feb 2004, Sean Shanny wrote:

> Tom,
>
> We have the following setting for random page cost:
>
> random_page_cost = 1    # units are one sequential page fetch cost
>
> Any suggestions on what to bump it up to?
>
> We are waiting to hear back from Apple on the speed issues; so far we
> are not impressed with how the hardware is helping in the I/O department.
> Our DB is about 263GB with indexes now, so there is no way it is going
> to fit into memory. :-( I have taken the step of breaking out the data
> into month-based groups just to keep the table sizes down. Our current
> month's table has around 72 million rows in it as of today. The joys of
> building a data warehouse and trying to make it as fast as possible.
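For experimenting with this setting, a minimal sketch (the table and index names below are hypothetical, and 4 is simply PostgreSQL's long-standing default, not a recommendation for this hardware):

```sql
-- Try a higher value for the current session only; tune from
-- measured random vs. sequential I/O cost, not guesses.
SET random_page_cost = 4;

-- Compare EXPLAIN ANALYZE output before and after the change
-- (fact_2004_02 is a hypothetical monthly fact table):
EXPLAIN ANALYZE
SELECT count(*) FROM fact_2004_02 WHERE user_id = 12345;

-- Once a good value is found, persist it in postgresql.conf:
--   random_page_cost = 4
```

Raising the value makes the planner favor sequential scans over index scans; lowering it does the opposite, which is why 1 can mislead the planner on a disk-bound 263GB warehouse.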
You may be able to achieve similar benefits with a clustered index. See CLUSTER:

\h cluster
Command:     CLUSTER
Description: cluster a table according to an index
Syntax:
CLUSTER indexname ON tablename
CLUSTER tablename
CLUSTER

I've found this can greatly increase speed, but on 263 gigs of data I'd run it when you have a couple of days free. You might want to test it first on a smaller data set you can afford to chew up some I/O and CPU time on over a weekend.
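As a sketch of the suggestion above (table and index names are hypothetical; this uses the CLUSTER syntax shown in the \h output):

```sql
-- Physically reorder the monthly table to match its most-queried
-- index. CLUSTER takes an exclusive lock and rewrites the table,
-- so on a table this size it is effectively an offline operation.
CLUSTER idx_fact_2004_02_user_id ON fact_2004_02;

-- After the first run, a bare CLUSTER on the table re-clusters it
-- using the previously recorded index:
CLUSTER fact_2004_02;

-- Refresh planner statistics once the rewrite finishes:
ANALYZE fact_2004_02;
```

The win comes from turning scattered index lookups into mostly sequential reads when queries filter on the clustered column, which is exactly where a random_page_cost of 1 would otherwise overstate the disk's abilities.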