Re: [GENERAL] Large DB

2004-04-06 Thread Ericson Smith
I've been following this thread with interest since it started, and it really seems that there is just too much data in that single table. When it comes down to it, making smaller separate tables seems to be the way to go. Querying will be a little harder, but much faster. Warmest regards, Ericson
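
A minimal sketch of the "smaller separate tables" idea for a time-series workload, assuming one table per month (all table and column names here are hypothetical, not taken from the original poster's schema):

    -- Hypothetical layout: one table per month instead of a single huge table.
    CREATE TABLE stats_2004_03 (
        host  text        NOT NULL,
        stamp timestamptz NOT NULL,
        value float8      NOT NULL
    );
    CREATE TABLE stats_2004_04 (
        host  text        NOT NULL,
        stamp timestamptz NOT NULL,
        value float8      NOT NULL
    );

    -- Each month's index stays small, so index scans and VACUUM stay fast.
    CREATE INDEX stats_2004_03_stamp_idx ON stats_2004_03 (stamp);
    CREATE INDEX stats_2004_04_stamp_idx ON stats_2004_04 (stamp);

    -- A UNION ALL view keeps cross-month queries convenient.
    CREATE VIEW stats AS
        SELECT * FROM stats_2004_03
        UNION ALL
        SELECT * FROM stats_2004_04;

Queries that touch only one month go straight to the small table, and old months can be removed with DROP TABLE instead of a long-running DELETE followed by VACUUM.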

Re: [GENERAL] Large DB

2004-04-05 Thread Manfred Koizar
On Sat, 03 Apr 2004 22:39:31 -0800, "Mooney, Ryan" <[EMAIL PROTECTED]> wrote: >Ok, so I ran a vacuum analyse. It took ~1.7 days to finish. Just to make it clear: VACUUM and ANALYSE are two different commands. VACUUM is for cleaning up. It has to visit every tuple in every page, and if there
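
To make the distinction concrete (the table name is a placeholder):

    -- VACUUM reclaims space left by dead tuples; it has to scan the whole table.
    VACUUM bigtable;

    -- ANALYZE only samples the table to refresh planner statistics,
    -- so it is far cheaper and can be run much more often.
    ANALYZE bigtable;

    -- The combined form does both in one command.
    VACUUM ANALYZE bigtable;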

Re: [GENERAL] Large DB

2004-04-02 Thread Manfred Koizar
[time to move this to -hackers] On Fri, 02 Apr 2004 11:16:21 -0500, Tom Lane <[EMAIL PROTECTED]> wrote: >Manfred Koizar <[EMAIL PROTECTED]> writes: >> The first step, however, (acquire_sample_rows() in analyze.c) has to >> read more rows than finally end up in the sample. It visits less than >> O(nblocks) pages but certainly more than O(1).

Re: [GENERAL] Large DB

2004-04-02 Thread Tom Lane
Manfred Koizar <[EMAIL PROTECTED]> writes: > The first step, however, (acquire_sample_rows() in analyze.c) has to > read more rows than finally end up in the sample. It visits less than > O(nblocks) pages but certainly more than O(1). > A vague feeling tries to tell me that the number of page reads
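
One way to put numbers on "less than O(nblocks) but more than O(1)" is the standard block-access estimate (given here only as an illustration, not as what acquire_sample_rows() actually computes): picking r rows uniformly at random from a table of B pages touches on average about

    B * (1 - (1 - 1/B)^r)

distinct pages, which stays close to r while r is much smaller than B and only approaches B for very large samples, so the I/O cost tracks the sample size rather than the table size.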

Re: [GENERAL] Large DB

2004-04-02 Thread Manfred Koizar
On Thu, 01 Apr 2004 12:22:58 +0200, I wrote: >BTW, ANALYSE is basically a constant time operation. On closer inspection, this is not the whole truth. ANALY[SZ]E is a two-stage process: First it collects a sample of rows, then these rows are examined to produce various statistics. The cost of the
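
The size of that row sample is governed by the statistics target, so the cost of the first stage can be tuned per column; a sketch with placeholder table and column names (ANALYZE uses roughly 300 sample rows per unit of statistics target, so a target of 100 means a sample of about 30,000 rows):

    -- Raise the statistics target for one column, then refresh its statistics.
    ALTER TABLE bigtable ALTER COLUMN stamp SET STATISTICS 100;
    ANALYZE bigtable (stamp);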

Re: [GENERAL] Large DB

2004-03-31 Thread Manfred Koizar
On Tue, 30 Mar 2004 17:48:14 -0800, "Mooney, Ryan" <[EMAIL PROTECTED]> wrote: >I have a single table that just went over 234GB in size with about 290M+ >rows. That would mean ~ 800 bytes/row which, given your schema, is hard to believe unless there are lots of dead tuples lying around. >queries u
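
A quick way to check the bytes-per-row figure against the catalog (and so to see how much of the 234GB is dead space) is to use the counts that VACUUM/ANALYZE store in pg_class; the table name is a placeholder:

    -- relpages is measured in 8KB blocks; reltuples is the estimated live row count.
    -- Both are only as fresh as the last VACUUM or ANALYZE.
    SELECT relpages,
           reltuples,
           relpages * 8192.0 / NULLIF(reltuples, 0) AS bytes_per_row
    FROM pg_class
    WHERE relname = 'bigtable';

If bytes_per_row comes out far above what the declared columns can account for, most of the space is dead tuples, and VACUUM FULL or a dump/reload would shrink the table.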

Re: [GENERAL] Large DB

2004-03-30 Thread Ericson Smith
The issue here might be just organizing the data differently. Or getting an Opteron server with 16GB RAM :-) Based on the strength of the developers' recommendations in this newsgroup, we recently upgraded to a dual 2GHz Opteron with 16GB of RAM and 15K RPM hard drives. We set shared_buffers to 40,000
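
For reference, shared_buffers in this era is counted in 8KB buffers rather than bytes, so a setting of 40,000 is roughly 320MB of shared memory; it is set in postgresql.conf and takes effect only after a server restart. A quick check from psql:

    -- Returns the number of 8KB buffers (40000 here, i.e. about 320MB).
    SHOW shared_buffers;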

[GENERAL] Large DB

2004-03-30 Thread Mooney, Ryan
Hello, I have a single table that just went over 234GB in size with about 290M+ rows. I think that I'm starting to approach some limits, since things have gotten quite a bit slower over the last couple of days. The table is really simple and I'm mostly doing simple data mining queries like the query

RE: [GENERAL] LARGE db dump/restore for upgrade question

2001-08-14 Thread Andrew Snow
> Any suggestion on how to prepare for the next upgrade would be > appreciated. I think it has to be said that if you want decent performance on excessively large (50GB+) databases, you're going to need excessively good hardware to run them on. Buy a 3ware IDE RAID controller (www.hypermicro.

Re: [GENERAL] LARGE db dump/restore for upgrade question

2001-08-14 Thread Joseph Shraibman
Philip Crotwell wrote: > Hi > > I have a very large database of seismic data. It is about 27 GB now, and > growing at about the rate of 1 GB every 3-4 days. I am running Out of curiosity, how long does it take you to vacuum that? -- Joseph Shraibman [EMAIL PROTECTED] Increase signal to noise

[GENERAL] LARGE db dump/restore for upgrade question

2001-08-14 Thread Philip Crotwell
Hi I have a very large database of seismic data. It is about 27 Gb now, and growing at about the rate of 1 Gb every 3-4 days. I am running postgres 7.1.2. I might possibly try to upgrade to 7.2 when it comes out, but I don't know if it will be possible for me to do 7.3 due to the pg_dump/pg_rest