Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-19 Thread Bucky Jordan
Mike, On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). I'll keep that in mind (minimizing

[PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Bucky Jordan
Yes. What's pretty large? We've had to redefine large recently; now we're talking about systems with between 100TB and 1,000TB. - Luke Well, I said large, not gargantuan :) - Largest would probably be around a few TB, but the problem I'm having to deal with at the moment is large numbers

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Merlin Moncure
On 9/18/06, Bucky Jordan [EMAIL PROTECTED] wrote: My question is at what point do I have to get fancy with those big tables? From your presentation, it looks like PG can handle 1.2 billion records or so as long as you write intelligent queries. (And normal PG should be able to handle that,

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alan Hodgson
On Monday 18 September 2006 13:56, Merlin Moncure [EMAIL PROTECTED] wrote: just another fyi, if you have a really big database, you can forget about doing pg_dump for backups (unless you really don't care about being x day or days behind)...you simply have to do some type of
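One common alternative at that scale is PostgreSQL's online base backup plus WAL archiving (PITR). A minimal sketch, assuming archive_command is already set in postgresql.conf; the label is arbitrary:

    -- start an online base backup, then copy $PGDATA with filesystem tools
    SELECT pg_start_backup('nightly_base');
    -- ... rsync/tar the data directory while the server keeps running ...
    SELECT pg_stop_backup();  -- the archived WAL since the start makes the copy consistent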

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Bucky Jordan
good normalization skills are really important for large databases, along with materialization strategies for 'denormalized sets'. Good points - thanks. I'm especially curious what others have done for the materialization. The matview project on gborg appears dead, and I've only found a
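Absent a maintained matview project, the usual hand-rolled materialization is a plain summary table rebuilt on a schedule. A minimal sketch, with hypothetical table and column names (orders, customer_id, amount):

    -- hand-rolled "materialized view": an ordinary summary table
    CREATE TABLE order_totals_mv AS
      SELECT customer_id, sum(amount) AS total_amount
      FROM orders
      GROUP BY customer_id;

    -- simplest refresh strategy: rebuild inside one transaction
    -- (TRUNCATE takes an exclusive lock, so readers block until COMMIT)
    BEGIN;
    TRUNCATE order_totals_mv;
    INSERT INTO order_totals_mv
      SELECT customer_id, sum(amount) AS total_amount
      FROM orders
      GROUP BY customer_id;
    COMMIT;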

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alex Turner
Do the basic math: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you have to do a full table scan, then it will take roughly 400 seconds with a single 10k RPM
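The arithmetic checks out as a back-of-the-envelope estimate; note the ~50 MB/s sequential read rate below is an assumption implied by the 400-second figure, not something stated in the post:

    -- 100 million rows x 200 bytes, scanned sequentially
    SELECT (100000000::bigint * 200) / 1024.0^3          AS table_size_gb,   -- ~18.6, i.e. "roughly 20 gig"
           (100000000::bigint * 200) / (50 * 1024.0^2)   AS scan_seconds;    -- ~381 at an assumed 50 MB/s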

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Michael Stone
On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you're in that range it doesn't even count

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Luke Lonergan
Bucky, On 9/18/06 7:37 AM, Bucky Jordan [EMAIL PROTECTED] wrote: My question is at what point do I have to get fancy with those big tables? From your presentation, it looks like PG can handle 1.2 billion records or so as long as you write intelligent queries. (And normal PG should be able to
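Not necessarily what this reply recommends, but one common way to "get fancy" with billion-row tables in that era was inheritance-based partitioning with constraint exclusion (PostgreSQL 8.1+). A sketch with hypothetical table and column names:

    -- parent table plus one range-partitioned child
    CREATE TABLE events (id bigint, created date, payload text);
    CREATE TABLE events_2006_09 (
      CHECK (created >= '2006-09-01' AND created < '2006-10-01')
    ) INHERITS (events);

    -- with constraint exclusion on, a query for one day only scans the matching child
    SET constraint_exclusion = on;
    SELECT count(*) FROM events
    WHERE created >= '2006-09-15' AND created < '2006-09-16';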

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-18 Thread Alex Turner
Sweet - that's good - RAID 10 support seems like an odd thing to leave out. Alex On 9/18/06, Luke Lonergan [EMAIL PROTECTED] wrote: Alex, On 9/18/06 4:14 PM, Alex Turner [EMAIL PROTECTED] wrote: Be warned, the tech specs page: http://www.sun.com/servers/x64/x4500/specs.xml#anchor3 doesn't mention