Re: [PERFORM] large table vs multiple small tables

2005-07-14 Thread Kenneth Marshall
Nicolas,

These sizes would not be considered large. I would leave them
as single tables.
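
For the read-heavy case you describe, a single table with an index on the
column you filter by will generally beat any manual split into 2000-row
tables, which mostly adds planning and bookkeeping overhead. A rough sketch
(the table and column names below are invented, substitute your own schema):

  CREATE TABLE observations (
      id        serial PRIMARY KEY,
      marker_id integer NOT NULL,  -- whatever column you look rows up by
      value     numeric
  );

  -- one index takes the place of the whole "many small tables" idea; the
  -- planner uses it automatically for WHERE marker_id = ... queries
  CREATE INDEX observations_marker_id_idx ON observations (marker_id);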

Ken

On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:
> Hello
> 
> I have a large database with 4 large tables (each containing at least
> 200,000 rows, perhaps even 1 or 2 million), and I wonder whether it would
> be better to split them into small tables (e.g. tables of 2,000 rows) to
> speed up access to and updates of those tables (considering that I will
> have few updates but a lot of reads).
> 
> Do you think it would be efficient?
> 
> Nicolas, wondering if he hasn't been too greedy
> 
> -- 
> 
> -
> « be what you would like to seem to be » Lewis Carroll
> 
> 



Re: [PERFORM] large table vs multiple small tables

2005-07-14 Thread Jim C. Nasby
On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:
> Hello
> 
> I have a large database with 4 large tables (each containing at least
> 200,000 rows, perhaps even 1 or 2 million), and I wonder whether it would
> be better to split them into small tables (e.g. tables of 2,000 rows) to
> speed up access to and updates of those tables (considering that I will
> have few updates but a lot of reads).

2 million rows is nothing unless you're on a 486 or something. As for
your other question, remember the first rule of performance tuning:
don't tune unless you actually need to.
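
If you do suspect a problem later, measure it first with EXPLAIN ANALYZE on
one of your real queries (the table and column below are just placeholders):

  EXPLAIN ANALYZE
  SELECT * FROM your_table WHERE your_column = 42;

If that reports an index scan finishing in milliseconds, there is nothing to
fix; if it reports a slow sequential scan, adding an index is the first thing
to try, long before splitting the table.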
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"



[PERFORM] large table vs multiple small tables

2005-07-13 Thread Nicolas Beaume

Hello

I have a large database with 4 large tables (each containing at least
200,000 rows, perhaps even 1 or 2 million), and I wonder whether it would
be better to split them into small tables (e.g. tables of 2,000 rows) to
speed up access to and updates of those tables (considering that I will
have few updates but a lot of reads).

Do you think it would be efficient?

Nicolas, wondering if he hasn't been too greedy

--

-
« be what you would like to seem to be » Lewis Carroll

