[PERFORM] queries on huge tables

2005-03-17 Thread Lending, Rune

Hello all.

I have a couple of tables with a couple of hundred million records in them. The tables contain a timestamp column.
I am almost always interested in getting data from a specific day or month. Each day contains approx. 400,000 entries.

When I do queries such as "select ... from archive where m_date between '2005-01-01' and '2005-02-01' group by ..." and so on, it takes very long.
I have indexes that kick in, but it still takes quite some time.
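
Concretely, a minimal sketch of the kind of layout and query I mean (names simplified; "payload" just stands in for the real columns):

    CREATE TABLE archive (
        m_date  timestamp NOT NULL,
        payload text                 -- stand-in for the real columns
    );
    CREATE INDEX archive_m_date_idx ON archive (m_date);

    -- typical query: the index on m_date kicks in, but one month is
    -- still roughly 12 million rows to scan and aggregate
    EXPLAIN ANALYZE
    SELECT date_trunc('day', m_date) AS day, count(*)
    FROM archive
    WHERE m_date BETWEEN '2005-01-01' AND '2005-02-01'
    GROUP BY date_trunc('day', m_date);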

I have split the archive table into smaller monthly tables; queries then go a lot faster, but still not fast enough.
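
Roughly what the monthly split looks like on my side (a simplified sketch; the real names and columns differ):

    -- one table per month, same layout and index as the big table
    CREATE TABLE archive_2005_01 (
        m_date  timestamp NOT NULL,
        payload text
    );
    CREATE INDEX archive_2005_01_m_date_idx ON archive_2005_01 (m_date);

    -- a one-month query then only touches ~12 million rows instead of
    -- a couple of hundred million
    SELECT date_trunc('day', m_date) AS day, count(*)
    FROM archive_2005_01
    GROUP BY date_trunc('day', m_date);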

I know of similar systems that use Oracle and gain a lot of performance from its partitioning. That annoys me a bit :)

Does anyone have some good ideas on how to speed up such queries on huge tables?

regards
rune






[PERFORM] pg_autovacuum parameters

2004-08-03 Thread Lending, Rune

Hello all.

I am managing a large database with lots of transactions in different tables.
The largest tables have around 5-6 million tuples and see around 5-6 inserts and maybe 2 updates per day, while the smallest tables have only a few tuples and a few updates/inserts per day. In addition we have small tables with many updates/inserts. So what I am saying is that there are all kinds of tables and usage patterns in our database.
This, I think, makes it difficult to set up pg_autovacuum. I am currently running vacuum jobs on different tables from cron.
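
The cron jobs just run plain VACUUM/ANALYZE statements at frequencies matched to each table (the table names here are made up for illustration):

    -- nightly, for the big tables that see few writes
    VACUUM ANALYZE big_archive_table;

    -- every hour or so, for the small heavily-updated tables
    VACUUM ANALYZE hot_counter_table;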

What things should I consider when setting the base and threshold values in pg_autovacuum? Since the decision to run vacuum and analyze is relative to the table size, as it must be, I think it is difficult to cover all tables with one set of values.
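
For reference, my understanding of how pg_autovacuum decides when to act (based on the contrib README; the exact flag names and defaults may differ between versions, so treat this as a sketch):

    vacuum threshold  = vacuum base value  + vacuum scale factor  * number of tuples
    analyze threshold = analyze base value + analyze scale factor * number of tuples

So with, say, a vacuum base value of 1000 and a scale factor of 0.5, a 6-million-tuple table is only vacuumed after roughly 1000 + 0.5 * 6,000,000 = 3,000,001 updates/deletes, while a 100-tuple table is vacuumed after just 1,050. The same pair of values behaves very differently at the two ends, which is exactly my problem.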

Does anyone have some thoughts on this?

Regards
Rune