I've got the enviable task of redesigning a MySQL-based project from scratch. 
We need a proper RDBMS for this project, and I want to use PG 8.2.

The table I'm concerned with at the moment currently has 5 million rows, 
with a churn of about 300,000 rows a week. The table gets about a million hits 
a day, which makes it the main potential bottleneck in this database.

We need to store some large (0 to 100 kB) data with each row. Would you 
recommend adding it as columns in this table, given that blobs will be stored 
in the pg_largeobject table anyway, or would you recommend a daughter table 
for this?
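For concreteness, the daughter-table option I have in mind would look roughly 
like this (all table and column names here are just placeholders, not the real 
schema):

```sql
-- Sketch: keep the hot, frequently-scanned table narrow, and move the
-- large payload into a daughter table joined on the primary key.
CREATE TABLE main_table (
    id          bigserial PRIMARY KEY,
    -- ... the frequently hit columns go here ...
    updated_at  timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE main_table_payload (
    id      bigint PRIMARY KEY
            REFERENCES main_table (id) ON DELETE CASCADE,
    payload bytea   -- the 0-100 kB blob; TOAST should move large
                    -- values out of line automatically
);
```

My understanding is that bytea values over the TOAST threshold get stored out 
of line anyway, so I'm not sure how much the daughter table buys over extra 
columns; corrections welcome.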

Any other suggestions on how to avoid performance problems with this table? 
(Hardware is dual Xeon, 4 GB RAM, 2 hardware RAID channels for storage plus 1 
for logs, all running 32-bit Debian.)

Cheers,

Steve

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
