Hi,

On 13.01.2010 09:16, Yan Cheng Cheok wrote:
I realize that READ performance goes down dramatically when my table grows large.
With every new day, my table can gain x million new rows.

I was wondering whether it is good practice to design my database in this
way.

Instead of having

lot <-> unit <-> measurement

Can I have

lot-March-2010 <-> unit-March-2010 <-> measurement-March-2010
lot-April-2010 <-> unit-April-2010 <-> measurement-April-2010

(1) That means that in my stored procedure I need to generate the table name
dynamically. Is the "dynamic SQL" described here the correct way to
dynamically generate a table name:
http://www.postgresql.org/docs/8.1/interactive/ecpg-dynamic.html
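
(Note: the linked page covers ECPG, i.e. dynamic SQL in embedded C. Inside a
server-side function the usual tool is PL/pgSQL's EXECUTE. A minimal sketch,
where the function name, table name and suffix format are only assumptions
for illustration:

-- hypothetical example: count rows in a per-month measurement table whose
-- name is assembled at run time; quote_ident() keeps the generated
-- identifier safe if the suffix ever comes from outside
CREATE OR REPLACE FUNCTION count_measurements(p_suffix text)
RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    EXECUTE 'SELECT count(*) FROM '
            || quote_ident('measurement_' || p_suffix)
    INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT count_measurements('2010_03');
)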

(2) Is this considered a good approach to overcome the speed problem
(especially read speed)? Are there any potential problems I should keep an
eye on before I implement this strategy?

You might combine this approach with table partitioning to give you a
cleaner view of your data:

http://www.postgresql.org/docs/current/static/ddl-partitioning.html

In your situation it would probably make sense to put the actual
partitions into a separate schema to keep your main work area free
of clutter.
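
A minimal sketch of that layout, using the inheritance-based partitioning
the chapter above describes; every schema, table and column name here is
just an assumption for illustration:

CREATE SCHEMA partitions;

CREATE TABLE measurement (
    measurement_id bigserial,
    unit_id        bigint NOT NULL,
    measured_at    timestamp NOT NULL,
    reading        double precision
);

-- one child table per month, kept out of the main schema
CREATE TABLE partitions.measurement_2010_03 (
    CHECK (measured_at >= DATE '2010-03-01'
       AND measured_at <  DATE '2010-04-01')
) INHERITS (measurement);

CREATE TABLE partitions.measurement_2010_04 (
    CHECK (measured_at >= DATE '2010-04-01'
       AND measured_at <  DATE '2010-05-01')
) INHERITS (measurement);

-- with constraint exclusion enabled, a query on the parent that filters
-- on measured_at only scans the matching child tables
SET constraint_exclusion = on;
SELECT count(*) FROM measurement
WHERE measured_at >= DATE '2010-03-01'
  AND measured_at <  DATE '2010-04-01';

You would still need a trigger or rule on the parent table to route new
rows into the right child table; the partitioning chapter walks through
that part.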

HTH
Tino
