On 03/28/2016 02:41 PM, Mat Arye wrote:
Hi All,

I am writing a program with a mostly-insert, time-series workload. I need to make the system scalable to many thousands of inserts/s. One of the techniques I plan to use is time-based table partitioning, and I am trying to figure out how large to make my time tables.
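To make the question concrete, this is roughly the layout I have in mind (plain inheritance-based partitioning with CHECK constraints, one child table per time range; table and column names are just placeholders):

    CREATE TABLE metrics (
        ts      timestamptz NOT NULL,
        device  int         NOT NULL,
        value   double precision
    );

    -- one child table per day; the CHECK constraint lets the planner
    -- exclude partitions that cannot match a query's time range
    CREATE TABLE metrics_2016_03_28 (
        CHECK (ts >= '2016-03-28' AND ts < '2016-03-29')
    ) INHERITS (metrics);

    CREATE INDEX ON metrics_2016_03_28 (ts);

    -- inserts go straight into the child table for that day
    INSERT INTO metrics_2016_03_28 VALUES (now(), 1, 0.5);

So the real question is how wide to make each child table's time range.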

Does anybody have any hints on optimal table sizes, either in terms of rows or in terms of bytes? Any rules of thumb for table size in relation to the amount of memory on the machine? Is the size of the index more important than the size of the table (if queries mostly use indexes)?

Basically, I am asking for pointers about how to think about this problem and any experiences people have had.

Thanks,
Mat

P.S. I am aware of the limits listed here: http://www.postgresql.org/about/. I am asking about practical size limits for performance considerations.
Your current hardware, or hardware budget, might play into the answer.


