I've recently enabled statistics gathering on our large (> 50 GB) PostgreSQL database because of a significant recent decrease in performance, which I believe is related to changes in the application running on top of it. I'm now trying to make sense of these statistics as part of my investigation. Are there any rules of thumb for interpreting them? For example, the ratio of idx_scan to idx_tup_fetch in pg_stat_user_tables, or of heap_blks_read to heap_blks_hit in pg_statio_user_tables.
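For instance, here is the sort of query I've been running to get a per-table buffer cache hit ratio out of pg_statio_user_tables (just a sketch; the nullif is there to avoid dividing by zero on tables with no block activity yet):

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read + heap_blks_hit DESC
    LIMIT 20;

My naive assumption is that a hit_ratio well below about 0.99 on a frequently used table points at cache pressure, but I have no idea whether that holds up in practice, which is really what I'm asking.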
I'm assuming that the answer will be "Well, it really depends on your database/application", but if anybody has any quick rules of thumb, I'd love to hear them.
I've already found plenty of general advice on the web for optimizing Postgres databases, so I'm not looking for that kind of info, just guidance on reading these statistics.
Thanks in advance.
Ryan