We have a fresh database and have begun to see INSERT performance degrade as a table grew from empty to roughly 100,000 rows. Initial INSERTs were sub-second; after about 30k rows they were taking 1-3 seconds.
Note that we have done NO performance tuning yet, nor are there any indexes on the tables. My question is about the way Postgres manages the organizational structure of the base table data.

From my Ingres background, I know that that product lets you set the structure of the base data to any of its supported types (hash, heap, btree, etc.), and my experience tells me this can greatly affect INSERT performance and must be properly managed. For example, you have to consider the allocation of data and overflow pages for hash structures, and the page fill factors for btrees to avoid excessive splits and rebalancing of index pages (see the rough Ingres sketch at the end of this message). Is there anything like this in Postgres?

I realize these issues must also be considered for alternate indexes, but currently we don't have any. I'm sure eventually we will, and then we'll need to consider the overhead indexes add to INSERTs. Right now, though, we are basically just INSERTing into rather empty tables. Is there anything that can be done when defining a table, or when managing it after the fact, to ensure it is ready for INSERTs? Aside from alternate indexes, how is the base data for a table stored and structured?

While much is written about shared cache, buffers, and spreading tables across disks, my thinking is that these have a much bigger impact on SELECTs (or UPDATEs) than on INSERTs. So what can be done to optimize this? I'm also aware of fsync, but I don't wish to go there until I know we're not missing something simpler.

Any feedback is very much appreciated.
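P.S. For anyone not familiar with Ingres, the kind of statement I have in mind is roughly the following (written from memory; the table and column names are made up purely for illustration):

    -- Reorganize the base table as a hash structure keyed on order_id,
    -- pre-allocating pages so new rows don't immediately land in overflow pages.
    MODIFY orders TO HASH ON order_id
        WITH MINPAGES = 2048, ALLOCATION = 4096;

    -- Or keep the base table as a btree on the key, leaving ~30% free space
    -- per page so inserts cause fewer page splits and rebalances.
    MODIFY orders TO BTREE ON order_id
        WITH FILLFACTOR = 70;

What I'm asking is whether Postgres exposes anything comparable for the base table itself, or whether the heap simply grows as rows arrive.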