Hi,

I am getting database shutdowns because we are hitting the transaction ID 
wraparound safety limit (the server stops accepting commands when fewer than 
1,000,000 transactions remain before wraparound).

Here is the story...
We recently inherited a system in which a PostgreSQL 8.1.18 server hosts 700+ 
databases and growing.  Essentially it is horizontal partitioning per user, 
but on a single physical server.  Each database averages about 250 MB.  A 
VACUUM FULL ANALYZE takes about 30 seconds per database.  The web interface 
performs hundreds of thousands of transactions per request (99% read), and we 
have up to 200 concurrent connections.  We seem to reach the transaction ID 
limit faster than we can vacuum.
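For context, a query along these lines (just a sketch using the standard 
age(datfrozenxid) check, run as a superuser) should show how close each 
database is to the limit:

    -- highest xid_age = closest to the wraparound safety shutdown
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;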

Any suggestions on how to deal with this issue (without re-architecting, if 
possible)?

Autovacuum parameters...
vacuum_cost_delay = 10                  # 0-1000 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
vacuum_cost_limit = 200                 # 0-10000 credits
autovacuum = on                         # enable autovacuum subprocess?
autovacuum_naptime = 8642               # time between autovacuum runs, in secs
#autovacuum_vacuum_threshold = 1000     # min # of tuple updates before vacuum
#autovacuum_analyze_threshold = 500     # min # of tuple updates before analyze
autovacuum_vacuum_scale_factor = 0.4    # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before analyze
autovacuum_vacuum_cost_delay = -1       # default vacuum cost delay for
                                        # autovac, -1 means use
                                        # vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1       # default vacuum cost limit for
                                        # autovac, -1 means use
                                        # vacuum_cost_limit

Thank you,
Tom
