Hi,

     I am in the process of cleaning up one of our big tables. This table
has 187 million records, and we need to delete around 100 million of them.

     I am deleting around 4-5 million of them daily in order to keep up
with vacuum and also with the archive log space. So far I have deleted
around 15 million in the past few days.
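
     Each daily batch is roughly along these lines (the table and column
names below are placeholders, not the real schema):

     -- one daily slice of the purge (names here are illustrative only)
     BEGIN;
     DELETE FROM big_table
      WHERE purge_date < '2005-01-01'            -- condition marking rows to remove
        AND id BETWEEN 100000001 AND 105000000;  -- cap the batch at roughly 5 million rows
     COMMIT;
     -- the nightly vacuumdb run then has to reclaim those dead tuples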

     max_fsm_pages is set to 1200000. vacuumdb runs once daily; here is
the output from last night's vacuum job:

    
=======================================================================================
     INFO:  free space map: 999 relations, 798572 pages stored; 755424 total pages needed
     DETAIL:  Allocated FSM size: 1000 relations + 1200000 pages = 7096 kB shared memory.
     VACUUM
=======================================================================================
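
     (For reference, these are the FSM limits the server is running with,
checked from psql; they match the allocation reported above:)

     -- confirm the free space map limits the vacuum output is reporting against
     SHOW max_fsm_pages;       -- 1200000
     SHOW max_fsm_relations;   -- 1000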

     The output says 755424 total pages needed. This number keeps growing
daily even though vacuums run daily; it was around 350K pages before the
delete process started.

     I am afraid that this number will soon reach the max_fsm_pages limit,
and vacuums thereafter will never catch up.

     Can anyone please explain this behavior? What should I do so that the
daily vacuumdb runs can keep up?

      Postgres Version: 8.0.2
      Backup Mode: PITR
     

Thanks!
Pallav
