Tom Lane wrote:
> Janardhan <[EMAIL PROTECTED]> writes:
>> Does it break any other things if all the index entries pointing to the dead tuple are removed before reusing the dead tuple?
>
> Possibly you could make that work, but I think you'll find the efficiency advantage you were chasing to be totally gone. The locking scheme is heavily biased against you, and the index AMs don't offer an API designed for efficient retail index-tuple deletion.
>
> Of course that just says that you're swimming against the tide of previous optimization efforts. But the thing you need to face up to is you are taking what had been background maintenance tasks (viz, VACUUM) and moving them into the foreground critical path. This *will* slow down your foreground applications.
>
> regards, tom lane

Today I was able to complete the patch, and it is working only for B-tree. I have added a new method, am_delete, to the index AM API and bt_delete to the B-tree code to delete a single entry. For the time being this works only with B-tree indexes.
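Just to make the shape of the change concrete, a rough illustration of the kind of entry point I mean is below; the names am_delete and bt_delete are the ones in my patch, but the exact signature shown here is only a sketch, not the actual code:

    /*
     * Illustrative prototype only -- the real patch may differ.
     * Remove the single index entry whose key matches values/isnull
     * and whose heap pointer equals heapTid.  Returns true if an
     * entry was found and removed.
     */
    bool bt_delete(Relation indexRel,
                   Datum *values, bool *isnull,
                   ItemPointer heapTid);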
Regarding the complexity of deleting a tuple from a B-tree: it is the same as, or less than, that of inserting a tuple into a B-tree (since a delete does not require splitting a page). The approach is slightly different from that of lazy vacuum. Lazy vacuum scans the entire index to delete the dead entries; here we search for the particular entry, similar to an insert.
Here locking may not have much impact: only a single buffer is locked to delete the index entry.
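In rough pseudocode, the leaf-level step would look something like the sketch below. The helper names are the existing nbtree/bufpage routines, but the flow is simplified, the signatures may not match the tree exactly, and WAL logging and marking the buffer dirty are omitted; this is not the actual patch code.

    #include "postgres.h"
    #include "access/nbtree.h"      /* _bt_getbuf, _bt_relbuf, BT_WRITE */
    #include "storage/bufpage.h"    /* PageIndexTupleDelete */

    /*
     * Simplified sketch: assumes we have already descended from the root
     * to the correct leaf page exactly as an insert would.
     */
    static void
    delete_leaf_entry(Relation indexRel, BlockNumber leafBlkno,
                      ItemPointer heapTid)
    {
        Buffer       buf = _bt_getbuf(indexRel, leafBlkno, BT_WRITE); /* one buffer locked */
        Page         page = BufferGetPage(buf);
        OffsetNumber off;

        for (off = FirstOffsetNumber; off <= PageGetMaxOffsetNumber(page); off++)
        {
            IndexTuple  itup = (IndexTuple) PageGetItem(page,
                                                        PageGetItemId(page, off));

            if (ItemPointerEquals(&itup->t_tid, heapTid))
            {
                PageIndexTupleDelete(page, off);    /* remove just this entry */
                break;
            }
        }
        _bt_relbuf(indexRel, buf);                  /* release the single lock */
    }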
Regarding efficiency: if the entire index is already in the buffer cache, then no additional I/O is required; only extra CPU is needed to delete the entries from the index.
I am using Postgres in an application where there are heavy updates to a group of small tables before a single record is inserted into a huge table; this whole sequence constitutes a single transaction. Currently, as time goes on, the transaction processing speed decreases until the database is vacuumed.
With this new patch I am hoping the transaction processing time will remain constant over time. I would only need to vacuum after deleting a large number of entries from some of the tables.
regards, jana