In a previous thread, I noted the performance degradation brought about by
excessive I/Os when scanning unexpected.tdb.  Over time, the continual adding
and deleting of records by nmbd in this file makes sequential scanning of the
records extremely inefficient.  At first, I thought the problem was just the
index tree becoming fragmented, but further study shows that it is the deleted
data records themselves causing the problem.

As far as I can tell, there is no way to clean up the data area on an open
file.  You have to convert/reclaim the file (which requires exclusive access).
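To make the convert/reclaim step concrete, here is a toy sketch in Python (not tdb's C API; the file format, names, and helpers here are all hypothetical) of a tombstone-style store where deletions leave dead space behind, and where the only cleanup is copying the live records to a fresh file and renaming it into place, which is why exclusive access is needed:

```python
import os

# Hypothetical append-only record file, one "key\tvalue" line per record.
# A deletion appends a tombstone rather than reclaiming space, loosely
# modelling how deleted tdb records still occupy the data area.
TOMBSTONE = "__DEAD__"

def append(path, key, value):
    """Append a record (or a tombstone) to the store."""
    with open(path, "a") as f:
        f.write(f"{key}\t{value}\n")

def scan(path):
    """Sequential scan over every record, dead or alive: later
    entries win, and tombstones delete earlier ones.  Cost grows
    with the total record count, not the live record count."""
    live = {}
    with open(path) as f:
        for line in f:
            key, value = line.rstrip("\n").split("\t", 1)
            if value == TOMBSTONE:
                live.pop(key, None)
            else:
                live[key] = value
    return live

def compact(path):
    """Offline reclaim: rewrite only the live records into a new
    file, then rename it over the original.  No reader or writer
    may hold the old file open, hence the exclusive access."""
    live = scan(path)
    tmp = path + ".new"
    with open(tmp, "w") as f:
        for key, value in live.items():
            f.write(f"{key}\t{value}\n")
    os.replace(tmp, path)
```

The point of the sketch is the shape of the fix, not the format: after many add/delete cycles `scan()` pays for every tombstone ever written, and only `compact()`, run with the file closed to everyone else, shrinks the data area back down.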

------------------------------------------------------------------------------
David L. Jones               |      Phone:    (614) 292-6929
Ohio State University        |      Internet:
140 W. 19th St. Rm. 231a     |               [EMAIL PROTECTED]
Columbus, OH 43210           |               [EMAIL PROTECTED]

Disclaimer: I'm looking for marbles all day long.
