Hi,

This message is both a problem report and a potential fix for that problem.
The problem has been reported in Bugzilla as ID=1028:
URL: https://wwws.clamav.net/bugzilla/show_bug.cgi?id=1028

Details:

Because of the large amount of memory needed to hold the database resident, when a reload (scheduled or manual) occurs while the reference count is not zero, the existing object is left floating among the active worker threads until the last worker thread sees the reference count drop to zero, whereupon the object is freed. Although the semantics are correct, this causes severe memory fragmentation, which shows up as a doubling of resident memory. For example, on an i686 server, RSS was initially 40m after the first load, but as soon as a reload happened it doubled to 80m, then 120m, and so on. With the existing logic in 0.93, none of the workers will ever see the reference count reach 0, because the thread manager has already increased the reference count by one more via cl_dup() before cl_reload() is called for the actual reload.

The strategy is to force all worker threads to stop, wait for them (with a barrier, to make sure cl_free() is actually called), and then reload. All of this happens within 2 seconds, so there is minimal impact on real-time responsiveness. In addition, no mail is missed in the process. I have also made this behavior dependent on the macro OPTIMIZE_MEMORY_FOOTPRINT in thrmgr.h, so that people who have tons of memory to spare can enjoy watching it dissipate. The fix is not perfect; memory residency will still creep up, but very slowly.

Please comment and verify.

The problems described above are fixed (and tested) against 0.93 as well as 0.93-20080515. I have also made some changes to make the threads more robust against unexpected cancellation (whether by design or due to bugs). The change is tested on both i686 and x86-64.

The above text and patch are also available at: http://fossof.88pobox.com/wp/?p=3

Regards,
Lee