A NOTE has been added to this issue. 
====================================================================== 
http://www.dbmail.org/mantis/view.php?id=462 
====================================================================== 
Reported By:                fehuq
Assigned To:                
====================================================================== 
Project:                    DBMail
Issue ID:                   462
Category:                   IMAP daemon
Reproducibility:            always
Severity:                   major
Priority:                   normal
Status:                     new
target:                      
====================================================================== 
Date Submitted:             29-Nov-06 15:13 CET
Last Modified:              19-Dec-06 16:00 CET
====================================================================== 
Summary:                    imap daemon appears to leak memory on FETCH command
Description: 
I delivered several thousand messages to an account on a test server and
set up an IMAP account in Thunderbird to access it. Thunderbird ran its
junk mail controls on the folder, which issued a FETCH command for each
message. I noticed later that the dbmail-imapd child process handling my
connection had used almost all of the system's free memory and swap. 

As best I can tell, the process usually doesn't free the memory used after
a message is fetched. I'm attaching a text file containing top's output for
the process at 3-second intervals. The memory usage does drop a bit in some
of the intervals, but overall it grows quite large fairly quickly. 

Killing the child process, or having dbmail restart it after one
connection, does free the memory, though. The POP server does not show
the same behavior.
======================================================================
Relationships       ID      Summary
----------------------------------------------------------------------
has duplicate       0000469 dbmail-imapd fetches all available memory
====================================================================== 

---------------------------------------------------------------------- 
 fehuq - 29-Nov-06 15:24  
---------------------------------------------------------------------- 
I should have mentioned that the attached top output is just a small
sample, covering about a minute and a half of a new process as I fetch
all the messages. It's not meant to show a process using all system
memory. 

---------------------------------------------------------------------- 
 aaron - 09-Dec-06 06:09  
---------------------------------------------------------------------- 
Using the fetches you've shown, running them through 'valgrind dbmail-imapd
-n' by hand against my testing database, I found no active leaks.

Maybe the leaks depend upon the messages themselves? (*ugh*) 

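(For reference, a leak check along these lines would usually be something like

  valgrind --leak-check=full dbmail-imapd -n

with the FETCH commands typed into the session by hand; the exact flags above
are an assumption, not necessarily what was used here.)
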
---------------------------------------------------------------------- 
 Valen - 09-Dec-06 08:45  
---------------------------------------------------------------------- 
I have a system that exhibits this symptom. I don't have the skill to run
the test, but dbmail can be built on the system (and has been in the past).
If an admin wants to log in and check it out, let me know. 

---------------------------------------------------------------------- 
 paul - 09-Dec-06 18:46  
---------------------------------------------------------------------- 
I've been busy running valgrind, and I can't find *any* significant leaks
either. But I am working on reducing memory usage in dbmail-message.c by
doing less copying, which may affect this bug when I merge that change. 

---------------------------------------------------------------------- 
 Valen - 10-Dec-06 01:40  
---------------------------------------------------------------------- 
Attached files with a memory graph from massif; this is the result of
copying 100 emails with imapcopy. 

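(For reference, a profile like this would typically come from running the
daemon under massif, e.g. something like

  valgrind --tool=massif dbmail-imapd -n

and then looking at the largest allocation contexts in the generated
massif.<pid>.txt; the exact invocation here is an assumption, not stated in
the report.)
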
---------------------------------------------------------------------- 
 Valen - 10-Dec-06 06:26  
---------------------------------------------------------------------- 
I may have found the causes of the problems, and I have debug turned on, so
it should be extra helpful.

Called from:
  41.8% : 0x4128031: g_malloc (in /usr/lib/libglib-2.0.so.0.1000.2)

  24.8% : 0x401C3EB: posix_memalign (vg_replace_malloc.c:384)

  15.4% : 0x4127F1E: g_realloc (in /usr/lib/libglib-2.0.so.0.1000.2)

  11.0% : 0x4434D69: my_realloc (in /usr/lib/libmysqlclient.so.15.0.0)

   1.8% : 0x4434B86: my_malloc (in /usr/lib/libmysqlclient.so.15.0.0)

   1.8% : 0x443B415: my_once_alloc (in /usr/lib/libmysqlclient.so.15.0.0)

   0.7% : 0x4127FB9: g_malloc0 (in /usr/lib/libglib-2.0.so.0.1000.2)

  and 33 other insignificant places

== 1 ===========================
Context accounted for 41.8% of measured spacetime
  0x4128031: g_malloc (in /usr/lib/libglib-2.0.so.0.1000.2)

Called from:
  33.4% : 0x8052491: mopen (memblock.c:64)

  19.8% : 0x805255E: __m_blkadd (memblock.c:435)

   1.5% : 0x405ED0A: g_mime_filter_set_size (in /usr/lib/libgmime-2.0.so.2.1.19)

  and 40 other insignificant places

---------------------------------
Context accounted for 24.8% of measured spacetime
  0x401C3EB: posix_memalign (vg_replace_malloc.c:384)

Called from:
  24.8% : 0x4136098: (within /usr/lib/libglib-2.0.so.0.1000.2)

---------------------------------
Context accounted for 15.4% of measured spacetime
  0x4127F1E: g_realloc (in /usr/lib/libglib-2.0.so.0.1000.2)

Called from:
  14.2% : 0x4109AB0: (within /usr/lib/libglib-2.0.so.0.1000.2)

   8.6% : 0x413A832: (within /usr/lib/libglib-2.0.so.0.1000.2)

  and 20 other insignificant places

---------------------------------
Context accounted for 11.0% of measured spacetime
  0x4434D69: my_realloc (in /usr/lib/libmysqlclient.so.15.0.0)

Called from:
  11.0% : 0x445BF37: net_realloc (in /usr/lib/libmysqlclient.so.15.0.0)

--------------------------------- 

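Purely as a hypothetical sketch (this is not dbmail's actual memblock.c), the
pattern below shows how a block pool that grows through g_malloc() and is
released only by an explicit close call produces a profile dominated by a
couple of allocation contexts, and how skipping that close per command would
make usage climb with every FETCH until the child exits. The names pool_open,
pool_grow and pool_close are invented; the comments only note the roles
analogous to the mopen() and __m_blkadd() frames shown above.

/* Hypothetical illustration only -- not dbmail code. */
#include <glib.h>

typedef struct block { char data[4096]; struct block *next; } block_t;
typedef struct { block_t *head; } pool_t;

static pool_t *pool_open(void)          /* role analogous to mopen() */
{
    pool_t *p = g_malloc0(sizeof *p);
    p->head = g_malloc0(sizeof(block_t));
    return p;
}

static void pool_grow(pool_t *p)        /* role analogous to __m_blkadd() */
{
    block_t *b = g_malloc0(sizeof *b);
    b->next = p->head;
    p->head = b;
}

static void pool_close(pool_t *p)
{
    block_t *b = p->head;
    while (b != NULL) {
        block_t *next = b->next;
        g_free(b);
        b = next;
    }
    g_free(p);
}

int main(void)
{
    int msg, i;
    for (msg = 0; msg < 1000; msg++) {      /* one iteration per FETCH */
        pool_t *p = pool_open();
        for (i = 0; i < 16; i++)
            pool_grow(p);
        /* If pool_close(p) were skipped here, every message would leave
         * ~16 blocks behind, and memory would only return to the OS when
         * the child process exits -- the behaviour described in this bug. */
        pool_close(p);
    }
    return 0;
}
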
---------------------------------------------------------------------- 
 paul - 10-Dec-06 15:51  
---------------------------------------------------------------------- 
I've merged my pending patch creatively called 'lesscopy' into rev 2391.
This should definitely affect the massif graph. Please test y'all; I can't
find flaws in it. 

---------------------------------------------------------------------- 
 Valen - 11-Dec-06 15:00  
---------------------------------------------------------------------- 
Getting better, I think, but note that the yellow section keeps growing.
I believe the purple section is also growing, but at a slower rate.
Over time, memory usage does seem to go down with zero activity as well.
The files and graphs attached to this note are the 31023 series.
The yellow context is:

Context accounted for 36.2% of measured spacetime
  0x4128031: g_malloc (in /usr/lib/libglib-2.0.so.0.1000.2)

Called from:
  32.6% : 0x8052491: mopen (memblock.c:64)

  15.2% : 0x805255E: __m_blkadd (memblock.c:435)

   2.7% : 0x405ED0A: g_mime_filter_set_size (in /usr/lib/libgmime-2.0.so.2.1.19)

   0.9% : 0x413EC83: g_tree_new_full (in /usr/lib/libglib-2.0.so.0.1000.2)

  and 39 other insignificant places

The purple context is:
Context accounted for 27.9% of measured spacetime
  0x401C3EB: posix_memalign (vg_replace_malloc.c:384)

Called from:
  27.9% : 0x4136098: (within /usr/lib/libglib-2.0.so.0.1000.2) 

---------------------------------------------------------------------- 
 fehuq - 19-Dec-06 16:00  
---------------------------------------------------------------------- 
Here's a log from valgrind with --leak-check=yes instead of
massif. Of particular interest is this section:

==21508== 5,063,000 bytes in 20,405 blocks are possibly lost in loss record 37 of 40
==21508==    at 0x4005863: memalign (vg_replace_malloc.c:332)
==21508==    by 0x40058E2: posix_memalign (vg_replace_malloc.c:384)
==21508==    by 0x1DC136: (within /usr/lib/libglib-2.0.so.0.1200.4)
==21508==    by 0x1DC408: g_slice_alloc (in /usr/lib/libglib-2.0.so.0.1200.4)
==21508==    by 0x1E4457: (within /usr/lib/libglib-2.0.so.0.1200.4)
==21508==    by 0x1E544E: (within /usr/lib/libglib-2.0.so.0.1200.4)
==21508==    by 0x402DED9: dbmail_mailbox_open (dbmail-mailbox.c:151)
==21508==    by 0x80555CC: _ic_fetch (imapcommands.c:1433)
==21508==    by 0x805667A: _ic_uid (imapcommands.c:1976)
==21508==    by 0x8052BBC: IMAPClientHandler (imap4.c:320)
==21508==    by 0x40447A3: PerformChildTask (serverchild.c:368)
==21508==    by 0x404492A: CreateChild (serverchild.c:202)


But I'm no expert on evaluating these debug outputs. 

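(One standard follow-up, not something tried in this report: the trace goes
through g_slice_alloc, and GLib's slice allocator caches memory in a way
valgrind often flags as "possibly lost". Re-running with the slice allocator
disabled, e.g.

  G_SLICE=always-malloc valgrind --leak-check=yes dbmail-imapd -n

makes it easier to tell real leaks from GSlice caching.)
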
Issue History 
Date Modified   Username       Field                    Change               
====================================================================== 
29-Nov-06 15:13 fehuq          New Issue                                    
29-Nov-06 15:13 fehuq          File Added: top_3sec_interval.txt
29-Nov-06 15:24 fehuq          Note Added: 0001633                          
06-Dec-06 12:41 fehuq          File Added: dbmail-imap-fetches.txt
09-Dec-06 05:51 aaron          Relationship added       has duplicate 0000469
09-Dec-06 06:09 aaron          Note Added: 0001661                          
09-Dec-06 08:45 Valen          Note Added: 0001662                          
09-Dec-06 18:46 paul           Note Added: 0001663                          
10-Dec-06 01:39 Valen          File Added: massif.13357.gif                    
10-Dec-06 01:39 Valen          File Added: massif.13357.txt                    
10-Dec-06 01:40 Valen          Note Added: 0001664                          
10-Dec-06 06:26 Valen          Note Added: 0001665                          
10-Dec-06 15:51 paul           Note Added: 0001666                          
11-Dec-06 14:55 Valen          File Added: massif.31023.gif                    
11-Dec-06 15:00 Valen          Note Added: 0001670                          
11-Dec-06 15:01 Valen          Note Added: 0001671                          
11-Dec-06 15:01 Valen          File Added: massif.31023.txt.tar.gz
11-Dec-06 15:05 Valen          File Added: massif.4134.png                    
19-Dec-06 16:00 fehuq          File Added: valgrind_leak-check.log
19-Dec-06 16:00 fehuq          Note Added: 0001699                          
======================================================================
