[ https://issues.apache.org/jira/browse/QPID-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12785425#action_12785425 ]

Andrew Stitcher commented on QPID-2214:
---------------------------------------

Good work on isolating this "unbounded growth" scenario when creating and then 
destroying IO worker threads.

However, I don't see how the patch can fix the diagnosed problem.

There is currently no way to call the ThreadStatus destructor except when 
shutting the entire program down, in other words during the call of the 
allThreadsStatus destructor.

The individual ThreadStatus instances aren't deleted when each thread is 
destroyed, and I don't think there is a way to make that happen.

I'm currently thinking about this.

> memory leak in qpid::client::Connection
> ---------------------------------------
>
>                 Key: QPID-2214
>                 URL: https://issues.apache.org/jira/browse/QPID-2214
>             Project: Qpid
>          Issue Type: Bug
>          Components: C++ Client
>    Affects Versions: 0.5
>         Environment: qpid 0.5 on a Debian Linux with gcc 4.3.3.
>            Reporter: Daniel Etzold
>            Assignee: Andrew Stitcher
>            Priority: Critical
>         Attachments: qpid-memleak.patch
>
>
> Hi,
> when executing the code below (connecting to and disconnecting from a local 
> broker without sending any messages) the memory usage increases constantly and 
> rapidly. After 10,000 iterations several hundred megabytes of resident memory 
> are used.
> When the lines "connection.open()" and "connection.close()" are commented out, 
> the memory usage does not increase.
> So, is there a memory leak in Connection open/close?
>
> #include <qpid/client/Connection.h>
>
> int
> main(int argc, char** argv)
> {
>     while (1) {
>         qpid::client::Connection connection;
>         connection.open("localhost", 5672);
>         connection.close();
>     }
> }
> Running my test binary with valgrind (with the loop limited to 100 iterations) 
>   valgrind --leak-check=full ./myqpidtest
> it seems that valgrind does not find the growing leak; it only reports the 
> constant 76 bytes shown below:
> ==17321== 76 bytes in 1 blocks are definitely lost in loss record 2 of 2
> ==17321==    at 0x4007ADE: calloc (vg_replace_malloc.c:279)
> ==17321==    by 0x430CD4E7: (within /lib/ld-2.3.6.so)
> ==17321==    by 0x430CD58B: _dl_allocate_tls (in /lib/ld-2.3.6.so)
> ==17321==    by 0x43AE428F: pthread_create@@GLIBC_2.1 (in 
> /lib/tls/i686/cmov/libpthread-2.3.6.so)
> ==17321==    by 0x43AE4AA7: pthread_create@GLIBC_2.0 (in 
> /lib/tls/i686/cmov/libpthread-2.3.6.so)
> ==17321==    by 0x4AA3B55: 
> qpid::sys::ThreadPrivate::ThreadPrivate(qpid::sys::Runnable*) (in 
> libqpidcommon.so.0.1.0)
> ==17321==    by 0x4AA3682: qpid::sys::Thread::Thread(qpid::sys::Runnable*) 
> (in libqpidcommon.so.0.1.0)
> ==17321==    by 0x4C8BC6B: qpid::client::TCPConnector::init() (in 
> libqpidclient.so.0.1.0)
> ==17321==    by 0x4C8040A: qpid::client::ConnectionImpl::open() (in 
> libqpidclient.so.0.1.0)
> ==17321==    by 0x4C6CD4A: 
> qpid::client::Connection::open(qpid::client::ConnectionSettings const&) (in 
> libqpidclient.so.0.1.0)
> ==17321==    by 0x4C6CED2: qpid::client::Connection::open(std::string const&, 
> int, std::string const&, std::string const&, std::string const&, unsigned 
> short) (in libqpidclient.so.0.1.0)
> ==17321==    by 0x805EE1E: main (myqpidtest.cpp:281)
> ==17321== 
> ==17321== LEAK SUMMARY:
> ==17321==    definitely lost: 76 bytes in 1 blocks.
> ==17321==      possibly lost: 0 bytes in 0 blocks.
> ==17321==    still reachable: 48 bytes in 3 blocks.
> ==17321==         suppressed: 0 bytes in 0 blocks.
> The 76 bytes which are reported as definitely lost are constant whether the 
> loop runs 100 times or 1000 times.
> Regards,
> Daniel

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:dev-subscribe@qpid.apache.org
