[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572187#comment-16572187 ]

Alan Conway commented on QPID-8134:
-----------------------------------

> will try the use of massif with the visualization tool, although under 
> valgrind the performance degrades too significantly to test at much load so I 
> fear that massif may contribute yet another performance drop...

Actually massif degrades performance less than memcheck (the default --tool for 
valgrind). It only maintains records of alloc/dealloc locations; it doesn't do 
all the byte-colouring work needed to catch use of un-initialized memory, 
double frees, etc.
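
For reference, a typical massif run looks like this (the spout invocation is 
illustrative, substitute your own driver and arguments):

    valgrind --tool=massif ./spout   # writes massif.out.<pid>
    ms_print massif.out.<pid>        # text report of the heap snapshots

massif-visualizer can open the same massif.out file if you prefer a GUI.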

> Sender has the problem in general, requires a large elasticity as the sender 
> must tolerate spikes of behavior that the receiver then catches up to.

"unreliable" means that there is no acknowledgement, the sender simply forgets 
each message as it is sent, there is no cache. That is a good option if you 
don't need the Qpid client to do failover and re-send for you.

"at-least-once" means the sender remembers each message it sends until it 
receives an acknowledgement, to allow re-sending in the event of fail-over. I 
suspect that's where memory is building up.
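
For example, with the qpid::messaging API the link reliability can be selected 
in the address string when you create the sender (a sketch; "my.subject" is a 
placeholder):

    // Request an unreliable link: no acknowledgements, no replay cache.
    qpid::messaging::Sender sender = session.createSender(
        "amq.topic/my.subject; {link: {reliability: unreliable}}");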

Your sender capacity is very high. I suggest reducing the sender capacity to a 
level sized for normal conditions, and having your application wait to send 
messages when the sender is at capacity. In other words, keep message data 
under your own control until you have a reasonable expectation that Qpid will 
be able to send it. That will improve performance and also reduce uncertainty 
during failures: if you push a million unacked messages into Qpid, you'll have 
a huge and probably unhelpful re-send if the client re-connects after a 
failure, and they'll be lost entirely if the client crashes. If you keep 
messages under application control until it's reasonable to send, you lose 
fewer messages and have more recovery options in a failure.
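
A minimal sketch of that pattern, assuming the qpid::messaging API (the broker 
URL, capacity, and message count are placeholders, not taken from your setup):

    #include <qpid/messaging/Connection.h>
    #include <qpid/messaging/Session.h>
    #include <qpid/messaging/Sender.h>
    #include <qpid/messaging/Message.h>

    using namespace qpid::messaging;

    int main() {
        Connection connection("localhost:5672");  // placeholder broker URL
        connection.open();
        Session session = connection.createSession();
        Sender sender = session.createSender("amq.topic");

        sender.setCapacity(100);  // modest capacity sized for normal conditions
        for (int i = 0; i < 100000; ++i) {        // placeholder message count
            // Wait while the client is at capacity instead of queueing
            // unbounded unacked messages inside the library.
            while (sender.getUnsettled() >= sender.getCapacity())
                session.sync();  // block until outstanding sends are acknowledged
            sender.send(Message("payload"));
        }
        session.sync();
        connection.close();
        return 0;
    }

Because the loop blocks before handing the next message to Qpid, the replay 
buffer stays small and any re-send after failover is bounded by the capacity.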

 

> qpid::client::Message::send multiple memory leaks
> -------------------------------------------------
>
>                 Key: QPID-8134
>                 URL: https://issues.apache.org/jira/browse/QPID-8134
>             Project: Qpid
>          Issue Type: Bug
>          Components: C++ Client
>    Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0
>         Environment: CentOS Linux release 7.4.1708 (Core)
> Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> qpid-qmf-1.37.0-1.el7.x86_64
> qpid-dispatch-debuginfo-1.0.0-1.el7.x86_64
> python-qpid-1.37.0-1.el7.noarch
> qpid-proton-c-0.18.1-1.el7.x86_64
> python-qpid-qmf-1.37.0-1.el7.x86_64
> qpid-proton-debuginfo-0.18.1-1.el7.x86_64
> qpid-cpp-debuginfo-1.37.0-1.el7.x86_64
> qpid-cpp-client-devel-1.37.0-1.el7.x86_64
> qpid-cpp-server-1.37.0-1.el7.x86_64
> qpid-cpp-client-1.37.0-1.el7.x86_64
>  
>            Reporter: dan clark
>            Assignee: Alan Conway
>            Priority: Blocker
>              Labels: leak, maven
>             Fix For: qpid-cpp-1.39.0
>
>         Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, 
> qpid-stat.out, spout.cpp, spout.log
>
>   Original Estimate: 40h
>  Remaining Estimate: 40h
>
> There may be multiple leaks of the outgoing message structure and associated 
> fields when using the qpid::client::amqp0_10::SenderImpl::send function to 
> publish messages under certain setups. I will concede that there may be 
> options beyond my ken to ameliorate the leak of message structures, 
> especially since there is an indication that under prolonged runs (a 
> daemonized version of an application like spout) the statistics for qpidd 
> indicate increasing acquires with zero releases.
> The basic notion is illustrated with the test applications spout (and drain).  
> Consider a long-running daemon reducing the overhead of open/send/close by 
> keeping the message connection open for long periods of time. The logic would 
> be: start the application and open a connection, then send data in a loop 
> (and never reach a close). The spout application demonstrates the leak under 
> valgrind by sending the data and then calling exit(0).
> Note also the lack of 'releases' associated with the 'acquires' in the stats 
> output.
> Capturing the leaks with the test applications spout/drain required adding 
> an 'exit()' prior to the close, because during normal operation of a daemon 
> the connection remains open for a sustained period of time. The leaked 
> structures within the C++ client library are still tracked by the library and 
> are cleaned up on 'connection.close()', but they should be cleaned up when 
> the send/receive ack completes or when the message's TTL expires, whichever 
> comes first. I have witnessed the leaked structures grow to millions of 
> messages lasting more than 24 hours, with a short (300 second) TTL on the 
> messages, in the scenarios attached using spout/drain as the test vehicle.
> The attached spout.log uses a short 10-message test and contains 5 different 
> categories of leaked structures (found via the 'bytes in 10 blocks are still 
> reachable' lines) that are in line with the much larger leaks seen when 
> running the application for multiple days with millions of messages.
> The leaks seem to be associated with structures allocating std::strings to 
> save the "subject" and the "payload" for string-based messages sent to 
> amq.topic.
> Suggested workarounds are welcome, based on application-level changes to 
> spout/drain (if they are missing key components) or changes to the 
> address/setup of the queues for amq.topic messages (see the gospout.sh and 
> godrain.sh test drivers, which provide the specific address structures being 
> used).
> For example, the following is one of the 5 categories of leaked data from 
> 'spout.log', from a valgrind analysis taken after the send and session.sync 
> but prior to connection.close():
>  
> ==3388== 3,680 bytes in 10 blocks are still reachable in loss record 233 of 234
> ==3388==    at 0x4C2A203: operator new(unsigned long) (vg_replace_malloc.c:334)
> ==3388==    by 0x4EB046C: qpid::client::Message::Message(std::string const&, std::string const&) (Message.cpp:31)
> ==3388==    by 0x51742C1: qpid::client::amqp0_10::OutgoingMessage::OutgoingMessage() (OutgoingMessage.cpp:167)
> ==3388==    by 0x5186200: qpid::client::amqp0_10::SenderImpl::sendImpl(qpid::messaging::Message const&) (SenderImpl.cpp:140)
> ==3388==    by 0x5186485: operator() (SenderImpl.h:114)
> ==3388==    by 0x5186485: execute<qpid::client::amqp0_10::SenderImpl::Send> (SessionImpl.h:102)
> ==3388==    by 0x5186485: qpid::client::amqp0_10::SenderImpl::send(qpid::messaging::Message const&, bool) (SenderImpl.cpp:49)
> ==3388==    by 0x40438D: main (spout.cpp:185)
>  
>  


