[ 
https://issues.apache.org/jira/browse/PROTON-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Ross reassigned PROTON-1170:
-----------------------------------

    Assignee: Cliff Jansen

> closed links are never deleted
> ------------------------------
>
>                 Key: PROTON-1170
>                 URL: https://issues.apache.org/jira/browse/PROTON-1170
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: proton-c
>         Environment: miserable
>            Reporter: michael goulish
>            Assignee: Cliff Jansen
>              Labels: leak, perf
>             Fix For: 0.19.0
>
>
> I wrote a reactor-based application that makes a single connection and then 
> repeatedly makes and closes links (receivers) on that connection.
> It makes and closes the links as fast as possible: as soon as it gets the 
> on_receiver_close event, it makes a new one; as soon as it gets the 
> on_receiver_open event, it closes that receiver.
> This application talks to a dispatch router.
> Problem: both the router and my application grow their memory (RSS) rapidly, 
> and the router's ability to respond to new link creations slows down 
> rapidly.  Looking at the router with Valgrind/Callgrind, after about 15,000 
> links have been created and closed, I see that 45% of all CPU time on the 
> router is being consumed by pn_find_link().  Instrumenting that code, I see 
> that the list it is searching never decreases in size.
> I tried creating my links with the "lifetime_policy" set to DELETE_ON_CLOSE, 
> but that had no effect.  Grepping for that symbol, I see that it does not 
> occur in the proton C code except in its definition and in a printing 
> convenience function.
> Major scalability bug.
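The reported behavior can be modeled outside Proton. The sketch below is not Proton code; it is a hypothetical Python model of a session whose link registry only ever grows, with a linear name lookup standing in for pn_find_link(). It shows why, once closed links are marked closed but never freed, every lookup scans the full history of links, so cost and memory grow without bound (the reporter observed this after about 15,000 open/close cycles; the model uses 2,000 to keep the run short):

```python
# Hypothetical model (not Proton code) of the reported leak: a link
# registry that only grows, plus a linear scan like pn_find_link().

class Session:
    def __init__(self):
        self.links = []  # closed links are never removed -- the bug

    def open_link(self, name):
        self.links.append({"name": name, "closed": False})

    def close_link(self, name):
        # The link is only marked closed; it stays in the list.
        self.find_link(name)["closed"] = True

    def find_link(self, name):
        # Linear scan over every link ever created, so lookup cost
        # grows with the total number of opens, not with live links.
        for link in self.links:
            if link["name"] == name:
                return link
        return None

session = Session()
for i in range(2000):
    name = "link-%d" % i
    session.open_link(name)
    session.close_link(name)

# Every closed link is still in the registry, so the next lookup
# must scan all of them.
print(len(session.links))  # 2000
```

With DELETE_ON_CLOSE honored, close_link() would instead remove the entry, keeping the list bounded by the number of live links and the scan cost constant for this workload.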



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org
