[ 
https://issues.apache.org/jira/browse/QPID-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16019768#comment-16019768
 ] 

ASF subversion and git services commented on QPID-7784:
-------------------------------------------------------

Commit c42f1589a5da52c549d5c52bb7b224ba5d9a6f4e in qpid-broker-j's branch 
refs/heads/6.1.x from [~lorenz.quack]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=c42f158 ]

QPID-7784: [Java Broker] Dispose QpidByteBuffers associated with pooled threads 
when shutting down executors.

Cherry picked from d9af2660089139e2f4fdad8c0aa0e0c8e6529ff5


> Closing a virtualhost does not dispose QBBs still associated with pooled IO 
> threads
> -----------------------------------------------------------------------------------
>
>                 Key: QPID-7784
>                 URL: https://issues.apache.org/jira/browse/QPID-7784
>             Project: Qpid
>          Issue Type: Bug
>          Components: Java Broker
>    Affects Versions: qpid-java-6.0, qpid-java-6.0.6, qpid-java-6.1, 
> qpid-java-6.1.2
>            Reporter: Keith Wall
>             Fix For: qpid-java-6.0.7, qpid-java-broker-7.0.0, qpid-java-6.1.3
>
>
> If I close a virtualhost (either via management or owing to a change of HA 
> mastership), the QBBs that are associated with pooled IO threads don't get 
> disposed.
> This causes the value returned by {{QBB.getNumberOfActivePooledBuffers()}} to 
> be incorrect.  This value is used to determine when to flow to disk, so this 
> would cause flow to disk to occur more frequently than it needs to.
> This problem does exist on 6.0/6.1, but is not particularly impactful.  The 
> garbage collector will eventually collect the QBBs and the associated direct 
> memory, and the number of threads in the pool is small, so this probably won't 
> cause a problem.
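
The lifecycle described above can be sketched in a few lines.  This is an illustrative toy, not the actual Qpid Broker-J code: the class name, the `active` set, and the buffer size are all invented stand-ins for the broker's internal QBB accounting.  It shows why buffers cached per pooled thread stay "active" after the work is done, and why the fix disposes them explicitly when the executor shuts down rather than waiting for the garbage collector.

```java
import java.nio.ByteBuffer;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only -- not the actual Qpid Broker-J implementation.
// Each pooled worker thread caches a direct buffer in a ThreadLocal; unless
// the shutdown path releases those cached buffers explicitly, the "active
// pooled buffers" count stays inflated until the GC happens to reclaim them.
public class PooledBufferSketch {
    // Stand-in for the accounting behind QBB.getNumberOfActivePooledBuffers().
    static final Set<ByteBuffer> active = ConcurrentHashMap.newKeySet();

    static final ThreadLocal<ByteBuffer> cached = ThreadLocal.withInitial(() -> {
        ByteBuffer b = ByteBuffer.allocateDirect(256);
        active.add(b);
        return b;
    });

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CyclicBarrier bothRunning = new CyclicBarrier(2);
        for (int i = 0; i < 2; i++) {
            pool.execute(() -> {
                cached.get();                // worker caches its own buffer
                try {
                    bothRunning.await();     // force both pooled threads to run
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Both threads are gone, but their buffers are still counted as active.
        System.out.println("active before dispose: " + active.size()); // 2

        // The fix: when shutting down the executor, dispose every buffer still
        // associated with a pooled thread instead of leaving them to the GC.
        active.clear();                      // stand-in for disposing each QBB
        System.out.println("active after dispose: " + active.size());  // 0
    }
}
```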



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
