[ 
https://issues.apache.org/jira/browse/QPID-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy closed QPID-7636.
----------------------------
    Resolution: Fixed

The issue was fixed as part of QPID-7658. However, a problem remains with the 
suspension of transaction coordinator links: their credit is not restored after 
flow is resumed. Transaction coordinator links should not be suspended on 
producer flow control at all; this will be fixed as part of QPID-7752.
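
A rough sketch of that direction, with illustrative names (only the link class 
names come from this issue; the package is assumed from {{Session_1_0}}, and the 
accessor, suspend and restore calls are placeholders rather than the broker's 
actual API):
{code:java}
import java.util.List;

import org.apache.qpid.server.protocol.v1_0.StandardReceivingLink_1_0;
import org.apache.qpid.server.protocol.v1_0.TxnCoordinatorReceivingLink_1_0;

// Illustrative only: on producer flow control, suspend and later restore
// credit for standard receiving links, but leave transaction coordinator
// links alone so tx.declare/tx.discharge can still flow and no credit
// needs restoring for them afterwards.
class ProducerFlowControlSketch
{
    private final List<Object> _receivingLinks;

    ProducerFlowControlSketch(List<Object> receivingLinks)
    {
        _receivingLinks = receivingLinks;
    }

    void onProducerFlowControlChanged(boolean blocked)
    {
        for (Object link : _receivingLinks)
        {
            if (link instanceof TxnCoordinatorReceivingLink_1_0)
            {
                continue; // coordinator links are not message producers
            }
            if (link instanceof StandardReceivingLink_1_0)
            {
                StandardReceivingLink_1_0 standardLink = (StandardReceivingLink_1_0) link;
                if (blocked)
                {
                    suspend(standardLink);
                }
                else
                {
                    restoreCredit(standardLink);
                }
            }
        }
    }

    private void suspend(StandardReceivingLink_1_0 link)
    {
        // Placeholder: withdraw the link's credit here.
    }

    private void restoreCredit(StandardReceivingLink_1_0 link)
    {
        // Placeholder: hand credit back to the link here.
    }
}
{code}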

> [AMQP 1.0] producer message flow does not restart after disk-space based flow 
> control relinquished
> --------------------------------------------------------------------------------------------------
>
>                 Key: QPID-7636
>                 URL: https://issues.apache.org/jira/browse/QPID-7636
>             Project: Qpid
>          Issue Type: Bug
>          Components: Java Broker
>            Reporter: Keith Wall
>            Assignee: Alex Rudyy
>             Fix For: qpid-java-broker-7.0.0
>
>         Attachments: 
> 0001-QPID-7636-Producer-message-flow-control-fro-AMQP-1.0.patch
>
>
> Manually testing flow control, if I start a Qpid JMS client transactionally 
> publishing messages to the Broker, and then ensure disk space utilisation 
> exceeds {{store.filesystem.maxUsagePercent}} using {{dd}}, I see the Broker 
> report that flow control is enforced, and the client hangs awaiting link 
> credit:
> {noformat}
> 2017-01-24 16:15:04,686 DEBUG [Housekeeping[default]] 
> (o.a.q.s.v.AbstractVirtualHost) - Checking message status for queue: myqueue
> 2017-01-24 16:15:08,298 INFO  [IO-/127.0.0.1:53903] (q.m.c.flow_enforced) - 
> [IO Pool] [con:0(guest@/127.0.0.1:53903/default)/ch:0] CHN-1005 : Flow 
> Control Enforced (Queue ** All Queues **)
> 2017-01-24 16:15:17,498 INFO  [IO-/127.0.0.1:53903] (q.m.c.flow_enforced) - 
> [IO Pool] [con:0(guest@/127.0.0.1:53903/default)/ch:1] CHN-1005 : Flow 
> Control Enforced (Queue ** All Queues **)
> 2017-01-24 16:15:17,498 DEBUG [IO-/127.0.0.1:53903] (o.a.q.s.p.frame) - 
> SEND[/127.0.0.1:53903|1] : 
> Flow{nextIncomingId=1460,incomingWindow=2048,nextOutgoingId=0,outgoingWindow=2048,handle=1,deliveryCount=486,linkCredit=0,drain=true,echo=false}
> 2017-01-24 16:15:17,498 DEBUG [IO-/127.0.0.1:53903] (o.a.q.s.p.frame) - 
> SEND[/127.0.0.1:53903|1] : 
> Flow{nextIncomingId=1460,incomingWindow=2048,nextOutgoingId=0,outgoingWindow=2048,handle=0,deliveryCount=973,linkCredit=0,drain=true,echo=false}
> {noformat}
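>
> For reference, a minimal transacted Qpid JMS producer along the lines of the 
> manual test above might look like the sketch below (broker URL, credentials 
> and queue name are illustrative, not taken from the actual test):
> {code:java}
> import javax.jms.Connection;
> import javax.jms.MessageProducer;
> import javax.jms.Queue;
> import javax.jms.Session;
> 
> import org.apache.qpid.jms.JmsConnectionFactory;
> 
> public class FlowControlRepro
> {
>     public static void main(String[] args) throws Exception
>     {
>         JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
>         Connection connection = factory.createConnection("guest", "guest");
>         connection.start();
> 
>         Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
>         Queue queue = session.createQueue("myqueue");
>         MessageProducer producer = session.createProducer(queue);
> 
>         // Publish until the broker withholds link credit; once disk-based
>         // flow control is enforced the send/commit below blocks.
>         for (int i = 0; ; i++)
>         {
>             producer.send(session.createTextMessage("message " + i));
>             session.commit();
>         }
>     }
> }
> {code}
>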
> If I then resolve the disk space problem, I expect to see flow restart. On 
> trunk, this does not happen:
> * there is a {{ClassCastException}} in 
> {{org.apache.qpid.server.protocol.v1_0.Session_1_0#doUnblock()}} because a 
> {{TxnCoordinatorReceivingLink_1_0}} cannot be cast to a 
> {{StandardReceivingLink_1_0}}.
> * the {{ClassCastException}} is hidden and not even logged, because nothing 
> observes the IO thread futures created by block/unblock. Something needs to 
> observe the future, check for errors, and behave accordingly (a sketch of 
> such observation follows the trace below).
> * Even if I avoid the {{ClassCastException}}, flow does not resume. I think 
> we are failing to give credit back to the transaction coordinator link. Here 
> is the trace, on flow control removal, when the {{ClassCastException}} is 
> avoided.
> {noformat}
> 2017-01-24 16:16:08,824 INFO  [IO-/127.0.0.1:53903] (q.m.c.flow_removed) - 
> [IO Pool] [con:0(guest@/127.0.0.1:53903/default)/ch:0] CHN-1006 : Flow 
> Control Removed
> 2017-01-24 16:16:10,167 INFO  [IO-/127.0.0.1:53903] (q.m.c.flow_removed) - 
> [IO Pool] [con:0(guest@/127.0.0.1:53903/default)/ch:1] CHN-1006 : Flow 
> Control Removed
> 2017-01-24 16:16:10,168 DEBUG [IO-/127.0.0.1:53903] (o.a.q.s.p.frame) - 
> SEND[/127.0.0.1:53903|1] : 
> Flow{nextIncomingId=1461,incomingWindow=2048,nextOutgoingId=0,outgoingWindow=2048,handle=1,deliveryCount=487,linkCredit=20486,echo=false}
> {noformat}
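>
> A rough sketch of the kind of observation the second bullet asks for, using 
> {{java.util.concurrent.CompletableFuture}} purely for illustration (the broker 
> schedules this work on its IO thread and may use a different future type; the 
> class and method names here are made up):
> {code:java}
> import java.util.concurrent.CompletableFuture;
> 
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> class UnblockObservationSketch
> {
>     private static final Logger LOGGER = LoggerFactory.getLogger(UnblockObservationSketch.class);
> 
>     // Whatever future the IO thread hands back for the block/unblock task
>     // must be observed; otherwise exceptions such as the ClassCastException
>     // above are silently dropped.
>     void observe(CompletableFuture<Void> unblockFuture)
>     {
>         unblockFuture.whenComplete((result, error) ->
>         {
>             if (error != null)
>             {
>                 LOGGER.error("Unblocking session after flow control removal failed", error);
>             }
>         });
>     }
> }
> {code}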



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
