Hi Keith,
Thanks so much for the update and for putting together the new version with
enhancements to flowToDisk. Just to confirm: in order to get a broker with
these fixes and also be able to use MultiQueueConsumer, should I take the
6.0.7 release and apply QPID-7462 on top of it?
For the longer-term fix (periodic
The problem was really the prefetch being set to 0, due to earlier issues
where a high prefetch caused problems with message consumption for multiple
parallel consumers on a single Service Bus queue. (With a high prefetch,
messages are often not processed within the predefined time, which then
triggers another mes
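(For illustration, a minimal sketch of how zero prefetch can be set with the
Qpid JMS client via its jms.prefetchPolicy.all URI option; the Service Bus
host and credentials below are placeholders, not values from this thread:)

    import javax.jms.Connection;
    import org.apache.qpid.jms.JmsConnectionFactory;

    public class ZeroPrefetchExample {
        public static void main(String[] args) throws Exception {
            // jms.prefetchPolicy.all=0 sets queue, topic and durable-subscriber
            // prefetch to zero, so credit is only granted on each receive() call.
            JmsConnectionFactory factory = new JmsConnectionFactory(
                    "amqps://mynamespace.servicebus.windows.net"
                    + "?jms.prefetchPolicy.all=0");
            Connection connection = factory.createConnection("sasKeyName", "sasKey");
            connection.start();
            // ... create sessions and consumers as usual ...
            connection.close();
        }
    }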
The client attempts to drain the consumer credit in 3 main situations:
- When a receive[NoWait] call is attempted and any timeout elapses
without a message being available in the local buffer.
- While performing a transaction rollback (including transacted session closure).
- A special case of the first,
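(To make the first situation concrete, a hedged sketch: with nothing in the
local buffer, a timed receive() makes the client drain the consumer's credit
before returning null, and the amqp.drainTimeout URI option, in milliseconds,
bounds how long that drain may take. Host, credentials and queue name are
placeholders:)

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.qpid.jms.JmsConnectionFactory;

    public class DrainOnReceiveExample {
        public static void main(String[] args) throws Exception {
            JmsConnectionFactory factory = new JmsConnectionFactory(
                    "amqps://mynamespace.servicebus.windows.net"
                    + "?amqp.drainTimeout=30000");
            Connection connection = factory.createConnection("sasKeyName", "sasKey");
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("myQueue"));

            // If no message is buffered locally when the timeout elapses, the
            // client issues a credit drain; a peer that never completes the
            // drain (e.g. Service Bus) causes the drain timeout to fire instead.
            Message message = consumer.receive(1000);
            System.out.println(message == null ? "no message" : "got " + message);
            connection.close();
        }
    }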
Hi Ramayan
Further changes have been made on the 6.0.x branch that prevent the
noisy flow to disk messages in the log. The flow to disk on/off
messages we had before didn't really fit, and this resulted in the logs
being spammed whenever a queue was considered full or memory was
under pressure.
Unfortunately it seems that simply setting either the
jms.receiveLocalOnly=true or the jms.receiveNoWaitLocalOnly=true parameter
does not help. I tried adding those parameters to the connection URL, and I
tried setting the matching options on my JmsConnectionFactory, but the
JmsListenerEndpointRegistry still ca
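(For reference, roughly what the factory-level configuration being described
might look like in a Spring setup; this is a sketch only, and the
setReceiveLocalOnly/setReceiveNoWaitLocalOnly setters plus the bean wiring
are my reading of the intended configuration rather than code from this
thread:)

    import org.apache.qpid.jms.JmsConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.annotation.EnableJms;
    import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

    @Configuration
    @EnableJms
    public class JmsConfig {

        @Bean
        public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
            JmsConnectionFactory cf = new JmsConnectionFactory(
                    "amqps://mynamespace.servicebus.windows.net");
            // Programmatic equivalents of the URI options above.
            cf.setReceiveLocalOnly(true);       // receive(timeout) polls local buffer only
            cf.setReceiveNoWaitLocalOnly(true); // receiveNoWait() polls local buffer only

            DefaultJmsListenerContainerFactory factory =
                    new DefaultJmsListenerContainerFactory();
            factory.setConnectionFactory(cf);
            return factory;
        }
    }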
The consumer's credit 'drain' attempt is failing to complete because
Azure Service Bus doesn't implement the 'drain' functionality.
Rather than reducing the drainTimeout, you will instead need to toggle
the client to only receive out of its local buffers when working with
Service Bus, using jms.receiveLocalOnly=true and jms.receiveNoWaitLocalOnly=true.
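For example, on the connection URI (the host is a placeholder):

    amqps://mynamespace.servicebus.windows.net?jms.receiveLocalOnly=true&jms.receiveNoWaitLocalOnly=true

With these set, receive(timeout) and receiveNoWait() only return messages
already held in the client's local buffer, so no drain is attempted.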