[ https://issues.apache.org/jira/browse/AMQ-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13915975#comment-13915975 ]

Richard Wagg commented on AMQ-5077:
-----------------------------------

Hi,
We weren't setting a producer window size before, so the default was used.
I've tried some quick tests now - setting abnormally high or low sizes (1k, 
10k, 100k, 1mb, 10mb) has no effect.
Is this setting used at all if flow control is disabled? 
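
For reference, this is roughly how we set it in a quick Java check (a minimal 
sketch - the broker URL and the value are illustrative, and the NMS equivalent 
is analogous):

    import javax.jms.Connection;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ProducerWindowCheck {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
            // Producer window in bytes; we tried values from 1k up to 10mb
            // with no visible effect in our tests.
            factory.setProducerWindowSize(1024 * 1024);
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create the producer and send as usual ...
            connection.close();
        }
    }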

Client side, we've seen calls blocked at the network level rather than in any 
AMQ-specific code - I think that, regardless of the window size, it hasn't been 
hit at the time the network call blocks. I've been using mostly NMS clients for 
the testing, as we initially thought the problem was in the NMS library - I'll 
get a Java test set up next week and go through the code path it takes and 
where the time is spent waiting. 
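
Something along these lines is what I have in mind for the Java test - a 
minimal sketch, with the destination name, payload size and 10ms threshold 
picked only for illustration:

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.command.ActiveMQTopic;

    public class SendTimingTest {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(new ActiveMQTopic("TopicRouted.1"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            byte[] payload = new byte[5000];
            for (int i = 0; i < 10000; i++) {
                BytesMessage message = session.createBytesMessage();
                message.writeBytes(payload);
                long start = System.nanoTime();
                producer.send(message); // with async send this should return quickly
                long micros = (System.nanoTime() - start) / 1000;
                if (micros > 10000) { // log sends that block for more than 10ms
                    System.out.println("send " + i + " blocked for " + micros + "us");
                }
            }
            connection.close();
        }
    }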

I think we can either say that singular acks contending on the index lock are a 
problem, or that we write the messages to the index too quickly in the first 
place. If we just want to handle the ideal case, where consumers are able to 
ACK the message near-instantaneously, then we could avoid writing messages to 
the index only to remove them again in short order, simply by delaying the 
writes by the consumer-to-broker round-trip time. That might reduce durability, 
but we've already accepted that tradeoff with other settings - so it makes 
sense to me to try to optimise away the index writes and contention, rather 
than attempt to batch up the acks, which might cause less contention but 
probably still means the messages get written in the first place. 
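
For context, the durability tradeoffs I mean are the ones already in our broker 
config; in Java terms they boil down to roughly the following (a sketch only - 
our actual configuration is the XML equivalent):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class BrokerConfigSketch {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            // Tradeoffs already accepted: no fsync per journal write, and
            // dispatch to consumers without waiting for the store to complete.
            kahaDB.setEnableJournalDiskSyncs(false);
            kahaDB.setConcurrentStoreAndDispatchQueues(true);
            broker.setPersistenceAdapter(kahaDB);
            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
        }
    }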

Thanks,
Richard

> Improve performance of ConcurrentStoreAndDispatch
> -------------------------------------------------
>
>                 Key: AMQ-5077
>                 URL: https://issues.apache.org/jira/browse/AMQ-5077
>             Project: ActiveMQ
>          Issue Type: Wish
>          Components: Message Store
>    Affects Versions: 5.9.0
>         Environment: 5.9.0.redhat-610343
>            Reporter: Jason Shepherd
>            Priority: Minor
>         Attachments: Test combinations.xlsx, compDesPerf.tar.gz
>
>
> We have publishers publishing to a topic which has 5 topic -> queue routings, 
> and the maximum attainable message rate is ~833 messages/sec, with each 
> message around 5k in size.
> To test this I set up a JMS config with the following topics:
> Topic
> TopicRouted.1
> ...
> TopicRouted.11
> Each topic has an increasing number of routings to queues, and a client is 
> set up to subscribe to all the queues.
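>
> Roughly, the routing amounts to the following (a sketch only - the attached 
> broker configuration is authoritative, and the TopicQueueRouted.N names are 
> taken from the consumer step below):
>
>     import java.util.Arrays;
>     import org.apache.activemq.broker.BrokerService;
>     import org.apache.activemq.broker.region.DestinationInterceptor;
>     import org.apache.activemq.broker.region.virtual.CompositeTopic;
>     import org.apache.activemq.broker.region.virtual.VirtualDestination;
>     import org.apache.activemq.broker.region.virtual.VirtualDestinationInterceptor;
>     import org.apache.activemq.command.ActiveMQDestination;
>     import org.apache.activemq.command.ActiveMQQueue;
>
>     public class RoutingSketch {
>         public static void main(String[] args) throws Exception {
>             // TopicRouted.5 fans out to five queues.
>             CompositeTopic routed5 = new CompositeTopic();
>             routed5.setName("TopicRouted.5");
>             routed5.setForwardTo(Arrays.<ActiveMQDestination>asList(
>                     new ActiveMQQueue("TopicQueueRouted.1"),
>                     new ActiveMQQueue("TopicQueueRouted.2"),
>                     new ActiveMQQueue("TopicQueueRouted.3"),
>                     new ActiveMQQueue("TopicQueueRouted.4"),
>                     new ActiveMQQueue("TopicQueueRouted.5")));
>
>             VirtualDestinationInterceptor interceptor = new VirtualDestinationInterceptor();
>             interceptor.setVirtualDestinations(new VirtualDestination[] { routed5 });
>
>             BrokerService broker = new BrokerService();
>             broker.setDestinationInterceptors(new DestinationInterceptor[] { interceptor });
>             broker.addConnector("tcp://0.0.0.0:61616");
>             broker.start();
>         }
>     }
>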
> Rough message rates:
> routings   messages/sec
> 0          2500
> 1          1428
> 2          2000
> 3          1428
> 4          1111
> 5          833
> This occurs whether the broker config has producerFlowControl set to true or 
> false, and with KahaDB disk syncing turned off. We also tried experimenting 
> with concurrentStoreAndDispatch, but that didn't seem to help. LevelDB didn't 
> give any notable performance improvement either.
> We also have asyncSend enabled on the producer, and have a requirement to use 
> persistent messages. We have also experimented with sending messages in a 
> transaction, but that hasn't really helped.
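>
> For concreteness, the producer-side options above amount to roughly the 
> following in a plain JMS client (a sketch only - the destination name and 
> message count are illustrative):
>
>     import javax.jms.Connection;
>     import javax.jms.DeliveryMode;
>     import javax.jms.MessageProducer;
>     import javax.jms.Session;
>     import org.apache.activemq.ActiveMQConnectionFactory;
>     import org.apache.activemq.command.ActiveMQTopic;
>
>     public class ProducerSettingsSketch {
>         public static void main(String[] args) throws Exception {
>             // jms.useAsyncSend=true is what is meant by asyncSend on the producer.
>             ActiveMQConnectionFactory factory =
>                     new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
>             Connection connection = factory.createConnection();
>             connection.start();
>             // transacted=true corresponds to the transactional variant we also tried.
>             Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
>             MessageProducer producer = session.createProducer(new ActiveMQTopic("TopicRouted.5"));
>             producer.setDeliveryMode(DeliveryMode.PERSISTENT); // persistence is a requirement
>             for (int i = 0; i < 100; i++) {
>                 producer.send(session.createTextMessage("payload " + i));
>             }
>             session.commit(); // batch the sends in one transaction
>             connection.close();
>         }
>     }
>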
> It seems like the producer throughput rate across all queue destinations, all 
> connections and all publisher machines is limited by something on the broker, 
> through a mechanism other than producer flow control. I think the prime 
> suspect is still contention on the index.
> We did some tests with the YourKit profiler.
> The profiler was attached to the broker at startup and allowed to run, and 
> then a topic publisher was started, routing to 5 queues. 
> Profiler statistics were reset, the publisher was allowed to run for 60 
> seconds, and then a profiling snapshot was taken. During that time, ~9600 
> messages were logged as being sent, for a rate of ~160/sec.
> This ties in roughly with the invocation counts recorded in the snapshot (I 
> think) - ~43k calls. 
> From what I can work out in the snapshot (filtering everything but 
> org.apache.activemq.store.kahadb), for the 60-second sample period:
> 24.8 seconds elapsed in 
> org.apache.activemq.store.kahadb.KahaDBTransactionStore$1.removeAsyncMessage(ConnectionContext, MessageAck).
> 18.3 seconds elapsed in 
> org.apache.activemq.store.kahadb.KahaDBTransactionStore$1.asyncAddQueueMessage(ConnectionContext, Message, boolean).
> From these, a further large portion of the time is spent inside 
> MessageDatabase:
> org.apache.activemq.store.kahadb.MessageDatabase.process(KahaRemoveMessageCommand, Location) - 10 secs elapsed
> org.apache.activemq.store.kahadb.MessageDatabase.process(KahaAddMessageCommand, Location) - 8.5 secs elapsed.
> As both of these take indexLock.writeLock(), and both take place on the NIO 
> transport threads, I think this accounts for at least some of the message 
> throughput limits. As messages are added to and removed from the index one by 
> one, regardless of the sync settings, this adds a fair amount of overhead. 
> While we're not synchronising on writes to disk, we are performing work on 
> the NIO worker thread which can block on locks, and this could account for 
> the behaviour we've seen client side. 
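>
> To illustrate the pattern I suspect (this is not ActiveMQ code, just a 
> simplified sketch of per-message index updates all taking the same write 
> lock on the transport threads):
>
>     import java.util.concurrent.locks.ReentrantReadWriteLock;
>
>     public class IndexContentionSketch {
>         private final ReentrantReadWriteLock indexLock = new ReentrantReadWriteLock();
>
>         // Called once per message on an NIO transport thread.
>         void addMessage(long sequenceId) {
>             indexLock.writeLock().lock();
>             try {
>                 // update the BTree index with the new message location
>             } finally {
>                 indexLock.writeLock().unlock();
>             }
>         }
>
>         // Called once per ack on an NIO transport thread, contending with adds.
>         void removeMessage(long sequenceId) {
>             indexLock.writeLock().lock();
>             try {
>                 // remove the message entry from the BTree index
>             } finally {
>                 indexLock.writeLock().unlock();
>             }
>         }
>     }
>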
> To Reproduce:
> 1. Install a broker and use the attached configuration.
> 2. Use the 5.8.0 example ant script to consume from the queues 
> TopicQueueRouted.1 - 5, e.g.:
>    ant consumer -Durl=tcp://localhost:61616 -Dsubject=TopicQueueRouted.1 
> -Duser=admin -Dpassword=admin -Dmax=-1
> 3. Use the modified version of the 5.8.0 example ant script (attached) to 
> send messages to the topics TopicRouted.1 - 5, e.g.:
>    ant producer 
> -Durl='tcp://localhost:61616?jms.useAsyncSend=true&wireFormat.tightEncodingEnabled=false&keepAlive=true&wireFormat.maxInactivityDuration=60000&socketBufferSize=32768'
>  -Dsubject=TopicRouted.1 -Duser=admin -Dpassword=admin -Dmax=1 -Dtopic=true 
> -DsleepTime=0 -Dmax=10000 -DmessageSize=5000
> This modified version of the script calculates the number of messages per 
> second and prints it to the console.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
