[ https://issues.apache.org/jira/browse/UIMA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12619143#action_12619143 ]

Marshall Schor commented on UIMA-1130:
--------------------------------------

Thanks, Eddie.  Let me clarify my statements a bit :-)...

>>>The capability to add listeners for a remote reply queue should give equal 
>>>or better performance than setting a prefetch value in most cases. Can we 
>>>see if a single tuning parameter is enough before adding more complexity? 
>>>Note that prefetch is not part of the JMS standard and is not available in 
>>>all JMS implementations. 

+1 to avoiding more tweaking/tuning parameters whenever possible, in favor of 
something that just always works well.
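
To make the comparison concrete, here is a rough sketch (assuming ActiveMQ and 
Spring JMS; the broker URL, queue name, and thread count are made up) of the two 
knobs being weighed - the ActiveMQ-specific prefetch limit versus a portable 
listener-thread count:

import javax.jms.Message;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ReplyQueueTuningSketch {

    public static void main(String[] args) {
        // Hypothetical broker URL and reply queue name, for illustration only.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://service-host:61616");

        // Knob 1 (ActiveMQ-specific, not part of the JMS spec): how many messages
        // the broker pushes to each consumer ahead of acknowledgement.
        cf.getPrefetchPolicy().setQueuePrefetch(1);

        // Knob 2 (portable across JMS providers when using Spring): how many
        // listener threads concurrently drain the reply queue.
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("myServiceReplyQueue");
        container.setConcurrentConsumers(4);
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message reply) {
                // Deserializing the reply (e.g. an XMI-serialized CAS) would happen
                // here, one reply per listener thread at a time.
            }
        });
        container.afterPropertiesSet();  // initializes and starts the listener threads
    }
}

If the second knob alone gives the needed throughput, we never have to expose the 
provider-specific first one.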

>>>> are there use cases where the remote delegate would be able to get 
>>>> requests from the remote broker, but possibly not be able to send it 
>>>> replies? (e.g., due to firewall issues?)
>>>Yes, when the client is behind a firewall, but the remote delegate [service] 
>>>and its broker are outside the firewall. This is one of the motivations for 
>>>always allocating the reply queue on the service's broker (the other main 
>>>one being to eliminate the need for a colocated broker to instantiate a 
>>>local reply queue, again something not possible with many JMS 
>>>implementations).

Well, I meant the other way 'round.  So let me try again: are there use cases 
where the remote broker can deliver new work to the remote delegate just fine, 
but for some reason can't host the reply queue?
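
For context, here is a rough JMS-level sketch of the pattern Eddie describes - 
the client opens a single outbound connection to the service's broker, creates 
its reply queue over that connection, and receives replies on it, so only 
client-initiated traffic has to cross the firewall.  Broker URL, queue name, and 
message content are hypothetical:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ReplyOnServiceBrokerSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical service-side broker, reachable outbound through the firewall.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://service-broker:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Both the request queue and the reply queue live on the service's broker.
        Queue requestQueue = session.createQueue("myServiceInputQueue");
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        MessageProducer producer = session.createProducer(requestQueue);
        TextMessage request = session.createTextMessage("<xmi ... serialized CAS ...>");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);

        // The reply comes back over the same client-initiated connection.
        MessageConsumer replyConsumer = session.createConsumer(replyQueue);
        System.out.println("reply: " + replyConsumer.receive(30000));

        connection.close();
    }
}

My question is about the reverse failure mode: the request half of this picture 
working while the reply half (hosting the reply queue on that same broker) does not.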

>>>>>There should only ever be one listener pulling messages off of the reply 
>>>>>queue.
>>>It is sometimes desirable to have more than one thread doing deserialization 
>>>of reply messages, which is the original point of this issue. 

Yes, true - I meant one listener process (though perhaps with multiple threads).  
So there's no need to worry about "load balancing" due to prefetch > 0 - or is 
there?  I'm not sure how the threads of multiple concurrent listeners interact 
with prefetch.
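
To illustrate what I mean by one listener process with several consumer threads 
on the same reply queue, here is a rough ActiveMQ-specific sketch (broker URL, 
queue name, and thread count are made up; the consumer.prefetchSize destination 
option is ActiveMQ-only):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class MultipleReplyConsumersSketch {

    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://service-broker:61616").createConnection();
        connection.start();

        // One listener process, four consumer threads on the same reply queue.
        // Capping prefetch at 1 asks the broker to hand each reply to whichever
        // consumer is free, instead of pre-assigning a batch to one consumer's buffer.
        for (int i = 0; i < 4; i++) {
            final int threadId = i;
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue replyQueue = session.createQueue("myServiceReplyQueue?consumer.prefetchSize=1");
            MessageConsumer consumer = session.createConsumer(replyQueue);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message reply) {
                    // Each session dispatches on its own thread, so replies are
                    // deserialized concurrently.
                    System.out.println("consumer " + threadId + " handled " + reply);
                }
            });
        }
        Thread.sleep(60000);  // keep the demo process alive long enough to receive replies
    }
}

As I understand it, with prefetch > 1 the broker may push several replies into one 
consumer's buffer while the others sit idle - that's the "load balancing" 
interaction I'm unsure about.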

>>>>>>>Is there a penalty for setting up the prefetch value to something high?
>>>>The main problem for UIMA is memory management, as serialized XmiCas 
>>>>messages can be quite large. 

So this, together with the comment above about prefetch and testing whether it's 
needed, tells me the decision to support prefetch is not yet settled... but that 
we should lean toward avoiding this added complexity if it can be shown it isn't 
needed.  Adam - can you run a test or offer an opinion here?
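
To put rough numbers on the memory concern (all figures hypothetical, just to 
show the shape of the worst case):

public class PrefetchMemorySketch {

    public static void main(String[] args) {
        long avgReplyBytes = 5L * 1024 * 1024;  // a 5 MB serialized XmiCas reply
        int consumers = 4;                      // reply-queue listener threads
        int prefetch = 100;                     // broker-side prefetch per consumer

        // Worst case: every consumer's prefetch buffer is full of not-yet-processed replies.
        long bufferedBytes = avgReplyBytes * consumers * prefetch;
        System.out.println("worst-case buffered replies: "
                + (bufferedBytes / (1024 * 1024)) + " MB");
        // About 2000 MB with these numbers, versus about 20 MB if prefetch were 1 -
        // why a large prefetch can be risky for big messages regardless of throughput.
    }
}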



> Deployment Descriptor should allow setting the number of concurrent listeners 
> for a reply queue
> -----------------------------------------------------------------------------------------------
>
>                 Key: UIMA-1130
>                 URL: https://issues.apache.org/jira/browse/UIMA-1130
>             Project: UIMA
>          Issue Type: Improvement
>          Components: Async Scaleout
>    Affects Versions: 2.2.2
>            Reporter: Adam Lally
>
> The Spring XML allows setting a concurrentConsumers property for a reply 
> queue (either an aggregate's collocated reply queue or a remote reply queue):
>      <property name="concurrentConsumers" value="1"/>
> The deployment descriptor should allow setting this property.  In some 
> deployments where remote delegates are scaled out many times, the bottleneck 
> can become the aggregate deserializing CASes from the reply queue.  If the 
> aggregate is running on a multicore machine it helps to increase the number 
> of threads that can process the reply queue.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
