[ 
https://issues.apache.org/jira/browse/AMQ-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15167170#comment-15167170
 ] 

Dejan Bosanac commented on AMQ-6184:
------------------------------------

Hi [~cshannon], thanks for checking it out. It could be related to the core 
pool size being big before the later fix, although it didn't show up on my 
system during the whole build (I guess it depends on the maximum number of 
threads available on the system). Anyway, the smaller default value makes much 
more sense for the "normal" cases.

Let me know if you spot any more regressions.

Cheers again!

> Improve nio transport scalability
> ---------------------------------
>
>                 Key: AMQ-6184
>                 URL: https://issues.apache.org/jira/browse/AMQ-6184
>             Project: ActiveMQ
>          Issue Type: Improvement
>    Affects Versions: 5.13.0
>            Reporter: Dejan Bosanac
>            Assignee: Dejan Bosanac
>             Fix For: 5.14.0
>
>
> The NIO transport uses an unbounded thread pool executor to handle read 
> operations. Under a large number of connections and load, this can lead to a 
> large number of threads and eventually OOM errors, which is the exact problem 
> the NIO transport is supposed to solve. Some work was done in [AMQ-5480] to 
> make this configurable, but more work is needed to make it robust. 
> Creating a fixed thread pool with a queue in front gives much better results 
> in my tests.
> Additionally, the same thread pool is used for accepting connections 
> ([AMQ-5269]). This can lead to the broker not being able to accept new 
> connections under load. I got much better results when experimenting with 
> implementing the acceptor logic directly and handling it in the same thread 
> (without reintroducing the old problem). 
> With these two improvements in place, the broker can accept and handle 
> connections up to the system limits.
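The fixed pool with a queue in front described above can be sketched roughly as follows. This is a minimal standalone illustration, not the actual ActiveMQ patch; the class and method names here are hypothetical. The key idea is that core size equals max size, so excess read tasks wait in the queue instead of spawning new threads:

```java
import java.util.concurrent.*;

public class FixedNioPool {

    // Hypothetical helper: a bounded pool in the spirit of the fix above.
    // core == max, so the thread count never exceeds 'size'; additional
    // tasks queue up in the LinkedBlockingQueue instead of creating threads.
    static ThreadPoolExecutor newFixedPool(int size) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                size, size,                          // fixed size: core == max
                60L, TimeUnit.SECONDS,               // idle keep-alive
                new LinkedBlockingQueue<Runnable>(), // queue in front of the pool
                new ThreadPoolExecutor.CallerRunsPolicy());
        pool.allowCoreThreadTimeOut(true);           // let the pool shrink when idle
        return pool;
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newFixedPool(4);
        CountDownLatch done = new CountDownLatch(1000);
        // Simulate many read tasks arriving at once: with an unbounded cached
        // pool this could mean ~1000 threads; here at most 4 are ever created.
        for (int i = 0; i < 1000; i++) {
            pool.execute(done::countDown);
        }
        done.await();
        System.out.println("largest pool size: " + pool.getLargestPoolSize());
        pool.shutdown();
    }
}
```

Contrast this with `Executors.newCachedThreadPool()`, whose max size is `Integer.MAX_VALUE` and whose `SynchronousQueue` forces a new thread whenever all existing ones are busy.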



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
