[ https://issues.apache.org/jira/browse/CASSANDRA-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895439#action_12895439 ]
Jonathan Ellis commented on CASSANDRA-1358:
-------------------------------------------

Yes, unbounding those would also be required in the get-rid-of-MDP scenario. In short, the bound never makes things better and can make things worse. (But if you are running into that 4096 bound -- as I predicted earlier :) -- your node is already _severely_ overwhelmed, and the only thing we are discussing is how to mitigate the effects; this is not going to stop it from timing out a ton of requests under those conditions.)

> Clogged RRS/RMS stages can hold up processing of gossip messages and request
> acks
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-1358
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1358
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 0.5
>        Environment: All.
>            Reporter: Mike Malone
>             Fix For: 0.6.5
>
>
> The message deserialization process can become a bottleneck that prevents
> efficient resource utilization, because the executor that manages the
> deserialization process will never grow beyond a single thread. The message
> deserializer executor is instantiated in the MessagingService constructor as
> a JMXEnabledThreadPoolExecutor, which extends
> java.util.concurrent.ThreadPoolExecutor. The thread pool is instantiated with
> a corePoolSize of 1 and a maximumPoolSize of
> Runtime.getRuntime().availableProcessors(). But, according to the
> ThreadPoolExecutor documentation, "using an unbounded queue (for example a
> LinkedBlockingQueue without a predefined capacity) will cause new tasks to be
> queued in cases where all corePoolSize threads are busy. Thus, no more than
> corePoolSize threads will ever be created. (And the value of the
> maximumPoolSize therefore doesn't have any effect.)"
> The message deserializer pool uses a LinkedBlockingQueue, so there will never
> be more than one deserialization thread.
> This issue became a problem in our production cluster when the
> MESSAGE-DESERIALIZER-POOL began to back up on a node that was only lightly
> loaded. We increased the core pool size to 4 and the situation improved, but
> the deserializer pool was still backing up while the machine was not fully
> utilized (less than 100% CPU utilization). This leads me to think that the
> deserializer thread is blocking on some sort of I/O, which seems like it
> shouldn't happen.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
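The ThreadPoolExecutor behavior the issue description quotes can be demonstrated in isolation. The sketch below (class and method names are illustrative, not from the Cassandra codebase; the `java.util.concurrent` classes are the real JDK ones) builds a pool with the same shape as the message deserializer pool -- corePoolSize 1, a larger maximumPoolSize, and an unbounded LinkedBlockingQueue -- submits more blocking tasks than one thread can run, and shows that the pool never grows past its core size:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Submit more tasks than corePoolSize can run concurrently and
    // report how many threads the pool actually created.
    static int demonstratePoolSize() throws InterruptedException {
        // corePoolSize = 1, maximumPoolSize = 4, unbounded queue:
        // excess tasks are queued instead of spawning extra threads,
        // so maximumPoolSize has no effect.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        Thread.sleep(200);             // give the pool time to spin up threads
        int size = pool.getPoolSize(); // stays at corePoolSize = 1
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pool size = " + demonstratePoolSize());
    }
}
```

Swapping the unbounded queue for a bounded one (e.g. `new LinkedBlockingQueue<Runnable>(4096)`) would let the pool grow toward maximumPoolSize once the queue fills, which is the bound discussed in the comment above.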