Thanks Erick. We are currently migrating to 5.3, and it is taking a bit of time. Meanwhile, I looked at the JIRAs from the blog; the stack traces there look a bit different from what I see, so I am not sure whether they are related. Also, as per the stack trace included in my original email, it is the Tomcat thread that is holding the lock, not the recovery thread that is responsible for writing updates to followers. I agree that we might throttle updates, but what is annoying is that we are unable to see the issue in a controlled load test environment.

Just to understand better, what is the Tomcat thread doing in this case?

Thanks

On 10/22/15 12:53 PM, Erick Erickson wrote:
The details are in Tim's blog post and the linked JIRAs.

Unfortunately, the only real solution I know of is to upgrade
to at least Solr 5.2. Meanwhile, throttling the indexing rate
will at least smooth out the issue. Not a great approach, but
it's all there is for 4.6.
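
For illustration, here is a minimal client-side throttling sketch in SolrJ; the ThrottledIndexer class name, batch size, and pause are made-up placeholders to tune against the observed load, not anything from Solr itself:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical helper: sends documents in fixed-size batches and pauses
// between batches so the leader is not flooded with concurrent updates.
public class ThrottledIndexer {
    private static final int BATCH_SIZE = 500;  // documents per update request (tune)
    private static final long PAUSE_MS = 200;   // pause between batches (tune)

    public static void index(SolrServer server, Iterable<SolrInputDocument> docs)
            throws Exception {
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (SolrInputDocument doc : docs) {
            batch.add(doc);
            if (batch.size() >= BATCH_SIZE) {
                server.add(batch);       // one update request per batch
                batch.clear();
                Thread.sleep(PAUSE_MS);  // throttle the indexing rate
            }
        }
        if (!batch.isEmpty()) {
            server.add(batch);           // flush the remainder
        }
    }
}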

Best,
Erick

On Thu, Oct 22, 2015 at 10:48 AM, Rallavagu Kon <rallav...@gmail.com> wrote:
Erick,

Indexing is happening via the Solr cloud server. This thread was from the leader. Some 
followers show symptoms of high CPU during this time. Do you think this is from 
locking? What is the thread that is holding the lock doing? Also, we are unable 
to reproduce this issue in the load test environment. Any clues would help.

On Oct 22, 2015, at 09:50, Erick Erickson <erickerick...@gmail.com> wrote:

Prior to Solr 5.2, there were several inefficiencies when distributing
updates to replicas, see:
https://lucidworks.com/blog/2015/06/10/indexing-performance-solr-5-2-now-twice-fast/.

The symptom was significantly higher CPU utilization on the followers
compared to the leader.

The only real fix is to upgrade to 5.2+, assuming that's your issue.

How are you indexing? Using SolrJ with CloudSolrServer would help if
you're not already using it.
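
For example, a minimal SolrJ sketch along those lines; the ZooKeeper ensemble string, collection name, and field names are placeholders:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudIndexingExample {
    public static void main(String[] args) throws Exception {
        // CloudSolrServer reads cluster state from ZooKeeper, so it can send
        // updates to the right nodes instead of an arbitrary one.
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.setDefaultCollection("mycollection");  // placeholder collection name

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title_s", "example document");

        server.add(doc);
        server.commit();     // or rely on autoCommit in solrconfig.xml
        server.shutdown();
    }
}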

Best,
Erick

On Thu, Oct 22, 2015 at 9:43 AM, Rallavagu <rallav...@gmail.com> wrote:
Solr 4.6.1 cloud

Looking into a thread dump, 4-5 threads are causing the CPU to go very high and causing
issues. These are Tomcat's HTTP threads, and they are holding locks. Can anybody help me
understand what is going on here? I see that incoming connections arrive for updates
and are passed on to StreamingSolrServers and subsequently ConcurrentUpdateSolrServer,
and both hold locks. Thanks.


"http-bio-8080-exec-4394" id=8774 idx=0x988 tid=14548 prio=5 alive,
native_blocked, daemon
    at __lll_lock_wait+34(:0)@0x38caa0e262
    at safepointSyncOnPollAccess+167(safepoint.c:83)@0x7fc29b9c9138
    at trapiNormalHandler+484(traps_posix.c:220)@0x7fc29b9fd745
    at _L_unlock_16+44(:0)@0x38caa0f710
    at
java/util/concurrent/locks/ReentrantLock.lock(ReentrantLock.java:262)[optimized]
    at
org/apache/solr/client/solrj/impl/ConcurrentUpdateSolrServer.blockUntilFinished(ConcurrentUpdateSolrServer.java:391)[inlined]
    at
org/apache/solr/update/StreamingSolrServers.blockUntilFinished(StreamingSolrServers.java:98)[inlined]
    at
org/apache/solr/update/SolrCmdDistributor.finish(SolrCmdDistributor.java:61)[inlined]
    at
org/apache/solr/update/processor/DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:501)[inlined]
    at
org/apache/solr/update/processor/DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1278)[optimized]
    ^-- Holding lock:
org/apache/solr/update/StreamingSolrServers$1@0x496cf6e50[biased lock]
    ^-- Holding lock:
org/apache/solr/update/StreamingSolrServers@0x49d32adc8[biased lock]
    at
org/apache/solr/handler/ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:83)[optimized]
    at
org/apache/solr/handler/RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)[optimized]
    at org/apache/solr/core/SolrCore.execute(SolrCore.java:1859)[optimized]
    at
org/apache/solr/servlet/SolrDispatchFilter.execute(SolrDispatchFilter.java:721)[inlined]
    at
org/apache/solr/servlet/SolrDispatchFilter.doFilter(SolrDispatchFilter.java:417)[inlined]
    at
org/apache/solr/servlet/SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)[optimized]
    at
org/apache/catalina/core/ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)[inlined]
    at
org/apache/catalina/core/ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)[optimized]
    at
org/apache/catalina/core/StandardWrapperValve.invoke(StandardWrapperValve.java:222)[optimized]
    at
org/apache/catalina/core/StandardContextValve.invoke(StandardContextValve.java:123)[optimized]
    at
org/apache/catalina/core/StandardHostValve.invoke(StandardHostValve.java:171)[optimized]
    at
org/apache/catalina/valves/ErrorReportValve.invoke(ErrorReportValve.java:99)[optimized]
    at
org/apache/catalina/valves/AccessLogValve.invoke(AccessLogValve.java:953)[optimized]
    at
org/apache/catalina/core/StandardEngineValve.invoke(StandardEngineValve.java:118)[optimized]
    at
org/apache/catalina/connector/CoyoteAdapter.service(CoyoteAdapter.java:408)[optimized]
    at
org/apache/coyote/http11/AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)[optimized]
    at
org/apache/coyote/AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)[optimized]
    at
org/apache/tomcat/util/net/JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)[optimized]
    ^-- Holding lock:
org/apache/tomcat/util/net/SocketWrapper@0x496e58810[thin lock]
    at
java/util/concurrent/ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)[inlined]
    at
java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)[optimized]
    at java/lang/Thread.run(Thread.java:682)[optimized]
    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
