Re: Committed before 500
Since you are getting these failures, the 90 second timeout is not "good enough". Try increasing it.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)

On Feb 20, 2015, at 5:22 AM, NareshJakher wrote:

> Hi Shawn,
>
> I do not want to increase the timeout, as these errors are very few; the
> current timeout of 90 seconds is good enough. Is there a way to find out why
> Solr is timing out at times? Could it be that Solr is busy with other
> activities such as re-indexing, commits, etc.?
>
> Additionally, I found that some non-leader nodes move to "recovering" or
> "recovery failed" after these timeout errors. I am wondering whether these
> are related to a performance issue and whether Solr commits need to be
> controlled.
>
> Regards,
> Naresh Jakher
>
> From: Shawn Heisey-2 [via Lucene] [mailto:ml-node+s472066n4187382...@n3.nabble.com]
> Sent: Thursday, February 19, 2015 8:12 PM
> To: Jakher, Naresh
> Subject: Re: Committed before 500
>
> On 2/19/2015 6:30 AM, NareshJakher wrote:
>
>> I am using SolrCloud with 3 nodes; at times the following error is observed
>> in the logs during delete operations. Is it a performance issue? What can
>> be done to resolve it?
>>
>> "Committed before 500 {msg=Software caused connection abort: socket write
>> error,trace=org.eclipse.jetty.io.EofException"
>>
>> I searched old topics but couldn't find anything concrete related to
>> SolrCloud. I would appreciate any help, as I am relatively new to Solr.
>
> A Jetty EofException indicates that one specific thing is happening: the TCP
> connection from the client was severed before Solr responded to the request.
> Usually this happens because the client has been configured with an absolute
> timeout or an inactivity timeout, and the timeout was reached.
>
> Configuring timeouts so that you can be sure clients don't get stuck is a
> reasonable idea, but any configured timeouts should be VERY long. You'd want
> to use a value like five minutes, rather than 10, 30, or 60 seconds.
>
> The timeouts MIGHT be in the HttpShardHandler config that Solr and SolrCloud
> use for distributed searches, and they also might be in
> operating-system-level config.
>
> https://wiki.apache.org/solr/SolrConfigXml?highlight=%28HttpShardHandler%29#Configuration_of_Shard_Handlers_for_Distributed_searches
>
> Thanks,
> Shawn
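For a client that talks to Solr over HTTP, Walter's advice translates to raising the client-side timeout. A minimal sketch with curl; the URL and collection name are placeholders, and the command is only printed, since no running Solr is assumed here:

```shell
# Sketch: what raising the client-side timeout looks like for an HTTP client.
# --max-time is curl's absolute timeout for the whole request, in seconds;
# 300 s (five minutes) follows the advice above. A value shorter than Solr's
# response time is what severs the connection and surfaces as a Jetty
# EofException on the server side.
SOLR_URL="http://localhost:8983/solr/collection1/select?q=*:*&wt=json"
TIMEOUT_SECS=$((5 * 60))   # five minutes

# Print the command rather than running it, since no Solr is assumed here.
echo "curl --max-time $TIMEOUT_SECS --connect-timeout 10 '$SOLR_URL'"
```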
RE: Committed before 500
Hi Shawn,

I do not want to increase the timeout, as these errors are very few; the current timeout of 90 seconds is good enough. Is there a way to find out why Solr is timing out at times? Could it be that Solr is busy with other activities such as re-indexing, commits, etc.?

Additionally, I found that some non-leader nodes move to "recovering" or "recovery failed" after these timeout errors. I am wondering whether these are related to a performance issue and whether Solr commits need to be controlled.

Regards,
Naresh Jakher

From: Shawn Heisey-2 [via Lucene] [mailto:ml-node+s472066n4187382...@n3.nabble.com]
Sent: Thursday, February 19, 2015 8:12 PM
To: Jakher, Naresh
Subject: Re: Committed before 500

On 2/19/2015 6:30 AM, NareshJakher wrote:
> I am using SolrCloud with 3 nodes; at times the following error is observed
> in the logs during delete operations. Is it a performance issue? What can be
> done to resolve it?
>
> "Committed before 500 {msg=Software caused connection abort: socket write
> error,trace=org.eclipse.jetty.io.EofException"
>
> I searched old topics but couldn't find anything concrete related to
> SolrCloud. I would appreciate any help, as I am relatively new to Solr.

A Jetty EofException indicates that one specific thing is happening: the TCP connection from the client was severed before Solr responded to the request. Usually this happens because the client has been configured with an absolute timeout or an inactivity timeout, and the timeout was reached.

Configuring timeouts so that you can be sure clients don't get stuck is a reasonable idea, but any configured timeouts should be VERY long. You'd want to use a value like five minutes, rather than 10, 30, or 60 seconds.

The timeouts MIGHT be in the HttpShardHandler config that Solr and SolrCloud use for distributed searches, and they also might be in operating-system-level config.

https://wiki.apache.org/solr/SolrConfigXml?highlight=%28HttpShardHandler%29#Configuration_of_Shard_Handlers_for_Distributed_searches

Thanks,
Shawn
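Naresh's hunch that commits coincide with the timeouts can be checked from the logs by pulling both event types out and comparing timestamps. A minimal sketch; the log lines below are fabricated for illustration, so point the grep at your real solr.log:

```shell
# Fabricated sample log standing in for a real solr.log.
cat > /tmp/solr_sample.log <<'EOF'
2015-02-19 06:29:58 INFO  DirectUpdateHandler2 start commit{flags=0}
2015-02-19 06:30:01 ERROR SolrDispatchFilter null:org.eclipse.jetty.io.EofException
2015-02-19 06:31:40 INFO  DirectUpdateHandler2 end_commit_flush
EOF

# Pull commit activity and EofExceptions together; if the timestamps cluster,
# commits (or other heavy work) may be pushing responses past the 90 s timeout.
grep -E 'commit|EofException' /tmp/solr_sample.log
```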
Re: Committed before 500
On 2/19/2015 6:30 AM, NareshJakher wrote:
> I am using SolrCloud with 3 nodes; at times the following error is observed
> in the logs during delete operations. Is it a performance issue? What can be
> done to resolve it?
>
> "Committed before 500 {msg=Software caused connection abort: socket write
> error,trace=org.eclipse.jetty.io.EofException"
>
> I searched old topics but couldn't find anything concrete related to
> SolrCloud. I would appreciate any help, as I am relatively new to Solr.

A Jetty EofException indicates that one specific thing is happening: the TCP connection from the client was severed before Solr responded to the request. Usually this happens because the client has been configured with an absolute timeout or an inactivity timeout, and the timeout was reached.

Configuring timeouts so that you can be sure clients don't get stuck is a reasonable idea, but any configured timeouts should be VERY long. You'd want to use a value like five minutes, rather than 10, 30, or 60 seconds.

The timeouts MIGHT be in the HttpShardHandler config that Solr and SolrCloud use for distributed searches, and they also might be in operating-system-level config.

https://wiki.apache.org/solr/SolrConfigXml?highlight=%28HttpShardHandler%29#Configuration_of_Shard_Handlers_for_Distributed_searches

Thanks,
Shawn
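Shawn's pointer to the shard handler config refers to solrconfig.xml. A hedged sketch of what a timeout override there can look like; the five-minute values follow his suggestion and are not Solr defaults:

```xml
<!-- Sketch of shard handler timeout overrides in solrconfig.xml.
     socketTimeout/connTimeout are in milliseconds; 300000 ms = five minutes,
     per the advice above rather than any Solr default. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <shardHandlerFactory class="HttpShardHandlerFactory">
    <int name="socketTimeout">300000</int>
    <int name="connTimeout">300000</int>
  </shardHandlerFactory>
</requestHandler>
```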