SolrCloud servers restart
Hi, We have a SolrCloud configuration with the following topology: 3 servers, each running both ZooKeeper and Solr search. We would like to do a maintenance (rolling) restart of the servers. Are there any specific guidelines on how to do the restart? Thank you Regards, Moshe Recanati CTO Mobile + 972-52-6194481 Skype: recanati [KMS2]<http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-121000184.html> More at: www.kmslh.com<http://www.kmslh.com/> | LinkedIn<http://www.linkedin.com/company/kms-lighthouse> | FB<https://www.facebook.com/pages/KMS-lighthouse/123774257810917>
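No answer to this message appears in the archive. A common approach for a combined ZooKeeper+Solr cluster is a rolling restart: restart one node at a time and wait for it to rejoin the cluster (reappear in ZooKeeper's live_nodes) before touching the next. Below is a minimal sketch of that control loop; the node names are hypothetical and the actual restart command and liveness check (e.g. polling /solr/admin/collections?action=CLUSTERSTATUS) are injected as functions.

```python
import time

def rolling_restart(nodes, restart_fn, is_live, poll_seconds=5):
    """Restart one node at a time; only move on once the node is live again.

    restart_fn(node) -> issues the restart (e.g. over ssh: 'bin/solr restart ...')
    is_live(node)    -> True once the node reappears in ZooKeeper's live_nodes
    """
    for node in nodes:
        restart_fn(node)
        while not is_live(node):
            time.sleep(poll_seconds)

# Demonstration with stubbed-out functions (a real is_live would query
# the CLUSTERSTATUS endpoint and check the live_nodes list):
restarted = []
rolling_restart(
    ["server1:8983", "server2:8983", "server3:8983"],  # hypothetical hosts
    restart_fn=restarted.append,
    is_live=lambda node: True,
    poll_seconds=0,
)
print(restarted)  # nodes are restarted strictly one at a time, in order
```

The key property is serialization: at most one node is down at any moment, so every shard keeps at least one live replica throughout the maintenance window.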
RE: SolrCloud required ports
Hi Jan, Thank you. To summarize, we need to open these ports within the cluster: 8983, 2181, 2888, 3888. Regards, Moshe Recanati CTO -Original Message- From: Jan Høydahl Sent: Monday, December 3, 2018 12:43 PM To: solr-user Subject: Re: SolrCloud required ports Hi This depends on your exact configuration, so you should ask the engineers who deployed ZK and Solr, not this list. If the default Solr port is used, you'd need at least 8983 open between servers and from the app server to the cluster. If the default ZK port is used, you'd need port 2181 open between all three servers but not externally (unless you use a client that needs to talk to ZK). ZooKeeper also needs to communicate internally in the quorum on two other ports, which are typically 2888 and 3888 but could be something else depending on your exact configs. These never need to be open outside the cluster. -- Jan Høydahl, search solution architect Cominvent AS - www.cominvent.com > 3. des. 2018 kl. 09:22 skrev Moshe Recanati | KMS : > > Hi, > We're currently running SolrCloud with 3 servers: 3 ZK and 3 Search Engines. > Each one on each machine. > Our security team would like to open only the required ports between the > servers. > Please let me know which ports we need to open between the servers?
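Once the firewall rules for the four ports above are in place, a quick TCP connect test from each server verifies they actually took effect. A minimal sketch (host names are placeholders):

```python
import socket

# Solr HTTP, ZK client, ZK quorum peer, ZK leader election (default ports)
SOLR_AND_ZK_PORTS = [8983, 2181, 2888, 3888]

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from each server against its peers, e.g.:
# for host in ("server1", "server2", "server3"):
#     for port in SOLR_AND_ZK_PORTS:
#         print(host, port, port_open(host, port))
```

Note that 2888/3888 are only bound on whichever nodes are quorum members, and 2888 is only listened on by the current leader, so a closed 2888 on a follower is expected.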
SolrCloud required ports
Hi, We're currently running SolrCloud with 3 servers: 3 ZooKeeper nodes and 3 search engines, one of each on each machine. Our security team would like to open only the required ports between the servers. Please let me know which ports we need to open between the servers. Thank you Regards, Moshe Recanati CTO
Re: Error while indexing Thai core with SolrCloud
Thank you. Will check all options and let you know. From: Alexandre Rafalovitch Sent: Sunday, October 21, 2018 8:09:34 PM To: solr-user Subject: Re: Error while indexing Thai core with SolrCloud Ok, That may have been a bit too much :-) However, it was useful. There seem to be several possible avenues: 1) You are using SolrJ and your SolrJ version is not the same as the version of the Solr server. There were a bunch of things that could trigger this, especially in combination with Unicode, but also - at some point - SolrJ sending javabin format to an XML endpoint. So, I would check that first. I know you said that it works with the non-cluster setup, but are you definitely using the same client/configuration in both approaches? Especially because of the following line below: DEBUG - 2018-10-19 02:15:23.384; org.apache.solr.update.processor.LogUpdateProcessor; PRE_UPDATE FINISH {{params(update.contentType=application/xml),defaults(wt=javabin=2)}} 2) If that did not help, I would focus on whether any document sent to the Thai core causes the issue, or just a specific one. If you are doing the tests, make sure to commit each time, because of (3) below: 3) It is possible that there is a Unicode-related bug in the inter-cluster communication. Possible, but I could not find the specific Jira. Still, if you cannot reproduce it on a later version of Solr, that is a likely scenario. 4) If all else fails, use something like Wireshark and capture the network-level traffic during this error. This will show you exactly what is being passed around (https://www.wireshark.org/). This is using a power-hammer on a nail, but - if you read this far - I suspect you are out of other options. Let us know when you resolve the issue, for those who may see it later. Regards, Alex.
On Sun, 21 Oct 2018 at 12:16, Moshe Recanati | KMS wrote: > > Hi, > > Thank you. > > Full stacktrace below
Re: Error while indexing Thai core with SolrCloud
Hi, Thank you. Full stacktrace below "core_node_name":"172.19.218.201:8082_solr_core_th"}DEBUG - 2018-10-19 02:13:20.343; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 356,1 replyHeader:: 356,17179869988,0 request:: '/overseer/queue-work/qn-,#7ba2020226f7065726174696f6e223a227374617465222ca2020227374617465223a22616374697665222ca202022626173655f75726c223a22687474703a2f2f3137322e31392e3231382e3230313a383038322f736f6c7ca202022636f7265223a22636f72655f7468222ca202022726f6c6573223a6e756c6c2ca2020226e6f64655f6e616d65223a223137322e31392e3231382e3230313a383038325f736f6c7ca2020227368617264223a22736861726431222ca202022636f6c6c656374696f6e223a22636f72655f7468222ca2020226e756d536861726473223a2231222ca202022636f72655f6e6f64655f6e616d65223a223137322e31392e3231382e3230313a383038325f736f6c725f636f72655f7468227d,v{s{31,s{'world,'anyone}}},2 response:: '/overseer/queue-work/qn-000103 DEBUG - 2018-10-19 02:13:20.344; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 357,8 replyHeader:: 357,17179869988,0 request:: '/overseer/queue,F response:: v{'qn-000103} DEBUG - 2018-10-19 02:13:20.345; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 358,4 replyHeader:: 358,17179869988,0 request:: '/overseer/queue/qn-000103,F response:: 
#7ba2020226f7065726174696f6e223a227374617465222ca2020227374617465223a22616374697665222ca202022626173655f75726c223a22687474703a2f2f3137322e31392e3231382e3230313a383038322f736f6c7ca202022636f7265223a22636f72655f7468222ca202022726f6c6573223a6e756c6c2ca2020226e6f64655f6e616d65223a223137322e31392e3231382e3230313a383038325f736f6c7ca2020227368617264223a22736861726431222ca202022636f6c6c656374696f6e223a22636f72655f7468222ca2020226e756d536861726473223a2231222ca202022636f72655f6e6f64655f6e616d65223a223137322e31392e3231382e3230313a383038325f736f6c725f636f72655f7468227d,s{17179869987,17179869987,153989329,153989329,0,0,0,0,290,0,17179869987} DEBUG - 2018-10-19 02:13:20.348; org.apache.zookeeper.ClientCnxn$SendThread; Got notification sessionid:0x200b5a04a770005DEBUG - 2018-10-19 02:13:20.348; org.apache.zookeeper.ClientCnxn$SendThread; Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/overseer/queue for sessionid 0x200b5a04a770005INFO - 2018-10-19 02:13:20.348; org.apache.solr.cloud.DistributedQueue$LatchChildWatcher; LatchChildWatcher fired on path: /overseer/queue state: SyncConnected type NodeChildrenChangedDEBUG - 2018-10-19 02:13:20.348; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 359,2 replyHeader:: 359,17179869989,0 request:: '/overseer/queue/qn-000103,-1 response:: nullDEBUG - 2018-10-19 02:13:20.349; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 360,8 replyHeader:: 360,17179869989,0 request:: '/overseer/queue,T response:: v{} DEBUG - 2018-10-19 02:13:20.451; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 361,8 replyHeader:: 361,17179869989,0 request:: '/overseer/queue,T response:: v{} DEBUG - 2018-10-19 02:13:20.454; 
org.apache.zookeeper.ClientCnxn$SendThread; Got notification sessionid:0x200b5a04a770005DEBUG - 2018-10-19 02:13:20.455; org.apache.zookeeper.ClientCnxn$SendThread; Got WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json for sessionid 0x200b5a04a770005INFO - 2018-10-19 02:13:20.455; org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 3)DEBUG - 2018-10-19 02:13:20.456; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null finished:false header:: 362,5 replyHeader:: 362,17179869990,0 request::
Re: Error while indexing Thai core with SolrCloud
Hi Alexandre, Thank you. How does this explain that the issue exists only with SolrCloud and not standalone? Moshe From: Alexandre Rafalovitch Sent: Sunday, October 21, 2018 5:18:24 PM To: solr-user Subject: Re: Error while indexing Thai core with SolrCloud I would check if the byte-order mark is the cause: https://en.wikipedia.org/wiki/Byte_order_mark The error message does not seem to be a perfect match to this issue, but a good thing to check anyway. That symbol (right at the file start) is usually invisible and can trip Java XML parsers for some reason. So I would check which editor on your platform understands the byte-order mark and/or try to strip it. If that does not help, I would run the file through an XML validator to see if there are maybe invisible/unexpected characters elsewhere in the file. Regards, Alex. On Sun, 21 Oct 2018 at 09:55, Moshe Recanati | KMS wrote: > > Hi, > > We have a specific exception that happens only on the Thai core and only once we're > using SolrCloud. > > The same indexing activity runs successfully on the EN core with > SolrCloud, or on the Thai core with the standalone configuration. > > > We're running on Linux with Solr 4.6 > > and with -Dfile.encoding=UTF-8 in all scenarios. > > > This is the exception: > > com.ctc.wstx.exc.WstxUnexpectedCharException: Illegal character ((CTRL-CHAR, > code 26)) > and > > org.apache.solr.common.SolrException: Invalid UTF-8 middle byte 0xe0 (at char > #1, byte #-1) > at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:176) > at > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74) > at > > > Do you know what the root cause is and how to overcome it?
> > As I mentioned, this is not happening on standalone or on the EN core in any > scenario. > > > Thank you, > > Moshe
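Alexandre's byte-order-mark suspicion is easy to check and fix programmatically before a document ever reaches the XML update handler. A minimal sketch of stripping a leading UTF-8 BOM from the raw bytes:

```python
UTF8_BOM = b"\xef\xbb\xbf"

def strip_utf8_bom(data: bytes) -> bytes:
    """Remove a leading UTF-8 byte-order mark, if present."""
    return data[len(UTF8_BOM):] if data.startswith(UTF8_BOM) else data

print(strip_utf8_bom(b"\xef\xbb\xbf<add>...</add>"))  # b'<add>...</add>'
```

The BOM is invisible in most editors, which is why Alexandre suggests checking with one that displays it; doing the check in code sidesteps the editor question entirely.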
Error while indexing Thai core with SolrCloud
Hi, We have a specific exception that happens only on the Thai core and only once we're using SolrCloud. The same indexing activity runs successfully on the EN core with SolrCloud, or on the Thai core with the standalone configuration. We're running on Linux with Solr 4.6 and with -Dfile.encoding=UTF-8 in all scenarios. This is the exception: com.ctc.wstx.exc.WstxUnexpectedCharException: Illegal character ((CTRL-CHAR, code 26)) and org.apache.solr.common.SolrException: Invalid UTF-8 middle byte 0xe0 (at char #1, byte #-1) at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:176) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74) Do you know what the root cause is and how to overcome it? As I mentioned, this is not happening on standalone or on the EN core in any scenario. Thank you, Moshe
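For context on the first exception: CTRL-CHAR code 26 is 0x1A (the SUB character), which is illegal in XML 1.0 regardless of encoding, so the Woodstox parser rejects the update request outright. A common workaround, not discussed in the thread itself, is to strip the XML-invalid control range from field values on the client side before building the update XML; a minimal sketch:

```python
import re

# XML 1.0 forbids all C0 control characters except tab (0x09),
# line feed (0x0A) and carriage return (0x0D).
_INVALID_XML_CHARS = re.compile("[\x00-\x08\x0b\x0c\x0e-\x1f]")

def clean_for_xml(text: str) -> str:
    """Drop characters that an XML 1.0 parser (e.g. Woodstox) will reject."""
    return _INVALID_XML_CHARS.sub("", text)

print(repr(clean_for_xml("thai\x1atext")))  # 'thaitext'
```

This addresses only the first error; the "Invalid UTF-8 middle byte" error points to a separate encoding mismatch somewhere between client and server.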
RE: SolrCloud indexing
Hi Shawn, Thank you. I just need to run full indexing due to massive changes in the documents. Regards, Moshe Recanati CTO -Original Message- From: Shawn Heisey <apa...@elyograg.org> Sent: Sunday, April 15, 2018 8:23 PM To: solr-user@lucene.apache.org Subject: Re: SolrCloud indexing On 4/15/2018 1:22 AM, Moshe Recanati | KMS wrote: > > We're using SolrCloud as part of our product solution for High > Availability. > > During upgrade of a version we need to run full index build on our > Solr data. > What are you upgrading? If it's Solr, you should pause/stop indexing while you do the upgrade. You'll have to stop Solr processes to upgrade them, and even if you're using an external load balancer, it takes a little bit of time for failover to occur. It would be up to your indexing software to handle errors in that situation. There is nothing that Solr can do about that. If your indexing software correctly detects and handles errors, then you might be able to restart Solr instances without a problem. > I would like to know if as part of SolrCloud we can manage it and make > sure that items are available during the index so only once specific > item is indexed it's changing with no affect on end-user. > I can't decipher exactly what you're asking here. Thanks, Shawn
RE: SolrCloud indexing
Hi Erick, Thank you very much. I'll check it out. Regards, Moshe Recanati CTO -Original Message- From: Erick Erickson <erickerick...@gmail.com> Sent: Monday, April 16, 2018 7:01 AM To: solr-user <solr-user@lucene.apache.org> Subject: Re: SolrCloud indexing I think you're saying you want to prove out the upgrade in some kind of test setup, then switch live traffic. What's commonly used for that is collection aliasing. You just create a new collection, populate it and check it out. When you're satisfied that it's doing what you want, use the Collections API CREATEALIAS command to seamlessly switch. Here's the sequence: old_collection is active; create new_collection, index to it and make sure it is doing what you want; CREATEALIAS pointing old_collection at new_collection. The advantage here is that you have as long as you want to verify your new collection is fine. You can also switch back if you need to. The disadvantage is that you effectively need extra hardware. You can mitigate this somewhat by bringing up your new collection with a limited number of replicas; after switching, you delete replicas from the old collection and add them to the new one. Be very very careful here, though, that your collection commands (ADDREPLICA/DELETEREPLICA) don't accidentally operate on the new collection! Best, Erick On Sun, Apr 15, 2018 at 10:22 AM, Shawn Heisey <apa...@elyograg.org> wrote: > On 4/15/2018 1:22 AM, Moshe Recanati | KMS wrote: >> >> >> We're using SolrCloud as part of our product solution for High >> Availability. >> >> During upgrade of a version we need to run full index build on our >> Solr data. >> > > What are you upgrading? If it's Solr, you should pause/stop indexing > while you do the upgrade. You'll have to stop Solr processes to > upgrade them, and even if you're using an external load balancer, it > takes a little bit of time for failover to occur.
> > It would be up to your indexing software to handle errors in that situation. > There is nothing that Solr can do about that. If your indexing > software correctly detects and handles errors, then you might be able > to restart Solr instances without a problem. > >> I would like to know if as part of SolrCloud we can manage it and >> make sure that items are available during the index so only once >> specific item is indexed it’s changing with no affect on end-user. >> > > I can't decipher exactly what you're asking here. > > Thanks, > Shawn >
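Erick's aliasing sequence maps onto plain Collections API HTTP calls. A sketch that builds the request URLs (the base URL, alias and collection names are placeholders; the actual requests would go over HTTP, e.g. with curl or urllib):

```python
from urllib.parse import urlencode

def collections_api_url(base, action, **params):
    """Build a Solr Collections API URL for the given action."""
    query = urlencode({"action": action, **params})
    return f"{base}/admin/collections?{query}"

# Step 1: create and fully index new_collection, then verify it.
# Step 2: atomically repoint the alias that the application queries:
url = collections_api_url(
    "http://server1:8983/solr", "CREATEALIAS",
    name="products", collections="new_collection",
)
print(url)
# http://server1:8983/solr/admin/collections?action=CREATEALIAS&name=products&collections=new_collection
```

Because the alias switch is a single ZooKeeper update, queries against the alias see the old collection one moment and the new one the next, with no window of unavailability; re-running CREATEALIAS pointing back at old_collection is the rollback.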
SolrCloud indexing
Hi, We're using SolrCloud as part of our product solution for high availability. During an upgrade of a version we need to run a full index build on our Solr data. I would like to know whether, as part of SolrCloud, we can manage it and make sure that items remain available during indexing, so that a specific item is swapped only once it has been indexed, with no effect on the end-user. If there is such an option, please send me some guidelines on how to do it. Thank you in advance, Regards, Moshe Recanati CTO
Languages dialects
Hi, We have a request to support the following dialects using Solr. Let me know whether these dialects are supported or we need to implement something in our code. 1. Chinese - Mandarin 2. French - Canadian 3. Portuguese - European 4. Spanish - European Thank you, Regards, Moshe Recanati CTO
Solr in different locations
Hi, We would like to have a system that will run in different regions with the same Solr index. These are the regions: 1. Europe 2. Singapore 3. US I would like to know what the best practice is to implement it. If it is by implementing SolrCloud, please share some basic guidelines on how to enable and configure it. Thank you, Regards, Moshe Recanati SVP Engineering
RE: Error while reading index
Hi, I uploaded the log to drive. https://drive.google.com/file/d/0B0GR0M-lL5QHX1B2a2NZZXh3a1E/view?usp=sharing Regards, Moshe Recanati SVP Engineering From: Moshe Recanati [mailto:mos...@kmslh.com] Sent: Wednesday, April 01, 2015 5:22 PM To: solr-user@lucene.apache.org Subject: Error while reading index Hi, We're running a production environment with Solr 4.7.1, master and slave with replication every 1 minute. During regular activity and an index delta build we got the following error: ERROR - 2015-03-30 04:06:12.318; java.lang.RuntimeException: [was class java.net.SocketException] Connection reset at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18) at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731) After an additional 2 minutes we got the following error: ERROR - 2015-03-30 04:07:39.875; Unable to get file names for indexCommit generation: 638 java.io.FileNotFoundException: _tu.fdt at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:261) at org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:178) And since then Solr didn't recover until we did a full rebuild of all documents. Detailed log attached. Let me know if you're familiar with such an issue, and what can cause an issue that prevents recovery and requires rebuilding the index. This is a major issue for us.
Thank you in advance, Regards, Moshe Recanati SVP Engineering
Solr logs encoding
Hi, I have a weird situation. Since yesterday's restart I've had an issue with log encoding. My log looks like: DEBUG - 2015-02-27 10:47:01.432; [0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xc7]8[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0x89][0x5][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0x97][0x4][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xa4][0x6][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xfc]b[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xfc]F[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xfb]:[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]a[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]v[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]Y[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]Y[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]V[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]H[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]U[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]\[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xe4][0x96][0x1][0x4][0xfc][0xff][0xff][0xff][0xf][0x4]`[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]j[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]l[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]j[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]][0x4][0xfc][0xff][0xff][0xff][0xf][0x4]X[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]e[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xdd][0xba][0x1][0x4][0xfc][0xff][0xff][0xff][0xf][0x4]h[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xb5][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xee][0x3][0x4][0xfc][0xff][0xff][0xff][0xf][0x4]\[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xe2][0x1d][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xbb][0x1a][0x4][0xfc][0xff][0xff][0xff][0xf][0x4]c[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xd2]%[0x4][0xfc][0xff][0xff][0xff][0xf][0x4]b[0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0x92][0x1a][0x4][0xfc][0xff][0xff][0xff][0xf][0x4][0xa3][0x4][0x4][0xfc][0xff][0xff][0xff] Anyone familiar with this? How to fix it?
Regards, Moshe Recanati SVP Engineering
RE: Stop solr query
Hi Shawn, We checked this option and it didn't solve our problem. We're using https://github.com/healthonnet/hon-lucene-synonyms for query-based synonyms. When running a query with a high number of words that have a high number of synonyms, the query gets stuck and Solr's memory is exhausted. We tried the parameter you suggested, but it didn't stop the query or solve the issue. Please let me know if there is another option to tackle it. Today it might be a high number of words that causes the issue, and tomorrow it might be something else going wrong; we can't rely only on checking user input. Thank you in advance. Regards, Moshe Recanati SVP Engineering -Original Message- From: Shawn Heisey [mailto:apa...@elyograg.org] Sent: Monday, February 23, 2015 5:49 PM To: solr-user@lucene.apache.org Subject: Re: Stop solr query On 2/23/2015 7:23 AM, Moshe Recanati wrote: Recently there were some scenarios in which queries that users sent to Solr got stuck and increased our Solr heap. Is there any option to kill or time out a query that wasn't returned from Solr, by external command? The best thing you can do is examine all user input and stop such queries before they execute, especially if they are the kind of query that will cause your heap to grow out of control. The timeAllowed parameter can abort a query that takes too long in certain phases of the query. In recent months, Solr has been modified so that timeAllowed will take effect during more query phases. It is not a perfect solution, but it can be better than nothing. http://wiki.apache.org/solr/CommonQueryParameters#timeAllowed Be aware that sometimes legitimate queries will be slow, and using timeAllowed may cause those queries to fail. Thanks, Shawn
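For reference, timeAllowed is just a per-request parameter in milliseconds; when the budget expires during document collection, Solr returns what it has and sets partialResults=true in the response header rather than failing outright. A sketch of attaching the budget to a query and detecting the cutoff (base URL and field name are placeholders):

```python
from urllib.parse import urlencode

def query_with_budget(base, q, time_allowed_ms):
    """Build a select URL with a per-request execution budget in milliseconds."""
    params = {"q": q, "timeAllowed": time_allowed_ms, "wt": "json"}
    return f"{base}/select?{urlencode(params)}"

def hit_time_limit(response: dict) -> bool:
    """True if Solr flagged the response as partial because timeAllowed expired."""
    return bool(response.get("responseHeader", {}).get("partialResults", False))

print(query_with_budget("http://localhost:8983/solr/core_en", "title:report", 2000))
```

As Shawn and Mikhail note in this thread, on 4.x the check only runs during collection, so a query stuck in query expansion (the hon-lucene-synonyms case here) is never interrupted; this helper only bounds the phases Solr actually checks.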
RE: Stop solr query
Hi Mikhail, We're using 4.7.1. This means I can't stop the search. I think this is a mandatory feature. Regards, Moshe Recanati SVP Engineering -Original Message- From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] Sent: Wednesday, February 25, 2015 3:42 PM To: solr-user Subject: Re: Stop solr query Moshe, if you take a thread dump while a particular query is stuck (via jstack or in the SolrAdmin tab), it may explain where exactly it's stalled; just check the longest stack trace. FWIW, in 4.x timeAllowed is checked only while documents are collected, and in 5 it's also checked during query expansion (see http://lucidworks.com/blog/solr-5-0/ - https://issues.apache.org/jira/browse/SOLR-5986 now cuts off requests during the query-expansion stage as well). However, I'm not sure a long query expansion takes place with hon-synonyms. On Wed, Feb 25, 2015 at 3:21 PM, Moshe Recanati mos...@kmslh.com wrote: Hi Shawn, We checked this option and it didn't solve our problem. We're using https://github.com/healthonnet/hon-lucene-synonyms for query-based synonyms. When running a query with a high number of words that have a high number of synonyms, the query gets stuck and Solr's memory is exhausted. We tried the parameter you suggested, but it didn't stop the query or solve the issue. Please let me know if there is another option to tackle it. Today it might be a high number of words that causes the issue, and tomorrow it might be something else going wrong; we can't rely only on checking user input. Thank you in advance.
Regards, Moshe Recanati SVP Engineering -- Sincerely yours Mikhail Khludnev Principal Engineer, Grid Dynamics http://www.griddynamics.com mkhlud...@griddynamics.com
RE: FW: NRTCachingDirectory threads stuck
Thank you. Regards, Moshe Recanati SVP Engineering -Original Message- From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] Sent: Sunday, February 22, 2015 6:16 PM To: solr-user Subject: Re: FW: NRTCachingDirectory threads stuck On Sun, Feb 22, 2015 at 1:54 PM, Moshe Recanati mos...@kmslh.com wrote: Hi Mikhail, Thank you. 1. Regarding jetty threads - how can I reduce them? https://wiki.eclipse.org/Jetty/Howto/High_Load#Thread_Pool Note, you'll get 503 or something when the pool size is exceeded. 2. Is it related to the fact we're running Solr 4.0 in parallel on this machine? Are their index dirs different? Nevertheless, running something else on the same machine leads to resource contention. What does `top` say? Thank you Regards, Moshe Recanati SVP Engineering -Original Message- From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] Sent: Sunday, February 22, 2015 11:18 AM To: solr-user Subject: Re: FW: NRTCachingDirectory threads stuck Hello, I checked 20020.tdump. From the update perspective, it's OK; I see the single thread committed and awaiting the opening of a searcher. There are a few very bad signs: - there are many threads executing search requests in parallel, roughly a hundred of them. This is a dead end. Consider limiting the number of jetty threads; start from the number of cores available; - the heap is full, which is a no-go for Java. Either increase it, reduce the load, or make sure there is no leak; - I see many threads executing Luke handler code; it might be a really wrong setup, or the regular approach for Solr replication. I'm not sure here. On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote: Hi, I saw the message rejected because of the attachment.
I uploaded the data to drive https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=sharing Moshe *From:* Moshe Recanati [mailto:mos...@kmslh.com] *Sent:* Sunday, February 22, 2015 8:37 AM *To:* solr-user@lucene.apache.org *Subject:* RE: NRTCachingDirectory threads stuck *From:* Moshe Recanati *Sent:* Sunday, February 22, 2015 8:34 AM *To:* solr-user@lucene.apache.org *Subject:* NRTCachingDirectory threads stuck Hi, We're running two Solr servers on the same machine. One is Solr 4.0 and the second is Solr 4.7.1. In the Solr 4.7.1 we see very strange behavior: while indexing documents we get a memory spike from 1 GB to 4 GB in a couple of minutes and a huge number of threads stuck in NRTCachingDirectory.openInput. Thread dump and GC log attached. Are you familiar with this behavior? What can be the trigger for this? Thank you, *Regards,* *Moshe Recanati* *SVP Engineering* -- Sincerely yours Mikhail Khludnev Principal Engineer, Grid Dynamics http://www.griddynamics.com mkhlud...@griddynamics.com
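Mikhail's advice to cap the Jetty thread pool translates, for the Jetty bundled with Solr 4.x, into an etc/jetty.xml fragment along these lines. This is a sketch only: element layout and class names vary across Jetty versions, so check it against the jetty.xml actually shipped with your Solr before applying.

```xml
<!-- etc/jetty.xml sketch (verify against your Jetty version) -->
<Set name="ThreadPool">
  <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <Set name="minThreads">10</Set>
    <!-- Cap concurrent request threads. Excess requests queue and
         eventually get a 503 instead of piling up hundreds of parallel
         searches and exhausting the heap. -->
    <Set name="maxThreads">200</Set>
  </New>
</Set>
```

As Mikhail notes above, a reasonable starting point for the cap is on the order of the number of CPU cores available, then tune upward while watching heap and latency.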
Stop solr query
Hi, Recently there were some scenarios in which queries that users sent to Solr got stuck and increased our Solr heap. Is there any option to kill or time out a query that wasn't returned from Solr, by external command? Thank you, Regards, Moshe Recanati SVP Engineering
RE: FW: NRTCachingDirectory threads stuck
Hi Mikhail, Thank you. 1. Regarding the Jetty threads - how can I reduce them? 2. Is it related to the fact that we're running Solr 4.0 in parallel on this machine? Thank you Regards, Moshe Recanati SVP Engineering -----Original Message----- From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] Sent: Sunday, February 22, 2015 11:18 AM To: solr-user Subject: Re: FW: NRTCachingDirectory threads stuck Hello, I checked 20020.tdump. From the update perspective it's OK: I see a single thread that committed and is awaiting the opening of a searcher. There are a few very bad signs: - there are many threads executing search requests in parallel, let's say roughly a hundred of them. This is a dead end. Consider limiting the number of Jetty threads; start from the number of cores available; - the heap is full, which is a no-go for Java. Either increase it, reduce the load, or make sure there are no leaks; - I see many threads executing Luke handler code; it might be a really wrong setup, or the regular approach for Solr replication. I'm not sure here. On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote: Hi, I saw the message was rejected because of the attachment. I uploaded the data to drive: https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=sharing Moshe *Subject:* NRTCachingDirectory threads stuck Hi, We're running two Solr servers on the same machine. One is Solr 4.0 and the second is Solr 4.7.1. On Solr 4.7.1 we see very strange behavior: while indexing documents we get a memory spike from 1 GB to 4 GB in a couple of minutes and a huge number of threads stuck in NRTCachingDirectory.openInput. Thread dump and GC log attached. Are you familiar with this behavior? What can be the trigger for this? Thank you, Regards, Moshe Recanati -- Sincerely yours Mikhail Khludnev Principal Engineer, Grid Dynamics http://www.griddynamics.com mkhlud...@griddynamics.com
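Mikhail's suggestion to cap the Jetty thread pool is done in Jetty's configuration, not in Solr itself. A sketch assuming the stock `etc/jetty.xml` layout that Solr 4.x ships with (it is a Jetty 8 config whose default `maxThreads` is very high); the exact element placement may differ in your install, and the numbers here are illustrative starting points:

```xml
<!-- etc/jetty.xml (sketch) : cap the request thread pool -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Set name="ThreadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="minThreads">10</Set>
      <!-- Start near the number of CPU cores and tune upward under load
           testing, rather than allowing hundreds of concurrent searches. -->
      <Set name="maxThreads">64</Set>
    </New>
  </Set>
</Configure>
```

Requests beyond the cap queue at the connector instead of piling up as runnable search threads, which is usually gentler on the heap than the behavior described above.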
RE: FW: NRTCachingDirectory threads stuck
Hi, Another question. We're using &lt;lockType&gt;single&lt;/lockType&gt;. Is it related? Regards, Moshe Recanati SVP Engineering -----Original Message----- From: Moshe Recanati Sent: Sunday, February 22, 2015 12:55 PM To: solr-user Subject: RE: FW: NRTCachingDirectory threads stuck Hi Mikhail, Thank you. 1. Regarding the Jetty threads - how can I reduce them? 2. Is it related to the fact that we're running Solr 4.0 in parallel on this machine? Thank you Regards, Moshe Recanati [...]
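For context on the lockType question above: in solrconfig.xml the lock type is set under `<indexConfig>`. The `single` value uses an in-JVM lock and skips inter-process file locking, so it is only safe when no other process can ever open the same index directory for writing; with two Solr instances on one machine, that guarantee is worth double-checking. The default, `native`, uses OS-level locks. A sketch (values shown are the stock defaults, not a recommendation for this specific setup):

```xml
<!-- solrconfig.xml (sketch, Solr 4.x) -->
<indexConfig>
  <!-- native (default): OS-level file locking.
       single: in-JVM lock only; assumes no other process touches the index.
       simple: plain lock file; none: no locking at all. -->
  <lockType>${solr.lock.type:native}</lockType>
</indexConfig>
```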
RE: Suspicious message with attachment
Please proceed Regards, Moshe Recanati SVP Engineering Office + 972-73-2617564 Mobile + 972-52-6194481 Skype : recanati More at: www.kmslh.com | LinkedIn | FB -Original Message- From: postmas...@ssww.com [mailto:postmas...@ssww.com] On Behalf Of h...@ssww.com Sent: Sunday, February 22, 2015 8:39 AM To: solr-user@lucene.apache.org Subject: Suspicious message with attachment The following message addressed to you was quarantined because it likely contains a virus: Subject: RE: NRTCachingDirectory threads stuck From: Moshe Recanati mos...@kmslh.com However, if you know the sender and are expecting an attachment, please reply to this message, and we will forward the quarantined message to you.
FW: NRTCachingDirectory threads stuck
Hi, I saw the message was rejected because of the attachment. I uploaded the data to drive: https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=sharing Moshe From: Moshe Recanati [mailto:mos...@kmslh.com] Sent: Sunday, February 22, 2015 8:37 AM To: solr-user@lucene.apache.org Subject: RE: NRTCachingDirectory threads stuck From: Moshe Recanati Sent: Sunday, February 22, 2015 8:34 AM To: solr-user@lucene.apache.org Subject: NRTCachingDirectory threads stuck Hi, We're running two Solr servers on the same machine. One is Solr 4.0 and the second is Solr 4.7.1. On Solr 4.7.1 we see very strange behavior: while indexing documents we get a memory spike from 1 GB to 4 GB in a couple of minutes and a huge number of threads stuck in NRTCachingDirectory.openInput. Thread dump and GC log attached. Are you familiar with this behavior? What can be the trigger for this? Thank you, Regards, Moshe Recanati SVP Engineering
How-to get results of comparison between documents
Hi, I have several documents that describe mobile phone specifications, with an index on release date. Assume I want to query these documents and get the latest document based on release date. Please describe how I can do it (if at all) and which query I need to execute. Regards, Moshe Recanati SVP Engineering
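If "latest" simply means the single most recent document overall, sorting descending on the date field and taking one row is enough (the collapse approach discussed below is for latest-per-group). A sketch; the field name `release_date` and the collection name are assumptions, since the actual schema isn't shown:

```python
from urllib.parse import urlencode

# Sort newest-first and return only the top document.
# Field and collection names are illustrative assumptions.
params = {
    "q": "*:*",
    "sort": "release_date desc",
    "rows": 1,
}
query_string = urlencode(params)
print("http://localhost:8983/solr/collection1/select?" + query_string)
```

This requires `release_date` to be a single-valued, indexed field so it is sortable.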
RE: How-to get results of comparison between documents
Hi Ludovic, Thanks a lot. I looked into the reference guide and I would like to make sure I understand it correctly. Can you give a specific example of how to use it? It would help me a lot. Regards, Moshe Original message From: lboutros Date: 06/23/2014 19:10 (GMT+02:00) To: solr-user@lucene.apache.org Subject: Re: How-to get results of comparison between documents Hi Moshe, If I understand your needs correctly, I think you want to use the CollapsingQParser post filter: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40509582 I think that, basically, adding this filter to your query should solve your problem: fq={!collapse field=document_id_field max=revision_field} Ludovic. - Jouve France.
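As a concrete sketch of Ludovic's filter: suppose each phone has several spec documents sharing a `phone_id` and carrying a numeric `revision` (both field names are illustrative, standing in for `document_id_field` and `revision_field`). The collapse post filter then keeps, per `phone_id`, only the document with the highest `revision`:

```python
from urllib.parse import urlencode

# CollapsingQParser post filter: one result per phone_id, keeping the
# document whose revision value is the maximum within that group.
params = {
    "q": "*:*",  # any normal query; the filter collapses its results
    "fq": "{!collapse field=phone_id max=revision}",
}
query_string = urlencode(params)
print("http://localhost:8983/solr/collection1/select?" + query_string)
```

The `q` parameter selects candidate documents as usual; the `fq={!collapse ...}` post filter is what reduces each group to its maximum-revision member.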
RE: How-to get results of comparison between documents
Got it, thank you. Regards, Moshe Original message From: Shalin Shekhar Mangar Date: 06/23/2014 21:38 (GMT+02:00) To: solr-user@lucene.apache.org Subject: Re: How-to get results of comparison between documents Hi Moshe, The CollapsingQParser will group documents on a given field. In this case, you can group by the mobile phone's id (which is unique across all mobile phones) and then ask Solr to return the document with the maximum revision value. That is exactly what Ludovic's example does. [...] -- Regards, Shalin Shekhar Mangar.