commit is taking 1300 ms

2016-08-08 Thread Midas A
Hi, commit is taking more than 1300 ms. What should I check on the server? Below is my configuration: ${solr.autoCommit.maxTime:15000} <openSearcher>false</openSearcher> ${solr.autoSoftCommit.maxTime:-1}
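The stripped markup in the quoted configuration looks like the standard commit settings from solrconfig.xml; reconstructed, it would read roughly as follows (a sketch — the element nesting is assumed from the Solr defaults):

```xml
<!-- hard commit: flush to the index every 15 s, but do not open a new searcher -->
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- soft commit disabled (-1): new documents become visible only on an explicit commit -->
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
```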

Re: NoNode error on -downconfig when node does exist?

2016-08-08 Thread John Bickerstaff
LOL! Thanks. Oh yeah. I've done my time in a support role! Nothing more maddening than a user who won't share the facts! On Mon, Aug 8, 2016 at 9:24 PM, Erick Erickson wrote: > BTW, kudos for including the commands in your first problem statement > even though, I'm

Solr DeleteByQuery vs DeleteById

2016-08-08 Thread Bharath Kumar
Hi All, We are using Solr 6.1 and I wanted to know which is better to use - deleteById or deleteByQuery? We have a program which deletes 10 documents every 5 minutes from Solr and we do it in a batch of 200 to delete those documents. For that we now use deleteById(List ids, 1) to
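For reference, both delete forms under discussion can be sent as JSON to the update handler; a minimal sketch (collection name and document IDs are placeholders):

```json
{ "delete": ["doc-id-1", "doc-id-2"] }
```

Posted with `Content-Type: application/json` to `/solr/<collection>/update`; the by-query form replaces the array with `{ "query": "..." }`. Delete-by-id is often cited as the lighter operation in SolrCloud, since a delete-by-query must run against every shard.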

Re: NoNode error on -downconfig when node does exist?

2016-08-08 Thread Erick Erickson
BTW, kudos for including the commands in your first problem statement even though, I'm sure, you wondered if it was necessary. Saved at least three back-and-forths to get to the root of the problem (little pun there)... Erick On Mon, Aug 8, 2016 at 3:11 PM, John Bickerstaff

Re: NoNode error on -downconfig when node does exist?

2016-08-08 Thread John Bickerstaff
OMG! Thanks. Too long staring at the same string. On Mon, Aug 8, 2016 at 3:49 PM, Kevin Risden wrote: > Just a quick guess: do you have a period (.) in your zk connection string > chroot when you meant an underscore (_)? > > When you do the ls you use

Re: NoNode error on -downconfig when node does exist?

2016-08-08 Thread Kevin Risden
Just a quick guess: do you have a period (.) in your zk connection string chroot when you meant an underscore (_)? When you do the ls you use /solr6_1/configs, but you have /solr6.1 in your zk connection string chroot. Kevin Risden On Mon, Aug 8, 2016 at 4:44 PM, John Bickerstaff

NoNode error on -downconfig when node does exist?

2016-08-08 Thread John Bickerstaff
First, the caveat: I understand this is technically a zookeeper error. It is an error that occurs when trying to deal with Solr however, so I'm hoping someone on the list may have some insight. Also, I'm getting the error via the zkcli.sh tool that comes with Solr... I have created a

Re: Can a MergeStrategy filter returned docs?

2016-08-08 Thread tedsolr
Some more info that might be helpful. If I can trust my logging, this is what's happening (search with rows=3 on a collection with 2 shards): 1) the delegating collector's finish() method places custom data on the request object for _shard 1_; 2) the doc transformer's transform() method is called for 3 requested

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Ritesh Kumar (Avanade)
This is great, but where can I make this change in Solr 6, as I have implemented CDCR? Ritesh K Infrastructure Sr. Engineer – Jericho Team Sales & Marketing Digital Services t +91-7799936921   v-kur...@microsoft.com -Original Message- From: Erick Erickson [mailto:erickerick...@gmail.com]

Re: Can a MergeStrategy filter returned docs?

2016-08-08 Thread tedsolr
That makes sense. I would prefer to just merge the custom analytics, but sending that much info via the Solr response seems very slow. However, I still can't figure out how to access the custom analytics in a doc transformer. That would provide the fastest response, but I would have to merge the Ids

Re: Should we still optimize?

2016-08-08 Thread Yonik Seeley
On Mon, Aug 8, 2016 at 5:10 AM, Callum Lamb wrote: > We have a cronjob that runs every week at a quiet time to run the > optimize command on our Solr collections. Even when it's quiet it's still an > extremely heavy operation. > > One of the things I keep seeing on stackoverflow

Re: Should we still optimize?

2016-08-08 Thread Walter Underwood
Did you change the merge settings and max segments? If you did, try going back to the defaults. wunder Walter Underwood wun...@wunderwood.org http://observer.wunderwood.org/ (my blog) > On Aug 8, 2016, at 8:56 AM, Erick Erickson wrote: > > Callum: > > re: the
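For context, the merge settings Walter mentions live in solrconfig.xml. An explicit merge-policy block using the stock TieredMergePolicy defaults would look roughly like this (a sketch shown only for reference; removing the block entirely also restores the defaults):

```xml
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
</mergePolicyFactory>
```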

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Erick Erickson
Yeah, Shawn, but you, like, know something about Tomcat and actually provide useful advice ;) On Mon, Aug 8, 2016 at 6:44 AM, Shawn Heisey wrote: > On 8/7/2016 6:53 PM, Tim Chen wrote: >> Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError: >> unable to

Re: query problem

2016-08-08 Thread Erick Erickson
If at all possible, denormalize the data. But you can also use Solr's Join capability here, see: https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-JoinQueryParser Best, Erick On Mon, Aug 8, 2016 at 8:47 AM, Pithon Philippe wrote: > Hello, >

Re: Should we still optimize?

2016-08-08 Thread Erick Erickson
Callum: re: the optimize failing: Perhaps it's just timing out? That is, the command succeeds fine (which you are reporting), but it's taking long enough that the request times out so the client you're using reports an error. Just a guess... My personal feeling is that (of course), you need

query problem

2016-08-08 Thread Pithon Philippe
Hello, I have two document types: - tickets (type_s:"ticket", customerid_i:10) - customers (type_s:"customer", customerid_i:10, name_s:"FISHER"). I want a query to find all tickets for the customer named FISHER. In the ticket document (type_s:"ticket") I have the customer id but not the customer name... Any ideas
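Using the field names from this message, the Join Query Parser approach suggested in Erick's reply might look like this (a sketch, untested against a live index):

```
# all tickets whose customerid_i matches a customer named FISHER
q={!join from=customerid_i to=customerid_i}type_s:customer AND name_s:FISHER
fq=type_s:ticket
```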

Re: Should we still optimize?

2016-08-08 Thread Callum Lamb
Yeah, I figured that was too many deleted docs. It could just be that our max segments is set too high, though. The reason I asked is because our optimize requests have started failing. Or at least, they appear to fail because the optimize request returns a non-200. The optimize seems to go

Re: problems with bulk indexing with concurrent DIH

2016-08-08 Thread Shawn Heisey
On 8/2/2016 7:50 AM, Bernd Fehling wrote: > Only assumption so far, DIH is sending the records as "update" (and > not pure "add") to the indexer which will generate delete files during > merge. If the number of segments is high it will take quite long to > merge and check all records of all

Re: Should we still optimize?

2016-08-08 Thread Shawn Heisey
On 8/8/2016 3:10 AM, Callum Lamb wrote: > How true is this claim? Is optimizing still a good idea for the > general case? For the general case, optimizing is not recommended. If there are a very large number of deleted documents, which does describe your situation, then there is definitely a

Re: Can a MergeStrategy filter returned docs?

2016-08-08 Thread Joel Bernstein
The mergeIds() method should return true if you are handling the merge of the documents from the shards. If you are merging custom analytics from an AnalyticsQuery only, then you would return false. In your case, since you are de-duping documents, you would need to return true. There are two methods in

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Shawn Heisey
On 8/7/2016 6:53 PM, Tim Chen wrote: > Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError: > unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at >
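The "unable to create new native thread" form of OutOfMemoryError generally points at an OS-level thread or process limit rather than Java heap exhaustion; a quick way to check on Linux (the Solr PID in the comment is a placeholder):

```shell
# Max processes per user; on Linux, every JVM thread counts against this limit.
ulimit -u

# Compare against the number of live threads in the Solr JVM:
# ps -o nlwp= -p <solr-pid>
```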

Solr 5.4.1 Master/Slave Replication

2016-08-08 Thread Kalpana
Hello, I have 14 cores, with a couple of them using shards, and now I am looking at the master/slave fallback solution. Can anyone please point me in the right direction to get started? Thanks, Kalpana
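A master/slave setup of the kind asked about here is configured through the replication handler in each core's solrconfig.xml; a minimal sketch of the slave side (host name, core name, and poll interval are placeholders):

```xml
<!-- on the slave core: poll the master for index changes every 60 s -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```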

Re: Group and sum in SOLR 5.3

2016-08-08 Thread andreap21
Hi Pablo, will try this. Sorry for the late reply but I didn't get any notification of this answer! Thanks, Andrea

Should we still optimize?

2016-08-08 Thread Callum Lamb
We have a cronjob that runs every week at a quiet time to run the optimize command on our Solr collections. Even when it's quiet it's still an extremely heavy operation. One of the things I keep seeing on stackoverflow is that optimizing is now essentially deprecated and Lucene (we're on Solr
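For reference, the optimize in question is typically issued through the update handler, along these lines (collection name is a placeholder; maxSegments is optional and defaults to 1):

```
http://localhost:8983/solr/<collection>/update?optimize=true&maxSegments=1
```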