In that case, a hard optimise like that is out of the question.
Instead, rely on the automatic merge policy and configure a sensible
maximum number of segments. Solr is designed to work with multiple
segments, so a hard optimise is rarely worth the trouble.
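
For example, the tiered merge policy can be tuned in solrconfig.xml
(inside <indexConfig>) to keep segment counts in check. The values
below are only illustrative, not a recommendation for your index:

  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <!-- how many segments are merged at once -->
    <int name="maxMergeAtOnce">10</int>
    <!-- target number of segments per tier -->
    <int name="segmentsPerTier">10</int>
    <!-- cap on the size of a merged segment -->
    <double name="maxMergedSegmentMB">5120.0</double>
  </mergePolicy>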

The problem is this: the fewer segments you ask for during an
optimise, the longer it takes, because all of the existing segments
have to be read, merged and rewritten. And a cluster has a lot of
housekeeping to do on top of that.

If you really want to issue an optimise, you can also do it in steps
using the maxSegments parameter:

10 -> 9 -> 8 -> 7 ... -> 1

That way fewer segments need to be merged in one go.
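
Assuming a core named collection1 on localhost (adjust the host and
core name to your setup), each step can be issued against the update
handler like this:

  curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=9'
  curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=8'
  ...
  curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1'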

Testing against your own index will show you what a good maximum
number of segments is.

> On 7 Jun 2019, at 07:27, jena <sthita2...@gmail.com> wrote:
> 
> Hello guys,
> 
> We have 4 solr(version 4.4) instance on production environment, which are
> linked/associated with zookeeper for replication. We do heavy deleted & add
> operations. We have around 26million records and the index size is around
> 70GB. We serve 100k+ requests per day.
> 
> 
> Because of heavy indexing & deletion, we optimise solr instance everyday,
> because of that our solr cloud getting unstable , every solr instance go on
> recovery mode & our search is getting affected & very slow because of that.
> Optimisation takes around 1hr 30minutes. 
> We are not able fix this issue, please help.
> 
> Thanks & Regards
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html