You will need to cap the maximum segment size using
LogByteSizeMergePolicy.setMaxMergeMB.  That way you will only have
segments of a bounded size, and Lucene will not try to create
gigantic segments.  I suspect, though, that on the query side you
will run out of heap space due to the size of the terms index.  What
version are you using?
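
For reference, here is a minimal sketch of setting that cap through
the raw Lucene API (this assumes the Lucene 3.1 IndexWriterConfig
API; the index path, analyzer choice, and the 2048 MB cap are
placeholder values, not recommendations):

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.LogByteSizeMergePolicy;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    Directory dir = FSDirectory.open(new File("/path/to/index"));

    // Segments larger than the cap are no longer merged with other
    // segments, which keeps a single merge from ballooning.
    LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
    mergePolicy.setMaxMergeMB(2048.0);  // placeholder: ~2 GB cap

    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_31,
        new StandardAnalyzer(Version.LUCENE_31));
    iwc.setMergePolicy(mergePolicy);
    IndexWriter writer = new IndexWriter(dir, iwc);

If you are on Solr rather than raw Lucene, the merge policy is
normally configured through the <mergePolicy> element in
solrconfig.xml instead.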

On Wed, Mar 9, 2011 at 10:17 AM, danomano <dshopk...@earthlink.net> wrote:
> After about 4-5 hours the merge completed.  As you suggested, it was
> having memory issues (it ran out of heap).
>
> Read queries during the merge were working just fine (they were
> taking longer than normal, ~30-60 seconds).
>
> I think I need to do more reading to understand the merge/optimization
> process.
>
> I am beginning to think that what I need is lots of segments (i.e.
> frequent merges of smaller segments; wouldn't that speed up the
> merging process when it actually runs?).
>
> A couple of things I'm trying to wrap my head around:
>
> Increasing the number of segments will improve indexing speed on the
> whole.  The question I have is: when a merge actually needs to run,
> will having more segments make the merge process faster or slower?
> Having a 4-hour merge (i.e. indexing request) is not really
> acceptable (unless I can control when that merge happens).
>
> We are using our Solr server differently than most: frequent inserts
> (in batches), with few reads.
>
> I would say having a 'long' query time is acceptable (say ~60 seconds).
>
