[ https://issues.apache.org/jira/browse/LUCENE-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471882#comment-13471882 ]

Simon Willnauer commented on LUCENE-4462:
-----------------------------------------

bq. I think we should keep the safety in there (the fallback to forcePurge if 
too many segments are backlogged)...? Hopefully it never needs to run... but 
just in case.

I agree; I had removed it for beasting. I will add it back and commit. I will let 
this bake for a bit and then port to 4.x.
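To make the safety valve discussed above concrete, here is a minimal, hypothetical sketch of such a backlog fallback: indexing threads normally attempt a non-blocking purge, and only fall back to a blocking forcePurge once too many flushed segments are waiting to be published. The class, threshold, and method names below are illustrative assumptions, not the actual DocumentsWriter/flush-queue API.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch of a flush ticket queue with a safety valve: the normal path
// is a non-blocking tryPurge(); only when the backlog of unpublished segments
// exceeds a threshold do callers block in forcePurge(). All names are
// illustrative, not Lucene's real API.
final class FlushTicketQueue {
  private static final int MAX_BACKLOG = 10;               // assumed threshold
  private final Queue<String> tickets = new ArrayDeque<>(); // segment names stand in for flush tickets
  private final ReentrantLock purgeLock = new ReentrantLock();

  synchronized void addFlushedSegment(String segmentName) {
    tickets.add(segmentName);
  }

  void maybePublish() {
    if (backlog() > MAX_BACKLOG) {
      forcePurge();   // safety: block until the queue is drained
    } else {
      tryPurge();     // fast path: skip if another thread is already publishing
    }
  }

  private synchronized int backlog() { return tickets.size(); }

  private void tryPurge() {
    if (purgeLock.tryLock()) {        // never block the indexing thread here
      try { drain(); } finally { purgeLock.unlock(); }
    }
  }

  private void forcePurge() {
    purgeLock.lock();                 // blocking: guarantees the backlog shrinks
    try { drain(); } finally { purgeLock.unlock(); }
  }

  private void drain() {
    String seg;
    while ((seg = poll()) != null) {
      publish(seg);                   // cheap publish; heavy work was done at flush time
    }
  }

  private synchronized String poll() { return tickets.poll(); }

  private void publish(String segmentName) {
    System.out.println("published " + segmentName);
  }
}
{code}

The point of the fallback is simply that, whatever the threshold, an indexing thread is eventually forced to help drain the queue, so publishing can never be starved indefinitely.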
                
> Publishing flushed segments is single threaded and too costly
> -------------------------------------------------------------
>
>                 Key: LUCENE-4462
>                 URL: https://issues.apache.org/jira/browse/LUCENE-4462
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>            Assignee: Simon Willnauer
>         Attachments: LUCENE-4462.patch
>
>
> Spinoff from http://lucene.markmail.org/thread/4li6bbomru35qn7w
> The new TestBagOfPostings failed the build because it timed out after 2 hours 
> ... but in digging I found that it was a starvation issue: the 4 threads were 
> flushing segments much faster than the 1 thread could publish them.
> I think this is because publishing segments 
> (DocumentsWriter.publishFlushedSegment) is actually rather costly (creates 
> CFS file if necessary, writes .si, etc.).
> I committed a workaround for now, to prevent starvation (see svn diff -c 
> 1394704 https://svn.apache.org/repos/asf/lucene/dev/trunk), but we really 
> should address the root cause by moving these costly ops into flush() so that 
> publishing is a low-cost operation (a rough sketch of that split follows below).
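To illustrate the proposed fix, here is a rough sketch of moving the expensive per-segment work into flush() so that publishing is just a constant-time hand-off. The class and method names are hypothetical, not the real DocumentsWriter API.

{code:java}
import java.util.concurrent.ConcurrentLinkedQueue;

// Lightweight descriptor handed from the flushing thread to the publisher.
final class FlushedSegment {
  final String name;
  FlushedSegment(String name) { this.name = name; }
}

// Hedged sketch of the split: the heavy I/O (compound-file creation, writing
// the segment-info file) runs on each flushing thread inside flush(), while
// publishFlushedSegment() merely appends an already-complete descriptor to a
// shared queue. Names are illustrative assumptions.
final class SegmentFlusher {
  private final ConcurrentLinkedQueue<FlushedSegment> publishQueue =
      new ConcurrentLinkedQueue<>();

  // Runs on each flushing (indexing) thread: all the heavy lifting happens here,
  // in parallel across flush threads.
  FlushedSegment flush(String segmentName, boolean useCompoundFile) {
    if (useCompoundFile) {
      createCompoundFile(segmentName);   // costly I/O
    }
    writeSegmentInfo(segmentName);       // costly I/O
    return new FlushedSegment(segmentName);
  }

  // Publishing is now just a cheap, constant-time hand-off.
  void publishFlushedSegment(FlushedSegment segment) {
    publishQueue.add(segment);
  }

  private void createCompoundFile(String segmentName) { /* build the CFS for the segment */ }
  private void writeSegmentInfo(String segmentName)   { /* write the .si file */ }
}
{code}

With the CFS and .si work done on the flushing threads, the single publishing thread only performs constant-time queue operations, so it can keep up with any number of flush threads.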
