[ https://issues.apache.org/jira/browse/LUCENE-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12546042 ]

Michael McCandless commented on LUCENE-1044:
--------------------------------------------


How about if we don't sync every single commit point?

I think what matters on a crash, when you come back up, is that 1) the
index is consistent and 2) you have not lost too many docs from your
index.  Losing the last N (up to mergeFactor) flushes might be
acceptable?

EG we could force a full sync only when we commit the merge, before we
remove the merged segments.  This would mean that on a crash you're
"guaranteed" to have the last successfully committed & sync'd merge to
fall back to, and possibly a newer commit point if the OS had sync'd
those files on its own?

That would be a big simplification because I think we could just do
the sync() in the foreground since ConcurrentMergeScheduler is already
using BG threads to do merges.
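The "sync in the foreground" idea above could look roughly like the
sketch below: fsync each file produced by a completed merge (via
FileChannel.force) before the merged segments are deleted.  The class
and method names here (SyncSketch, syncMergedFiles) are illustrative
only, not Lucene's actual API.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class SyncSketch {

    // Force a single file's bytes to stable storage.
    static void fsync(Path path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw")) {
            raf.getChannel().force(true); // true = also sync file metadata
        }
    }

    // Sync every file produced by the merge.  Only after this returns
    // is it safe to commit the new segments_N and delete the old,
    // now-merged segments.
    static void syncMergedFiles(Path dir, Iterable<String> mergedFiles)
            throws IOException {
        for (String name : mergedFiles) {
            fsync(dir.resolve(name));
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("syncsketch");
        Path f = dir.resolve("_0.cfs");
        Files.write(f, new byte[] {1, 2, 3});
        syncMergedFiles(dir, java.util.List.of("_0.cfs"));
        System.out.println(Files.size(f));
    }
}
```

Since the merge itself already ran on a ConcurrentMergeScheduler
background thread, this loop blocking that same thread costs the
indexing threads nothing.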

This would also mean we cannot delete the commit points that were not
sync'd.  So the first 10 flushes would result in 10 segments_N files.
But then when the merge of these segments completes, and the result is
sync'd, those files could all be deleted.

Plus we would have to fix the retry logic when loading the segments
file to try more than just the 2 most recent commit points, but that's
a pretty minor change.
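That retry logic might be sketched as below: scan all segments_N files
newest-to-oldest by generation and open the first one that loads
cleanly, instead of falling back only one generation.  This is a
hypothetical illustration, with a placeholder validity check (a real
one would parse the file and verify its contents); the names
(OpenSketch, openFirstGood, looksValid) are not Lucene's.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OpenSketch {

    // Collect segments_N files, sorted newest generation first.
    static List<Path> commitPointsNewestFirst(Path dir) throws IOException {
        List<Path> points = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "segments_*")) {
            for (Path p : ds) points.add(p);
        }
        points.sort(Comparator.comparingLong(OpenSketch::generation).reversed());
        return points;
    }

    // The generation is the suffix after "segments_", in base 36.
    static long generation(Path p) {
        return Long.parseLong(
            p.getFileName().toString().substring("segments_".length()), 36);
    }

    // Try each commit point in turn; an all-zeros file (as reported in
    // this issue) fails the check and we fall back to an older one.
    static Path openFirstGood(Path dir) throws IOException {
        for (Path p : commitPointsNewestFirst(dir)) {
            if (looksValid(p)) return p;
        }
        throw new IOException("no loadable commit point");
    }

    // Placeholder for real parsing/checksum validation.
    static boolean looksValid(Path p) throws IOException {
        for (byte b : Files.readAllBytes(p)) if (b != 0) return true;
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("opensketch");
        Files.write(dir.resolve("segments_1"), new byte[] {42}); // good commit
        Files.write(dir.resolve("segments_2"), new byte[265]);   // zeroed, like the report
        System.out.println(openFirstGood(dir).getFileName());    // falls back to segments_1
    }
}
```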

I think it should also mean better performance: the longer you wait to
call sync(), the more likely it is a no-op because the OS has already
sync'd the file on its own.


> Behavior on hard power shutdown
> -------------------------------
>
>                 Key: LUCENE-1044
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1044
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>         Environment: Windows Server 2003, Standard Edition, Sun Hotspot Java 
> 1.5
>            Reporter: venkat rangan
>            Assignee: Michael McCandless
>             Fix For: 2.3
>
>         Attachments: FSyncPerfTest.java, LUCENE-1044.patch, 
> LUCENE-1044.take2.patch, LUCENE-1044.take3.patch, LUCENE-1044.take4.patch
>
>
> When indexing a large number of documents, upon a hard power failure  (e.g. 
> pull the power cord), the index seems to get corrupted. We start a Java 
> application as an Windows Service, and feed it documents. In some cases 
> (after an index size of 1.7GB, with 30-40 index segment .cfs files) , the 
> following is observed.
> The 'segments' file contains only zeros. Its size is 265 bytes - all bytes 
> are zeros.
> The 'deleted' file also contains only zeros. Its size is 85 bytes - all bytes 
> are zeros.
> Before corruption, the segments file and deleted file appear to be correct. 
> After this corruption, the index is corrupted and lost.
> This is a problem observed in Lucene 1.4.3. We are not able to upgrade our 
> customer deployments to 1.9 or later version, but would be happy to back-port 
> a patch, if the patch is small enough and if this problem is already solved.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
