I've got some follow-up from the user.

> Is it possible disk filled up?  Though I'd expect an IOE during write
> or close in that case.
>
> In this case nothing should be lost in the index: the merge simply
> refused to commit itself, since it detected something went wrong.  But
> I believe we also have the same check during flush... have they hit an
> exception during flush?

They couldn't find any errors (including disk full) in their Solr log,
Tomcat log, or syslog, other than the exception in the title.

> Also: what java version are they running?  We added this check
> originally as a workaround for a JRE bug... but usually when that bug
> strikes the file size is very close (like off by just 1 byte or 8
> bytes or something).

They are using JDK6u15.

If anything comes to mind about the cause of this problem, please let me know!

koji
--
Check out "Query Log Visualizer" for Apache Solr
http://www.rondhuit-demo.com/loganalyzer/loganalyzer.html
http://www.rondhuit.com/en/

(11/09/09 21:36), Michael McCandless wrote:
Interesting...

This wouldn't be caused by the "NFS happily deletes open files"
problem (= Stale NFS file handle error).

But this could in theory be caused by the NFS client somehow being
wrong about the file's metadata (the file length).  It's sort of odd,
though: since the client wrote the file, I wouldn't expect any stale
client-side cache problems.

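For what it's worth, if stale client-side attribute caching is suspected, the Linux NFS client lets you shrink or disable the attribute cache at mount time, purely as a diagnostic step (the server path and mount point below are illustrative placeholders):

```shell
# Disable NFS attribute caching entirely (noac) so the file length is
# always re-fetched from the server; actimeo=0 is a similar knob.
# server:/export and /mnt/index are placeholders, not real paths.
mount -t nfs -o noac server:/export /mnt/index
```

This trades performance for coherence, so it is something to try while debugging, not a recommendation for production.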
What happened is that SegmentMerger merged all the stored docs, and
as a final check it verifies that the fdx file size is exactly 4 +
numDocs*8 bytes in length; in your case it wasn't -- it was 10572
bytes short -- so it aborted the merge.

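For reference, the arithmetic behind that check is easy to verify against the numbers in the exception below (the class and method names here are just for illustration, not Lucene's actual code):

```java
// Sketch of the sanity check: in this era of Lucene, the .fdx
// stored-fields index file is a 4-byte header followed by one
// 8-byte pointer per document.
public class FdxSizeCheck {
    static long expectedFdxSize(long docCount) {
        return 4L + 8L * docCount;
    }

    public static void main(String[] args) {
        long docCount = 290089L;      // docCount from the exception
        long actualSize = 2310144L;   // reported _73is.fdx length
        long expected = expectedFdxSize(docCount);
        // expected = 2320716, i.e. the file is 10572 bytes short
        System.out.println("expected=" + expected
            + " actual=" + actualSize
            + " short=" + (expected - actualSize));
    }
}
```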
Is it possible disk filled up?  Though I'd expect an IOE during write
or close in that case.

In this case nothing should be lost in the index: the merge simply
refused to commit itself, since it detected something went wrong.  But
I believe we also have the same check during flush... have they hit an
exception during flush?

Also: what java version are they running?  We added this check
originally as a workaround for a JRE bug... but usually when that bug
strikes the file size is very close (like off by just 1 byte or 8
bytes or something).

Mike McCandless

http://blog.mikemccandless.com

2011/9/9 Koji Sekiguchi <k...@r.email.ne.jp>:
A user here hit the exception in the title when optimizing. They're using
Solr 1.4 (Lucene 2.9) running on a server that mounts the index over NFS.

I know about the famous "Stale NFS File Handle" IOException problem, but I
believe that causes FileNotFoundException. Is there any chance of hitting
the exception in the title due to NFS? If so, what is the mechanism?

The full stack trace is:

2011/09/07 9:40:00 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start 
commit(optimize=true,waitFlush=true,waitSearcher=true,expungeDeletes=false)

:

2011/09/07 9:40:52 org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {} 0 52334
2011/09/07 9:40:52 org.apache.solr.common.SolrException log
FATAL: java.io.IOException: background merge hit exception: _73ie:C290089 _73if:C34 _73ig:C31 _73ir:C356 into _73is [optimize] [mergeDocStores]
        at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2908)
        at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2829)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:403)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:169)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.RuntimeException: mergeFields produced an invalid result: docCount is 290089 but fdx file size is 2310144 file=_73is.fdx file exists?=true; now aborting this merge to prevent index corruption
        at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:369)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5112)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4675)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:235)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:291)


koji
--
Check out "Query Log Visualizer" for Apache Solr
http://www.rondhuit-demo.com/loganalyzer/loganalyzer.html
http://www.rondhuit.com/en/

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org