On Nov 10, 2007 4:01 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> Using solr, we have been running an indexing process for a while and
> when I checked on it today, it spits out an error:
>
> java.lang.RuntimeException: java.io.FileNotFoundException:
> /path/to/index/_cf9.fnm (No such file or directory)
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:584)
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:475)
>
> Looking through the archives, it looks like we are up a creek.
>
> Any thoughts on what could have caused this?  The log files contain
> some 'too many open files' errors; I can't tell if that corresponds
> with when the index went bad, though.

Yup... that would most likely be it.

> the startup script includes:
>   ulimit -n 100000
> which seems generous, no?

The kernel may have a lower limit.
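
On Linux the system-wide cap lives in /proc/sys/fs/file-max, separate
from the per-process ulimit.  If you want to see what the JVM actually
ended up with, something like the sketch below reads the descriptor
counts from inside the process; it assumes a Sun JVM on Unix, since
com.sun.management.UnixOperatingSystemMXBean is not part of the
standard API:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCheck {
      public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
          UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
          // the limit the process actually got, after ulimit and the kernel
          System.out.println("max open files: " + unix.getMaxFileDescriptorCount());
          // descriptors in use right now; watch this climb while indexing
          System.out.println("open files now: " + unix.getOpenFileDescriptorCount());
        } else {
          System.out.println("descriptor counts not available on this JVM/OS");
        }
      }
    }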

> it is a 22GB index, ls -l | wc shows 180K files (oh my)

I don't think any index with a normal mergeFactor should have anywhere
near that many files.  Most of them are probably no longer referenced by
the current index but haven't been cleaned up because of the file
descriptor errors.
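
In Solr those settings normally come from solrconfig.xml, but at the
Lucene level they amount to roughly the sketch below (the path, class
name and analyzer are just placeholders); with the default mergeFactor
of 10 and the compound file format, a healthy index is dozens of files,
not 180K:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class MergeFactorCheck {
      public static void main(String[] args) throws IOException {
        // open an existing index (path is illustrative)
        FSDirectory dir = FSDirectory.getDirectory("/path/to/index");
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        // 10 is the default; each merge level holds at most mergeFactor segments
        writer.setMergeFactor(10);
        // compound file format keeps each segment down to a couple of files
        writer.setUseCompoundFile(true);
        System.out.println("files in index dir: " + dir.list().length);
        writer.close();
      }
    }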

> So my questions:
>
> 1. Anything I can do to use this index while I rebuild another? (takes a
> long time!)

Doubt it... you would never be sure if the index was correct.

> 2. Does the ulimit number explain how the index got corrupted?  If so,
> it seems like a problem.

I think the newest Lucene versions would prevent this with
lucene_autocommit=false.  A new segments file (the file that
references all other files in the current index) is not written until
the writer is closed.
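
As a rough illustration against the Lucene 2.2+ API (path and class
name made up), opening the writer with autoCommit=false looks like
this; nothing the writer buffers becomes visible until close()
publishes a new segments file:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class SafeWriter {
      public static void main(String[] args) throws IOException {
        FSDirectory dir = FSDirectory.getDirectory("/path/to/index");
        // autoCommit=false: new segments are written but not referenced
        // by a new segments_N file while the writer runs, so readers keep
        // using the last good commit even if this process dies mid-stream
        IndexWriter writer = new IndexWriter(dir, false, new StandardAnalyzer());
        // ... addDocument() calls go here ...
        // only close() writes the segments file that references the new segments
        writer.close();
      }
    }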

-Yonik
