Ryan, can you post the output of CheckIndex on your now-working index? (1800 still seems like too many files to me, certainly after having optimized.)
OK, 1800 was wrong - that was from a botched attempt where I:
1. ran optimize on the broken 18K-file index (it crashed midway through), then
2. ran CheckIndex -fix on that.
When I instead:
1. run CheckIndex -fix, then
2. run optimize,
it results in an index with 65 files (that seems normal). A rough sketch of that sequence is below.
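For anyone following along, this is roughly what that recovery sequence looks like against a 2.3 index - a sketch, not my exact commands; the jar name, index path, and analyzer are placeholders. Step 1 uses the command-line CheckIndex tool (note that -fix drops any unreadable segments, losing the documents in them), step 2 just opens an IndexWriter on the repaired index and optimizes:

    # step 1: check the index and drop any broken segments
    java -cp lucene-core-2.3.jar org.apache.lucene.index.CheckIndex /path/to/index -fix

    // step 2: merge the surviving segments down to one
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class OptimizeIndex {
      public static void main(String[] args) throws Exception {
        // open the existing (repaired) index, don't create a new one
        IndexWriter writer =
            new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
        writer.optimize();  // rewrites the index as a single segment
        writer.close();
      }
    }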
Also, what steps finally allowed you to recover? CheckIndex (back-ported to 2.2) followed by optimize?
I did not back-port to 2.2; this converted the 2.2 index to 2.3, and I shoved the 2.3 libs into an exploded solr.war (just to see if it would work). It works, but I'm better off going back to an older working version.
I'm still baffled as to how Lucene 2.2 could ever produce a corrupt index, even on hitting descriptor limits or other exceptions. I can see that this could cause files to not be deleted properly, but I can't see how it could corrupt the index.
I'm not confident it is Lucene's fault - the hardware has been flaky too. But the 18K files and the 'too many open files' error make me suspicious. Unfortunately, I can't quite grok from my log files when stuff started going wrong and how long it has been going on.
Ryan, can you share any details of how you (Solr) are using Lucene? Are you using autoCommit=false? I'd really love to get to the root cause here.
I am using the standard Solr config (copied from the example). I am using Solr's <autoCommit> to "commit" added documents every 30 seconds.
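For concreteness, the relevant solrconfig.xml block looks roughly like this (values are illustrative rather than copied from my config; the 30000 ms matches the 30-second interval):

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>10000</maxDocs>  <!-- commit after this many pending docs -->
        <maxTime>30000</maxTime>  <!-- or after 30 seconds, whichever comes first -->
      </autoCommit>
    </updateHandler>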
ryan