Hello,

We are using Solr (v1.3.0, build 694707, with Lucene 2.4-dev, build 691741) in multicore mode, with around 400 cores on average (all indexes have the same structure).
The indexes are stored on an NFS mount.
A separate Java process writes continuously to these indexes, while Solr is used only to read them.
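
To make the setup concrete, here is a stripped-down sketch of the writer side. The path, field and class name are placeholders for illustration, not our real code; the calls are the plain Lucene 2.4 API:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class NfsWriterSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder path: one of the ~400 per-core index directories on the NFS mount.
        FSDirectory dir = FSDirectory.getDirectory("/mnt/nfs/indexes/core042");

        // create=true builds a fresh index here; our real process appends to existing ones.
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

        Document doc = new Document();
        doc.add(new Field("id", "1", Field.Store.YES, Field.Index.UN_TOKENIZED));
        writer.addDocument(doc);

        // Flushing/closing can trigger a merge that deletes old segment files.
        // Our guess: a Solr searcher that still holds those files open then fails
        // with "No such file or directory", since NFS (unlike a local filesystem)
        // does not keep deleted files readable for other clients, but we are not
        // sure this is really the cause.
        writer.close();
    }
}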

We often get this exception:
HTTP Status 500 - No such file or directory

java.io.IOException: No such file or directory
    at java.io.RandomAccessFile.readBytes(Native Method)
    at java.io.RandomAccessFile.read(RandomAccessFile.java:322)
    at org.apache.lucene.store.FSDirectory$FSIndexInput.readInternal(FSDirectory.java:596)
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:136)
    at org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:247)
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:157)
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
    at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
    at org.apache.lucene.index.TermBuffer.read(TermBuffer.java:64)
    at org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:127)
    at org.apache.lucene.index.SegmentTermEnum.scanTo(SegmentTermEnum.java:158)
    at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:270)
    at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:217)
    at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:744)
    at org.apache.lucene.index.MultiSegmentReader.docFreq(MultiSegmentReader.java:375)
    at org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:87)
    at org.apache.lucene.search.Similarity.idf(Similarity.java:457)
    at org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:44)
    at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:146)
    at org.apache.lucene.search.Query.weight(Query.java:95)
    at org.apache.lucene.search.Searcher.createWeight(Searcher.java:185)
    at org.apache.lucene.search.Searcher.search(Searcher.java:126)
    at org.apache.lucene.search.Searcher.search(Searcher.java:105)
    at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:966)
    at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:838)
    at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:269)
    at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:160)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
    at java.lang.Thread.run(Thread.java:619)


How can we avoid this problem?

Thanks

Valérie
