Hi,

65K is already a very large number and should have been sufficient...
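
If it turns out you really do need a higher limit, I believe the way to
raise it on Linux is sysctl (262144 below is just an illustrative value,
not a recommendation):

  sudo sysctl -w vm.max_map_count=262144
  # to persist across reboots:
  echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf

As for sizing it: very roughly, each file in each open reader/searcher
can consume a map (and MMapDirectory maps large files in multiple
chunks), so the count scales with how many index files and open
searchers you have, not with document count.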

However: have you increased the merge factor?  Doing so increases the
number of open files (and maps) required.

Have you disabled compound file format?  (Hmmm: I think Solr does so
by default... which is dangerous.)  Maybe try enabling compound file
format?
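
If you do try that, I think it's just this in the indexDefaults section
of your solrconfig.xml (the same element you have set to false below):

  <useCompoundFile>true</useCompoundFile>

That trades a bit of indexing/search performance for far fewer files
(a couple of compound files per segment instead of roughly ten).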

Can you "ls -l" your index dir and post the results?
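
Something like this would also give a quick file count per core (the
path is a guess; use your real data dir):

  ls -l /path/to/solr/data/index | wc -l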

It's also possible Solr isn't closing the old searchers quickly enough
... I don't know the details on when Solr closes old searchers...

Mike McCandless

http://blog.mikemccandless.com



On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <gopalpa...@gmail.com> wrote:
> Michael, thanks for the response.
>
> It was 65K, the default, as you mentioned for "cat
> /proc/sys/vm/max_map_count". How do we determine what this value should
> be? Is it the number of documents per hard commit (in my case every 15
> minutes), or the number of index files, or the number of documents
> across all cores?
>
> I have raised the number to 140K, but the map count still grows until
> it reaches 140K and we have to restart the JBoss server to free up the
> maps; sometimes an OOM error happens during "Error opening new
> searcher".
>
> Is making this number unlimited the only solution?
>
>
> Error log:
>
> location=CommitTracker line=93 auto commit
> error...:org.apache.solr.common.SolrException: Error opening new searcher
>        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Map failed
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>        at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>        at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>        at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>        at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>        at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>        at org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>        at org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>        at org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>        at org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>        at org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>        at org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>        at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>        at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>        at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>        at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>        at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>        ... 11 more
> Caused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>
>
>
> And one more issue we came across i.e
>
> On Sat, Mar 31, 2012 at 3:15 AM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> It's the virtual memory limit that matters; yours says unlimited below
>> (good!), but, are you certain that's really the limit your Solr
>> process runs with?
>>
>> On Linux, there is also a per-process map count:
>>
>>    cat /proc/sys/vm/max_map_count
>>
>> I think it typically defaults to 65,536 but you should check on your
>> env.  If a process tries to map more than this many regions, you'll
>> hit that exception.
>>
>> I think you can:
>>
>>  cat /proc/<pid>/maps | wc -l
>>
>> to see how many maps your Solr process currently has... if that is
>> anywhere near the limit then it could be the cause.
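>>
>> For example (a sketch; substitute the real pid for <pid>):
>>
>>   echo "$(wc -l < /proc/<pid>/maps) of $(cat /proc/sys/vm/max_map_count) maps used"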
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <gopalpa...@gmail.com> wrote:
>> > I need help!!
>> >
>> >
>> > I am using a Solr 4.0 nightly build with NRT, and I often get this
>> > error during auto commit: "java.lang.OutOfMemoryError: Map failed".
>> > I have searched this forum, and what I found is that it is related
>> > to the OS ulimit settings (please see mine below). I am not sure
>> > what ulimit settings I should have. We also get
>> > "java.net.SocketException: Too many open files", and I am not sure
>> > how many open files we need to allow (I sketched what I would try
>> > right after the ulimit output below).
>> >
>> >
>> > I have 3 cores with index sizes: Core1 - 70GB, Core2 - 50GB, and
>> > Core3 - 15GB, with a single shard.
>> >
>> >
>> > We update the index every 5 seconds, soft commit every 1 second, and
>> > hard commit every 15 minutes.
>> >
>> >
>> > Environment: JBoss 4.2, JDK 1.6, CentOS, JVM heap size = 24GB
>> >
>> >
>> > ulimit:
>> >
>> > core file size          (blocks, -c) 0
>> > data seg size           (kbytes, -d) unlimited
>> > scheduling priority             (-e) 0
>> > file size               (blocks, -f) unlimited
>> > pending signals                 (-i) 401408
>> > max locked memory       (kbytes, -l) 1024
>> > max memory size         (kbytes, -m) unlimited
>> > open files                      (-n) 1024
>> > pipe size            (512 bytes, -p) 8
>> > POSIX message queues     (bytes, -q) 819200
>> > real-time priority              (-r) 0
>> > stack size              (kbytes, -s) 10240
>> > cpu time               (seconds, -t) unlimited
>> > max user processes              (-u) 401408
>> > virtual memory          (kbytes, -v) unlimited
>> > file locks                      (-x) unlimited
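>> >
>> > Would something like this be the right way to raise the open-file
>> > limit?  (The values and the "jboss" user below are guesses on my
>> > part.)
>> >
>> >    ulimit -n 65536        # current shell only
>> >
>> >    # /etc/security/limits.conf, to persist per user:
>> >    jboss  soft  nofile  65536
>> >    jboss  hard  nofile  65536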
>> >
>> >
>> > ERROR:
>> >
>> > 2012-03-29 15:14:08,560 [] priority=ERROR app_name= thread=pool-3-thread-1
>> > location=CommitTracker line=93 auto commit error...:java.io.IOException: Map failed
>> >        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>> >        at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>> >        at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>> >        at org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
>> >        at org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
>> >        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
>> >        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
>> >        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
>> >        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
>> >        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
>> >        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
>> >        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
>> >        at org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
>> >        at org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
>> >        at org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
>> >        at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
>> >        at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
>> >        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
>> >        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
>> >        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
>> >        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>> >        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>> >        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>> >        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>> >        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >        at java.lang.Thread.run(Thread.java:662)
>> > Caused by: java.lang.OutOfMemoryError: Map failed
>> >        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>> >        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>> >        ... 28 more
>> >
>> >
>> > SolrConfig.xml:
>> >
>> >
>> >        <indexDefaults>
>> >                <useCompoundFile>false</useCompoundFile>
>> >                <mergeFactor>10</mergeFactor>
>> >                <maxMergeDocs>2147483647</maxMergeDocs>
>> >                <!-- <maxFieldLength>10000</maxFieldLength> -->
>> >                <ramBufferSizeMB>4096</ramBufferSizeMB>
>> >                <maxThreadStates>10</maxThreadStates>
>> >                <writeLockTimeout>1000</writeLockTimeout>
>> >                <commitLockTimeout>10000</commitLockTimeout>
>> >                <lockType>single</lockType>
>> >
>> >            <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>> >              <double name="forceMergeDeletesPctAllowed">0.0</double>
>> >              <double name="reclaimDeletesWeight">10.0</double>
>> >            </mergePolicy>
>> >
>> >            <deletionPolicy class="solr.SolrDeletionPolicy">
>> >              <str name="keepOptimizedOnly">false</str>
>> >              <str name="maxCommitsToKeep">0</str>
>> >            </deletionPolicy>
>> >
>> >        </indexDefaults>
>> >
>> >
>> >        <updateHandler class="solr.DirectUpdateHandler2">
>> >            <maxPendingDeletes>1000</maxPendingDeletes>
>> >             <autoCommit>
>> >               <maxTime>900000</maxTime>
>> >               <openSearcher>false</openSearcher>
>> >             </autoCommit>
>> >             <autoSoftCommit>
>> >               <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>> >             </autoSoftCommit>
>> >
>> >        </updateHandler>
>> >
>> >
>> >
>> > Thanks
>> > Gopal Patwa
>>
