I should add some more context:

   1. the problem index included several cfs segment files that were around
   4.7G, and
   2. I'm running four SOLR instances on the same box, all of which have
   similar problem indices.

A colleague suggested I might be bumping up against my 256,000 open-files
ulimit. Do the MultiMMapIndexInput ByteBuffer arrays each consume a file
handle/descriptor?
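
To make the question concrete, here's a tiny stand-alone sketch of what I
think happens (the class name MMapFdCheck and the 1 MB chunk size are just
mine for illustration, not anything from Lucene): FileChannel.map() is called
per chunk, and per the javadoc the mapping stays valid after the channel is
closed, so I'd expect each mapped ByteBuffer to cost virtual address space
(an entry in /proc/<pid>/maps) rather than an open descriptor in
/proc/<pid>/fd. Please correct me if I've got that model wrong.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MMapFdCheck {
        public static void main(String[] args) throws Exception {
            // Map one chunk of a file, roughly the way MMapDirectory maps a
            // large .cfs file in pieces, then close the channel right away.
            RandomAccessFile raf = new RandomAccessFile(args[0], "r");
            FileChannel ch = raf.getChannel();
            long size = Math.min(ch.size(), 1L << 20); // 1 MB chunk (arbitrary)
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
            ch.close();
            raf.close();

            // The mapping survives closing the channel, so this file no longer
            // shows up under /proc/<pid>/fd, but /proc/<pid>/maps gains a region.
            System.out.println("first byte after close: " + buf.get(0));
        }
    }

If that model is right, the open-files ulimit wouldn't be the limit I'm
hitting, and something like the per-process mapping count (vm.max_map_count)
or address-space exhaustion would be the thing to check instead.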

On Thu, Sep 8, 2011 at 5:19 PM, Rich Cariens <richcari...@gmail.com> wrote:

> FWIW, I optimized the index down to a single segment and now I have no
> trouble opening an MMapDirectory on that index, even though the 23G cfx
> segment file remains.
>
>
> On Thu, Sep 8, 2011 at 4:27 PM, Rich Cariens <richcari...@gmail.com> wrote:
>
>> Thanks for the response. "free -g" reports:
>>
>>                      total   used   free   shared   buffers   cached
>> Mem:                   141     95     46        0         0       93
>> -/+ buffers/cache:               2    139
>> Swap:                    3      0      3
>>
>> 2011/9/7 François Schiettecatte <fschietteca...@gmail.com>
>>
>>> My memory of this is a little rusty but isn't mmap also limited by mem +
>>> swap on the box? What does 'free -g' report?
>>>
>>> François
>>>
>>> On Sep 7, 2011, at 12:25 PM, Rich Cariens wrote:
>>>
>>> > Ahoy ahoy!
>>> >
>>> > I've run into the dreaded OOM error with MMapDirectory on a 23G cfs
>>> > compound index segment file. The stack trace looks pretty much like
>>> > every other trace I've found when searching for OOM & "map failed"[1].
>>> > My configuration follows:
>>> >
>>> > Solr 1.4.1/Lucene 2.9.3 (plus SOLR-1969 <https://issues.apache.org/jira/browse/SOLR-1969>)
>>> > CentOS 4.9 (Final)
>>> > Linux 2.6.9-100.ELsmp x86_64 yada yada yada
>>> > Java SE (build 1.6.0_21-b06)
>>> > Hotspot 64-bit Server VM (build 17.0-b16, mixed mode)
>>> > ulimits:
>>> >    core file size     (blocks, -c)     0
>>> >    data seg size    (kbytes, -d)     unlimited
>>> >    file size     (blocks, -f)     unlimited
>>> >    pending signals    (-i)     1024
>>> >    max locked memory     (kbytes, -l)     32
>>> >    max memory size     (kbytes, -m)     unlimited
>>> >    open files    (-n)     256000
>>> >    pipe size     (512 bytes, -p)     8
>>> >    POSIX message queues     (bytes, -q)     819200
>>> >    stack size    (kbytes, -s)     10240
>>> >    cpu time    (seconds, -t)     unlimited
>>> >    max user processes     (-u)     1064959
>>> >    virtual memory    (kbytes, -v)     unlimited
>>> >    file locks    (-x)     unlimited
>>> >
>>> > Any suggestions?
>>> >
>>> > Thanks in advance,
>>> > Rich
>>> >
>>> > [1]
>>> > ...
>>> > java.io.IOException: Map failed
>>> > at sun.nio.ch.FileChannelImpl.map(Unknown Source)
>>> > at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown Source)
>>> > at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown Source)
>>> > at org.apache.lucene.store.MMapDirectory.openInput(Unknown Source)
>>> > at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(Unknown Source)
>>> > at org.apache.lucene.index.SegmentReader.get(Unknown Source)
>>> > at org.apache.lucene.index.SegmentReader.get(Unknown Source)
>>> > at org.apache.lucene.index.DirectoryReader.<init>(Unknown Source)
>>> > at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(Unknown Source)
>>> > at org.apache.lucene.index.DirectoryReader$1.doBody(Unknown Source)
>>> > at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(Unknown Source)
>>> > at org.apache.lucene.index.DirectoryReader.open(Unknown Source)
>>> > at org.apache.lucene.index.IndexReader.open(Unknown Source)
>>> > ...
>>> > Caused by: java.lang.OutOfMemoryError: Map failed
>>> > at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>> > ...
>>>
>>>
>>
>
