It could also be that you are running up against Lucene index size limitations. 
In that case, 3.0 will help you when it is released.
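Since Lucene maps its index files outside the page cache, one rough sizing approach is to subtract the JVM heap and some Lucene/OS headroom from physical RAM before setting dbms.pagecache.memory. A minimal sketch (the heap and headroom figures here are assumptions for illustration, not values from this thread):

```shell
# Hypothetical sizing sketch: leave room outside dbms.pagecache.memory
# for the JVM heap and for Lucene's memory-mapped index files, which
# the Neo4j page cache does not cover.
total_gb=160      # physical RAM on the server (from the thread)
heap_gb=16        # JVM heap size (assumed)
lucene_gb=40      # headroom for Lucene mmaps + OS (assumed)
pagecache_gb=$((total_gb - heap_gb - lucene_gb))
echo "dbms.pagecache.memory=${pagecache_gb}g"   # prints dbms.pagecache.memory=104g
```

On Linux, an OutOfMemoryError with "Map failed" can also come from the per-process mapping limit rather than actual memory exhaustion; checking `sysctl vm.max_map_count` may be worth a look, though the thread does not confirm that cause.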

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]


> On 04 Apr 2016, at 21:15, Chris Vest <[email protected]> wrote:
> 
> Lucene does not use the Neo4j page cache, so try setting 
> dbms.pagecache.memory to a lower value.
> Our page cache won’t allocate much more memory than what can fit the data in 
> your store files, so if your store is about 80G, then that explains why it 
> doesn’t consume more memory than that.
> If you are using Linux, there might be a note in dmesg about high memory 
> usage, or there might not. “Map failed” isn’t a very informative error 
> message either way.
> 
> --
> Chris Vest
> System Engineer, Neo Technology
> [ skype: mr.chrisvest, twitter: chvest ]
> 
> 
>> On 04 Apr 2016, at 18:32, Zhixuan Wang <[email protected] 
>> <mailto:[email protected]>> wrote:
>> 
>> I am using neo4j-2.3.3 and my Java version is 1.8.0.
>> 
>> So I am now creating an index on a label with 2.1 billion nodes, and 
>> encountered this error:
>> 
>> Caused by: java.io.IOException: Map failed
>>         at sun.nio.ch.FileChannelImpl.map(Unknown Source)
>>         at 
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:270)
>>         at 
>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:220)
>>         at 
>> org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:65)
>>         at 
>> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:75)
>>         at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
>>         at 
>> org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:696)
>>         at 
>> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4368)
>>         at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3908)
>>         at 
>> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
>>         at 
>> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
>> Caused by: java.lang.OutOfMemoryError: Map failed
>>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>  
>> It looks like there is not enough memory to handle an index that large. 
>> My server has 160G of memory, and I have explicitly set 
>> "dbms.pagecache.memory=140g" in my neo4j.properties. However, I noticed that 
>> the actual memory usage of my neo4j process never exceeded 80G. 
>> 
>> I'm wondering if there is some other memory limit in the Neo4j/Java settings 
>> that prevents the process from using more than 50% of the physical memory? 
>> Or could it be caused by some other error?
>> 
>> Thanks,
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected] 
>> <mailto:[email protected]>.
>> For more options, visit https://groups.google.com/d/optout 
>> <https://groups.google.com/d/optout>.
> 

