I am using Neo4j 2.3.3 and my Java version is 1.8.0.

I am creating an index on a label with 2.1 billion nodes and
encountered this error:

Caused by: java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(Unknown Source)
        at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:270)
        at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:220)
        at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:65)
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:75)
        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
        at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:696)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4368)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3908)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
 
It looks like there is not enough memory to map that many index files.
My server has 160G of memory, and I have explicitly set
dbms.pagecache.memory=140g in my neo4j.properties. However, I noticed
that the actual memory usage of the neo4j process never exceeds 80G.
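
For reference, here is the relevant part of my configuration. The page
cache line is the only memory setting I have changed; the heap values
in neo4j-wrapper.conf below are illustrative placeholders, not values
I have verified on my server:

    # conf/neo4j.properties -- page cache for the store files
    dbms.pagecache.memory=140g

    # conf/neo4j-wrapper.conf -- JVM heap, in MB (placeholder values)
    wrapper.java.initmemory=8192
    wrapper.java.maxmemory=8192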

I'm wondering whether there is another memory limit somewhere in the
Neo4j or JVM settings that prevents the process from using more than
half of the physical memory, or whether the error is caused by
something else entirely.
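
In case it is relevant: since the failure comes out of
FileChannelImpl.map, I suspect an OS-level limit rather than the JVM
heap. These are the checks I plan to run next (Linux assumed; the
commands may differ on other systems):

    # maximum number of memory-mapped regions a single process may hold
    sysctl vm.max_map_count

    # how many mappings the running neo4j process currently has
    # (replace <pid> with the actual process id)
    wc -l /proc/<pid>/maps

    # per-process virtual memory limit for the user running neo4j
    ulimit -v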

Thanks,
