I have seen people discuss committing transactions after each microbatch of a
few hundred records, but I thought this was optional; I assumed Neo4j would
automatically write out to disk as memory filled up. Well, I encountered an
OOM and want to make sure I understand the reason. Was my understanding
incorrect? Is there a parameter that I need to set to some limit, or is the
problem that I am indexing as I go? The stack trace, FWIW, is:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.util.HashMap.<init>(HashMap.java:209)
            at java.util.HashSet.<init>(HashSet.java:86)
            at org.neo4j.index.lucene.LuceneTransaction$TxCache.add(LuceneTransaction.java:334)
            at org.neo4j.index.lucene.LuceneTransaction.insert(LuceneTransaction.java:93)
            at org.neo4j.index.lucene.LuceneTransaction.index(LuceneTransaction.java:59)
            at org.neo4j.index.lucene.LuceneXaConnection.index(LuceneXaConnection.java:94)
            at org.neo4j.index.lucene.LuceneIndexService.indexThisTx(LuceneIndexService.java:220)
            at org.neo4j.index.impl.GenericIndexService.index(GenericIndexService.java:54)
            at org.neo4j.index.lucene.LuceneIndexService.index(LuceneIndexService.java:209)
            at JiraLoader$JiraExtractor$Item.setNodeProperty(JiraLoader.java:321)
            at JiraLoader$JiraExtractor$Item.updateGraph(JiraLoader.java:240)

Thanks,
Paul Jackson