FYI, we experimented with a larger heap size (1 GB) along with different chunk sizes, and were able to eliminate the heap error and get roughly a 10X improvement in insert speed. It would be helpful to better understand the interactions of the various Neo startup parameters, transaction buffers, and so on, and their impact on performance. I read the performance guidelines, which helped somewhat, but some additional scenario-based recommendations would be useful (frequent updates/frequent access, infrequent updates/frequent access, burst-mode updates vs. a steady update rate, etc.).
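For anyone else tuning this, here is a minimal generic sketch of the commit-every-N pattern we used. It is plain Java with no Neo classes on the classpath; the `neo.beginTx()` / `tx.success()` / `tx.finish()` calls are shown only as comments marking where the embedded-API transaction boundaries would go, and the chunk size of 1000 is just the value from our tests, not a recommendation:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedInsert {
    // Split [0, total) into consecutive half-open ranges of at most chunkSize
    // items, so each range can be written and committed in its own transaction.
    static List<int[]> chunkRanges(int total, int chunkSize) {
        List<int[]> ranges = new ArrayList<>();
        for (int start = 0; start < total; start += chunkSize) {
            ranges.add(new int[] { start, Math.min(start + chunkSize, total) });
        }
        return ranges;
    }

    public static void main(String[] args) {
        int committed = 0;
        for (int[] r : chunkRanges(250000, 1000)) {
            // Transaction tx = neo.beginTx();      // hypothetical: embedded Neo API
            // try {
            //     // create nodes/relationships for indices r[0] .. r[1]-1 here
            //     tx.success();
            // } finally {
            //     tx.finish();  // commits this chunk; the logical log can still
            //                   // grow until Neo rotates/flushes it on its own
            // }
            committed += r[1] - r[0];
        }
        System.out.println("committed " + committed + " items");
    }
}
```

The trade-off is the usual one: larger chunks mean fewer commits but more uncommitted state held in memory per transaction, which is presumably why chunk size and heap size interact the way we observed.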
Learning more about Neo every hour!

-----Original Message-----
From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On Behalf Of Rick Bullotta
Sent: Wednesday, December 09, 2009 2:57 PM
To: 'Neo user discussions'
Subject: [Neo] Troubleshooting performance/memory issues

Hi, all.

When trying to load a few hundred thousand nodes and relationships (chunking them in groups of roughly 1000 nodes), we were getting an out-of-memory heap error after 15-20 minutes. No big deal; we expanded the heap settings for the JVM. But we also noticed that the nioneo_logical_log.xxx file kept growing, even though we were wrapping each group of 1000 inserts in its own transaction (no other transaction was active), committing with success, and finishing each group. Periodically, and seemingly unrelated to our transactions finishing, that file shrinks again and the data is flushed to the other Neo property-store and relationship-store files. I just wanted to check whether that is normal behavior, or whether something is wrong with the way we (or Neo) are handling the transactions, and that is the reason we hit the out-of-memory error.

Thanks,
Rick

_______________________________________________
Neo mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user