Sorry, that's a bug we fixed in 3.0.2, I think.

Please upgrade to 3.0.4; then all of that should go away, and you should be able 
to run your imports with a 1-2G heap.
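
For reference, an import of roughly this shape (the label, file name, and 
property names below are placeholders; your actual query is in the attached cql) 
should stay comfortably within a 1-2G heap once the fix is in, provided 
USING PERIODIC COMMIT is the first clause and the MERGE key is indexed:

  CREATE INDEX ON :Server(id);

  USING PERIODIC COMMIT 10000
  LOAD CSV WITH HEADERS FROM 'file:///updates.csv' AS row
  MERGE (s:Server {id: row.id})   // updates existing nodes, creates the new ones
  SET s.status = row.status;

The batch size (10000 here) is just an example; anything in that order of 
magnitude keeps the per-transaction state small.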

Your page cache should also be fine at around 1-2G; you can check your 
store-file sizes (neostore.*.db) to confirm.
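
As a rough sketch (paths and config locations assume a default 3.0.x tarball 
layout, so adjust to your install), checking the store sizes and setting the 
memory looks something like this:

  # how much data the page cache actually has to hold
  ls -lh data/databases/graph.db/neostore.*.db

  # conf/neo4j.conf
  dbms.memory.pagecache.size=2g

  # conf/neo4j-wrapper.conf (heap settings live here in 3.0.x)
  dbms.memory.heap.initial_size=1g
  dbms.memory.heap.max_size=2g

If the neostore files add up to more than that, size the page cache to roughly 
their total instead.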

Michael

> On 26.08.2016 at 00:24, kmcg...@absolute-performance.com wrote:
> 
> I am using LOAD CSV to update nodes of an existing label. The label contains 
> 4M existing nodes. The CSV file contains 600k rows; some new, some updates.
> 
> My Centos 6.x box has 36G of ram. Using the 3.0.1 tuning guide, I have set 
> pagecache to 24G and heap_initial_size/heap_max_size to 8G.
> 
> Attached is my cql file. When executing it, Neo4j consumes heap space until the 
> heap is exhausted. The query plan (attached) appears to indicate there are no 
> issues, so I would expect periodic commits to reuse heap. I have adjusted the 
> pagecache and heap sizes up and down, but every load attempt fails when the 
> heap is exhausted.
> 
> Any ideas or thoughts?
> 
> Attachments: queryInQuestion.txt, queryPlan.txt
