Hi Ben,

Nice machine! Sorry for the delay in answering.

Usually you don't need such gigantic heaps; 32G to 64G typically does fine.

in neo4j-wrapper.conf (values are in MB)

wrapper.java.initmemory=32000
wrapper.java.maxmemory=32000


in neo4j.properties
For the page cache you only need this: dbms.pagecache.memory=15G.
Your db size plus some leeway for growth is enough, e.g. 15G for your >10G store.

In general I recommend upgrading to 3.x (please note that in 3.0 the
config options change and move into neo4j.conf).

in neo4j-wrapper.conf (plain numbers are again in MB)

dbms.memory.heap.initial_size=32000
dbms.memory.heap.max_size=32000

in neo4j.conf
dbms.memory.pagecache.size=15G

You probably want to use:
neo4j-shell -file queries.cypher

It is much more important to look at your queries, indexes and query plans
(prefix the queries with EXPLAIN or PROFILE) to figure out why they don't
return fast enough.
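
For example (with a hypothetical :Person label and name property, replace
with your own):

PROFILE MATCH (p:Person) WHERE p.name = 'Ben' RETURN p;

The profile output lists the operators used and their db-hits; an
AllNodesScan or NodeByLabelScan over many millions of rows is usually the
sign of a missing index.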

Btw. in 3.x many of the counting queries (e.g. counting nodes with a
label, or relationships by type) are faster, as they use the transactional
database statistics.
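
For example, counts of these shapes (the label and relationship type are
placeholders, use your own):

MATCH (n:Person) RETURN count(n);
MATCH ()-[r:KNOWS]->() RETURN count(r);

In 3.x these should be answered from the count store (the plan shows a
NodeCountFromCountStore or RelationshipCountFromCountStore operator)
instead of scanning the graph.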

Please share your queries, index/constraint information (run "schema" in
the shell), and most importantly the query plans.
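
Since you mention below that you have no indexes yet, as a sketch (again
with a placeholder label/property):

CREATE INDEX ON :Person(name);

After that, re-run PROFILE; lookups on that property should switch from a
label scan to a NodeIndexSeek.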


On Tue, Jul 19, 2016 at 11:21 AM, Ben Steer <benstee...@gmail.com> wrote:

> I am currently running some simple Cypher queries (count etc.) on a
> large dataset (>10G) and am having some issues with tuning Neo4j. I
> posted a similar question on Stack Overflow, but was hoping I would have
> a little more luck here.
>
>
> The machine running the queries has 4TB of RAM and 160 cores, and is
> running Ubuntu 14.04 / Neo4j 2.3. Originally I left all the settings at
> their defaults, as it is stated that free memory will be dynamically
> allocated as required. However, as the queries are taking several minutes
> to complete, I assumed this was not the case. As such I have set various
> combinations of the following parameters within neo4j-wrapper.conf:
>
> wrapper.java.initmemory=1200000
> wrapper.java.maxmemory=1200000
> dbms.memory.heap.initial_size=1200000
> dbms.memory.heap.max_size=1200000
> dbms.jvm.additional=-XX:NewRatio=1
>
> dbms.pagecache.memory=1500000
>
>
> and the following within neo4j.properties:
>
> use_memory_mapped_buffers=true
> neostore.nodestore.db.mapped_memory=50G
> neostore.relationshipstore.db.mapped_memory=50G
> neostore.propertystore.db.mapped_memory=50G
> neostore.propertystore.db.strings.mapped_memory=50G
> neostore.propertystore.db.arrays.mapped_memory=1G
>
> following every guide/Stack Overflow post I could find on the topic, but
> I seem to have exhausted the available material with little effect.
>
> I am running queries through the shell using the following command:
> neo4j-shell -c < "queries/$1.cypher", but have also tried explicitly
> passing the conf files with -config $NEO4J_HOME/conf/neo4j-wrapper.conf
> (restarting the server every time I make a change).
>
>
> There is the default amount of indexing from an initial ingestion (i.e.
> none), and I am aware that adding this will make a difference, but I wish
> to first fully optimise the system before improving this.
>
>
> I imagine that I have missed something silly which is causing the issue,
> as there are many reports of Neo4j working well with data of this size,
> but I cannot think what it could be. As such, any help would be greatly
> appreciated.
>
>
> Thanks in advance,
>
> Ben
>
