2010/6/23 Suruchi Deodhar <deodharsuru...@gmail.com>

> Hi Mattias,
>
> Replying to your query on the forum.
>
> Could you copy-paste the exact command line you use to start java? I'd like
> to see if there's some typo with the heap size option.
>
>
> I typed the following command for execution:
> /usr/java/default/bin/java -Xmx2048M BuildGraph.java
>

Quite frankly, I don't think that's the actual command you ran :) (there's no
classpath option, and it references the .java file directly).

Well, anyway... your code looks good. It's a pity I don't have your
dataset so that I could try it out locally, but I'd guess that would be
somewhat difficult to arrange.
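For the record, the usual invocation compiles the source first and then runs
the compiled class (not the .java file) with the Neo4j jars on the classpath.
The jar paths below are placeholders for wherever the neo4j-apoc-1.0 zip was
unpacked, not your actual layout:

```shell
# Compile with the Neo4j jars on the classpath (path is a placeholder):
javac -cp "neo4j-apoc-1.0/lib/*" BuildGraph.java

# Run the compiled class -- note the class name, not BuildGraph.java --
# with the heap option and the current directory plus jars on the classpath:
java -Xmx2048M -cp ".:neo4j-apoc-1.0/lib/*" BuildGraph
```

(The `*` classpath wildcard requires Java 6, which you're running.)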


>
> where Java 1.6 executable version is stored at /usr/java/default/bin/java
>
> I am running many queries on the Graph DB that I created and noticed that
> the speed of query execution depends a lot on memory heap size available and
> how many nodes and relationships are being read in main memory by the graph.
>
Yep, the first time you execute a query nothing has been read from disk into
the Neo4j cache yet (resulting in lots of disk reads), so queries on cold
caches are only as fast as your disk. The next time they run much faster.

> E.g., a query that calls getAllNodes() takes a much longer time to
> process. (I have close to 2 million nodes.)
> Just to confirm, what is the optimal way to commit after querying/updating
> a fixed amount of data, say 20,000 nodes? Do we need to call tx.finish()
> every time after processing 20,000 nodes?
>
Yes, it's quite good to commit (tx.success()/tx.finish()/beginTx()) your
transactions every couple of thousand updates/additions.
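The batched-commit pattern looks roughly like this. To keep the sketch
self-contained, a tiny stand-in class replaces Neo4j's Transaction (that
stand-in is my assumption, not the real API); in real code you would call
graphDb.beginTx(), tx.success() and tx.finish() exactly where the stand-in
does:

```java
// Sketch of committing every BATCH_SIZE updates instead of one huge
// transaction. The Tx class below is a stand-in for Neo4j's Transaction
// so this compiles without the Neo4j jars.
public class BatchCommit {
    static final int BATCH_SIZE = 20000;

    // Stand-in for org.neo4j.graphdb.Transaction (assumption, not the real API).
    static class Tx {
        static int commits = 0;
        private boolean ok = false;
        void success() { ok = true; }
        void finish() { if (ok) commits++; } // commits only if success() was called
    }

    // Applies `totalUpdates` updates, committing every BATCH_SIZE of them,
    // and returns how many commits happened.
    static int runUpdates(int totalUpdates) {
        Tx.commits = 0;
        Tx tx = new Tx();                    // graphDb.beginTx() in real code
        try {
            for (int i = 1; i <= totalUpdates; i++) {
                // ... create/update a node or relationship here ...
                if (i % BATCH_SIZE == 0) {
                    tx.success();
                    tx.finish();             // commit this batch
                    tx = new Tx();           // start the next transaction
                }
            }
            tx.success();
        } finally {
            tx.finish();                     // commits the final partial batch
        }
        return Tx.commits;
    }

    public static void main(String[] args) {
        System.out.println(runUpdates(50000)); // 2 full batches + 1 final commit -> 3
    }
}
```

Committing in batches like this keeps the transaction state (and memory use)
bounded, which matters a lot when loading millions of nodes.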


> Do you know of any other optimizations to make query processing faster and
> more efficient?
>
Do you really need to iterate over all nodes in your queries? That's usually
not the preferred way. It's better to find a good starting node and then
traverse its surroundings for answers.
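The idea is the difference between scanning all 2 million nodes and walking a
few hops out from one known node. Here's a minimal sketch of the "start node
plus local traversal" shape; a plain adjacency map stands in for the graph so
the example runs on its own (with Neo4j you would look the start node up, e.g.
via an index, and use node.traverse(...) or node.getRelationships() instead):

```java
import java.util.*;

// Breadth-first search a bounded number of hops out from a chosen start
// node, instead of iterating over every node in the database.
public class LocalTraversal {
    // Returns all nodes within maxDepth hops of `start`, excluding the
    // start node itself (similar in spirit to Neo4j's
    // ReturnableEvaluator.ALL_BUT_START_NODE).
    static Set<String> neighborhood(Map<String, List<String>> adj,
                                    String start, int maxDepth) {
        Set<String> visited = new LinkedHashSet<>();
        visited.add(start);
        List<String> frontier = Collections.singletonList(start);
        for (int depth = 0; depth < maxDepth; depth++) {
            List<String> next = new ArrayList<>();
            for (String node : frontier)
                for (String nb : adj.getOrDefault(node, Collections.<String>emptyList()))
                    if (visited.add(nb))     // true only the first time we see nb
                        next.add(nb);
            frontier = next;
        }
        visited.remove(start);
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> adj = new HashMap<>();
        adj.put("a", Arrays.asList("b", "d"));
        adj.put("b", Collections.singletonList("c"));
        adj.put("c", Collections.singletonList("e"));
        // Two hops out from "a": reaches b and d, then c (but not e).
        System.out.println(neighborhood(adj, "a", 2));
    }
}
```

The cost of this query depends on the size of the neighborhood you touch, not
on the total number of nodes in the database, which is exactly why it beats
getAllNodes() on a 2-million-node graph.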

>
> Details on my server config:
>
> 1. Java version:
> javac version: javac 1.6.0_18
> java version: (default) "1.4.2_17"
> However, while executing the program, I use the java executable 1.6.0_18.
>
> 2. Neo4j version:
>
> I downloaded the neo4j-apoc-1.0.zip<http://dist.neo4j.org/neo4j-apoc-1.0.zip>
>  from the website and using it for the project.
>
> 3. - RAM, OS and Filesystem of your machine?
>
> 2 Intel Xeon Quad-Core 3.0GHz CPUs
> 96 nodes, each node with 2 Intel Quad-Core Xeon E5440 processors
> RAM - 16 GB (2 GB/core). I am running the code on the head node currently.
> OS-SUSE Linux Enterprise Server 10.2 (x86_64)
>
>
> ~Suruchi
>
>
>

-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com
_______________________________________________
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
