Re: [Neo4j] Traversal performance

2011-09-26 Thread David Montag
Also, try running it 100 times. Then you should see some JVM optimizations/JIT kick in.

David

On Mon, Sep 26, 2011 at 9:24 PM, Rick Devinsus wrote:
> That was it: the cache wasn't warmed. I tried running the same test twice;
> that increased the speed around 7x (450K traversals per second). Thanks for the help.
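To make the warm-up advice concrete, a harness along the lines below discards the first iterations so HotSpot has JIT-compiled the hot paths before anything is timed. This is a minimal sketch, not code from the thread; runTraversalTest() is a hypothetical stand-in for whatever traversal is being measured.

    // Minimal JIT warm-up sketch; runTraversalTest() is a hypothetical
    // placeholder for the traversal under test.
    public class WarmupBenchmark {
        public static void main(String[] args) {
            for (int i = 0; i < 100; i++) {
                runTraversalTest();              // warm-up: let the JIT compile hot paths
            }
            long start = System.nanoTime();      // time only the warmed run
            runTraversalTest();
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("Warmed run took " + elapsedMs + " ms");
        }

        private static void runTraversalTest() {
            // stand-in for the actual Neo4j traversal being benchmarked
        }
    }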

Re: [Neo4j] Traversal performance

2011-09-26 Thread Rick Devinsus
That was it: the cache wasn't warmed. I tried running the same test twice; that increased the speed around 7x (450K traversals per second). Thanks for the help.
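Warming the cache explicitly, rather than by rerunning the whole test, can be as simple as one full pass over the data before timing. Below is a minimal sketch against the Neo4j 1.x embedded Java API of that era; the store path "target/graph.db" is an assumption, not from the thread.

    // One warm-up pass that touches every node and relationship so the
    // timed run hits the cache. "target/graph.db" is an assumed path.
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Relationship;
    import org.neo4j.kernel.EmbeddedGraphDatabase;

    public class CacheWarmer {
        public static void main(String[] args) {
            GraphDatabaseService db = new EmbeddedGraphDatabase("target/graph.db");
            long touched = 0;
            for (Node node : db.getAllNodes()) {
                for (Relationship rel : node.getRelationships()) {
                    rel.getOtherNode(node);   // pulls both endpoints into the cache
                    touched++;
                }
            }
            System.out.println("Touched " + touched + " relationship ends");
            db.shutdown();
        }
    }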

Re: [Neo4j] Traversal performance

2011-09-26 Thread Bryce
It won't make any difference whether the memory mapping settings are just larger than the file sizes or a lot larger, so fiddling with those settings won't change anything relative to your original test. Generally, when people see very high performance it is because a lot of the data they are traversing is already in the cache.

Re: [Neo4j] Traversal performance

2011-09-26 Thread Rick Devinsus
I took a look at the files and none were larger than 500MB; however, it makes a lot of sense to change the memory as you suggested, so I altered the options as shown below. I also started Eclipse with different memory options than the defaults (eclipse -vmargs -Xmx2000m -server). The changes didn't make any difference.
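One caveat worth noting here: eclipse -vmargs sets options for the IDE's own JVM. A program launched from Eclipse runs in a separate JVM whose options come from the launch configuration, so the heap and -server flags belong in the Run Configuration's "VM arguments" field instead, e.g.:

    -Xmx2000m -server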

Re: [Neo4j] Traversal performance

2011-09-26 Thread Bryce
One initial suggestion would be that your memory mapped settings are probably not very near optimal. If you have a look at the file sizes in your graph data directory, then the closer you can get to covering each db file's entire size, the better. I would assume that some of the files will be bigger…
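For Neo4j 1.x, those settings live in neo4j.properties (or can be passed as a config map to EmbeddedGraphDatabase). The values below are illustrative assumptions only; the point is to size each buffer to cover the corresponding store file on disk:

    # Illustrative 1.x memory-mapped buffer settings; size each value to
    # cover the matching file in the graph.db directory.
    neostore.nodestore.db.mapped_memory=150M
    neostore.relationshipstore.db.mapped_memory=550M
    neostore.propertystore.db.mapped_memory=200M
    neostore.propertystore.db.strings.mapped_memory=200M
    neostore.propertystore.db.arrays.mapped_memory=10M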

[Neo4j] Traversal performance

2011-09-26 Thread Rick
Looking for help on how to tune traversals. This is a great product with the best API, and I want to make sure I'm getting the most from it. I'm trying to understand if 62,500 traversals per second is the best I can do given the following scenario:
- 15.6M nodes
- 15.6M relationships
- Data is structured…
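For a rough sense of how large the store files behind those numbers are (and so what the mapped-memory settings discussed above need to cover), the fixed record sizes from the Neo4j 1.x documentation give a quick estimate; the totals are approximations:

    15.6M nodes         x  9 bytes/record ~ 140 MB  (neostore.nodestore.db)
    15.6M relationships x 33 bytes/record ~ 515 MB  (neostore.relationshipstore.db)

This is roughly consistent with the "none were larger than 500MB" observation earlier in the thread.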