Hi Tobias,

The problem here is that the machine has too little RAM to handle 244M
relationships without reading from disk.
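On your question about preloading: there is no built-in preload, but you can warm the caches yourself by scanning the graph once before the first query. A rough sketch against the embedded Java API (the store path is a placeholder, and with 244M relationships the scan itself will be disk-bound, so it only helps if the hot part of the graph fits in the mapped memory):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class WarmUp {
    public static void main(String[] args) {
        // Placeholder path -- point this at your store directory.
        GraphDatabaseService db = new EmbeddedGraphDatabase("/path/to/db");
        long touched = 0;
        // Iterating over all nodes and their relationships pulls the
        // store files through the memory map once, populating the caches
        // so the first shortest-path query does not pay for it.
        for (Node node : db.getAllNodes()) {
            for (Relationship rel : node.getRelationships()) {
                touched++;
            }
        }
        System.out.println("Touched " + touched + " relationship ends");
        db.shutdown();
    }
}
```
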

What type of hard disk are you using? The low CPU usage and continuous
reads from disk indicate that the cache-miss rate is too high, resulting
in many random reads from disk. I would suggest first making sure you
have a good configuration: memory-map about 2-3 GB for the relationship
store and then run with only a 512M-1G Java heap. That will probably
still not be enough, though; you either have to get more RAM (8-12 GB)
or buy a better disk (a good SSD will increase random-read performance
by 50-100x).
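As a starting point, something like the following in neo4j.properties, together with a modest heap (-Xmx1G). The exact values are guesses for your machine; tune them against the actual sizes of your store files, giving the relationship store the lion's share:

```
# Memory-mapped store buffers (example values for a 4 GB machine)
neostore.nodestore.db.mapped_memory=100M
neostore.relationshipstore.db.mapped_memory=2500M
neostore.propertystore.db.mapped_memory=300M
neostore.propertystore.db.strings.mapped_memory=100M
```

The idea is to keep the heap small so most of the RAM goes to the OS/mapped store buffers rather than to garbage-collected Java objects.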

Regards,
Johan

On Mon, May 31, 2010 at 9:39 AM, Peter Neubauer
<peter.neuba...@neotechnology.com> wrote:
> ...
> 2010/5/31 Tobias Mühlbauer <tobias.muehlba...@gmail.com>:
>> Hi,
>>
>> We're currently using a Neo4j database with 3.7 million nodes and 244 
>> million relationships. However, we are facing a problem using the shortest 
>> path algorithm "ShortestPath" from graph-algo 0.6.
>>
>> Our server has a 2.8 GHz Core 2 Duo processor and 4 GB of RAM installed. 
>> Starting a shortest-path search between two arbitrary nodes can take up to 
>> half an hour, but calling the search a second time with the same nodes 
>> finishes in milliseconds. Two things make me wonder:
>> 1. Is there a way to load parts of the database into memory prior to the 
>> first search (preloading)?
>> 2. Running the search algorithm uses only 2% CPU and 0.5 MB/s read from 
>> disk, so there are resources left unused. What can I do to find the 
>> bottleneck?
>>
>> Greetings,
>> Toby
_______________________________________________
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
