are '23 rels per milliseconds' good?
Guillaume,
the memory mapping happens outside the JVM. As such there is no
guarantee that you actually have that memory, and you need to make sure
no other processes are competing for it, e.g. your browser, which might
take a large chunk of the memory you have on the machine (5GB - 2.2GB
for the JVM)
C
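The budget being described can be sketched as simple arithmetic, using the figures quoted in this thread (5066 MB machine, 2.2 GB JVM heap; the OS/other-process headroom is an assumed round number, not from the thread):

```java
public class MemoryBudget {
    public static void main(String[] args) {
        // figures quoted in this thread
        long machineMb = 5066;   // reported by `free -m`
        long jvmHeapMb = 2200;   // the 2.2GB given to the JVM
        long osAndOtherMb = 800; // assumed headroom for the OS, browser, etc.

        // memory-mapped store files live OUTSIDE the JVM heap,
        // so they can only use what is left over on the machine
        long mmapBudgetMb = machineMb - jvmHeapMb - osAndOtherMb;
        System.out.println("budget for mapped_memory settings: ~" + mmapBudgetMb + " MB");
    }
}
```

The point is that the sum of all `mapped_memory` settings has to fit in that leftover, not in the JVM heap.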
Guillaume,
are you still having problems, or is the performance ok now? I think with
tweaking, the 110 hops/ms can be upped to at least 500, since you are
not touching any indexes except in the beginning. However, that would
require looking at the underlying system and doing some inspection. Let
me know if
Hi Michael,
I tried to lower the values by 30% => same and then by 50% => same again.
(MapMemException)
Here is the message.log http://pastebin.com/bvhhZfjZ
My neo4j.properties looks like that :
neostore.nodestore.db.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=50M
neostore.
Guillaume,
please try to reduce the memory-mapped settings I provided, e.g. by 30%.
You can also check how much "free" memory your system reports while you're
running the neo4j test. That amount of memory can be added to the memory-mapped
file caches.
Have you looked at the linux transactio
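For concreteness, a roughly 30% reduction of the mapped-memory values Guillaume quotes elsewhere in this thread (100M and 50M) would look like this in neo4j.properties (illustrative round numbers only):

```
# ~30% below the previous values (illustrative figures)
neostore.nodestore.db.mapped_memory=70M
neostore.propertystore.db.arrays.mapped_memory=35M
```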
Hi,
1) I had a problem with my cache warm-up code, which was not working.
Now with the default cache properties, I am traversing 110 relations per
millisecond.
2) When I load my db with your cache properties, I have MappedMemException.
See here the full stack. http://pastebin.com/XwV41iJB
I tried
It should, but in Guillaume's example they didn't differ by much; both were
slow.
It seems to be running with the default soft-references cache, which might turn
over if every node and property is visited just once and then only a second
time in the second run.
Michael
On 07.10.2011 at 09:19 s
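The turn-over Michael mentions is a property of `java.lang.ref.SoftReference`, which soft-reference caches are built on: entries can silently vanish whenever the GC wants the memory. A generic sketch (not neo4j's actual cache code; names are made up):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCacheSketch {
    // a toy cache: values may disappear whenever the GC needs memory
    private final Map<Long, SoftReference<String>> cache = new HashMap<>();

    void put(long id, String record) {
        cache.put(id, new SoftReference<>(record));
    }

    // returns null both for "never cached" and "cached but collected",
    // which is why a single pass over every node can be evicted again
    // before the second run even starts
    String get(long id) {
        SoftReference<String> ref = cache.get(id);
        return ref == null ? null : ref.get();
    }

    public static void main(String[] args) {
        SoftCacheSketch c = new SoftCacheSketch();
        c.put(1L, "node-1");
        System.out.println(c.get(1L)); // "node-1" while memory pressure is low
    }
}
```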
Also, is that the first run directly after a JVM start? The first time you
encounter a Node or Relationship, it is loaded from disk into memory so that
the next time it's read from memory instead. The difference between two runs
can be orders of magnitude.
2011/10/6 Michael Hunger
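The cold-run/warm-run effect can be made visible with a trivial harness that runs the same work twice against a memoised loader. This is a self-contained sketch (the map stands in for neo4j's store files and caches; it is not the actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class WarmupSketch {
    public static void main(String[] args) {
        Map<Long, long[]> cache = new HashMap<>();
        // "loadFromDisk" stands in for the first-touch store read; here it
        // is just a computation so the sketch is self-contained
        Function<Long, long[]> loadFromDisk = id -> {
            long[] rels = new long[1000];
            for (int i = 0; i < rels.length; i++) rels[i] = id * 31 + i;
            return rels;
        };

        for (int run = 1; run <= 2; run++) {
            long start = System.nanoTime();
            long visited = 0;
            for (long node = 0; node < 10_000; node++) {
                // first run populates the cache, second run hits it
                long[] rels = cache.computeIfAbsent(node, loadFromDisk);
                visited += rels.length;
            }
            long ms = Math.max(1, (System.nanoTime() - start) / 1_000_000);
            System.out.printf("run %d: %d rels in %d ms (%d rels/ms)%n",
                    run, visited, ms, visited / ms);
        }
    }
}
```

The second run normally reports a much higher rels/ms figure, which is why benchmark numbers should be taken after a warm-up pass.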
Hmm virtual machines might be difficult, esp. with the io indirection.
Your memory settings for the db are:
• Fri Oct 07 00:10:01 IST 2011: neostore.nodestore.db.mapped_memory=20M
• Fri Oct 07 00:10:01 IST 2011: neostore.propertystore.db.arrays.mapped_memory=130M
• Fri Oct
Hi,
Here are the answers I can give you quickly:
How much memory does your machine have?
>5066 MB (from free -m)
What kind of disk is in there?
>I do not know, the machine is a VM provided by another department of my
company. What I can tell you is that on my i5 laptop the same was taking 6-8
m
How much memory does your machine have?
What kind of disk is in there?
Have you looked at the memory config for the neo4j db?
What kind of scheduler do you use (please try deadline or as)?
Can you please share the config and JVM info that is output at the head of
graphdb/messages.log ?
I'll att
Hi all,
I am using neo4j 1.5 java embedded. My traverser is a main java program
http://pastebin.com/1ynVESbc which takes a db path as input.
What it does is pretty basic: it follows my graph (which is a tree) and
stores the ending leaves in a file. However, it is quite big: it goes through
800.0
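Stripped of the neo4j traversal API, the job Guillaume describes (walk a tree, collect the ending leaves) is a plain depth-first search. A self-contained sketch over an ordinary adjacency map (names are made up, not his code):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

public class LeafCollector {
    // collect every node with no children, i.e. the ending leaves
    static List<String> leaves(Map<String, List<String>> tree, String root) {
        List<String> out = new ArrayList<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            String node = stack.pop();
            List<String> children = tree.getOrDefault(node, List.of());
            if (children.isEmpty()) {
                out.add(node);          // no outgoing edges: it's a leaf
            } else {
                children.forEach(stack::push);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "root", List.of("a", "b"),
                "a", List.of("a1", "a2"));
        System.out.println(leaves(tree, "root")); // [b, a2, a1]
    }
}
```

An iterative stack (rather than recursion) is the safer choice here, since a deep tree would otherwise risk a StackOverflowError.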