Hi Tobias,
The problem here is that the machine has too little RAM to handle 244M
relationships without reading from disk.
What type of hard disk are you using? The low CPU usage and continuous
reads from disk indicate that the cache miss rate is too high, resulting in
many random reads from disk.
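When the store files don't fit in RAM, a common mitigation is to grow the
memory-mapped buffers for the node and relationship stores. Here is a minimal
sketch, assuming the Neo4j 1.x embedded API; the store path and buffer sizes
below are illustrative assumptions, not values from this thread:

import java.util.Map;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class TunedStartup
{
    public static void main( String[] args )
    {
        // Hypothetical buffer sizes: size them to your actual store files
        // (du -h on the db directory) and leave headroom for the OS cache.
        Map<String, String> config = MapUtil.stringMap(
            "neostore.nodestore.db.mapped_memory", "100M",
            "neostore.relationshipstore.db.mapped_memory", "2G",
            "neostore.propertystore.db.mapped_memory", "500M" );
        EmbeddedGraphDatabase db =
            new EmbeddedGraphDatabase( "target/graphdb", config );
        try
        {
            // ... run the workload ...
        }
        finally
        {
            db.shutdown();
        }
    }
}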
I've done a fair bit of loading of Neo4j instances from different graph file
formats recently, and I agree.
For me, about 10,000 operations per transaction worked well.
On Tue, Jun 1, 2010 at 6:44 AM, Craig Taverner cr...@amanzi.com wrote:
A quick comment about transaction size. I find a good
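For reference, batching like this with the embedded Java API usually looks
something like the sketch below. The 10,000 figure comes from the post above;
the rest (store path, node count, property) is a hypothetical illustration
assuming Neo4j 1.x:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class BatchedInsert
{
    private static final int OPS_PER_TX = 10000; // size that worked well above

    public static void main( String[] args )
    {
        GraphDatabaseService db = new EmbeddedGraphDatabase( "target/graphdb" );
        Transaction tx = db.beginTx();
        try
        {
            for ( int i = 1; i <= 1000000; i++ )
            {
                Node node = db.createNode();
                node.setProperty( "id", i );
                if ( i % OPS_PER_TX == 0 )
                {
                    // Commit the batch and start a fresh transaction so
                    // uncommitted state doesn't pile up in the heap.
                    tx.success();
                    tx.finish();
                    tx = db.beginTx();
                }
            }
            tx.success();
        }
        finally
        {
            tx.finish();
            db.shutdown();
        }
    }
}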
Hi Johan,
Thank you very much for the advice.
Simply turning off logical log rotation fixed my problem.
Just to let you know (as feedback): modifying only the
vm.dirty_background_ratio and vm.dirty_ratio configuration parameters, as you
proposed, did not reduce the time values of
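For anyone else hitting this, turning off logical log rotation in the 1.x
embedded kernel goes through the internal config API. A hedged sketch, assuming
Neo4j 1.x ("nioneodb" is the kernel's NeoStore XA data source name; the store
path is illustrative):

import org.neo4j.kernel.EmbeddedGraphDatabase;
import org.neo4j.kernel.impl.transaction.xaframework.XaDataSource;

public class DisableLogRotation
{
    public static void main( String[] args )
    {
        EmbeddedGraphDatabase db = new EmbeddedGraphDatabase( "target/graphdb" );
        // Look up the NeoStore XA data source and stop the logical log
        // from rotating; the log then grows until the next clean shutdown.
        XaDataSource neoStore = db.getConfig().getTxModule()
                .getXaDataSourceManager().getXaDataSource( "nioneodb" );
        neoStore.setAutoRotate( false );
        // ... run the bulk load ...
        db.shutdown();
    }
}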
Well, I'll go first.
At Burning Sky Software, as part of our ThingWorx platform, we're using
Neo4J to model the internet of things as well as to collect a related set
of data streams that can be semantically searched/navigated/queried
leveraging that same model. If I told you any more, I'd have
Hi there!
Sorry, been a bit quiet on the PHP REST API front for a few weeks.
I will be adding some features this week (traversals etc.), but in the
meantime, I have (finally) written up a little blog post detailing how the
current version works!
While trying to run a create-only stress test for nodes, I'm constantly
getting an Out of Memory error, even when running with these params (with the
default config, as no searching/optimizations are being exercised just
yet):
EXTRA_JVM_ARGUMENTS=-d64 -server -Xms256m -Xmx1024m
Able to create 200K
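One way to sidestep transactional overhead entirely for a create-only load is
the non-transactional BatchInserter. A minimal sketch assuming the Neo4j 1.x
kernel; the store path and node count are illustrative, not from the report
above:

import java.util.Collections;
import java.util.Map;
import org.neo4j.kernel.impl.batchinsert.BatchInserter;
import org.neo4j.kernel.impl.batchinsert.BatchInserterImpl;

public class CreateOnlyStress
{
    public static void main( String[] args )
    {
        // Non-transactional inserter: no logical log and no transaction
        // state held in the heap, so a 1 GB -Xmx cap goes much further.
        BatchInserter inserter = new BatchInserterImpl( "target/stressdb" );
        try
        {
            for ( int i = 0; i < 200000; i++ )
            {
                Map<String, Object> props =
                        Collections.<String, Object>singletonMap( "id", i );
                inserter.createNode( props );
            }
        }
        finally
        {
            inserter.shutdown(); // flushes the store files to disk
        }
    }
}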
Correction - disk size 116K is applicable only in failure cases. Here are
the numbers for 100K node inserts (takes up 17MB):
4.0K  active_tx_log
12K   lucene
12K   lucene-fulltext
4.0K  neostore
4.0K  neostore.id
884K  neostore.nodestore.db
4.0K  neostore.nodestore.db.id
2.4M