Yes, that worked wonders! Thanks so much. I think there is an underlying
issue with Neo4j trying to fsync too often, but at least now I can continue
to develop on my machine which is all that really matters. Really
appreciate your time.
Added it to my travis script:
https://github.com/wfreeman/cq/blob/master/.travis.yml
With it I get:
Percentage of the requests served within a certain time (ms)
50% 3
66% 3
75% 3
80% 3
90% 3
95% 4
98% 4
99% 5
100% 11 (longest request)
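(That table is ApacheBench's standard percentile summary; the exact invocation is in the travis file above, but it is along the lines of the following, with request count, concurrency, and endpoint assumed:)

  ab -n 1000 -c 10 http://localhost:7474/db/data/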
Wes
Bill,
Wes gave me an idea, so I created a version of the Neo4j server that uses our
in-memory testing graph database. Care to try it?
https://github.com/jexp/neo4j-in-memory-server
Cheers
Michael
On 17.01.2014 at 02:52, Michael Hunger wrote:
Good question, not sure about that.
Indexing uses Lucene behind the scenes and adds some overhead.
Michael
On 17.01.2014 at 02:42, Bill Scheidel wrote:
So does that mean multiple fsync operations are done? I'm trying to figure
out what would cause the extra 110ms of delay between no indexing and
indexing one property.
On Thursday, January 16, 2014 5:39:32 PM UTC-8, Michael Hunger wrote:
Because it is transactional.
So during your tx, whenever things are changed or created that correspond to
the configured auto-indexing properties, they are written to the index.
Michael
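(To make that concrete: if the benchmark creates nodes over the REST API with
something like the request below, and the written property is one of the
configured auto-indexed keys, each transaction also has to write a Lucene index
entry before it commits. The endpoint is Neo4j 2.0's legacy REST API; the
property name is just an illustration:)

  # creates a node; "name" assumed to be listed in node_keys_indexable
  curl -X POST http://localhost:7474/db/data/node \
       -H 'Content-Type: application/json' \
       -d '{"name":"test"}'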
On 17.01.2014 at 02:35, Bill Scheidel wrote:
Hmm... turning off node_auto_indexing drops it from 150ms to 40ms. Why
would auto indexing block a request?
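(For reference, the legacy auto-indexing switches live in conf/neo4j.properties;
the key names below are the documented ones, the indexed property is just an
example:)

  # disable legacy auto-indexing of node properties
  node_auto_indexing=false
  # properties that get indexed when auto-indexing is on, e.g.:
  # node_keys_indexable=name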
On Thursday, January 16, 2014 5:07:48 PM UTC-8, Bill Scheidel wrote:
My hdparm results are 118 MB/sec which isn't horrible, but it seems like
disk latency is the only thing that matters. I guess I'll try going back
to the stock settings and moving it over to an SSD and see what happens.
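(For reference, a figure like 118 MB/sec typically comes from hdparm's timed
sequential read test, e.g.:)

  sudo hdparm -t /dev/sda    # device name assumed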
On Thursday, January 16, 2014 4:34:30 PM UTC-8, Wes Freeman wrote:
My MacBook's VirtualBox (running CentOS) got good results too (99% <20ms,
50% <7ms). Was hoping for some weirdness. It is running on an SSD (vintage
2011 MacBook Pro 13"), hdparm 250 MB/sec, so not a great comparison. The VM
only has 800MB of RAM allocated, using Neo4j stock settings.
Wes
On Thu,
Yeah, definitely not great. Odd, though, since I never had a problem when
working with Postgres or Mongo, and they force things to disk as well. Local
requests never took more than a couple of ms, and then I switch over to Neo4j
and it's almost unusable. There are no flags to change the behavior
Not so good latency during the test, is it?
Here is my ioping (cool, never heard of that one, nice tool).
w/o ab
--- /tmp/ (hfs /dev/disk0s2) ioping statistics ---
11 requests completed in 10.2 s, 32.1 k iops, 125.3 mb/s
min/avg/max/mdev = 21 us / 31 us / 50 us / 7 us
w ab
29 requests completed in 2
And this is ioping without the ab test running:
31 requests completed in 30.5 s, 3.2 k iops, 12.5 mb/s
min/avg/max/mdev = 190 us / 312 us / 477 us / 63 us
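(Both outputs above are ioping's standard summary; a comparable run, with the
request count assumed and the target directory taken from Michael's output,
would be:)

  ioping -c 10 /tmp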
On Thursday, January 16, 2014 3:51:32 PM UTC-8, Bill Scheidel wrote:
I ran vmstat while running the ab test:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1      0 2429284 161572 2133284    0    0    22    75   98  354  9  8 80  2
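(Sampled rows like the one above come from running vmstat with an interval; the
actual interval isn't shown, but e.g.:)

  vmstat 1    # one sample line per second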
I also ran ioping
Let's continue this discussion here.
To collect the other information so far:
http://stackoverflow.com/questions/21145723/neo4j-2-0-0-poor-performance-for-dev-test-in-a-virtual-machine
The GH issue you raised, with Wes' and my answers:
https://github.com/neo4j/neo4j/issues/1829
My "ab" tests: h