Bob,

I don't know if you have already answered these questions. 

Which JDK (and which version) are you using, and what are the JVM memory
settings?
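One quick way to capture that from a shell (the -XX:+PrintFlagsFinal flag is a Sun/Oracle HotSpot option; the flag names are an assumption on other VMs):

```shell
# Print the JDK vendor and version in use
java -version 2>&1

# Dump the effective heap settings (HotSpot only; flag may not exist on other VMs)
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -Ei 'MaxHeapSize|InitialHeapSize'
```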

Do you have a profiler handy that you could throw at your benchmark? (E.g.
YourKit has a 30-day trial, and so do other profilers.)

Do you have the source code of your tests at hand, so that we could run exactly
the same code on our own Linux systems for cross-checking?

Which Linux distribution is it, and is it 64- or 32-bit? Do you also have a
disk formatted with ext3 for cross-checking? (Perhaps just a loopback device.)
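For the loopback route, a minimal sketch (the image path and mount point are just examples; mkfs and mount need root):

```shell
# Create a 1 GB file-backed image (path is only an example)
dd if=/dev/zero of=/tmp/ext3.img bs=1M count=1024

# Put an ext3 filesystem on it (-F: proceed even though the target is a regular file)
mkfs.ext3 -F /tmp/ext3.img

# Mount it through a loop device (needs root)
mkdir -p /mnt/ext3test
mount -o loop /tmp/ext3.img /mnt/ext3test
```

Running the same benchmark against /mnt/ext3test would isolate ext4-specific effects from general disk behaviour.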

How much memory does the Linux box have available?
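All three of the last questions can be answered in one go (lsb_release may require the lsb-release package on some distributions):

```shell
uname -m        # x86_64 means a 64-bit kernel, i686/i386 means 32-bit
lsb_release -a  # distribution name and release
free -m         # total and available memory in MB
```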

Thanks so much.

Michael

On 21 Apr 2011, at 21:53, Bob Hutchison wrote:

> 
> On 2011-04-20, at 7:30 AM, Tobias Ivarsson wrote:
> 
>> Sorry I got a bit distracted when writing this. I should have added that I
>> then want you to send the results of running that benchmark to me so that I
>> can further analyze what the cause of these slow writes might be.
>> 
>> Thank you,
>> Tobias
> 
> That's what I figured you meant. Sorry for the delay, here they are:
> 
> On a HP z400, quad Xeon W3550 @ 3.07GHz
> ext4 filesystem
> -----
> 
>>> dd if=/dev/urandom of=store bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 111.175 s, 9.4 MB/s
>>> dd if=store of=/dev/null bs=100M
> 10+0 records in
> 10+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.281153 s, 3.7 GB/s
>>> dd if=store of=/dev/null bs=100M
> 10+0 records in
> 10+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.244339 s, 4.3 GB/s
>>> dd if=store of=/dev/null bs=100M
> 10+0 records in
> 10+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.242583 s, 4.3 GB/s
> 
> 
>>> ./run ../store logfile 33 100 500 100
> tx_count[100] records[31397] fdatasyncs[100] read[0.9881029 MB] wrote[1.9762058 MB]
> Time was: 5.012
> 19.952114 tx/s, 6264.365 records/s, 19.952114 fdatasyncs/s, 201.87897 kB/s on reads, 403.75793 kB/s on writes
> 
>>> ./run ../store logfile 33 1000 5000 10 
> tx_count[10] records[30997] fdatasyncs[10] read[0.9755144 MB] wrote[1.9510288 MB]
> Time was: 0.604
> 16.556292 tx/s, 51319.54 records/s, 16.556292 fdatasyncs/s, 1653.8523 kB/s on reads, 3307.7046 kB/s on writes
> 
>>> ./run ../store logfile 33 1000 5000 100 
> tx_count[100] records[298245] fdatasyncs[100] read[9.386144 MB] wrote[18.772287 MB]
> Time was: 199.116
> 0.5022198 tx/s, 1497.8455 records/s, 0.5022198 fdatasyncs/s, 48.270412 kB/s on reads, 96.540825 kB/s on writes
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> 1  2      0 8541712 336716 3670940    0    0     1     7   12   20  4  1 95  0
> 0  2      0 8525712 336716 3670948    0    0     0   979 1653 3186  4  1 60 35
> 1  2      0 8525220 336716 3671204    0    0     0  1244 1671 3150  4  1 71 24
> 0  2      0 8524724 336716 3671332    0    0     0   709 1517 3302  4  1 65 30
> 0  2      0 8524476 336716 3671460    0    0     0  1033 1680 69342  5  7 59 
> 29
> 0  2      0 8539168 336716 3671588    0    0     0  1375 1599 3272  3  1 70 25
> 1  2      0 8538860 336716 3671716    0    0     0  1157 1594 3097  3  1 72 24
> 0  1      0 8541340 336716 3671844    0    0     0  1151 1512 3182  3  2 70 25
> 0  1      0 8524812 336716 3671972    0    0     0  1597 1641 3391  4  2 72 22
> 
> 
> _______________________________________________
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
