On modern Ubuntu installs, the default security configuration offers to
"Encrypt the user's home directory." This mounts a stacked encrypted
filesystem (ecryptfs) on top of /home/you when you log in.

I found this gave horrific performance running the kernel tests, so I moved
my working directory onto a non-encrypted area (I used a plain ext4
filesystem) and it is blisteringly fast on that (SSD for the win!).

Can you run the command:

    $ df -T /path/to/database/directory

and report back the filesystem type? If it says ecryptfs, you should consider
moving the database somewhere else. Reading and writing many small files
through an untuned ecryptfs seems to perform pretty poorly.
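
For illustration, this is roughly what an encrypted home looks like (the
path and numbers below are invented; the thing to check is the Type column):

    $ df -T /home/you/neo4j-db
    Filesystem          Type      1K-blocks     Used Available Use% Mounted on
    /home/you/.Private  ecryptfs   40000000 10000000  30000000  25% /home/you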

Cheers,
Dan

On 14 July 2011 13:34, Michael Hunger <michael.hun...@neotechnology.com> wrote:

> James,
>
> could you perhaps post the exact specs of the machines that performed
> slowly?
> Were there any other applications running on those, and what does top show
> during the operation?
>
> The test I ran was on a virtual machine (VMware) :)
> We sometimes had massive issues with certain raid controllers on machines
> running ESX servers.
>
> I think this is really just one bit of operating system configuration that
> needs to be tweaked.
>
> When working on performance issues, I have seen (and successfully tried)
> tweaking the I/O scheduler via a Linux kernel boot parameter.
> Try one of these (see the sketch below):
> - anticipatory scheduler: elevator=as
> - deadline scheduler: elevator=deadline
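
A sketch of the boot-parameter route, assuming Ubuntu with GRUB 2 (file
paths and the device name sda differ on other setups):

    # /etc/default/grub -- append the scheduler to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

    $ sudo update-grub && sudo reboot

    # after the reboot, the active scheduler is shown in brackets
    $ cat /sys/block/sda/queue/scheduler
    noop anticipatory [deadline] cfq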
>
> Other than that I'd say a linux/sys admin probably has much more to say
> about options / possible investigation ideas.
>
> Cheers
>
> Michael
>
> You could also try to get the information from hdparm:
> hdparm -t /dev/sdX
> hdparm -i /dev/sdX
>
> (if virtual, then check the real drive outside the VM)
>
> PS. I also found this superuser.com thread where some of the options were
> discussed, like a wrong initial partition offset, or the scheduler as
> mentioned above (though they set it at runtime rather than via a boot
> param: echo deadline > /sys/block/sda/queue/scheduler)
> http://superuser.com/questions/101640/disk-operations-freeze-debian
>
> http://blog.vacs.fr/index.php?post/2010/08/28/Solving-Linux-system-lockup-when-intensive-disk-I/O-are-performed
>
> PPS: I am forking this discussion to help.neo4j.org, so we don't have to go
> via the mailing list for details
>
>
>
> Am 14.07.2011 um 13:08 schrieb Jean-Pierre Bergamin:
>
> > It turns out that the IO systems of some machines have problems
> > writing the small transactions that neo4j generates, and that the
> > OS (Windows vs. Linux) does not change much here. Machines are either
> > fast on both OSes, or slow on both.
> > As you also pointed out, waiting for IO is the critical point. I ran a
> > profiler on one slow system and saw that the program spent 95% of its
> > time in the call to sun.nio.ch.FileChannelImpl.force(boolean).
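
That force(boolean) call is the JVM's fdatasync/fsync. A minimal sketch of
the pattern (the file name and record size here are invented; this is not
the actual write-test source) shows why a slow sync path caps transaction
throughput:

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class FsyncBench {
        public static void main(String[] args) throws Exception {
            int txCount = 100;                           // "transactions" to commit
            ByteBuffer record = ByteBuffer.allocate(33); // one small record per tx
            RandomAccessFile raf = new RandomAccessFile("logfile", "rw");
            FileChannel channel = raf.getChannel();
            long start = System.nanoTime();
            for (int i = 0; i < txCount; i++) {
                record.clear();
                channel.write(record);
                channel.force(false); // fdatasync: block until the disk acknowledges
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%.1f tx/s%n", txCount / seconds);
            raf.close();
        }
    }

Every iteration waits for a physical (or virtualized) flush, so tx/s is
bounded by sync latency rather than bandwidth, which matches the low bo
and high wa values in the vmstat output further down the thread.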
> >
> > But what really baffles me is the fact that the differences are not in
> > the range of 50% to 200% but in an order of magnitude of 4000% to
> > 10'000%! I've never seen similar behaviour with other db systems or
> > other benchmarks. What also surprises me is that we had not one system
> > that performs so-so. They all have 700 traversals/s or more or 50
> > traversals/s or less. Performance tuning of such a slow linux
> > system, as described in the neo4j wiki, had no effect.
> >
> > We were evaluating MongoDB and ran tests with very small transactions
> > as well (inserts) on the same developer machines as the neo4j tests
> > and got performance differences of around 50% to 200% - which is
> > entirely explainable given the different hardware.
> >
> > What makes me very, very nervous is the fact that the test runs very
> > poorly on a virtual machine on one of our XenServers. Does anyone have
> > a VMware VM instance lying around where this simple test could be
> > run as well? I'd be very interested to see some results with
> > other virtualization technologies.
> >
> >
> >
> > Best regards,
> > James
> >
> >
> > 2011/7/13 Michael Hunger <michael.hun...@neotechnology.com>:
> >> James,
> >>
> >> 10 tx/s in the write test does not seem like very much.
> >>
> >> A normal system should be able to run several hundred of them.
> >> With your small store-file I get:
> >>
> >> root@cineasts:~/write-test# ./run store logfile 33 1000 5000 100
> >> tx_count[100] records[312035] fdatasyncs[100] read[9.820132 MB] wrote[19.640265 MB]
> >> Time was: 0.25
> >> 400.0 tx/s, 1248140.0 records/s, 400.0 fdatasyncs/s, 40223.26 kB/s on reads, 80446.52 kB/s on writes
> >>
> >> From the statistics output you can also see that your system encounters
> >> an io_wait of around 20% during the operation.
> >> That high percentage of io_wait increases the load on the system
> >> manifold and leaves it with almost no cpu utilization, because it is
> >> waiting _all the time_ for the disk operations to finish.
> >>
> >> Perhaps you should have a look at: filesystem, disk controller,
> >> interrupts, etc. Unfortunately I'm no linux sysadmin guru, so I don't
> >> know which other knobs affect the io-latency of your local disk.
> >> Perhaps someone else can chime in.
> >>
> >> For comparison, you can also fire up some Amazon EC2 instances and
> >> check your test there.
> >>
> >> Cheers
> >>
> >> Michael
> >>
> >>
> >> Am 13.07.2011 um 13:58 schrieb Jean-Pierre Bergamin:
> >>
> >>> I ran the write_test on an Ubuntu Server 10.04 machine that performs
> >>> badly with our test app.
> >>> I first tried to use the 1GB store file as described here:
> >>> http://wiki.neo4j.org/content/Linux_Performance_Guide
> >>> A flush-x-x process got spawned even with large transactions and the
> >>> performance was very bad, probably because the whole 1GB file cannot
> >>> be cached in memory (1.5 GB RAM in this machine).
> >>>
> >>> james@v4-test:~/write-test$ ./run ../store logfile 33 100 500 100
> >>> tx_count[100] records[27301] fdatasyncs[100] read[0.85919666 MB] wrote[1.7183933 MB]
> >>> Time was: 279.747
> >>> 0.35746583 tx/s, 97.59175 records/s, 0.35746583 fdatasyncs/s, 3.1450467 kB/s on reads, 6.2900934 kB/s on writes
> >>>
> >>>
> >>> Since the storefile of our test app is 51 bytes I reran the write_test
> >>> with a store file of 100K:
> >>>
> >>> james@v4-test:~$ dd if=/dev/urandom of=store bs=1K count=100
> >>> 100+0 records in
> >>> 100+0 records out
> >>> 102400 bytes (102 kB) copied, 0.0399093 s, 2.6 MB/s
> >>> james@v4-test:~$ dd if=store of=/dev/null bs=1K
> >>> 100+0 records in
> >>> 100+0 records out
> >>> 102400 bytes (102 kB) copied, 0.000379937 s, 270 MB/s
> >>>
> >>> With the smaller store file I got reasonable results:
> >>>
> >>> james@v4-test:~/write-test$ ./run ../store logfile 33 1000 5000 100
> >>> tx_count[100] records[286685] fdatasyncs[100] read[9.022336 MB] wrote[18.044672 MB]
> >>> Time was: 9.935
> >>> 10.065425 tx/s, 28856.062 records/s, 10.065425 fdatasyncs/s, 929.9317 kB/s on reads, 1859.8634 kB/s on writes
> >>>
> >>> james@v4-test:~/write-test$ ./run ../store logfile 33 1000 2000 100
> >>> tx_count[100] records[148032] fdatasyncs[100] read[4.6587524 MB] wrote[9.317505 MB]
> >>> Time was: 8.786
> >>> 11.381743 tx/s, 16848.623 records/s, 11.381743 fdatasyncs/s, 542.9732 kB/s on reads, 1085.9464 kB/s on writes
> >>>
> >>> james@v4-test:~/write-test$ ./run ../store logfile 33 100 500 100
> >>> tx_count[100] records[28514] fdatasyncs[100] read[0.8973713 MB] wrote[1.7947426 MB]
> >>> Time was: 7.301
> >>> 13.6967535 tx/s, 3905.4924 records/s, 13.6967535 fdatasyncs/s, 125.860596 kB/s on reads, 251.72119 kB/s on writes
> >>>
> >>> james@v4-test:~/write-test$ ./run ../store logfile 33 1 2 100
> >>> tx_count[100] records[100] fdatasyncs[100] read[0.0031471252 MB] wrote[0.0062942505 MB]
> >>> Time was: 6.107
> >>> 16.374653 tx/s, 16.374653 records/s, 16.374653 fdatasyncs/s, 0.52769876 kB/s on reads, 1.0553975 kB/s on writes
> >>>
> >>> So even with very small transactions, it seems a reasonable number
> >>> of transactions per second can be achieved on this machine with the
> >>> write_test.
> >>>
> >>>
> >>> When running the write_test app, I get the following vmstat output:
> >>>
> >>> james@v4-test:~/neo4j-traversal$ vmstat 3
> >>> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> >>> r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> >>> 0  1      0 830000  32000  61272    0    0     4   157    8    2  8 11 76  4
> >>> 0  1      0 829776  32352  61288    0    0     0   281  475  663  1  0 46 53
> >>> 0  1      0 829520  32704  61280    0    0     0   251  483  664  0  1 51 48
> >>> 0  1      0 828992  33104  61292    0    0     0   293  524  698  0  1 51 48
> >>> 0  1      0 828496  33496  61288    0    0     0   261  486  677  0  1 50 49
> >>> 0  1      0 828264  33848  61288    0    0     0   249  469  666  1  1 46 52
> >>> 0  1      0 827860  34192  61300    0    0     0   257  431  633  0  1 48 51
> >>> 0  1      0 827280  34552  61320    0    0     0   265  486  659  1  0 52 47
> >>>
> >>> When running our sample app, the vmstat output is:
> >>>
> >>> james@v4-test:~/write-test$ vmstat 3
> >>> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> >>> r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> >>> 1  0      0 756772  23684  67676    0    0     4   157    8    2  8 11 76  4
> >>> 0  0      0 759260  23828  67744    0    0     0   352  472  655 10  0 64 26
> >>> 0  0      0 759136  23948  67804    0    0     0   305  462  654  9  1 69 21
> >>> 0  0      0 758608  24076  67892    0    0     0   343  479  663  7  1 67 26
> >>> 0  2      0 758236  24228  67956    0    0     0   375  485  678 12  0 62 25
> >>> 0  0      0 756208  24360  68012    0    0     0   333  523  680 13  1 68 18
> >>> 0  0      0 756828  24496  68080    0    0     0   348  461  654  7  0 69 24
> >>> 0  1      0 756604  24632  68136    0    0     0   355  459  680  5  0 72 22
> >>> 0  0      0 756356  24768  68204    0    0     0   371  445  649  5  0 69 26
> >>> 0  0      0 756232  24912  68272    0    0     0   375  444  649  3  1 77 19
> >>> 0  0      0 756232  25040  68328    0    0     0   308  401  620  5  1 65 29
> >>>
> >>> The block out (bo) value is not much higher on our test app.
> >>>
> >>> There is no flush-x-x process when running our test app.
> >>>
> >>> Here is the output of the other commands, while running the test app:
> >>>
> >>> james@v4-test:~$ mpstat 3
> >>> Linux 2.6.32-32-server (v4-test)        07/13/2011      _x86_64_  (2 CPU)
> >>>
> >>> 01:49:01 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
> >>> 01:49:04 PM  all   61.72    0.00    4.29    0.17    0.00    0.00    0.00    0.00   33.83
> >>> 01:49:07 PM  all   81.09    0.00    3.12    1.81    0.00    0.33    0.00    0.00   13.65
> >>> 01:49:10 PM  all   63.34    0.00    7.36    4.58    0.33    0.82    0.00    0.00   23.57
> >>> 01:49:13 PM  all   31.75    0.00    2.45   20.46    0.00    0.00    0.00    0.00   45.34
> >>> 01:49:16 PM  all    5.41    0.00    0.98   18.20    0.00    0.00    0.00    0.00   75.41
> >>> 01:49:19 PM  all   11.99    0.00    1.15   24.14    0.00    0.00    0.00    0.00   62.73
> >>> 01:49:22 PM  all    5.15    0.00    1.61   21.58    0.00    0.00    0.00    0.00   71.66
> >>> 01:49:25 PM  all    4.38    0.00    0.16   21.72    0.00    0.00    0.00    0.00   73.74
> >>> 01:49:28 PM  all    4.66    0.00    0.67   23.79    0.00    0.00    0.00    0.00   70.88
> >>> 01:49:31 PM  all    7.69    0.00    0.16   19.71    0.00    0.00    0.00    0.00   72.44
> >>> 01:49:34 PM  all    4.40    0.00    0.98   24.96    0.00    0.00    0.00    0.00   69.66
> >>> 01:49:37 PM  all    2.59    0.00    0.81   14.40    0.00    0.00    0.00    0.00   82.20
> >>>
> >>> james@v4-test:~$ vmstat -S M 3
> >>> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> >>> r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> >>> 0  0      0    751     44    108    0    0     4   157    8    3  8 11 76  4
> >>> 2  0      0    715     44    108    0    0     0    39  455  594 41  3 56  0
> >>> 1  1      0    684     44    108    0    0    49     0  623  707 86  3 10  1
> >>> 0  0      0    679     44    109    0    0     1    48 1781 2981 76  7 16  1
> >>> 0  1      0    661     44    106    0    0     0   315 1198 1965 34  5 43 17
> >>> 0  0      0    661     44    106    0    0     0   340  475  672  7  1 72 21
> >>> 0  0      0    671     44    107    0    0     0   344  511  698  9  1 65 24
> >>> 0  1      0    671     44    107    0    0     0   355  495  701  8  2 70 21
> >>> 0  1      0    670     44    107    0    0     0   311  473  659  4  0 74 22
> >>> 0  1      0    670     44    107    0    0     0   336  467  672  5  0 76 19
> >>> 0  0      0    670     45    107    0    0     0   351  492  692  8  0 67 24
> >>>
> >>>
> >>> james@v4-test:~/write-test$ iostat -m 3
> >>> Linux 2.6.32-32-server (v4-test)        07/13/2011      _x86_64_  (2 CPU)
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           8.11    0.00   11.45    4.39    0.00   76.05
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda               5.07         0.01         0.31       9291     348972
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          48.03    0.00    3.78    0.16    0.00   48.03
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda               4.00         0.00         0.04          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          86.57    0.00    2.65    0.50    0.00   10.28
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda             104.33         0.05         0.00          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          70.03    0.00    9.12    1.30    0.00   19.54
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda            1595.67         0.00         0.05          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          32.18    0.00    3.12   20.69    0.00   44.01
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda             245.00         0.00         0.35          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           6.37    0.00    0.82   20.42    0.00   72.39
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              98.33         0.00         0.33          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          10.56    0.00    1.32   23.43    0.00   64.69
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              95.33         0.00         0.34          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           6.74    0.00    1.44   20.87    0.00   70.95
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              89.33         0.00         0.33          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           3.58    0.00    0.33   22.80    0.00   73.29
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              82.33         0.00         0.32          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           5.09    0.00    0.49   17.73    0.00   76.68
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              95.67         0.00         0.33          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           7.93    0.00    0.16   24.60    0.00   67.31
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              91.67         0.00         0.34          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           3.91    0.00    0.81   25.08    0.00   70.20
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              94.33         0.00         0.34          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           3.22    0.00    0.97   16.43    0.00   79.39
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              89.33         0.00         0.35          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           2.62    0.00    0.66   28.69    0.00   68.03
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              67.00         0.00         0.25          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           4.23    0.00    0.49   28.34    0.00   66.94
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              93.33         0.00         0.47          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          13.58    0.00    0.33   19.54    0.00   66.56
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              89.67         0.00         0.34          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>          23.69    0.00    1.80   22.88    0.00   51.63
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              68.33         0.00         0.26          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           6.57    0.00    0.33   18.56    0.00   74.55
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              91.67         0.00         0.31          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           2.64    0.00    0.99   30.31    0.00   66.06
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              82.67         0.00         0.34          0          1
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           0.00    0.00    0.83   17.69    0.00   81.49
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              91.00         0.00         0.30          0          0
> >>> sdb               0.00         0.00         0.00          0          0
> >>>
> >>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>>           3.96    0.00    0.95   26.78    0.00   68.30
> >>>
> >>> Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
> >>> sda              80.00         0.00         0.30          0          0
> >>>
> >>>
> >>> I'll try to run a profiler and see what the problem could be.
> >>>
> >>> Any other ideas in the meantime?
> >>>
> >>>
> >>>
> >>> Best regards
> >>> James
> >>>
> >>>
> >>> 2011/7/13 Michael Hunger <michael.hun...@neotechnology.com>:
> >>>> James,
> >>>>
> >>>> I reran the tests with the binary JDK from Oracle as well as the one
> >>>> installed via apt-get.
> >>>>
> >>>> Both yielded the same good results.
> >>>>
> >>>> Did you re-run the "linux-write-test" on the differently performing
> >>>> systems (especially, try setting it to small tx - 10 or so)?
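
For example, a small-tx run might look like this (my reading of the
arguments from the runs quoted earlier in the thread is: record size,
min/max records per tx, tx count, so treat the exact values as a guess):

    $ ./run store logfile 33 1 10 100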
> >>>>
> >>>> Also please look into system configurations like file system,
> >>>> page-sizes etc.
> >>>>
> >>>> You can also try to run a profiler (like visualvm, or yourkit attached
> >>>> to the remote system) so that the bottleneck becomes obvious.
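
Both can attach to a stock Sun JDK; a minimal sketch (the PID is a
placeholder):

    $ jvisualvm                       # GUI profiler bundled with JDK 6u7+
    $ jstack <pid> | grep -B2 force   # quick look: threads blocked in force()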
> >>>>
> >>>> Thanks
> >>>>
> >>>> Michael
> >>>>
> >>>> Am 13.07.2011 um 12:06 schrieb Jean-Pierre Bergamin:
> >>>>
> >>>>> On Ubuntu, we installed the sun jdk with apt-get:
> >>>>>
> >>>>> $ sudo apt-get install python-software-properties
> >>>>> $ sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
> >>>>> $ sudo apt-get update
> >>>>> $ sudo apt-get install sun-java6-jdk
> >>>>>
> >>>>>
> >>>>> Best regards,
> >>>>> James
> >>>>>
> >>>>> 2011/7/13 Michael Hunger <michael.hun...@neotechnology.com>:
> >>>>>> James,
> >>>>>>
> >>>>>> So you didn't install OpenJDK on the unix machines using apt-get, but
> >>>>>> the Sun/Oracle JDK binary?
> >>>>>>
> >>>>>> Cheers
> >>>>>>
> >>>>>> Michael
> >>>>>>
> >>>>>> Am 13.07.2011 um 11:49 schrieb Jean-Pierre Bergamin:
> >>>>>>
> >>>>>>> Hi Michael
> >>>>>>>
> >>>>>>> All systems have at least Sun's JDK 1.6, but with different minor
> >>>>>>> versions. On my Windows machine that normally runs fast, I have
> >>>>>>> 1.6.0_22; on CentOS, which is slow, there is also 1.6.0_22. So I
> >>>>>>> *believe* the problem is not related to the JDK version.
> >>>>>>> The systems are all commodity hardware - a variety of newer desktop
> >>>>>>> and laptop machines. I'll gather all specs and let you know.
> >>>>>>>
> >>>>>>> If you'd like to start the app with different JVM params, you'd need
> >>>>>>> to use exec:exec with java as the executable and specify the params
> >>>>>>> in the pom.xml. See
> >>>>>>> http://mojo.codehaus.org/exec-maven-plugin/examples/example-exec-for-java-programs.html
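
For illustration, a minimal exec:exec configuration along the lines of that
page (the main class and JVM flags here are placeholders):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <configuration>
        <executable>java</executable>
        <arguments>
          <argument>-server</argument>
          <argument>-Xmx512m</argument>
          <argument>-classpath</argument>
          <classpath/>  <!-- expands to the project's runtime classpath -->
          <argument>com.example.TraversalTest</argument>
        </arguments>
      </configuration>
    </plugin>

Run it with: mvn exec:exec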
> >>>>>>>
> >>>>>>> We already experimented on one Linux instance with different
> >>>>>>> memory settings etc., but the very low rate did not change. And the
> >>>>>>> 7 nodes should not eat up too much memory anyway. ;-)
> >>>>>>> We also applied the tips from the Linux Performance Guide in the
> >>>>>>> neo4j wiki without any noticeable changes.
> >>>>>>>
> >>>>>>>
> >>>>>>> Best regards,
> >>>>>>> James
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> 2011/7/13 Michael Hunger <michael.hun...@neotechnology.com>:
> >>>>>>>> What version of the JDK are you using on the different systems?
> >>>>>>>>
> >>>>>>>> The maven exec goal doesn't define any JVM parameters?
> >>>>>>>>
> >>>>>>>> Could you try to explicitly set those to (for instance) "-server
> >>>>>>>> -Xmx512m"? Different OS implementations of the JVM have different
> >>>>>>>> mechanisms for selecting memory, etc.
> >>>>>>>>
> >>>>>>>> What are the memory / cpu / disk characteristics of the different
> >>>>>>>> systems?
> >>>>>>>>
> >>>>>>>> On my MacBook Air I get around 2800 traversals / sec.
> >>>>>>>>
> >>>>>>>> I'll look into it.
> >>>>>>>>
> >>>>>>>> Cheers
> >>>>>>>>
> >>>>>>>> Michael
> >>>>>>>>
> >>>>>>>> Am 13.07.2011 um 10:28 schrieb Jean-Pierre Bergamin:
> >>>>>>>>
> >>>>>>>>> 2011/7/13 Jean-Pierre Bergamin <jpberga...@gmail.com>:
> >>>>>>>>>
> >>>>>>>>>> We have severe performance issues on Linux.
> >>>>>>>>>
> >>>>>>>>> We just ran the tests on another Windows 7 x64 laptop and also
> >>>>>>>>> faced very bad performance, with just 26 traversals per second
> >>>>>>>>> (compared to 1000 on another Windows machine).
> >>>>>>>>> So it is not a Linux problem per se, but this behaviour shows up
> >>>>>>>>> on some machines.
> >>>>>>>>>
> >>>>>>>>> Any thoughts?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Best regards,
> >>>>>>>>> James