[ceph-users] Re: Mapped rbd is very slow

2019-08-14 Thread Paul Emmerich
On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY wrote: > let's test random write > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads 256 > --io-total 10G --io-pattern rand > elapsed: 125 ops: 1310720 ops/sec: 10416.31 bytes/sec: 85330446.58 > > dd if=/dev/zero of=test b
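For reference, a hedged sketch of an rbd bench invocation closer to what a single-threaded, large-block dd measures (sequential writes, one thread); the io-size and io-total values here are assumptions, not taken from the thread:
  rbd -p kube bench kube/bench --io-type write --io-size 4M --io-threads 1 --io-total 10G --io-pattern seq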

[ceph-users] Re: Mapped rbd is very slow

2019-08-14 Thread Ilya Dryomov
On Wed, Aug 14, 2019 at 2:49 PM Paul Emmerich wrote: > > On Wed, Aug 14, 2019 at 2:38 PM Olivier AUDRY wrote: > > let's test random write > > rbd -p kube bench kube/bench --io-type write --io-size 8192 --io-threads > > 256 --io-total 10G --io-pattern rand > > elapsed: 125 ops: 1310720 ops/s

[ceph-users] Re: Mapped rbd is very slow

2019-08-14 Thread Olivier AUDRY
hello
I mean a filesystem mounted on top of a mapped rbd:
rbd create --size=10G kube/bench
rbd feature disable kube/bench object-map fast-diff deep-flatten
rbd map bench --pool kube --name client.admin
/sbin/mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
cd /mnt/
about the bench I did
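Not part of the original message, but for completeness, a hedged sketch of how such a mapping is typically verified and torn down after the test:
  rbd showmapped
  umount /mnt
  rbd unmap /dev/rbd/kube/bench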

[ceph-users] Re: Mapped rbd is very slow

2019-08-15 Thread Vitaliy Filippov
rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 10G --io-pattern rand
elapsed: 14 ops: 262144 ops/sec: 17818.16 bytes/sec: 72983201.32
It's a totally unreal number. Something is wrong with the test. Test it with `fio` please: fio -ioengine=rbd -name=test -bs=4k
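The fio command Vitaliy suggests is cut off above; a minimal sketch of the single-threaded librbd random-write test, consistent with the invocation Olivier runs in his reply (pool kube and image bench assumed):
  fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -pool=kube -rbdname=bench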

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
hello
here is the result:
fio --ioengine=rbd --name=test --bs=4k --iodepth=1 --rw=randwrite --runtime=60 --pool=kube --rbdname=bench
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
hello
just for the record, the nvme disks are pretty fast.
dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
100+0 records in
100+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0.49474 s, 1.7 GB/s
oau
On Friday, August 16, 2019 at 13:31 +0200, Olivier AUDRY wrote: > hello > > here

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread vitalif
Now, to go for "apples to apples", either run fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/nvmeX to compare with the single-threaded RBD random write result (the test is destructive, so use a separate partition without data
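A hedged sketch of that raw-device pair, using the spare partition /dev/nvme1n1p4 that Olivier tests in his reply (the write test destroys data on that partition):
  fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/nvme1n1p4
  fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -rw=randread -runtime=60 -filename=/dev/nvme1n1p4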

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
hello
here on the nvme partition directly:
- libaio randwrite /dev/nvme1n1p4 => WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=728MiB (763MB), run=60001-60001msec
- libaio randread /dev/nvme1n1p4 => READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
Ok, I read your link. My SSDs are bad. They got capacitors ... I didn't choose them; they come with the hardware I rent. Perhaps it would be better to switch to HDD. I cannot even put the journal on them ... bad news :(
On Friday, August 16, 2019 at 17:37 +0200, Olivier AUDRY wrote: > hello > > here on

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread vitalif
- libaio randwrite
- libaio randread
- libaio randwrite on mapped rbd
- libaio randread on mapped rbd
- rbd read
- rbd write
Recheck RBD with RAND READ / RAND WRITE: you're again comparing RANDOM and NON-RANDOM I/O. Your SSDs aren't that bad; 3000 single-thread iops isn't the worst possible
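Completing that matrix, a hedged sketch of the mapped-rbd and librbd variants alongside the raw-device commands already shown (device, pool, and image names taken from elsewhere in the thread; the mapped-device write test is destructive):
  fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/rbd/kube/bench
  fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -rw=randread -runtime=60 -filename=/dev/rbd/kube/bench
  fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -pool=kube -rbdname=bench
  fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -rw=randread -runtime=60 -pool=kube -rbdname=bench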

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randwrite -runtime=60 -pool=kube -rbdname=bench
WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=5548MiB (5817MB), run=61935-61935msec
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randread -runtime=60 -pool=k

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread vitalif
And once more, you're checking random I/O with a 4 MB (!!!) block size. Now recheck it with bs=4k.

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -pool=kube -rbdname=bench
READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=1308MiB (1371MB), run=60011-60011msec
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite -runtime=60 -pool

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread vitalif
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -filename=/dev/rbd/kube/bench
Now add -direct=1, because Linux async IO isn't async without O_DIRECT. :) + Repeat the same for randwrite.
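The corrected invocations with O_DIRECT, as asked for above, would look like this (a sketch reusing the mapped device and queue depth from the message; the randwrite run overwrites the filesystem on the mapped image):
  fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -direct=1 -rw=randread -runtime=60 -filename=/dev/rbd/kube/bench
  fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -direct=1 -rw=randwrite -runtime=60 -filename=/dev/rbd/kube/bench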

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
on a new ceph cluster with the same software and config (ansible) on the old hardware. 2 replicas, 1 host, 4 osd.
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -pool=kube -rbdname=bench
READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=7189MiB (75

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread vitalif
on a new ceph cluster with the same software and config (ansible) on the old hardware. 2 replicas, 1 host, 4 osd.
=> New hardware: 32.6MB/s READ / 10.5MiB/s WRITE
=> Old hardware: 184MiB/s READ / 46.9MiB/s WRITE
No discussion? I suppose I will keep the old hardware. What do you think? :D
In fact

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Olivier AUDRY
Write and read with 2 hosts, 4 osd:
mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
dd if=/dev/zero of=test bs=8192k count=1000 oflag=direct
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 117.541 s, 71.4 MB/s
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite -direct=1 -ru
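The fio commands are truncated above; a hedged reconstruction of the filesystem-level pair, assuming a test file on the mounted ext4 (the filename and size here are assumptions):
  fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -direct=1 -rw=randwrite -runtime=60 -size=1G -filename=/mnt/test
  fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -direct=1 -rw=randread -runtime=60 -size=1G -filename=/mnt/test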

[ceph-users] Re: Mapped rbd is very slow

2019-08-16 Thread Mike O'Connor
This probably muddies the water. Note: an active cluster with around 22 read/write IOPS and 200kB/s read/write. A CephFS mount with 3 hosts, 6 osd per host, with 8G public and 10G private networking for Ceph. No SSDs; mostly WD Red 1T 2.5" drives, some are HGST 1T 7200. root@blade7:~# fio -ioengine=li