Hi All,
I've been worried that I did not have a good backup of my cluster, and
having looked around I could not find anything that did not require
local storage.
I found Rhian's script while looking for a backup solution before a major
version upgrade and found that it worked very well.
I'm
On Fri, 2019-08-16 at 14:12 +0200, Jonas Jelten wrote:
> Hi!
>
> I've missed your previous post, but we do have inline_data enabled on our
> cluster.
> We've not yet benchmarked, but the filesystem has a wide variety of file
> sizes, and it sounded like a good idea to speed
> up performance. We
A couple of weeks ago, I sent a request to the mailing list asking
whether anyone was using the inline_data support in cephfs:
https://docs.ceph.com/docs/mimic/cephfs/experimental-features/#inline-data
I got exactly zero responses, so I'm going to formally propose that we
move to start
Hello,
So I am able to install Ceph on CentOS 7.4 and I can successfully integrate my
OpenStack testbed with it.
However, I have been facing an issue recently where, after deleting a stack, my
Cinder volumes are not getting deleted and are getting stuck.
Any idea on this issue?
Best Regards,
This probably muddies the water. Note: an active cluster with around 22
read/write IOPS and 200 kB/s read/write.
A CephFS mount with 3 hosts, 6 OSDs per host, with 8G public and 10G
private networking for Ceph.
No SSDs; mostly WD Red 1T 2.5" drives, some are HGST 1T 7200 RPM.
root@blade7:~# fio
Write and read with 2 hosts, 4 OSDs:
mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
dd if=/dev/zero of=test bs=8192k count=1000 oflag=direct
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 117.541 s, 71.4 MB/s
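As a quick sanity check, dd's reported rate can be recomputed from the raw numbers it printed (a one-liner sketch; the byte count and elapsed time are taken from the run above):

```shell
# Recompute dd's reported rate: bytes / seconds, in decimal MB/s
# (the same unit dd prints). Figures taken from the run above.
awk 'BEGIN { printf "%.1f MB/s\n", 8388608000 / 117.541 / 1000000 }'
```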
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite -direct=1
on a new ceph cluster with the same software and config (ansible) on
the old hardware: 2 replicas, 1 host, 4 OSDs.
=> New hardware : 32.6 MB/s READ / 10.5 MiB/s WRITE
=> Old hardware : 184 MiB/s READ / 46.9 MiB/s WRITE
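Note the units above are mixed (decimal MB/s for the new hardware, binary MiB/s for the old). A small helper sketch to put everything in decimal MB/s before comparing (the input figures are the ones quoted above):

```shell
# Convert MiB/s (binary) to MB/s (decimal) so old/new results compare
# in one unit. Input figures are the ones quoted in this thread.
mib_to_mb() { awk -v v="$1" 'BEGIN { printf "%.1f", v * 1048576 / 1000000 }'; }
echo "old read:  $(mib_to_mb 184) MB/s"
echo "old write: $(mib_to_mb 46.9) MB/s"
echo "new write: $(mib_to_mb 10.5) MB/s"
```

Even in one unit, the old hardware comes out roughly 4-6x faster here.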
No discussion? I suppose I will keep the old hardware. What do you
think? :D
In
on a new ceph cluster with the same software and config (ansible) on
the old hardware: 2 replicas, 1 host, 4 OSDs.
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -pool=kube -rbdname=bench
READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=7189MiB
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -filename=/dev/rbd/kube/bench
Now add -direct=1 because Linux async IO isn't async without O_DIRECT.
:)
+ Repeat the same for randwrite.
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -pool=kube -rbdname=bench
READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=1308MiB (1371MB), run=60011-60011msec
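At bs=4k, bandwidth translates directly into IOPS (bandwidth divided by block size). A quick sketch of what the randread figure above implies:

```shell
# IOPS implied by a 4k random-read bandwidth: MiB/s * bytes-per-MiB / block size.
# 21.8 MiB/s is the figure from the run above.
bw_to_iops() { awk -v mib="$1" -v bs="$2" 'BEGIN { printf "%.0f", mib * 1048576 / bs }'; }
echo "$(bw_to_iops 21.8 4096) IOPS"
```

That is roughly 5600 IOPS at iodepth=32.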
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite -runtime=60
And once more you're checking random I/O with 4 MB !!! block size.
Now recheck it with bs=4k.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randwrite -runtime=60 -pool=kube -rbdname=bench
WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=5548MiB (5817MB), run=61935-61935msec
fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randread -runtime=60
- libaio randwrite
- libaio randread
- libaio randwrite on mapped rbd
- libaio randread on mapped rbd
- rbd read
- rbd write
recheck RBD with RAND READ / RAND WRITE
you're again comparing RANDOM and NON-RANDOM I/O
your SSDs aren't that bad, 3000 single-thread iops isn't the worst
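At queue depth 1, IOPS is just the inverse of per-operation latency, so the 3000 single-thread IOPS mentioned above corresponds to roughly a third of a millisecond per write (a quick sketch):

```shell
# Average per-op latency implied by single-threaded IOPS: 1000 ms / IOPS.
iops_to_lat_ms() { awk -v i="$1" 'BEGIN { printf "%.2f", 1000 / i }'; }
echo "$(iops_to_lat_ms 3000) ms per op"
```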
OK, I read your link. My SSDs are bad. They got capacitors ... I didn't
choose them; they come with the hardware I rent. Perhaps it would be
better to switch to HDDs. I cannot even put the journal on them ... bad news
:(
On Friday 16 August 2019 at 17:37 +0200, Olivier AUDRY wrote:
> hello
>
> here
hello
here on the nvme partition directly
- libaio randwrite /dev/nvme1n1p4 => WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=728MiB (763MB), run=60001-60001msec
- libaio randread /dev/nvme1n1p4 => READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s
Now to go for "apples to apples" either run
fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/nvmeX
to compare with the single-threaded RBD random write result (the test is
destructive, so use a separate partition without
Personally, I would not try to create a Ceph cluster across consumer
Internet links; usually the upload speed is so slow and Ceph is so chatty
that it would make for a horrible experience. If you are looking for a
backup solution, then I would look at some sort of n-way rsync solution, or
hello
just for the record, the NVMe disks are pretty fast.
dd if=/dev/zero of=test bs=8192k count=100 oflag=direct
100+0 records in
100+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0.49474 s, 1.7 GB/s
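The 1.7 GB/s figure checks out when recomputed from the raw numbers in the output above (a one-liner sketch):

```shell
# Recompute dd's rate for the NVMe run above: bytes / seconds, in decimal GB/s.
awk 'BEGIN { printf "%.1f GB/s\n", 838860800 / 0.49474 / 1000000000 }'
```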
oau
On Friday 16 August 2019 at 13:31 +0200, Olivier AUDRY wrote:
> hello
>
>
Hi!
I've missed your previous post, but we do have inline_data enabled on our
cluster.
We've not yet benchmarked, but the filesystem has a wide variety of file sizes,
and it sounded like a good idea to speed
up performance. We mount it with the kernel client only, and I've had the
subjective
hello
here the result :
fio --ioengine=rbd --name=test --bs=4k --iodepth=1 --rw=randwrite --runtime=60 -pool=kube -rbdname=bench
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1):
Hi.
It's not Ceph that's to blame!
Linux does not support cached asynchronous I/O, except for the new
io_uring! I.e., it supports aio calls, but they just block when you're
trying to do them on an FD opened without O_DIRECT.
So basically what happens when you benchmark it with -ioengine=libaio
Hi,
I think one of your problems is bcache.
Here is one example:
https://habr.com/en/company/selectel/blog/450818
BR,
Sebastian
> On 16 Aug 2019, at 00:49, Rich Bade wrote:
>
> Unfortunately the scsi reset on this vm happened again last night so this
> hasn't resolved the issue.
> Thanks