Hello,
On Thu, 26 Dec 2019 18:11:29 +0100 Ml Ml wrote:
> Hello Christian,
>
> thanks for your reply. How should I benchmark my OSDs?
>
Benchmarking individual components can be helpful if you suspect
something specific, but first you need to get a grip on what your
systems are doing as a whole; re-read my mail and
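For a cluster-wide first look, rather than benchmarking one disk at a
time, something like this shows the per-OSD latencies the cluster
already tracks; a single OSD with much higher values than its peers is
a prime suspect:

  # list commit/apply latency as reported by each OSD
  ceph osd perf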
Hello List,
I have size = 3 and min_size = 2 with 3 nodes.
My OSDs:
ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       60.17775 root default
-2       20.21155     host ceph01
 0   hdd  1.71089         osd.0   up          1.0     1.0
 8   hdd  1.71660         osd.8
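For reference: with size = 3 on a 3-node cluster, every placement group
keeps one replica per host, and min_size = 2 means client IO keeps
flowing with one host down but blocks if a second replica becomes
unavailable. The per-pool settings can be double-checked with:

  # show size, min_size and other settings for every pool
  ceph osd pool ls detail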
Hello Christian,
thanks for your reply. How should I benchmark my OSDs?
"dd bs=1M count=2048 if=/dev/sdX of=/dev/null" for each OSD?
Here are my OSD (write) benchmarks:
root@ceph01:~# ceph tell osd.* bench -f plain
osd.0: bench: wrote 1GiB in blocks of 4MiB in 7.80794 sec at 131MiB/sec 32 IOPS
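A caveat on the proposed dd read test: without direct IO it largely
measures the page cache rather than the disk. A variant along these
lines (block size and count are just examples) gives more honest
per-device read numbers:

  # bypass the page cache so the drive itself is measured
  dd if=/dev/sdX of=/dev/null bs=1M count=2048 iflag=direct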
I would start by looking at "ceph status", at drive IO with "iostat -x 1
/dev/sd{a..z}", and at the CPU/RAM usage of the active MDS. If "ceph status"
warns that the MDS cache is oversized, that may be an easy fix.
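If that warning does appear, a minimal fix would be raising the cache
limit; the 8 GiB value below is only an example and should be sized to
the MDS host's RAM:

  # raise the MDS cache limit to 8 GiB (value is in bytes)
  ceph config set mds mds_cache_memory_limit 8589934592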
On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover
wrote:
> Hello,
> Recently, after
Hello,
Recently, after deleting some fs data in a small-scale Ceph cluster,
some clients' IO performance became bad, especially latency. For example,
opening a tiny text file with vim can take nearly twenty seconds. I am not
clear about how to diagnose the cause; could anyone give any advice?
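A few low-risk commands that should help narrow down where the latency
comes from (the MDS daemon name below is a placeholder):

  # overall cluster health and any warnings
  ceph status
  # MDS ranks, request rate and cached inode counts at a glance
  ceph fs status
  # requests currently stuck in the MDS, with their age (run on the MDS host)
  ceph daemon mds.<name> dump_ops_in_flight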
Hi all,
I have a Ceph cluster with 4+2 EC used as a secondary storage system for
offloading big files from another storage system. Even though most of the
files are big (at least 50MB), we also have some small objects, less
than 4MB each. The current storage usage is 358TB of raw data and 237TB
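For context on those numbers: with 4+2 erasure coding each object is
stored as 4 data chunks plus 2 parity chunks, so the raw-to-usable
ratio is 6/4 = 1.5. If the 237TB figure is the logical data, it would
account for roughly 237 x 1.5 = 355TB of the 358TB raw usage, i.e. the
overhead is almost entirely the expected EC cost.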