Hi,
for RBD clients you can see statistics via:
rbd perf image iostat or
rbd perf image iotop.
You will see the current read/write IOPS and the bytes read/written.
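For example (a minimal sketch; the pool name "rbd" is just a placeholder):

  # per-image read/write IOPS and throughput for one pool
  rbd perf image iostat rbd
  # continuously updated, top-like view of the busiest images
  rbd perf image iotop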
Have a nice day
Rainer
On 07.12.21 at 07:58, 胡 玮文 wrote:
Hi all,
Sometimes we see high IOPS in our cluster (from “ceph -s”), and the access
latency is increased due to high load. Now how can I tell which client is
issuing a lot of requests? I want to find out the offending application so that
I can make changes to it. We are primarily using cephfs,
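(For CephFS, one place to look is the MDS session list, which includes per-client counters such as request_load_avg; a sketch, where "mds.a" is a placeholder for the actual MDS name:)

  # list all client sessions with their load metrics
  ceph tell mds.a session ls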
I should also say that I enabled the balancer with upmap mode, since the
only client (the backup server) is also running nautilus.
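For reference, enabling it looks roughly like this (a sketch; the min-compat step is only needed once and assumes all clients are Luminous or newer, which Nautilus satisfies):

  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on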
Seth
On 12/6/21 4:09 PM, Seth Galitzer wrote:
I'm running Ceph 14.2.20 on CentOS 7, installed from the official
ceph-nautilus repo. I started a manual rebalance run and will set it
back to auto once that is done. But I'm already seeing cluster score of
0.015045, so I'm not sure what more it can do.
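For what it's worth, the score can be re-checked at any time (a sketch):

  ceph balancer eval     # current cluster score; lower is better
  ceph balancer status   # shows the mode and whether it is active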
Thanks.
Seth
Anthony,
Thanks for the input. I've got my command outputs below. As for the
balancer, I didn't realize it was off. Another colleague had suggested
this previously, but I didn't get very far with it before. I didn't
think much about it at the time since everything automatically
rebalanced
Hi,
Thanks for this hint; it seems this happens a lot with Proxmox users, and
the lists are full of such issues. It happened to me today when cloning on
a Proxmox server with connectivity issues, where the process hung.
Maybe a hook could be provided for an easier fix.
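For reference, the manual cleanup usually looks something like this (a sketch under the assumption that a stale RBD watcher from the hung client is the culprit; the pool/image names and the address are placeholders):

  # show who is still watching the image
  rbd status mypool/myimage
  # block the dead client so its watch times out
  ceph osd blacklist add 192.168.0.10:0/123456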
Cheers,
Andrej
On 12/6/21 12:49, Yuri Weinstein wrote:
We merged 3 PRs on top of the RC1 tip:
https://github.com/ceph/ceph/pull/44164
https://github.com/ceph/ceph/pull/44154
https://github.com/ceph/ceph/pull/44201
Assuming that Neha or other leads don't see any point in retesting any
suites, this is ready for
I have a fairly vanilla ceph nautilus setup. One node that is the mgr,
mds, and primary mon. Four nodes with 12 8TB osds each, two of which are
backup mons. I am configured for 3 replicas and 2048 pgs, per the
calculator. I recently added a new node with 12 10TB osds. Because of my
3 replicas,
Hello all,
I need to recreate the OSDs on one Ceph node because the NVMe WAL device
has died. I replaced the NVMe with a brand-new one and am now trying to
recreate the OSDs on this node, but I get an error while re-creating them.
Can somebody tell me why I get this error? I have never seen it before.
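For comparison, recreating an OSD with its WAL on the new device usually looks roughly like this (a sketch; device names are placeholders, and zapping is destructive):

  # wipe the old OSD data device (DESTROYS data on it)
  ceph-volume lvm zap /dev/sdb --destroy
  # create a new OSD with its WAL on the replacement NVMe
  ceph-volume lvm create --data /dev/sdb --block.wal /dev/nvme0n1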
Hi,
It's a bit weird that you benchmark 1024 bytes -- or is that your
realistic use-case?
This is smaller than the minimum allocation unit even for SSDs, so every
update will need a read/modify/write cycle, which slows things down
substantially.
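(You can check the value in effect like this; "osd.0" is a placeholder, and the command runs on that OSD's host:)

  # minimum allocation unit used for SSD-backed BlueStore OSDs
  ceph daemon osd.0 config get bluestore_min_alloc_size_ssd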
Anyway, since you didn't mention it, have you disabled the write cache
on your
Dear List,
until we upgraded our cluster 3 weeks ago, we had a quite high-performing
small production Ceph cluster running Nautilus 14.2.22 on Proxmox 6.4
(kernel 5.4-143 at the time). Then we started the upgrade to Octopus
15.2.15. Since we did an online upgrade, we stopped the autoconvert
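(Presumably this refers to the Octopus omap format conversion on OSD start; if so, the switch in question is something like:)

  # skip the automatic omap format conversion when OSDs (re)start
  ceph config set osd bluestore_fsck_quick_fix_on_mount false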
Hi Marius,
Have you changed any of the default settings? You've got a huge number
of pglog entries. Do you have any other pools as well? Even though
pglog is only taking up 6-7GB of the 37GB used, that's a bit of a red
flag for me. Something we don't track via the mempools is taking up a
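For anyone following along, numbers like these come from something like the following (a sketch; "osd.0" is a placeholder):

  # per-OSD memory pool usage, including the pglog bucket
  ceph daemon osd.0 dump_mempools
  # the pglog length is bounded by this setting
  ceph daemon osd.0 config get osd_max_pg_log_entries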
Hi Eugen,
On 12/6/21 10:31, Eugen Block wrote:
> I'm curious if anyone is using this relatively new feature (I believe
> since Octopus?) in production. I haven't read too much about it in
> this list, so it's not really clear if nobody is using it or if it
> works like a charm without any
Hi *,
I'm curious if anyone is using this relatively new feature (I believe
since Octopus?) in production. I haven't read too much about it in
this list, so it's not really clear if nobody is using it or if it
works like a charm without any issues. One of our customers is
planning to