With the current releases of Ceph, the only way to accomplish this is
by gathering the IO stats on each client node. However, starting with
the upcoming Nautilus release, this data will be available directly
from the OSDs.
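As a rough sketch of both approaches (the admin-socket path below is a
placeholder that varies per client, and the `rbd perf image` subcommands
are the Nautilus-era interface):

```shell
# Pre-Nautilus: on each client node, query the librbd admin socket for
# that client's perf counters. The socket filename is an example only;
# find yours under /var/run/ceph/.
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump

# Nautilus and later: per-image stats gathered from the OSDs.
rbd perf image iotop           # interactive, top-like per-image view
rbd perf image iostat rbd      # periodic per-image IOPS/throughput for pool "rbd"
```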

On Fri, Dec 28, 2018 at 6:18 AM Sinan Polat <si...@turka.nl> wrote:
>
> Hi all,
>
> We have a couple of hundred RBD volumes/disks in our Ceph cluster, and each
> RBD disk is mounted by a different client. Currently we see quite high IOPS
> on the cluster, but we don't know which client/RBD is causing it.
>
> Is there an easy way to see the utilization per RBD disk?
>
> Thanks!
> Sinan
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
