That's right. I didn't actually use Jewel for very long. I'm glad it worked
for you.
On Fri, May 11, 2018, 4:49 PM Webert de Souza Lima wrote:
Thanks David.
Although you mentioned this was introduced with Luminous, it's working with
Jewel.
~# ceph osd pool stats
Fri May 11 17:41:39 2018
pool rbd id 5
client io 505 kB/s rd, 3801 kB/s wr, 46 op/s rd, 27 op/s wr
pool rbd_cache id 6
client io 2538 kB/s rd,
`ceph osd pool stats`, with the option to specify the pool you are
interested in, should get you the breakdown of IO per pool. This was
introduced with Luminous.
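For example, to watch just one pool, e.g. the rbd_cache pool from your
output, something like:

~# ceph osd pool stats rbd_cache

should print the client io line for that pool only.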
On Fri, May 11, 2018 at 2:39 PM Webert de Souza Lima wrote:
I think Ceph doesn't have IO metrics with filters by pool, right? I see IO
metrics from clients only:
ceph_client_io_ops
ceph_client_io_read_bytes
ceph_client_io_read_ops
ceph_client_io_write_bytes
ceph_client_io_write_ops
and pool "byte" metrics, but not "io":
Hey Jon!
On Wed, May 9, 2018 at 12:11 PM, John Spray wrote:
> It depends on the metadata intensity of your workload. It might be
> quite interesting to gather some drive stats on how many IOPS are
> currently hitting your metadata pool over a week of normal activity.
>
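One rough way to gather that (just a sketch; the pool name cephfs_metadata
and the 60 second interval are assumptions, adjust to your setup) would be
to log the pool stats periodically for a week:

~# while true; do date; ceph osd pool stats cephfs_metadata; sleep 60; done >> metadata-io.log

and then look at how the rd/wr op/s numbers move over time.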
Any
On Wed, May 9, 2018 at 3:32 PM, Webert de Souza Lima wrote:
> Hello,
>
> Currently, I run Jewel + Filestore for cephfs, with SSD-only pools used for
> cephfs-metadata, and HDD-only pools for cephfs-data. The current
> metadata/data ratio is something like 0.25% (50GB