On 04/08/2015 11:40 AM, Jeff Epstein wrote:
Hi, thanks for answering. Here are the answers to your questions.
Hopefully they will be helpful.

On 04/08/2015 12:36 PM, Lionel Bouton wrote:
I probably won't be able to help much, but people who know more will
need at least:
- your Ceph version,
- the kernel version of the host on which you are trying to format
  /dev/rbd1,
- which hardware and network you are using for this cluster (CPU, RAM,
  HDD or SSD models, network cards, jumbo frames, ...).

ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)

Linux 3.18.4pl2 #3 SMP Thu Jan 29 21:11:23 CET 2015 x86_64 GNU/Linux

The hardware is an Amazon AWS c3.large. So, a (virtual) Xeon(R) CPU
E5-2680 v2 @ 2.80GHz, 3845992 kB RAM, plus whatever other virtual
hardware Amazon provides.

AWS will cause some extra perf variance, but...

There's only one thing surprising me here: you have only 6 OSDs, 1504GB
(~250GB per OSD), and a total of 4400 PGs? With a replication factor of
3, that is 4400 * 3 / 6 = 2200 PG copies per OSD, which might be too
much and unnecessarily increase the load on your OSDs.
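
For reference, a quick way to check those totals from an admin node
(untested sketch; the numbers will of course differ per cluster):

    # replication size and pg_num for every pool
    ceph osd dump | grep '^pool'
    # number of OSDs in the cluster
    ceph osd stat
    # rough per-OSD load = sum(pg_num * size over all pools) / number of OSDs
    # here: 4400 PGs * 3 replicas / 6 OSDs = 2200 PG copies per OSD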

Best regards,

Lionel Bouton

Our workload involves creating and destroying a lot of pools. Each pool
has 100 pgs, so it adds up. Could this be causing the problem? What
would you suggest instead?

...this is most likely the cause. Deleting a pool causes the data and
pgs associated with it to be deleted asynchronously, which can be a lot
of background work for the osds.
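
For context, I'm assuming the pools are created and removed with
something along these lines (the pool name is just an example):

    # create a pool with 100 placement groups
    ceph osd pool create testpool 100 100
    # ... use the pool ...
    # deleting it returns immediately, but the OSDs remove the pool's
    # objects and PGs in the background afterwards
    ceph osd pool delete testpool testpool --yes-i-really-really-mean-it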

If you're using the cfq scheduler, you can try decreasing the priority of these operations with the "osd disk thread ioprio..." options:

http://ceph.com/docs/master/rados/configuration/osd-config-ref/#operations
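
For example, something along these lines (the values are just a common
starting point, and this only has an effect when the OSD data disks are
on cfq):

    # check which I/O scheduler the OSD data disk uses (sdX is an example)
    cat /sys/block/sdX/queue/scheduler

    # lower the disk thread priority at runtime on all OSDs
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'

    # or persistently, in the [osd] section of ceph.conf:
    #   osd disk thread ioprio class = idle
    #   osd disk thread ioprio priority = 7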

If that doesn't help enough, deleting data from pools before deleting
the pools might help, since you can control the rate more finely. And of
course not creating/deleting so many pools would eliminate the hidden
background cost of deleting the pools.
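
A rough sketch of the kind of throttled cleanup I mean (untested;
assumes everything left in the pool can go, and the pool name is just
an example):

    # remove the objects one by one with a short pause in between, so
    # the cleanup happens at a rate you control instead of all at once
    rados -p testpool ls | while read -r obj; do
        rados -p testpool rm "$obj"
        sleep 0.1
    done
    # once the pool is empty, removing the pool itself is cheap
    ceph osd pool delete testpool testpool --yes-i-really-really-mean-it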

Josh
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
