On 07/10/15 02:13, Christoph Adomeit wrote:
> Hi Guys,
>
> I have a ceph pool that is mixed with 10k rpm disks and 7.2 k rpm disks.
>
> There are 85 osds and 10 of them are 10k
> Size is not an issue, the pool is filled only 20%
>
> I want to somehow prefer the 10 k rpm disks so that they get more i/o
>
> What is the most intelligent way to prefer the faster disks?
> Just give them another weight or are there other methods ?

If your cluster is read-intensive you can use primary affinity to
redirect reads to your 10k drives. Add

mon osd allow primary affinity = true

in your ceph.conf, restart your monitors and, for each 7.2k OSD, use:

ceph osd primary-affinity <7.2k_id> 0

For every PG with at least one 10k OSD, this will make one of the 10k
OSDs the primary, so reads for that PG will be served by a 10k drive.
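For example, you could set it in one go with something like this (just a
sketch: I'm assuming here that the 7.2k OSDs have ids 0 to 74, adjust the
loop or use an explicit list of ids to match your cluster):

    for id in $(seq 0 74); do
        ceph osd primary-affinity "$id" 0
    done

Afterwards "ceph osd dump" should show primary_affinity 0 on those OSDs,
and setting an id back to 1 reverts it.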

But with only 10 of the 85 OSDs being 10k, I'm not sure how much this
will help: most PGs will clearly sit on 7.2k OSDs only, so you may not
gain much.

It's worth a try if you don't want to reorganize your storage, though,
and it's by far the least time-consuming option to revert if you change
your mind later.

Another way, with better predictability, would be to define a 10k root
and use a custom CRUSH rule for your pool which takes the primary from
this new root and switches to the default root for the remaining OSDs.
But you don't have enough 10k drives to keep the data balanced (for a
size=3 pool you'd need roughly 1/3 of the capacity on 10k OSDs and 2/3
on 7.2k OSDs), so this would create a bottleneck on your 10k drives.
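For reference, such a rule in a decompiled crushmap would look roughly
like this (a sketch only: I'm assuming you created a root named
"root-10k" containing the hosts with 10k drives, next to the default
root):

    rule hybrid {
            ruleset 2
            type replicated
            min_size 1
            max_size 10
            step take root-10k
            step chooseleaf firstn 1 type host
            step emit
            step take default
            step chooseleaf firstn -1 type host
            step emit
    }

The first take/emit picks one host from the 10k root for the primary,
the second fills the remaining size-1 replicas from the default root.
Note that the two steps are independent: if your 10k hosts also appear
under the default root, a replica can land on the same host as the
primary, so you'd want to keep the two roots disjoint.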

I fear there's no gain in creating a separate 10k pool: you don't have
enough drives for the new 10k pool to deliver as much performance as
the remaining 7.2k-only pool. Maybe with some specific data access
patterns this could work, but I'm not sure which those would be (you
might get more useful suggestions if you describe how the current pool
is used).

Lionel