Thanks, Sage! That did the trick.

Wido, seems like an interesting approach but I wasn't brave enough to
attempt it!

Eric, I suppose this does the same thing that the crushtool reclassify
feature does?

Thank you both for your suggestions.

For posterity:

- I grabbed some 14.0.1 packages and extracted crushtool and
  libceph-common.so.1
- Ran 'crushtool -i cm --reclassify --reclassify-root default hdd -o
  cm_reclassified'
- Compared the maps with:

crushtool -i cm --compare cm_reclassified

That suggested I would get an acceptable amount of data reshuffling, which I
expected. I didn't use --set-subtree-class as I'd already added SSD drives
to the cluster.
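
In case it helps anyone, the end-to-end sequence looks roughly like this
(file names are just the ones I used; only inject the new map once you're
happy with the compare output):

  ceph osd getcrushmap -o cm                  # export the current CRUSH map (binary)
  crushtool -i cm --reclassify --reclassify-root default hdd -o cm_reclassified
  crushtool -i cm --compare cm_reclassified   # estimate how much data would move
  ceph osd setcrushmap -i cm_reclassified     # inject the reclassified map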

My ultimate goal was to migrate the cephfs_metadata pool onto SSD drives
while leaving the cephfs_data pool on the HDD drives. The device classes
feature made that really trivial: I created an intermediary rule which could
use both HDD and SSD hosts (I didn't have any mixed-device hosts), set the
metadata pool to use the new rule, waited for recovery, and then set the
metadata pool to use an SSD-only rule. I'm not sure the intermediary stage
was strictly necessary; I was concerned about inactive PGs.
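
For reference, the rule and pool changes were along these lines (the rule
names here are just examples; 'ceph osd crush rule create-replicated' takes
an optional device class as its last argument):

  # Intermediary rule: no device class, so PGs can land on HDD and SSD hosts
  ceph osd crush rule create-replicated any_class default host
  ceph osd pool set cephfs_metadata crush_rule any_class

  # Once recovery completes, switch to an SSD-only rule
  ceph osd crush rule create-replicated ssd_only default host ssd
  ceph osd pool set cephfs_metadata crush_rule ssd_only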

Thanks,
David

On Mon, Dec 31, 2018 at 6:06 PM Eric Goirand <egoir...@redhat.com> wrote:

> Hi David,
>
> CERN has provided a Python script to swap the relevant bucket IDs
> (default <-> hdd); you can find it here:
>
> https://github.com/cernceph/ceph-scripts/blob/master/tools/device-class-id-swap.py
>
> The principle is the following:
> - extract the CRUSH map
> - run the script on it => it creates a new CRUSH file
> - edit the new CRUSH map and, in the rule(s) associated with the pool(s)
> you want to restrict to HDD OSDs, replace
>   'step take default' with 'step take default class hdd'
>
> Then recompile and reinject the new CRUSH map, and voilà!
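>
> A rough sketch of the crushtool side of those steps (file names here are
> just examples; the script run and rule edit fit in where described above):
>
>   ceph osd getcrushmap -o cm.bin
>   crushtool -d cm.bin -o cm.txt        # decompile to editable text
>   # ... run the script / edit the rules as above ...
>   crushtool -c cm.txt -o cm_new.bin    # recompile
>   ceph osd setcrushmap -i cm_new.bin   # reinject the new map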
>
> Your cluster should then be using only the HDD OSDs with no rebalancing
> (or only a very small amount).
>
> In case you have forgotten something, just reapply the former CRUSH map
> and start again.
>
> Cheers, and Happy New Year 2019.
>
> Eric
>
>
>
> On Sun, Dec 30, 2018, 21:16 David C <dcsysengin...@gmail.com> wrote:
>
>> Hi All
>>
>> I'm trying to set the existing pools in a Luminous cluster to use the hdd
>> device class without moving data around. If I just create a new rule
>> using the hdd class and set my pools to use that new rule, it will cause a
>> huge amount of data movement even though the PGs are all already on HDDs.
>>
>> There is a thread on ceph-large [1] which appears to have the solution
>> but I can't get my head around what I need to do. I'm not too clear on
>> which IDs I need to swap. Could someone give me some pointers on this
>> please?
>>
>> [1]
>> http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000109.html
>>
>