On 9/2/19 11:53 AM, Zoltan Arnold Nagy wrote:
> On 2019-09-02 08:43, Wido den Hollander wrote:
>> On 9/1/19 9:51 PM, Zoltan Arnold Nagy wrote:
>>> On 2019-09-01 05:57, Konstantin Shalygin wrote:
>>>> On 8/31/19 4:14 PM, Zoltan Arnold Nagy wrote:
>>>>> Could you elaborate a bit more? upmap is used to map specific PGs to
>>>>> specific OSDs
>>>>> in order to deal with CRUSH inefficiencies.
>>>>>
>>>>> Why would I want to add a layer of indirection when the goal is to
>>>>> remove the bucket
>>>>> entirely?
>>>>
>>>> As I understood it, you want to make huge CRUSH map changes without
>>>> huge data movement.
>>>>
>>>> Upmap can help with this: you map your current PGs to the OSDs that
>>>> already hold those PGs.
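>>>>
>>>> A rough sketch (the PG and OSD IDs here are made up): if PG 1.7f would
>>>> move from osd.4 to osd.12 after a CRUSH change, you can pin it back with
>>>>
>>>>     $ ceph osd pg-upmap-items 1.7f 12 4
>>>>
>>>> which tells the OSDMap to use osd.4 wherever CRUSH now picks osd.12 for
>>>> that PG.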
>>>
>>> Let's say with upmap I take the current mapping and "override" CRUSH,
>>> then remove the
>>> rack bucket and move the host directly to the root. And then what? We'd
>>> never, ever
>>> be able to go back, and what's worse, for any new expansions we'd need
>>> to manage the
>>> PG mappings manually.
>>>
>>> Unless I'm missing something, while this would solve the problem in the
>>> very very short
>>> term, it would create horrible issues down the line, and would be
>>> shooting ourselves
>>> in the foot as far as maintainability is concerned.
>>>
>>> As I've said, we've been rolling this cluster since at least Firefly,
>>> and I don't want to mess with it.
>>>
>>> The solution I've outlined in my original mail works (swapping the
>>> bucket IDs) and seems more maintainable; however, I've been wondering:
>>> are the bucket types just labels, or do they have any other semantic
>>> meaning?
>>
>> They are just labels. Just like the names of buckets.
>>
>> Everything is mapped to the IDs you'll find in the CRUSHMap. Negative
>> IDs for buckets and positive IDs for devices/OSDs.
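>>
>> In a decompiled map a bucket looks something like this (the name, ID and
>> weights below are just placeholders):
>>
>>     host node01 {
>>             id -5
>>             alg straw2
>>             hash 0
>>             item osd.0 weight 1.000
>>             item osd.1 weight 1.000
>>     }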
>>
upmap should be avoided in this case for exactly the reasons you already
>> mentioned: you would be creating a kind of manual mapping database, which
>> is not the solution here. The only thing you can do with upmap is make
>> this migration smoother by removing the upmap items in batches afterwards.
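>>
>> Dropping one entry at a time looks roughly like this (the PG ID is just
>> an example):
>>
>>     $ ceph osd rm-pg-upmap-items 1.7f
>>
>> which lets that PG fall back to its normal CRUSH-computed placement.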
> 
> I wonder if I can swap the types just by editing the crush map, e.g. change
> a rack into a host without any issues. I'll give this a try :)
> 
> If so, then I could just swap the IDs, move the current host into the place
> of the bucket and then delete the host without causing any CRUSH mapping
> changes, i.e. no data movement.

You can do this offline with crushtool. Use the --compare option.

$ crushtool -i crushmap.new --compare crushmap.old

This will tell you if there are any changes.
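
For example, the full offline round trip could look roughly like this (file
names are just placeholders):

    $ ceph osd getcrushmap -o crushmap.old
    $ crushtool -d crushmap.old -o crushmap.txt
    $ # edit crushmap.txt: swap the bucket IDs/types
    $ crushtool -c crushmap.txt -o crushmap.new
    $ crushtool -i crushmap.new --compare crushmap.old
    $ ceph osd setcrushmap -i crushmap.new

Only inject the new map once --compare shows no mapping differences.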

Wido
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
