We recently added 3 new nodes with 12 x 12 TB OSDs. It took 3 days or so
to reshuffle the data and another 3 days to split the PGs. I did
increase the number of max backfills to speed up the process. We didn't
notice the reshuffling in normal operation.
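For reference, raising the backfill limit at runtime can look like this (a sketch of the usual commands, not taken from the original mails; pick a value your client workload tolerates, and revert it afterwards):

```shell
# Raise the per-OSD backfill limit at runtime:
ceph config set osd osd_max_backfills 3

# On older clusters without the config database, injectargs works too:
ceph tell 'osd.*' injectargs '--osd-max-backfills 3'
```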
On Wed, 2021-03-24 at 19:32 +0100, Dan van der Ster wrote:
Not sure why, without looking at your crush map in detail.
But to be honest, I don't think you need such a tool anymore. It was
written back in the filestore days when backfilling could be much more
disruptive than today.
You have only ~10 osds to fill up: just mark them fully in, or increment
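A manual incremental reweight along these lines might be sketched as follows (the OSD ids and the 1.74660 target are taken from the thread; the step size is my assumption, and you would wait for backfill to settle between steps):

```shell
# Hypothetical sketch: step each new OSD's CRUSH weight up gradually.
for osd in 43 44 45 46 47 48 49 50 51 52 53 54 55; do
    ceph osd crush reweight osd.$osd 0.5   # first step (assumed value)
done
# Repeat with larger values until the final target used in the thread:
#   ceph osd crush reweight osd.43 1.74660
```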
I might be stupid, but am I doing something wrong with the script?
[root@mon1 ceph-scripts]# ./tools/ceph-gentle-reweight -o
43,44,45,46,47,48,49,50,51,52,53,54,55 -s 00:00 -e 23:59 -b 82 -p rbd -t
1.74660
Draining OSDs: ['43', '44', '45', '46', '47', '48', '49', '50', '51',
'52', '53', '54', '55']
On Wed, 24 Mar 2021 at 14:55, Boris Behrens wrote:
Oh cool. Thanks :)
How do I find the correct weight after it is added?
For the current process I just check the other OSDs but this might be a
question that someone will raise.
I could imagine that I need to adjust ceph-gentle-reweight's target
weight to the correct one.
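As far as I can tell, the "correct" target weight is by convention just the device capacity in TiB, which is what ceph-volume assigns by default. A minimal sketch (the function name is mine, not from the thread):

```python
def crush_weight(size_bytes: int) -> float:
    """Default CRUSH weight convention: device capacity in TiB."""
    return size_bytes / 2**40

# A 12 TB drive (12 * 10^12 bytes) comes out at roughly 10.91:
print(round(crush_weight(12 * 10**12), 5))
```

By the same convention, the -t 1.74660 used with the script would correspond to a device of roughly 1.92 TB, if the cluster follows the default weighting.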
On Wed., 24 Mar
On Wed, 24 Mar 2021 at 14:27, Dan van der Ster wrote:
> You can use:
> osd_crush_initial_weight = 0.0
We have it at 0.001 or some similarly low non-zero value, so it doesn't
start as "out" or anything, but still won't receive any PGs.
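Spelled out as a ceph.conf fragment, either variant might look like this (a sketch reflecting the two settings discussed above):

```ini
[osd]
# New OSDs join CRUSH with zero weight and receive no PGs
# until reweighted explicitly:
osd_crush_initial_weight = 0.0
# ...or the tiny non-zero variant described above:
# osd_crush_initial_weight = 0.001
```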
--
May the most significant bit of your life be positive.
You can use:
osd_crush_initial_weight = 0.0
-- Dan
On Wed, Mar 24, 2021 at 2:23 PM Boris Behrens wrote:
>
> Hi people,
>
> I am currently trying to add ~30 OSDs to our cluster and wanted to use the
> gentle-reweight script for that.
> I use ceph-volume lvm prepare --data /dev/sdX to create the