If you're planning to remove the next set of disks, I would recommend
weighting them to 0.0 in the CRUSH map, if you have the room for it. The
process at that point would be to weight the next set to 0.0 when you add
the previous set back in. That way, when you finish removing the next set,
there is no additional data movement until you add the replacements back
in. You can also tell when they are done, because they'll be empty.
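As a sketch, assuming the next set to drain is osd.10 through osd.12 (the
IDs are placeholders; substitute your own), zeroing them in the CRUSH map
looks like:

```shell
# Weight the next set of OSDs to 0.0 in the CRUSH map so their PGs
# drain now instead of moving twice. OSD IDs below are placeholders.
for id in 10 11 12; do
    ceph osd crush reweight "osd.${id}" 0.0
done

# Watch "ceph osd df" - once these OSDs show no data, they are safe
# to remove without degrading any PGs.
```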

To increase the likelihood that a specific OSD finishes backfilling sooner,
you can increase osd_max_backfills on it.
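Something along these lines, where osd.10 and the value 4 are illustrative
(the default in Jewel is 1):

```shell
# Raise backfill concurrency on one running OSD daemon so it clears
# its queue sooner. osd.10 is a placeholder for the OSD you care about.
ceph tell osd.10 injectargs '--osd-max-backfills 4'
```

Note this only changes the running daemon; the setting reverts on restart
unless you also put it in ceph.conf.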

On Tue, Jun 20, 2017, 9:48 AM Logan Kuhn <log...@wolfram.com> wrote:

> Is there a way to prioritize specific pools during recovery?  I know there
> are issues open for it, but I wasn't aware it was implemented yet...
>
> Regards,
> Logan
>
> ----- On Jun 20, 2017, at 8:20 AM, Sam Wouters <s...@ericom.be> wrote:
>
> Hi,
>
> Are they all in the same pool? Otherwise you could prioritize pool
> recovery.
> If not, maybe you can play with the osd max backfills number, no idea if
> it accepts a value of 0 to actually disable it for specific OSDs.
>
> r,
> Sam
>
> On 20-06-17 14:44, Richard Hesketh wrote:
>
> Is there a way, either by individual PG or by OSD, I can prioritise 
> backfill/recovery on a set of PGs which are currently particularly important 
> to me?
>
> For context, I am replacing disks in a 5-node Jewel cluster, on a 
> node-by-node basis - mark out the OSDs on a node, wait for them to clear, 
> replace OSDs, bring up and in, mark out the OSDs on the next set, etc. I've 
> done my first node, but the significant CRUSH map changes mean most of my 
> data is moving. I only currently care about the PGs on my next set of OSDs to 
> replace - the other remapped PGs I don't care about settling because they're 
> only going to end up moving around again after I do the next set of disks. I 
> do want the PGs specifically on the OSDs I am about to replace to backfill 
> because I don't want to compromise data integrity by downing them while they 
> host active PGs. If I could specifically prioritise the backfill on those 
> PGs/OSDs, I could get on with replacing disks without worrying about causing 
> degraded PGs.
>
> I'm in a situation right now where there are merely a couple of dozen PGs on 
> the disks I want to replace, which are all remapped and waiting to backfill - 
> but there are 2200 other PGs also waiting to backfill because they've moved 
> around too, and it's extremely frustrating to be sat waiting to see when the 
> ones I care about will finally be handled so I can get on with replacing 
> those disks.
>
> Rich
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
