Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-06-24 Thread Brett Chancellor
I have used the gentle reweight script many times in the past. But more
recently, I expanded one cluster from 334 to 1114 OSDs by just changing
the crush weight, 100 OSDs at a time. Once all PGs from those 100 were
stable and backfilling, I added another hundred. I stopped at 500 and let
the backfill finish. I repeated the process for the last 500 drives and it
was finished in a weekend without any complaints.
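For reference, each step of that process is just a loop of crush reweight
commands, something like this (the OSD IDs and the 7.3 target weight are
placeholders; use the real per-drive capacity in TiB):

    # raise a batch of 100 new OSDs to full crush weight
    for i in $(seq 334 433); do
        ceph osd crush reweight osd.$i 7.3
    done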
Don't forget to adjust your PG count for the new OSDs once rebalancing is
done.
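On Mimic that is a manual bump of pg_num and then pgp_num, roughly like
this (pool name and counts are placeholders only):

    # raise the placement group count after the expansion
    ceph osd pool set rbd pg_num 4096
    ceph osd pool set rbd pgp_num 4096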

-Brett

On Sun, Jun 23, 2019, 2:51 PM  wrote:

> Hello,
>
> I would advise using this script from Dan:
>
> https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
>
> I have used it many times and it works great - also if you want to drain
> the OSDs.
>
> Hth
> Mehmet
>
> On 30 May 2019 22:59:05 CEST, Michel Raabe wrote:
>>
>> Hi Mike,
>>
>> On 30.05.19 02:00, Mike Cave wrote:
>>
>>> I’d like as little friction for the cluster as possible, as it is in
>>> heavy use right now.
>>>
>>> I’m running mimic (13.2.5) on CentOS.
>>>
>>> Any suggestions on best practices for this?
>>>
>>
>> You can limit the recovery, for example:
>>
>> * max backfills
>> * recovery max active
>> * recovery sleep
>>
>> It will slow down the rebalance but will not hurt the users too much.
>>
>>
>> Michel.


Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-06-23 Thread ceph
Hello,

I would advise using this script from Dan:
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight

I have used it many times and it works great - also if you want to drain the
OSDs.
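Getting it is just a clone of the ceph-scripts repo; I won't quote its flags
from memory - the script's built-in help is authoritative:

    git clone https://github.com/cernceph/ceph-scripts.git
    cd ceph-scripts/tools
    python ceph-gentle-reweight --help    # lists the supported options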

Hth
Mehmet

On 30 May 2019 22:59:05 CEST, Michel Raabe wrote:
>Hi Mike,
>
>On 30.05.19 02:00, Mike Cave wrote:
>> I’d like as little friction for the cluster as possible, as it is in
>> heavy use right now.
>> 
>> I’m running mimic (13.2.5) on CentOS.
>> 
>> Any suggestions on best practices for this?
>
>You can limit the recovery, for example:
>
>* max backfills
>* recovery max active
>* recovery sleep
>
>It will slow down the rebalance but will not hurt the users too much.
>
>
>Michel.


Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-30 Thread Michel Raabe

Hi Mike,

On 30.05.19 02:00, Mike Cave wrote:
> I’d like as little friction for the cluster as possible, as it is in
> heavy use right now.
>
> I’m running mimic (13.2.5) on CentOS.
>
> Any suggestions on best practices for this?


You can limit the recovery, for example:

* max backfills
* recovery max active
* recovery sleep

It will slow down the rebalance but will not hurt the users too much.
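A minimal sketch with the usual option names, via the Mimic config database
(the values are conservative examples, not recommendations):

    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    ceph config set osd osd_recovery_sleep 0.1
    # or inject into the running OSDs without persisting:
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'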


Michel.


Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-30 Thread Martin Verges
Hello Mike,

there is no problem adding 100 OSDs at the same time if your cluster is
configured correctly.
Just add the OSDs and let the cluster rebalance gradually (as fast as your
hardware supports without service interruption).
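While it runs, the standard commands to keep an eye on it:

    ceph -s              # overall health and backfill progress
    ceph osd df tree     # per-OSD utilisation as data moves
    ceph health detail   # watch for slow requests during the rebalance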

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Thu, 30 May 2019 at 02:00, Mike Cave wrote:

> Good afternoon,
>
>
>
> I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks
> per node) and am trying to determine the best way to go about this task.
>
>
>
> I deployed the cluster with ceph ansible and everything worked well. So
> I’d like to add the new nodes with ceph ansible as well.
>
>
>
> The issue I have is that adding that many OSDs at once will likely cause a
> huge issue with the cluster if they come in fully weighted.
>
>
>
> I was hoping to use ceph ansible and set the initial weight to zero and
> then gently bring them up to the correct weight for each OSD.
>
>
>
> I will be doing this with a total of 380 OSDs over the next while. My plan
> is to bring in groups of 6 nodes (I have six racks and the map is
> rack-redundant) until the additions are complete.
>
>
>
> In dev I tried bringing in a node while the cluster was in ‘no rebalance’
> mode and there was still significant movement, with some stuck PGs and other
> oddities, until I reweighted and then unset ‘no rebalance’.
>
>
>
> I’d like as little friction for the cluster as possible, as it is in heavy
> use right now.
>
>
>
> I’m running mimic (13.2.5) on CentOS.
>
>
>
> Any suggestions on best practices for this?
>
>
>
> Thank you for reading and any help you might be able to provide. I’m happy
> to provide any details you might want.
>
>
>
> Cheers,
>
> Mike


[ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-29 Thread Mike Cave
Good afternoon,

I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per 
node) and am trying to determine the best way to go about this task.

I deployed the cluster with ceph ansible and everything worked well. So I’d 
like to add the new nodes with ceph ansible as well.

The issue I have is that adding that many OSDs at once will likely cause a
huge issue with the cluster if they come in fully weighted.

I was hoping to use ceph ansible and set the initial weight to zero and then 
gently bring them up to the correct weight for each OSD.
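Concretely, I was thinking of something like osd_crush_initial_weight, pushed
out through ceph-ansible's ceph_conf_overrides (a sketch only, I have not
tested this):

    # group_vars/all.yml -- new OSDs then join the crush map at weight 0
    ceph_conf_overrides:
      osd:
        osd_crush_initial_weight: 0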

I will be doing this with a total of 380 OSDs over the next while. My plan is
to bring in groups of 6 nodes (I have six racks and the map is rack-redundant)
until the additions are complete.

In dev I tried bringing in a node while the cluster was in ‘no rebalance’ mode
and there was still significant movement, with some stuck PGs and other
oddities, until I reweighted and then unset ‘no rebalance’.
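For reference, by ‘no rebalance’ mode I mean the norebalance cluster flag:

    ceph osd set norebalance      # set before adding the node
    # ... add the node / adjust crush weights ...
    ceph osd unset norebalance    # unset once the weights are in place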

I’d like as little friction for the cluster as possible, as it is in heavy use
right now.

I’m running mimic (13.2.5) on CentOS.

Any suggestions on best practices for this?

Thank you for reading and any help you might be able to provide. I’m happy to
provide any details you might want.

Cheers,
Mike