Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-09 Thread Ben Hines
AFAIK, depending on how many PGs you have, you are likely to end up with a 'too
many PGs per OSD' warning for your main pool if you do this, because the
number of PGs in a pool cannot be reduced and there will be fewer OSDs to
put them on.
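
A quick way to check where you would land (a sketch; 'rbd' is just a placeholder
pool name):

    # pg_num of the pool (this cannot be decreased)
    ceph osd pool get rbd pg_num

    # replica count; PGs per OSD is roughly pg_num * size / number of OSDs
    ceph osd pool get rbd size

    # per-OSD view, including the PGS column
    ceph osd df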

-Ben

On Wed, Mar 8, 2017 at 5:53 AM, Henrik Korkuc wrote:

> On 17-03-08 15:39, Kevin Olbrich wrote:
>
> Hi!
>
> Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
> We want to shut down the cluster but it holds some semi-production VMs we
> might or might not need in the future.
> To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use
> size 2 and min_size 1).
>
> Should I set the OSDs out one by one, or all at once with the nobackfill
> and norecover flags set?
> If the latter, which other flags should also be set?
>
> Just set the OSDs out and wait for the cluster to rebalance; the OSDs will
> stay active and serve traffic while data moves off them. I had a case where
> some PGs wouldn't move off, so after everything settles you may need to
> remove the OSDs from the CRUSH map one by one.
>
> Thanks!
>
> Kind regards,
> Kevin Olbrich.
>
>


Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Henrik Korkuc

On 17-03-08 15:39, Kevin Olbrich wrote:

> Hi!
>
> Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
> We want to shut down the cluster but it holds some semi-production VMs
> we might or might not need in the future.
> To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we
> use size 2 and min_size 1).
>
> Should I set the OSDs out one by one, or all at once with the nobackfill
> and norecover flags set?
> If the latter, which other flags should also be set?

Just set the OSDs out and wait for the cluster to rebalance; the OSDs will stay
active and serve traffic while data moves off them. I had a case where some
PGs wouldn't move off, so after everything settles you may need to remove the
OSDs from the CRUSH map one by one.
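
For reference, the sequence would look roughly like this (a sketch; osd.X
stands for each OSD you are retiring):

    # mark the OSD out; it keeps serving I/O while data migrates off it
    ceph osd out osd.X

    # watch the rebalance until the cluster is healthy again
    ceph -s
    ceph osd df

    # once it holds no data, remove it from CRUSH and delete it
    ceph osd crush remove osd.X
    ceph auth del osd.X
    ceph osd rm osd.X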



> Thanks!
>
> Kind regards,
> Kevin Olbrich.




Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Maxime Guyot
Hi Kevin,

I don't know about those flags, but if you want to shrink your cluster you can
simply set the weight of the OSDs to be removed to 0, like so: "ceph osd
reweight osd.X 0".
You can either do it gradually if you are concerned about client I/O (probably
not, since you speak of a test / semi-production cluster) or all at once.
This should take care of all the data movement.

Once the cluster is back to HEALTH_OK, you can then proceed with the standard 
remove OSD procedure: 
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
You should be able to delete all the OSDs in a short period of time since the 
data movement has already been taken care of with the reweight.
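
In rough terms the whole sequence would be (a sketch; osd.X is a placeholder
for each OSD being removed):

    # drain the OSD by dropping its reweight to 0 (gradually or all at once)
    ceph osd reweight osd.X 0

    # once the cluster is back to HEALTH_OK, remove the OSD as per the docs
    ceph osd out osd.X
    ceph osd crush remove osd.X
    ceph auth del osd.X
    ceph osd rm osd.X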

I hope that helps.

Cheers,
Maxime

From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Kevin Olbrich 
<k...@sv01.de>
Date: Wednesday 8 March 2017 14:39
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: [ceph-users] Shrinking lab cluster to free hardware for a new 
deployment

Hi!

Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
We want to shut down the cluster but it holds some semi-production VMs we might
or might not need in the future.
To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use size 
2 and min_size 1).

Should I set the OSDs out one by one, or all at once with the nobackfill and
norecover flags set?
If the latter, which other flags should also be set?

Thanks!

Kind regards,
Kevin Olbrich.


[ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Kevin Olbrich
Hi!

Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
We want to shut down the cluster but it holds some semi-production VMs we
might or might not need in the future.
To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use
size 2 and min_size 1).

Should I set the OSDs out one by one, or all at once with the nobackfill and
norecover flags set?
If the latter, which other flags should also be set?
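
(For reference, cluster-wide flags of that kind would be set and cleared like
this; a sketch, assuming nobackfill/norecover are the flags meant:)

    ceph osd set nobackfill
    ceph osd set norecover
    # ... mark the OSDs out here ...
    ceph osd unset nobackfill
    ceph osd unset norecover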

Thanks!

Kind regards,
Kevin Olbrich.