Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-09 Thread Ben Hines
AFAIK, depending on how many PGs you have, you are likely to end up with a 'too many PGs per OSD' warning for your main pool if you do this, because the number of PGs in a pool cannot be reduced and there will be fewer OSDs to put them on. -Ben On Wed, Mar 8, 2017 at 5:53 AM, Henrik Korkuc wrote: > On
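To make the arithmetic behind that warning concrete, here is a minimal sketch. It is not from the thread: the pg_num of 512 and the ~300 warning threshold are assumed, illustrative values (compare mon_pg_warn_max_per_osd on your own cluster).

    # Rough sketch of the "too many PGs per OSD" arithmetic.
    # Assumptions (not from the thread): one pool with pg_num=512,
    # size=2, and a warning threshold around 300.

    def pgs_per_osd(pools, num_osds):
        """Average number of PG replicas carried by each OSD.

        pools: iterable of (pg_num, replica_size) pairs
        """
        total_replicas = sum(pg_num * size for pg_num, size in pools)
        return total_replicas / num_osds

    before = pgs_per_osd([(512, 2)], num_osds=6)  # ~171 per OSD
    after = pgs_per_osd([(512, 2)], num_osds=2)   # 512 per OSD
    print(f"before shrink: {before:.0f}, after shrink: {after:.0f}")

Because pg_num could not be decreased at the time (PG merging only arrived in Nautilus), the options were to recreate the pool with fewer PGs or to raise the warning threshold.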

Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Henrik Korkuc
On 17-03-08 15:39, Kevin Olbrich wrote: Hi! Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each). We want to shut down the cluster but it holds some semi-productive VMs we might or might not need in the future. To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we

Re: [ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Maxime Guyot
Shrinking lab cluster to free hardware for a new deployment
Hi! Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each). We want to shut down the cluster but it holds some semi-productive VMs we might or might not need in the future. To keep them, we would like to shrink our cluster f

[ceph-users] Shrinking lab cluster to free hardware for a new deployment

2017-03-08 Thread Kevin Olbrich
Hi! Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each). We want to shut down the cluster but it holds some semi-productive VMs we might or might not need in the future. To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use size 2 and min_size 1). Should I set th
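For reference, the usual way to shrink is to mark the surplus OSDs out one at a time and let backfill finish between steps. A minimal sketch follows; the OSD ids are hypothetical, and it assumes the ceph CLI is on PATH and the remaining OSDs can hold all the data at size 2.

    # Hypothetical drain helper, not from the thread: marks surplus
    # OSDs out one at a time and waits for the cluster to settle.
    import subprocess
    import time

    def wait_for_health_ok():
        """Poll 'ceph health' until the cluster reports HEALTH_OK."""
        while True:
            health = subprocess.check_output(["ceph", "health"]).decode()
            if health.startswith("HEALTH_OK"):
                return
            time.sleep(30)

    def drain_osd(osd_id):
        """Mark one OSD out so its PGs backfill onto the remaining OSDs."""
        subprocess.check_call(["ceph", "osd", "out", str(osd_id)])
        wait_for_health_ok()

    # Shrinking from 6 to 2 OSDs: drain the four to be removed, one by one.
    for osd_id in (2, 3, 4, 5):  # hypothetical ids of the OSDs to remove
        drain_osd(osd_id)

A real script would check for all PGs being active+clean rather than HEALTH_OK, since other warnings can keep health at WARN indefinitely. With size 2 and min_size 1 the data stays available throughout, but as Ben notes above, expect the PG-per-OSD warning once only two OSDs remain.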