Re: [ceph-users] tunable question

2017-10-05 Thread mj
Hi, For the record, we changed tunables from "hammer" to "optimal", yesterday at 14:00, and it finished this morning at 9:00, so rebalancing took 19 hours. This was on a small ceph cluster, 24 4TB OSDs spread over three hosts, connected over 10G ethernet. Total amount of data: 32730 GB
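
For reference, a minimal way to follow such a rebalance while it runs (assuming a jewel-era cluster; the exact status fields differ between releases):

    # stream cluster events; the misplaced/degraded percentages shrink
    # toward zero as the remapped PGs are backfilled
    ceph -w
    # or poll the cluster summary periodically
    ceph -s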

Re: [ceph-users] tunable question

2017-10-03 Thread lists
Thanks Jake, for your extensive reply. :-) MJ On 3-10-2017 15:21, Jake Young wrote: On Tue, Oct 3, 2017 at 8:38 AM lists wrote: Hi, What would make the decision easier: if we knew that we could easily revert the "ceph

Re: [ceph-users] tunable question

2017-10-03 Thread Jake Young
On Tue, Oct 3, 2017 at 8:38 AM lists wrote: > Hi, what would make the decision easier: if we knew that we could easily revert the "ceph osd crush tunables optimal" once it has begun rebalancing data? Meaning: if we notice that impact is too high, or it will
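
For context, the profile switch and the revert are both single commands; reverting simply starts a second rebalance back toward the previous mappings (a sketch, assuming a jewel or later cluster):

    # switch to the newer profile (starts a large rebalance)
    ceph osd crush tunables optimal
    # revert to the previous profile if the impact turns out to be too high;
    # note this triggers another rebalance back to the old mappings
    ceph osd crush tunables hammer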

Re: [ceph-users] tunable question

2017-10-03 Thread lists
Hi, What would make the decision easier: if we knew that we could easily revert the "ceph osd crush tunables optimal" once it has begun rebalancing data? Meaning: if we notice that the impact is too high, or it will take too long, we could simply say again "ceph osd crush tunables
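
One common way to limit the impact of such a rebalance while it runs (not necessarily what the replies in this thread suggested, just a sketch with conservative values) is to throttle backfill and recovery on the OSDs:

    # limit concurrent backfill and recovery operations per OSD
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'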

Re: [ceph-users] tunable question

2017-10-02 Thread Manuel Lausch
Hi, We have similar issues. After upgrading from hammer to jewel, the tunable "chooseleaf_stable" was introduced. If we activate it, nearly all data will be moved. The cluster has 2400 OSDs on 40 nodes over two datacenters and is filled with 2.5 PB of data. We tried to enable it but the
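
For reference, an individual tunable such as chooseleaf_stable can also be set by editing the decompiled CRUSH map rather than switching the whole profile (a sketch; try it on a test cluster first, since flipping this tunable still moves a lot of data):

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt: set "tunable chooseleaf_stable 1", then recompile and inject
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new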

Re: [ceph-users] tunable question

2017-09-28 Thread mj
Hi Dan, list, Our cluster is small: three nodes, 24 4TB platter OSDs in total, SSD journals. Using rbd for VMs. That's it. Runs nicely though :-) The fact that "tunables optimal" for jewel would result in "significantly fewer mappings change when an OSD is marked out of the cluster" is what

Re: [ceph-users] tunable question

2017-09-28 Thread Dan van der Ster
Hi, How big is your cluster and what is your use case? For us, we'll likely never enable the recent tunables that need to remap *all* PGs -- it would simply be too disruptive for marginal benefit. Cheers, Dan On Thu, Sep 28, 2017 at 9:21 AM, mj wrote: > Hi, we have

[ceph-users] tunable question

2017-09-28 Thread mj
Hi, We have completed the upgrade to jewel, and we set tunables to hammer. Cluster again HEALTH_OK. :-) But now, we would like to proceed in the direction of luminous and bluestore OSDs, and we would like to ask for some feedback first. From the jewel ceph docs on tunables: "Changing
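
A quick way to see what is currently in effect, i.e. the starting point this thread discusses (assuming jewel or later):

    # dump the active CRUSH tunables and the closest matching profile
    ceph osd crush show-tunables
    # the profile applied after the jewel upgrade, as described above
    ceph osd crush tunables hammer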