Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-19 Thread Maxime Guyot
Hi Matthew, I would expect the osd_crush_location parameter to take effect from the OSD activation. Maybe ceph-ansible would have info there? A workaround might be “set noin”, restart all the OSDs once the ceph.conf includes the crush location, and enjoy the automatic CRUSHmap update (if you have …
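
For illustration, a minimal sketch of the workflow Maxime describes, assuming Jewel-era option names; the rack, host and unit names are placeholders:

    # ceph.conf on the new OSD hosts
    [osd]
    osd crush location = root=default rack=rack12 host=ceph-r12-node03

    # Roughly the sequence described above: prevent automatic "in" marking,
    # restart the OSDs so they pick up the new location, then re-enable it.
    ceph osd set noin
    systemctl restart ceph-osd.target   # on each affected host (systemd deployments)
    ceph osd unset noin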

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-19 Thread Matthew Vernon
Hi,

> How many OSD's are we talking about? We're about 500 now, and even
> adding another 2000-3000 is a 5 minute cut/paste job of editing the
> CRUSH map. If you really are adding racks and racks of OSD's every week,
> you should have found the crush location hook a long time ago.

We have 540 …
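
The "cut/paste job" of editing the CRUSH map is usually the decompile/edit/recompile cycle; a sketch (file names are examples):

    ceph osd getcrushmap -o crushmap.bin         # fetch the current map
    crushtool -d crushmap.bin -o crushmap.txt    # decompile to editable text
    # ... add the new rack bucket(s) and move the new hosts under them ...
    crushtool -c crushmap.txt -o crushmap.new    # recompile
    ceph osd setcrushmap -i crushmap.new         # inject the edited map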

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Richard Hesse
Most Ceph clusters are set up once and then maintained. Even if new OSD nodes are added, it's not a frequent enough operation to warrant automation. Yes, Ceph does provide hooks for automatically updating the CRUSH map (crush location hook), but it's up to you to properly write, debug, and maintain …
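
Wiring such a hook up is a single ceph.conf setting; a sketch, with a hypothetical script path:

    [osd]
    # Run at OSD start; its stdout decides where the OSD sits in the CRUSH map.
    crush location hook = /usr/local/bin/ceph-crush-location-custom
    osd crush update on start = true    # the default, shown for clarity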

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Adam Tygart
Ceph has the ability to use a script to figure out where in the crushmap this disk should go (on OSD start): http://docs.ceph.com/docs/master/rados/operations/crush-map/#ceph-crush-location-hook

-- Adam

On Tue, Apr 18, 2017 at 7:53 AM, Matthew Vernon wrote:
> On 17/04/17 21:16, Richard Hesse wrote:
> …
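
A minimal example of such a hook, assuming a hostname scheme like ceph-r12-node03 that encodes the rack number (the naming scheme and script path are hypothetical); Ceph invokes the hook with --cluster/--id/--type arguments and expects the CRUSH location as key=value pairs on stdout:

    #!/bin/sh
    # Hypothetical crush location hook: derive the rack from the hostname.
    # The --cluster/--id/--type arguments passed by Ceph are ignored here.
    HOST=$(hostname -s)
    RACK=$(echo "$HOST" | sed -n 's/^ceph-r\([0-9]*\)-.*/rack\1/p')
    echo "host=${HOST} rack=${RACK:-unknown_rack} root=default"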

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Matthew Vernon
On 17/04/17 21:16, Richard Hesse wrote:
> I'm just spitballing here, but what if you set osd crush update on start
> = false? Ansible would activate the OSD's but not place them in any
> particular rack, working around the ceph.conf problem you mentioned.
> Then you could place them in your CRUSH …
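
The setting quoted above goes in ceph.conf on the new hosts (or cluster-wide); a sketch:

    [osd]
    # Do not let an OSD place itself in the CRUSH map when it starts;
    # placement is then done by hand, e.g. with "ceph osd crush move".
    osd crush update on start = false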

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-17 Thread Richard Hesse
I'm just spitballing here, but what if you set osd crush update on start = false? Ansible would activate the OSDs but not place them in any particular rack, working around the ceph.conf problem you mentioned. Then you could place them in your CRUSH map by hand. I know you wanted to avoid editing …
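
With automatic placement disabled, the by-hand step amounts to a few CLI calls; a sketch with example bucket names, OSD ids and weights:

    # Create the new rack and hang it off the default root
    ceph osd crush add-bucket rack12 rack
    ceph osd crush move rack12 root=default

    # Place each new OSD (names, ids and weights are examples)
    ceph osd crush create-or-move osd.541 7.28 root=default rack=rack12 host=ceph-r12-node03

    ceph osd tree    # sanity-check the resulting hierarchy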

[ceph-users] Adding a new rack to crush map without pain?

2017-04-12 Thread Matthew Vernon
Hi, Our current (jewel) CRUSH map has rack / host / osd (and the default replication rule does step chooseleaf firstn 0 type rack). We're shortly going to be adding some new hosts in new racks, and I'm wondering what the least-painful way of getting the new osds associated with the correct (new) rack …
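
For reference, the relevant pieces of a decompiled (jewel) CRUSH map would look roughly like this; the bucket name, id and weights are invented for illustration:

    # The replication rule described above: replicas spread across racks
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }

    # What a newly added rack bucket ends up looking like
    rack rack12 {
        id -10                  # example bucket id
        alg straw
        hash 0                  # rjenkins1
        item ceph-r12-node03 weight 43.680
    }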