Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-13 Thread Nick Fisk
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of deeepdish
> Sent: 13 September 2015 02:47
> To: Johannes Formann <mlm...@formann.de>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CRUSH odd bucket affinity / persistence

[ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Hello, I’m having a (strange) issue with OSD bucket persistence / affinity on my test cluster. The cluster is PoC / test, by no means production. It consists of a single OSD / MON host plus another MON running on a KVM VM. Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be part of the …
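The preview is cut off before the target bucket is named, but for context, pinning OSDs into a specific CRUSH bucket by hand is normally done with the "ceph osd crush" commands. A minimal sketch, assuming a hypothetical bucket name (ssd-host) and a placeholder weight of 1.0, neither of which appears in the thread:

    # create a dedicated host bucket (name is hypothetical) and attach it to the default root
    ceph osd crush add-bucket ssd-host host
    ceph osd crush move ssd-host root=default
    # place the two OSDs into that bucket with a placeholder weight of 1.0
    ceph osd crush set osd.10 1.0 host=ssd-host
    ceph osd crush set osd.11 1.0 host=ssd-host

Without "osd crush update on start = false" (discussed below), the OSDs return to their automatically detected location the next time the daemons restart, which matches the symptom described in this thread.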

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread Johannes Formann
Hi,
> I’m having a (strange) issue with OSD bucket persistence / affinity on my test cluster.
>
> The cluster is PoC / test, by no means production. It consists of a single OSD / MON host plus another MON running on a KVM VM.
>
> Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be …

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Johannes, Thank you — "osd crush update on start = false" did the trick. I wasn’t aware that Ceph has automatic placement logic for OSDs (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/9035). This …
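For reference, the setting credited above stops each OSD from re-registering itself at its automatically detected CRUSH location every time the daemon starts. A minimal sketch of how it might look in ceph.conf (the section placement and the need to restart the OSDs afterwards are assumptions, not spelled out in the thread):

    [osd]
    # keep manual CRUSH placement: do not move this OSD back to its
    # detected location in the CRUSH map when the daemon starts
    osd crush update on start = false

With this in place, positions set manually with "ceph osd crush ..." should persist across daemon restarts.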