Thanks for sharing this. I modified it slightly to stop and start the OSDs
on the fly rather than having all OSDs needlessly stopped during the chown.

i.e.:

chown ceph:ceph /var/lib/ceph /var/lib/ceph/* && \
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | \
  xargs -P12 -n1 -I '{}' bash -c '
    echo "starting run on osd.$(cat {}/whoami)"
    service ceph-osd stop id=$(cat {}/whoami)
    time chown -R ceph:ceph {} && service ceph-osd restart id=$(cat {}/whoami)
    echo "done with osd.$(cat {}/whoami)"'


The same note applies, of course: all of the non-OSD directories in
/var/lib/ceph need to be handled separately. Also keep an eye out for downed
OSDs after the run completes. If the chown does not return success, the
script above will not restart the OSD.
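That post-run check can be scripted. A minimal sketch, with a helper name
(`list_down_osds`) of my own; it assumes the usual `ceph osd tree` output
where each OSD line contains "osd.N" and an up/down state column:

```shell
# Spot OSDs still marked down after the chown run.
# Against the live cluster:  ceph osd tree | list_down_osds
# (assumes one line per OSD containing "osd.N" and an up/down state column)
list_down_osds() {
  awk '/osd\.[0-9]+/ && / down( |$)/'
}

# Example against canned output (running the real command needs a working
# ceph CLI on the node):
printf '%s\n' \
  '-1 3.0 root default' \
  ' 0 1.0 osd.0 up 1.0' \
  ' 1 1.0 osd.1 down 1.0' | list_down_osds
# prints only the osd.1 line
```

Any OSD it prints can then be restarted by hand with
`service ceph-osd restart id=N` once its chown has been re-run successfully.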

On Tue, Nov 10, 2015 at 5:58 AM, Nick Fisk <n...@fisk.me.uk> wrote:

> I’m currently upgrading to Infernalis and the chown stage is taking a long
> time on my OSD nodes. I’ve come up with this little one-liner to run the
> chowns in parallel:
>
>
>
> find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1
> chown -R ceph:ceph
>
>
>
> NOTE: You still need to make sure the other directories in the
> /var/lib/ceph folder are updated separately, but this should speed up the
> process for machines with a larger number of disks.
>
>
>
> Nick
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
