So would you recommend doing an entire node at a time, or going per-OSD?

 
Mike Cave
Senior Systems Administrator

Research Computing Services Team

University of Victoria

O: 250.472.4997

On 2019-11-15, 10:28 AM, "Paul Emmerich" <paul.emmer...@croit.io> wrote:

    You'll have to tell LVM about multi-path, otherwise LVM gets confused.
    But that should be the only thing
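For reference, "telling LVM about multipath" is usually done in /etc/lvm/lvm.conf. A minimal sketch, assuming the multipath maps appear under /dev/mapper/mpath* (the filter pattern is illustrative and depends on your naming):

```
# /etc/lvm/lvm.conf -- minimal sketch; filter pattern is illustrative
devices {
    # Teach LVM to recognize multipath components so it uses the
    # /dev/mapper map rather than the underlying sdX paths
    multipath_component_detection = 1

    # Optionally, scan only the multipath maps and reject everything else
    filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
}
```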
    
    Paul
    
    -- 
    Paul Emmerich
    
    Looking for help with your Ceph cluster? Contact us at https://croit.io
    
    croit GmbH
    Freseniusstr. 31h
    81247 München
    www.croit.io
    Tel: +49 89 1896585 90
    
    On Fri, Nov 15, 2019 at 6:04 PM Mike Cave <mc...@uvic.ca> wrote:
    >
    > Greetings all!
    >
    > I am looking at upgrading to Nautilus in the near future (currently on
    > Mimic). We have a cluster built on 480 OSDs, all using multipath and
    > simple block devices. I see that the ceph-disk tool is now deprecated
    > and that the ceph-volume tool doesn’t do everything ceph-disk did for
    > simple devices (e.g. I’m unable to activate a new OSD and set the
    > location of the wal/block.db, as far as I have been able to figure
    > out). So disk replacements going forward could get ugly.
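One hedged note on the wal/block.db point: for LVM-backed OSDs (as opposed to the simple/ceph-disk-style devices described above), ceph-volume does accept an explicit db device at creation time. A sketch only — the device paths are illustrative, not from this thread; on multipath hardware they would be /dev/mapper entries or pre-created LVs:

```shell
# Sketch only -- device paths are illustrative, not from this thread.
# Create one bluestore OSD with its data on a multipath device and
# its block.db on a separate (e.g. NVMe) partition or LV:
ceph-volume lvm create \
    --bluestore \
    --data /dev/mapper/mpatha \
    --block.db /dev/nvme0n1p1
```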
    >
    > We deploy/manage using Ceph Ansible.
    >
    > I’m okay with converting the OSDs to LVM and understand that it will
    > require a full rebuild of each OSD.
    >
    > I was thinking of going OSD by OSD through the cluster until they are
    > all converted. However, someone suggested doing an entire node at a
    > time (that would be 20 OSDs at a time in this case). Is one method
    > better than the other?
    >
    > Also, a question about setting up LVM: given that I’m using multipath
    > devices, do I have to preconfigure the LVM devices before running the
    > Ansible plays, or will Ansible take care of the LVM setup (even though
    > they are on multipath)?
    >
    > I would then do the upgrade from Mimic to Nautilus after all the OSDs
    > were converted.
    >
    > I’m looking for opinions on best practices for completing this, as I’d
    > like to minimize impact to our clients.
    >
    > Cheers,
    > Mike Cave
    >
    > _______________________________________________
    > ceph-users mailing list
    > ceph-users@lists.ceph.com
    > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
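On the per-OSD vs. per-node question from the quoted message, a back-of-envelope sketch (pure arithmetic from the 480-OSD / 20-OSDs-per-node figures above; this counts only the fraction of OSDs down at once, not recovery traffic or failure-domain effects):

```python
# Back-of-envelope: fraction of the cluster's OSDs being rebuilt at once
# under each replacement strategy (figures from the thread above).
total_osds = 480
osds_per_node = 20

per_osd_fraction = 1 / total_osds               # one OSD out at a time
per_node_fraction = osds_per_node / total_osds  # a whole node out at a time

print(f"per-OSD:  {per_osd_fraction:.2%} of OSDs down")   # ~0.21%
print(f"per-node: {per_node_fraction:.2%} of OSDs down")  # ~4.17%
```

Per-node rebuilds finish the migration in far fewer passes, but each pass degrades roughly twenty times as much data at once, so the per-OSD approach is gentler on clients at the cost of a much longer total schedule.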
    

