Hello,

Looking for a bit of guidance on an approach to upgrading from Nautilus to
Octopus, considering CentOS and ceph-ansible.

We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as
of this post).
- There are 4 monitor hosts with mon, mgr, and dashboard functions
consolidated;
- 4 RGW hosts;
- 4 OSD hosts, with 10 OSDs each.   This is planned to scale to 7 nodes
with additional OSDs and capacity (we're considering doing this as part of
the upgrade process);
- Currently using ceph-ansible (it's a chore to maintain scripts / configs
between playbook versions - although it's a great framework, it's not ideal
in our case);
- All hosts run CentOS 7.x;
- dm-crypt in use on LVM OSDs (via ceph-ansible);
- Deployment IS NOT containerized.

Octopus support on CentOS 7 is limited due to Python dependencies, so we
want to move to CentOS 8 or Ubuntu 20.04.   The other outlier is the CentOS 8
kernel's native support for LSI SAS2008 (e.g. 9211) HBAs, which some of our
OSD nodes use.

Irrespective of the OS considerations above, the upgrade will be to an OS
that fully supports Octopus.

We'd like to make use of the ceph orchestrator for ongoing cluster management.
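
As we understand it, that would mean enabling the cephadm backend once the
mgr daemons are on Octopus - roughly something like the following (a sketch
only; cephadm implies containerized daemons, which would be a change for us):

    ceph mgr module enable cephadm       # requires an Octopus mgr
    ceph orch set backend cephadm
    ceph orch status                     # confirm the backend is active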


Here's an upgrade path scenario that is being considered, at a high level:
1.  Deploy a new monitor on CentOS 8.   This may be Nautilus, deployed via
our established ceph-ansible playbook.
2.  Upgrade the new monitor to Octopus using dnf / ceph package upgrade
(rough commands in the first sketch after this list).
3.  Decommission individual monitor hosts (existing on CentOS 7) and
redeploy on CentOS 8 via ceph orchestrator from the new monitor node;
4.  Repeat until all monitors are on the new OS + Octopus (all deployed via
Ceph Orchestrator);
5.  Add additional OSD nodes / drives / capacity via the orchestrator on
Octopus (see the second sketch after this list for the orchestrator steps);
6.  Upgrade existing OSD hosts by keeping OSDs intact and reinstalling the
OS (CentOS 8 or Ubuntu 20.04);
7.  Deploy Ceph Octopus on the reinstalled nodes via the orchestrator;
8.  Reactivate / rescan the intact OSDs on each newly redeployed node (i.e.
ceph-volume lvm activate --all; third sketch after this list);
9.  Rinse / repeat for the remaining Nautilus nodes.
10.  Manually upgrade RGW packages on the gateway nodes.
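
For step 2, the package upgrade on the new monitor would be roughly as
follows (a sketch; it assumes the Octopus el8 repo from download.ceph.com,
or the CentOS Storage SIG equivalent, is already configured on the host):

    dnf update ceph-mon ceph-mgr          # pull 15.2.x packages from the Octopus repo
    systemctl restart ceph-mon.target ceph-mgr.target
    ceph versions                         # confirm the mon / mgr now report Octopus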
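
For steps 3-5 and 7, we're picturing something along these lines from the
new Octopus monitor (again only a sketch; the hostnames and device below
are made up for illustration):

    ceph orch host add mon05                            # hypothetical new CentOS 8 mon host
    ceph orch apply mon --placement="mon05 mon06 mon07 mon08"
    ceph orch host add osd05                            # hypothetical new / reinstalled OSD host
    ceph orch daemon add osd osd05:/dev/sdb             # hypothetical device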
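
For step 8, on each reinstalled OSD host (assuming ceph.conf and the
bootstrap-osd keyring are back in place, and that the dm-crypt keys remain
retrievable from the monitors' config-key store, which is where ceph-volume
keeps them as far as we know):

    ceph osd set noout                    # before taking the host down for reinstall
    ceph-volume lvm activate --all        # rescan and bring up the existing encrypted LVM OSDs
    ceph osd unset noout                  # once the OSDs are back up and in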

Thank you.