> On Wed, 2021-12-08 at 16:06 +0000, Marc wrote:
> > >
> > > It isn't possible to upgrade from CentOS 7 to anything... At least
> > > not without requiring massive hacks that may or may not work (and
> > > most likely won't).
> >
> > I meant wipe the os disk, install whatever, install Nautilus and put
> > back some dirs from the previous os, like /etc/ceph and
> > /var/lib/ceph/. Get it to work for one node, and you have your
> > blueprint for the rest.
>
> I was planning to do it like this in the near future. Was afraid this
> wasn't possible due to the totally different architecture (Nautilus
> isn't docker/podman based yet). Still, my cluster only has 2 nodes
> with all the data on one of them; basically I'm running a RAID1 on
> steroids.
You don't *have* to run Pacific using Docker/Podman. I did a similar
upgrade over the summer, only from RHEL7 to RHEL8, and the "back up key
directories/files, clean install, restore directories" approach is
essentially what you have to do, with the extra step of telling Ceph to
re-detect your OSDs and recreate the startup scripts. That last bit is
easier if all your OSDs are already LVM-based rather than simple. If
you have simple OSDs, make sure you grab the JSON files for them in
/etc/ceph. I can try to dig up the exact commands I used if needed;
rough sketches of the main steps are at the end of this mail.

Once the operating systems are reinstalled and the cluster is
functioning again, you upgrade from Nautilus to Octopus, wait for the
Octopus OSD format conversion to happen, then upgrade to Pacific, and
*then* worry about whether you want to switch to Docker/Podman.

Oh, and a word of warning: Pacific only supports cephx v2
authentication. If you have clients doing kernel RBD mounting, make
sure their kernels are at least 4.9.150, 4.14.86, 4.19, or later.
Prior to those versions, the kernel driver doesn't support cephx v2,
and you'll have a Bad Time.
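For the OSD re-detection step, it went roughly like this (from memory,
so treat it as a sketch rather than a recipe; the JSON filename below
is a placeholder):

    # After the clean install: restore /etc/ceph and /var/lib/ceph,
    # reinstall the Nautilus packages, then re-activate the OSDs.

    # LVM-based OSDs: scan the LVM tags and recreate the systemd
    # units and tmpfs mounts for every OSD found on the box
    ceph-volume lvm activate --all

    # Simple (non-LVM) OSDs: activate from the JSON metadata that
    # "ceph-volume simple scan" wrote under /etc/ceph/osd/ before
    # the wipe (which is why you grab those files first)
    ceph-volume simple activate --file /etc/ceph/osd/0-<osd-fsid>.json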
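The Nautilus -> Octopus -> Pacific hops are the usual rolling-upgrade
dance, roughly the following per release (double-check the release
notes for the exact order and any release-specific steps):

    ceph osd set noout                    # avoid rebalancing during restarts
    # upgrade the Ceph packages on every node, then restart the
    # daemons in order: mons first, then mgrs, then OSDs
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target     # Octopus OSDs do their omap
                                          # format conversion on this start
    ceph versions                         # confirm everything runs the
                                          # new release
    ceph osd require-osd-release octopus  # "pacific" on the second pass
    ceph osd unset noout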
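And for the kernel RBD clients, checking ahead of time is as simple as:

    uname -r
    # needs to be at least 4.9.150 (on the 4.9 series), 4.14.86 (on
    # 4.14), or 4.19 and later; anything older can't speak cephx v2
    # once the cluster is on Pacific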