Thanks for posting this. I just ran into the same thing upgrading a
cluster from 10.2.7 to 10.2.9 - this time on CentOS 7.3, and also with
the same dmcrypt setup. Adding the ceph_fsid file to each of the lockbox
partitions lets the disks activate successfully.
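Something along these lines should do it - I'm assuming the lockbox
partitions are already mounted under /var/lib/ceph/osd-lockbox/<part-uuid>
(the default for dmcrypt OSDs) and that the cluster fsid is in ceph.conf,
so adjust the paths to your own layout:

    # sketch only - check the paths against your deployment first
    fsid=$(ceph-conf --lookup fsid)              # or: ceph fsid
    for d in /var/lib/ceph/osd-lockbox/*; do
        [ -e "$d/ceph_fsid" ] || echo "$fsid" > "$d/ceph_fsid"
    done

    # then re-run activation, e.g.
    ceph-disk trigger --sync /dev/sda3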
Graham
On 07/26/2017 02:28 AM, J
That value is in ceph.conf, but I wouldn't expect that to have helped,
looking at the ceph-disk code (in the module level function `activate`)::
    ceph_fsid = read_one_line(path, 'ceph_fsid')
    if ceph_fsid is None:
        raise Error('No cluster uuid assigned.')
Maybe there is a thinko there.
Does your ceph.conf file have your cluster uuid listed in it? You should be
able to see what it is from ceph status and add it to your config if it's
missing.
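For example (the uuid here is just a placeholder):

    # print the cluster uuid - the same value shows at the top of `ceph status`
    ceph fsid

    # then make sure /etc/ceph/ceph.conf carries it under [global], e.g.
    #   [global]
    #   fsid = <uuid printed above>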
On Tue, Jul 25, 2017, 7:38 AM Jasper Spaans wrote:
> Hi list,
>
> We had some troubles activating our OSDs after upgrading from Ceph
> 10
Hi list,
We had some troubles activating our OSDs after upgrading from Ceph
10.2.7 to 10.2.9. The error we got was 'No cluster uuid assigned' after
calling ceph-disk trigger --sync /dev/sda3 .
Our cluster runs on Ubuntu 16.04, has been deployed using the
Ceph-ansible roles, and we're using the co