Running the following after prepare and a reboot "solves" this problem:
[root@osd01 ~]# partx -v -a /dev/mapper/mpatha
partition: none, disk: /dev/mapper/mpatha, lower: 0, upper: 0
/dev/mapper/mpatha: partition table type 'gpt' detected
partx: /dev/mapper/mpatha: adding partition #1 failed: Inval
I also noticed there are no folders under /var/lib/ceph/osd/ ...
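A hedged side note, not from the original thread: for multipath devices the
partition mappings are usually created by kpartx rather than partx, so if the
above keeps failing it may be worth trying something like:

  kpartx -a -v /dev/mapper/mpatha
  ls -l /dev/mapper/        # mpatha1, mpatha2, ... should appear as dm devices
  multipath -ll             # confirm the multipath map itself is healthy

kpartx and multipath are part of multipath-tools; whether this helps depends on
why the kernel rejects the partition add in the first place.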
Kind regards / best regards,
Kevin Olbrich.
2018-02-04 19:01 GMT+01:00 Kevin Olbrich:
> Hi!
>
> I am currently trying to re-deploy a cluster from filestore to bluestore.
> I zapped all disks (multiple times), but adding a disk array fails:
Hi!
I am currently trying to re-deploy a cluster from filestore to bluestore.
I zapped all disks (multiple times), but adding a disk array fails:
Prepare:
> ceph-deploy --overwrite-conf osd prepare --bluestore --block-wal /dev/sdb
> --block-db /dev/sdb osd01.cloud.example.local:/dev/mapper/mpatha
Ac
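For reference, a zap call matching the prepare syntax above would look roughly
like this (the exact form depends on the ceph-deploy version, so treat it as a
sketch):

  ceph-deploy disk zap osd01.cloud.example.local:/dev/mapper/mpatha

followed by the prepare command quoted above.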
Hello,
What is the best kernel for Luminous on Ubuntu 16.04?
Is linux-image-virtual-lts-xenial still the best one? Or will
linux-virtual-hwe-16.04 offer some improvement?
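(As a hedged aside, switching between the two is only a metapackage change;
checking the running kernel and pulling in the HWE stack would be roughly:

  uname -r
  sudo apt-get install --install-recommends linux-virtual-hwe-16.04

plus a reboot. Whether it actually makes a difference for Luminous is exactly
the question.)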
Thanks,
--
Yoann Moulin
EPFL IC-IT
Hello!
One more newbie question )
I have a minimal working ceph cluster: 3 nodes, 6 OSDs.
I understand that I can scale up the cluster by adding one or more OSDs or
even nodes without stopping client I/O or losing consistency.
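(For example, after an OSD is added, the rebalance can be followed without
touching client traffic, assuming a standard deployment:

  ceph osd tree    # the new OSD shows up under its host
  ceph -s          # health plus recovery/backfill progress
  ceph -w          # watch the rebalance live

These commands only read cluster state.)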
What if I decide to add a cache pool backed by SSDs or move a journal to