Is this expected behaviour, or am I missing something? Here is what I did:
-> $ systemctl stop ceph-9f4f9dba-72c7-11f0-8052-525400519d29@osd.10.service
-> $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/osd.10/
inferring bluefs devices from bluestore path
1 : device size 0x4affc00000(300 GiB) : using 0x36f49bd000(220 GiB)
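For anyone decoding those hex figures, converting them to IEC units (a quick sketch using `numfmt` from GNU coreutils) confirms BlueFS still believes the device is ~300 GiB, not 400:

```shell
# Convert the hex byte counts printed by bluefs-bdev-sizes into IEC units.
numfmt --to=iec $(( 0x4affc00000 ))   # device size BlueFS sees -> 300G
numfmt --to=iec $(( 0x36f49bd000 ))   # space BlueFS is using   -> 220G
```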
-> $ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/osd.10/
inferring bluefs devices from bluestore path
1 : device size 0x4affc00000(300 GiB) : using 0x36f49bd000(220 GiB)
Expanding DB/WAL...
1 : nothing to do, skipped
Isn't that weird, or even wrong? Wouldn't you all agree?
-> $ ceph osd df
...
10    hdd  0.29300  1.00000  300 GiB  219 GiB  218 GiB  567 KiB  632 MiB  81 GiB  72.91  0.99  165  up
-> $ ceph-volume lvm list
...
====== osd.10 ======
...
devices /dev/vdc
...
-> $ fdisk -l /dev/vdc
Disk /dev/vdc: 400 GiB, 429496729600 bytes, 838860800 sectors
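Since ceph-volume deployed this OSD on LVM, my assumption is that `bluefs-bdev-expand` only sees the logical volume, not /dev/vdc itself. One way to check whether the LV underneath osd.10 ever grew (a sketch; run as root, names come from `ceph-volume lvm list`):

```shell
# Compare the raw disk against the LVM stack carved out of it.
lsblk /dev/vdc                      # disk size plus the LV(s) on top of it
pvs /dev/vdc                        # PSize still 300 GiB => pvresize needed
lvs -o lv_name,vg_name,lv_size      # the OSD's LV may still be 300 GiB
```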
So somewhere along the line Ceph is ignoring, or not picking up, the new size? From what I can find, 'ceph-volume' does not offer resizing of volumes.
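That matches my understanding: ceph-volume does not resize LVs, so after enlarging the virtual disk the PV and LV have to be grown with plain LVM tools before BlueFS has anything to expand into. A sketch of one plausible sequence, not verified on this cluster; the VG/LV names are placeholders, take the real ones from `ceph-volume lvm list` (and keep the OSD stopped, as above):

```shell
# Sketch with placeholder VG/LV names: grow the LVM stack under osd.10,
# then retry the BlueFS expand.
pvresize /dev/vdc                      # make LVM see the enlarged disk
vgs                                    # VFree should now show ~100 GiB
lvextend -l +100%FREE ceph-vg/osd-lv   # grow the OSD's LV (placeholder names)
ceph-bluestore-tool bluefs-bdev-expand \
    --path /var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/osd.10/
```

After that, `bluefs-bdev-sizes` should report the 400 GiB device size, and the OSD can be started again.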
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]