Hello,

I ran into an interesting error today and I'm not sure of the best way to fix it.
When I run 'ceph orch device ls', I get the following error on every hard
drive: "Insufficient space (<10 extents) on vgs, LVM detected, locked".
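
In case it helps narrow this down, I can also pull the free extent counts for
the VGs, e.g. with something like the following (standard LVM reporting
fields, as far as I know):

  vgs -o vg_name,vg_size,vg_free,vg_free_count

which should show whether each of the ceph-* VGs really has fewer than 10
free extents.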

Here's the output of ceph-volume lvm list, in case it helps:
====== osd.0 =======

  [block]       /dev/ceph-efb83a91-3c7b-4329-babc-017b0a00e95a/osd-block-b017780d-38f9-4da7-b9df-2da66e1aa0fd

      block device              /dev/ceph-efb83a91-3c7b-4329-babc-017b0a00e95a/osd-block-b017780d-38f9-4da7-b9df-2da66e1aa0fd
      block uuid                8kIdfD-kQSh-Mhe4-zRIL-b1Pf-PTaC-CVosbE
      cephx lockbox secret
      cluster fsid              1684fe88-aae0-11ec-9593-df430e3982a0
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  b017780d-38f9-4da7-b9df-2da66e1aa0fd
      osd id                    0
      osdspec affinity          dashboard-admin-1648152609405
      type                      block
      vdo                       0
      devices                   /dev/sdb

====== osd.10 ======

  [block]       /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a

      block device              /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
      block uuid                gvvrMV-O98L-P6Sl-dnJT-NVwM-P85e-Reqql4
      cephx lockbox secret
      cluster fsid              1684fe88-aae0-11ec-9593-df430e3982a0
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  3c353f8c-ab0f-4589-9e98-4f840e86341a
      osd id                    10
      osdspec affinity          dashboard-admin-1648152609405
      type                      block
      vdo                       0
      devices                   /dev/sdh

====== osd.12 ======

And here's lvdisplay for one of those LVs:

  --- Logical volume ---
  LV Path                /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
  LV Name                osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
  VG Name                ceph-a0e85035-cfe2-4070-b58a-a88ec964794c
  LV UUID                gvvrMV-O98L-P6Sl-dnJT-NVwM-P85e-Reqql4
  LV Write Access        read/write
  LV Creation host, time hyperion02, 2022-03-24 20:12:17 +0000
  LV Status              available
  # open                 24
  LV Size                <1.82 TiB
  Current LE             476932
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
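
If the exact free extent numbers would help, I can also run vgdisplay against
one of the ceph VGs, e.g. something like:

  vgdisplay ceph-a0e85035-cfe2-4070-b58a-a88ec964794c

and report the "Free  PE / Size" line from that.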

Let me know if you need any other information.

Thanks,
Curt