Hi, I already responded to your first attempt:

https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/GS7KJRJP7BAOF66KJM255G27TJ4KG656/

Please provide the requested details.


Quoting Eugenio Tampieri <eugenio.tampi...@readydigital.it>:

Hello,
I'm writing to troubleshoot an otherwise functional Ceph Quincy cluster that has issues with CephFS: I cannot mount it with ceph-fuse (the mount gets stuck), and if I mount it over NFS I can list directories but cannot read or write anything.
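
In case it helps, this is roughly how I reproduce the hang and capture client-side logs (the mount point and client name below are placeholders, not necessarily my exact setup):

   # /mnt/cephfs and the 'admin' client below are placeholders
   # run ceph-fuse in the foreground with verbose client logging
   ceph-fuse -f --id admin /mnt/cephfs --debug-client 20 --debug-ms 1

   # in another shell, check the MDS and overall health while the mount hangs
   ceph fs status
   ceph health detail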
Here's the output of ceph -s:
  cluster:
    id:     3b92e270-1dd6-11ee-a738-000c2937f0ec
    health: HEALTH_WARN
            mon ceph-storage-a is low on available space
            1 daemons have recently crashed
            too many PGs per OSD (328 > max 250)

  services:
    mon:        5 daemons, quorum ceph-mon-a,ceph-storage-a,ceph-mon-b,ceph-storage-c,ceph-storage-d (age 105m)
    mgr:        ceph-storage-a.ioenwq(active, since 106m), standbys: ceph-mon-a.tiosea
    mds:        1/1 daemons up, 2 standby
    osd:        4 osds: 4 up (since 104m), 4 in (since 24h)
    rbd-mirror: 2 daemons active (2 hosts)
    rgw:        2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 481 pgs
    objects: 231.83k objects, 648 GiB
    usage:   1.3 TiB used, 1.8 TiB / 3.1 TiB avail
    pgs:     481 active+clean

  io:
    client:   1.5 KiB/s rd, 8.6 KiB/s wr, 1 op/s rd, 0 op/s wr
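
In case it's useful, here is roughly how I have been looking into the two health warnings (the crash ID below is a placeholder):

   # find the daemon behind "1 daemons have recently crashed"
   ceph crash ls
   ceph crash info <crash-id>   # <crash-id> is a placeholder taken from the ls output

   # per-pool PG counts behind "too many PGs per OSD (328 > max 250)"
   ceph osd pool ls detail
   ceph osd df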
Best regards,

Eugenio Tampieri
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io