Hi all,

We are running a 144-OSD Ceph cluster and a couple of OSDs are more than 80% full.

This is the general situation:

 osdmap e29344: 144 osds: 144 up, 144 in
      pgmap v48302229: 42064 pgs, 18 pools, 60132 GB data, 15483 kobjects
            173 TB used, 90238 GB / 261 TB avail
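
For completeness, this is roughly how we check the per-OSD fill level ('ceph osd df' is available from Hammer onwards; the commands below are just a sketch of what we run):

  # per-OSD utilisation, to see which OSDs are close to full
  ceph osd df

  # the near-full / full warnings also show up here
  ceph health detail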

We are currently mitigating the problem with 'ceph osd reweight', but the more we read about this, the more doubts we have about using 'ceph osd crush reweight' instead.
At the moment we have no plans to buy new hardware.
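
For reference, these are the two commands we are weighing against each other (osd.27 and the weight values below are only illustrative):

  # override reweight: a 0.0-1.0 factor stored in the OSD map,
  # applied on top of the CRUSH weight
  ceph osd reweight 27 0.85

  # CRUSH weight: the weight stored in the CRUSH map itself
  # (normally sized to the disk capacity in TB)
  ceph osd crush reweight osd.27 1.6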

Our main question is: if a re-weighted OSD restarts and comes back with its original weight, will the data move back onto it?
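
To make the question concrete: after a restart we would simply compare the two weights again, e.g. (osd.27 again just an example):

  # WEIGHT is the CRUSH weight, REWEIGHT is the override
  ceph osd tree | grep -w 'osd\.27'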

What is the correct way to handle this kind of situation?

Many thanks

Simone



--
Simone Spinelli <simone.spine...@unipi.it>
Università di Pisa
Settore Rete, Telecomunicazioni e Fonia - Serra
Direzione Edilizia e Telecomunicazioni