Hello Cephers!

When my cluster hit the 'full ratio' setting, objects from the cache pool didn't flush to the cold-storage pool.

1. Hit the 'full ratio':

2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster [WRN] OSD near full (90%)
2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/31423 4329 : cluster [WRN] OSD near full (90%)
2016-03-06 11:36:29.255815 osd.64 10.22.11.21:6824/31423 4332 : cluster [WRN] OSD near full (90%)
2016-03-06 11:37:04.769765 osd.64 10.22.11.21:6824/31423 4333 : cluster [WRN] OSD near full (90%)
...

2. Well, OK. I set the option 'ceph osd pool set hotec cache_target_full_ratio 0.8'.
But none of the objects were flushed at all.
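
For reference, this is how the cache-tier thresholds on the pool can be checked (hotec is the cache pool from above; whether every key is readable via 'pool get' depends on the Ceph release, but 'ceph osd dump' shows them in any case):

[root@c1 ~]# ceph osd pool get hotec cache_target_full_ratio
[root@c1 ~]# ceph osd pool get hotec target_max_bytes
[root@c1 ~]# ceph osd pool get hotec target_max_objects
[root@c1 ~]# ceph osd dump | grep hotec

As far as I understand, cache_target_full_ratio is interpreted relative to target_max_bytes / target_max_objects, so those need to be set on the pool as well.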

3. OK. Tried to flush all objects manually:
[root@c1 ~]# rados -p hotec cache-flush-evict-all
        rbd_data.34d1f5746d773.0000000000016ba9
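
For completeness, these are the manual flush/evict variants I know of; the object name is just the one printed above. As far as I know, objects that still have watchers (e.g. from a mapped RBD image) will refuse to be evicted:

[root@c1 ~]# rados -p hotec cache-try-flush-evict-all
[root@c1 ~]# rados -p hotec cache-flush rbd_data.34d1f5746d773.0000000000016ba9
[root@c1 ~]# rados -p hotec cache-evict rbd_data.34d1f5746d773.0000000000016ba9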

4. After a full day the objects are still in the cache pool; nothing was flushed at all:
[root@c1 ~]# rados df
pool name                  KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data                        0            0            0            0            0            6            4       158212    215700473
hotec               797656118     25030755            0            0            0       370599    163045649     69947951  17786794779
rbd                         0            0            0            0            0            0            0            0            0
  total used      2080570792     25030755
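
To watch whether anything actually moves to the base pool, I'm also looking at 'ceph df detail'; if I read the docs right, its DIRTY column shows how many objects in the cache pool are still unflushed:

[root@c1 ~]# ceph df detail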

Is this a bug or expected behaviour?

--
Mike. runs!