Hi Everyone,

I'm getting close to cracking my understanding of cache tiering and EC pools, but I'm stuck on one anomaly that I can't figure out. I've spent hours reviewing the docs online and can't pinpoint what I'm doing wrong. I'm referencing:
http://ceph.com/docs/master/rados/operations/cache-tiering/

Setup:

Test / PoC Lab environment (not production)

1x combined OSD/MON host (26 OSDs)
1x MON VM

Erasure-coded pool consisting of 10 spinning OSDs (journals on SSDs, 5:1 spinner:SSD ratio)
Cache tier consisting of 2 SSD OSDs
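
For reference, the tiering was set up along the lines of the standard procedure in the docs above. The pool names, PG counts, and EC profile values below are illustrative placeholders rather than my exact commands:

# erasure-code profile and base pool (k/m and PG counts are placeholders)
ceph osd erasure-code-profile set ecprofile k=8 m=2
ceph osd pool create ecpool 128 128 erasure ecprofile

# replicated pool for the cache tier on the 2 SSD OSDs (CRUSH placement omitted)
ceph osd pool create cache 128 128

# attach the cache pool to the EC pool in writeback mode
ceph osd tier add ecpool cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay ecpool cache
ceph osd pool set cache hit_set_type bloom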

Issue:

The cache tier is not honoring the configured thresholds. In my particular case, I have 2 OSDs in pool ‘cache’ (140G each == 280G raw pool capacity).

Pool ‘cache’ is configured with a replica factor of 2 (size = 2, min_size = 1).
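
To rule out a typo on my side, the replica settings and current usage can be confirmed with, for example:

ceph osd pool get cache size
ceph osd pool get cache min_size
ceph df detail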

Initially I tried the following settings:

ceph osd pool set cache cache_target_dirty_ratio 0.3
ceph osd pool set cache cache_target_full_ratio 0.7
ceph osd pool set cache cache_min_flush_age 1
ceph osd pool set cache cache_min_evict_age 1
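
The applied values can be read back with ceph osd pool get, for example:

ceph osd pool get cache cache_target_dirty_ratio
ceph osd pool get cache cache_target_full_ratio
ceph osd pool get cache cache_min_flush_age
ceph osd pool get cache cache_min_evict_age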

My cache tier’s utilization hit 96%+, causing the pool to run out of capacity.

I then realized that with a replicated pool (size = 2), only half of the raw capacity is usable, so I made the following adjustments:

ceph osd pool set cache cache_target_dirty_ratio 0.1
ceph osd pool set cache cache_target_full_ratio 0.3
ceph osd pool set cache cache_min_flush_age 1
ceph osd pool set cache cache_min_evict_age 1

My assumption is that, accounting for the 2x replication, cache_target_full_ratio = 0.3 should correspond to roughly 60% of the raw pool capacity and cache_target_dirty_ratio = 0.1 to roughly 20%.
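
Spelling out the arithmetic behind that assumption:

0.3 x 280G = 84G of cached objects -> ~168G raw at 2x replication (~60% of 280G)
0.1 x 280G = 28G of dirty objects  -> ~56G raw at 2x replication  (~20% of 280G)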

Even with the revised values above, I still see the cache tier filling up.

The cache tier can only be flushed / evicted by manually running the following:

rados -p cache cache-flush-evict-all
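
If I understand the docs correctly, there is also a non-blocking variant (the exact semantics are my assumption, not something I have verified):

rados -p cache cache-try-flush-evict-all   # skips objects that are still busy rather than blocking on them

Either way, nothing gets flushed or evicted automatically.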

Thank you.