Thanks Nick. That did it! The cache cleans itself up now.
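For anyone who finds this thread later, the missing piece was target_max_bytes. A minimal sketch of the kind of commands involved (the byte values here are examples, not necessarily what I used):

```shell
# The cache tiering ratios are fractions of target_max_bytes, not of the
# raw pool capacity, so the tiering agent does nothing until it is set.
# Example value: ~140G usable (2x 140G SSDs at size=2), minus headroom.
ceph osd pool set cache target_max_bytes 120000000000
# An object-count cap can be set as well (example value):
ceph osd pool set cache target_max_objects 1000000
```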

> On Sep 14, 2015, at 11:49 , Nick Fisk <n...@fisk.me.uk> wrote:
> 
> Have you set target_max_bytes? Otherwise those ratios are not relative to 
> anything; they use target_max_bytes as the maximum, not the pool size.
>  
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of deeepdish
> Sent: 14 September 2015 16:27
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Cache tier full not evicting
>  
> Hi Everyone,
>  
> Getting close to cracking my understanding of cache tiering and EC pools. 
> Stuck on one anomaly which I don’t understand — spent hours reviewing docs 
> online and can’t seem to pinpoint what I’m doing wrong. Referencing 
> http://xo4t.mj.am/link/xo4t/no2irn4/1/4BSmK1EUshpYjOdI2VWk4g/aHR0cDovL2NlcGguY29tL2RvY3MvbWFzdGVyL3JhZG9zL29wZXJhdGlvbnMvY2FjaGUtdGllcmluZy8
>  
> Setup:
>  
> Test / PoC Lab environment (not production)
>  
> 1x [26x OSD/MON host]
> 1x MON VM
>  
> Erasure coded pool consisting of 10 spinning OSDs (journals on SSDs, 5:1 
> spinner:SSD ratio)
> Cache tier consisting of 2 SSD OSDs
>  
> Issue:
>  
> Cache tier is not honoring configured thresholds. In my particular case, I 
> have 2 OSDs in pool ‘cache’ (140G each == 280G total pool capacity).
>  
> Pool cache is configured with replica factor of 2 (size = 2, min size = 1)
>  
> Initially I tried the following settings:
>  
> ceph osd pool set cache cache_target_dirty_ratio 0.3
> ceph osd pool set cache cache_target_full_ratio 0.7
> ceph osd pool set cache cache_min_flush_age 1
> ceph osd pool set cache cache_min_evict_age 1
>  
> My cache tier’s utilization hit 96%+, causing the pool to run out of capacity.
>  
> I realized that in a replicated pool, only half of the raw capacity is 
> usable, and made the following adjustments:
>  
> ceph osd pool set cache cache_target_dirty_ratio 0.1
> ceph osd pool set cache cache_target_full_ratio 0.3
> ceph osd pool set cache cache_min_flush_age 1
> ceph osd pool set cache cache_min_evict_age 1
>  
> My assumption was that 0.3 would mean 60% of the replicated (2x) pool’s 
> usable size, and 0.1 would mean 20%.
>  
> Even with the above revised values, I still see the cache tier filling up.
>  
> The cache tier can only be flushed / evicted by manually running the 
> following:
>  
> rados -p cache cache-flush-evict-all
>  
> Thank you.
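Once target_max_bytes is set, the ratios become concrete byte thresholds. A quick back-of-the-envelope sketch, using assumed example values (a 120 GB target with dirty ratio 0.4 and full ratio 0.8 — substitute your own pool's settings):

```shell
# Assumed example values -- not necessarily the settings used above.
TARGET_MAX_BYTES=$((120 * 1000 * 1000 * 1000))  # target_max_bytes = 120 GB
FLUSH_AT=$((TARGET_MAX_BYTES * 4 / 10))         # cache_target_dirty_ratio 0.4
EVICT_AT=$((TARGET_MAX_BYTES * 8 / 10))         # cache_target_full_ratio 0.8
echo "agent starts flushing dirty objects at: $FLUSH_AT bytes"
echo "agent starts evicting clean objects at: $EVICT_AT bytes"
```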

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
