It seems that the .log pool grows in size as Ceph runs over time. I've been 
using 20 placement groups (pgs) for the .log pool, and now it complains that 
"HEALTH_WARN pool .log has too few pgs". I don't have a good understanding of 
when Ceph removes old log entries by itself. I saw some entries more than 
9 months old, which I assume have no further use. If I'd like to reduce the 
size of the .log pool manually, what is the recommended way to do it? Using 
'rados rm <obj-name> ...' one object at a time seems very cumbersome.
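
The best I've come up with so far is a shell loop over the output of 
'rados ls'; it still issues one 'rados rm' per object, and it blindly 
assumes that every object in .log is disposable, so please correct me if 
that assumption is wrong:

# rados -p .log ls | while read -r obj; do rados -p .log rm "$obj"; done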

# ceph health detail
HEALTH_WARN pool .log has too few pgs
pool .log objects per pg (8829) is more than 12.9648 times cluster average (681)

# ceph osd dump |grep .log
pool 13 '.log' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 20 pgp_num 20 last_change 303 owner 18446744073709551615 flags hashpspool min_read_recency_for_promote 1 stripe_width 0
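
(I realize the warning itself could probably be silenced by raising the pg 
count on the pool, e.g. 'ceph osd pool set .log pg_num 64' followed by 
'ceph osd pool set .log pgp_num 64', but that just spreads the objects 
across more pgs and does nothing about the log growth itself.)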

# ceph df detail |grep .log
    .log                   13     -             836M      0.01         2476G      176581      172k         2      235k
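
For what it's worth, the numbers are consistent with the warning above: 
176581 objects / 20 pgs ≈ 8829 objects per pg, against a cluster-wide 
average of 681.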

Thanks,

Fangzhe Chang
PS.
Somehow I can only receive digests from the ceph-users list and am not 
immediately notified about responses. If you don't see a reaction from me 
to your answers, it is most likely because I have not yet seen your 
replies ;-(. Thanks very much for any help.


