You mean `ceph -w` and `ceph -s` didn't show any PGs in
the active+clean+scrubbing state while pool 2's PGs were being scrubbed?

I see that happen with my really small pools.  I have a bunch of RadosGW
pools that contain fewer than 5 objects and ~1 kB of data.  When I scrub the
PGs in those pools, they complete so fast that they never show up in `ceph -w`.
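
One way to confirm that those scrubs actually ran, even though they never
showed up in `ceph -w`, is to look at the scrub timestamps instead of
watching for the scrubbing state.  Something like this should do it (2.4 is
just one of the PGs from your logs):

    # the scrub_stamp column shows when each PG in pool 2 last scrubbed
    ceph pg dump | grep '^2\.'

    # or query a single PG and look at its last_scrub_stamp
    ceph pg 2.4 query | grep scrub_stamp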


Since you have pools 0, 1, and 2, I assume those are the default 'data',
'metadata', and 'rbd'.  If you're not using RBD, then the rbd pool will be
very small.
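
You can check which pools those ids map to, and how little data is in each,
with something like:

    # list pool ids and names
    ceph osd lspools

    # per-pool object counts and sizes
    rados df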



On Tue, Dec 2, 2014 at 5:32 AM, Mallikarjun Biradar <
mallikarjuna.bira...@gmail.com> wrote:

> Hi all,
>
> I was running a scrub while the cluster was rebalancing.
>
> From the OSD logs:
>
> 2014-12-02 18:50:26.934802 7fcc6b614700  0 log_channel(default) log [INF] : 0.3 scrub ok
> 2014-12-02 18:50:27.890785 7fcc6b614700  0 log_channel(default) log [INF] : 0.24 scrub ok
> 2014-12-02 18:50:31.902978 7fcc6b614700  0 log_channel(default) log [INF] : 0.25 scrub ok
> 2014-12-02 18:50:33.088060 7fcc6b614700  0 log_channel(default) log [INF] : 0.33 scrub ok
> 2014-12-02 18:50:50.828893 7fcc6b614700  0 log_channel(default) log [INF] : 1.61 scrub ok
> 2014-12-02 18:51:06.774648 7fcc6b614700  0 log_channel(default) log [INF] : 1.68 scrub ok
> 2014-12-02 18:51:20.463283 7fcc6b614700  0 log_channel(default) log [INF] : 1.80 scrub ok
> 2014-12-02 18:51:39.883295 7fcc6b614700  0 log_channel(default) log [INF] : 1.89 scrub ok
> 2014-12-02 18:52:00.568808 7fcc6b614700  0 log_channel(default) log [INF] : 1.9f scrub ok
> 2014-12-02 18:52:15.897191 7fcc6b614700  0 log_channel(default) log [INF] : 1.a3 scrub ok
> 2014-12-02 18:52:34.681874 7fcc6b614700  0 log_channel(default) log [INF] : 1.aa scrub ok
> 2014-12-02 18:52:47.833630 7fcc6b614700  0 log_channel(default) log [INF] : 1.b1 scrub ok
> 2014-12-02 18:53:09.312792 7fcc6b614700  0 log_channel(default) log [INF] : 1.b3 scrub ok
> 2014-12-02 18:53:25.324635 7fcc6b614700  0 log_channel(default) log [INF] : 1.bd scrub ok
> 2014-12-02 18:53:48.638475 7fcc6b614700  0 log_channel(default) log [INF] : 1.c3 scrub ok
> 2014-12-02 18:54:02.996972 7fcc6b614700  0 log_channel(default) log [INF] : 1.d7 scrub ok
> 2014-12-02 18:54:19.660038 7fcc6b614700  0 log_channel(default) log [INF] : 1.d8 scrub ok
> 2014-12-02 18:54:32.780646 7fcc6b614700  0 log_channel(default) log [INF] : 1.fa scrub ok
> 2014-12-02 18:54:36.772931 7fcc6b614700  0 log_channel(default) log [INF] : 2.4 scrub ok
> 2014-12-02 18:54:41.758487 7fcc6b614700  0 log_channel(default) log [INF] : 2.9 scrub ok
> 2014-12-02 18:54:46.910043 7fcc6b614700  0 log_channel(default) log [INF] : 2.a scrub ok
> 2014-12-02 18:54:51.908335 7fcc6b614700  0 log_channel(default) log [INF] : 2.16 scrub ok
> 2014-12-02 18:54:54.940807 7fcc6b614700  0 log_channel(default) log [INF] : 2.19 scrub ok
> 2014-12-02 18:55:00.956170 7fcc6b614700  0 log_channel(default) log [INF] : 2.44 scrub ok
> 2014-12-02 18:55:01.948455 7fcc6b614700  0 log_channel(default) log [INF] : 2.4f scrub ok
> 2014-12-02 18:55:07.273587 7fcc6b614700  0 log_channel(default) log [INF] : 2.76 scrub ok
> 2014-12-02 18:55:10.641274 7fcc6b614700  0 log_channel(default) log [INF] : 2.9e scrub ok
> 2014-12-02 18:55:11.621669 7fcc6b614700  0 log_channel(default) log [INF] : 2.ab scrub ok
> 2014-12-02 18:55:18.261900 7fcc6b614700  0 log_channel(default) log [INF] : 2.b0 scrub ok
> 2014-12-02 18:55:19.560766 7fcc6b614700  0 log_channel(default) log [INF] : 2.b1 scrub ok
> 2014-12-02 18:55:20.501591 7fcc6b614700  0 log_channel(default) log [INF] : 2.bb scrub ok
> 2014-12-02 18:55:21.523936 7fcc6b614700  0 log_channel(default) log [INF] : 2.cd scrub ok
>
> Interestingly, for the 2.x PGs (2.4, 2.9, etc.) in the logs here, the
> cluster status was not reporting scrubbing, whereas for the 0.x & 1.x PGs
> it was reported as scrubbing in the cluster status.
>
> For the scrub operations on the 2.x PGs, was scrubbing really performed,
> or is the cluster status failing to report them?
>
> Thanks & Regards,
> Mallikarjun Biradar
>
