Thanks Irek, it really worked.
 
02.06.2015, 15:58, "Irek Fasikhov" <malm...@gmail.com>:
Hi.
 
Restart the OSD. :)
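If it helps, you can first map the PG to its acting set to see which OSD to restart. A minimal sketch (osd.4 below is a hypothetical ID, and the restart command depends on your init system):

ceph pg map 19.b3        # prints the up/acting OSD sets; the first OSD listed is the primary
restart ceph-osd id=4    # Upstart (Ubuntu 14.04); with sysvinit: /etc/init.d/ceph restart osd.4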

2015-06-02 11:55 GMT+03:00 Никитенко Виталий <v1...@yandex.ru>:
Hi!

I have ceph version 0.94.1.

root@ceph-node1:~# ceph -s
    cluster 3e0d58cd-d441-4d44-b49b-6cff08c20abf
     health HEALTH_OK
     monmap e2: 3 mons at {ceph-mon=10.10.100.3:6789/0,ceph-node1=10.10.100.1:6789/0,ceph-node2=10.10.100.2:6789/0}
            election epoch 428, quorum 0,1,2 ceph-node1,ceph-node2,ceph-mon
     osdmap e978: 16 osds: 16 up, 16 in
      pgmap v6735569: 2012 pgs, 8 pools, 2801 GB data, 703 kobjects
            5617 GB used, 33399 GB / 39016 GB avail
                2011 active+clean
                   1 active+clean+scrubbing+deep
  client io 174 kB/s rd, 30641 kB/s wr, 80 op/s

root@ceph-node1:~# ceph pg dump  | grep -i deep | cut -f 1
  dumped all in format plain
  pg_stat
  19.b3
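
You can also query the PG directly to check its state and last scrub stamps. A minimal sketch, assuming ceph 0.94.x as above:

ceph pg 19.b3 query                  # full JSON state for the PG, including scrub info
ceph pg 19.b3 query | grep -i scrub  # just the last_scrub / last_deep_scrub stamps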

In the log file I see
2015-05-14 03:23:51.556876 7fc708a37700  0 log_channel(cluster) log [INF] : 19.b3 deep-scrub starts
but no "19.b3 deep-scrub ok"

When I then run "ceph pg deep-scrub 19.b3", nothing happens, and no records about it appear in the log files.
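
One way to watch for the scrub messages in real time (the OSD log path assumes the default layout, and osd.4 is a hypothetical primary for this PG):

ceph -w | grep 19.b3                                   # follow the cluster log for events on this PG
tail -f /var/log/ceph/ceph-osd.4.log | grep -i scrub   # or follow the primary OSD's own log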

What can I do to return the PG to the "active+clean" state?
Does it make sense to restart the OSD, or the entire server hosting that OSD?

Thanks.


 
--
Best regards, Irek Nurgayazovich Fasikhov
Mobile: +79229045757