Hi Craig, many thanks for your help. I decided to reinstall Ceph.
Regards,
Mike

________________________________
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Tuesday, 19 August 2014 22:24
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael <michael.riede...@br.de> wrote:

> root@ceph-admin-storage:~# ceph pg force_create_pg 2.587
> pg 2.587 now creating, ok
> root@ceph-admin-storage:~# ceph pg 2.587 query
> ...
>     "probing_osds": [
>           "5",
>           "8",
>           "10",
>           "13",
>           "20",
>           "35",
>           "46",
>           "56"],
> ...
>
> All of the OSDs listed under "probing_osds" are up and in, but the
> cluster cannot create the PG, nor scrub, deep-scrub, or repair it.

My experience is that as long as you have down_osds_we_would_probe in the
pg query, ceph pg force_create_pg won't do anything. ceph osd lost didn't
help either. The PGs would go into the creating state, then revert to
incomplete.

The only way I was able to get them to stay in the creating state was to
re-create all of the OSD IDs listed in down_osds_we_would_probe. Even
then, it wasn't deterministic: I issued the ceph pg force_create_pg, and
it actually took effect sometime in the middle of the night, after an
unrelated OSD went down and came back up. It was a very frustrating
experience.

> Just to be sure that I did it the right way:
>
> # stop ceph-osd id=x
> # ceph osd out x
> # ceph osd crush remove osd.x
> # ceph auth del osd.x
> # ceph osd rm x

My procedure was the same as yours, with the addition of a ceph osd lost x
before the ceph osd rm.
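For anyone following along: a minimal sketch of the check Craig describes,
assuming the stuck PG is 2.587 (substitute your own PG id). It greps the
pg query output rather than relying on the exact JSON layout:

    # List the OSDs peering is currently probing, and the down OSDs it
    # would probe if it could reach them:
    ceph pg 2.587 query | grep -A 10 '"probing_osds"'
    ceph pg 2.587 query | grep -A 10 '"down_osds_we_would_probe"'

    # Per Craig's observation, force_create_pg is only likely to stick
    # once down_osds_we_would_probe comes back empty:
    ceph pg force_create_pg 2.587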
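Put together, the amended removal procedure might look like the sketch
below. osd.x is a placeholder, the stop command assumes an Upstart-era
deployment as in the listing above, and ceph osd lost requires its
confirmation flag:

    # Take the dead OSD out of service and remove it from the cluster maps.
    stop ceph-osd id=x              # or your init system's equivalent
    ceph osd out x
    ceph osd crush remove osd.x
    ceph auth del osd.x

    # Craig's addition: mark the OSD as permanently lost before removing
    # it, so peering stops waiting for data that cannot come back.
    ceph osd lost x --yes-i-really-mean-it
    ceph osd rm x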