Hi again,
I found a solution:
initctl stop ceph-osd id=29
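(Background on why this works: on this Ubuntu system ceph-osd runs as an Upstart instance job, so stopping it via initctl also keeps Upstart from respawning the daemon, which a plain kill does not. To double-check that the instance is really stopped, something like the following should work; a minimal sketch, assuming the standard Ceph Upstart jobs:)

# list the ceph-osd Upstart instances and their state
initctl list | grep ceph-osd
# the per-OSD instance can also be queried directly, analogous to the stop command above
initctl status ceph-osd id=29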

root@ceph-02:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      203.8   root default
-3      203.8           rack unknownrack
-2      29.12                   host ceph-01
52      3.64                            osd.52  up      1
53      3.64                            osd.53  up      1
54      3.64                            osd.54  up      1
55      3.64                            osd.55  up      1
56      3.64                            osd.56  up      1
57      3.64                            osd.57  up      1
58      3.64                            osd.58  up      1
59      3.64                            osd.59  up      1
-4      43.68                   host ceph-02
8       3.64                            osd.8   up      1
10      3.64                            osd.10  up      1
9       3.64                            osd.9   up      0.8936
11      3.64                            osd.11  up      0.9022
12      3.64                            osd.12  up      0.8664
13      3.64                            osd.13  up      0.9084
14      3.64                            osd.14  up      0.8097
15      3.64                            osd.15  up      0.893
29      0                               osd.29  down    0
...

root@ceph-02:~# ceph-osd -i 29 --flush-journal
2014-06-23 09:08:05.311614 7f1ecb5d6780 -1 journal FileJournal::_open:
disabling aio for non-block journal.  Use journal_force_aio to force use
of aio anyway
2014-06-23 09:08:05.313059 7f1ecb5d6780 -1 flushed journal
/srv/journal/osd.29.journal for object store /var/lib/ceph/osd/ceph-29

root@ceph-02:~# umount /var/lib/ceph/osd/ceph-29
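(For completeness, bringing the OSD back later should just be the reverse order on this setup; a rough sketch - the device path is only a placeholder for whatever disk backs osd.29, and it assumes the standard Upstart job:)

# re-mount the OSD data directory (device name is an example, not from this system)
mount /dev/sdX1 /var/lib/ceph/osd/ceph-29
# start the single OSD instance again via Upstart
initctl start ceph-osd id=29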


But why doesn't "ceph osd down osd.29" work?

Udo

On 23.06.2014 09:01, Udo Lembke wrote:
> Hi,
> AFAIK, "ceph osd down osd.29" should mark osd.29 as down.
> But what can be done if this doesn't happen?
> 
> I got following:
> root@ceph-02:~# ceph osd down osd.29
> marked down osd.29.
> 
> root@ceph-02:~# ceph osd tree
> 2014-06-23 08:51:00.588042 7f15747f5700  0 -- :/1018258 >>
> 172.20.2.11:6789/0 pipe(0x7f157002a370 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f157002a5d0).fault
> # id    weight  type name       up/down reweight
> -1      203.8   root default
> -3      203.8           rack unknownrack
> -2      29.12                   host ceph-01
> 52      3.64                            osd.52  up      1
> 53      3.64                            osd.53  up      1
> 54      3.64                            osd.54  up      1
> 55      3.64                            osd.55  up      1
> 56      3.64                            osd.56  up      1
> 57      3.64                            osd.57  up      1
> 58      3.64                            osd.58  up      1
> 59      3.64                            osd.59  up      1
> -4      43.68                   host ceph-02
> 8       3.64                            osd.8   up      1
> 10      3.64                            osd.10  up      1
> 9       3.64                            osd.9   up      0.8936
> 11      3.64                            osd.11  up      0.9022
> 12      3.64                            osd.12  up      0.8664
> 13      3.64                            osd.13  up      0.9084
> 14      3.64                            osd.14  up      0.8097
> 15      3.64                            osd.15  up      0.893
> 29      0                               osd.29  up      0
> ...
> 
> osd.29 is marked as up and can't be unmounted, because it's in use. If I
> kill the OSD process, it is automatically restarted right away.
> 
> My Ceph version is:
> ceph --version
> ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
> 
> The OSD node is running "Linux ceph-02 3.11.0-15-generic #25~precise1-Ubuntu".
> 
> Any hints?
> 
> The ugly way is to simply remove the unused OSD - but I want to know how
> this should normally work.
> 
> Udo

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
