On Mon, 12 Jan 2015, Christopher Kunz wrote:
> Hi,
> 
> [redirecting back to list]
> > Oh, it could be that... can you include the output from 'ceph osd tree'?  
> > That's a more concise view that shows up/down, weight, and in/out.
> > 
> > Thanks!
> > sage
> > 
> 
> root@cepharm17:~# ceph osd tree
> # id  weight  type name       up/down reweight
> -1    0.52    root default
> -21   0.16            chassis board0
> -2    0.032                   host cepharm11
> 0     0.032                           osd.0   up      1       
> -3    0.032                   host cepharm12
> 1     0.032                           osd.1   up      1       
> -4    0.032                   host cepharm13
> 2     0.032                           osd.2   up      1       
> -5    0.032                   host cepharm14
> 3     0.032                           osd.3   up      1       
> -6    0.032                   host cepharm16
> 4     0.032                           osd.4   up      1       
> -22   0.18            chassis board1
> -7    0.03                    host cepharm18
> 5     0.03                            osd.5   up      1       
> -8    0.03                    host cepharm19
> 6     0.03                            osd.6   up      1       
> -9    0.03                    host cepharm20
> 7     0.03                            osd.7   up      1       
> -10   0.03                    host cepharm21
> 8     0.03                            osd.8   up      1       
> -11   0.03                    host cepharm22
> 9     0.03                            osd.9   up      1       
> -12   0.03                    host cepharm23
> 10    0.03                            osd.10  up      1       
> -23   0.18            chassis board2
> -13   0.03                    host cepharm25
> 11    0.03                            osd.11  up      1       
> -14   0.03                    host cepharm26
> 12    0.03                            osd.12  up      1       
> -15   0.03                    host cepharm27
> 13    0.03                            osd.13  up      1       
> -16   0.03                    host cepharm28
> 14    0.03                            osd.14  up      1       
> -17   0.03                    host cepharm29
> 15    0.03                            osd.15  up      1       
> -18   0.03                    host cepharm30
> 16    0.03                            osd.16  up      1       

Okay, it sounds like something is not quite right then.  Can you attach 
the OSDMap once it is in the not-quite-repaired state?  And/or try 
setting 'ceph osd crush tunables optimal' and see if that has any 
effect?
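
For reference, grabbing the map is something like the following (the /tmp
paths are just examples, adjust as needed):

  ceph osd getmap -o /tmp/osdmap
  osdmaptool --print /tmp/osdmap > /tmp/osdmap.txt

and the tunables change is just

  ceph osd crush tunables optimal

Keep in mind that switching tunables usually kicks off some data movement,
so expect a bit of rebalancing afterward.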

> I am working on one of these boxes:
> <http://www.ambedded.com.tw/pt_spec.php?P_ID=20141109001>
> So, each chassis is one 7-node board (with a shared 1gbe switch and
> shared electrical supply), and I figured each board is definitely a
> separate failure domain.

Cute!  That kind of looks like 3 sleds of 7 in one chassis though?  Or am 
I looking at the wrong thing?
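
If the intent is to keep replicas on separate boards, the pool's CRUSH rule
also needs to use chassis as the failure domain.  Something along these
lines should do it (the rule name is just an example, and the pool name and
rule id below are placeholders):

  ceph osd crush rule create-simple replicate-by-chassis default chassis
  ceph osd crush rule dump        # to find the new rule's id
  ceph osd pool set <pool> crush_ruleset <id>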

sage