Re: [ceph-users] Power failure recovery woes (fwd)

2015-02-20 Thread Gregory Farnum
Forwarded message from Jeff - > Date: Tue, 17 Feb 2015 09:16:33 -0500 > From: Jeff > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Power failure recovery woes > Some additional information/questions: > Here is the output of "ceph osd tree" ...

Re: [ceph-users] Power failure recovery woes (fwd)

2015-02-20 Thread Jeff
...@lists.ceph.com Subject: Re: [ceph-users] Power failure recovery woes Some additional information/questions: Here is the output of "ceph osd tree". Some of the "down" OSDs actually have a running ceph-osd process, yet the cluster still reports them as "down". For example osd.1: root 30158 8.6 12.7 1542...
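
For an OSD whose process is alive but which the monitors still report as down, a minimal check sequence along these lines (stock ceph CLI plus the sysvinit service layout typical of 0.87 clusters; paths and init commands vary by distro, and osd.1 is just the example from the message above) would be:

  ps aux | grep ceph-osd                     # confirm the daemon processes really exist
  ceph osd tree | grep -w osd.1              # the monitors' view: up/down and in/out
  tail -n 100 /var/log/ceph/ceph-osd.1.log   # heartbeat/peering errors usually show up here
  service ceph restart osd.1                 # restart the wedged daemon and re-check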

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Michal Kozanecki
...y? Michal Kozanecki -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jeff Sent: February-17-15 9:17 AM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Power failure recovery woes Some additional information/questions: Here is the output of "ceph osd tree" ...

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Michal Kozanecki
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jeff Sent: February-17-15 9:17 AM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Power failure recovery woes Some additional information/questions: Here is the output of "ceph osd tree" Some of the ...

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
...        osd.9    up    1
10   0.91  osd.10   down  0
-6   1.82  host ceph5
11   0.91  osd.11   up    1
12   0.91  osd.12   up    1
On 2/17/2015 8:28 AM, Jeff wrote: ---- Original Message ---- Subject: R...
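
The tree above shows osd.10 both down and weighted out (reweight 0). A hedged sketch of bringing such an OSD back once its daemon can start again (stock ceph CLI; osd.10 is simply the example from the listing above):

  service ceph start osd.10   # start the daemon on its host
  ceph osd in osd.10          # mark it "in" again so PGs can map to it
  ceph -w                     # watch peering/recovery progress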

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
Udo, Yes, the osd is mounted:
/dev/sda4  963605972  260295676  703310296  28%  /var/lib/ceph/osd/ceph-2
Thanks, Jeff
---- Original Message ---- Subject: Re: [ceph-users] Power failure recovery woes Date: 2015-02-17 04:23 From: Udo Lembke To: Jeff , ceph-users...
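
With the data directory mounted but the daemon still down, a quick sanity check of the store and its startup log (default FileStore layout for a 0.87 cluster; paths are the stock ones and may differ on a customized install) might look like:

  cat /var/lib/ceph/osd/ceph-2/whoami           # should print 2
  ls /var/lib/ceph/osd/ceph-2/current | head    # PG directories should be present
  tail -n 50 /var/log/ceph/ceph-osd.2.log       # look for mount/journal errors at startup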

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Udo Lembke
Hi Jeff, is the osd /var/lib/ceph/osd/ceph-2 mounted? If not, does it help if you mount the osd and then start it with "service ceph start osd.2"? Udo. On 17.02.2015 09:54, Jeff wrote: > Hi, > We had a nasty power failure yesterday and even with UPSes our small (5 node, 12 OSD) cluster is having ...
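
Udo's suggestion, spelled out as a sketch (the /dev/sda4 device comes from Jeff's later reply; substitute whatever partition actually backs the OSD):

  mount /dev/sda4 /var/lib/ceph/osd/ceph-2
  service ceph start osd.2
  ceph osd tree    # osd.2 should flip to "up" once it has peered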

[ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
Hi, We had a nasty power failure yesterday and even with UPSes our small (5 node, 12 OSD) cluster is having problems recovering. We are running ceph 0.87. Three of our OSDs are down consistently (others stop and are restartable, but our cluster is so slow that almost everything we do times out).
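
A first-pass triage for this kind of post-power-failure state usually starts with the standard status commands (nothing below is specific to Jeff's cluster; setting noout is optional and only makes sense while daemons are being restarted):

  ceph -s              # overall health, down OSD count, PG states
  ceph health detail   # which PGs and OSDs are blocked or degraded
  ceph osd tree        # which hosts/OSDs the monitors consider down
  ceph osd set noout   # optionally pause rebalancing while OSDs are restarted
  ceph osd unset noout # ...and re-enable it once the OSDs are back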