On Thu, May 10, 2012 at 5:23 PM, Tommi Virtanen wrote:
> Good question! "ceph -s" will show you that. This is from a run where
> I ran "ceph osd out 1" on a cluster of 3 osds. See the active+clean
> counts going up and active+recovering counts going down, and the
> "degraded" percentage dropping.
On Thu, May 10, 2012 at 3:44 PM, Nick Bartos wrote:
> After I run the 'ceph osd out 123' command, is there a specific ceph
> command I can poll so I know when it's OK to kill the OSD daemon and
> begin the reformat process?
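The "watch ceph -s" advice above is easy to script. A minimal sketch of such a
polling loop (not from the original thread; it assumes "ceph health" prints
HEALTH_OK once the out-ed OSD's placement groups have finished recovering, and
the 30-second interval and osd id 123 are arbitrary placeholders):

    # Hypothetical polling loop: mark osd.123 out, then wait for the
    # cluster to finish re-replicating before touching the daemon.
    ceph osd out 123
    until ceph health | grep -q HEALTH_OK; do
        ceph -s | grep -E 'degraded|recovering'   # show recovery progress
        sleep 30
    done
    echo "recovery complete; safe to stop osd.123 and reformat"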
On Tue, May 8, 2012 at 8:39 AM, Nick Bartos wrote:
I am considering converting some OSDs to xfs (currently running btrfs)
for stability reasons. I have a couple of ideas for doing this, and
was hoping to get some comments:

Method #1:
1. Check cluster health and make sure data on a specific OSD is
replicated elsewhere.
2. Bring down the OSD
3.
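As a rough illustration of steps 1 and 2 only, combined with the "osd out"
drain discussed earlier in the thread, something like the following could be
used; the osd id, device path, and sysvinit invocation are placeholders rather
than anything taken from the original post:

    # Hypothetical walk-through of the start of Method #1 for osd.123
    ceph health                     # 1. confirm the cluster is currently healthy
    ceph osd out 123                #    have the cluster re-replicate osd.123's data elsewhere
    # ...poll "ceph -s" as shown above until recovery finishes...
    /etc/init.d/ceph stop osd.123   # 2. bring down the OSD daemon
    mkfs.xfs -f /dev/sdX            # reformat the OSD's data disk as xfs (placeholder device)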