seeing is definitely at the far end of what one would expect with
Ceph.
Christian
> Thanks for your help
>
> Andrei
>
> - Original Message -
> > From: "Shinobu Kinjo"
> > To: "Andrei Mikhailovsky"
> > Cc: "Christian Balzer" , "ceph-users"
>
> Sent: Friday, 8 April, 2016 01:35:18
> Subject: Re: [ceph-users] rebalance near full osd
> There was a discussion before regarding the situation you are facing
> now. [1]
> Would you have a look and see whether it's helpful for you.
>
> [1]
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007622.html
>
> Cheers,
> Shinobu
Thanks for pointing it out.
Cheers
Andrei
- Original Message -
> From: "Christian Balzer"
> To: "ceph-users"
> Cc: "Andrei Mikhailovsky"
> Sent: Wednesday, 6 April, 2016 04:36:30
> Subject: Re: [ceph-users] rebalance near full osd
>
Hello,
On Wed, 6 Apr 2016 04:18:40 +0100 (BST) Andrei Mikhailovsky wrote:
> Hi
>
> I've just had a warning (from ceph -s) that one of the osds is near
> full. Having investigated the warning, I've located that osd.6 is 86%
> full. The data distribution is nowhere near equal on my osds, as you
> can see from the df command output below:
/dev/sdj1 2.8T 2.4T 413G 86% /v
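One common response to a single over-full OSD is to lower its override weight (`ceph osd reweight osd.6 <value>`) so CRUSH maps fewer PGs to it. As a rough back-of-the-envelope sketch (not an exact model of CRUSH placement), the data on an OSD scales approximately linearly with its reweight value, so a target weight can be estimated like this; the function name and the linearity assumption are mine, not from the thread:

```python
# Rough sketch: estimate the reweight value that would bring an
# over-full OSD back under a target utilization. Assumes data mapped
# to an OSD scales roughly linearly with its reweight, which is only
# an approximation of real CRUSH behaviour.

def suggested_reweight(current_util, target_util, current_reweight=1.0):
    """Return a new override weight aiming for target_util."""
    if current_util <= target_util:
        # Already under target; leave the weight alone.
        return current_reweight
    return round(current_reweight * target_util / current_util, 2)

# osd.6 from the df output above: 2.4T used of 2.8T, i.e. ~86% full.
# Aim for 80%, safely below the default 85% near-full ratio.
print(suggested_reweight(0.86, 0.80))  # → 0.93
```

In practice `ceph osd reweight-by-utilization` automates this, and the resulting weight would be applied with `ceph osd reweight osd.6 0.93`, then verified with `ceph osd df` once rebalancing settles.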