On Wed, 9 Sep 2015, Vickey Singh wrote:
> Agreed with Alphe, Ceph Hammer (0.94.2) sucks when it comes to recovery
> and rebalancing.
>
> Here is my Ceph Hammer cluster, which has been like this for more than
> 30 hours.
>
> You might be thinking about that one OSD which is down and not in. That
> is intentional; I want to remove that OSD.
>
> I want the cluster
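Since the down OSD is being removed anyway, for context here is the usual Hammer-era removal sequence as a sketch. The id osd.5 is a placeholder, and the script defaults to a dry run that only prints the commands; set DRY_RUN=0 to actually execute them.

```shell
# Hedged sketch of the standard OSD-removal sequence; osd.5 is a placeholder id.
# Defaults to dry-run (prints commands); set DRY_RUN=0 to really run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

OSD=osd.5                         # substitute the id of the down OSD
run ceph osd out "$OSD"           # drain data off it (a no-op if already out)
# stop the daemon on its host first, e.g. `service ceph stop osd.5`
run ceph osd crush remove "$OSD"  # drop it from the CRUSH map
run ceph auth del "$OSD"          # remove its cephx key
run ceph osd rm "$OSD"            # finally remove it from the OSD map
```

Note that `ceph osd out` and `ceph osd crush remove` each trigger a (different) rebalance, which is part of why removals feel slow.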
I can say exactly the same: I have been using Ceph since 0.38, and I have
never seen OSDs as laggy as with 0.94. The rebalancing/rebuild algorithm
is crap in 0.94. Seriously, I have 2 OSDs serving 2 discs of 2 TB each,
with 4 GB of RAM, and each OSD takes 1.6 GB! Seriously! That snowballs
into an avalanche.
Let me be straight and e
On Wed, Sep 2, 2015 at 9:34 PM, Bob Ababurko wrote:
> When I lose a disk OR replace an OSD in my POC Ceph cluster, it takes a
> very long time to rebalance. I should note that my cluster is slightly
> unique in that I am using CephFS (shouldn't matter?) and it currently
> contains about 310 million objects.
I found a place to paste my output of `ceph daemon osd.xx config show` for
all my OSDs:
https://www.zerobin.net/?743bbbdea41874f4#FNk5EjsfRxvkX1JuTp52fQ4CXW6VOIEB0Lj0Icnyr4Q=
If you want it in a gzip'd txt file, you can download here:
https://mega.nz/#!oY5QAByC!JEWhHRms0WwbYbwG4o4RdTUWtFwFjUDLWh
Can you post the output of
ceph daemon osd.xx config show? (probably as an attachment).
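For anyone collecting this per host: a quick way to dump the running config from every OSD is to loop over the admin sockets. This is a sketch that assumes the default socket path /var/run/ceph/ceph-osd.&lt;id&gt;.asok; adjust if your cluster name or run directory differs.

```shell
#!/bin/sh
# Hedged sketch: dump `config show` from every OSD admin socket on this host.
# Assumes the default socket naming /var/run/ceph/ceph-osd.<id>.asok.
osd_id_from_sock() {
    # "/var/run/ceph/ceph-osd.12.asok" -> "12"
    basename "$1" .asok | cut -d. -f2
}

for sock in /var/run/ceph/ceph-osd.*.asok; do
    [ -e "$sock" ] || continue        # no OSD sockets on this host
    id=$(osd_id_from_sock "$sock")
    echo "=== osd.$id ==="
    ceph daemon "osd.$id" config show
done
```

Redirect the whole loop to a file (`... done > all-osd-configs.txt`) to get one attachable text file.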
There are several things that I've seen cause this:
1) Too many PGs but too few degraded objects can make it seem "slow" (if
you just have 2 degraded objects but restarted a host with 10K PGs, it
will have to sc
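A related knob worth checking while you are in `config show`: the recovery/backfill throttles bound how fast (and how intrusively) recovery runs. An illustrative ceph.conf fragment follows; the option names are genuine Hammer-era settings, but the values here are examples, not recommendations.

```ini
[osd]
# lower = gentler on client I/O, higher = faster rebalance
osd max backfills = 1           ; concurrent backfills per OSD
osd recovery max active = 3     ; concurrent recovery ops per OSD
osd recovery op priority = 3    ; recovery priority vs. client ops (1-63)
```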
When I lose a disk OR replace an OSD in my POC Ceph cluster, it takes a
very long time to rebalance. I should note that my cluster is slightly
unique in that I am using CephFS (shouldn't matter?) and it currently
contains about 310 million objects.
The last time I replaced a disk/OSD was 2.5 days a