Kevin Weiler wrote:
Thanks Kyle,
What's the unit for osd recovery max chunk?
Have a look at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/, where
all of the possible OSD config options are described; in particular, see the
backfilling and recovery sections.
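For what it's worth, osd recovery max chunk is expressed in bytes, so 8388608
is 8 MiB. A minimal ceph.conf sketch (the values shown are illustrative, not
recommendations):

    [osd]
    # maximum size of a single recovery chunk pushed between OSDs, in bytes (8 MiB)
    osd recovery max chunk = 8388608
    # priority of recovery ops relative to client ops (lower = less client impact)
    osd recovery op priority = 2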
Subject: Re: [ceph-users] ceph recovery killing vms
Thanks guys,
after testing it on a dev server, I have implemented the new config in the prod
system.
Next I will upgrade the hard drives. :)
Thanks again, all.
On Tue, Oct 29, 2013 at 11:32 PM, Kyle Bader kyle.ba...@gmail.com wrote:
Recovering from a degraded state by copying existing replicas to other OSDs …
Hi,
maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
It could be that you are suffering from the same problem.
Best regards,
Kurt
Rzk wrote:
Hi all,
I have the same problem, just curious:
could it be caused by poor HDD performance? …
Recovering from a degraded state by copying existing replicas to other OSDs
is going to cause reads on the existing replicas and writes to the new
locations. If you have slow media, this is going to be felt more acutely.
Tuning the backfill options I posted is one way to lessen the impact; another …
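Beyond those two options, another common knob for this (my addition, not part
of Kyle's original message; verify the option name against your Ceph release)
is to limit how many backfill operations each OSD will service at once:

    [osd]
    # allow only one concurrent backfill per OSD instead of the default
    osd max backfills = 1

Fewer concurrent backfills means recovery takes longer overall, but it steals
less throughput from client I/O while it runs.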
Hi all,
We have a ceph cluster that is being used as a backing store for several VMs
(Windows and Linux). We notice that when we reboot a node, the cluster enters a
degraded state (which is expected), but when it begins to recover, it starts
backfilling and kills the performance of our VMs.
You can change some OSD tunables to lower the priority of backfills:
osd recovery max chunk: 8388608
osd recovery op priority: 2
In general, a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recovery speed against the impact on client I/O.
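As a sketch, these can also be injected at runtime without restarting the
OSDs (assuming an admin node with the client.admin keyring; the exact
injectargs syntax can vary between releases):

    ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 2'

Changes made this way do not persist across OSD restarts, so mirror them in
ceph.conf once you are happy with the behavior.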
Hi all,
I have the same problem, just curious:
could it be caused by poor HDD performance?
Maybe the read/write speed doesn't match the network speed?
Currently I'm using desktop HDDs in my cluster.
Regards,
Rzk
On Tue, Oct 29, 2013 at 6:22 AM, Kyle Bader kyle.ba...@gmail.com wrote:
You can change some OSD tunables to lower the priority of backfills …
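Regarding Rzk's question about disk vs. network speed, one rough way to
sanity-check where the bottleneck is (a sketch; the OSD data path and the
peer host name are placeholders for your own setup):

    # sequential write throughput of an OSD data disk, bypassing the page cache
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1024 oflag=direct

    # raw network throughput between two cluster nodes
    # (run 'iperf -s' on the other node first)
    iperf -c <other-node>

If the dd number is far below what the network can carry, slow media will
dominate how painful recovery feels.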