Mark Nelson wrote:
> We ran tests a while back looking at different IO elevators but they are
> quite old now:
>
> http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
It doesn't seem that worthwhile to switch from deadline to cfq on HDDs,
but in that case I can't use the osd_disk_thread_ioprio_* settings.
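In case it helps, a rough sketch of how to check and switch the elevator
on one OSD data disk (here /dev/sdb is just a placeholder for whatever
device backs the OSD, and the echo only lasts until the next reboot):

    # Show the available schedulers for this disk; the active one is in brackets.
    cat /sys/block/sdb/queue/scheduler
    # Switch to cfq at runtime (non-persistent).
    echo cfq > /sys/block/sdb/queue/scheduler

To make it stick across reboots you can set elevator=cfq on the kernel
command line or use a udev rule.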
Hi Guys,
We ran tests a while back looking at different IO elevators but they are
quite old now:
http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
On 04/05/2015 08:36 PM, Francois Lafont wrote:
> On 04/06/2015 02:54, Lionel Bouton wrote:
>> I have never tested these parameters (osd_disk_thread_ioprio_priority and
>> osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
>> the disks is cfq?
On 04/06/2015 02:54, Lionel Bouton wrote:
>> I have never tested these parameters (osd_disk_thread_ioprio_priority and
>> osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
>> the disks is cfq?
>
> Yes I did.
Ah ok. It was just in case. :)
>> Because, if I understand well, these parameters only apply when the
>> scheduler is cfq.
Hi,
On 04/06/15 02:26, Francois Lafont wrote:
> Hi,
>
> Lionel Bouton wrote:
>
>> Sorry this wasn't clear: I tried the ioprio settings before disabling
>> the deep scrubs and it didn't seem to make a difference when deep scrubs
>> occurred.
> I have never tested these parameters (osd_disk_thread_ioprio_priority and
> osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
> the disks is cfq?
Hi,
Lionel Bouton wrote:
> Sorry this wasn't clear: I tried the ioprio settings before disabling
> the deep scrubs and it didn't seem to make a difference when deep scrubs
> occurred.
I have never tested these parameters (osd_disk_thread_ioprio_priority and
osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
the disks is cfq?
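For reference, the settings being discussed look roughly like this
(illustrative values, not the ones from any cluster in this thread; as
discussed above they are only honoured when the disk's scheduler is cfq):

    # Push the OSD disk thread (which does deep scrubbing, among other
    # things) into the idle I/O class so client I/O wins under contention.
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
    # The same keys can be made permanent in ceph.conf under [osd]:
    #   osd disk thread ioprio class = idle
    #   osd disk thread ioprio priority = 7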
On 04/02/15 21:02, Stillwell, Bryan wrote:
>
> I'm pretty sure setting 'nodeep-scrub' doesn't cancel any current
> deep-scrubs that are happening,
Indeed it doesn't.
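For completeness, setting and clearing the flag looks like this (a sketch;
deep scrubs already running still finish on their own):

    # Stop new deep scrubs from being scheduled cluster-wide.
    ceph osd set nodeep-scrub
    # Re-enable them later; 'ceph status' shows the flag while it is set.
    ceph osd unset nodeep-scrub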
> but something like this would help prevent
> the problem from getting worse.
If the cause of the recoveries/backfills is an OSD
On 04/02/15 21:02, Stillwell, Bryan wrote:
>> With these settings and no deep-scrubs the load increased a bit in the
>> VMs doing non-negligible I/Os, but this was manageable. Even the disk
>> thread ioprio settings (which is what you want to get the ionice
>> behaviour for deep scrubs) didn't seem to make a difference.
> Recovery creates I/O performance drops in our VMs too, but it's manageable.
> What really hurts us are deep scrubs.
> Our current situation is Firefly 0.80.9 with a total of 24 identical OSDs
> evenly distributed on 4 servers with the following relevant configuration:
>
> osd recovery max active
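For anyone wanting to try the same kind of throttling, an illustrative
example (the values are examples only, not the ones actually used on the
cluster above):

    # Reduce recovery/backfill pressure at runtime; the same keys can go
    # in ceph.conf under [osd] to survive restarts.
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'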
Hi,
On 04/02/15 19:31, Stillwell, Bryan wrote:
> All,
>
> Whenever we're doing some kind of recovery operation on our ceph
> clusters (cluster expansion or dealing with a drive failure), there
> seems to be a fairly noticeable performance drop while it does the
> backfills (last time I measured it
All,
Whenever we're doing some kind of recovery operation on our ceph
clusters (cluster expansion or dealing with a drive failure), there
seems to be a fairly noticeable performance drop while it does the
backfills (last time I measured it the performance during recovery was
something like 20% of a