Hello,
I also do these tests and find the same results. IMO, on faster storage with
deep queue depth, if the device is asking for more requests but our workload
can't send enough requests, we have to idle to provide service
differentiation. We'll see a performance drop if applications can't drive
enough...
2012/12/11 Vivek Goyal:
> These results are with slice_idle=0?
Yes, slice_idle is disabled.
> What's the storage you are using? Looking at the speed of IO I would
> guess it is not one of those rotational disks.
I have done the same test on 3 different types of boxes, and all of them
show a performance drop...
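
(For reference, slice_idle and the group_idle knob discussed later in the
thread are per-device CFQ tunables under /sys/block/<dev>/queue/iosched/.
A minimal sketch of clearing them from C, assuming CFQ is the active
elevator; the device name /dev/sda is hypothetical:

#include <stdio.h>

/* Write a value to a CFQ iosched tunable in sysfs. */
static int write_tunable(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	/* Disable per-queue idling; group_idle can be cleared the same way. */
	write_tunable("/sys/block/sda/queue/iosched/slice_idle", "0");
	write_tunable("/sys/block/sda/queue/iosched/group_idle", "0");
	return 0;
}

Equivalent to echoing 0 into those files from a shell.)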
Hello, Vivek.
On Tue, Dec 11, 2012 at 11:18:20AM -0500, Vivek Goyal wrote:
> - Controlling the device queue depth should bring down throughput too, as it
> should bring down the level of parallelism at the device level. Also, asking
> the user to tune device queue depth seems like a bad interface. How would a
> user know...
On Tue, Dec 11, 2012 at 08:01:37AM -0800, Tejun Heo wrote:
[..]
> > The only way to provide effective isolation seemed to be idling, and the
> > moment we idle we kill the performance. It does not matter whether we
> > are scheduling time or iops.
>
> If the completion latency of IOs fluctuates heavily...
Hello, Vivek.
On Tue, Dec 11, 2012 at 10:37:25AM -0500, Vivek Goyal wrote:
> I have experimented with schemes like that but did not see any very
> promising results. Assume the device supports a queue depth of 128, and there
> is one dependent reader and one writer. If the reader goes away and comes
> back...
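
(To make the scenario concrete: a dependent reader keeps at most one
request in flight, so it can occupy at most one slot of the device queue.
Without idling on its behalf, its share of slots, and to first order of
throughput, is bounded by 1/queue_depth regardless of its group weight.
A back-of-envelope illustration, not CFQ code:

#include <stdio.h>

int main(void)
{
	/* Writer keeps the queue full; reader holds at most one slot. */
	for (int qd = 32; qd <= 128; qd *= 2)
		printf("queue depth %3d: dependent reader's slot share <= %.2f%%\n",
		       qd, 100.0 / qd);
	return 0;
}

At the thread's queue depth of 128 that bound is under 1%.)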
On Tue, Dec 11, 2012 at 07:14:12AM -0800, Tejun Heo wrote:
> Hello, Vivek.
>
> On Tue, Dec 11, 2012 at 10:02:34AM -0500, Vivek Goyal wrote:
> > cfq_group_served() {
> >         if (iops_mode(cfqd))
> >                 charge = cfqq->slice_dispatch;
> >         cfqg->vdisktime += cfq_scale_slice(charge, cfqg);
Hello, Vivek.
On Tue, Dec 11, 2012 at 10:02:34AM -0500, Vivek Goyal wrote:
> cfq_group_served() {
>         if (iops_mode(cfqd))
>                 charge = cfqq->slice_dispatch;
>         cfqg->vdisktime += cfq_scale_slice(charge, cfqg);
> }
>
> Isn't it effectively IOPS scheduling? One should get...
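
(The charging arithmetic quoted above can be sketched in user space. The
constants below are paraphrased from cfq-iosched.c of that era,
CFQ_SERVICE_SHIFT == 12 and a default blkio weight of 500, and may differ
by kernel version. In iops_mode() the charge is the number of dispatched
requests, so a group's vdisktime advances per IO, scaled inversely by its
weight:

#include <stdio.h>

#define SERVICE_SHIFT  12
#define DEFAULT_WEIGHT 500ULL

/* Mirrors the idea of cfq_scale_slice(): heavier groups accrue
 * vdisktime more slowly, so they are selected more often. */
static unsigned long long scale_slice(unsigned long long charge,
				      unsigned int weight)
{
	return ((charge << SERVICE_SHIFT) * DEFAULT_WEIGHT) / weight;
}

int main(void)
{
	/* Two groups each dispatching 100 IOs in iops mode. */
	printf("weight  500: vdisktime += %llu\n", scale_slice(100, 500));
	printf("weight 1000: vdisktime += %llu\n", scale_slice(100, 1000));
	return 0;
}

The weight-1000 group's clock runs half as fast, i.e. IOPS-proportional
service, which is Vivek's point.)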
On Tue, Dec 11, 2012 at 06:47:18AM -0800, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 11, 2012 at 09:43:36AM -0500, Vivek Goyal wrote:
> > I think if one sets slice_idle=0 and group_idle=0 in CFQ, for all practical
> > purposes it should become IOPS-based group scheduling.
>
> No, I don't think it is. You can't achieve isolation without idling
> between group switches...
Hello,
On Tue, Dec 11, 2012 at 09:43:36AM -0500, Vivek Goyal wrote:
> I think if one sets slice_idle=0 and group_idle=0 in CFQ, for all practical
> purposes it should become IOPS-based group scheduling.
No, I don't think it is. You can't achieve isolation without idling
between group switches...
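
(The trade-off both sides describe shows up even in a toy model: a
single-slot device, a writer that is always ready, and a dependent reader
that needs a few ticks of think time after each completion. Equal weights,
unit charges; illustrative only, not CFQ's actual logic:

#include <stdio.h>

int main(void)
{
	const int TICKS = 100000, THINK = 4;

	for (int idling = 0; idling <= 1; idling++) {
		long served[2] = { 0, 0 };	/* [0] = reader, [1] = writer */
		long vdisk[2]  = { 0, 0 };
		int reader_ready_at = 0;

		for (int t = 0; t < TICKS; t++) {
			int pick;
			if (t >= reader_ready_at)
				pick = (vdisk[0] <= vdisk[1]) ? 0 : 1;
			else if (idling && vdisk[0] <= vdisk[1])
				continue;	/* hold the device for the reader */
			else
				pick = 1;
			served[pick]++;
			vdisk[pick]++;		/* equal weights: unit charge per IO */
			if (pick == 0)
				reader_ready_at = t + 1 + THINK;
		}
		printf("idling=%d: reader %ld, writer %ld, utilization %.0f%%\n",
		       idling, served[0], served[1],
		       100.0 * (served[0] + served[1]) / TICKS);
	}
	return 0;
}

With idling the split converges to 50/50 but the device sits at roughly
40% utilization; without it, utilization is 100% and the reader's share
collapses to its natural issue rate. Isolation costs throughput either
way.)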
On Tue, Dec 11, 2012 at 06:27:42AM -0800, Tejun Heo wrote:
> On Tue, Dec 11, 2012 at 09:25:18AM -0500, Vivek Goyal wrote:
> > In general, do not use blkcg on faster storage. In its current form it
> > is at best suitable for a single rotational SATA/SAS disk. I have not
> > been able to figure out how to provide fairness without group idling.
On Tue, Dec 11, 2012 at 09:25:18AM -0500, Vivek Goyal wrote:
> In general, do not use blkcg on faster storage. In its current form it
> is at best suitable for a single rotational SATA/SAS disk. I have not
> been able to figure out how to provide fairness without group idling.
I think cfq is just the wrong...
On Mon, Dec 10, 2012 at 08:28:54PM +0800, Zhao Shuai wrote:
> Hi,
>
> I plan to use blkcg (proportional BW) in my system, but I encounter a
> great performance drop after enabling blkcg.
> The testing tool is fio (version 2.0.7) and both the BW and IOPS fields
> are recorded. Two instances of the fio program...
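
(The original report doesn't include the job file; a hypothetical fio job
of roughly this shape would match the description. Every parameter below
is invented for illustration. fio's cgroup/cgroup_weight options place
jobs into blkio cgroups with the given weights; running two separate fio
invocations inside pre-created cgroups works the same way:

; hypothetical job file -- two concurrent jobs in different blkio cgroups
[global]
ioengine=libaio
direct=1
rw=read
bs=4k
iodepth=32
runtime=60
time_based

[grp1]
filename=/dev/sdb
cgroup=test1
cgroup_weight=500

[grp2]
filename=/dev/sdb
cgroup=test2
cgroup_weight=1000

Comparing per-job BW and IOPS with the deadline or noop scheduler, versus
CFQ with blkcg enabled, reproduces the drop being discussed.)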