On Fri, May 03, 2013 at 12:14:18PM -0700, Tejun Heo wrote:
> On Fri, May 03, 2013 at 03:08:23PM -0400, Vivek Goyal wrote:
> >         T1      T2      T3      T4      T5      T6      T7      T8
> > parent:                 b1      b2      b3              b4      b5
> > child:          b1      b2      b3              b4      b5
> > 
> > 
> > So continuity breaks down because the application is waiting for the
> > previous IO to finish. This forces expiry of the existing time slices
> > and the start of new ones in both child and parent, and the penalty
> > keeps increasing.
> 
> It's a problem even in flat mode as the "child" above can easily be
> just a process which is throttling itself and it won't be able to get
> the configured bandwidth due to the scheduling bubbles introduced
> whenever a new slice is started.  Shouldn't be too difficult to get rid
> of, right?

The key thing here is when to start a new slice. Generally, when an IO
has been dispatched from a group, we do not expire the slice
immediately; we give the group a grace period of throtl_slice (100ms).
If the next IO does not arrive within that duration, we start a fresh
slice upon its arrival. A toy model of this logic is sketched below.
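To illustrate, here is a toy userspace model of that slice logic. This
is my own sketch, not the actual blk-throttle code; slice_used() and
start_new_slice() only loosely mirror the real helpers:

#include <stdbool.h>
#include <stdio.h>

#define THROTL_SLICE_MS	100	/* grace period, like throtl_slice */

struct group {
	long slice_start;	/* ms; -1 means no slice yet */
	long slice_end;
};

/* true if no IO arrived within the grace period, i.e. slice expired */
static bool slice_used(const struct group *g, long now)
{
	return g->slice_start < 0 || now > g->slice_end;
}

static void start_new_slice(struct group *g, long now)
{
	g->slice_start = now;
	g->slice_end = now + THROTL_SLICE_MS;
	printf("t=%4ldms: fresh slice [%ld, %ld]\n",
	       now, g->slice_start, g->slice_end);
}

/* called on every bio arrival */
static void on_bio(struct group *g, long now)
{
	if (slice_used(g, now)) {
		/* gap exceeded the grace period: the scheduling bubble */
		start_new_slice(g, now);
	} else {
		/* IO arrived in time, keep extending the current slice */
		g->slice_end = now + THROTL_SLICE_MS;
		printf("t=%4ldms: slice extended to %ldms\n",
		       now, g->slice_end);
	}
}

int main(void)
{
	struct group g = { -1, -1 };

	/* bios 150ms apart: every gap exceeds the 100ms grace
	 * period, so each arrival starts a fresh slice */
	on_bio(&g, 0);
	on_bio(&g, 150);
	on_bio(&g, 300);
	return 0;
}

With the 150ms gaps, every bio lands outside the previous slice's
grace window, so each one starts a fresh slice -- that is the bubble
the diagram above shows compounding across child and parent.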

I think a similar problem should occur with two stacked devices that
are both throttling, if the delay between two IOs is large enough to
force expiry of the slice on each device.

At least for the hierarchy case, we should be able to start a fresh
time slice when the child transfers a bio to the parent (roughly as
sketched below). I will write a patch and do some experiments.
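In terms of the toy model above, the fix would look roughly like this
(hypothetical helper; the real patch will be against blk-throttle.c):

/* child has finished throttling this bio and hands it to the parent */
static void transfer_bio_to_parent(struct group *parent, long now)
{
	/*
	 * The parent's slice may have expired while the bio was held
	 * back in the child.  Start a fresh slice at transfer time so
	 * that dead time is not charged against the parent.
	 */
	if (slice_used(parent, now))
		start_new_slice(parent, now);
}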

Thanks
Vivek