On Wed, Mar 02, 2016 at 10:19:38PM +0200, Nikolay Borisov wrote:
> On Wednesday, March 2, 2016, Vivek Goyal <vgo...@redhat.com> wrote:
> 
> > On Wed, Mar 02, 2016 at 09:59:13PM +0200, Nikolay Borisov wrote:
> > > On Wednesday, March 2, 2016, Vivek Goyal <vgo...@redhat.com> wrote:
> > >
> > > > On Wed, Mar 02, 2016 at 08:03:10PM +0200, Nikolay Borisov wrote:
> > > > > Thanks for the patch. I will likely have time to test this
> > > > > sometime next week.
> > > > > But just to be sure - the expected behavior would be that processes
> > > > > writing to dm-based devices would experience the fair-share
> > > > > scheduling of CFQ (provided that the physical devices that back those
> > > > > DM devices use CFQ), correct?
> > > >
> > > > Nikolay,
> > > >
> > > > I am not sure how well it will work with CFQ on the underlying
> > > > device. It will get cgroup information right for buffered writes.
> > > > But cgroup information
> > >
> > > Right, what's your definition of buffered writes?
> >
> > Writes which go through the page cache.
> >
> > > My mental model is that when a process submits a write request to
> > > a dm device, the bio is going to be put on a device workqueue,
> > > which would then be serviced by a background worker thread, with
> > > the submitter notified later. Do you refer to this whole gamut of
> > > operations as buffered writes?
> >
> > No, once the bio is submitted to the dm device, it could be either a
> > buffered write or a direct write.
> >
> > >
> > > > for reads and direct writes will come from the submitter's
> > > > context, and if the dm layer gets in between, then the submitter
> > > > will often be a worker thread and the IO will be attributed to
> > > > that worker's cgroup (the root cgroup).
> > >
> > > Be that as it may, provided that the worker thread is in the
> > > 'correct' cgroup, then the appropriate bandwidth policies should
> > > apply, no?
> >
> > The worker thread will most likely be in the root cgroup. So if a
> > worker thread is submitting the bio, the IO will be attributed to the
> > root cgroup.
> >
> > We had a similar issue with IO priority, and it did not work reliably
> > with CFQ on the underlying device when dm devices were sitting on top.
> >
> > If we really want to give it a try, I guess we will have to put the
> > submitter's cgroup info in the bio early, at the time of bio creation,
> > for all kinds of IO. Not sure if it is worth the effort.
> >
> > For the case of IO throttling, I think you should put the throttling
> > rules on the dm device itself. That means that as long as the
> > filesystem supports cgroups, you should be getting the right cgroup
> > information for all kinds of IO, and throttling should work just fine.
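> >
> > As a rough illustration only (the cgroup name "grp1" and the 253:0
> > device numbers are made up, and this assumes the v1 blkio controller
> > is mounted at /sys/fs/cgroup/blkio), such a rule could be installed
> > from C like this:
> >
> >     #include <stdio.h>
> >     #include <string.h>
> >     #include <fcntl.h>
> >     #include <unistd.h>
> >
> >     int main(void)
> >     {
> >             /* Limit writes through dm device 253:0 to 10 MB/s for
> >              * every task in the (pre-existing) cgroup "grp1". */
> >             const char *path = "/sys/fs/cgroup/blkio/grp1/"
> >                                "blkio.throttle.write_bps_device";
> >             const char *rule = "253:0 10485760\n";
> >             int fd = open(path, O_WRONLY);
> >
> >             if (fd < 0) {
> >                     perror("open");
> >                     return 1;
> >             }
> >             if (write(fd, rule, strlen(rule)) < 0)
> >                     perror("write");
> >             close(fd);
> >             return 0;
> >     }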
> 
> 
> Throttling does work even now, but the use case I had in mind was
> proportional distribution of IO. Imagine 50 or so dm devices hosting
> IO-intensive workloads. In this situation, I'd be interested in each of
> them getting a proportional share of IO based on the weights set in the
> blkcg controller for each respective cgroup for every workload.
> 

I see what you are trying to do: carry the cgroup information from the
top to the bottom of the IO stack for all kinds of IO.

I guess we also need to call bio_associate_current() when dm accepts a
bio from the submitter.
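
Roughly like this; an untested sketch where the helper name and call
site are made up for illustration, and bio_associate_current(), which
records current's io_context and blkcg in the bio (under
CONFIG_BLK_CGROUP), is the only real interface involved:

    #include <linux/bio.h>

    /*
     * Hypothetical helper, called at the point where dm first accepts
     * a bio from the submitter, i.e. before the bio is queued for a
     * kworker, so that the cgroup attribution survives the hand-off.
     */
    static void dm_tag_bio_with_submitter(struct bio *bio)
    {
            /*
             * bio_associate_current() returns -EBUSY if the bio
             * already carries an association; that is harmless for
             * our purposes, so the return value is ignored here.
             */
            bio_associate_current(bio);
    }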

Thanks
Vivek
