On 4 May 2016 at 17:40, Greg Kurz <gk...@linux.vnet.ibm.com> wrote:

> On Mon, 2 May 2016 17:49:26 +0200
> Pradeep Kiruvale <pradeepkiruv...@gmail.com> wrote:
>
> > On 2 May 2016 at 14:57, Greg Kurz <gk...@linux.vnet.ibm.com> wrote:
> >
> > > On Thu, 28 Apr 2016 11:45:41 +0200
> > > Pradeep Kiruvale <pradeepkiruv...@gmail.com> wrote:
> > >
> > > > On 27 April 2016 at 19:12, Greg Kurz <gk...@linux.vnet.ibm.com> wrote:
> > > >
> > > > > On Wed, 27 Apr 2016 16:39:58 +0200
> > > > > Pradeep Kiruvale <pradeepkiruv...@gmail.com> wrote:
> > > > >
> > > > > > On 27 April 2016 at 10:38, Alberto Garcia <be...@igalia.com> wrote:
> > > > > >
> > > > > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> > > > > > >
> > > > > > > > Thanks for the reply. I am still in the early phase; I will let
> > > > > > > > you know if any changes are needed for the APIs.
> > > > > > > >
> > > > > > > > We might also have to implement throttle-group.c for 9p devices,
> > > > > > > > if we want to apply throttling to a group of devices.
> > > > > > >
> > > > > > > Fair enough, but again please note that:
> > > > > > >
> > > > > > > - throttle-group.c is not meant to be generic, but it's tied to
> > > > > > >   BlockDriverState / BlockBackend.
> > > > > > > - it is currently being rewritten:
> > > > > > >
> > > > > > >   https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > > > > >
> > > > > > > If you can explain your use case with a bit more detail we can
> > > > > > > try to see what can be done about it.
> > > > > > >
> > > > > > >
> > > > > > We want to use virtio-9p for block I/O instead of virtio-blk-pci.
> > > > > > But in case of
> > > > >
> > > > > 9p is mostly aimed at sharing files... why would you want to use it
> > > > > for block I/O instead of a true block device? And how would you do
> > > > > that?
> > > > >
> > > >
> > > > Yes, we want to share the files themselves, so we are using virtio-9p.
> > >
> > > You want to pass a disk image to the guest as a plain file on a 9p
> > > mount? And then, what do you do in the guest? Attach it to a loop device?
> > >
> >
> > Yes, we would like to mount a 9p share, create files inside it, and
> > read/write them. This was just an experiment; we have no concrete use case
> > yet. My task is a feasibility test: does it work or not?
> >
> >
> > >
> > > > We want to have QoS on access to these files for every VM.
> > > >
> > >
> > > You won't be able to have QoS on selected files, but it may be possible
> > > to introduce limits at the fsdev level: control all write accesses to
> > > all files and all read accesses to all files for a 9p device.
> > >
> >
> > That is right: I do not want QoS for individual files but for the whole
> > fsdev device.
> >
> >
> > > >
> > > > >
> > > > > > virtio-9p we can just use fsdev devices, so we want to apply
> > > > > > throttling (QoS) on these devices, and as of now I/O throttling is
> > > > > > only possible with the -drive option.
> > > > > >
> > > > >
> > > > > Indeed.
> > > > >
> > > > > As a workaround we are doing the throttling using cgroups. It has
> > > > > its own costs.
> > > > >
> > > > > Can you elaborate ?
> > > > >
> > > >
> > > > We saw that we need to create and configure the cgroups, and we also
> > > > observed a lot of iowait compared to implementing the throttling inside
> > > > QEMU. We observed this using virtio-blk-pci devices (cgroups vs. QEMU
> > > > throttling).
> > > >
> > >
> >
> >
> > >
> > > Just to be sure I get it right.
> > >
> > > You tried both:
> > > 1) run QEMU with -device virtio-blk-pci and -drive throttling
> > > 2) run QEMU with -device virtio-blk-pci in its own cgroup
> > >
> > > And 1) has better performance and is easier to use than 2) ?
> > >
> > > And what do you expect with 9p compared to 1) ?
> > >
> > >
> > That was just to understand the CPU cost of I/O throttling inside QEMU
> > vs. using cgroups.
> >
> > We did the benchmarking to reproduce the numbers and understand the costs
> > mentioned in
> >
> > http://www.linux-kvm.org/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
> >
> > Thanks,
> > Pradeep
> >
>
> Ok. So you did compare current QEMU block I/O throttling with cgroups? And
> you observed numbers similar to the link above?
>

Yes, I did. I ran the dd command in the guest to generate I/O. Recent QEMU is
on par with cgroups in terms of CPU utilization.

>
> And now you would like to run the same test on a file in a 9p mount with
> experimental 9p QoS?
>
Yes, you are right.


> Maybe possible to reuse the throttle.h API and hack v9fs_write() and
> v9fs_read() in 9p.c then.
>
>
OK, I am looking into it. Are there any sample test cases or examples of how
to apply the throttling APIs to a device?


Regards,
Pradeep



> Cheers.
>
> --
> Greg
>
> >
> > > >
> > > > Thanks,
> > > > Pradeep
> > >
> > >
>
>
>
