> -----Original Message-----
> From: Kevin Wolf [mailto:kw...@redhat.com]
> Sent: 08 November 2018 15:21
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: 'Markus Armbruster' <arm...@redhat.com>; Anthony Perard
> <anthony.per...@citrix.com>; Tim Smith <tim.sm...@citrix.com>;
> Stefano Stabellini <sstabell...@kernel.org>; qemu-block@nongnu.org;
> qemu-de...@nongnu.org; Max Reitz <mre...@redhat.com>;
> xen-de...@lists.xenproject.org
> Subject: Re: [Qemu-devel] xen_disk qdevification
> 
> On 08.11.2018 at 15:00, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Markus Armbruster [mailto:arm...@redhat.com]
> > > Sent: 05 November 2018 15:58
> > > To: Paul Durrant <paul.durr...@citrix.com>
> > > Cc: 'Kevin Wolf' <kw...@redhat.com>; Tim Smith <tim.sm...@citrix.com>;
> > > Stefano Stabellini <sstabell...@kernel.org>; qemu-block@nongnu.org;
> > > qemu-de...@nongnu.org; Max Reitz <mre...@redhat.com>; Anthony Perard
> > > <anthony.per...@citrix.com>; xen-de...@lists.xenproject.org
> > > Subject: Re: [Qemu-devel] xen_disk qdevification
> > >
> > > Paul Durrant <paul.durr...@citrix.com> writes:
> > >
> > > >> -----Original Message-----
> > > >> From: Kevin Wolf [mailto:kw...@redhat.com]
> > > >> Sent: 02 November 2018 11:04
> > > >> To: Tim Smith <tim.sm...@citrix.com>
> > > >> Cc: xen-de...@lists.xenproject.org; qemu-de...@nongnu.org;
> > > >> qemu-bl...@nongnu.org; Anthony Perard <anthony.per...@citrix.com>;
> > > >> Paul Durrant <paul.durr...@citrix.com>; Stefano Stabellini
> > > >> <sstabell...@kernel.org>; Max Reitz <mre...@redhat.com>;
> > > >> arm...@redhat.com
> > > >> Subject: xen_disk qdevification (was: [PATCH 0/3] Performance
> > > >> improvements for xen_disk v2)
> > > >>
> > > >> On 02.11.2018 at 11:00, Tim Smith wrote:
> > > >> > A series of performance improvements for disks using the Xen PV
> > > >> > ring.
> > > >> >
> > > >> > These have had fairly extensive testing.
> > > >> >
> > > >> > The batching and latency improvements together boost the
> > > >> > throughput of small reads and writes by two to six percent
> > > >> > (measured using fio in the guest)
> > > >> >
> > > >> > Avoiding repeated calls to posix_memalign() reduced the dirty
> > > >> > heap from 25MB to 5MB in the case of a single datapath process
> > > >> > while also improving performance.
> > > >> >
> > > >> > v2 removes some checkpatch complaints and fixes the CCs
> > > >>
> > > >> Completely unrelated, but since you're the first person touching
> > > >> xen_disk in a while, you're my victim:
> > > >>
> > > >> At KVM Forum we discussed sending a patch to deprecate xen_disk
> > > >> because after all those years, it still hasn't been converted to
> > > >> qdev. Markus is currently fixing some other not yet qdevified
> > > >> block device, but after that xen_disk will be the only one left.
> > > >>
> > > >> A while ago, a downstream patch review found out that there are
> > > >> some QMP commands that would immediately crash if a xen_disk
> > > >> device were present
> > > >> because of the lacking qdevification. This is not the code quality
> > > >> standard I envision for QEMU. It's time for non-qdev devices to go.
> > > >>
> > > >> So if you guys are still interested in the device, could someone
> > > >> please finally look into converting it?
> > > >>
> > > >
> > > > I have a patch series to do exactly this. It's somewhat involved as
> > > > I need to convert the whole PV backend infrastructure. I will try to
> > > > rebase and clean up my series a.s.a.p.
> > >
> > > Awesome!  Please coordinate with Anthony Perard to avoid duplicating
> > > work if you haven't done so already.
> >
> > I've come across a bit of a problem that I'm not sure how best to deal
> > with and so am looking for some advice.
> >
> > I now have a qdevified PV disk backend but I can't bring it up because
> > it fails to acquire a write lock on the qcow2 it is pointing at. This
> > is because there is also an emulated IDE drive using the same qcow2.
> > This does not appear to be a problem for the non-qdev xen-disk,
> > presumably because it does not open the qcow2 until the emulated
> > device is unplugged, and I don't really want to introduce similar
> > hackery in my new backend (i.e. I want it to attach to its drive, and
> > hence open the qcow2, during realize).
> >
> > So, I'm not sure what to do... It is not a problem that both a PV
> > backend and an emulated device are using the same qcow2, because they
> > will never actually operate simultaneously, so is there any way I can
> > bypass the qcow2 lock check when I create the drive for my PV backend?
> > (BTW I tried re-using the drive created for the emulated device, but
> > that doesn't work because there is a check for whether the drive is
> > already attached to something.)
> >
> > Any ideas?
> 
> I think the clean solution is to keep the BlockBackend open in xen-disk
> from the beginning, but without requesting write permissions yet.
> 
> The BlockBackend is created in parse_drive(), when qdev parses the
> -device drive=... option. At this point, no permissions are requested
> yet. That is done in blkconf_apply_backend_options(), which is manually
> called from the devices; specifically from ide_dev_initfn() in IDE, and
> I assume you call the function from xen-disk as well.

Yes, I call it during realize.
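
For context, the relevant part of my realize method currently looks roughly
like this (a sketch from memory rather than the actual patch; the type and
field names are placeholders from my series, only BlockConf and
blkconf_apply_backend_options() are the real QEMU interfaces):

static void xen_block_realize(DeviceState *dev, Error **errp)
{
    XenBlockDevice *blockdev = XEN_BLOCK_DEVICE(dev); /* placeholder name */
    BlockConf *conf = &blockdev->conf;

    /*
     * The BlockBackend already exists here: parse_drive() created it when
     * the -device drive=... property was parsed. This call applies the
     * backend options and requests permissions, and unless the image was
     * opened read-only that includes BLK_PERM_WRITE - which is what
     * clashes with the emulated IDE device sitting on the same qcow2.
     */
    if (!blkconf_apply_backend_options(conf, blk_is_read_only(conf->blk),
                                       false, errp)) {
        return;
    }

    /* ... connect to the frontend, set up the ring, etc. ... */
}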

> 
> xen-disk should then call this function with readonly=true, and at the
> point of the handover (when the IDE device is already gone) it can call
> blk_set_perm() to request BLK_PERM_WRITE in addition to the permissions
> it already holds.
> 

I tried that and it works fine :-)
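
For the record, the shape of what I ended up with is roughly this (again
only a sketch; the helper name is my own, while
blkconf_apply_backend_options(), blk_get_perm() and blk_set_perm() are the
real calls):

/* In realize: apply the backend options read-only, so no write lock is
 * taken while the emulated IDE device still owns the image. */
if (!blkconf_apply_backend_options(conf, true /* readonly */, false, errp)) {
    return;
}

/* At handover, once the emulated device has been unplugged, upgrade the
 * permissions on the BlockBackend to include write access. */
static int xen_block_take_write_perm(BlockBackend *blk, Error **errp)
{
    uint64_t perm, shared_perm;

    blk_get_perm(blk, &perm, &shared_perm);
    return blk_set_perm(blk, perm | BLK_PERM_WRITE, shared_perm, errp);
}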

> 
> The other option I see would be that you simply create both devices with
> share-rw=on (which results in conf->share_rw == true and therefore
> shared BLK_PERM_WRITE in blkconf_apply_backend_options()), but that
> feels like a hack because you don't actually want to have two writers at
> the same time.
> 

Yes, that does indeed seem like more of a hack. The first option works, so
I'll go with that.
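
(For completeness, my understanding is that the share-rw route would have
looked something like the command line below, with both devices attached to
the same format node and each declaring share-rw=on; the node names and the
PV device name are made up for illustration:

-blockdev driver=file,node-name=disk-file,filename=disk.qcow2
-blockdev driver=qcow2,node-name=disk-fmt,file=disk-file
-device ide-hd,drive=disk-fmt,share-rw=on
-device xen-pv-disk,drive=disk-fmt,share-rw=on

but, as you say, that leaves two writers legal at the same time, which is
not what we want here.)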

Thanks for your help.

Cheers,

  Paul

> Kevin
