On Thu, Oct 11, 2012 at 07:38:54AM -0600, Alex Williamson wrote:
> On Thu, 2012-10-11 at 12:37 +0200, Michael S. Tsirkin wrote:
> > On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> > > On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > > > On 2012-10-08 23:11, Alex Williamson wrote:
> > > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > >>>>> Michael, Jan,
> > > > >>>>>
> > > > >>>>> Any comments on these?  I'd like to make the PCI changes
> > > > >>>>> before I update vfio-pci to make use of the new resampling
> > > > >>>>> irqfd in kvm.  We don't have anyone officially listed as
> > > > >>>>> maintainer of pci-assign since it's been moved to qemu.  I
> > > > >>>>> could include the pci-assign patches in my tree if you
> > > > >>>>> prefer.  Thanks,
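> > > > >>>>>
> > > > >>>>> For context, a minimal sketch of hooking up the resampling
> > > > >>>>> irqfd from <linux/kvm.h> for level-triggered INTx (the
> > > > >>>>> eventfd names here are placeholders, error handling trimmed):
> > > > >>>>>
> > > > >>>>>   struct kvm_irqfd irqfd = {
> > > > >>>>>       .fd         = trigger_fd, /* eventfd that injects the IRQ */
> > > > >>>>>       .gsi        = intx_gsi,   /* guest GSI from INTx routing */
> > > > >>>>>       .flags      = KVM_IRQFD_FLAG_RESAMPLE,
> > > > >>>>>       .resamplefd = unmask_fd,  /* signaled on guest EOI */
> > > > >>>>>   };
> > > > >>>>>
> > > > >>>>>   if (ioctl(kvm_vm_fd, KVM_IRQFD, &irqfd) < 0) {
> > > > >>>>>       /* kernel too old; stay on the userspace INTx path */
> > > > >>>>>   }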
> > > > >>>>>
> > > > >>>>> Alex
> > > > >>>>
> > > > >>>> The patches themselves look fine, but I'd like to better
> > > > >>>> understand why we want the INTx fallback.  Isn't it easier
> > > > >>>> to add intx routing support?
> > > > >>>
> > > > >>> vfio-pci can work with or without intx routing support.  Its
> > > > >>> presence is just one requirement for enabling kvm-accelerated
> > > > >>> intx support.  Regardless of whether intx routing is easy or
> > > > >>> hard to implement in a given chipset, I currently can't probe
> > > > >>> for it and make a useful decision about whether to enable kvm
> > > > >>> support without potentially hitting an assert.  It's arguable
> > > > >>> how important intx acceleration is for specific applications,
> > > > >>> so while I'd like all chipsets to implement it, I don't know
> > > > >>> that it should be a gating factor for chipset integration.
> > > > >>> Thanks,
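> > > > >>>
> > > > >>> To illustrate the kind of check I want to make (a hypothetical
> > > > >>> sketch; today pci_device_route_intx_to_irq() asserts on chipsets
> > > > >>> that don't implement routing instead of failing gracefully):
> > > > >>>
> > > > >>>   PCIINTxRoute route;
> > > > >>>
> > > > >>>   route = pci_device_route_intx_to_irq(&vdev->pdev, vdev->intx.pin);
> > > > >>>   if (route.mode != PCI_INTX_ENABLED) {
> > > > >>>       return; /* keep the userspace INTx path */
> > > > >>>   }
> > > > >>>   /* otherwise wire route.irq up to the kvm irqfd */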
> > > > >>>
> > > > >>> Alex
> > > > >>
> > > > >> Yes, but there's nothing kvm-specific in the routing API,
> > > > >> and IIRC it actually works fine without kvm.
> > > > > 
> > > > > Correct, but intx routing isn't very useful without kvm.
> > > > 
> > > > Right now: yes. Long-term: no. The concept is also required more
> > > > generally for decoupling I/O paths from the locking of our main
> > > > thread. We need to explore the IRQ path and cache it in order to
> > > > avoid taking lots of locks on each delivery, possibly even the
> > > > BQL. But we will likely need something smarter at that point,
> > > > i.e. something PCI-independent.
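> > > > 
> > > > As a rough sketch of the caching idea in terms of today's
> > > > PCI-level API (the vfio-side names are made up for illustration):
> > > > 
> > > >   /* resolve the path once and cache the result */
> > > >   vdev->intx.route = pci_device_route_intx_to_irq(&vdev->pdev, pin);
> > > > 
> > > >   /* re-resolve only when the chipset reroutes the pin */
> > > >   pci_device_set_intx_routing_notifier(&vdev->pdev, vfio_update_irq);
> > > > 
> > > > Something PCI-independent would eventually replace both calls with
> > > > a generic, cachable IRQ-path object.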
> > > 
> > > That sounds great long term, but in the interim I think this trivial
> > > extension to the API is more than justified.  I hope that it can go in
> > > soon so we can get vfio-pci kvm intx acceleration in before freeze
> > > deadlines get much closer.  Thanks,
> > > 
> > > Alex
> > 
> > Simply reorder the patches:
> > 1. add vfio acceleration with no fallback
> > 2. add a way for intx routing to fail
> > 3. add vfio fallback if intx routing fails
> > 
> > Then we can apply 1 and argue about the need for 2/3
> > afterwards.
> 
> And patches 2-6 of this series: are they also far too controversial
> to consider applying now?

You mean 3-6, right?
I think they are fine. Sorry about not making this clear.
Would you like me to apply them right away?
I delayed that in case there's fallout from splitting patches 1-2,
but looking at it more closely they are unrelated, so this seems unlikely.
Please let me know.

-- 
MST
