RE: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V VMs

2015-09-15 Thread Jake Oshins
> -----Original Message-----
> From: Marc Zyngier [mailto:marc.zyng...@arm.com]
> Sent: Tuesday, September 15, 2015 2:57 AM
> To: Jake Oshins <ja...@microsoft.com>; gre...@linuxfoundation.org; KY
> Srinivasan <k...@microsoft.com>; linux-ker...@vger.kernel.org;
> de...@linuxdriverproject.org; o...@aepfle.de; a...@canonical.com;
> vkuzn...@redhat.com; linux-...@vger.kernel.org; bhelg...@google.com;
> t...@linutronix.de; Jiang Liu <jiang@linux.intel.com>
> Subject: Re: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V
> VMs
> 


> >
> > Is there a way to do that with the infrastructure that you're
> > introducing?
> 
> The ACPI/GSI stuff is a red herring, and is completely unrelated to the
> problem you're trying to solve. What I think is of interest to you is
> contained in the first three patches.
> 
> In your 4th patch, you have the following code:
> 
> + pci_domain = pci_domain_nr(bus);
> + d = irq_find_matching_host(NULL, DOMAIN_BUS_PCI_MSI, &pci_domain);
> 
> which really feels like you're trying to create a namespace that is
> parallel to the one defined by the device_node parameter. What I'm
> trying to do is to be able to replace the device_node by something more
> generic (at the moment, you can either pass a device_node or some token
> that the irqdomain subsystem generates for you - see patch #7 for an
> example).
> 
> You could pass this token to pci_msi_create_irq_domain (which obviously
> needs some repainting not to take a device_node), store it in your bus
> structure, and perform the lookup based on this value. Or store the
> actual domain there, whatever.
> 
> What I want to do is really to make this device_node pointer optional
> for systems that do not have a DT node to pass there, which is exactly
> your case (by the look of it, the bus number is your identifier of
> choice, but I suspect a pointer to an internal structure would be
> better suited).
> 
>   M.
> --

Got it.  I'll rebase on your changes and send this series again, using the 
strategy that you outline here.  I may wait a little while until your patches 
make it into linux-next.

Thanks again,
Jake Oshins


Re: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V VMs

2015-09-15 Thread Marc Zyngier
On 14/09/15 18:59, Jake Oshins wrote:
>> -----Original Message-----
>> From: Marc Zyngier [mailto:marc.zyng...@arm.com]
>> Sent: Monday, September 14, 2015 8:01 AM
>> To: Jake Oshins <ja...@microsoft.com>; gre...@linuxfoundation.org; KY
>> Srinivasan <k...@microsoft.com>; linux-ker...@vger.kernel.org;
>> de...@linuxdriverproject.org; o...@aepfle.de; a...@canonical.com;
>> vkuzn...@redhat.com; linux-...@vger.kernel.org; bhelg...@google.com;
>> t...@linutronix.de; Jiang Liu <jiang@linux.intel.com>
>> Subject: Re: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V
>> VMs
>>
>> Hi Jake,
>>
>> In the future, please CC me on anything that touches irqdomains, along
>> with Jiang Liu as we both co-maintain this piece of code.
>>
> 
> Absolutely.  Sorry for that omission.
> 
>> On 11/09/15 01:00, ja...@microsoft.com wrote:
>>> From: Jake Oshins <ja...@microsoft.com>
>>>
>>> The patch series updates the one sent about a month ago in three ways.  It
>>> integrates with other IRQ domain work done in linux-next in that time, it
>>> distributes interrupts to multiple virtual processors in the guest VM, and
>>> it incorporates feedback from Thomas Gleixner and others.
>>>
>>> These patches change the IRQ domain code so that an IRQ domain can match
>>> on both bus type and on the PCI domain.  The IRQ domain match code is
>>> modified so that IRQ domains can have a "rank," allowing for a default
>>> one which matches every x86 PC and more specific ones that replace the
>>> default.
>>
>> I'm not really fond of this approach. We already have a way to match an
>> IRQ domain, and that's the device node. It looks to me that you're going
>> through a lot of pain inventing new infrastructure to avoid divorcing
>> the two. If you could look up your PCI IRQ domain directly based on some
>> (non-DT) identifier, and then possibly fall back to the default one,
>> would that help?
>>
>> If so, here's the deal: I have been working on a patch series that
>> addresses the above for unrelated reasons (ACPI support on arm64). It
>> has been posted twice already:
>>
>> http://lists.infradead.org/pipermail/linux-arm-kernel/2015-July/358768.html
>>
>> and the latest version is there:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/maz/arm-platforms.git/log/?h=irq/gsi-irq-domain-v3
>>
>> I have the feeling that you could replace a lot of your patches with
>> this infrastructure.
>>
>> Thoughts?
>>
>>  M.
>> --
> 
> First, thank you so much for reviewing this.  I've read the patch
> series above, though I may well have misinterpreted it.  It seems to
> merge the DT and ACPI GSI infrastructure, which I think is a great
> idea.  I'm not sure, however, that it would, as it stands, provide
> what I need here.  Please do tell me if I'm wrong.
> 
> The series above allows you to supply different IRQ domains for
> separate parts of the ACPI GSI space, which is fine for IRQs which
> are actually defined by ACPI.  Message-signaled interrupts (MSI),
> however, aren't defined by ACPI.  ACPI only talks about the routing
> of interrupts with pins and traces (or ones which have equivalent
> mechanisms, like the INTx# protocol in PCI Express).
> 
> What the older DT layer code allowed was for the PCI driver to look
> up an IRQ domain by walking up the device tree looking for a node
> that claimed to be an IRQ domain.  The match() function on the IRQ
> domain allowed it to say that it supported interrupts on PCI buses.
> 
> What's not clear to me is how I would create an IRQ domain that
> matches not on ACPI GSI ranges (because ACPI doesn't talk about MSI)
> and not just on generic PCI buses.  I need to be able to ask for an
> IRQ domain "from my parent" which doesn't really exist without the OF
> device tree or "for a specific PCI bus domain."  That second one is
> what I was trying to enable.
> 
> Is there a way to do that with the infrastructure that you're
> introducing?

The ACPI/GSI stuff is a red herring, and is completely unrelated to the
problem you're trying to solve. What I think is of interest to you is
contained in the first three patches.

In your 4th patch, you have the following code:

+   pci_domain = pci_domain_nr(bus);
+   d = irq_find_matching_host(NULL, DOMAIN_BUS_PCI_MSI, &pci_domain);

which really feels like you're trying to create a namespace that is
parallel to the one defined by the device_node parameter. What I'm
trying to do is to be able to replace the device_node by something more
generic (at the moment, you can either pass a device_node or some token
that the irqdomain subsystem generates for you - see patch #7 for an
example).

You could pass this token to pci_msi_create_irq_domain (which obviously
needs some repainting not to take a device_node), store it in your bus
structure, and perform the lookup based on this value. Or store the
actual domain there, whatever.

What I want to do is really to make this device_node pointer optional
for systems that do not have a DT node to pass there, which is exactly
your case (by the look of it, the bus number is your identifier of
choice, but I suspect a pointer to an internal structure would be
better suited).

	M.
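
A minimal sketch of the token-based flow Marc describes, written with the
irq_domain_alloc_fwnode()/fwnode-handle names that this line of work
eventually settled on; the hv_pcibus_device layout, hv_msi_domain_info, and
the error handling are illustrative assumptions, not code from the series:

#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <asm/irqdomain.h>		/* x86_vector_domain */

/* Hypothetical per-bus state; only the fields used here are shown. */
struct hv_pcibus_device {
	struct fwnode_handle *fwnode;
	struct irq_domain *irq_domain;
};

static struct msi_domain_info hv_msi_domain_info;	/* chip/ops elided */

static int hv_pcie_init_msi_domain(struct hv_pcibus_device *hbus)
{
	/* Ask the irqdomain core to generate an identity token: a
	 * Hyper-V guest has no DT node to hand over here. */
	hbus->fwnode = irq_domain_alloc_fwnode(hbus);
	if (!hbus->fwnode)
		return -ENOMEM;

	/* Build the MSI domain against the token and stash it in the
	 * bus structure; later lookups use the stored pointer, so no
	 * parallel namespace is needed. */
	hbus->irq_domain = pci_msi_create_irq_domain(hbus->fwnode,
						     &hv_msi_domain_info,
						     x86_vector_domain);
	if (!hbus->irq_domain) {
		irq_domain_free_fwnode(hbus->fwnode);
		return -ENODEV;
	}
	return 0;
}
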
Re: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V VMs

2015-09-14 Thread Marc Zyngier
Hi Jake,

In the future, please CC me on anything that touches irqdomains, along
with Jiang Liu as we both co-maintain this piece of code.

On 11/09/15 01:00, ja...@microsoft.com wrote:
> From: Jake Oshins 
> 
> The patch series updates the one sent about a month ago in three ways.  It
> integrates with other IRQ domain work done in linux-next in that time, it
> distributes interrupts to multiple virtual processors in the guest VM, and
> it incorporates feedback from Thomas Gleixner and others.
> 
> These patches change the IRQ domain code so that an IRQ domain can match on
> both bus type and on the PCI domain.  The IRQ domain match code is modified
> so that IRQ domains can have a "rank," allowing for a default one which
> matches every x86 PC and more specific ones that replace the default.

I'm not really fond of this approach. We already have a way to match an
IRQ domain, and that's the device node. It looks to me that you're going
through a lot of pain inventing new infrastructure to avoid divorcing
the two. If you could look up your PCI IRQ domain directly based on some
(non-DT) identifier, and then possibly fall back to the default one,
would that help?

If so, here's the deal: I have been working on a patch series that
addresses the above for unrelated reasons (ACPI support on arm64). It
has been posted twice already:

http://lists.infradead.org/pipermail/linux-arm-kernel/2015-July/358768.html

and the latest version is there:

https://git.kernel.org/cgit/linux/kernel/git/maz/arm-platforms.git/log/?h=irq/gsi-irq-domain-v3

I have the feeling that you could replace a lot of your patches with
this infrastructure.

Thoughts?

M.
-- 
Jazz is not dead. It just smells funny...


RE: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V VMs

2015-09-14 Thread Jake Oshins
> -----Original Message-----
> From: Marc Zyngier [mailto:marc.zyng...@arm.com]
> Sent: Monday, September 14, 2015 8:01 AM
> To: Jake Oshins <ja...@microsoft.com>; gre...@linuxfoundation.org; KY
> Srinivasan <k...@microsoft.com>; linux-ker...@vger.kernel.org;
> de...@linuxdriverproject.org; o...@aepfle.de; a...@canonical.com;
> vkuzn...@redhat.com; linux-...@vger.kernel.org; bhelg...@google.com;
> t...@linutronix.de; Jiang Liu <jiang@linux.intel.com>
> Subject: Re: [PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V
> VMs
> 
> Hi Jake,
> 
> In the future, please CC me on anything that touches irqdomains, along
> with Jiang Liu as we both co-maintain this piece of code.
> 

Absolutely.  Sorry for that omission.

> On 11/09/15 01:00, ja...@microsoft.com wrote:
> > From: Jake Oshins <ja...@microsoft.com>
> >
> > The patch series updates the one sent about a month ago in three ways.  It
> > integrates with other IRQ domain work done in linux-next in that time, it
> > distributes interrupts to multiple virtual processors in the guest VM, and
> > it incorporates feedback from Thomas Gleixner and others.
> >
> > These patches change the IRQ domain code so that an IRQ domain can match
> > on both bus type and on the PCI domain.  The IRQ domain match code is
> > modified so that IRQ domains can have a "rank," allowing for a default
> > one which matches every x86 PC and more specific ones that replace the
> > default.
> 
> I'm not really fond of this approach. We already have a way to match an
> IRQ domain, and that's the device node. It looks to me that you're going
> through a lot of pain inventing new infrastructure to avoid divorcing
> the two. If you could look up your PCI IRQ domain directly based on some
> (non-DT) identifier, and then possibly fall back to the default one,
> would that help?
> 
> If so, here's the deal: I have been working on a patch series that
> addresses the above for unrelated reasons (ACPI support on arm64). It
> has been posted twice already:
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2015-July/358768.html
> 
> and the latest version is there:
> 
> https://git.kernel.org/cgit/linux/kernel/git/maz/arm-platforms.git/log/?h=irq/gsi-irq-domain-v3
> 
> I have the feeling that you could replace a lot of your patches with
> this infrastructure.
> 
> Thoughts?
> 
>   M.
> --

First, thank you so much for reviewing this.  I've read the patch series
above, though I may well have misinterpreted it.  It seems to merge the DT
and ACPI GSI infrastructure, which I think is a great idea.  I'm not sure,
however, that it would, as it stands, provide what I need here.  Please do
tell me if I'm wrong.

The series above allows you to supply different IRQ domains for separate
parts of the ACPI GSI space, which is fine for IRQs which are actually
defined by ACPI.  Message-signaled interrupts (MSI), however, aren't defined
by ACPI.  ACPI only talks about the routing of interrupts with pins and
traces (or ones which have equivalent mechanisms, like the INTx# protocol in
PCI Express).

What the older DT layer code allowed was for the PCI driver to look up an IRQ 
domain by walking up the device tree looking for a node that claimed to be an 
IRQ domain.  The match() function on the IRQ domain allowed it to say that it 
supported interrupts on PCI buses.
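
Roughly, that older flow reduces to the sketch below; irq_find_host() is the
existing helper, while the explicit parent walk, the function name, and the
omitted refcounting are simplifications for illustration:

#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/pci.h>

static struct irq_domain *pci_find_msi_domain_via_dt(struct pci_bus *bus)
{
	struct device_node *np = bus->dev.of_node;

	/* Walk up the device tree until some node's registered IRQ
	 * domain claims it through its match() callback. */
	while (np) {
		struct irq_domain *d = irq_find_host(np);

		if (d)
			return d;
		np = of_get_parent(np);	/* of_node_put() elided */
	}
	return NULL;
}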

What's not clear to me is how I would create an IRQ domain that matches not on 
ACPI GSI ranges (because ACPI doesn't talk about MSI) and not just on generic 
PCI buses.  I need to be able to ask for an IRQ domain "from my parent" which 
doesn't really exist without the OF device tree or "for a specific PCI bus 
domain."  That second one is what I was trying to enable.

Is there a way to do that with the infrastructure that you're introducing?

Thanks again,
Jake Oshins


[PATCH v2 00/12] New paravirtual PCI front-end for Hyper-V VMs

2015-09-10 Thread jakeo
From: Jake Oshins 

The patch series updates the one sent about a month ago in three ways.  It
integrates with other IRQ domain work done in linux-next in that time, it
distributes interrupts to multiple virtual processors in the guest VM, and it
incorporates feedback from Thomas Gleixner and others.

These patches change the IRQ domain code so that an IRQ domain can match on both
bus type and on the PCI domain.  The IRQ domain match code is modified so that
IRQ domains can have a "rank," allowing for a default one which matches every
x86 PC and more specific ones that replace the default.
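
As a sketch of what such a ranked match() might look like: the bus_data
parameter is the one patch 1 adds, while the return-value convention and
hv_pci_segment() are illustrative assumptions rather than the series' exact
code:

static int hv_pci_msi_match(struct irq_domain *d, struct device_node *node,
			    enum irq_domain_bus_token bus_token,
			    void *bus_data)
{
	u32 *pci_domain = bus_data;

	if (bus_token != DOMAIN_BUS_PCI_MSI || !pci_domain)
		return 0;

	/* The default x86 MSI domain would answer any PCI/MSI lookup
	 * with rank 1; answering with 2 for the one PCI segment this
	 * domain serves replaces the default for that segment. */
	return *pci_domain == hv_pci_segment(d) ? 2 : 0;
}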

The next step is to make it possible to implement an IRQ domain in a module,
by exporting a few functions.  Building this IRQ domain into the kernel
instead would drag a lot of other Hyper-V-related code in with it, since the
IRQ domain implementation has to send messages to and receive messages from
the hypervisor, and those facilities are currently built as modules.
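
The symbol list itself is in patch 7; the shape of the change is roughly the
following, though which exact symbols are exported is inferred from the
diffstat and the patch title rather than quoted from the patch:

/* arch/x86/kernel/apic/vector.c */
EXPORT_SYMBOL_GPL(x86_vector_domain);

/* arch/x86/kernel/apic/msi.c */
EXPORT_SYMBOL_GPL(pci_msi_prepare);

/* kernel/irq/chip.c: parent-chip helper a modular irq_chip needs */
EXPORT_SYMBOL_GPL(irq_chip_ack_parent);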

After that, a couple of new Hyper-V-related facilities are exported from
hv_vmbus.ko, so that the PCI front end can correlate Linux CPUs with virtual
processor IDs and make hypercalls.
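
A sketch of how the front end might consume those two exports; every name,
signature, and structure below is inferred from the patch titles (patches 5
and 6) and the Hyper-V TLFS, not quoted from the series:

#include <linux/types.h>

#define HVCALL_RETARGET_INTERRUPT 0x007e	/* per the Hyper-V TLFS */

/* Hypothetical prototypes matching the two patch titles. */
extern int vmbus_cpu_number_to_vp_number(int cpu);
extern u64 do_hypercall(u64 control, void *input, void *output);

struct retarget_msi_interrupt {
	u64 vp_mask;			/* target virtual processors */
	/* interrupt-identity fields elided */
};

static u64 hv_retarget_interrupt(struct retarget_msi_interrupt *params,
				 int cpu)
{
	/* Hyper-V addresses processors by virtual-processor index,
	 * not by Linux CPU number, so translate before telling the
	 * hypervisor which VP should receive this interrupt. */
	params->vp_mask = 1ULL << vmbus_cpu_number_to_vp_number(cpu);

	return do_hypercall(HVCALL_RETARGET_INTERRUPT, params, NULL);
}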

The last patch is the new front-end driver itself, which exposes new root PCI
buses that (virtually) contain the devices being passed through to the VM.

Jake Oshins (12):
  kernel:irq:  Change signature of irq_domain_ops match() method, adding
*bus_data
  kernel:irq: Change signature of irq_find_matching_host()
  kernel:irq: Allow for ranked matches on IRQ domains
  drivers:pci: Add IRQ domain lookup by PCI domain
  drivers:hv: Export a function that maps Linux CPU num onto Hyper-V
proc num
  drivers:hv: Export do_hypercall()
  drivers:x86:pci: Make it possible to implement a PCI MSI IRQ Domain in
a module.
  drivers:pci:msi: Store PCI domain (segment) as part of IRQ domain
  kernel:irq: Implement msi match function
  kernel:irq: Return a higher ranked match when the IRQ domain matches a
specific PCI domain
  drivers:hv: Define the channel type for Hyper-V PCI Express
pass-through
  drivers:pci:hv: New paravirtual PCI front-end for Hyper-V VMs

 MAINTAINERS  |1 +
 arch/powerpc/platforms/512x/mpc5121_ads_cpld.c   |2 +-
 arch/powerpc/platforms/cell/interrupt.c  |2 +-
 arch/powerpc/platforms/embedded6xx/flipper-pic.c |3 +-
 arch/powerpc/platforms/powermac/pic.c|3 +-
 arch/powerpc/platforms/powernv/opal-irqchip.c|2 +-
 arch/powerpc/platforms/ps3/interrupt.c   |2 +-
 arch/powerpc/sysdev/ehv_pic.c|3 +-
 arch/powerpc/sysdev/i8259.c  |2 +-
 arch/powerpc/sysdev/ipic.c   |2 +-
 arch/powerpc/sysdev/mpic.c   |2 +-
 arch/powerpc/sysdev/qe_lib/qe_ic.c   |2 +-
 arch/powerpc/sysdev/xics/xics-common.c   |2 +-
 arch/x86/include/asm/msi.h   |4 +
 arch/x86/kernel/apic/msi.c   |5 +-
 arch/x86/kernel/apic/vector.c|2 +
 drivers/hv/hv.c  |3 +-
 drivers/hv/vmbus_drv.c   |   17 +
 drivers/irqchip/irq-gic-v3-its-pci-msi.c |2 +-
 drivers/irqchip/irq-gic-v3-its-platform-msi.c|2 +-
 drivers/of/irq.c |2 +-
 drivers/pci/Kconfig  |7 +
 drivers/pci/host/Makefile|1 +
 drivers/pci/host/hv_pcifront.c   | 2244 ++
 drivers/pci/msi.c|2 +
 drivers/pci/of.c |2 +-
 drivers/pci/probe.c  |   11 +
 include/linux/hyperv.h   |   14 +
 include/linux/irqdomain.h|9 +-
 include/linux/msi.h  |4 +
 kernel/irq/chip.c|1 +
 kernel/irq/irqdomain.c   |   48 +-
 kernel/irq/msi.c |   24 +
 33 files changed, 2394 insertions(+), 38 deletions(-)
 create mode 100644 drivers/pci/host/hv_pcifront.c

-- 
1.9.1
