[Bug 50891] The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
https://bugzilla.kernel.org/show_bug.cgi?id=50891

liyi changed:
           What      |Removed |Added
           Status    |NEW     |RESOLVED
           Resolution|        |INVALID

--- Comment #14 from liyi 2012-12-03 13:18:21 ---
Sorry to disturb all of you. This issue is fixed in the latest qemu-kvm version.

--
Configure bugmail: https://bugzilla.kernel.org/userprefs.cgi?tab=email
You are receiving this mail because:
- You are on the CC list for the bug.
- You are watching the assignee of the bug.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in the body
of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
--- Comment #13 from liyi 2012-11-29 19:12:47 ---
Yes.
--- Comment #12 from Alex Williamson 2012-11-29 18:55:46 ---
(In reply to comment #11)
> please omit the BCM5709S.
>
> i have BCM5709 BCM5716S intel82599 environment, but now, cannot access
> the BCM5716S and intel82599.
>
> now, the test on BCM5709 is ok, but the test on BCM5716S intel82599 before
> is failed.

Are you saying MSI-X affinity works as expected on the BCM5709?
--- Comment #11 from liyi 2012-11-29 18:20:59 ---
Please disregard the BCM5709S.

I have a BCM5709, a BCM5716S, and an Intel 82599 in my environment, but right
now I cannot access the BCM5716S and the Intel 82599.

The test on the BCM5709 is OK now, but the earlier tests on the BCM5716S and
the Intel 82599 failed.
--- Comment #10 from liyi 2012-11-29 18:18:21 ---
Sorry, I find that the test on the BCM5709 is OK. Right now I cannot access
my physical machine (with the BCM5709S or the Intel 82599); I will provide
the debug info as soon as possible. Thanks.
--- Comment #9 from Alex Williamson 2012-11-29 18:08:44 ---
(In reply to comment #8)
> ok,i am sure the irqbalance is not running in the guest.
> you are right, BCM5716S using the MSI-X interrupts default.
>
> 1:i have set the smp_affinity cannot work correctly using the MSI-X. but
> the test is ok after insmod the bnx2 with disable_msi=1.
>
> 2: lspci -vvv:
> 00:06.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709
> Gigabit Ethernet (rev 20) ...
> 00:06.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709
> Gigabit Ethernet (rev 20)

Where's the BCM5716?
--- Comment #8 from liyi 2012-11-29 17:52:47 ---
OK, I am sure that irqbalance is not running in the guest.
You are right, the BCM5716S uses MSI-X interrupts by default.

1: smp_affinity cannot work correctly when using MSI-X, but the test is OK
after loading bnx2 with disable_msi=1.

2: lspci -vvv:

00:06.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
        Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
        Capabilities: [ac] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Latency L0 <4us, L1 <4us
                        ClockPM- Suprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB
        Capabilities: [48] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [a0] MSI-X: Enable+ Mask- TabSize=9
                Vector table: BAR=0 offset=c000
                PBA: BAR=0 offset=e000
        Capabilities: [58] Message Signalled Interrupts: Mask- 64bit- Count=1/16 Enable-
                Address:   Data:
        Kernel driver in use: bnx2
        Kernel modules: bnx2

00:06.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
        Subsystem: Dell PowerEdge R710 BCM5709 Gigabit Ethernet
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
        Capabilities: [ac] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Latency L0 <4us, L1 <4us
                        ClockPM- Suprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB
        Capabilities: [48] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [a0] MSI-X: Enable+ Mask- TabSize=9
                Vector table: BAR=0 offset=c000
                PBA: BAR=0 offset=e000
        Capabilities: [58] Message Signalled Interrupts: Mask- 64bit- Count=1/16 Enable-
                Address:   Data:
        Kernel driver in use: bnx2
        Kernel modules: bnx2

The guest OS: 3.7-rc6 (the same as the host).
--- Comment #7 from Alex Williamson 2012-11-29 16:33:38 ---
(In reply to comment #6)
> 1:i have set the irq (associate the passthrough device) to the vcpu2 on the
> guest os.but look at /proc/interrupts on the guest and the interrupt count
> with the pinned IRQ increases a lot of vcpus except the vcpu0.

Perhaps double check that irqbalance is not running in the guest:
ps aux | grep irqbalance

> 2 my qemu kvm version is 1.2 , and have not add the patch
> http://www.spinics.net/lists/kvm/msg83109.html from, maybe it will help me.
> are you sure that your qemu-kvm is from the
> git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git?

Those patches are against git://git.qemu.org/qemu.git

I believe the BCM5716 will try to use MSI-X interrupts, not MSI. Please
provide lspci -vvv of the device in the guest and we can verify what
interrupt mode it's working in. What is the guest operating system? Thanks.
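The two checks Alex suggests above can be scripted. The `irq_mode` helper
below is not from the thread — it is a hypothetical sketch that scans
`lspci -vvv` output for the message-signaled-interrupt capability whose
Enable bit is set (lspci marks the active one with `Enable+`):

```shell
#!/bin/sh
# Check 1: is irqbalance running? On a live guest you would run:
#   ps aux | grep irqbalance

# Check 2 (hypothetical helper): given `lspci -vvv` text on stdin,
# print which interrupt mode is enabled. Older lspci spells the plain
# MSI capability "Message Signalled Interrupts", as in this thread.
irq_mode() {
    grep -E 'Enable\+' |
        grep -oE 'MSI-X|Message Signalled Interrupts' |
        sed 's/Message Signalled Interrupts/MSI/'
}

# Demonstration on the capability lines quoted elsewhere in this thread:
printf '%s\n' \
    'Capabilities: [a0] MSI-X: Enable+ Mask- TabSize=9' \
    'Capabilities: [58] Message Signalled Interrupts: Mask- 64bit- Count=1/16 Enable-' |
    irq_mode    # prints: MSI-X
```

On a real guest you would pipe `lspci -vvv -s <slot>` into `irq_mode` instead
of the canned lines.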
--- Comment #6 from liyi 2012-11-29 16:21:16 ---
1: I have set the irq (associated with the passthrough device) to vcpu2 on
the guest OS, but looking at /proc/interrupts on the guest, the interrupt
count for the pinned IRQ increases on a lot of vcpus, except vcpu0.

2: My qemu-kvm version is 1.2, and I have not added the patch from
http://www.spinics.net/lists/kvm/msg83109.html; maybe it will help me.
Are you sure that your qemu-kvm is from
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git?
--- Comment #5 from Alex Williamson 2012-11-29 16:10:55 ---
(In reply to comment #4)
> 1: i have disable the irqbalance in the host, and the irq affinity on the
> host is ok.you can see the 4 step in comment 0.
>
> 2: you see in comment 0, we have set irq affinity on the host is pcpu1.
> and so reduce the ipi interrupt, so we want to set the irq affinity at the
> vcpu2 on the guest os according, but the irq
> affinity on the guest os when using msi-x is failed.

How does it fail? When I test this, I look at /proc/interrupts on the guest
and the interrupt count on the row with the pinned IRQ only increases in the
column under the target CPU. Perhaps if you included some logs and exact
commands used I could better understand where things are going wrong. Thanks.
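A minimal way to run the check Alex describes: snapshot the pinned IRQ's row
in /proc/interrupts twice and compare the per-CPU columns. The helper below
is a sketch; the IRQ number (30) and the sample table are invented for
illustration — on a live guest you would point it at /proc/interrupts:

```shell
#!/bin/sh
# Print the per-CPU count columns for one IRQ row of a
# /proc/interrupts-style table. Usage: irq_row IRQ FILE
irq_row() {
    awk -v irq="$1:" '$1 == irq {
        out = ""
        for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++)
            out = out (i > 2 ? " " : "") $i
        print out
    }' "$2"
}

# Illustration on a fabricated snapshot:
cat > /tmp/interrupts.snap <<'EOF'
           CPU0       CPU1       CPU2       CPU3
 30:          0          3      12345          0   PCI-MSI-edge  eth0
EOF
irq_row 30 /tmp/interrupts.snap    # prints: 0 3 12345 0
# Take a second snapshot a few seconds later: with affinity working,
# only the column for the pinned CPU (here CPU2) should grow.
```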
--- Comment #4 from liyi 2012-11-29 15:38:10 ---
1: I have disabled irqbalance on the host, and the irq affinity on the host
is OK; you can see step 4 in comment 0.

2: As you can see in comment 0, we set the irq affinity on the host to pcpu1
to reduce the IPI interrupts, so accordingly we want to set the irq affinity
to vcpu2 on the guest OS, but setting the irq affinity on the guest OS fails
when using MSI-X.
--- Comment #3 from Alex Williamson 2012-11-27 18:13:07 ---
I tested a BCM5716 on 3.7.0-rc7 with both qemu-kvm-1.2.0 and current qemu.git
using pci-assign. MSI-X pinning works exactly as expected.

Note that Linux MSI affinity is set up lazily on the next interrupt for a
vector, so it's normal that after setting the affinity for a vector you might
see a single interrupt on another CPU before the interrupt is moved. Also
note that setting the affinity in the guest only changes the affinity of the
virtual interrupt to the guest; the physical interrupt affinity must be
separately configured on the host.

Perhaps the steps you're missing in comment 0 above are to disable irqbalance
in the host and set the irq affinity of the kvm interrupts in the host.
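The host-side step above can be sketched as follows. The helper only prints
the pinning commands rather than running them (writing to /proc/irq needs
root), and the sample table with IRQ 45 is invented for illustration; on a
real host you would run it against /proc/interrupts, where assigned-device
vectors show up with names like "kvm:pci-bus":

```shell
#!/bin/sh
# For every kvm-related IRQ in a /proc/interrupts-style table, print the
# command that pins it to the given affinity mask.
# Usage: pin_kvm_irqs MASK FILE
pin_kvm_irqs() {
    awk -v mask="$1" '/kvm/ {
        sub(":", "", $1)    # "45:" -> "45"
        print "echo " mask " > /proc/irq/" $1 "/smp_affinity"
    }' "$2"
}

# Illustration on a fabricated host table (IRQ 45 is hypothetical):
cat > /tmp/host-irqs.snap <<'EOF'
            CPU0       CPU1
  45:          0        999   PCI-MSI-edge   kvm:pci-bus 0000:05:00.0
EOF
pin_kvm_irqs 2 /tmp/host-irqs.snap   # prints: echo 2 > /proc/irq/45/smp_affinity
```

Mask 2 (binary 10) selects pCPU 1, matching step 4 of the original report.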
--- Comment #2 from liyi 2012-11-27 00:57:28 ---
Sorry, I was not clear about it.

1: I am sure the device is using MSI-X, and the test fails. Checking the
attribute, entry->msi_attrib.is_msix is 1. The qemu-kvm version is 1.2.

Also, when using the virtio driver, I find that the netcard uses MSI-X, but
that test is OK. And I have tested Intel 82599 SR-IOV, passing a VF through
to the guest OS: that test fails with MSI-X, the same as the BCM5716S.
Re: The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
Alex,

Thanks for your reply, and I will check it again with MSI-X.

YiLi

2012/11/27 Alex Williamson :
> On Tue, 2012-11-27 at 00:47 +0800, yi li wrote:
>> hi Alex,
>>
>> the qemu-kvm version 1.2.
>
> And is the device making use of MSI-X or MSI interrupts? MSI-X should
> work on 1.2, MSI does not yet support vector updates for affinity, but
> patches are welcome. Thanks,
>
> Alex
>
>> 2012/11/26 Alex Williamson :
>> > On Fri, 2012-11-23 at 11:06 +0800, yi li wrote:
>> >> Hi Guys,
>> >>
>> >> there have a issue about smp_affinity cannot work correctly on guest
>> >> os when PCI passthrough device using msi/msi-x with KVM.
>> >>
>> >> My reason:
>> >> pcpu will occur a lot of ipi interrupt to find the vcpu to handle the
>> >> irq. so the guest os will VM_EXIT frequelty. right?
>> >>
>> >> if smp_affinity can work correctly on guest os, the best way is that
>> >> the vcpu handle the irq is cputune at the pcpu which handle the
>> >> kvm:pci-bus irq on the host.but unfortunly, i find that smp_affinity
>> >> can not work correctly on guest os when msi/msi-x.
>> >>
>> >> how to reproduce:
>> >> 1: passthrough a netcard (Brodcom BCM5716S) to the guest os
>> >>
>> >> 2: ifup the netcard, the card will use msi-x interrupt default, and
>> >> close the irqbalance service
>> >>
>> >> 3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we assume the vcpu2
>> >> handle the irq.
>> >>
>> >> 4: we have set the irq kvm:pci-bus to the pcpu1 on the host.
>> >>
>> >> we think this configure will reduce the ipi interrupt when inject
>> >> interrupt to the guest os. but this irq is not only handle on vcpu2.
>> >> maybe it is not our expect.
>> >
>> > What version of qemu-kvm/qemu are you using? There's been some work
>> > recently specifically to enable this. Thanks,
>> >
>> > Alex
Alex Williamson changed:
           What|Removed |Added
           CC  |        |alex.william...@redhat.com

--- Comment #1 from Alex Williamson 2012-11-26 19:32:15 ---
MSI-X SMP affinity should be working; MSI SMP affinity is not currently
implemented. Please clarify whether the device in question is actually making
use of MSI or MSI-X. Thanks.
Re: The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
On Tue, 2012-11-27 at 00:47 +0800, yi li wrote:
> hi Alex,
>
> the qemu-kvm version 1.2.

And is the device making use of MSI-X or MSI interrupts? MSI-X should
work on 1.2, MSI does not yet support vector updates for affinity, but
patches are welcome. Thanks,

Alex

> 2012/11/26 Alex Williamson :
> > On Fri, 2012-11-23 at 11:06 +0800, yi li wrote:
> >> Hi Guys,
> >>
> >> there have a issue about smp_affinity cannot work correctly on guest
> >> os when PCI passthrough device using msi/msi-x with KVM.
> >>
> >> My reason:
> >> pcpu will occur a lot of ipi interrupt to find the vcpu to handle the
> >> irq. so the guest os will VM_EXIT frequelty. right?
> >>
> >> if smp_affinity can work correctly on guest os, the best way is that
> >> the vcpu handle the irq is cputune at the pcpu which handle the
> >> kvm:pci-bus irq on the host.but unfortunly, i find that smp_affinity
> >> can not work correctly on guest os when msi/msi-x.
> >>
> >> how to reproduce:
> >> 1: passthrough a netcard (Brodcom BCM5716S) to the guest os
> >>
> >> 2: ifup the netcard, the card will use msi-x interrupt default, and
> >> close the irqbalance service
> >>
> >> 3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we assume the vcpu2
> >> handle the irq.
> >>
> >> 4: we have set the irq kvm:pci-bus to the pcpu1 on the host.
> >>
> >> we think this configure will reduce the ipi interrupt when inject
> >> interrupt to the guest os. but this irq is not only handle on vcpu2.
> >> maybe it is not our expect.
> >
> > What version of qemu-kvm/qemu are you using? There's been some work
> > recently specifically to enable this. Thanks,
> >
> > Alex
Re: The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
hi Alex,

the qemu-kvm version 1.2. Thanks.

YiLi

2012/11/26 Alex Williamson :
> On Fri, 2012-11-23 at 11:06 +0800, yi li wrote:
>> Hi Guys,
>>
>> there have a issue about smp_affinity cannot work correctly on guest
>> os when PCI passthrough device using msi/msi-x with KVM.
>>
>> My reason:
>> pcpu will occur a lot of ipi interrupt to find the vcpu to handle the
>> irq. so the guest os will VM_EXIT frequelty. right?
>>
>> if smp_affinity can work correctly on guest os, the best way is that
>> the vcpu handle the irq is cputune at the pcpu which handle the
>> kvm:pci-bus irq on the host.but unfortunly, i find that smp_affinity
>> can not work correctly on guest os when msi/msi-x.
>>
>> how to reproduce:
>> 1: passthrough a netcard (Brodcom BCM5716S) to the guest os
>>
>> 2: ifup the netcard, the card will use msi-x interrupt default, and
>> close the irqbalance service
>>
>> 3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we assume the vcpu2
>> handle the irq.
>>
>> 4: we have set the irq kvm:pci-bus to the pcpu1 on the host.
>>
>> we think this configure will reduce the ipi interrupt when inject
>> interrupt to the guest os. but this irq is not only handle on vcpu2.
>> maybe it is not our expect.
>
> What version of qemu-kvm/qemu are you using? There's been some work
> recently specifically to enable this. Thanks,
>
> Alex
Re: The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
On Fri, 2012-11-23 at 11:06 +0800, yi li wrote:
> Hi Guys,
>
> there have a issue about smp_affinity cannot work correctly on guest
> os when PCI passthrough device using msi/msi-x with KVM.
>
> My reason:
> pcpu will occur a lot of ipi interrupt to find the vcpu to handle the
> irq. so the guest os will VM_EXIT frequelty. right?
>
> if smp_affinity can work correctly on guest os, the best way is that
> the vcpu handle the irq is cputune at the pcpu which handle the
> kvm:pci-bus irq on the host.but unfortunly, i find that smp_affinity
> can not work correctly on guest os when msi/msi-x.
>
> how to reproduce:
> 1: passthrough a netcard (Brodcom BCM5716S) to the guest os
>
> 2: ifup the netcard, the card will use msi-x interrupt default, and
> close the irqbalance service
>
> 3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we assume the vcpu2
> handle the irq.
>
> 4: we have set the irq kvm:pci-bus to the pcpu1 on the host.
>
> we think this configure will reduce the ipi interrupt when inject
> interrupt to the guest os. but this irq is not only handle on vcpu2.
> maybe it is not our expect.

What version of qemu-kvm/qemu are you using? There's been some work
recently specifically to enable this. Thanks,

Alex
The smp_affinity cannot work correctly on guest os when PCI passthrough device using msi/msi-x with KVM
Hi Guys,

There is an issue where smp_affinity cannot work correctly on the guest OS
when a PCI passthrough device uses MSI/MSI-X with KVM.

My reasoning: the pcpu will raise a lot of IPI interrupts to find the vcpu
that handles the irq, so the guest OS will VM_EXIT frequently. Right?

If smp_affinity works correctly on the guest OS, the best configuration is
that the vcpu handling the irq is pinned (cputune) to the pcpu that handles
the kvm:pci-bus irq on the host. But unfortunately, I find that smp_affinity
cannot work correctly on the guest OS with MSI/MSI-X.

How to reproduce:

1: Pass a netcard (Broadcom BCM5716S) through to the guest OS.

2: ifup the netcard (the card will use MSI-X interrupts by default), and stop
the irqbalance service.

3: echo 4 > /proc/irq/NETCARDIRQ/smp_affinity, so we expect vcpu2 to handle
the irq.

4: We have set the kvm:pci-bus irq to pcpu1 on the host.

We think this configuration will reduce the IPI interrupts when injecting
interrupts into the guest OS, but this irq is not handled only on vcpu2.
Maybe it is not what we expect.

YiLi
Thanks
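The masks used in the steps above are hexadecimal CPU bitmasks: bit N set
means CPU N may handle the interrupt, so 4 (binary 100) selects vcpu2 and
2 (binary 10) selects pcpu1. A small sketch with placeholder IRQ numbers
(30 and 45 are invented; look yours up in /proc/interrupts), printing the
commands rather than running them since both writes require root:

```shell
#!/bin/sh
# Convert a CPU number into the hex bitmask that smp_affinity expects.
cpu_mask() {
    printf '%x\n' $(( 1 << $1 ))
}

GUEST_IRQ=30   # placeholder: the passthrough NIC's IRQ in the guest
HOST_IRQ=45    # placeholder: the kvm:pci-bus IRQ on the host

# Step 3 (in the guest): pin the NIC's vector to vcpu2 (mask 4).
echo "guest: echo $(cpu_mask 2) > /proc/irq/$GUEST_IRQ/smp_affinity"

# Step 4 (on the host): pin the kvm:pci-bus interrupt to pcpu1 (mask 2).
echo "host:  echo $(cpu_mask 1) > /proc/irq/$HOST_IRQ/smp_affinity"
```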
liyi changed:
           What   |Removed                 |Added
           Summary|The smp_affinity cannot |The smp_affinity cannot
                  |work correctly when PCI |work correctly on guest os
                  |passthrough device using|when PCI passthrough device
                  |msi/msi-x with KVM      |using msi/msi-x with KVM