Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-24 Thread jacob jacob
On Tue, Mar 24, 2015 at 10:53 AM, Shannon Nelson
 wrote:
> On Tue, Mar 24, 2015 at 7:13 AM, jacob jacob  wrote:
>> After update to latest firmware and using version 1.2.37 of i40e
>> driver, things are looking better with PCI passthrough.
>>
>> ]# ethtool -i eth3
>> driver: i40e
>> version: 1.2.37
>> firmware-version: f4.33.31377 a1.2 n4.42 e1930
>> bus-info: :00:07.0
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: yes
>> supports-register-dump: yes
>> supports-priv-flags: yes
>
> I'm glad the updates helped as we expected.
>
>>
>> There are still issues running DPDK 1.8.0 from the VM using the PCI
>> passthrough devices, and it looks like this puts the devices in a bad state.
>> The i40e driver will not bind after this happens, and a host reboot is
>> required to recover.
>
> Did you make sure to unbind the i40e device from pci-stub after you
> were done with using it in the VM?

The issue occurs when running DPDK from within the VM itself. Is it
possible that the igb_uio driver needs additional updates/functionality
to be at parity with version 1.2.37 of the i40e driver?
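
For reference, the pci-stub unbind/rebind sequence Shannon asks about can be sketched as below. This is a generic sketch, not a command sequence from this thread: the PCI address 0000:00:07.0 is an illustrative assumption, and the sysfs root is parameterized only so the sequence can be exercised off-box.

```shell
#!/bin/sh
# Sketch: return a passed-through NIC from pci-stub to the i40e driver.
# On a real host SYSFS is /sys/bus/pci and the writes require root.
SYSFS="${SYSFS:-/sys/bus/pci}"

rebind_to_i40e() {
    dev="$1"    # PCI address, e.g. 0000:00:07.0 (example address)

    # Release the device from pci-stub; ignore the error if it is not bound.
    echo "$dev" > "$SYSFS/drivers/pci-stub/unbind" 2>/dev/null || true

    # Hand the device to i40e explicitly.
    echo "$dev" > "$SYSFS/drivers/i40e/bind"
}

# Usage (on the host, as root): rebind_to_i40e 0000:00:07.0
```

If i40e refuses the bind even after the unbind, that points at the device being left in a bad state rather than at the bind mechanics.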

>
>> I'll post further updates as i make progress.
>> Thanks for all the help.
>>
>
> Good luck,
> sln
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-24 Thread jacob jacob
After updating to the latest firmware and using version 1.2.37 of the
i40e driver, things are looking better with PCI passthrough.

]# ethtool -i eth3
driver: i40e
version: 1.2.37
firmware-version: f4.33.31377 a1.2 n4.42 e1930
bus-info: :00:07.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

There are still issues running DPDK 1.8.0 from the VM using the PCI
passthrough devices, and it looks like this puts the devices in a bad state.
The i40e driver will not bind after this happens, and a host reboot is
required to recover.
I'll post further updates as I make progress.
Thanks for all the help.
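
Before falling back to a full host reboot, a PCI remove/rescan cycle on the wedged device is sometimes enough to let i40e reprobe cleanly. This is a generic technique, not something verified on the XL710 in this thread; the address 0000:00:05.0 is an example, and the sysfs root is overridable only for off-box testing.

```shell
#!/bin/sh
# Sketch: soft-recover a wedged PCI device via remove + bus rescan,
# before resorting to a host reboot. Run as root on the host.
SYSFS="${SYSFS:-/sys/bus/pci}"

pci_remove_rescan() {
    dev="$1"    # PCI address, e.g. 0000:00:05.0 (example address)

    # Detach the device from the bus entirely...
    echo 1 > "$SYSFS/devices/$dev/remove"
    # ...then re-enumerate the bus, which makes the driver reprobe it.
    echo 1 > "$SYSFS/rescan"
}

# Usage (on the host, as root): pci_remove_rescan 0000:00:05.0
```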

On Mon, Mar 23, 2015 at 3:19 AM, Stefan Assmann  wrote:
> On 20.03.2015 21:55, jacob jacob wrote:
>> On Thu, Mar 19, 2015 at 10:18 AM, Stefan Assmann  wrote:
>>> On 19.03.2015 15:04, jacob jacob wrote:
>>>> Hi Stefan,
>>>> have you been able to get PCI passthrough working without any issues
>>>> after the upgrade?
>>>
>>> My XL710 fails to transfer regular TCP traffic (netperf). If that works
>>> for you then you're already one step ahead of me. Afraid I can't help
>>> you there.
>>
>> I have data transfer working when trying the test runs on the host
>> itself. Are you seeing problems when directly trying the TCP traffic
>> from the host itself?
>
> Correct.
>
>> The issues that i am seeing are specific to the case when the devices
>> are passed via PCI passthrough into the VM.
>>
>> Any ideas whether this would be a kvm/qemu or i40e driver issue?
>> (Updating to the latest firmware and using latest i40e driver didn't
>> seem to help.)
>
> Hard to say, that's probably something for Intel to look into.
>
>   Stefan


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-20 Thread jacob jacob
On Thu, Mar 19, 2015 at 10:18 AM, Stefan Assmann  wrote:
> On 19.03.2015 15:04, jacob jacob wrote:
>> Hi Stefan,
>> have you been able to get PCI passthrough working without any issues
>> after the upgrade?
>
> My XL710 fails to transfer regular TCP traffic (netperf). If that works
> for you then you're already one step ahead of me. Afraid I can't help
> you there.

I have data transfer working when running the tests on the host
itself. Are you seeing problems when trying the TCP traffic directly
from the host itself?
The issues that I am seeing are specific to the case where the devices
are passed via PCI passthrough into the VM.

Any idea whether this would be a kvm/qemu or i40e driver issue?
(Updating to the latest firmware and using the latest i40e driver didn't
seem to help.)
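
One way to separate "KVM/QEMU problem" from "driver problem" is to first confirm the host-side passthrough prerequisites. A minimal sketch of that check (the dmesg grep matches the one used elsewhere in this thread; the helper is a hypothetical name):

```shell
#!/bin/sh
# Sketch: host-side sanity check before blaming QEMU/KVM or i40e.

iommu_requested() {
    # $1 is kernel command line text (normally from /proc/cmdline).
    case "$1" in
        *intel_iommu=on*) return 0 ;;
        *)                return 1 ;;
    esac
}

# Usage on the host:
#   iommu_requested "$(cat /proc/cmdline)" && echo "intel_iommu=on present"
#   dmesg | grep -e DMAR -e IOMMU   # confirm the IOMMU actually came up
```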


>
>   Stefan
>
>> Thanks
>> Jacob
>>
>> On Thu, Mar 19, 2015 at 4:15 AM, Stefan Assmann  wrote:
>>> On 18.03.2015 23:06, Shannon Nelson wrote:
>>>> On Wed, Mar 18, 2015 at 3:01 PM, Shannon Nelson
>>>>  wrote:
>>>>>
>>>>>
>>>>> On Wed, Mar 18, 2015 at 8:40 AM, jacob jacob  wrote:
>>>>>>
>>>>>> On Wed, Mar 18, 2015 at 11:24 AM, Bandan Das  wrote:
>>>>>>>
>>>>>>> Actually, Stefan suggests that support for this card is still sketchy
>>>>>>> and your best bet is to try out net-next
>>>>>>> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
>>>>>>>
>>>>>>> Also, could you please post more information about your hardware setup
>>>>>>> (chipset/processor/firmware version on the card etc) ?
>>>>>>
>>>>>> Host CPU : Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 
>>>>>> 2.60GHz
>>>>>>
>>>>>> Manufacturer Part Number:  XL710QDA1BLK
>>>>>> Ethernet controller: Intel Corporation Ethernet Controller XL710 for
>>>>>> 40GbE QSFP+ (rev 01)
>>>>>>  #ethtool -i enp9s0
>>>>>> driver: i40e
>>>>>> version: 1.2.6-k
>>>>>> firmware-version: f4.22 a1.1 n04.24 e800013fd
>>>>>> bus-info: :09:00.0
>>>>>> supports-statistics: yes
>>>>>> supports-test: yes
>>>>>> supports-eeprom-access: yes
>>>>>> supports-register-dump: yes
>>>>>> supports-priv-flags: no
>>>>>>
>>>>
>>>> Jacob,
>>>>
>>>> It looks like you're using a NIC with the e800013fd firmware from last
>>>> summer, and from a separate message that you saw these issues with
>>>> both the 1.2.2-k and the 1.2.37 version drivers.  I suggest the next
>>>> step would be to update the NIC firmware as there are some performance
>>>> and stability updates available that deal with similar issues.  Please
>>>> see the Intel Networking support webpage at
>>>> https://downloadcenter.intel.com/download/24769 and look for the
>>>> NVMUpdatePackage.zip.  This should take care of several of the things
>>>> Stefan might describe as "sketchy" :-).
>>>
>>> Interesting, the following might explain why my XL710 feels a bit
>>> sketchy then. ;-)
>>> # ethtool -i p4p1
>>> driver: i40e
>>> version: 1.2.37-k
>>> firmware-version: f4.22.26225 a1.1 n4.24 e12ef
>>> Looks like the firmware on this NIC is even older.
>>>
>>> I tried to update the firmware with nvmupdate64e and the first thing I
>>> noticed is that you cannot update the firmware even with todays linux
>>> git. The tool errors out because it cannot access the NVM. Only with a
>>> recent net-next kernel I was able to update the firmware.
>>> ethtool -i p4p1
>>> driver: i40e
>>> version: 1.2.37-k
>>> firmware-version: f4.33.31377 a1.2 n4.42 e1932
>>>
>>> However during the update I got a lot of errors in dmesg.
>>> [  301.796664] i40e :82:00.0: ARQ Error: Unknown event 0x0702 received
>>> [  301.893933] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>>> [  302.005223] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>>> [...]
>>> [  387.884635] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>>> [  387.896862] i40e :82:00.0: ARQ Overflow Error detected
>>> [  387.902995] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>>> [...]
>>

Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-19 Thread jacob jacob
On Thu, Mar 19, 2015 at 5:53 PM, jacob jacob  wrote:
> On Thu, Mar 19, 2015 at 5:42 PM, Shannon Nelson
>  wrote:
>> On Thu, Mar 19, 2015 at 2:04 PM, jacob jacob  wrote:
>>> I have updated to latest firmware and still no luck.
>>
>> [...]
>>
>>> [   61.554132] i40e :00:06.0 eth2: the driver failed to link
>>> because an unqualified module was detected. <<<<<<<<<<<<<<<<<<<<
>>> [   61.555331] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
>>>
>>
>> So I assume you're getting traffic on the other port and it doesn't
>> complain about "unqualified module"?  Does the problem move if you
>> swap the cables?  The usual problem here is a QSFP connector that
>> isn't compatible with the NIC.  I don't have a pointer handy to the
>> official list, but you should be able to get that from your NIC
>> supplier.
>
> No. That is not the case.
>
> The traffic counters are for the dpdk test run which uses igb_uio driver.
>
> As I mentioned earlier, I have tried all combinations of cable, PCI slot,
> card, etc. to isolate any hardware issues.

Everything works fine on the host, so the modules are verified to be
working.
The issue is seen only when the device is passed through to a VM
running on the same host.


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-19 Thread jacob jacob
On Thu, Mar 19, 2015 at 5:42 PM, Shannon Nelson
 wrote:
> On Thu, Mar 19, 2015 at 2:04 PM, jacob jacob  wrote:
>> I have updated to latest firmware and still no luck.
>
> [...]
>
>> [   61.554132] i40e :00:06.0 eth2: the driver failed to link
>> because an unqualified module was detected. <<<<<<<<<<<<<<<<<<<<
>> [   61.555331] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
>>
>
> So I assume you're getting traffic on the other port and it doesn't
> complain about "unqualified module"?  Does the problem move if you
> swap the cables?  The usual problem here is a QSFP connector that
> isn't compatible with the NIC.  I don't have a pointer handy to the
> official list, but you should be able to get that from your NIC
> supplier.

No. That is not the case.

The traffic counters are for the DPDK test run, which uses the igb_uio driver.

As I mentioned earlier, I have tried all combinations of cable, PCI slot,
card, etc. to isolate any hardware issues.


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-19 Thread jacob jacob
I have updated to the latest firmware and still no luck.


]# ethtool -i eth1
driver: i40e
version: 1.2.37
firmware-version: f4.33.31377 a1.2 n4.42 e1930
bus-info: :00:05.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


Seeing similar results as before:

1) Everything works fine on the host (used i40e version 1.2.37 and DPDK 1.8.0).

2) In the VM, tried both the i40e driver version 1.2.37 and DPDK 1.8.0, and
data tx fails (I have two 40G interfaces passed through to the VM). See the
following error now in the VM, which looks interesting...

[5.449672] i40e :00:06.0: f4.33.31377 a1.2 n4.42 e1930
[5.525061] i40e :00:06.0: FCoE capability is disabled
[5.528786] i40e :00:06.0: MAC address: 68:05:ca:2e:80:50
[5.534491] i40e :00:06.0: SAN MAC: 68:05:ca:2e:80:54
[5.544081] i40e :00:06.0: AQ Querying DCB configuration failed: aq_err 1
[5.545870] i40e :00:06.0: DCB init failed -53, disabled
[5.547462] i40e :00:06.0: fcoe queues = 0
[5.548970] i40e :00:06.0: irq 43 for MSI/MSI-X
[5.548987] i40e :00:06.0: irq 44 for MSI/MSI-X
[5.549012] i40e :00:06.0: irq 45 for MSI/MSI-X
[5.549028] i40e :00:06.0: irq 46 for MSI/MSI-X
[5.549044] i40e :00:06.0: irq 47 for MSI/MSI-X
[5.549059] i40e :00:06.0: irq 48 for MSI/MSI-X
[5.549074] i40e :00:06.0: irq 49 for MSI/MSI-X
[5.549089] i40e :00:06.0: irq 50 for MSI/MSI-X
[5.549103] i40e :00:06.0: irq 51 for MSI/MSI-X
[5.549117] i40e :00:06.0: irq 52 for MSI/MSI-X
[5.549132] i40e :00:06.0: irq 53 for MSI/MSI-X
[5.549146] i40e :00:06.0: irq 54 for MSI/MSI-X
[5.549160] i40e :00:06.0: irq 55 for MSI/MSI-X
[5.549174] i40e :00:06.0: irq 56 for MSI/MSI-X
[5.579062] i40e :00:06.0: enabling bridge mode: VEB
[5.615344] i40e :00:06.0: PHC enabled
[5.636028] i40e :00:06.0: PCI-Express: Speed 8.0GT/s Width x8
[5.639822] audit: type=1305 audit(1426797692.463:4): audit_pid=345
old=0 auid=4294967295 ses=4294967295
subj=system_u:system_r:auditd_t:s0 res=1
[5.651225] i40e :00:06.0: Features: PF-id[0] VFs: 128 VSIs:
130 QP: 4 RX: 1BUF RSS FD_ATR FD_SB NTUPLE PTP
[   12.720451] SELinux: initialized (dev tmpfs, type tmpfs), uses
transition SIDs
[   15.909477] SELinux: initialized (dev tmpfs, type tmpfs), uses
transition SIDs
[   61.553491] i40e :00:06.0 eth2: NIC Link is Down
[   61.554132] i40e :00:06.0 eth2: the driver failed to link
because an unqualified module was detected. <<<<<<<<<<<<<<<<<<<<
[   61.555331] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready



With dpdk, see the following output in the VM:

testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 41328971   RX-dropped: 0 RX-total: 41328971
  TX-packets: 0  TX-dropped: 0 TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0  RX-dropped: 0 RX-total: 0
  TX-packets: 41328972   TX-dropped: 0 TX-total: 41328972
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 41328971   RX-dropped: 0 RX-total: 41328971
  TX-packets: 41328972   TX-dropped: 0 TX-total: 41328972
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Here it can be seen that one of the ports transmits just fine. I have
verified that it is not a card, PCI slot, or any such hardware issue.
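
Since the VM-side failure is specifically an "unqualified module" complaint, it is also worth dumping the module EEPROM with `ethtool -m` and checking the QSFP+ module against the NIC vendor's supported-module list. A small sketch of the filter step; the field names shown in the usage are typical `ethtool -m` output and the helper name is an assumption:

```shell
#!/bin/sh
# Sketch: reduce `ethtool -m <iface>` module-EEPROM output to the
# identifying fields needed to check the QSFP+ module against the
# vendor compatibility list.

module_fields() {
    # Reads ethtool -m output on stdin, keeps the identifying lines.
    grep -i -e 'identifier' -e 'vendor'
}

# Usage (wherever the interface is visible): ethtool -m eth2 | module_fields
```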

On Thu, Mar 19, 2015 at 4:15 AM, Stefan Assmann  wrote:
> On 18.03.2015 23:06, Shannon Nelson wrote:
>> On Wed, Mar 18, 2015 at 3:01 PM, Shannon Nelson
>>  wrote:
>>>
>>>
>>> On Wed, Mar 18, 2015 at 8:40 AM, jacob jacob  wrote:
>>>>
>>>> On Wed, Mar 18, 2015 at 11:24 AM, Bandan Das  wrote:
>>>>>
>>>>> Actually, Stefan suggests that support for this card is still sketchy
>>>>> and your best bet is to try out net-next
>>>>> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
>>>>>
>>>>> Also, could you please post more information about your hardware setup
>>>>> (chipset/processor/firmware version on the card etc) ?
>>>>
>>>> Host CPU : Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
>>>>
>>>> Manufacturer Part Number:  XL710QDA1BLK
>>>> Ethernet controller: Intel Corporation Ethernet Controller XL710 for
>>>> 40GbE QSFP+ (rev 01)

Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-19 Thread jacob jacob
Hi Stefan,
have you been able to get PCI passthrough working without any issues
after the upgrade?
Thanks
Jacob

On Thu, Mar 19, 2015 at 4:15 AM, Stefan Assmann  wrote:
> On 18.03.2015 23:06, Shannon Nelson wrote:
>> On Wed, Mar 18, 2015 at 3:01 PM, Shannon Nelson
>>  wrote:
>>>
>>>
>>> On Wed, Mar 18, 2015 at 8:40 AM, jacob jacob  wrote:
>>>>
>>>> On Wed, Mar 18, 2015 at 11:24 AM, Bandan Das  wrote:
>>>>>
>>>>> Actually, Stefan suggests that support for this card is still sketchy
>>>>> and your best bet is to try out net-next
>>>>> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
>>>>>
>>>>> Also, could you please post more information about your hardware setup
>>>>> (chipset/processor/firmware version on the card etc) ?
>>>>
>>>> Host CPU : Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
>>>>
>>>> Manufacturer Part Number:  XL710QDA1BLK
>>>> Ethernet controller: Intel Corporation Ethernet Controller XL710 for
>>>> 40GbE QSFP+ (rev 01)
>>>>  #ethtool -i enp9s0
>>>> driver: i40e
>>>> version: 1.2.6-k
>>>> firmware-version: f4.22 a1.1 n04.24 e800013fd
>>>> bus-info: :09:00.0
>>>> supports-statistics: yes
>>>> supports-test: yes
>>>> supports-eeprom-access: yes
>>>> supports-register-dump: yes
>>>> supports-priv-flags: no
>>>>
>>
>> Jacob,
>>
>> It looks like you're using a NIC with the e800013fd firmware from last
>> summer, and from a separate message that you saw these issues with
>> both the 1.2.2-k and the 1.2.37 version drivers.  I suggest the next
>> step would be to update the NIC firmware as there are some performance
>> and stability updates available that deal with similar issues.  Please
>> see the Intel Networking support webpage at
>> https://downloadcenter.intel.com/download/24769 and look for the
>> NVMUpdatePackage.zip.  This should take care of several of the things
>> Stefan might describe as "sketchy" :-).
>
> Interesting, the following might explain why my XL710 feels a bit
> sketchy then. ;-)
> # ethtool -i p4p1
> driver: i40e
> version: 1.2.37-k
> firmware-version: f4.22.26225 a1.1 n4.24 e12ef
> Looks like the firmware on this NIC is even older.
>
> I tried to update the firmware with nvmupdate64e and the first thing I
> noticed is that you cannot update the firmware even with todays linux
> git. The tool errors out because it cannot access the NVM. Only with a
> recent net-next kernel I was able to update the firmware.
> ethtool -i p4p1
> driver: i40e
> version: 1.2.37-k
> firmware-version: f4.33.31377 a1.2 n4.42 e1932
>
> However during the update I got a lot of errors in dmesg.
> [  301.796664] i40e :82:00.0: ARQ Error: Unknown event 0x0702 received
> [  301.893933] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [  302.005223] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [...]
> [  387.884635] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [  387.896862] i40e :82:00.0: ARQ Overflow Error detected
> [  387.902995] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [...]
> [  391.583799] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.714217] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.842656] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.973080] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.107586] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.244140] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.373966] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>
> Not sure if that flash was actually successful or not.
>
>   Stefan


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-19 Thread jacob jacob
I was going to try this on Fedora 21... now I am not very sure that makes
much sense.

On Thu, Mar 19, 2015 at 4:15 AM, Stefan Assmann  wrote:
> On 18.03.2015 23:06, Shannon Nelson wrote:
>> On Wed, Mar 18, 2015 at 3:01 PM, Shannon Nelson
>>  wrote:
>>>
>>>
>>> On Wed, Mar 18, 2015 at 8:40 AM, jacob jacob  wrote:
>>>>
>>>> On Wed, Mar 18, 2015 at 11:24 AM, Bandan Das  wrote:
>>>>>
>>>>> Actually, Stefan suggests that support for this card is still sketchy
>>>>> and your best bet is to try out net-next
>>>>> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
>>>>>
>>>>> Also, could you please post more information about your hardware setup
>>>>> (chipset/processor/firmware version on the card etc) ?
>>>>
>>>> Host CPU : Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
>>>>
>>>> Manufacturer Part Number:  XL710QDA1BLK
>>>> Ethernet controller: Intel Corporation Ethernet Controller XL710 for
>>>> 40GbE QSFP+ (rev 01)
>>>>  #ethtool -i enp9s0
>>>> driver: i40e
>>>> version: 1.2.6-k
>>>> firmware-version: f4.22 a1.1 n04.24 e800013fd
>>>> bus-info: :09:00.0
>>>> supports-statistics: yes
>>>> supports-test: yes
>>>> supports-eeprom-access: yes
>>>> supports-register-dump: yes
>>>> supports-priv-flags: no
>>>>
>>
>> Jacob,
>>
>> It looks like you're using a NIC with the e800013fd firmware from last
>> summer, and from a separate message that you saw these issues with
>> both the 1.2.2-k and the 1.2.37 version drivers.  I suggest the next
>> step would be to update the NIC firmware as there are some performance
>> and stability updates available that deal with similar issues.  Please
>> see the Intel Networking support webpage at
>> https://downloadcenter.intel.com/download/24769 and look for the
>> NVMUpdatePackage.zip.  This should take care of several of the things
>> Stefan might describe as "sketchy" :-).
>
> Interesting, the following might explain why my XL710 feels a bit
> sketchy then. ;-)
> # ethtool -i p4p1
> driver: i40e
> version: 1.2.37-k
> firmware-version: f4.22.26225 a1.1 n4.24 e12ef
> Looks like the firmware on this NIC is even older.
>
> I tried to update the firmware with nvmupdate64e and the first thing I
> noticed is that you cannot update the firmware even with todays linux
> git. The tool errors out because it cannot access the NVM. Only with a
> recent net-next kernel I was able to update the firmware.
> ethtool -i p4p1
> driver: i40e
> version: 1.2.37-k
> firmware-version: f4.33.31377 a1.2 n4.42 e1932
>
> However during the update I got a lot of errors in dmesg.
> [  301.796664] i40e :82:00.0: ARQ Error: Unknown event 0x0702 received
> [  301.893933] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [  302.005223] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [...]
> [  387.884635] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [  387.896862] i40e :82:00.0: ARQ Overflow Error detected
> [  387.902995] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
> [...]
> [  391.583799] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.714217] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.842656] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  391.973080] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.107586] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.244140] i40e :82:00.0: NVMUpdate write failed err=-53 status=0x0 
> errno=-16 module=70 offset=0x0 size=2
> [  392.373966] i40e :82:00.0: ARQ Error: Unknown event 0x0703 received
>
> Not sure if that flash was actually successful or not.
>
>   Stefan


Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-18 Thread jacob jacob
On Wed, Mar 18, 2015 at 11:24 AM, Bandan Das  wrote:
> [Ccing netdev and Stefan]
> Bandan Das  writes:
>
>> jacob jacob  writes:
>>
>>> On Mon, Mar 16, 2015 at 2:12 PM, Bandan Das  wrote:
>>>> jacob jacob  writes:
>>>>
>>>>> I also see the following in dmesg in the VM.
>>>>>
>>>>> [0.095758] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-ff])
>>>>> [0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
>>>>> disabling PCIe ASPM
>>>>> [0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
>>>>> support mask: 0x08)
>>>> IIRC, For OSC control, after BIOS is done with (whatever initialization
>>>> it needs to do), it clears a bit so that the OS can take over. This 
>>>> message,
>>>> you are getting is a sign of a bug in the BIOS (usually). But I don't
>>>> know if this is related to your problem. Does "dmesg | grep -e DMAR -e 
>>>> IOMMU"
>>>> give anything useful ?
>>>
>>> Do not see anything useful in the output..
>>
>> Ok, Thanks. Can you please post the output as well ?
>>
>>>>> [0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
>>>>> can't access extended PCI configuration space under this bridge.
>>>>>
>>>>> Does this indicate any issue related to PCI passthrough?
>>>>>
>>>>> Would really appreciate any input on how to debug this further.
>>>>
>>>> Did you get a chance to try a newer kernel ?
>>> Currently am using 3.18.7-200.fc21.x86_64 which is pretty recent.
>>> Are you suggesting trying the newer kernel just on the host? (or VM too?)
>> Both preferably to 3.19. But it's just a wild guess. I saw i40e related 
>> fixes,
>> particularly "i40e: fix un-necessary Tx hangs" in 3.19-rc5. This is not 
>> exactly
>> what you are seeing but I was still wondering if it could help.
>
> Actually, Stefan suggests that support for this card is still sketchy
> and your best bet is to try out net-next
> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git
>
> Also, could you please post more information about your hardware setup
> (chipset/processor/firmware version on the card etc) ?

Host CPU : Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz

Manufacturer Part Number:  XL710QDA1BLK
Ethernet controller: Intel Corporation Ethernet Controller XL710 for
40GbE QSFP+ (rev 01)
 #ethtool -i enp9s0
driver: i40e
version: 1.2.6-k
firmware-version: f4.22 a1.1 n04.24 e800013fd
bus-info: :09:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no


> Thanks,
> Bandan
>
>> Meanwhile, I am trying to get hold of a card myself to try and reproduce
>> it at my end.
>>
>> Thanks,
>> Bandan
>>
>>>>> On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob  wrote:
>>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>>> driver. Just to rule out the possibility that there might be some 
>>>>>>> driver fixes that
>>>>>>> could help with this, it might be a good idea to try a 3.19 or later 
>>>>>>> upstream
>>>>>>> kernel.
>>>>>>>
>>>>>>
>>>>>> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same 
>>>>>> issue.
>>>>>> As mentioned earlier, i do not see any issues at all when running
>>>>>> tests using either i40e or dpdk on the host itself.
>>>>>> This is the reason why i am suspecting if it is anything to do with 
>>>>>> KVM/libvirt.
>>>>>> Both with regular PCI passthrough and VF passthrough i see issues. It
>>>>>> is always pointing to some issue with packet transmission. Receive
>>>>>> seems to work ok.
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das  wrote:
>>>>>>> jacob jacob  writes:
>>>>>>>
>>>>>>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
>>>>>>>>> jacob jacob  writes:
>>>>>>>>>
>>>>>>>>>>  Hi,
>>>>>>>>>>
>>>>>>>>>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40

Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-16 Thread jacob jacob
On Mon, Mar 16, 2015 at 3:49 PM, Bandan Das  wrote:
> jacob jacob  writes:
>
>> On Mon, Mar 16, 2015 at 2:12 PM, Bandan Das  wrote:
>>> jacob jacob  writes:
>>>
>>>> I also see the following in dmesg in the VM.
>>>>
>>>> [0.095758] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-ff])
>>>> [0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
>>>> disabling PCIe ASPM
>>>> [0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
>>>> support mask: 0x08)
>>> IIRC, For OSC control, after BIOS is done with (whatever initialization
>>> it needs to do), it clears a bit so that the OS can take over. This message,
>>> you are getting is a sign of a bug in the BIOS (usually). But I don't
>>> know if this is related to your problem. Does "dmesg | grep -e DMAR -e 
>>> IOMMU"
>>> give anything useful ?
>>
>> Do not see anything useful in the output..
>
> Ok, Thanks. Can you please post the output as well ?
>
dmesg | grep -e DMAR -e IOMMU
[0.00] ACPI: DMAR 0xBDF8B818 000160 (v01 INTEL
S2600GL  06222004 INTL 20090903)
[0.00] Intel-IOMMU: enabled
[0.168227] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap
d2078c106f0466 ecap f020df
[0.169529] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap
d2078c106f0466 ecap f020df
[0.171409] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
[0.171865] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
[0.172319] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
[3.433119] IOMMU 0 0xfbffe000: using Queued invalidation
[3.433611] IOMMU 1 0xebffc000: using Queued invalidation
[3.434170] IOMMU: hardware identity mapping for device :00:00.0
[3.434664] IOMMU: hardware identity mapping for device :00:01.0
[3.435175] IOMMU: hardware identity mapping for device :00:01.1
.
.
[3.500268] IOMMU: Setting RMRR:
[3.502559] IOMMU: Prepare 0-16MiB unity mapping for LPC
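
Given that DMAR/IOMMU output, it is also worth checking which other devices share an IOMMU group with the NIC, since every device in a group has to be assigned together. A sketch of the listing (the sysfs root is parameterized only so the loop can be exercised off-box; on the host it is /sys/kernel/iommu_groups):

```shell
#!/bin/sh
# Sketch: list every device per IOMMU group; a NIC that shares its
# group with other functions cannot be passed through in isolation.
GROUPS_DIR="${GROUPS_DIR:-/sys/kernel/iommu_groups}"

list_iommu_groups() {
    for dev in "$GROUPS_DIR"/*/devices/*; do
        [ -e "$dev" ] || continue
        group="${dev%/devices/*}"       # .../iommu_groups/<N>
        printf 'group %s: %s\n' "${group##*/}" "${dev##*/}"
    done
}

# Usage (on the host): list_iommu_groups
```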


>>>> [0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
>>>> can't access extended PCI configuration space under this bridge.
>>>>
>>>> Does this indicate any issue related to PCI passthrough?
>>>>
>>>> Would really appreciate any input on how to debug this further.
>>>
>>> Did you get a chance to try a newer kernel ?
>> Currently am using 3.18.7-200.fc21.x86_64 which is pretty recent.
>> Are you suggesting trying the newer kernel just on the host? (or VM too?)
> Both preferably to 3.19. But it's just a wild guess. I saw i40e related fixes,
> particularly "i40e: fix un-necessary Tx hangs" in 3.19-rc5. This is not 
> exactly
> what you are seeing but I was still wondering if it could help.
>
> Meanwhile, I am trying to get hold of a card myself to try and reproduce
> it at my end.

Thanks. Please let me know if there is anything else that I could try out.
Since the NIC works just fine on the host, doesn't that rule out any
i40e driver-related issue?

>
> Thanks,
> Bandan
>
>>>> On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob  wrote:
>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>> driver. Just to rule out the possibility that there might be some driver 
>>>>>> fixes that
>>>>>> could help with this, it might be a good idea to try a 3.19 or later 
>>>>>> upstream
>>>>>> kernel.
>>>>>>
>>>>>
>>>>> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same 
>>>>> issue.
>>>>> As mentioned earlier, i do not see any issues at all when running
>>>>> tests using either i40e or dpdk on the host itself.
>>>>> This is the reason why i am suspecting if it is anything to do with 
>>>>> KVM/libvirt.
>>>>> Both with regular PCI passthrough and VF passthrough i see issues. It
>>>>> is always pointing to some issue with packet transmission. Receive
>>>>> seems to work ok.
>>>>>
>>>>>
>>>>> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das  wrote:
>>>>>> jacob jacob  writes:
>>>>>>
>>>>>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
>>>>>>>> jacob jacob  writes:
>>>>>>>>
>>>>>>>>>  Hi,
>>>>>>>>>
>>>>>>>>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>>>>>>>>> interface to KVM vm.

Re: [Qemu-devel] PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-16 Thread jacob jacob
On Mon, Mar 16, 2015 at 2:12 PM, Bandan Das  wrote:
> jacob jacob  writes:
>
>> I also see the following in dmesg in the VM.
>>
>> [0.095758] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-ff])
>> [0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
>> disabling PCIe ASPM
>> [0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
>> support mask: 0x08)
> IIRC, For OSC control, after BIOS is done with (whatever initialization
> it needs to do), it clears a bit so that the OS can take over. This message,
> you are getting is a sign of a bug in the BIOS (usually). But I don't
> know if this is related to your problem. Does "dmesg | grep -e DMAR -e IOMMU"
> give anything useful ?

I do not see anything useful in the output.

>> [0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
>> can't access extended PCI configuration space under this bridge.
>>
>> Does this indicate any issue related to PCI passthrough?
>>
>> Would really appreciate any input on how to debug this further.
>
> Did you get a chance to try a newer kernel ?
Currently I am using 3.18.7-200.fc21.x86_64, which is pretty recent.
Are you suggesting trying the newer kernel just on the host? (Or the VM too?)

>> On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob  wrote:
>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>> driver. Just to rule out the possibility that there might be some driver 
>>>> fixes that
>>>> could help with this, it might be a good idea to try a 3.19 or later 
>>>> upstream
>>>> kernel.
>>>>
>>>
>>> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same 
>>> issue.
>>> As mentioned earlier, i do not see any issues at all when running
>>> tests using either i40e or dpdk on the host itself.
>>> This is the reason why i am suspecting if it is anything to do with 
>>> KVM/libvirt.
>>> Both with regular PCI passthrough and VF passthrough i see issues. It
>>> is always pointing to some issue with packet transmission. Receive
>>> seems to work ok.
>>>
>>>
>>> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das  wrote:
>>>> jacob jacob  writes:
>>>>
>>>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
>>>>>> jacob jacob  writes:
>>>>>>
>>>>>>>  Hi,
>>>>>>>
>>>>>>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>>>>>>> interface to KVM vm.
>>>>>>>  0a:00.1 Ethernet controller: Intel Corporation Ethernet
>>>>>>> Controller XL710 for 40GbE QSFP+ (rev 01)
>>>>>>
>>>>>> You are assigning the PF right ? Does assigning VFs work or it's
>>>>>> the same behavior ?
>>>>>
>>>>> Yes. Assigning VFs worked OK, but this had other issues while bringing
>>>>> down VMs.
>>>>> Interested in finding out if PCI passthrough of 40G intel XL710
>>>>> interface is qualified in some specific kernel/kvm release.
>>>>
>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>> driver. Just to rule out the possibility that there might be some driver 
>>>> fixes that
>>>> could help with this, it might be a good idea to try a 3.19 or later 
>>>> upstream
>>>> kernel.
>>>>
>>>>>>> From dmesg on host:
>>>>>>>
>>>>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>>>>
>>>>>> These are harmless and are related to unimplemented PMU msrs,
>>>>>> not VFIO.
>>>>>>
>>>>>> Bandan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-16 Thread jacob jacob
I also see the following in dmesg in the VM.

[0.095758] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
disabling PCIe ASPM
[0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
support mask: 0x08)
[0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
can't access extended PCI configuration space under this bridge.

Does this indicate any issue related to PCI passthrough?

Would really appreciate any input on how to debug this further.

On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob  wrote:
>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>> driver. Just to rule out the possibility that there might be some driver 
>> fixes that
>> could help with this, it might be a good idea to try a 3.19 or later upstream
>> kernel.
>>
>
> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same issue.
> As mentioned earlier, i do not see any issues at all when running
> tests using either i40e or dpdk on the host itself.
> This is the reason why i am suspecting if it is anything to do with 
> KVM/libvirt.
> Both with regular PCI passthrough and VF passthrough i see issues. It
> is always pointing to some issue with packet transmission. Receive
> seems to work ok.
>
>
> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das  wrote:
>> jacob jacob  writes:
>>
>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
>>>> jacob jacob  writes:
>>>>
>>>>>  Hi,
>>>>>
>>>>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>>>>> interface to KVM vm.
>>>>>  0a:00.1 Ethernet controller: Intel Corporation Ethernet
>>>>> Controller XL710 for 40GbE QSFP+ (rev 01)
>>>>
>>>> You are assigning the PF right ? Does assigning VFs work or it's
>>>> the same behavior ?
>>>
>>> Yes. Assigning VFs worked OK, but this had other issues while bringing
>>> down VMs.
>>> Interested in finding out if PCI passthrough of 40G intel XL710
>>> interface is qualified in some specific kernel/kvm release.
>>
>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>> driver. Just to rule out the possibility that there might be some driver 
>> fixes that
>> could help with this, it might be a good idea to try a 3.19 or later upstream
>> kernel.
>>
>>>>> From dmesg on host:
>>>>>
>>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>>
>>>> These are harmless and are related to unimplemented PMU msrs,
>>>> not VFIO.
>>>>
>>>> Bandan


Re: PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-13 Thread jacob jacob
> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
> driver. Just to rule out the possibility that there might be some driver 
> fixes that
> could help with this, it might be a good idea to try a 3.19 or later upstream
> kernel.
>

I tried with the latest DPDK release too (dpdk-1.8.0) and see the same issue.
As mentioned earlier, I do not see any issues at all when running
tests using either i40e or DPDK on the host itself.
This is why I suspect it has something to do with KVM/libvirt.
I see issues with both regular PCI passthrough and VF passthrough, and
they always point to a problem with packet transmission. Receive
seems to work OK.
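
One way to confirm the Tx-only nature of the failure from inside the guest is to watch the interface counters while offering load. A rough sketch; the interface name and the `tx_stalled` helper are assumptions for illustration:

```shell
# tx_stalled: decide from two tx_packets samples whether transmit has stalled.
# Pure arithmetic, so it can be checked without a NIC.
tx_stalled() {   # $1 = earlier tx_packets count, $2 = later count
    [ "$2" -le "$1" ]
}

# In the guest you would sample the standard sysfs statistics, e.g.:
#   t0=$(cat /sys/class/net/eth1/statistics/tx_packets); sleep 5
#   t1=$(cat /sys/class/net/eth1/statistics/tx_packets)
#   tx_stalled "$t0" "$t1" && echo "eth1 transmit appears stalled"
#   ethtool -t eth1    # the NIC advertises supports-test: yes
```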


On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das  wrote:
> jacob jacob  writes:
>
>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
>>> jacob jacob  writes:
>>>
>>>>  Hi,
>>>>
>>>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>>>> interface to KVM vm.
>>>>  0a:00.1 Ethernet controller: Intel Corporation Ethernet
>>>> Controller XL710 for 40GbE QSFP+ (rev 01)
>>>
>>> You are assigning the PF right ? Does assigning VFs work or it's
>>> the same behavior ?
>>
>> Yes. Assigning VFs worked OK, but this had other issues while bringing
>> down VMs.
>> Interested in finding out if PCI passthrough of 40G intel XL710
>> interface is qualified in some specific kernel/kvm release.
>
> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
> driver. Just to rule out the possibility that there might be some driver 
> fixes that
> could help with this, it might be a good idea to try a 3.19 or later upstream
> kernel.
>
>>>> From dmesg on host:
>>>>
>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>
>>> These are harmless and are related to unimplemented PMU msrs,
>>> not VFIO.
>>>
>>> Bandan


Re: PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-12 Thread jacob jacob
On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das  wrote:
> jacob jacob  writes:
>
>>  Hi,
>>
>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>> interface to KVM vm.
>>  0a:00.1 Ethernet controller: Intel Corporation Ethernet
>> Controller XL710 for 40GbE QSFP+ (rev 01)
>
> You are assigning the PF, right? Does assigning VFs work, or is it
> the same behavior?

Yes. Assigning VFs worked OK, but this had other issues while bringing down VMs.
Interested in finding out if PCI passthrough of 40G intel XL710
interface is qualified in some specific kernel/kvm release.
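
Since VF assignment at least partially worked, the usual sysfs flow for creating and tearing down VFs can be sketched as below. The PF address matches the lspci output in this thread, but the `request_vfs` helper and the VF counts are assumptions for illustration:

```shell
# request_vfs: print the VF count to request, capped at what the PF supports.
# Pure number logic, so it can be exercised without hardware.
request_vfs() {   # $1 = wanted VFs, $2 = sriov_totalvfs reported by the PF
    if [ "$1" -gt "$2" ]; then echo "$2"; else echo "$1"; fi
}

# On the host (PF address assumed from the lspci output above):
#   PF=/sys/bus/pci/devices/0000:0a:00.1
#   total=$(cat "$PF/sriov_totalvfs")
#   request_vfs 4 "$total" > "$PF/sriov_numvfs"   # create up to 4 VFs
#   echo 0 > "$PF/sriov_numvfs"                   # tear down before VM shutdown
```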
>
>> From dmesg on host:
>>
>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>
> These are harmless and are related to unimplemented PMU msrs,
> not VFIO.
>
> Bandan


Re: PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-12 Thread jacob jacob
Hi Alex,
Thanks for the response.

I tried both pci-assign and vfio-pci; the issue is seen in both cases.
The i40e driver complains about a data Tx timeout.

# libvirtd --version
libvirtd (libvirt) 1.2.9.2

Name    : qemu-system-x86
Arch    : x86_64
Epoch   : 2
Version : 2.1.3
Release : 2.fc21

Kernel  : 3.18.7-200.fc21.x86_64


Rgds
Jacob
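
For anyone reproducing the vfio-pci case, the manual bind flow can be sketched as below. The device address matches the lspci output earlier in the thread, but the `8086 1583` ID and the `unbind_path` helper are assumptions; confirm the ID with `lspci -n`:

```shell
# unbind_path: build the sysfs unbind path for a PCI device address.
# Pure string logic; the real bind steps are shown as comments below.
unbind_path() { printf '/sys/bus/pci/devices/%s/driver/unbind' "$1"; }

# On the host (IDs assumed; confirm with `lspci -n -s 0a:00.1`):
#   DEV=0000:0a:00.1
#   modprobe vfio-pci
#   echo "$DEV" > "$(unbind_path "$DEV")"                    # detach i40e
#   echo "8086 1583" > /sys/bus/pci/drivers/vfio-pci/new_id  # attach vfio-pci
#   readlink /sys/bus/pci/devices/$DEV/driver                # ends in vfio-pci
```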

On Thu, Mar 12, 2015 at 12:26 PM, Alex Williamson
 wrote:
> On Thu, 2015-03-12 at 12:17 -0400, jacob jacob wrote:
>>  Hi,
>>
>>  Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>> interface to KVM vm.
>
> How is the device being assigned, pci-assign or vfio-pci?  What QEMU
> version?  What host kernel version?  Thanks,
>
> Alex
>


PCI passthrough of 40G ethernet interface (Openstack/KVM)

2015-03-12 Thread jacob jacob
 Hi,

 Seeing failures when trying to do PCI passthrough of Intel XL710 40G
interface to KVM vm.
 0a:00.1 Ethernet controller: Intel Corporation Ethernet
Controller XL710 for 40GbE QSFP+ (rev 01)

From dmesg on host:

> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>
 The PCI device is still visible in the VM, but data transfer fails.

 With the i40e driver, the data transfer fails.
 Relevant dmesg output:

>  [   11.544088] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, 
> Flow Control: None
> [   11.689178] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex, 
> Flow Control: None
> [   16.704071] [ cut here ]
> [   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303 
> dev_watchdog+0x23e/0x250()
> [   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
> [   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm ppdev 
> serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon crct10dif_pclmul 
> pps_core parport pvpanic crc32_pclmul ghash_clmulni_intel virtio_blk 
> crc32c_intel virtio_pci virtio_ring virtio ata_generic pata_acpi
> [   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 
> 3.18.7-200.fc21.x86_64 #1
> [   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS 
> 1.7.5-20140709_153950- 04/01/2014
> [   16.705053]   2e5932b294d0c473 88043fc83d48 
> 8175e686
> [   16.705053]   88043fc83da0 88043fc83d88 
> 810991d1
> [   16.705053]  88042958f5c0 0001 88042865f000 
> 0001
> [   16.705053] Call Trace:
> [   16.705053][] dump_stack+0x46/0x58
> [   16.705053]  [] warn_slowpath_common+0x81/0xa0
> [   16.705053]  [] warn_slowpath_fmt+0x55/0x70
> [   16.705053]  [] dev_watchdog+0x23e/0x250
> [   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
> [   16.705053]  [] call_timer_fn+0x3a/0x120
> [   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
> [   16.705053]  [] run_timer_softirq+0x212/0x2f0
> [   16.705053]  [] __do_softirq+0x124/0x2d0
> [   16.705053]  [] irq_exit+0x125/0x130
> [   16.705053]  [] smp_apic_timer_interrupt+0x48/0x60
> [   16.705053]  [] apic_timer_interrupt+0x6d/0x80
> [   16.705053][] ? hrtimer_start+0x18/0x20
> [   16.705053]  [] ? native_safe_halt+0x6/0x10
> [   16.705053]  [] ? rcu_eqs_enter+0xa3/0xb0
> [   16.705053]  [] default_idle+0x1f/0xc0
> [   16.705053]  [] arch_cpu_idle+0xf/0x20
> [   16.705053]  [] cpu_startup_entry+0x3c5/0x410
> [   16.705053]  [] start_secondary+0x1af/0x1f0
> [   16.705053] ---[ end trace 7bda53aeda558267 ]---
> [   16.705053] i40e :00:05.0 eth1: tx_timeout recovery level 1
> [   16.705053] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring 0 
> disable timeout
> [   16.744198] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring 
> 64 disable timeout
> [   16.779322] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
> [   16.791819] i40e :00:05.0: PF 40 attempted to control timestamp mode 
> on port 1, which is owned by PF 1
> [   16.933869] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, 
> Flow Control: None
> [   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses transition 
> SIDs
> [   22.720083] i40e :00:05.0 eth1: tx_timeout recovery level 2
> [   22.826993] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring 0 
> disable timeout
> [   22.935288] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring 
> 64 disable timeout
> [   23.669555] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
> [   23.682067] i40e :00:05.0: PF 40 attempted to control timestamp mode 
> on port 1, which is owned by PF 1
> [   23.722423] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, 
> Flow Control: None
> [   23.800206] i40e :00:06.0: i40e_ptp_init: added PHC on eth2
> [   23.813804] i40e :00:06.0: PF 48 attempted to control timestamp mode 
> on port 0, which is owned by PF 0
> [   23.855275] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex, 
> Flow Control: None
> [   38.720091] i40e :00:05.0 eth1: tx_timeout recovery level 3
> [   38.725844] random: nonblocking pool is initialized
> [   38.729874] i40e :00:06.0: HMC error interrupt
> [   38.733425] i40e :00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx ring 0 
> disable timeout
> [   38.738886] i40e :00:06.0: i40e_vsi_control_tx: VSI seid 521 Tx ring 
> 64 disable timeout
> [   39.689569] i40e :00:06.0: i40e_ptp_init: added PHC on eth2
> [   39.704197] i40e :00:06.0: PF 48 attempted to control timestamp mode 
> on port 0, which is owned by PF 0
> [   39.746879] i40e :00:06.0