Re: [ovirt-devel] SR-IOV feature
- Original Message -
From: Dan Kenigsberg dan...@redhat.com
To: Alona Kaplan alkap...@redhat.com, bazu...@redhat.com
Cc: Itamar Heim ih...@redhat.com, Eldan Hildesheim ehild...@redhat.com, Nir Yechiel nyech...@redhat.com, devel@ovirt.org
Sent: Thursday, October 30, 2014 7:47:31 PM
Subject: Re: [ovirt-devel] SR-IOV feature

On Sun, Oct 26, 2014 at 06:39:00AM -0400, Alona Kaplan wrote:

On 10/05/2014 07:02 AM, Alona Kaplan wrote:

Hi all,

Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to the oVirt management system (including migration). You are more than welcome to review the feature page: http://www.ovirt.org/Feature/SR-IOV

Thanks,
Alona.

Glad to see this. Some questions:

Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.

I did not understand this sentence - are you hinting at macvtap?

Most likely macvtap, yes. Additionally, I think Martin Poledník is looking into direct SR-IOV attachment to VMs as part of the PCI passthrough work he is doing.

add/edit profile

So I gather the implementation is at the profile level, which is at the logical network level? How does this work exactly? Can this logical network be vlan tagged, or must it be native? If vlan tagged, who does the tagging for the passthrough device? (I see later on that vf_vlan is one of the parameters to vdsm - just wondering how the mapping can be at the host level if this is a passthrough device.) Is this because of the use of virtio (macvtap)?

The logical network can be vlan tagged. As you mentioned, vf_vlan is one of the parameters passed to vdsm (on the create verb). Setting the vlan on the VF is done as follows:

ip link set {DEVICE} vf {NUM} [ vlan VLANID ]

It is written in the notes section. It is not related to the use of virtio.
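To make the quoted verb concrete, here is a minimal sketch of how a VLAN could be set on a VF. The PF name, VF index, and VLAN ID below are made-up examples (not from this thread), and since `ip link set ... vf ...` needs root privileges and SR-IOV-capable hardware, the helper only composes the command instead of executing it:

```shell
# Sketch only: compose the "ip link set <PF> vf <NUM> vlan <VLANID>" command.
# enp4s0f0, VF index 3 and VLAN 100 are placeholder example values.
set_vf_vlan() {
  local pf=$1 vf=$2 vlan=$3
  # A real deployment would run this as root on the SR-IOV host; we echo
  # the command so the syntax can be inspected without hardware.
  echo ip link set "$pf" vf "$vf" vlan "$vlan"
}

set_vf_vlan enp4s0f0 3 100
# prints: ip link set enp4s0f0 vf 3 vlan 100
```

In a deployment the echoed command would be executed on the host by vdsm, with the VLAN ID taken from the network attached to the profile.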
The vlan can be set on the VF whether it is connected to the VM via macvtap or directly.

Are you sure about this? I think that when a host device is attached to a VM, it disappears from the host, and the guest can send arbitrary unmodified packets through the wire. But I may well be wrong.

I think you are correct for the case of MTU (that's why I added it as an open issue: is applying MTU on a VF supported by libvirt?). But as I understand from the documentation (although I didn't test it myself), that is the purpose of

ip link set {DEVICE} vf {NUM} vlan VLANID

The documentation says: all traffic sent from the VF will be tagged with the specified VLAN ID. Incoming traffic will be filtered for the specified VLAN ID, and will have all VLAN tags stripped before being passed to the VF.

Note: it is also supported by libvirt, as you can read in
http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Virtualization_Deployment_and_Administration_Guide/sub-sub-section-libvirt-dom-xml-devices-setting-vlan-tag.html
"type='hostdev' SR-IOV interfaces do support transparent vlan tagging of guest traffic."

Wouldn't it be better to support both macvtap and passthrough and just flag the VM as non-migratable in that case?

Martin Polednik is working on pci-passthrough:
http://www.ovirt.org/Features/hostdev_passthrough
Maybe we should wait for his feature to be ready and then combine it with the SR-IOV feature. As I see in his feature page, he plans to attach a specific device directly to the VM. We can combine his feature with the SR-IOV feature:

1. The network profile will have a type property:
   - bridge (the regular configuration we have today: vnic - tap - bridge - physical nic)
   - virtio (in the current feature design it is called passthrough: vnic - macvtap - VF)
   - pci-passthrough (vnic - VF)

2. Attaching a network profile with pci-passthrough type to a vnic will mark the VM as non-migratable. This marking can be tuned by the admin.
If the admin requests migration despite the pci-passthrough type, Vdsm can auto-unplug the PCI device before migration, and plug it back on the destination. That would allow some kind of migration for guests that are willing to see a PCI device disappear and re-appear.

Added it as an open issue to the feature page.

3. When running a VM with a pci-passthrough vnic, a free VF will be attached to the VM with the vlan and MTU configuration of the profile/network (same as for the virtio profile, as described in the feature page). The benefit of it is that the user won't have to choose the VF directly, and will be able to set vlan and MTU on the VF.

also (and doesn't have
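For reference, the libvirt form of the transparent vlan tagging cited above looks roughly like the fragment below. This is a sketch based on the linked documentation, not part of the thread; the PCI address, MAC address, and VLAN ID are placeholder examples:

```xml
<interface type='hostdev' managed='yes'>
  <!-- Example VF address; real values come from the host's PCI topology -->
  <source>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
  </source>
  <mac address='52:54:00:6d:90:02'/>
  <!-- All guest traffic through this VF is tagged with VLAN 100 by the NIC -->
  <vlan>
    <tag id='100'/>
  </vlan>
</interface>
```

Because the NIC itself applies the tag, this works even when the VF is attached directly to the guest, which is what the documentation quote above describes.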
Re: [ovirt-devel] SR-IOV feature
On Sun, Oct 26, 2014 at 06:39:00AM -0400, Alona Kaplan wrote:

On 10/05/2014 07:02 AM, Alona Kaplan wrote:

Hi all,

Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to the oVirt management system (including migration). You are more than welcome to review the feature page: http://www.ovirt.org/Feature/SR-IOV

Thanks,
Alona.

Glad to see this. Some questions:

Note: this feature is about exposing a virtualized (or VirtIO) vNic to the guest, and not about exposing the PCI device to it. This restriction is necessary for migration to be supported.

I did not understand this sentence - are you hinting at macvtap?

Most likely macvtap, yes. Additionally, I think Martin Poledník is looking into direct SR-IOV attachment to VMs as part of the PCI passthrough work he is doing.

add/edit profile

So I gather the implementation is at the profile level, which is at the logical network level? How does this work exactly? Can this logical network be vlan tagged, or must it be native? If vlan tagged, who does the tagging for the passthrough device? (I see later on that vf_vlan is one of the parameters to vdsm - just wondering how the mapping can be at the host level if this is a passthrough device.) Is this because of the use of virtio (macvtap)?

The logical network can be vlan tagged. As you mentioned, vf_vlan is one of the parameters passed to vdsm (on the create verb). Setting the vlan on the VF is done as follows:

ip link set {DEVICE} vf {NUM} [ vlan VLANID ]

It is written in the notes section. It is not related to the use of virtio.

The vlan can be set on the VF whether it is connected to the VM via macvtap or directly.

Are you sure about this? I think that when a host device is attached to a VM, it disappears from the host, and the guest can send arbitrary unmodified packets through the wire. But I may well be wrong.
Wouldn't it be better to support both macvtap and passthrough and just flag the VM as non-migratable in that case?

Martin Polednik is working on pci-passthrough:
http://www.ovirt.org/Features/hostdev_passthrough
Maybe we should wait for his feature to be ready and then combine it with the SR-IOV feature. As I see in his feature page, he plans to attach a specific device directly to the VM. We can combine his feature with the SR-IOV feature:

1. The network profile will have a type property:
   - bridge (the regular configuration we have today: vnic - tap - bridge - physical nic)
   - virtio (in the current feature design it is called passthrough: vnic - macvtap - VF)
   - pci-passthrough (vnic - VF)

2. Attaching a network profile with pci-passthrough type to a vnic will mark the VM as non-migratable. This marking can be tuned by the admin.

If the admin requests migration despite the pci-passthrough type, Vdsm can auto-unplug the PCI device before migration, and plug it back on the destination. That would allow some kind of migration for guests that are willing to see a PCI device disappear and re-appear.

3. When running a VM with a pci-passthrough vnic, a free VF will be attached to the VM with the vlan and MTU configuration of the profile/network (same as for the virtio profile, as described in the feature page). The benefit of it is that the user won't have to choose the VF directly, and will be able to set vlan and MTU on the VF.

Also (and this doesn't have to be in the first phase): what happens if I run out of hosts with SR-IOV (or they failed)? Can I fall back to a non-pci-passthrough profile for backup? (A policy question at the VM level: is it more important to have SR-IOV, or more important that the VM runs even without it since it provides a critical service, with a [scheduling] preference to run on SR-IOV?) (Oh, I see this is in the futures section already. :)

A benefit of this nice-to-have passthrough is that one could set it on vNic profiles that are already used by VMs.
Once they are migrated to a new host, the passthrough-ness request would take effect.

management, display and migration properties are not relevant for the VFs configuration

Just wondering: is there any technical reason we can't put the management network on a VF (not saying it's a priority to do so)?

Today we mark the logical network with a role (management/display/migration) when attaching it to the cluster. A logical network can be attached to one physical nic (PF). We can't use the current attachment of a role for SR-IOV, since the network can be configured as VF-allowed on more than one nic (maybe even on all the nics). If the network is VF-allowed on the nic, a vnic with this network can be attached to a free VF on the nic. So we can't use the logical network
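The "VF-allowed on more than one nic" bookkeeping above ultimately maps to the VFs the kernel exposes per PF. As a hedged sketch (not from the thread), a host's VFs can be enumerated through the kernel's SR-IOV sysfs entries; the interface name below is an example, and the helper degrades gracefully on hosts without SR-IOV:

```shell
# Sketch: list the VFs the kernel exposes for a given PF via sysfs.
# Uses the standard SR-IOV sysfs entries (sriov_numvfs, sriov_totalvfs,
# virtfn* symlinks); "enp4s0f0" is an example interface name.
list_vfs() {
  local pf=$1 dev=/sys/class/net/$1/device
  if [ ! -e "$dev/sriov_totalvfs" ]; then
    # PF missing or not SR-IOV capable
    echo "$pf: no SR-IOV support"
    return 0
  fi
  echo "$pf: $(cat "$dev/sriov_numvfs")/$(cat "$dev/sriov_totalvfs") VFs enabled"
  for vf in "$dev"/virtfn*; do
    # Each virtfnN symlink points at the VF's PCI device, e.g. 0000:04:10.0
    [ -e "$vf" ] && echo "  $(basename "$vf") -> $(basename "$(readlink -f "$vf")")"
  done
  return 0
}

list_vfs enp4s0f0
```

A management layer could use such an enumeration to count free VFs per PF when deciding where a pci-passthrough vnic can be scheduled.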
[ovirt-devel] SR-IOV feature
Hi all,

Currently SR-IOV in oVirt is only supported using vdsm-hook [1]. This feature will add SR-IOV support to the oVirt management system (including migration). You are more than welcome to review the feature page: http://www.ovirt.org/Feature/SR-IOV

Thanks,
Alona.

_______________________________________________
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel