Re: ipv6 can not work for direct type interface

2023-08-06 Thread Yalan Zhang
Hi there,

I finally found the answer via a Google search; see
https://blog.flyingpenguintech.org/2017/12/ipv6-with-macvtap-and-libvirt.html
This is expected behavior: with trustGuestRxFilters="yes", IPv6 works well.

# virsh dumpxml vm1  --xpath //interface
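A configuration of the kind described — direct type, bridge mode, with trustGuestRxFilters enabled — looks roughly like this (MAC and device names are illustrative, not from the original output):

```xml
<interface type='direct' trustGuestRxFilters='yes'>
  <mac address='52:54:00:11:22:33'/>
  <source dev='eno1' mode='bridge'/>
  <model type='virtio'/>
</interface>
```

trustGuestRxFilters='yes' lets libvirt honor the guest's requests to update the macvtap RX filters (e.g. joining multicast groups), which IPv6 neighbor discovery relies on.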

  
  
  
  
  
  




BR,
Yalan

On Tue, Jul 11, 2023 at 9:48 AM Yalan Zhang  wrote:

> Hi there,
>
> I have a question regarding direct type interfaces. Would someone be able
> to take a look at it?
> When I start 2 VMs on the same host with interface "direct type + bridge
> mode", just as below:
>  
>   
>   
>   
> 
>
> The two VMs can reach each other over IPv4, but cannot reach each other
> over IPv6.
> It may be related to some kernel parameters, but I don't know how to
> debug it.
> Is there anyone who can help me?
> Thank you!
>
>
> BR,
> Yalan
>


ipv6 can not work for direct type interface

2023-07-10 Thread Yalan Zhang
Hi there,

I have a question regarding direct type interfaces. Would someone be able
to take a look at it?
When I start 2 VMs on the same host with interface "direct type + bridge
mode", just as below:
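The interface configuration in question would look roughly like this (device name illustrative):

```xml
<interface type='direct'>
  <mac address='52:54:00:aa:bb:01'/>
  <source dev='eno1' mode='bridge'/>
  <model type='virtio'/>
</interface>
```

In bridge mode, macvtap endpoints on the same physical device are allowed to communicate with each other directly.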
 
  
  
  


The two VMs can reach each other over IPv4, but cannot reach each other
over IPv6.
It may be related to some kernel parameters, but I don't know how to
debug it.
Is there anyone who can help me?
Thank you!


BR,
Yalan


Re: SR-IOV pool with static MAC address and vlan

2023-04-03 Thread Yalan Zhang
Hi Paul,

You can set the static MAC address and VLAN in the interface section of the
VM XML, so they will persist even when the VM shuts down and restarts.
VM interface XML:



  

  


And set the network(vf pool) with xml like below:
# cat hostnet.xml

 hostdev_net
  

  

(eth0 is the PF's name; see the hostdev section of
https://libvirt.org/formatnetwork.html#connectivity)
or:
# cat hostnet.xml

 hostdev_net
  
   
  
  

(the address is the VF's PCI address; you can specify multiple addresses at
once)
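Putting the pieces together, the configuration described above would look roughly like this (MAC, VLAN tag, and device names are illustrative):

```xml
<!-- VM interface: allocates a VF from the pool, with a fixed MAC and VLAN -->
<interface type='network'>
  <source network='hostdev_net'/>
  <mac address='52:54:00:6d:90:02'/>
  <vlan>
    <tag id='42'/>
  </vlan>
</interface>

<!-- VF pool network, defined here by the PF name (eth0) -->
<network>
  <name>hostdev_net</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth0'/>
  </forward>
</network>
```

Because the MAC and VLAN live in the per-VM interface XML rather than in the pool definition, each VM keeps its own settings regardless of which VF it is handed.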

# virsh net-define hostnet.xml
# virsh net-start hostdev_net

Hope it helps.

BR
Yalan


On Sun, Apr 2, 2023 at 10:06 AM Paul B. Henson  wrote:

> I'm planning to set up a libvirt/kvm system using a card with SR-IOV
> support. I'd like to use the vf pool option rather than statically
> assigning a vf to each vm. However, I'd also like each vm to have a
> static MAC address and I have multiple VLANs they will be on.
>
> I found in the documentation a syntax for specifying a MAC and a vlan
> when the vf is statically assigned, but I don't see anything in the
> documentation about doing so when using a vf pool?
>
> Did I miss something? Is it possible to do so? If not, is there some
> other way to handle MAC addresses and vlans with a vf pool, or if you
> need those are you stuck with static assignment?
>
> Thanks much...
>
>


question about virtio related options: ats=on and iommu=on

2023-01-08 Thread Yalan Zhang
Hi,

I have a question about the virtio-related options; could someone please
help confirm my understanding?
As I understand it, ats='on' should depend on iommu='on', but I'm
not sure about that.
Please check the details below.
Thank you!

Details:
1. configure the VM with an Intel IOMMU device, and enable iommu for the
virtio filesystem device:
# virsh dumpxml rhel  --xpath //iommu

  


Set iommu="on" ats="on" for virtio filesystem device:
# virsh dumpxml rhel --xpath //filesystem
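The relevant XML is sketched below (host path and bus addresses are illustrative; the mount tag matches the qemu command line shown later):

```xml
<!-- Emulated IOMMU device -->
<iommu model='intel'>
  <driver intremap='on'/>
</iommu>

<!-- virtiofs filesystem with the virtio iommu/ats driver options -->
<filesystem type='mount'>
  <driver type='virtiofs' iommu='on' ats='on'/>
  <source dir='/path/on/host'/>
  <target dir='mount_tag1'/>
</filesystem>
```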

  
  

  
  
  
  


2. the VM fails to start with "iommu_platform=true is not supported by
the device":
# virsh start rhel
error: Failed to start domain 'rhel'
error: internal error: qemu unexpectedly closed the monitor:
2023-01-09T03:48:03.198629Z qemu-kvm: -device
{"driver":"vhost-user-fs-pci","iommu_platform":true,"ats":true,"id":"fs0","chardev":"chr-vu-fs0","queue-size":512,"tag":"mount_tag1","bus":"pci.4","addr":"0x0"}:
iommu_platform=true is not supported by the device

3. update the XML to set only ats="on"; the VM starts successfully, which
is not expected:
# virsh dumpxml rhel --xpath //filesystem

  
  

  
  
  
  
  


the qemu cmd line:
-chardev
socket,id=chr-vu-fs0,path=/var/lib/libvirt/qemu/domain-5-rhel/fs0-fs.sock \
-device
'{"driver":"vhost-user-fs-pci","ats":true,"id":"fs0","chardev":"chr-vu-fs0","queue-size":512,"tag":"mount_tag1","bus":"pci.4","addr":"0x0"}'

Refer to libvirt.org:
QEMU's virtio devices have some attributes related to the virtio transport
under the driver element: The iommu attribute enables the use of emulated
IOMMU by the device.
The attribute ats controls the Address Translation Service support for PCIe
devices. This is needed to make use of IOTLB support (see IOMMU devices).
Possible values are on or off.

In my understanding, ats='on' should depend on iommu='on', so if the
device does not support iommu, it should not support ats either.
I'm not sure whether this understanding is correct.
Could someone please help confirm it?
Thank you!


Yalan


Re: Predictable and consistent net interface naming in guests

2022-12-11 Thread Yalan Zhang
Hi Igor,

I have tried some scenarios and recorded the results in this document[1].
Could you please review the test results?
Is my test matrix sufficient? (I will test again once qemu is ready)
Thank you!

BTW, current test results for pxb:
Q35 + pcie-expander-bus - works
PC + pci-expander-bus - does not work


[1]
https://docs.google.com/document/d/1C5wseFWLTpNPaeRls8Z8yppocLslTvz9HCIojR9bHHY/edit#


Yalan


On Fri, Dec 9, 2022 at 5:39 AM Igor Mammedov  wrote:

> On Thu, Dec 8, 2022 at 5:44 PM Laine Stump  wrote:
> >
> > On 12/8/22 11:15 AM, Julia Suvorova wrote:
> > > On Thu, Nov 3, 2022 at 9:26 AM Amnon Ilan  wrote:
> > >>
> > >>
> > >>
> > >> On Thu, Nov 3, 2022 at 12:13 AM Amnon Ilan  wrote:
> > >>>
> > >>>
> > >>>
> > >>> On Wed, Nov 2, 2022 at 6:47 PM Laine Stump  wrote:
> > 
> >  On 11/2/22 11:58 AM, Igor Mammedov wrote:
> > > On Wed, 2 Nov 2022 15:20:39 +
> > > Daniel P. Berrangé  wrote:
> > >
> > >> On Wed, Nov 02, 2022 at 04:08:43PM +0100, Igor Mammedov wrote:
> > >>> On Wed, 2 Nov 2022 10:43:10 -0400
> > >>> Laine Stump  wrote:
> > >>>
> >  On 11/1/22 7:46 AM, Igor Mammedov wrote:
> > > On Mon, 31 Oct 2022 14:48:54 +
> > > Daniel P. Berrangé  wrote:
> > >
> > >> On Mon, Oct 31, 2022 at 04:32:27PM +0200, Edward Haas wrote:
> > >>> Hi Igor and Laine,
> > >>>
> > >>> I would like to revive a two-year-old discussion [1] about
> > >>> consistent network interfaces in the guest.
> > >>>
> > >>> That discussion mentioned that a guest PCI address may change
> > >>> in two cases:
> > >>> - The PCI topology changes.
> > >>> - The machine type changes.
> > >>>
> > >>> Usually, the machine type is not expected to change, especially
> > >>> if one wants to allow migrations between nodes.
> > >>> I would hope to argue this should not be problematic in
> > >>> practice, because guest images would be made per a specific
> > >>> machine type.
> > >>>
> > >>> Regarding the PCI topology, I am not sure I understand what
> > >>> changes need to occur to the domxml for a defined guest PCI
> > >>> address to change.
> > >>> The only thing that I can think of is a scenario where
> > >>> hotplug/unplug is used, but even then I would expect existing
> > >>> devices to preserve their PCI address and the plugged/unplugged
> > >>> device to have a reserved address managed by the one acting on
> > >>> it (the management system).
> > >>>
> > >>> Could you please help clarify in which scenarios the PCI
> > >>> topology can cause a mess in the naming of interfaces in the
> > >>> guest?
> > >>>
> > >>> Are there any plans to add the acpi_index support?
> > >>
> > >> This was implemented a year & a half ago
> > >>
> > >>  https://libvirt.org/formatdomain.html#network-interfaces
> > >>
> > >> though due to QEMU limitations this only works for the old
> > >> i440fx chipset, not Q35 yet.
> > >
> > > Q35 should work partially too. In its case acpi-index support
> > > is limited to hotplug enabled root-ports and PCIe-PCI bridges.
> > > One also has to enable ACPI PCI hotplug (it's enabled by default
> > > on recent machine types) for it to work (i.e. it's not supported
> > > in native PCIe hotplug mode).
> > >
> > > So if mgmt can put nics on root-ports/bridges, then acpi-index
> > > should just work on Q35 as well.
> > 
> >  With only a few exceptions (e.g. the first ich9 audio device,
> >  which is placed directly on the root bus at 00:1B.0 because that
> >  is where the ich9 audio device is located on actual Q35 hardware),
> >  libvirt will automatically put all PCI devices (including network
> >  interfaces) on a pcie-root-port.
> > 
> >  After seeing reports that "acpi index doesn't work with Q35
> >  machinetypes" I just assumed that was correct and didn't try it.
> >  But after seeing the "should work partially" statement above, I
> >  tried it just now, and an  of a Q35 guest that had its PCI
> >  address auto-assigned by libvirt (and so was placed on a
> >  pcie-root-port) and had  was given the name "eno4". So what
> >  exactly is it that *doesn't* work?
> > >>>
> > >>>   From QEMU side:
> > >>> acpi-index requires:
> > >>>1. acpi pci hotplug enabled (which is the default on relatively
> > >>>   new q35 machine types)
> > >>>2. hotpluggable pci bus (root-port, various pci bridges)
> > >>>3. NIC can be cold- or hotplugged; the guest should pick up the
> > >>>   acpi-index of the device currently plugged into the slot
> > >>> what doesn't work:
> > >>>1. device attached to 
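For reference, the acpi-index assignment under discussion is expressed in the domain XML like this (index value and network name illustrative):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <acpi index='4'/>  <!-- systemd/udev in the guest names this NIC eno4 -->
</interface>
```

The index, not the PCI address, then drives the guest's predictable interface name, which is what makes the name stable across topology changes.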

Re: How to check tap device for sndbuf when I set sndbuf=0 in the interface xml

2022-10-31 Thread Yalan Zhang
cc libvirt-users for more inputs

Hi,

I'm trying to find out how to check sndbuf for a tap device, could you
please help to check it?

In kernel 2.6.18, the tap device's default sndbuf is 1 MB. There is an RFE
bug[1] and a patch[2] that introduce the option below in libvirt to adjust
the sndbuf:

1600
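In context, the option is the tune/sndbuf element of the interface XML, sketched here with the value from the example (network name illustrative):

```xml
<interface type='network'>
  <source network='default'/>
  <tune>
    <!-- send buffer size in bytes; per the referenced patch,
         0 requests an effectively unlimited buffer -->
    <sndbuf>1600</sndbuf>
  </tune>
</interface>
```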

It is said in the patch[2] that when we set sndbuf=0, we actually set it to
0x.
How can I check whether it was set successfully? I checked the files below,
and the value didn't change after I started a VM with sndbuf=0:
# grep . /proc/sys/net/core/*mem_default
/proc/sys/net/core/rmem_default:212992
/proc/sys/net/core/wmem_default:212992

And from the doc[3]: "The default value is set by the
/proc/sys/net/core/wmem_default file and the maximum allowed value is set
by the /proc/sys/net/core/wmem_max file."
Current kernel 5.14.0-177.el9 has some updates about sndbuf:
# cat  /proc/sys/net/core/wmem_default
212992
# cat  /proc/sys/net/core/wmem_max
212992

Thank you!


[1] Bug 665293  - RFE:
Allow setting size of send buffer per TAP device in QEMU driver
[2] https://listman.redhat.com/archives/libvir-list/2011-January/032763.html
[3] https://man7.org/linux/man-pages/man7/socket.7.html

Yalan


Re: [libvirt-users] [virtual interface] detach interface during boot succeed with no changes

2022-09-08 Thread Yalan Zhang
Hi Peter,

Thank you for pointing that out, I will track the issue on that bug instead.

Yalan


On Thu, Sep 8, 2022 at 3:58 PM Peter Krempa  wrote:

> On Thu, Sep 08, 2022 at 15:16:56 +0800, Yalan Zhang wrote:
> > Hi Laine,
> >
> > As for the hot-unplug behavior, I have one more question about it, could
> > you please help to confirm?
> >
> > unplugging a PCI device properly requires cooperation from the guest OS.
> > > If the guest OS isn't running yet, the unplug won't complete, so qemu
> > > (and libvirt) still show the device as plugged into the guest.
> > >
> > > virsh reports success on the unplug because unplugging a device is done
> > > asynchronously - the "success" means "libvirt successfully told qemu to
> > > unplug the device, qemu has told the virtual machine to unplug the
> > > device, and is waiting for acknowledgment from the virtual machine that
> > > the guest has completed removal". At some later time the guest OS may
> > > complete its part of the unplug; when that happens, qemu will get a
> > > notification and will send an event to libvirt - at that time the
> device
> > > will be removed from libvirt's list of devices.
> > >
> > > tl;dr - this is all expected.
> > >
> >
> > The question is that, when I unplug it during boot, the virsh command
> > succeeds but the interface still exists, which is expected.
> > However, after the VM finishes booting, the guest OS will *not* complete
> > this removal. When I tried to detach it again, it reported that the
> > device was still in the process of being unplugged.
> > Is this acceptable?
> >
> > # virsh detach-interface rhel_new network 52:54:00:36:a8:d4
> > Interface detached successfully
> > # virsh domiflist rhel_new
> >  Interface   Type  SourceModelMAC
> > -
> >  vnet4   network   default   virtio   52:54:00:36:a8:d4
> >
> > # virsh detach-interface rhel_new network 52:54:00:36:a8:d4
> > error: Failed to detach interface
> > error: internal error: unable to execute QEMU command 'device_del':
> Device
> > net0 is already in the process of unplug
>
> The same problem was already reported for disks:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=2087047
>
> https://gitlab.com/libvirt/libvirt/-/issues/309
>
> The main problem is that qemu doesn't re-send the request to unplug the
> device and instead reports an error. At the same time the guest OS
> no longer notices it, so the unplug can't be finished until the VM
> is rebooted.
>
>


Re: [libvirt-users] [virtual interface] detach interface during boot succeed with no changes

2022-09-08 Thread Yalan Zhang
Hi Laine,

As for the hot-unplug behavior, I have one more question about it, could
you please help to confirm?

unplugging a PCI device properly requires cooperation from the guest OS.
> If the guest OS isn't running yet, the unplug won't complete, so qemu
> (and libvirt) still show the device as plugged into the guest.
>
> virsh reports success on the unplug because unplugging a device is done
> asynchronously - the "success" means "libvirt successfully told qemu to
> unplug the device, qemu has told the virtual machine to unplug the
> device, and is waiting for acknowledgment from the virtual machine that
> the guest has completed removal". At some later time the guest OS may
> complete its part of the unplug; when that happens, qemu will get a
> notification and will send an event to libvirt - at that time the device
> will be removed from libvirt's list of devices.
>
> tl;dr - this is all expected.
>

The question is that, when I unplug the interface during boot, the virsh
command succeeds but the interface still exists, which is expected.
However, after the VM finishes booting, the guest OS will *not* complete
this removal. When I tried to detach it again, it reported that the device
was still in the process of being unplugged.
Is this acceptable?

# virsh detach-interface rhel_new network 52:54:00:36:a8:d4
Interface detached successfully
# virsh domiflist rhel_new
 Interface   Type  SourceModelMAC
-
 vnet4   network   default   virtio   52:54:00:36:a8:d4

# virsh detach-interface rhel_new network 52:54:00:36:a8:d4
error: Failed to detach interface
error: internal error: unable to execute QEMU command 'device_del': Device
net0 is already in the process of unplug
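Since the unplug completes asynchronously, one way to observe the guest-side completion is virsh's event monitor (a sketch; domain name taken from the example above):

```shell
# Wait up to 30s for the guest to acknowledge the asynchronous unplug;
# on success an event line naming the removed device alias is printed.
virsh event rhel_new --event device-removed --timeout 30
```

If the guest never acknowledges (as in the boot-time case discussed here), the monitor simply times out.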

Thank you!

Yalan


On Thu, Sep 14, 2017 at 12:48 AM Laine Stump  wrote:

> On 09/04/2017 03:37 AM, Yalan Zhang wrote:
> > Hi guys,
> >
> > when I detach an interface from a vm during boot (before the vm has
> > finished booting), it always fails. I'm not sure if there is an existing
> > bug. I have confirmed with someone that disks show similar behavior; is
> > this also acceptable?
>
> unplugging a PCI device properly requires cooperation from the guest OS.
> If the guest OS isn't running yet, the unplug won't complete, so qemu
> (and libvirt) still show the device as plugged into the guest.
>
> virsh reports success on the unplug because unplugging a device is done
> asynchronously - the "success" means "libvirt successfully told qemu to
> unplug the device, qemu has told the virtual machine to unplug the
> device, and is waiting for acknowledgment from the virtual machine that
> the guest has completed removal". At some later time the guest OS may
> complete its part of the unplug; when that happens, qemu will get a
> notification and will send an event to libvirt - at that time the device
> will be removed from libvirt's list of devices.
>
> tl;dr - this is all expected.
>
>
> >
> > # virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2;  virsh
> > detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
> > dumpxml rhel7.2 |grep /interface -B9
> > Domain rhel7.2 destroyed
> >
> > Domain rhel7.2 started
> >
> > Interface detached successfully
> >
> >> function='0x0'/>
> > 
> > 
> >   
> >   
> >   
> >   
> >   
> >> function='0x0'/>
> > 
> >
> > When I detach after the vm has booted, expanding the sleep time to 10,
> > it succeeds.
> >
> > # virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10;  virsh
> > detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
> > dumpxml rhel7.2 |grep /interface -B9
> > Domain rhel7.2 destroyed
> >
> > Domain rhel7.2 started
> >
> > Interface detached successfully
> >
> >
> > ---
> > Best Regards,
> > Yalan Zhang
> > IRC: yalzhang
> > Internal phone: 8389413
> >
> > ___
> > libvirt-users mailing list
> > libvirt-users@redhat.com
> > https://www.redhat.com/mailman/listinfo/libvirt-users
> >
>
>


Re: Number of the max supported VFs are different

2022-03-07 Thread Yalan Zhang
Hi Michal,

Got it, thank you!


On Mon, Mar 7, 2022 at 7:36 PM Michal Prívozník  wrote:

> On 3/3/22 03:55, Yalan Zhang wrote:
> > Hi there,
> >
> > I have an Intel X520 network card, and I find that the reported maximum
> > number of supported VFs differs.
> > Please check the outputs below:
> >
> > # lspci -vvv -s 04:00.0
> > 04:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520
> > Adapter (rev 01)
> > ...
> >  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
> > IOVCap: Migration-, Interrupt Message Number: 000
> > IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
> > IOVSta: Migration-
> > ** Initial VFs: 64, Total VFs: 64**, Number of VFs: 0, Function
> > Dependency Link: 00
> > VF offset: 128, stride: 2, Device ID: 10ed
> > Supported Page Size: 0553, System Page Size: 0001
> > Region 0: Memory at 9440 (64-bit, prefetchable)
> > Region 3: Memory at 9450 (64-bit, prefetchable)
> > VF Migration: offset: , BIR: 0
> > Kernel driver in use: ixgbe
> > Kernel modules: ixgbe
> >
> > # cat /sys/class/net/enp4s0f0/device/sriov_totalvfs
> > 63
> >
> > # echo 64 > /sys/class/net/enp4s0f0/device/sriov_numvfs
> > -bash: echo: write error: Numerical result out of range
> > # echo 63 > /sys/class/net/enp4s0f0/device/sriov_numvfs
> > # cat /sys/class/net/enp4s0f0/device/sriov_numvfs
> > 63
> >
> > The lspci command says the Total VFs supported is 64, while the file
> > "sriov_totalvfs" says it's 63,
> > and the sysfs value is the one that takes effect.
> > Why are the numbers different? Just a little curious.
>
> I believe this comes from the driver implementation:
>
> /*  ixgbe driver limit the max number of VFs could be enabled to
>  *  63 (IXGBE_MAX_VF_FUNCTIONS - 1)
>  */
> #define IXGBE_MAX_VFS_DRV_LIMIT  (IXGBE_MAX_VF_FUNCTIONS - 1)
>
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h#n7
>
> Michal
>
>


Number of the max supported VFs are different

2022-03-02 Thread Yalan Zhang
Hi there,

I have an Intel X520 network card, and I find that the reported maximum
number of supported VFs differs.
Please check the outputs below:

# lspci -vvv -s 04:00.0
04:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter
(rev 01)
...
 Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
IOVCap: Migration-, Interrupt Message Number: 000
IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
IOVSta: Migration-
** Initial VFs: 64, Total VFs: 64**, Number of VFs: 0, Function Dependency
Link: 00
VF offset: 128, stride: 2, Device ID: 10ed
Supported Page Size: 0553, System Page Size: 0001
Region 0: Memory at 9440 (64-bit, prefetchable)
Region 3: Memory at 9450 (64-bit, prefetchable)
VF Migration: offset: , BIR: 0
Kernel driver in use: ixgbe
Kernel modules: ixgbe

# cat /sys/class/net/enp4s0f0/device/sriov_totalvfs
63

# echo 64 > /sys/class/net/enp4s0f0/device/sriov_numvfs
-bash: echo: write error: Numerical result out of range
# echo 63 > /sys/class/net/enp4s0f0/device/sriov_numvfs
# cat /sys/class/net/enp4s0f0/device/sriov_numvfs
63

The lspci command says the Total VFs supported is 64, while the file
"sriov_totalvfs" says it's 63,
and the sysfs value is the one that takes effect.
Why are the numbers different? Just a little curious.
Thank you!

---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: qemu+ssh connections to a remote libvirt fail as ssh banner configured

2022-02-13 Thread Yalan Zhang
Hi Jiri,

Got it! I tried it and it works well. Thank you all!

On Thu, Feb 10, 2022 at 6:14 PM Jiri Denemark  wrote:

> On Thu, Feb 10, 2022 at 17:47:43 +0800, Yalan Zhang wrote:
> > Thank you! I tried /etc/motd, and it does not impact the libvirt
> connection.
> > Happy to learn something new!
>
> Alternatively if you really need to run commands in .bashrc which can
> potentially print some output, you can put them after a check for
> interactive shell:
>
> if [[ $- != *i* ]] ; then
> # Shell is non-interactive.  Be done now!
> return
> fi
>
> echo "Interactive shell here. How are you?"
>
> Jirka
>
>


Re: qemu+ssh connections to a remote libvirt fail as ssh banner configured

2022-02-10 Thread Yalan Zhang
Thank you! I tried /etc/motd, and it does not impact the libvirt connection.
Happy to learn something new!

On Thu, Feb 10, 2022 at 4:50 PM Daniel P. Berrangé 
wrote:

> On Thu, Feb 10, 2022 at 09:33:38AM +0100, Michal Prívozník wrote:
> > On 2/10/22 09:02, Daniel P. Berrangé wrote:
> > > On Thu, Feb 10, 2022 at 09:52:52AM +0800, Yalan Zhang wrote:
> > >> Hi there,
> > >>
> > >> I have a system configured with ssh login banner like as below:
> > >> # cat ~/.bashrc
> > >> ...
> > >> echo
> > >>
> "="
> > >> echo "== This machine is occupied by xxx for testing now. If you
> are
> > >> about to use it, contact xxx first =="
> > >> echo
> > >>
> "="
> > >>
> > >> It works as expected that whenever someone logs into this system by
> ssh,
> > >> he/she will see this warning message.
> > >> But it seems such settings will impact a virsh client connection with
> ssh,
> > >> when I try to connect the libvirt daemon on this system, it will
> error out :
> > >> # virsh -c qemu+ssh://${my_host}/system list --all
> > >> root@${my_host}'s password:
> > >> error: failed to connect to the hypervisor
> > >> error: packet 1027423545 bytes received from server too large, want
> 33554432
> > >
> > > Libvirt is tunnelling an RPC protocol over the SSH connection.
> > > Your bashrc is printing this text onto the SSH connection and
> > > that corrupts the libvirt RPC protocol.
> > >
> > > If you want to print something when people log in, use the
> > > /etc/motd file, which is designed for this purpose; don't
> > > print stuff from a .bashrc.  Libvirt gives the options to
> > > SSH that prevent display of /etc/motd contents, so that
> > > its RPC protocol doesn't get corrupted.
> >
> > One more thing, I wasn't able to reproduce when virt-ssh-helper was
> > used. But maybe I wasn't trying hard enough.
>
> That should be affected in exactly the same way. It still relies on
> stdout/stdin being clean data channels.
>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-
> https://www.instagram.com/dberrange :|
>
>


qemu+ssh connections to a remote libvirt fail as ssh banner configured

2022-02-09 Thread Yalan Zhang
Hi there,

I have a system configured with an ssh login banner, as below:
# cat ~/.bashrc
...
echo
"="
echo "== This machine is occupied by xxx for testing now. If you are
about to use it, contact xxx first =="
echo
"="

It works as expected that whenever someone logs into this system by ssh,
he/she will see this warning message.
But it seems such a setting impacts a virsh client connection over ssh;
when I try to connect to the libvirt daemon on this system, it errors out:
# virsh -c qemu+ssh://${my_host}/system list --all
root@${my_host}'s password:
error: failed to connect to the hypervisor
error: packet 1027423545 bytes received from server too large, want 33554432

I have searched and found some related explanations[1], and [2] says "The
virsh man page doesn't mention ssh, so it sounds like the file
/usr/share/doc/libvirt-doc/remote.html shipped with libvirt-doc could use a
patch mentioning this."
But I cannot find anything about this currently in
file:///usr/share/doc/libvirt-docs/html/remote.html.
Could we have this documented for reference, covering all the possibilities?
Thank you!

[1]
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/868753/comments/17
[2]
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/868753/comments/14
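Besides moving the banner to /etc/motd, another robust fix is to gate the banner on shell interactivity, so non-interactive sessions — including libvirt's RPC tunnel — get a clean channel. A sketch for ~/.bashrc (banner text illustrative):

```shell
# Print the banner only for interactive shells: $- contains 'i' only
# when the shell is interactive, so scp/sftp/libvirt RPC sessions
# produce no extra output on the channel.
if [[ $- == *i* ]]; then
  echo "== This machine is occupied for testing; contact the owner first =="
fi
```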


---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: Question about the Qos support status for different type interfaces

2021-03-28 Thread Yalan Zhang
Hi Michal,

Thank you for the clarification.


---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Fri, Mar 26, 2021 at 8:19 PM Michal Privoznik 
wrote:

> On 3/26/21 12:31 PM, Yalan Zhang wrote:
> > Hi there,
> >
> > I have a question about the QoS support status for different interface
> > types.
> > Some interface types do not support QoS, such as hostdev, user, and
> > mcast, but their behavior differs: for hostdev, the guest fails to
> > start with a meaningful error message, while for the other types the VM
> > starts successfully with only a warning message in the libvirtd log. I
> > wonder whether it is necessary to keep the behavior consistent across
> > these different types?
> >
> > There are 2 history bugs for them, I should have thought further and
> > asked early when testing the bugs.
> > Bug 1319044 <https://bugzilla.redhat.com/show_bug.cgi?id=1319044>- log
> > error when  requested on a 
> > Bug 1524230 <https://bugzilla.redhat.com/show_bug.cgi?id=1524230>-
> > vhostuser type interface do not support bandwidth, but no warning message
> >
> > Thank you for looking into this and very appreciate your feedback!
> >
>
> The reason is historical baggage - as usual. When QoS was first
> introduced it supported only a very few interface types. Soon we've
> learned that users put XML snippets in for other types too. Back then we
> had no validation callbacks => we could not reject such XMLs because we
> did not do it from the beginning. So there might be some domain XMLs
> still that contain QoS for unsupported type and those would be lost if
> libvirt started rejecting such XMLs.
>
> With validation callbacks things are a bit better - the domain would not
> be lost on libvirtd upgrade; though it would still be unable to start.
> I'm not sure that's much better.
>
> Hence, we're keeping status quo. I'm open for ideas though.
>
> Michal
>
>


Question about the Qos support status for different type interfaces

2021-03-26 Thread Yalan Zhang
Hi there,

I have a question about the QoS support status for different interface
types.
Some interface types do not support QoS, such as hostdev, user, and mcast,
but their behavior differs: for hostdev, the guest fails to start with a
meaningful error message, while for the other types the VM starts
successfully with only a warning message in the libvirtd log. I wonder
whether it is necessary to keep the behavior consistent across these types?

There are two historical bugs for them; I should have thought further and
asked earlier when testing the bugs.
Bug 1319044 <https://bugzilla.redhat.com/show_bug.cgi?id=1319044> - log
error when  requested on a 
Bug 1524230 <https://bugzilla.redhat.com/show_bug.cgi?id=1524230> - vhostuser
type interface do not support bandwidth, but no warning message

Thank you for looking into this and very appreciate your feedback!

1. start vm with user type interface set with Qos:
 
  
  


  
  
  


# cat /var/log/libvirt/libvirtd.log | grep bandwidth
2021-03-26 10:47:11.452+: 20185: warning :
qemuBuildInterfaceCommandLine:8223 : setting bandwidth on interfaces of
type 'user' is not implemented yet

2. start with hostdev type interface with Qos setting:

  
  

  
  


  
  


# virsh start rh
error: Failed to start domain 'rh'
error: unsupported configuration: interface 52:54:00:07:27:b0 - bandwidth
settings are not supported for hostdev interfaces
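For comparison, a supported configuration — QoS on a network-type interface — looks like this (values are illustrative; average/peak are in KiB/s and burst in KiB):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <bandwidth>
    <inbound average='1000' peak='5000' burst='1024'/>
    <outbound average='128'/>
  </bandwidth>
</interface>
```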


---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: about the script /etc/qemu-ifup with nmcli command

2021-01-04 Thread Yalan Zhang
Hi,

Could anyone familiar with NetworkManager help with this?
This question has been bothering me for a long time.
Thank you very much!

---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Wed, Oct 21, 2020 at 6:31 PM Yalan Zhang  wrote:

> Hi,
>
> I have tried the qemu-ifup script below, using nmcli since brctl is
> deprecated on RHEL 8, but the guest network does not work.
> I think the script needs an update. Could you please help to have a look?
> Thank you in advance.
>
> 1. prepare a linux bridge on the host named br0;
>
> 2. prepare the qemu-ifup script as below:
> # cat /etc/qemu-ifup
> #!/bin/bash
> # A br0 bridge should be already set up.
> # Compare with:
> # http://en.wikibooks.org/wiki/QEMU/Networking#qemu-ifup
> #
> # For the bridge setup, see:
> # http://wiki.libvirt.org/page/Networking#Fedora.2FRHEL_Bridging
> # http://gist.github.com/393525
> ip link set "$1" up
> nmcli c add type bridge-slave ifname $1 con-name $1 master br0 autoconnect
> yes
>
> 3. start vm with below interface setting:
> # virsh dumpxml rh | grep /interface -B5
> 
>   
>   
>   
>function='0x0'/>
> 
> # virsh start rh
> Domain rh started
>
> 4.check on guest, the interface can not get dhcp ip address;
>
> 5. check on host,
> # nmcli con
> NAMEUUID  TYPE  DEVICE
> br0 f68f73c7-10ee-40c1-bb09-3366d11ac896  bridgebr0
> ...
> vnet0   90a48d77-dccc-4b59-98f5-09f8cbd62458  ethernet  --
>
> # nmcli dev
> DEVICE  TYPE  STATE   CONNECTION
> br0 bridgeconnected   br0
> ...
> vnet0   tun   unmanaged   --
>
> 6. hotplug a bridge type interface and compare the tap devices:
> # virsh attach-interface rh bridge br0 --model virtio
> Interface attached successfully
>
> # nmcli con
> NAMEUUID  TYPE  DEVICE
> br0 f68f73c7-10ee-40c1-bb09-3366d11ac896  bridgebr0
> vnet1   07c2a1f8-396f-4d5f-b61f-ef2ddb42ed93  tun   vnet1--->the
> hot-plugged one
> ...
> vnet0   90a48d77-dccc-4b59-98f5-09f8cbd62458  ethernet  --   > the
> ethernet one
>
> # nmcli dev
> DEVICE  TYPE  STATE   CONNECTION
> vnet1   tun   connected (externally)  vnet1 --->the
> hot-plugged one
> vnet0   tun   unmanaged   -- > the ethernet one
> ...
>
> 7. From the outputs above, the backing tun device for the ethernet-type
> interface is unmanaged.
> I don't know how to update the script to fix this. Could you please help?
>
>
> ---
> Best Regards,
> Yalan Zhang
> IRC: yalzhang
>
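For reference, a minimal brctl-free qemu-ifup that sidesteps NetworkManager entirely, attaching the tap device with iproute2 (assumes the bridge br0 already exists):

```shell
#!/bin/bash
# /etc/qemu-ifup: $1 is the tap device qemu just created.
ip link set "$1" up
ip link set "$1" master br0   # iproute2 equivalent of 'brctl addif br0 $1'
```

A likely reason the nmcli variant fails is that `bridge-slave` creates a persistent ethernet-type profile, while qemu's tap device is a transient tun device that NetworkManager leaves unmanaged — so the profile never activates; plain `ip link set … master` avoids that mismatch (this diagnosis is an assumption, not confirmed in the thread).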


Re: How to exit console in L2 vm?

2020-11-30 Thread Yalan Zhang
I see. Thank you for the detailed explanation.


---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Mon, Nov 30, 2020 at 3:31 PM Peter Krempa  wrote:

> On Mon, Nov 30, 2020 at 14:50:33 +0800, Yalan Zhang wrote:
> > Hi,
> >
> > I have a question about nested virtualization. The scenario is as below:
> > 1. Prepare the nested environment, start L2 guest.
> > 2. On the host, connect the L1 vm console, then on L1 guest, connect the
> L2
> > guest console:
> > (host)# virsh console L1_vm
> > Connected to domain L1_vm
> > Escape character is ^] (Ctrl + ])
> > ...
> > (L1 vm)# virsh console L2_vm
> > Connected to domain L2_vm
> > Escape character is ^] (Ctrl + ])
> > ...
> > (L2 vm)# <=== press " ^] " to exit the console, it return to the **host**
> > (host)#
> >
> > Is it expected that "^]" in the L2 guest will exit all the way back to
> > the host, rather than to the L1 guest?
>
> Yes it is expected since the keystroke goes through L1 first.
>
> You can use the '-e' switch of virsh to set the console escape character
> in one of the clients differently:
>
> $ virsh -e '^[' console 1
> Connected to domain fedora32
> Escape character is ^[ (Ctrl + [)
>
>


How to exit console in L2 vm?

2020-11-29 Thread Yalan Zhang
Hi,

I have a question about nested virtualization. The scenario is as below:
1. Prepare the nested environment, start L2 guest.
2. On the host, connect the L1 vm console, then on L1 guest, connect the L2
guest console:
(host)# virsh console L1_vm
Connected to domain L1_vm
Escape character is ^] (Ctrl + ])
...
(L1 vm)# virsh console L2_vm
Connected to domain L2_vm
Escape character is ^] (Ctrl + ])
...
(L2 vm)# <=== press " ^] " to exit the console; it returns to the **host**
(host)#

Is it expected that "^]" in the L2 guest exits all the way back to the host,
not to the L1 guest?
Thank you!


---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: about the new added attributes "check" and "type" for interface mac element

2020-11-01 Thread Yalan Zhang
Hi,

I have filed a bug about the error messages:
https://bugzilla.redhat.com/show_bug.cgi?id=1892130
I will track the questions there, so please follow up in the bug comments.
Thank you!

---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Wed, Oct 21, 2020 at 10:51 AM Yalan Zhang  wrote:

> Hi all,
>
> I have done some tests for the new attributes "check" and "type"; could
> you please help review them? I also have some questions about the patch,
> so please take a look. Thank you!
>
> The questions:
> 1. in step 4 below, the error message should be updated:
> Actual results:
> XML error: invalid mac address **check** value: 'next'. Valid values are
> "generated" and "static".
> expected results:
> XML error: invalid mac address **type** value: 'next'. Valid values are
> "generated" and "static".
>
> 2. I have checked the vmware OUI definition and found this:
> https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-1B6A280E-0C77-4775-8F84-4B3F40673178.html
> it says the VMware OUI is 00:50:56, not 00:0c:29 in the patches.  Am I
> missing something?
>
> 3. Could you please say more about the user story? I cannot understand
> the scenario in which "it will ignore all the checks libvirt does about
> the origin of the MAC address (whether or not it's in a VMWare OUI) and
> forward the original one to the ESX server telling it not to check it
> either". Does this happen when we convert a KVM guest to a VMware guest?
>
> 4. How to test it as a libvirt QE? Are the test scenarios below enough
> without ESX env?
>
>
> Test steps:
>
> 1. Start vm with different configuration with mac in "00:0c:29" range:
> # virsh dumpxml rhel | grep /interface -B12
> ...
> 
>   
>   
>   
>function='0x0'/>
> 
> 
>   
>   
>   
>function='0x0'/>
> 
> 
>   
>   
>   
>function='0x0'/>
> 
> 
>   
>   
>   
>function='0x0'/>
> 
>
> # virsh start rhel
> Domain rhel started
>
> 2. login guest and check the interfaces:
> # ip addr
> ...
> 2: enp5s0:  mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
> link/ether 00:0c:29:e7:9b:cb brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.142/24 brd 192.168.122.255 scope global dynamic
> noprefixroute enp5s0
>valid_lft 3584sec preferred_lft 3584sec
> inet6 fe80::351c:686a:863e:4a7f/64 scope link noprefixroute
>valid_lft forever preferred_lft forever
> 3: enp11s0:  mtu 1500 qdisc fq_codel
> state UP group default qlen 1000
> link/ether 00:0c:29:3b:e0:50 brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic
> noprefixroute enp11s0
>valid_lft 3584sec preferred_lft 3584sec
> inet6 fe80::2b79:4675:6c59:6822/64 scope link noprefixroute
>valid_lft forever preferred_lft forever
> 4: enp12s0:  mtu 1500 qdisc fq_codel
> state UP group default qlen 1000
> link/ether 00:0c:29:73:f6:dc brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.33/24 brd 192.168.122.255 scope global dynamic
> noprefixroute enp12s0
>valid_lft 3584sec preferred_lft 3584sec
> inet6 fe80::e43d:555:ba85:4030/64 scope link noprefixroute
>valid_lft forever preferred_lft forever
> 5: enp13s0:  mtu 1500 qdisc fq_codel
> state UP group default qlen 1000
> link/ether 00:0c:29:aa:dc:6c brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.161/24 brd 192.168.122.255 scope global dynamic
> noprefixroute enp13s0
>valid_lft 3584sec preferred_lft 3584sec
> inet6 fe80::f32d:e2e8:9c8b:47fd/64 scope link noprefixroute
>valid_lft forever preferred_lft forever
>
>
> 3. start vm without the "check" and "type" attributes, and check the live
> xml do not include these attributes, either.
>  # virsh start vm1
> Domain vm1 started
> # virsh dumpxml vm1 | grep /interface -B8
> 
> 
>   
>portid='b02dc78f-69ad-4db7-870c-f371fd730537' bridge='virbr0'/>
>   
>   
>   
>function='0x0'/>
> 
>
> 4. negative test:
> Set "" in virsh edit
> # virsh edit vm1
> error: XML document failed to validate against schema: Unable to validate
> doc against /usr/share/libvirt/schemas/domain.rng
> Extra element devices in interleave
> Element domain failed to validate content
>
> Failed. Try again? [y,n,i,f,?]:   > press 'i'
> error: XML error: invalid mac address check value: 'next'. Valid values
> are "generated" and "static".
> Failed. Try again? [y,n,f,?]:
>
>
> ---
> Best Regards,
> Yalan Zhang
> IRC: yalzhang
>


about the script /etc/qemu-ifup with nmcli command

2020-10-21 Thread Yalan Zhang
Hi,

I have tried the qemu-ifup script below, using the nmcli command since
brctl is deprecated on RHEL 8, but the guest network does not work.
I think the script needs updating. Could you please take a look?
Thank you in advance.

1. prepare a linux bridge on the host named br0;

2. prepare the qemu-ifup script as below:
# cat /etc/qemu-ifup
#!/bin/bash
# A br0 bridge should be already set up.
# Compare with:
# http://en.wikibooks.org/wiki/QEMU/Networking#qemu-ifup
#
# For the bridge setup, see:
# http://wiki.libvirt.org/page/Networking#Fedora.2FRHEL_Bridging
# http://gist.github.com/393525
ip link set "$1" up
nmcli c add type bridge-slave ifname $1 con-name $1 master br0 autoconnect
yes

3. start vm with below interface setting:
# virsh dumpxml rh | grep /interface -B5

  
  
  
  

# virsh start rh
Domain rh started

4. Check on the guest: the interface cannot get a DHCP IP address;

5. check on host,
# nmcli con
NAMEUUID  TYPE  DEVICE
br0 f68f73c7-10ee-40c1-bb09-3366d11ac896  bridgebr0
...
vnet0   90a48d77-dccc-4b59-98f5-09f8cbd62458  ethernet  --

# nmcli dev
DEVICE  TYPE  STATE   CONNECTION
br0 bridgeconnected   br0
...
vnet0   tun   unmanaged   --

6. hotplug a bridge type interface and compare the tap devices:
# virsh attach-interface rh bridge br0 --model virtio
Interface attached successfully

# nmcli con
NAMEUUID  TYPE  DEVICE
br0 f68f73c7-10ee-40c1-bb09-3366d11ac896  bridgebr0
vnet1   07c2a1f8-396f-4d5f-b61f-ef2ddb42ed93  tun   vnet1--->the
hot-plugged one
...
vnet0   90a48d77-dccc-4b59-98f5-09f8cbd62458  ethernet  --   > the
ethernet one

# nmcli dev
DEVICE  TYPE  STATE   CONNECTION
vnet1   tun   connected (externally)  vnet1 --->the
hot-plugged one
vnet0   tun   unmanaged   -- > the ethernet one
...

7. From the outputs above, the back-end tap device for the ethernet type
interface is left unmanaged by NetworkManager.
I don't know how to update the script to fix this. Could you please help?
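One direction I am considering (an untested sketch): since NetworkManager shows the tap device as unmanaged anyway, the script could skip nmcli entirely and enslave the tap to the bridge with plain iproute2 commands, which do not depend on NetworkManager at all:

```shell
#!/bin/bash
# /etc/qemu-ifup sketch: $1 is the tap device qemu just created.
# Assumes the bridge br0 already exists and is up on the host.
BRIDGE=br0
ip link set dev "$1" up
ip link set dev "$1" master "$BRIDGE"
```

Whether this resolves the DHCP failure above is untested; it only takes NetworkManager out of the path.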


---
Best Regards,
Yalan Zhang
IRC: yalzhang


about the new added attributes "check" and "type" for interface mac element

2020-10-20 Thread Yalan Zhang
Hi all,

I have done some tests for the new attributes "check" and "type"; could you
please help review them? I also have some questions about the patch, so
please take a look. Thank you!

The questions:
1. in step 4 below, the error message should be updated:
Actual results:
XML error: invalid mac address **check** value: 'next'. Valid values are
"generated" and "static".
expected results:
XML error: invalid mac address **type** value: 'next'. Valid values are
"generated" and "static".

2. I have checked the vmware OUI definition and found this:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-1B6A280E-0C77-4775-8F84-4B3F40673178.html
it says the VMware OUI is 00:50:56, not 00:0c:29 in the patches.  Am I
missing something?
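As a side note, the OUI is just the first three octets of the MAC address, so which range a guest MAC falls into can be checked with plain shell (a sketch; the MAC below is one from the guest output in step 2):

```shell
# Extract the OUI (first three octets) of a MAC address with shell
# parameter expansion: strip the shortest suffix matching ':*:*:*'.
mac="00:0c:29:e7:9b:cb"
oui="${mac%:*:*:*}"
echo "$oui"   # prints 00:0c:29
```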

3. Could you please say more about the user story? I cannot understand the
scenario in which "it will ignore all the checks libvirt does about the
origin of the MAC address (whether or not it's in a VMWare OUI) and forward
the original one to the ESX server telling it not to check it either". Does
this happen when we convert a KVM guest to a VMware guest?

4. How to test it as a libvirt QE? Are the test scenarios below enough
without ESX env?
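For context, the attributes under test go directly on the mac element; one interface from the test configuration would look roughly like this (a sketch; the attribute values 'static' and 'no' are assumptions based on the patch description and the error messages above):

```xml
<interface type='network'>
  <mac address='00:0c:29:e7:9b:cb' type='static' check='no'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```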


Test steps:

1. Start vm with different configuration with mac in "00:0c:29" range:
# virsh dumpxml rhel | grep /interface -B12
...

  
  
  
  


  
  
  
  


  
  
  
  


  
  
  
  


# virsh start rhel
Domain rhel started

2. login guest and check the interfaces:
# ip addr
...
2: enp5s0:  mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:e7:9b:cb brd ff:ff:ff:ff:ff:ff
inet 192.168.122.142/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp5s0
   valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::351c:686a:863e:4a7f/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
3: enp11s0:  mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:3b:e0:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp11s0
   valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::2b79:4675:6c59:6822/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
4: enp12s0:  mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:73:f6:dc brd ff:ff:ff:ff:ff:ff
inet 192.168.122.33/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp12s0
   valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::e43d:555:ba85:4030/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
5: enp13s0:  mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:aa:dc:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.161/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp13s0
   valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::f32d:e2e8:9c8b:47fd/64 scope link noprefixroute
   valid_lft forever preferred_lft forever


3. Start the VM without the "check" and "type" attributes, and check that
the live XML does not include these attributes either.
 # virsh start vm1
Domain vm1 started
# virsh dumpxml vm1 | grep /interface -B8


  
  
  
  
  
  


4. negative test:
Set "" in virsh edit
# virsh edit vm1
error: XML document failed to validate against schema: Unable to validate
doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content

Failed. Try again? [y,n,i,f,?]:   > press 'i'
error: XML error: invalid mac address check value: 'next'. Valid values are
"generated" and "static".
Failed. Try again? [y,n,f,?]:


---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: [libvirt-users] Question about disabling UFO on guest

2020-08-05 Thread Yalan Zhang
Hi Bao,

> Then I reboot my node to get the change effect and it works. However, can
> I disable the UFO without touching the host OS? or it always has to be
> disabled on both host and guest like that?

As far as I know, if you want to disable UFO for both receive and transmit,
it can only be turned off via 'host_ufo=off,gso=off,guest_ufo=off'.

 ===> for receive
 > for transmit
 

Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1387477#c4
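Since the list archiver stripped the XML tags above, here is a sketch of what such a driver element looks like (attribute names as in the libvirt domain XML documentation; the receive/transmit annotations above were damaged in the archive, so they are not restated here):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost'>
    <host gso='off' ufo='off'/>
    <guest ufo='off'/>
  </driver>
</interface>
```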


---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Sun, Dec 24, 2017 at 1:57 AM Bao Nguyen  wrote:

> Hello everyone,
>
> I would like to ask a question about disabling UFO on a virtio vNIC in
> my guest. I have read the document at
> https://libvirt.org/formatdomain.html
>
>
> *host*
>
> The csum, gso, tso4, tso6, ecn and ufo attributes with possible
> values on and off can be used to turn off host offloading options. By
> default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU
> only)* The mrg_rxbuf attribute can be used to control mergeable rx
> buffers on the host side. Possible values are on (default) and off. *Since
> 1.2.13 (QEMU only)*
>
> *guest*
>
> The csum, tso4, tso6, ecn and ufo attributes with possible
> values on and off can be used to turn off guest offloading options. By
> default, the supported offloads are enabled by QEMU.
> *Since 1.2.9 (QEMU only)*
>
>
> Then I disabled UFO on my vNIC on guest as the following configuration
>
> 
>
>   
>
> 
>
> 
>
> 
>
>  queues='5' rx_queue_size='256' tx_queue_size='256'>
>
>   **
>
>   **
>
> 
>
>
>
>   
>
> 
>
>
> Then I reboot my node to get the change effect and it works. However, can
> I disable the UFO without touching the host OS? or it always has to disable
> on both host and guest like that?
>
>
> Thanks,
>
> Brs,
>
> Natsu
>
>
> ___
> libvirt-users mailing list
> libvirt-users@redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users


Re: Could you please help with questions about the net failover feature

2020-07-06 Thread Yalan Zhang
Hi Laine,

In the earlier feature testing, I only tested the Linux bridge setup as in
2), and it works.
Now I have tried 1), using a macvtap device in bridge mode connected to the
PF, but it does not work: the hostdev interface cannot get a DHCP IP
address in the guest.
On the host, /var/log/messages and dmesg both say:

"Jul  6 04:54:45 dell-per730-xx kernel: ixgbe :82:00.1 enp130s0f1: 1
Spoofed packets detected
..
Jul  6 04:56:17 dell-per730-xx kernel: ixgbe :82:00.1 enp130s0f1: 1
Spoofed packets detected
Jul  6 04:56:54 dell-per730-xx kernel: ixgbe :82:00.1 enp130s0f1: 1
Spoofed packets detected
"
(enp130s0f1 is the PF's interface name, and :82:00.1 is the PF's pci
address)
# rpm -q kernel
kernel-4.18.0-193.4.1.el8_2.x86_64

Could you please help to confirm if this is a kernel bug?  Thank you very
much!



You have two choices for the backup virtio interface:

1) it can be a macvtap device connected to the PF of the same SRIOV device.

2) it can be a standard tap device connected to a Linux host bridge
(created outside libvirt in the host system network config) that is
attached to the PF (or alternately one of the VFs that isn't being used
for VMs, or to another physical ethernet adapter on the host that is
connected to the same network.




---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Sun, Mar 22, 2020 at 6:50 AM Laine Stump  wrote:

> On 3/21/20 1:08 AM, Yalan Zhang wrote:
>
> > In my understanding, the standby and primary hostdev interface may be in
> > different subnet.
>
> There is only one hostdev device in the team pair (that will be the one
> with  since it needs to be unplugged
> during migration). The other device must be a virtio device (the one
> with ). And no, they cannot be on different
> subnets. They must both connect into the same ethernet "collision
> domain", such that the guest could assign the same IP address to either
> of them and be able to communicate on the network.
>
> There is some explanation of the use case for this option. and some
> example config, here:
>
>  https://www.libvirt.org/formatdomain.html#elementsTeaming
>
> > I'm not sure whether it is correct. Could you please help to explain?
> > Thank you in advance.
> >
> > For example, primary hostdev is connected to vf-pool with ,
> > while the standby is connected to NAT network with " forward dev='eth0'".
> > The standby interface will get ip as 192.168.122.x, but after NAT, it
> > will be in the same subnet of the vf.
>  >
> > So after the VF is unplugged, the packet will still broadcast in the
> > same subnet, and the vm will get the packet as the standby share the
> > same mac. Right?
>
> No, not right :-)
>
> The VF of an SRIOV network adapter is connected directly to the physical
> network, and will have an IP address that is on that network. Tap
> devices plugged into the default network (or any other libvirt network
> based on a bridge device that is created/managed by libvirt) have no
> direct connection to the physical network, and are on a different
> subnet. The fact that traffic from the guest *seems* to be coming from
> an IP on the physical subnet is meaningless. The *guest* needs to be
> able to use both NICs using the same IP address, and anything plugged
> into the default network will need to have an IP address on a different
> subnet from the perspective of the guest.
>
> You have two choices for the backup virtio interface:
>
> 1) it can be a macvtap device connected to the PF of the same SRIOV device.
>
> 2) it can be a standard tap device connected to a Linux host bridge
> (created outside libvirt in the host system network config) that is
> attached to the PF (or alternately one of the VFs that isn't being used
> for VMs, or to another physical ethernet adapter on the host that is
> connected to the same network.
>
>
> It is simplest to have the same name refer to the connection on the
> source and destination hosts of a migration. That can be handled by
> creating a libvirt network to refer to the bridge device created outside
> libvirt (or to the PF directly if you're going to use macvtap.
>
> For example, if you're going to use macvtap, and the PF's name on the
> host is ens4f0, you'd just create this network:
>
>
>  persistent-net
>  
>
>  
> 
>
> any guest interface with this:
>
>   
> 
> 
> 
> 
> 
>   
>
> will get a macvtap device that's connected to ens4f0 in bridge mode.
>
> Or, if your host has a bridge device called br0 that is directly
> attached to the physical network (in whatever manner, it doesn't
> matter), you can define the network this
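For my own notes, the teamed pair of guest interfaces described above would look roughly like this (a sketch based on the libvirt teaming documentation; the MAC address and network names are placeholders, and both interfaces must share the same MAC):

```xml
<!-- virtio backup NIC: stays attached across migration -->
<interface type='network'>
  <source network='persistent-net'/>
  <mac address='00:11:22:33:44:55'/>
  <model type='virtio'/>
  <alias name='ua-backup0'/>
  <teaming type='persistent'/>
</interface>
<!-- SR-IOV VF: unplugged automatically before migration -->
<interface type='network'>
  <source network='vf-pool'/>
  <mac address='00:11:22:33:44:55'/>
  <teaming type='transient' persistent='ua-backup0'/>
</interface>
```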

Do we need "amd_iommu=on" for AMD system anymore?

2020-06-29 Thread Yalan Zhang
Hi,

For a long time, to enable SR-IOV VF PCI passthrough, I have always added
"amd_iommu=on" to the kernel command line on AMD systems.
But recently I found that even without doing this, the IOMMU is still
enabled by the kernel on AMD systems.
After searching, I found there is no such setting any more; refer to
https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt#L286

There are only 3 possible values:
amd_iommu= [ fullflush | off | force_isolation ]

Could anyone help confirm this change?  Thank you!
And another question: it is said that the "iommu=pt" option improves IO
performance for devices in the host, and that it is not a must for VF PCI
passthrough. Is that right?
I'm not sure about the use cases.
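My current understanding (an assumption I have not verified beyond the link above) is that "iommu=pt" only switches host-owned devices to an identity/passthrough DMA mapping to reduce translation overhead; devices assigned to guests via VFIO still get full IOMMU translation, so it is a host IO performance knob rather than a requirement for VF passthrough. Enabling it is just a kernel command line addition, for example:

```
# /etc/default/grub sketch; the other parameters are host-specific
GRUB_CMDLINE_LINUX="... iommu=pt"
```

(followed by regenerating grub.cfg and rebooting).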

[Reference]
1. Reference about adding "amd_iommu=on", and it may be outdated:
http://dpdk-guide.gitlab.io/dpdk-guide/setup/iommu.html
2. On AMD system without adding "amd_iommu=on"  in the kernel cmdline, the
iommu is enabled:
# cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-193.el8.x86_64
root=/dev/mapper/rhel_hp--dl385g10--16-root ro crashkernel=auto
resume=/dev/mapper/rhel_hp--dl385g10--16-swap
rd.lvm.lv=rhel_hp-dl385g10-16/root
rd.lvm.lv=rhel_hp-dl385g10-16/swap console=ttyS0,115200n81

# dmesg | grep -i iommu
[3.712029] iommu: Default domain type: Passthrough
[6.736019] pci :00:00.2: IOMMU performance counters supported
...
[6.780040] pci :e0:00.2: IOMMU performance counters supported
[6.786740] pci :00:01.0: Adding to iommu group 0
[6.791876] pci :00:01.1: Adding to iommu group 1
[6.797015] pci :00:01.2: Adding to iommu group 2
[6.802150] pci :00:01.4: Adding to iommu group 3
...
[7.866463] pci :e0:00.2: Found IOMMU cap 0x40
[7.920222] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4
counters/bank).
[7.927428] perf/amd_iommu: Detected AMD IOMMU #1 (2 banks, 4
counters/bank)
...


---
Best Regards,
Yalan Zhang
IRC: yalzhang


Could you please help with questions about the net failover feature

2020-03-20 Thread Yalan Zhang
Hi laine,

I left some questions on IRC, but my VPN kept dropping, so please ignore
the questions there.

In my understanding, the standby and the primary hostdev interface may be
in different subnets.
I'm not sure whether that is correct. Could you please help explain? Thank
you in advance.

For example, primary hostdev is connected to vf-pool with ,
while the standby is connected to NAT network with " forward dev='eth0'".
The standby interface will get an IP like 192.168.122.x, but after NAT it
will be in the same subnet as the VF.
So after the VF is unplugged, packets will still be broadcast in the same
subnet, and the VM will receive them since the standby shares the same MAC.
Right?

Thank you!

---
Best Regards,
Yalan Zhang
IRC: yalzhang


Re: [libvirt-users] [Libvirt failed to claim Virtual Functions on hostdev network after VM reboot]

2019-05-28 Thread Yalan Zhang
Hi Fuzail,

Even though there are 8 VFs in total, you only specified 2 in the hostnet
network.
And connections='2' means the hostnet network is in use by 2 interfaces, so
both of the 2 VFs in hostnet are occupied.
Please try defining the network like this:

<network>
  <name>hostnet</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='ens2'/>
  </forward>
</network>
It will include all the 8 VFs from PF ens2.

---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Fri, May 24, 2019 at 6:34 PM Erik Skultety  wrote:

> On Thu, May 23, 2019 at 10:37:56PM +0530, Fuzail Ahmad wrote:
> > Problem Statement:
> >
> > Libvirt failed to claim Virtual Functions  on hostdev network after VM
> > reboot.
> >
> > Version-Release number of selected component (if applicable):
> >
> > libvirtd (libvirt) 2.0.0
>
> I'd just like to point out that 2.0.0 is very old libvirt, even debian 9
> has
> 3.0.n, can you verify you're experiencing the same kind of issue with the
> current upstream?
>
> Regards,
> Erik
>

Re: [libvirt-users] Network filters with clean-traffic not working on Debian Stretch

2018-12-28 Thread Yalan Zhang
Hi Sam,

You can list the rules with the command below; the output looks like this:
# ebtables -t nat --list
Bridge table: nat

Bridge chain: PREROUTING, entries: 2, policy: ACCEPT
-j PREROUTING_direct
-i vnet0 -j libvirt-I-vnet0

Bridge chain: OUTPUT, entries: 1, policy: ACCEPT
-j OUTPUT_direct

Bridge chain: POSTROUTING, entries: 2, policy: ACCEPT
-j POSTROUTING_direct
-o vnet0 -j libvirt-O-vnet0

Bridge chain: PREROUTING_direct, entries: 0, policy: RETURN

Bridge chain: POSTROUTING_direct, entries: 0, policy: RETURN

Bridge chain: OUTPUT_direct, entries: 0, policy: RETURN

Bridge chain: libvirt-I-vnet0, entries: 9, policy: ACCEPT
-j I-vnet0-mac
-p IPv4 -j I-vnet0-ipv4-ip
-p IPv4 -j ACCEPT
-p ARP -j I-vnet0-arp-mac
-p ARP -j I-vnet0-arp-ip
-p ARP -j ACCEPT
-p 0x8035 -j I-vnet0-rarp
-p 0x835 -j ACCEPT
-j DROP

Bridge chain: libvirt-O-vnet0, entries: 4, policy: ACCEPT
-p IPv4 -j O-vnet0-ipv4
-p ARP -j ACCEPT
-p 0x8035 -j O-vnet0-rarp
-j DROP

Bridge chain: I-vnet0-mac, entries: 2, policy: ACCEPT
-s 52:54:0:3a:40:b7 -j RETURN
-j DROP

Bridge chain: I-vnet0-ipv4-ip, entries: 3, policy: ACCEPT
-p IPv4 --ip-src 0.0.0.0 --ip-proto udp -j RETURN
-p IPv4 --ip-src 172.16.1.2 -j RETURN
-j DROP

Bridge chain: O-vnet0-ipv4, entries: 1, policy: ACCEPT
-j ACCEPT

Bridge chain: I-vnet0-arp-mac, entries: 2, policy: ACCEPT
-p ARP --arp-mac-src 52:54:0:3a:40:b7 -j RETURN
-j DROP

Bridge chain: I-vnet0-arp-ip, entries: 2, policy: ACCEPT
-p ARP --arp-ip-src 172.16.1.2 -j RETURN
-j DROP

Bridge chain: I-vnet0-rarp, entries: 2, policy: ACCEPT
-p 0x8035 -s 52:54:0:3a:40:b7 -d Broadcast --arp-op Request_Reverse
--arp-ip-src 0.0.0.0 --arp-ip-dst 0.0.0.0 --arp-mac-src 52:54:0:3a:40:b7
--arp-mac-dst 52:54:0:3a:40:b7 -j ACCEPT
-j DROP

Bridge chain: O-vnet0-rarp, entries: 2, policy: ACCEPT
-p 0x8035 -d Broadcast --arp-op Request_Reverse --arp-ip-src 0.0.0.0
--arp-ip-dst 0.0.0.0 --arp-mac-src 52:54:0:3a:40:b7 --arp-mac-dst
52:54:0:3a:40:b7 -j ACCEPT
-j DROP

For interface set as:

  
  
  
  
  

  
  
  




---
Best Regards,
Yalan Zhang
IRC: yalzhang


On Wed, Dec 26, 2018 at 12:28 AM fatal  wrote:

> Hello,
>
> I recently stumbled over the libvirt network filter capabilities and
> got pretty excited. Unfortunately I'm not able to get the
> "clean-traffic" filterset working. I'm using a freshly installed Debian
> Stretch with libvirt, qemu and KVM.
>
> My config snippet looks as follows:
>
> sudo virsh edit 
>
> [...]
> 
>   
>   
>   
>   
> 
>
>function='0x0'/>
> 
> 
>   
>   
>   
>   
> 
>
>function='0x0'/>
> 
> [...]
>
> I restarted the VM from within the VM, did a "virsh reboot ",
> restarted libvirtd and even did a reboot of the host - just to be sure.
> Unfortunately neither "iptables -L" nor "ebtables --list" show any
> entries added by libvirt. Also omitting the "parameter name='IP'" part
> didn't change anything.
>
> There are no error messages in /var/log/syslog nor in
> /var/log/libvirt/qemu/
>
> My main references were:
>
> https://libvirt.org/firewall.html
> https://libvirt.org/formatnwfilter.html
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-applying_network_filtering
>
> https://www.berrange.com/posts/2011/10/03/guest-mac-spoofing-denial-of-service-and-preventing-it-with-libvirt-and-kvm/
>
> Any help really would be much appreciated!
>
> Thanks a lot!
>
> Sam
>

[libvirt-users] multiple devices in the same iommu group in L1 guest

2018-07-03 Thread Yalan Zhang
Hi,

I have a guest with vIOMMU enabled, but in the guest there are several
devices in the same IOMMU group.
Could someone help check whether I missed something?
Thank you very much!

1. guest xml:
# virsh edit q
...

hvm
/usr/share/OVMF/OVMF_CODE.secboot.fd
/var/lib/libvirt/qemu/nvram/q_VARS.fd
  
...
 
 ...

  
 

  
...

...
 
  
  
  


  
  
  
  


  


...
2. guest has 'intel_iommu=on' enabled in kernel cmdline, then reboot guest

3. log in guest to check:
# dmesg  | grep -i DMAR
[0.00] ACPI: DMAR 7d83f000 00050 (v01 BOCHS  BXPCDMAR
0001 BXPC 0001)
[0.00] DMAR: IOMMU enabled
[0.155178] DMAR: Host address width 39
[0.155180] DMAR: DRHD base: 0x00fed9 flags: 0x1
[0.155221] DMAR: dmar0: reg_base_addr fed9 ver 1:0 cap
12008c22260286 ecap f00f5e
[0.155228] DMAR: ATSR flags: 0x1
[0.155231] DMAR-IR: IOAPIC id 0 under DRHD base  0xfed9 IOMMU 0
[0.155232] DMAR-IR: Queued invalidation will be enabled to support
x2apic and Intr-remapping.
[0.156843] DMAR-IR: Enabled IRQ remapping in x2apic mode
[2.112369] DMAR: No RMRR found
[2.112505] DMAR: dmar0: Using Queued invalidation
[2.112669] DMAR: Setting RMRR:
[2.112671] DMAR: Prepare 0-16MiB unity mapping for LPC
[2.112820] DMAR: Setting identity map for device :00:1f.0 [0x0 -
0xff]
[2.211577] DMAR: Intel(R) Virtualization Technology for Directed I/O
===> This is expected

# dmesg  | grep -i iommu  |grep device
[2.212267] iommu: Adding device :00:00.0 to group 0
[2.212287] iommu: Adding device :00:01.0 to group 1
[2.212372] iommu: Adding device :00:02.0 to group 2
[2.212392] iommu: Adding device :00:02.1 to group 2
[2.212411] iommu: Adding device :00:02.2 to group 2
[2.212444] iommu: Adding device :00:02.3 to group 2
[2.212464] iommu: Adding device :00:02.4 to group 2
[2.212482] iommu: Adding device :00:02.5 to group 2
[2.212520] iommu: Adding device :00:1d.0 to group 3
[2.212533] iommu: Adding device :00:1d.1 to group 3
[2.212541] iommu: Adding device :00:1d.2 to group 3
[2.212550] iommu: Adding device :00:1d.7 to group 3
[2.212567] iommu: Adding device :00:1f.0 to group 4
[2.212576] iommu: Adding device :00:1f.2 to group 4
[2.212585] iommu: Adding device :00:1f.3 to group 4
[2.212599] iommu: Adding device :01:00.0 to group 2
[2.212605] iommu: Adding device :02:01.0 to group 2
[2.212621] iommu: Adding device :04:00.0 to group 2
[2.212634] iommu: Adding device :05:00.0 to group 2
[2.212646] iommu: Adding device :06:00.0 to group 2
[2.212657] iommu: Adding device :07:00.0 to group 2
> several devices in the same iommu group
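To tally the grouping, I used a small awk sketch against a few sample lines from the dmesg output above:

```shell
# Count devices per IOMMU group from "Adding device ... to group N" lines.
awk '/Adding device/ {n[$NF]++}
     END {for (g in n) print "group " g ": " n[g] " device(s)"}' <<'EOF' | sort
[2.212267] iommu: Adding device :00:00.0 to group 0
[2.212287] iommu: Adding device :00:01.0 to group 1
[2.212372] iommu: Adding device :00:02.0 to group 2
[2.212392] iommu: Adding device :00:02.1 to group 2
EOF
```

For the sample above this prints one line per group: 1 device in group 0, 1 in group 1, and 2 in group 2.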

# virsh nodedev-dumpxml pci__07_00_0

  pci__07_00_0
  /sys/devices/pci:00/:00:02.5/:07:00.0
  pci__00_02_5
  
e1000
  
  
0
7
0
0
82540EM Gigabit Ethernet Controller
Intel Corporation

  
  
  
  
  
  
  
  
  
  
  
  

  


Thus, I cannot attach the device to the L2 guest:
# cat hostdev.xml


  

  
# virsh attach-device rhel hostdev.xml
error: Failed to attach device from hostdev.xml
error: internal error: unable to execute QEMU command 'device_add': vfio
error: :07:00.0: group 2 is not viable


---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

[libvirt-users] problem when use tls to connect libvirt

2017-12-06 Thread Yalan Zhang
Hi guys,

I met a problem when using TLS to connect to libvirt.
When I set the CN in client.info and server.info to the hostname (FQDN),
the TLS check fails when connecting by IP; and vice versa, when the CN is
set to the IP address, the TLS check fails when connecting by hostname.
Only connecting with exactly the name set in the CN succeeds. Is this
expected, or was there some issue in my environment or setup steps?


1. Set up the TLS environment with the hostname; connecting by IP then fails:

# virsh -c qemu+tls://192.168.122.4/system
2017-12-06 13:24:52.346+: 3954: info : libvirt version: x.x.x, package:
4.el7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>,
2017-11-30-07:57:27, x.x.x.redhat.com)
2017-12-06 13:24:52.346+: 3954: info : hostname: work.englab.cn
2017-12-06 13:24:52.346+: 3954: warning :
virNetTLSContextCheckCertificate:1125 : Certificate check failed
Certificate [session] owner does not match the hostname 192.168.122.4
error: failed to connect to the hypervisor
error: authentication failed: Failed to verify peer's certificate

2. Connecting with the hostname exactly as set in the CN succeeds:

# virsh -c qemu+tls://test.englab.cn/system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
   'quit' to quit

virsh #


# ping test.englab.cn
PING test.englab.cn (192.168.122.4) 56(84) bytes of data.
64 bytes from test.englab.cn (192.168.122.4): icmp_seq=1 ttl=64 time=0.235
ms
64 bytes from test.englab.cn (192.168.122.4): icmp_seq=2 ttl=64 time=0.204
ms
...
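One possible approach (an untested sketch) to make both forms work is to issue the server certificate with both the FQDN and the IP address in it; a certtool server.info template would look roughly like this (assuming gnutls-utils; the organization value is a placeholder and the names are the ones from this example):

```
organization = Example Org
cn = test.englab.cn
dns_name = test.englab.cn
ip_address = 192.168.122.4
tls_www_server
encryption_key
signing_key
```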



---
Best Regards,
Yalan Zhang

Re: [libvirt-users] libvirt/dnsmasq is not adhering to static DHCP assignments

2017-10-30 Thread Yalan Zhang
Please check the dnsmasq hostsfile; it should have all the records:

# cat /var/lib/libvirt/dnsmasq/osc_mgmt.hostsfile
52:54:00:2c:85:92,192.168.80.1,openstack-controller-00
52:54:00:e2:4b:25,192.168.80.2,openstack-database-00
52:54:00:50:91:04,192.168.80.3,openstack-keystone-00
52:54:00:fe:5b:36,192.168.80.7,openstack-rabbitmq-00
52:54:00:95:ca:bd,192.168.80.5,openstack-glance-00
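If an entry is ever missing, it can be added to a running network without recreating it (a sketch; the network name and host values are the ones from this thread):

```
virsh net-update osc_mgmt add ip-dhcp-host \
  "<host mac='52:54:00:e2:4b:25' ip='192.168.80.2'/>" \
  --live --config
```

With --live, libvirt should rewrite the hostsfile and notify dnsmasq itself.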





---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

On Mon, Oct 30, 2017 at 8:26 AM, Dagmawi Biru <ad...@dbvoid.com> wrote:

> Given the following network configuration:
> ===
> 
>   osc_mgmt
>   d93fe709-14ae-4a0e-8989-aeaa8c76c513
>   
>   
>   
>   
> 
>   
>ip='192.168.80.1'/>
>ip='192.168.80.2'/>
>ip='192.168.80.3'/>
>ip='192.168.80.7'/>
>ip='192.168.80.5'/>
> 
>   
> 
>
> When attempting to bring up the relevant interface in the virtual machine,
> I get an incorrect IP address assigned, different from the one I statically
> set up per the XML above. As you can see, the device with MAC
> '52:54:00:e2:4b:25' should really be getting 192.168.80.2, but what happens
> when we bring this interface up is this:
>
> ===
> root@openstack-database-00:/home/osc# ifup ens11
> Internet Systems Consortium DHCP Client 4.3.3
> Copyright 2004-2015 Internet Systems Consortium.
> All rights reserved.
> For info, please visit https://www.isc.org/software/dhcp/
>
> Listening on LPF/ens11/...
> Sending on   LPF/ens11/...
> Sending on   Socket/fallback
> DHCPDISCOVER on ens11 to 255.255.255.255 port 67 interval 3
> (xid=0x6769e42a)
> DHCPREQUEST of 192.168.80.27 on ens11 to 255.255.255.255 port 67
> (xid=0x2ae46967)
> DHCPOFFER of 192.168.80.27 from 192.168.80.254
> DHCPACK of 192.168.80.27 from 192.168.80.254
> bound to 192.168.80.27 -- renewal in 1407 seconds.
>
>
> Some additional info about the VM and network it's attached to
> ===
>
> [root@dragon dnsmasq]# virsh domiflist openstack-database-00
> Interface  Type   Source Model   MAC
> ---
> vnet12 bridge br20   virtio  52:54:00:6c:ce:b9
> vnet13 networkVM_MGMTrtl8139 52:54:00:7d:ca:87
> vnet14 networkosc_mgmt   rtl8139 52:54:00:e2:4b:25
>
> [root@dragon dnsmasq]# virsh net-info osc_mgmt
> Name:   osc_mgmt
> UUID:   d93fe709-14ae-4a0e-8989-aeaa8c76c513
> Active: yes
> Persistent: yes
> Autostart:  yes
> Bridge: osc_mgmt
> ===
>
>
> What's strange is that the first VM seems to work correctly and gets an
> assigned address of 192.168.80.1, but for some reason the others don't. Any
> ideas?
>
>
>
>

Re: [libvirt-users] Need to increase the rx and tx buffer size of my interface

2017-10-26 Thread Yalan Zhang
Hi Ashish,

IMO, yes: there is no way to increase tx_queue_size for a direct type interface






---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

On Thu, Oct 26, 2017 at 3:38 PM, Ashish Kurian <ashish...@gmail.com> wrote:

> Hi Yalan,
>
> In the previous email you mentioned "tx_queue_size='512' will not work in
> the guest with direct type interface, in fact, no matter what you set, it
> will not work and guest will get the default '256'. "
>
> So if I am using macvtap for my interfaces, then the device type will
> always be direct type. Does it mean that there is no way I can increase the
> buffer size with the macvtap interfaces?
>
>
>
> Best Regards,
> Ashish Kurian
>
> On Thu, Oct 26, 2017 at 9:04 AM, Ashish Kurian <ashish...@gmail.com>
> wrote:
>
>> Hi Yalan,
>>
>> Thank you for your comment on qemu-kvm-rhev
>>
>> I am waiting for a response about my previous email with the logs
>> attached. I do not understand what is the problem.
>>
>>
>> On Oct 26, 2017 8:58 AM, "Yalan Zhang" <yalzh...@redhat.com> wrote:
>>
>> Hi Ashish,
>>
>> Please never mind for qemu-kvm-rhev.
>> qemu with the code base 2.10.0 will support the tx_queue_size and
>> rx_queue_size.
>>
>> Thank you~
>>
>>
>>
>>
>>
>> ---
>> Best Regards,
>> Yalan Zhang
>> IRC: yalzhang
>> Internal phone: 8389413
>>
>> On Thu, Oct 26, 2017 at 2:22 PM, Yalan Zhang <yalzh...@redhat.com> wrote:
>>
>>> Hi Ashish,
>>>
>>> Are these packages available for free? How can I install them?
>>> => You did have the vhost backend driver. Do not set <driver name='qemu' .../>; by default it will use vhost as the backend driver.
>>>
>>>  Is it possible to have my interfaces with an IP address inside the VM
>>> to be bridged to the physical interfaces on the host?
>>> => Yes, you can create a linux bridge with physical interface connected,
>>> and use bridge type interface. Refer to https://libvirt.org/formatd
>>> omain.html#elementsNICSBridge
>>> direct type is also ok (but your host and guest have no access to each
>>> other).
>>>
>>> Is it also a possibility that I change the rx and tx buffer on the
>>> physical interface on the host and it is reflected automatically inside the
>>> VM as you said it will always receive the default value of the host?
>>> => No, it does not receive the default value of the host. It's the default
>>> value related to the virtual device driver on the guest.
>>> hostdev type interface will passthrough the physical interface or VF of
>>> the host to guest, it will get the device's parameters for rx and tx buffer.
>>>
>>>
>>>
>>> ---
>>> Best Regards,
>>> Yalan Zhang
>>> IRC: yalzhang
>>> Internal phone: 8389413
>>>
>>> On Thu, Oct 26, 2017 at 1:30 PM, Ashish Kurian <ashish...@gmail.com>
>>> wrote:
>>>
>>>> Hi Yalan,
>>>>
>>>> Thank you for your response. I do not have the following packages
>>>> installed
>>>>
>>>> vhost backend driver
>>>> qemu-kvm-rhev package
>>>>
>>>> Are these packages available for free? How can I install them?
>>>>
>>>> In my KVM VM, I must have an IP address to the interfaces that I am
>>>> trying to increasing the buffers. That is the reason I was using macvtap
>>>> (direct type interface). Is it possible to have my interfaces with an IP
>>>> address inside the VM to be bridged to the physical interfaces on the host?
>>>>
>>>> Is it also a possibility that I change the rx and tx buffer on the
>>>> physical interface on the host and it is reflected automatically inside the
>>>> VM as you said it will always receive the default value of the host?
>>>>
>>>>
>>>> Best Regards,
>>>> Ashish Kurian
>>>>
>>>> On Thu, Oct 26, 2017 at 6:45 AM, Yalan Zhang <yalzh...@redhat.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> I have tested with your xml in the first mail, and it works for 
>>>>> rx_queue_size(see
>>>>> below).
>>>>> multiqueue needs to work with the vhost backend driver. And when you set
>>>>> "queues=1" it will be ignored.
>>>>>
>>>>> Please check your qemu-kvm-rhev 

Re: [libvirt-users] Need to increase the rx and tx buffer size of my interface

2017-10-26 Thread Yalan Zhang
Hi Ashish,

Are these packages available for free? How can I install them?
=> You did have the vhost backend driver. Do not set <driver name='qemu' .../>;
by default it will use vhost as the backend driver.

 Is it possible to have my interfaces with an IP address inside the VM to
be bridged to the physical interfaces on the host?
=> Yes, you can create a linux bridge with physical interface connected,
and use bridge type interface. Refer to https://libvirt.org/
formatdomain.html#elementsNICSBridge
direct type is also ok (but your host and guest have no access to each
other).
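
Moving from a direct (macvtap) interface to a bridge type interface is mostly a matter of rewriting the `<interface>` element: the type changes and `<source dev=... mode=.../>` becomes `<source bridge=.../>`. A small sketch of that rewrite (the device and bridge names are illustrative; it assumes a Linux bridge, e.g. created with `ip link add br0 type bridge`, already enslaves the physical NIC):

```python
import xml.etree.ElementTree as ET

# A direct (macvtap) interface definition; 'eth0' is illustrative.
DIRECT_IFACE = """
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
"""

def to_bridge_type(iface_xml, bridge):
    """Rewrite a direct-type interface into a bridge-type one:
    change the type attribute and point <source> at the bridge."""
    iface = ET.fromstring(iface_xml)
    iface.set("type", "bridge")
    src = iface.find("source")
    src.attrib.clear()          # drop dev= and mode=
    src.set("bridge", bridge)
    return ET.tostring(iface, encoding="unicode")

print(to_bridge_type(DIRECT_IFACE, "br0"))
```

The resulting element can then be applied with `virsh edit` or `virsh update-device`.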

Is it also a possibility that I change the rx and tx buffer on the physical
interface on the host and it is reflected automatically inside the VM as
you said it will always receive the default value of the host?
=> No, it does not receive the default value of the host. It's the default
value related to the virtual device driver on the guest.
hostdev type interface will passthrough the physical interface or VF of the
host to guest, it will get the device's parameters for rx and tx buffer.



---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

On Thu, Oct 26, 2017 at 1:30 PM, Ashish Kurian <ashish...@gmail.com> wrote:

> Hi Yalan,
>
> Thank you for your response. I do not have the following packages installed
>
> vhost backend driver
> qemu-kvm-rhev package
>
> Are these packages available for free? How can I install them?
>
> In my KVM VM, I must have an IP address to the interfaces that I am trying
> to increasing the buffers. That is the reason I was using macvtap (direct
> type interface). Is it possible to have my interfaces with an IP address
> inside the VM to be bridged to the physical interfaces on the host?
>
> Is it also a possibility that I change the rx and tx buffer on the
> physical interface on the host and it is reflected automatically inside the
> VM as you said it will always receive the default value of the host?
>
>
> Best Regards,
> Ashish Kurian
>
> On Thu, Oct 26, 2017 at 6:45 AM, Yalan Zhang <yalzh...@redhat.com> wrote:
>
>> Hi Ashish,
>>
>> I have tested with your xml in the first mail, and it works for 
>> rx_queue_size(see
>> below).
>> multiqueue need to work with vhost backend driver. And when you set
>> "queues=1" it will ignored.
>>
>> Please check your qemu-kvm-rhev package, should be newer than
>> qemu-kvm-rhev-2.9.0-16.el7_4.2
>> And the logs?
>>
>> tx_queue_size='512' will not work in the guest with direct type
>> interface, in fact, no matter what you set, it will not work and guest will
>> get the default '256'.
>> We only support vhost-user backend to have more than 256. refer to
>> https://libvirt.org/formatdomain.html#elementsNICSEthernet
>>
>> tx_queue_size
>> The optional tx_queue_size attribute controls the size of virtio ring
>> for each queue as described above. The default value is hypervisor
>> dependent and may change across its releases. Moreover, some hypervisors
>> may pose some restrictions on actual value. For instance, QEMU v2.9
>> requires value to be a power of two from [256, 1024] range. In addition to
>> that, this may work only for a subset of interface types, e.g.
>> aforementioned QEMU enables this option only for vhostuser type. Since
>> 3.7.0 (QEMU and KVM only)
>> multiqueue only supports vhost as backend driver.
>>
>> # rpm -q libvirt qemu-kvm-rhev
>> libvirt-3.2.0-14.el7_4.3.x86_64
>> qemu-kvm-rhev-2.9.0-16.el7_4.9.x86_64
>>
>> 1. the xml as below
>>
>>   
>>   
>>   
>> <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>> <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>> <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>> </driver>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> 2. after start the vm, check the qemu command line:
>> *-netdev
>> tap,fds=26:28:29:30:31,id=hostnet0,vhost=on,vhostfds=32:33:34:35:36*
>> -device virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,
>> host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest
>> _tso4=off,guest_tso6=off,guest_ecn=off,guest_ufo=off,
>> *mq=on,vectors=12,rx_queue_size=512,tx_queue_size=512*,netdev=hostnet
>> 0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3
>>
>> 3. check on guest
>> # ethtool -g eth0
>> Ring parameters for eth0:
>> Pre-set maximums:
>> RX: *512 ==> rx_queue_size works*
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: *256   ===> no change*
>> Current hardware settings:
>> RX: *512 **==> rx_queue_size works*
>> RX Mini: 0
>> RX Jumbo: 0
>> TX: *256 ===> no change*
>>
>> # ethtool -l eth0
>> Channel parameters for eth0:
>

Re: [libvirt-users] Need to increase the rx and tx buffer size of my interface

2017-10-25 Thread Yalan Zhang
Hi Ashish,

I have tested with your xml in the first mail, and it works for
rx_queue_size (see below).
multiqueue needs to work with the vhost backend driver. And when you set
"queues=1" it will be ignored.

Please check your qemu-kvm-rhev package, should be newer than
qemu-kvm-rhev-2.9.0-16.el7_4.2
And the logs?

tx_queue_size='512' will not work in the guest with a direct type interface;
in fact, no matter what you set, it will not work and the guest will get the
default '256'.
Only the vhost-user backend supports more than 256; refer to
https://libvirt.org/formatdomain.html#elementsNICSEthernet

tx_queue_size
The optional tx_queue_size attribute controls the size of virtio ring for
each queue as described above. The default value is hypervisor dependent
and may change across its releases. Moreover, some hypervisors may pose
some restrictions on actual value. For instance, QEMU v2.9 requires value
to be a power of two from [256, 1024] range. In addition to that, this may
work only for a subset of interface types, e.g. aforementioned QEMU enables
this option only for vhostuser type. Since 3.7.0 (QEMU and KVM only)
multiqueue only supports vhost as backend driver.
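
The QEMU v2.9 constraint quoted above (a power of two within [256, 1024]) can be checked before editing the XML. A small sketch (the helper name is mine, not a libvirt or QEMU API):

```python
def valid_virtio_queue_size(n):
    """QEMU v2.9 accepts virtio ring sizes that are powers of two
    within the [256, 1024] range, i.e. 256, 512 or 1024."""
    return 256 <= n <= 1024 and (n & (n - 1)) == 0

for size in (128, 256, 384, 512, 1024, 2048):
    print(size, valid_virtio_queue_size(size))
```

Values outside this set (such as 384 or 2048) would be rejected by QEMU at startup.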

# rpm -q libvirt qemu-kvm-rhev
libvirt-3.2.0-14.el7_4.3.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.9.x86_64

1. the xml as below
   
  
  
  
  


  
  


2. after start the vm, check the qemu command line:
*-netdev
tap,fds=26:28:29:30:31,id=hostnet0,vhost=on,vhostfds=32:33:34:35:36*
-device
virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,guest_ufo=off,
*mq=on,vectors=12,rx_queue_size=512,tx_queue_size=512*
,netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3

3. check on guest
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: *512 ==> rx_queue_size works*
RX Mini: 0
RX Jumbo: 0
TX: *256   ===> no change*
Current hardware settings:
RX: *512 **==> rx_queue_size works*
RX Mini: 0
RX Jumbo: 0
TX: *256 ===> no change*

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: *5  ==> queues what we set*
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 1
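
Which of the two settings actually took effect can be read mechanically from the guest-side `ethtool -g` text. A sketch that parses that output format (the sample is the output quoted above; the parser is a simple illustration, not how ethtool itself works):

```python
def parse_ring_params(text):
    """Parse `ethtool -g` output into a dict of sections, e.g.
    {'Pre-set maximums': {'RX': '512', ...},
     'Current hardware settings': {...}}."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if not value:                    # a header like "Pre-set maximums:"
            current = key
            sections[current] = {}
        elif current is not None:
            sections[current][key] = value
    return sections

SAMPLE = """\
Ring parameters for eth0:
Pre-set maximums:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             256
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             256
"""

rings = parse_ring_params(SAMPLE)
print(rings["Pre-set maximums"]["RX"])          # rx_queue_size took effect
print(rings["Current hardware settings"]["TX"]) # tx stays at the default
```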


If changing to qemu as the driver,
# virsh edit rhel7
..
  
  
  
  
  


  
  

..
Domain rhel7 XML configuration edited. ==> the xml can validate and save

# virsh start rhel7
Domain rhel7 started


# virsh dumpxml rhel7 | grep /interface -B9
  
  
  
  **


  
  
  



* -netdev tap,fds=26:28:29:30:31*,id=hostnet0 -device
virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,guest_ufo=off,
*rx_queue_size=512,tx_queue_size=512*
,netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3

*"mq=on,vectors=12" is missing*, which indicates there is no multiqueue

and check on guest

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 1  ==> no multiqueue
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 1

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: *512*
RX Mini: 0
RX Jumbo: 0
TX: 256
Current hardware settings:
RX: *512*
RX Mini: 0
RX Jumbo: 0
TX: 256




---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

On Thu, Oct 26, 2017 at 2:33 AM, Ashish Kurian <ashish...@gmail.com> wrote:

> Hi Michal,
>
> An update to what I have already said : when I try adding <driver name='qemu' txmode='iothread' ioeventfd='on' event_idx='off' queues='1'
> rx_queue_size='512' tx_queue_size='512'> although it showed me the error as
> mentioned, when I checked the xml again I saw that <driver name='qemu' txmode='iothread' ioeventfd='on' event_idx='off'> is added to the
> interface.
>
> The missing parameters are : queues='1' rx_queue_size='512'
> tx_queue_size='512'
>
> Best Regards,
> Ashish Kurian
>
> On Wed, Oct 25, 2017 at 5:07 PM, Ashish Kurian <ashish...@gmail.com>
> wrote:
>
>> Hi Michal,
>>
>> What I found was that when I restarted the machine and did a virsh edit
>> command to see the xml config, I see that it is was not actually changed.
>> This suggests why I saw 256 again after restarting.
>>
>> So now I tried again to edit the xml via virsh edit command and used the
>> following to set the parameters.
>>
>> <driver name='qemu' txmode='iothread' ioeventfd='on' event_idx='off' queues='1' rx_queue_size='512' tx_queue_size='512'>
>> </driver>
>>
>> It was not accepted and I got the error saying :
>>
>>
>> error: XML document failed to validate against schema: Unable to validate
>> doc against /usr/share/libvirt/schemas/domain.rng
>> Extra element devices in interleave
>> Element domain fail

[libvirt-users] question about how to set rng device on vm

2017-10-25 Thread Yalan Zhang
Hi Amos,

I'm a libvirt QE, and I cannot understand the settings on libvirt.org for
the rng device.
Could you please help to explain a little?
(The xml in  https://libvirt.org/formatdomain.html#elementsRng)

  <devices>
    <rng model='virtio'>
      <rate period="2000" bytes="1234"/>
      <backend model='random'>/dev/random</backend>
      <backend model='egd' type='udp'>
        *<source mode='bind' service='1234'/>*
        *<source mode='connect' host='1.2.3.4' service='1234'/>*
      </backend>
    </rng>
  </devices>

How does it work with source mode='bind' and source mode='connect' together?
Which process on the guest or host acts as the server part, and which as the
client part?

One detail example:
start a vm with the below device, and no egd running on the host:
  <rng model='virtio'>
    <backend model='egd' type='udp'>
      <source mode='bind' service='1234'/>
      <source mode='connect' host='127.0.0.1' service='1234'/>
    </backend>
  </rng>

qemu command line:
-chardev udp,id=charrng0,host=127.0.0.1,port=1234,localaddr=,localport=1234
-object rng-egd,id=objrng0,chardev=charrng0 -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x9


In my understanding the purpose of the rng device on the guest is to provide
the guest a hardware RNG device /dev/hwrng which obtains seeds from the host.
The source can be /dev/random on the host, then the xml will be:
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>
can be hardware on host:
<rng model='virtio'>
  <backend model='random'>/dev/hwrng</backend>
</rng>
can be an egd daemon running on host:
<rng model='virtio'>
  <backend model='egd' type='tcp'>
    <source mode='connect' host='127.0.0.1' service='1234'/>
  </backend>
</rng>
(on host, there should be an egd daemon running on tcp 127.0.0.1:1234
 # egd.pl --debug-client --nofork localhost:1234)
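
As I understand the bind/connect question above: the egd daemon is the server side, and QEMU's rng-egd chardev is the client, which binds a local UDP port (mode='bind') and sends requests to the daemon's address (mode='connect'). The sketch below shows those two UDP endpoints; the single request byte is a simplification of the real EGD wire protocol, not the actual protocol:

```python
import os
import socket

# Stand-in for the egd daemon (the server side): bound to a UDP port,
# replying to each request datagram with random bytes.
daemon = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
daemon.bind(("127.0.0.1", 0))
daemon_addr = daemon.getsockname()

# Stand-in for QEMU's chardev (the client side): bind = local endpoint,
# connect = the daemon's endpoint it sends requests to.
qemu_side = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
qemu_side.bind(("127.0.0.1", 0))       # <source mode='bind' .../>
qemu_side.connect(daemon_addr)         # <source mode='connect' .../>
qemu_side.send(b"\x01")                # simplified "give me entropy" request

_, peer = daemon.recvfrom(1)
daemon.sendto(os.urandom(16), peer)    # daemon replies with seed bytes

seed = qemu_side.recv(64)
print(len(seed))
```

This also suggests why the device stalls when no egd daemon is listening: the request datagrams have no receiver, so no seed bytes ever come back.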

Thank you very much; I look forward to your response!


---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413

[libvirt-users] [virtual interface] detach interface during boot succeed with no changes

2017-09-04 Thread Yalan Zhang
Hi guys,

when I detach an interface from a vm during boot (before the vm has finished
booting), it always fails. I'm not sure if there is an existing bug. I have
confirmed with someone that disks show similar behavior; is this also
acceptable?

# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2;  virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed

Domain rhel7.2 started

Interface detached successfully

  


  
  
  
  
  
  


When I detach after the vm has booted, expanding the sleep time to 10, it succeeds.

# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10;  virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed

Domain rhel7.2 started

Interface detached successfully
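
Given this timing sensitivity, a client-side workaround is to retry the detach and poll the live XML until the device is really gone, since "Interface detached successfully" alone is not enough while the guest is still booting. A generic sketch (the virsh calls are stubbed out here; in real use they would shell out to `virsh detach-interface` and `virsh dumpxml`):

```python
import time

def detach_with_retry(try_detach, still_present, attempts=5, delay=2.0):
    """Call try_detach(), then check still_present(); repeat until the
    device actually disappears from the domain XML or we give up."""
    for _ in range(attempts):
        try_detach()
        if not still_present():
            return True
        time.sleep(delay)
    return False

# Stub demo: the device only disappears on the third check, mimicking a
# guest that ignores hot-unplug requests early in boot.
state = {"checks": 0}

def fake_detach():
    pass

def fake_present():
    state["checks"] += 1
    return state["checks"] < 3

print(detach_with_retry(fake_detach, fake_present, delay=0))
```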


---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
