Re: [libvirt-users] OVS / KVM / libvirt / MTU

2019-07-31 Thread Michal Privoznik

On 7/29/19 9:23 PM, Sven Vogel wrote:

> Hi Michal,
>
> Thanks for your answer.
>
> I don’t understand why an interface created without an MTU only shows
> 1500 inside the virtual machine, yet if I create an interface with an
> MTU higher than 1500, e.g. 2000, the bridge MTU changes too. Before
> that, the bridge was at e.g. 9000.
> I ask because you wrote that if I don’t set an MTU on the interface, I
> will get the MTU of the bridge. But that doesn’t seem to be the case.
>
> Can you clarify this a little further for me?


I don't know enough about OVS internals to answer that, sorry. Maybe we 
should ask the OVS developers why the OVS bridge behaves this way.


Michal

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] OVS / KVM / libvirt / MTU

2019-07-31 Thread Alvin Starr

In general, if no MTU is set when an interface is created, the default 
value is 1500.

On OVS the bridge MTU is automatically set to the smallest port MTU, so 
you just have to set the MTU on each port of the bridge.
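That rule can be sketched without OVS at all. The following is only a toy model of the behaviour described above (the function name is made up), not OVS code:

```shell
# Toy model: the bridge MTU follows the smallest MTU among its ports,
# and a port created without an explicit MTU gets the 1500 default.
bridge_mtu() {
    min=
    for mtu in "$@"; do
        # "-" stands for a port created without an explicit MTU
        [ "$mtu" = "-" ] && mtu=1500
        if [ -z "$min" ] || [ "$mtu" -lt "$min" ]; then
            min=$mtu
        fi
    done
    echo "$min"
}

bridge_mtu 9000 -       # prints 1500: the defaulted port drags the bridge down
bridge_mtu 9000 2000    # prints 2000: matches Sven's observation
```

This matches both symptoms from the thread: a 9000 bridge drops to 1500 when a no-MTU port is added, and drops to 2000 when a 2000-MTU port is added.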


Take a look at: https://bugzilla.redhat.com/show_bug.cgi?id=1160897
It is a bit of a pain to read, but it seems to confirm the statement about 
the OVS bridge MTU being set by the MTU of the ports.


I have been using OVS with Xen, and a few years ago I had to wrap my head 
around this problem.
Once you start setting the MTU to something other than 1500, you will 
find you need to set the MTU on ALL of your interfaces.

Otherwise some inherited default will bite you.
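As a sketch of that advice (bridge and port names here are illustrative, not from this thread, and `mtu_request` needs OVS 2.6 or newer):

```shell
# Pin the MTU on every port of an OVS bridge so no inherited
# 1500 default can pull the bridge MTU down. br0 is illustrative.
for port in $(ovs-vsctl list-ports br0); do
    ovs-vsctl set Interface "$port" mtu_request=9000
done
ovs-vsctl set Interface br0 mtu_request=9000   # the bridge-internal port too

ovs-vsctl get Interface br0 mtu                # verify what OVS settled on
```

Don't forget the MTU inside each guest as well; libvirt can set it from the domain or network XML with an `<mtu size='9000'/>` element, if your libvirt is new enough.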


On 7/31/19 4:39 AM, Michal Privoznik wrote:

> On 7/29/19 9:23 PM, Sven Vogel wrote:
>
>> Hi Michal,
>>
>> Thanks for your answer.
>>
>> I don’t understand why an interface created without an MTU only shows
>> 1500 inside the virtual machine, yet if I create an interface with an
>> MTU higher than 1500, e.g. 2000, the bridge MTU changes too. Before
>> that, the bridge was at e.g. 9000.
>> I ask because you wrote that if I don’t set an MTU on the interface, I
>> will get the MTU of the bridge. But that doesn’t seem to be the case.
>>
>> Can you clarify this a little further for me?
>
> I don't know enough about OVS internals to answer that, sorry. Maybe 
> we should ask the OVS developers why the OVS bridge behaves this way.
>
> Michal



--
Alvin Starr   ||   land:  (647)478-6285
Netvel Inc.   ||   Cell:  (416)806-0133
al...@netvel.net  ||


[libvirt-users] libvirt/dnsmasq is not adhering to static DHCP assignments

2019-07-31 Thread Christian Kujau
This is basically a continuation of an older posting[0] I found, but 
apparently no solution was ever posted. So I'm trying to set up static 
DHCP leases with the dnsmasq instance that is started by libvirtd:


---
$ sudo virsh net-dumpxml --network default
<network>
  [...]
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.130' end='192.168.122.250'/>
      <host mac='08:00:27:e2:81:39' name='f30' ip='192.168.56.139'/>
    </dhcp>
  </ip>
</network>
---


And the domain is indeed being started with that hardware address:


---
$ virsh dumpxml --domain f30 | grep -B1 -A3 mac\ a

    <interface type='network'>
      <mac address='08:00:27:e2:81:39'/>
      <source network='default'/>
      [...]
---


But for some reason the domain gets a different address assigned, albeit 
from the correct DHCP pool:


---
$ ssh 192.168.122.233 "ip addr show"
[...]
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state 
UP group default qlen 1000
link/ether 08:00:27:e2:81:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.233/24 brd 192.168.122.255 scope global dynamic enp1s0
---


See below for even more details. I've removed the virbr0.status file and 
restarted the network, but the domain still gets this .233 address instead 
of the configured .139. Also, I'm not able to add a debug log to the 
libvirtd/dnsmasq instance: although it should be possible[1], the 
xmlns:dnsmasq stanza is overwritten as soon as the configuration is saved. 
Short of a debug (queries) log, strace on the libvirtd/dnsmasq process 
reveals that the client does indeed request this strange IP address:

23002 19:01:59.053558 write(11, "<30>Jul 31 19:01:59 dnsmasq-dhcp[23002]: 
DHCPREQUEST(virbr0) 192.168.122.233 08:00:27:e2:81:39 ", 95) = 95
23002 19:01:59.054024 write(11, "<30>Jul 31 19:01:59 dnsmasq-dhcp[23002]: 
DHCPACK(virbr0) 192.168.122.233 08:00:27:e2:81:39 f30", 94) = 94

So I restarted the dhclient process in the domain (a freshly installed 
Fedora 30) and removed all state files containing "192.168.122.233", but 
the domain still gets assigned the .233 instead of the .139 address.

The next step would be to disable libvirtd/dnsmasq altogether and run my 
own dnsmasq instance, but I wanted to avoid that. Any ideas on where to 
look next?

Thanks,
Christian.

[0] https://www.redhat.com/archives/libvirt-users/2017-October/msg00070.html
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces


# cd /var/lib/libvirt/dnsmasq/
# grep -r . . | fgrep -v \#
./default.conf:strict-order
./default.conf:port=0
./default.conf:pid-file=/var/run/libvirt/network/default.pid
./default.conf:except-interface=lo
./default.conf:bind-dynamic
./default.conf:interface=virbr0
./default.conf:dhcp-range=192.168.122.130,192.168.122.250,255.255.255.0
./default.conf:dhcp-no-override
./default.conf:dhcp-authoritative
./default.conf:dhcp-lease-max=121
./default.conf:dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
./default.hostsfile:08:00:27:e2:81:39,192.168.56.139,f30
./virbr0.status:[
./virbr0.status:  {
./virbr0.status:"ip-address": "192.168.122.233",
./virbr0.status:"mac-address": "08:00:27:e2:81:39",
./virbr0.status:"hostname": "f30",
./virbr0.status:"client-id": 
"ff:27:e2:81:39:00:04:8c:ad:c4:7d:04:e8:4b:de:93:4b:76:d8:75:82:86:c8",
./virbr0.status:"expiry-time": 1564628519
./virbr0.status:  }
./virbr0.status:]
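One thing worth double-checking in the files above: as far as I can tell, dnsmasq only honours a dhcp-hostsfile entry whose address lies inside the network served on that interface, and the static address here (192.168.56.139) is not on the 192.168.122.0/24 network from the dhcp-range in default.conf. A plain-shell illustration of the mismatch (the /24 comes from the 255.255.255.0 netmask):

```shell
# From default.conf:      dhcp-range=192.168.122.130,192.168.122.250,255.255.255.0
# From default.hostsfile: 08:00:27:e2:81:39,192.168.56.139,f30
served_net=192.168.122       # first three octets, since the netmask is /24
static_ip=192.168.56.139

case "$static_ip" in
    "$served_net".*) echo "static address is on the served subnet" ;;
    *) echo "static address is OUTSIDE the served subnet;" \
            "dnsmasq ignores it and leases from the dynamic pool" ;;
esac
```

That would explain why the guest keeps receiving a dynamic .233 lease no matter which state files are removed.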





-- 
BOFH excuse #40:

not enough memory, go get system upgrade



Re: [libvirt-users] Why librbd disallow VM live migration if the disk cache mode is not none or directsync

2019-07-31 Thread Ming-Hung Tsai
Michal Privoznik wrote:
>
> On 7/29/19 3:51 AM, Ming-Hung Tsai wrote:
> > I'm curious that why librbd sets this limitation? The rule first
> > appeared in librbd.git commit d57485f73ab. Theoretically, a
> > write-through cache is also safe for VM migration, if the cache
> > implementation guarantees that cache invalidation and disk write are
> > synchronous operations.
> >
> > For example, I'm using Ceph RBD images as VM storage backend. The Ceph
> > librbd supports synchronous write-through cache, by setting
> > rbd_cache_max_dirty to zero, and setting
> > rbd_cache_block_writes_upfront to true, thus it would be safe for VM
> > migration. Is that true? Any suggestion would be appreciated. Thanks.
>
> The commit you refer to is very old and my hunch is that things looked
> different in 2012. Things might have changed since then and if
> write-through wasn't safe ~7 years ago, it might be safe now (with some
> tuning).
>
> Michal

Yes, but the limitation is still there today, in
qemuMigrationSrcIsSafe(). Should we relax it?
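For reference, until that check is relaxed, the usual way to keep a domain migratable is to declare a migration-safe cache mode in the disk XML. A minimal RBD disk sketch (pool/image name and monitor host are made up, not from this thread):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='libvirt-pool/my-image'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

With cache='none' (or directsync), qemuMigrationSrcIsSafe() does not flag the disk as unsafe for live migration.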


Ming-Hung Tsai
