Re: veth won't be configured in libvirt managed LXC container

2014-10-22 Thread Dan Williams
On Sun, 2014-10-19 at 16:03 +0200, Lubomir Rintel wrote:
 On Fri, 2014-10-17 at 09:28 -0400, Dan Winship wrote:
  There's a bunch of discussion about this in
  https://bugzilla.gnome.org/show_bug.cgi?id=731014. The short answer is
  it's complicated, because veths get used for a bunch of different
  things in different situations...
 
 I'm not sure I understand the outcome there.

I'm not sure there was a concrete outcome yet.

There are really two cases here:

1) inside the container/VM - treat like normal Ethernet, including
default DHCP if that's enabled in the NM configuration

2) on the host - treat as default-unmanaged and assume connections that
are configured by Docker/LXC/libvirt/etc without touching the
interfaces.

We used to always do #1, but now we always do #2 because of Docker.  It
would be great to do both at the appropriate time...  does that all make
sense?
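
(As a concrete illustration of #2: an administrator can also make the
unmanaged behavior explicit in the config. A minimal sketch, assuming a
NM version whose keyfile plugin supports interface-name patterns in
unmanaged-devices (newer than the 0.9.x discussed here):

  # /etc/NetworkManager/NetworkManager.conf
  [keyfile]
  unmanaged-devices=interface-name:veth*

With that in place, NM leaves anything named veth* alone regardless of
the built-in default.)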

 Not assuming the connection on a device that is in fact not
 configured does not imply doing DHCP on that device. If the user won't

The question is: what does "configured" mean?

NM considers an interface with only an IPv6LL address to be configured,
because that's a valid network configuration.  Unfortunately the kernel
assigns an IPv6LL address automatically on IFF_UP, whether the
administrator intends that or not.  So NM doesn't know whether the
administrator *intends* the IPv6LL-only configuration, or whether they
did nothing.  We don't have a solution for this yet.
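
(This is easy to see from a shell; a quick sketch with a throwaway veth
pair, names arbitrary:

  # ip link add vethA type veth peer name vethB
  # ip link set vethB up
  # ip link set vethA up
  # ip -6 addr show dev vethA scope link
      inet6 fe80::<EUI-64-derived address>/64 scope link

Nobody configured an address, yet the interface now carries exactly the
IPv6LL-only configuration NM cannot disambiguate.)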

(Tangent: technically any interface that is IFF_UP is configured
because something/someone explicitly set that.  This would also handle
L2-only configurations, which is something we want to do.
Unfortunately, you have to set the device IFF_UP to get carrier events,
and then the kernel assigns an IPv6LL address too :(

The ideal situation for all this would be (a) treat any IFF_UP interface
as configured and assume its connection even if L2-only, (b) enable
carrier detection without IFF_UP, (c) don't always set IFF_UP on
devices.

We investigated kernel changes to disambiguate carrier detection from
IFF_UP, but that was a non-starter as it would require changes to every
single driver, or complicated core changes to work around having to
change every driver...)
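
(The IFF_UP/carrier coupling is also visible from userspace: reading the
sysfs carrier attribute of a down interface fails with EINVAL. A sketch,
with eth0 as a placeholder:

  # cat /sys/class/net/eth0/carrier
  cat: /sys/class/net/eth0/carrier: Invalid argument
  # ip link set eth0 up
  # cat /sys/class/net/eth0/carrier
  1

So to watch carrier, NM must set the device IFF_UP, and the kernel's
IPv6LL address comes along for the ride.)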

 create a connection for the veth device, NM won't touch it anyway, would
 it?

It shouldn't touch veths automatically because they are
default-unmanaged, which means you have to explicitly activate or
deactivate them.
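
(With nmcli that looks roughly like this; the device and connection
names are hypothetical, and the connection has to be created first:

  # nmcli dev status                 (veth0 shows as unmanaged)
  # nmcli con up id my-veth-profile  (explicit activation)

Until someone does that explicitly, NM leaves the veth's addressing to
Docker/LXC/libvirt.)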

Dan

  -- Dan
 
 Lubo
 
  On 10/16/2014 07:08 AM, Lubomir Rintel wrote:
   Hi,
   
   currently it is impossible to get useful network configuration for LXC
   containers on boot. (At least if they're managed via libvirt; I have no
   idea if anything is different with native LXC tooling.) They're supposed
   to obtain their configuration via DHCP, but instead a connection is
   assumed: firstly because there's an IPv6 link-local address that (I
   think) gets assigned when libvirt ups the interface, and secondly
   because it's a software link.
   
   Why do we assume a connection on all software links? Virtual ethernet
   devices are supposed to behave much like ordinary ethernet devices;
   they have carrier detection, etc.
   
   I'm following up with the patches that resolve the problem for me, but 
   I'm not quite sure about the special case for veth. 
   
   Thoughts?
   
   Thank you,
   Lubo
   
___
networkmanager-list mailing list
networkmanager-list@gnome.org
https://mail.gnome.org/mailman/listinfo/networkmanager-list


Re: VPN + dnsmasq = split dns?

2014-10-22 Thread Dan Williams
On Tue, 2014-10-21 at 07:09 +0200, Olav Morken wrote:
 On Mon, Oct 20, 2014 at 16:20:03 -0500, Dan Williams wrote:
  On Fri, 2014-10-10 at 21:17 +0200, Olav Morken wrote:
   Hi,
   
   I am trying to set up Network Manager to connect to an OpenVPN server, 
   and have trouble understanding how it applies the DNS settings it 
   receives from the server.
  
  Sorry for the late reply...
  
  Which version of NM do you have, and what distro?
 
 It's Xubuntu 14.04 with network-manager 0.9.8.8-0ubuntu7
 
 (I guess I should have been clearer about it being included at the end
 of my original message :) )
 
   Basically, as far as I can tell, it automatically assumes that I want 
   to use split DNS, and limits the DNS servers it receives from the
   OpenVPN servers to the domains it assumes belong to this
   configuration. However, it also ignores the existing DNS servers it 
   has configured.
  
  By default, NM will not do split DNS, which means when the VPN is
  connected, the VPN nameservers replace the existing nameservers.  This
  is required to ensure that if for some reason the VPN nameservers cannot
  be contacted, your queries don't fall back to the non-VPN
  nameservers and return bogus (and potentially malicious) results.
  
  But, if you add dns=dnsmasq to
  the /etc/NetworkManager/NetworkManager.conf file and install 'dnsmasq',
  then NM will run in split DNS mode.  Here, NM will spawn a private copy
  of dnsmasq and send it configuration to direct any queries ending in the
  domain passed back from the openvpn server (or entered into the NM
  configuration for that VPN connection) to the VPN nameservers, and
  everything else to the non-VPN nameservers.
 
 That is quite a large change in behavior for someone running with
 dnsmasq. I also think it is the wrong behavior when we are pushing a
 default route over the VPN. With a default route over the VPN it is
 likely that we would want all traffic, including DNS traffic, to go over
 the VPN. It is also likely that the user would end up trying to contact
 the local DNS servers over the VPN, which would break.

If you want everything to go to the VPN nameservers, then 'dns=dnsmasq'
isn't what you want, since that is what enables this local caching
nameserver configuration.  I guess you just want the non-local-caching
configuration, in which case simply don't specify dns= at all.
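
(For the archives, the whole switch is this single key in
NetworkManager.conf; a sketch, where the plugins= line is only
illustrative:

  [main]
  plugins=ifupdown,keyfile
  # enables the split-DNS/local-caching setup described above; omit
  # this key entirely for the default replace-the-nameservers mode
  dns=dnsmasq

Everything else about the split-DNS behavior follows from that one
line.)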

   That leaves us with a dnsmasq configured with two nameservers it will 
   query for two specific subdomains, and no nameservers it will use for 
   other domains. The result is that dnsmasq is only willing to respond 
   to DNS queries for those subdomains, and respond with REFUSED for 
   every other domain.
   
   I assume that this is not the way it is supposed to work, since that 
   would mean that everyone connecting to a VPN would be unable to access 
   most of the Internet. I therefore assume that there is something wrong 
   with my configuration.
  
  That sounds like a bug; do you know if you have any custom dnsmasq
  configuration on that system?  Also check two things:
  
  1) /etc/resolv.conf should have 127.0.0.1 as the only nameserver
  2) Look in /var/run/NetworkManager (or /run/NetworkManager) for the
  'dnsmasq.conf' file, which is what NM sends to dnsmasq
  
  (the only caveat here is that if you run Ubuntu, this procedure may not
  apply as the info is sent to dnsmasq over D-Bus)
 
 I wasn't aware that Ubuntu had such significant changes to Network
 Manager. In that case, I think the behavior we are seeing is
 Ubuntu-specific.
 
 There is no customization of the dnsmasq settings on this system. (In
 fact the behavior has been observed on several different Ubuntu
 installations.)
 
 From the logs (included at the end of my original message):
 
   dnsmasq[1464]: setting upstream servers from DBus
   dnsmasq[1464]: using nameserver 198.51.100.168#53 for domain 0.192.in-addr.arpa
   dnsmasq[1464]: using nameserver 198.51.100.168#53 for domain example.org
   dnsmasq[1464]: using nameserver 198.51.100.57#53 for domain 0.192.in-addr.arpa
   dnsmasq[1464]: using nameserver 198.51.100.57#53 for domain example.org
 
 Nothing in the log about the original (non-VPN) DNS servers, so I am
 guessing they were removed.

I think with Ubuntu, dns=dnsmasq might be enabled by default.  Can you
check /etc/NetworkManager/NetworkManager.conf and, if that line is
present, remove it?
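
(For reference, the split-DNS configuration NM hands to dnsmasq amounts
to per-domain server lines; a sketch using the addresses from your log:

  server=/example.org/198.51.100.168
  server=/example.org/198.51.100.57
  server=/0.192.in-addr.arpa/198.51.100.168
  server=/0.192.in-addr.arpa/198.51.100.57

If no plain server=<address> line accompanies those, dnsmasq has no
upstream for other domains and answers REFUSED, which is exactly the
symptom you first described.)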

  Let us know what the results are!
 
 For what it is worth, after further testing we have determined that it
 is the IPv6 configuration that breaks the DNS config. We have seen
 three different behaviors, depending on the VPN config:
 
 1. VPN with only IPv4 address and default route:
 
The DNS servers are added as global DNS servers.
 
 2. VPN with both IPv4 and IPv6 addresses and default routes, but only
IPv4 DNS servers pushed through VPN configuration:
 
The DNS servers are added as local DNS servers, with no global
DNS servers.
 
 3. VPN with both IPv4 and IPv6 addresses and default routes,