Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:33:43PM +0200, Martin Pitt wrote:
> Martin Pitt [2016-05-31 22:45 +0200]:
> > Can you please give a sketch how to look up the source port that the
> > resolver uses? That'd be a good piece of information for the upstream
> > bug report too, as it's not at all obvious.
> 
> Look up, and also how to forge it -- as creating a RAW_SOCKET requires
> root privileges, so I suppose it can be done with a normal UDP socket
> somehow?

You can forge the source port very easily by just calling bind() with
the wanted source port.

The difficulty is with forging the source address. You can use any IP
which the machine already has, but you can't typically use anything
else.


That's why such attacks usually involve a second computer (or container
or VM) on which you have root access and which is attached to the same
subnet as the first. It doesn't need to be in the path (so no MITM),
just close to the target and with a route to it.

As you have root access to that second computer, you can write a tiny
bit of code that runs on it and will send any raw packet that you need.


So if I was to perform such an attack, I'd have a tiny service on my
laptop which listens on a port for a string containing the IP address of
the DNS server to impersonate and its port.

Then I'd have another piece of software on the machine I want to poison
which does the DNS query for the record I want to poison, immediately
looks up the source port and DNS server IP which were used, and sends
those to my laptop. My laptop then immediately replaces those two in a
pre-generated PCAP containing 32768 UDP packets (covering half of the
65536 possible transaction IDs, the expected number needed) and dumps
the generated pcap onto the wire.


This entirely avoids having to go through the whole kernel stack to
generate a real UDP connection. You just dump all 32768 packets into the
network card in one shot.

Then even if it takes a while for the target to process them all, you
are almost guaranteed to have them all ahead of the real reply in the
queue and so have a pretty good chance to indeed poison the cache.


> 
> Thanks!
> 
> Martin

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Martin Pitt [2016-05-31 22:45 +0200]:
> Can you please give a sketch how to look up the source port that the
> resolver uses? That'd be a good piece of information for the upstream
> bug report too, as it's not at all obvious.

Look up, and also how to forge it -- creating a RAW_SOCKET requires
root privileges, so I suppose it can be done with a normal UDP socket
somehow?

Thanks!

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 10:45:24PM +0200, Martin Pitt wrote:
> Hello Marc,
> 
> Stéphane, Marc, thanks for these!
> 
> Marc Deslauriers [2016-05-31 16:08 -0400]:
> > > I seem to remember it being a timing attack. If you can control when the
> > > initial DNS query happens, which as an unprivileged user you can by just
> > > doing a local DNS query and you know what upstream server is being hit,
> > > which you also know by being able to look at /etc/resolv.conf, then you
> > > can generate fake DNS replies locally (DNS is UDP so the source can
> > > trivially be spoofed) which will arrive before the real reply and end up
> > > in your cache, letting you override any record you want.
> 
> > 2- a cache poisoning attack. Because the resolver is local, source port
> > randomization is futile when a local user can trivially look up which source
> > port was selected when a particular request was made and can respond with a
> > spoofed UDP packet faster than the real dns server. No MITM required.
> 
> ATM resolved uses randomized ID fields (16 bits), which means that you
> need an average of 32.768 tries to get an acceptable answer into
> resolved, which you can probably do in the order of a minute. It does
> not use source port randomization though, which would lift the average
> time to the magnitude of a month.
> 
> Can you please give a sketch how to look up the source port that the
> resolver uses? That'd be a good piece of information for the upstream
> bug report too, as it's not at all obvious.

stgraber@dakara:~$ netstat -nAinet | grep 53
udp        0      0 172.17.0.51:50662       172.17.20.30:53         ESTABLISHED

That gives me the source ip and port for the current DNS query as an
unprivileged user. I can then spoof a DNS reply that matches this.

The rest then depends on how random the transaction ID is; there have
been attacks related to that in the past:
  
https://blogs.technet.microsoft.com/srd/2008/04/09/ms08-020-how-predictable-is-the-dns-transaction-id/

Note that in the case of those attacks, the attackers didn't have nearly
as much information as you would by controlling when the query happens
and being able to check the source port immediately.


So yes, the random transaction ID sure helps, so long as it's actually
random and so long as you get a DNS reply reasonably quickly.

I think your estimate of a minute isn't anywhere near accurate. One
could pretty easily pre-generate all 32768 packets in pcap format, just
replace the source address and port once known, and then inject the
whole pcap onto the network much, much faster than that.


> > 1- a privacy issue. It is trivial for a local user to probe if a site was
> > visited by another local user.
> 
> I assume by looking at the time that it takes to get a response?

stgraber@dakara:~$ dig www.ubuntu.com @172.17.20.30
; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.ubuntu.com @172.17.20.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24839
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.ubuntu.com.               IN      A

;; ANSWER SECTION:
www.ubuntu.com. 600 IN  A   91.189.89.118

;; Query time: 123 msec
;; SERVER: 172.17.20.30#53(172.17.20.30)
;; WHEN: Tue May 31 17:06:19 EDT 2016
;; MSG SIZE  rcvd: 59

stgraber@dakara:~$ dig www.ubuntu.com @127.0.0.1
; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.ubuntu.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63104
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.ubuntu.com.               IN      A

;; ANSWER SECTION:
www.ubuntu.com. 594 IN  A   91.189.89.118

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue May 31 17:06:25 EDT 2016
;; MSG SIZE  rcvd: 59



The first query shows the TTL for the record from the recursive server
used by the local resolver; here we see it's 600 seconds. The second
request hits the local cache, which returns a TTL of 594 seconds,
meaning the DNS record was accessed by someone on the machine within
the last 6 seconds.

Do that with some sensitive website and you can know when someone on the
machine accessed it.



Note that the above wasn't done through resolved.

> Thanks,
> 
> Martin
> 
> -- 
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)
> 

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Steve Langasek
On Tue, May 31, 2016 at 09:38:51PM +0200, Martin Pitt wrote:
> > In the past, resolved would use a single shared cache for the whole
> > system, which would allow for local cache poisoning by unprivileged
> > users on the system. That's the reason why the dnsmasq instance we spawn
> > with Network Manager doesn't have caching enabled and that becomes even
> > more critical when we're talking about doing the same change on servers.

> Indeed Tony mentioned this in today's meeting with Mathieu and me --
> this renders most of the efficiency gain of having a local DNS
> resolver moot.

However, reducing the number of DNS queries with caching is not a
requirement.  The request was for the local resolver to cache information
about upstream resolvers being *available*, so that each process would not
have to find out for itself that the primary DNS server is offline and fail
over (with annoying timeouts).

Running a cache with the local resolver causes problems that we don't
have solutions for.  Correct is more important than fast; we should run
without caching.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hello Marc,

Stéphane, Marc, thanks for these!

Marc Deslauriers [2016-05-31 16:08 -0400]:
> > I seem to remember it being a timing attack. If you can control when the
> > initial DNS query happens, which as an unprivileged user you can by just
> > doing a local DNS query and you know what upstream server is being hit,
> > which you also know by being able to look at /etc/resolv.conf, then you
> > can generate fake DNS replies locally (DNS is UDP so the source can
> > trivially be spoofed) which will arrive before the real reply and end up
> > in your cache, letting you override any record you want.

> 2- a cache poisoning attack. Because the resolver is local, source port
> randomization is futile when a local user can trivially look up which source
> port was selected when a particular request was made and can respond with a
> spoofed UDP packet faster than the real dns server. No MITM required.

ATM resolved uses randomized ID fields (16 bits), which means that you
need an average of 32,768 tries to get an acceptable answer into
resolved, which you can probably do on the order of a minute. It does
not use source port randomization though, which would raise the average
time to the order of a month.

Can you please give a sketch how to look up the source port that the
resolver uses? That'd be a good piece of information for the upstream
bug report too, as it's not at all obvious.

> 1- a privacy issue. It is trivial for a local user to probe if a site was
> visited by another local user.

I assume by looking at the time that it takes to get a response?

Thanks,

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Marc Deslauriers
On 2016-05-31 03:52 PM, Stéphane Graber wrote:
> On Tue, May 31, 2016 at 09:38:51PM +0200, Martin Pitt wrote:
>> Hello Stéphane,
>>
>> Stéphane Graber [2016-05-31 11:23 -0400]:
>>> So in the past there were two main problems with using resolved, I'd
>>> like to confirm both of them have now been taken care of:
>>>
>>>  1) Does resolved now support split DNS support?
>>> That is, can Network Manager instruct it that only *.example.com
>>> should be sent to the DNS servers provided by a given VPN?
>>
>> resolved has a D-Bus API SetLinkDomains(), similar in spirit to
>> dnsmasq. However, NM does not yet know about this, and only indirectly
>> talks to resolved via writing /etc/resolv.conf (again indirectly via
>> resolvconf). So the functionality on the resolved is there, but we
>> don't use it yet. This is being tracked in the blueprint.
> 
> Ok and does it support configuring this per-domain thing through
> configuration files?
> 
> That's needed so that LXC, LXD, libvirt, ... can ship a file defining a
> domain for their bridge which is then forwarded to their dnsmasq
> instance.
> 
> I don't believe we do this automatically anywhere but it was planned to
> do it this cycle for LXD and quite possibly for LXC and libvirt too (so
> you can resolve .lxd or .libvirt).
> 
>>
>>>  2) Does resolved now maintain a per-uid cache or has caching been
>>> disabled entirely?
>>
>> No, it uses a global cache.
>>
>>> In the past, resolved would use a single shared cache for the whole
>>> system, which would allow for local cache poisoning by unprivileged
>>> users on the system. That's the reason why the dnsmasq instance we spawn
>>> with Network Manager doesn't have caching enabled and that becomes even
>>> more critical when we're talking about doing the same change on servers.
>>
>> Indeed Tony mentioned this in today's meeting with Mathieu and me --
>> this renders most of the efficiency gain of having a local DNS
>> resolver moot. Do you have a link to describing the problem? This was
>> requested in LP: #903854, but neither that bug nor the referenced
>> blueprint explain that.
>>
>> How would an unprivileged local user change the cache in resolved? The
>> only way how to get a result into resolvconf's cache is through a
>> response from the forwarding DNS server. If a user can do that, what
>> stops her from doing the same for non-cached lookups?
>>
>> The caches certainly need to be dropped whenever the set of
>> nameservers *changes*, but this already happens. (But this is required
>> for functioning correctly, not necessarily a security guard).
>>
>> If you have some pointers to the attack, I'm happy to forward this to
>> an upstream issue and discuss it there (or file an issue yourself,
>> tha'd be appreciated). If this is an issue, it should be fixed
>> upstream, not downstream by disabling caching completely.
> 
> I seem to remember it being a timing attack. If you can control when the
> initial DNS query happens, which as an unprivileged user you can by just
> doing a local DNS query and you know what upstream server is being hit,
> which you also know by being able to look at /etc/resolv.conf, then you
> can generate fake DNS replies locally (DNS is UDP so the source can
> trivially be spoofed) which will arrive before the real reply and end up
> in your cache, letting you override any record you want.
> 
> For entries that are already cached, you can just query their TTL and
> time the attack to begin exactly as the cached record expires.
> 
> 
> This would then let an unprivileged user hijack just about any DNS
> record unless you have a per-uid cache, in which case they'd only hurt
> themselves.
> 
> 
> 
> Anyway, you definitely want to talk to the security team :)
> 

My memory is a bit fuzzy, but I believe there were two concerns with a
system-wide cache:

1- a privacy issue. It is trivial for a local user to probe if a site was
visited by another local user.

2- a cache poisoning attack. Because the resolver is local, source port
randomization is futile when a local user can trivially look up which source
port was selected when a particular request was made and can respond with a
spoofed UDP packet faster than the real DNS server. No MITM required.

>>
>>> Additionally, what's the easiest way to undo this change on a server?
>>
>> Uninstall libnss-resolve, or systemctl disable systemd-resolved, I'd
>> say.
>>
>>> I have a few deployments where I run upwards of 4000 containers on a
>>> single system. Such systems have a main DNS resolver on the host and all
>>> containers talking to it. I'm not too fond of adding an extra 4000
>>> processes to such systems.
>>
>> I don't actually intend this to be in containers, particularly as
>> LXC/LXD already sets up its own dnsmasq on the host. That's why I only
>> seeded it to ubuntu-standard, not to minimal. The
>> images.linuxcontainers.org images (rightfully) don't have
>> ubuntu-standard, so they won't get libnss-resolve and an enabled
>> resolved.

Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 09:50:03PM +0200, Martin Pitt wrote:
> Hello Stéphane,
> 
> Stéphane Graber [2016-05-31 11:31 -0400]:
> > One more thing on that point which was just brought up in:
> > https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967
> > 
> > In the past, with dnsmasq on desktop we could ship a .d file which would
> > instruct the system dnsmasq to forward all ".lxc" or ".lxd" queries to
> > the LXC or LXD dnsmasq instance.
> 
> Per-domain DNS servers can't be configured globally via files in
> resolved, only per network device. However, you said in the bug that
> this isn't working on the host anyway, only from within containers.
> And for those lxc sets up its own dnsmasq which the containers use
> as DNS server, so nothing should change in that regard, unless you are
> planning to replace lxc's dnsmasq as well.
> 
> FYI, this can be made to work on the host if lxc/lxd would register
> containers in machined, then libnss-mymachines will resolve those
> names.
> 
> Thanks,
> 
> Martin


We were hoping to ship a dnsmasq.d file this cycle that would make .lxc,
.lxd and .libvirt point to their respective dnsmasq instance.

It's not the case right now which is why it only works from inside
containers, but it's something we were hoping to change.


As far as registering containers with systemd, it's my understanding
that unprivileged processes cannot do that. As the upstream of LXC and
LXD, I'm also not very keen on implementing yet another
systemd-specific feature when we already run a standard service (a DNS
server) which exports that data in a normal, perfectly usable form.


Anyway, that part isn't particularly critical for me.

Not regressing split DNS support for VPN and not compromising system
security with unsafe cache settings is way more important.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 09:38:51PM +0200, Martin Pitt wrote:
> Hello Stéphane,
> 
> Stéphane Graber [2016-05-31 11:23 -0400]:
> > So in the past there were two main problems with using resolved, I'd
> > like to confirm both of them have now been taken care of:
> > 
> >  1) Does resolved now support split DNS support?
> > That is, can Network Manager instruct it that only *.example.com
> > should be sent to the DNS servers provided by a given VPN?
> 
> resolved has a D-Bus API SetLinkDomains(), similar in spirit to
> dnsmasq. However, NM does not yet know about this, and only indirectly
> talks to resolved via writing /etc/resolv.conf (again indirectly via
> resolvconf). So the functionality on the resolved is there, but we
> don't use it yet. This is being tracked in the blueprint.

Ok and does it support configuring this per-domain thing through
configuration files?

That's needed so that LXC, LXD, libvirt, ... can ship a file defining a
domain for their bridge which is then forwarded to their dnsmasq
instance.

I don't believe we do this automatically anywhere but it was planned to
do it this cycle for LXD and quite possibly for LXC and libvirt too (so
you can resolve .lxd or .libvirt).

> 
> >  2) Does resolved now maintain a per-uid cache or has caching been
> > disabled entirely?
> 
> No, it uses a global cache.
> 
> > In the past, resolved would use a single shared cache for the whole
> > system, which would allow for local cache poisoning by unprivileged
> > users on the system. That's the reason why the dnsmasq instance we spawn
> > with Network Manager doesn't have caching enabled and that becomes even
> > more critical when we're talking about doing the same change on servers.
> 
> Indeed Tony mentioned this in today's meeting with Mathieu and me --
> this renders most of the efficiency gain of having a local DNS
> resolver moot. Do you have a link to describing the problem? This was
> requested in LP: #903854, but neither that bug nor the referenced
> blueprint explain that.
> 
> How would an unprivileged local user change the cache in resolved? The
> only way how to get a result into resolvconf's cache is through a
> response from the forwarding DNS server. If a user can do that, what
> stops her from doing the same for non-cached lookups?
> 
> The caches certainly need to be dropped whenever the set of
> nameservers *changes*, but this already happens. (But this is required
> for functioning correctly, not necessarily a security guard).
> 
> If you have some pointers to the attack, I'm happy to forward this to
> an upstream issue and discuss it there (or file an issue yourself,
> tha'd be appreciated). If this is an issue, it should be fixed
> upstream, not downstream by disabling caching completely.

I seem to remember it being a timing attack. If you can control when the
initial DNS query happens, which as an unprivileged user you can by just
doing a local DNS query and you know what upstream server is being hit,
which you also know by being able to look at /etc/resolv.conf, then you
can generate fake DNS replies locally (DNS is UDP so the source can
trivially be spoofed) which will arrive before the real reply and end up
in your cache, letting you override any record you want.

For entries that are already cached, you can just query their TTL and
time the attack to begin exactly as the cached record expires.


This would then let an unprivileged user hijack just about any DNS
record unless you have a per-uid cache, in which case they'd only hurt
themselves.



Anyway, you definitely want to talk to the security team :)

> 
> > Additionally, what's the easiest way to undo this change on a server?
> 
> Uninstall libnss-resolve, or systemctl disable systemd-resolved, I'd
> say.
> 
> > I have a few deployments where I run upwards of 4000 containers on a
> > single system. Such systems have a main DNS resolver on the host and all
> > containers talking to it. I'm not too fond of adding an extra 4000
> > processes to such systems.
> 
> I don't actually intend this to be in containers, particularly as
> LXC/LXD already sets up its own dnsmasq on the host. That's why I only
> seeded it to ubuntu-standard, not to minimal. The
> images.linuxcontainers.org images (rightfully) don't have
> ubuntu-standard, so they won't get libnss-resolve and an enabled
> resolved.

But our recommended images are the cloud images and they sure do include
ubuntu-standard:

root@xenial:~# dpkg -l | grep ubuntu-standard
ii  ubuntu-standard        1.361        amd64        The Ubuntu standard system


The images.linuxcontainers.org images are tiny images which some of our
users prefer over the recommended official ones published on the Ubuntu
infrastructure. But if the intent is for this change not to affect
containers, then it also must deal with our recommended images.

> 
> Thanks,
> 
> Martin


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hello Stéphane,

Stéphane Graber [2016-05-31 11:31 -0400]:
> One more thing on that point which was just brought up in:
> https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967
> 
> In the past, with dnsmasq on desktop we could ship a .d file which would
> instruct the system dnsmasq to forward all ".lxc" or ".lxd" queries to
> the LXC or LXD dnsmasq instance.

Per-domain DNS servers can't be configured globally via files in
resolved, only per network device. However, you said in the bug that
this isn't working on the host anyway, only from within containers.
And for those lxc sets up its own dnsmasq which the containers use
as DNS server, so nothing should change in that regard, unless you are
planning to replace lxc's dnsmasq as well.

FYI, this can be made to work on the host if lxc/lxd registered
containers in machined; then libnss-mymachines would resolve those
names.

Thanks,

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hello Stéphane,

Stéphane Graber [2016-05-31 11:23 -0400]:
> So in the past there were two main problems with using resolved, I'd
> like to confirm both of them have now been taken care of:
> 
>  1) Does resolved now support split DNS support?
> That is, can Network Manager instruct it that only *.example.com
> should be sent to the DNS servers provided by a given VPN?

resolved has a D-Bus API SetLinkDomains(), similar in spirit to
dnsmasq. However, NM does not yet know about this, and only indirectly
talks to resolved via writing /etc/resolv.conf (again indirectly via
resolvconf). So the functionality on the resolved side is there, but we
don't use it yet. This is being tracked in the blueprint.
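
For what it's worth, the same per-link routing can also be expressed
statically for systemd-networkd-managed links in a .network file. A
hypothetical sketch (interface name and server address are made up);
the `~` prefix marks a routing-only domain, i.e. "send *.example.com
lookups to this link's DNS server":

```
[Match]
Name=tun0

[Network]
DNS=10.8.0.1
Domains=~example.com
```

Whether the LXC/LXD/libvirt use case can be covered this way depends on
resolved growing an equivalent global drop-in mechanism, which is
exactly the open question in this subthread.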

>  2) Does resolved now maintain a per-uid cache or has caching been
> disabled entirely?

No, it uses a global cache.

> In the past, resolved would use a single shared cache for the whole
> system, which would allow for local cache poisoning by unprivileged
> users on the system. That's the reason why the dnsmasq instance we spawn
> with Network Manager doesn't have caching enabled and that becomes even
> more critical when we're talking about doing the same change on servers.

Indeed Tony mentioned this in today's meeting with Mathieu and me --
this renders most of the efficiency gain of having a local DNS
resolver moot. Do you have a link describing the problem? This was
requested in LP: #903854, but neither that bug nor the referenced
blueprint explains that.

How would an unprivileged local user change the cache in resolved? The
only way to get a result into resolved's cache is through a response
from the forwarding DNS server. If a user can do that, what stops her
from doing the same for non-cached lookups?

The caches certainly need to be dropped whenever the set of
nameservers *changes*, but this already happens. (This is required for
correct functioning, though, not necessarily as a security guard.)

If you have some pointers to the attack, I'm happy to forward this to
an upstream issue and discuss it there (or file an issue yourself,
that'd be appreciated). If this is an issue, it should be fixed
upstream, not downstream by disabling caching completely.

> Additionally, what's the easiest way to undo this change on a server?

Uninstall libnss-resolve, or systemctl disable systemd-resolved, I'd
say.

> I have a few deployments where I run upwards of 4000 containers on a
> single system. Such systems have a main DNS resolver on the host and all
> containers talking to it. I'm not too fond of adding an extra 4000
> processes to such systems.

I don't actually intend this to be in containers, particularly as
LXC/LXD already sets up its own dnsmasq on the host. That's why I only
seeded it to ubuntu-standard, not to minimal. The
images.linuxcontainers.org images (rightfully) don't have
ubuntu-standard, so they won't get libnss-resolve and an enabled
resolved.

Thanks,

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:23:01AM -0400, Stéphane Graber wrote:
> On Tue, May 31, 2016 at 11:34:41AM +0200, Martin Pitt wrote:
> > Hello all,
> > 
> > yesterday I landed [1] in Yakkety which changes how DNS resolution
> > works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> > address like 1.2.3.4.
> > 
> > Until now, we used two different approaches for this:
> > 
> >  * On desktops and touch, NetworkManager launched "dnsmasq" configured
> >as effectively a local DNS server which forwards requests to the
> >"real" DNS servers that get picked up usually via DHCP. Thus
> >/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
> >non-obvious to show the real DNS servers. (This was one of the
> >complaints/triggers that led to creating this blueprint).  But
> >dnsmasq does proper rotation and fallback between multiple
> >nameservers, i. e. if one does not respond it uses the next one
> >without long timeouts.

One more thing on that point which was just brought up in:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967

In the past, with dnsmasq on desktop we could ship a .d file which would
instruct the system dnsmasq to forward all ".lxc" or ".lxd" queries to
the LXC or LXD dnsmasq instance.

We were planning on doing so by default this cycle, so it'd be good to
confirm that resolved doesn't regress things in this regard.

> > 
> >  * On servers, cloud images etc. we did not have any local DNS server.
> >Configured DNS servers (via DHCP or static configuration in
Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:34:41AM +0200, Martin Pitt wrote:
> Hello all,
> 
> yesterday I landed [1] in Yakkety which changes how DNS resolution
> works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> address like 1.2.3.4.
> 
> Until now, we used two different approaches for this:
> 
>  * On desktops and touch, NetworkManager launched "dnsmasq" configured
>as effectively a local DNS server which forwards requests to the
>"real" DNS servers that get picked up usually via DHCP. Thus
>/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
>non-obvious to show the real DNS servers. (This was one of the
>complaints/triggers that led to creating this blueprint).  But
>dnsmasq does proper rotation and fallback between multiple
>nameservers, i. e. if one does not respond it uses the next one
>without long timeouts.
> 
>  * On servers, cloud images etc. we did not have any local DNS server.
>Configured DNS servers (via DHCP or static configuration in
>/etc/network/interfaces) were put into /etc/resolv.conf, and
>every program (via glibc's builtin resolver) directly contacted
>those.
> 
>This had the major drawback that if the first DNS server does not
>respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
>timeout, which makes every network operation awfully slow.
>Addressing this was the main motivation for the blueprint. On top
>of that, there was no local caching, thus requesting the same name
>again would do another lookup.
> 
> As of today, we now have one local resolver service for all Ubuntu
> products; we picked "resolved" as that is small and lightweight,
> already present (part of the systemd package), does not require D-Bus
> (unlike dnsmasq), supports DNSSEC, provides transparent fallback to
> contacting the real DNS servers directly (in case anything goes wrong
> with the local resolver), and avoids the first issue above that
> /etc/resolv.conf always shows 127.0.0.1.
> 
> Now DNS resolution goes via a new "libnss-resolve" NSS module which
> talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
> broken name servers are handled efficiently, and we have local DNS
> caching. NetworkManager now stops launching a dnsmasq instance.
> 
> I've had this running on my laptop for about three weeks now without
> noticing problems, but there may well be some corner cases where this
> causes problems. If you encounter a regression that causes DNS names
> to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
> with the details.
> 
> Thanks,
> 
> Martin


So in the past there were two main problems with using resolved; I'd
like to confirm both of them have now been taken care of:

 1) Does resolved now support split DNS?
That is, can NetworkManager instruct it that only *.example.com
should be sent to the DNS servers provided by a given VPN?

That's a very important feature of the current dnsmasq integration:
amongst other things, it avoids leaking DNS queries to your employer
when you're not routing all your traffic through the VPN, and it
greatly reduces the overall network latency when using a VPN with a
far away endpoint.

It's also a critical feature for anyone who wants to run multiple
VPNs in parallel, which NetworkManager 1.2 now supports.
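[Editorial aside: for concreteness, here is a hedged sketch of what such
a split-DNS setup could look like expressed as per-link resolved
configuration on a networkd-managed machine in later systemd versions;
the interface name "tun0" and server 10.8.0.1 are made-up examples.]

```ini
# /etc/systemd/network/vpn.network -- hypothetical example
[Match]
Name=tun0

[Network]
DNS=10.8.0.1
# The "~" prefix marks a routing-only domain: queries under example.com
# are routed to this link's DNS server, while everything else keeps
# using the default servers -- the behaviour dnsmasq provides today.
Domains=~example.com
```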

 2) Does resolved now maintain a per-uid cache or has caching been
disabled entirely?

In the past, resolved would use a single shared cache for the whole
system, which would allow for local cache poisoning by unprivileged
users on the system. That's the reason why the dnsmasq instance we spawn
with NetworkManager doesn't have caching enabled, and that becomes even
more critical when we're talking about doing the same change on servers.

If not done already, I'd very strongly suggest a full audit of
resolved by the security team with a focus on its caching mechanism.
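[Editorial aside: to make the concern concrete, here is a toy model (in
no way resolved's actual code) of why a cache keyed only by the query
name is cross-user poisonable while a per-uid key confines the damage.
Cache entries are modelled as "key address" lines in temp files.]

```shell
# Toy model of a DNS cache, contrasting system-wide vs per-uid keying.
shared=$(mktemp)
per_uid=$(mktemp)

# System-wide cache, keyed only by the query name: an entry written by
# uid 1001 (say, after winning a poisoning race) is served to everyone.
echo "bank.example 203.0.113.66" >> "$shared"
seen=$(awk '$1 == "bank.example" { print $2 }' "$shared")
echo "uid 1000 sees: $seen"

# Per-uid cache, keyed by (uid, name): the poisoned entry only answers
# queries from its owner, so uid 1000 gets a miss and re-queries the
# real servers instead.
echo "1001/bank.example 203.0.113.66" >> "$per_uid"
hit=$(awk '$1 == "1000/bank.example" { print $2 }' "$per_uid")
echo "uid 1000 sees: ${hit:-cache miss}"
```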


Additionally, what's the easiest way to undo this change on a server?

I have a few deployments where I run upwards of 4000 containers on a
single system. Such systems have a main DNS resolver on the host and all
containers talking to it. I'm not too fond of adding an extra 4000
processes to such systems.
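[Editorial aside: a hedged sketch of the opt-out being asked about,
using the package and unit names discussed in this thread; the
nsswitch.conf edit is demonstrated on a scratch copy rather than the
live file.]

```shell
# Demonstrate the nsswitch.conf change on a scratch copy; on a real
# server you would edit /etc/nsswitch.conf itself (keeping a backup).
demo=$(mktemp)
printf 'hosts: files resolve dns\n' > "$demo"

# Dropping the "resolve" module makes glibc skip resolved entirely and
# fall through from "files" straight to the classic "dns" resolver.
sed -i 's/ resolve / /' "$demo"
cat "$demo"   # now reads: hosts: files dns

# Remaining cleanup (not run here): remove the NSS module and stop the
# local resolver service. Verify these names on your release.
#   apt-get remove libnss-resolve
#   systemctl disable --now systemd-resolved.service
```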

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hey Dave,

Dave Morley [2016-05-31 11:02 +0100]:
> How will this work on the phone if it is only enabled in yakkety?

I'm not intending/planning on changing the behaviour on stable
releases, of course. This is only ≥ 16.10. So as long as touch
products are built from 16.04 (or even 15.04), it won't affect them.

> How will this affect landing phone silos?

This doesn't affect building packages or even most of their runtime
behaviour. This only hooks into glibc's resolving of DNS names (i. e.
gethostbyname()), in the same way as e. g. libnss-mdns4 (Avahi) does.
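[Editorial aside: a quick way to see that chain in action on any glibc
system; getent resolves through exactly the same NSS path as
gethostbyname()/getaddrinfo(), so its output reflects whatever modules
the "hosts" line in /etc/nsswitch.conf lists.]

```shell
# getent walks the NSS "hosts" chain just like any glibc program does;
# "localhost" is normally answered by the "files" module (/etc/hosts),
# while a public name would continue down the chain to resolve/dns.
getent hosts localhost
```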

> Have you tested with a 3G/4G dongle so you have two DNS servers up at
> the same time

I did test with 2 DNS servers: one is my wifi router, the other the
Canonical VPN server. I don't have a 3G dongle, but this is unrelated
to the type of network card you have. This only affects the IP level,
nothing below.

> How are suspend, reboot and flight mode scenarios handled?

Same answer as above really, this shouldn't affect any of this.

> I think that is all the questions I can think of :)
>
> You Make It, I'll Break It!
> 
> I Love My Job :)

Keep 'em coming, I'm sure there are some warts left. E. g. I already
got a crash report via apport (some assertion failure), which I'll look
into. Such crashes aren't the end of the world, as everything just
silently falls back to contacting the real DNS servers directly, but
this is why we need field testing.

Thanks!

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)




Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hello Martin,

Martin Wimpress [2016-05-31 10:51 +0100]:
> Is libnss-resolve automatically seeded via a Depends or does it require
> manual seeding?

It is now seeded (a Recommends of ubuntu-standard) and is also a
Recommends of network-manager, to ensure it also gets pulled in on
upgrades if someone has removed the metapackage.

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: Go shared libraries are coming

2016-05-31 Thread Martin Packman
On 25/05/2016, Michael Hudson-Doyle  wrote:
>
> I've attempted to document the new world at
> https://docs.google.com/document/d/1IOlBWWgcDeB9PfRORENESYj8iJt4W2EwsbYcpg4akBE/edit#

Thank you for the clear write-up.

Is the thought that for instance all the -dev packages juju currently
depends on should move to providing a shared library? I presume our
current position of not splitting up the github.com/juju namespace into
multiple packages until we expect multiple consumers still stands.

Martin



Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Wimpress
Hi,

I'm on my phone and travelling, so I can't trivially find out the
answer to the following question right now.

Is libnss-resolve automatically seeded via a Depends or does it require
manual seeding?

Regards, Martin.
On 31 May 2016 11:36, "Martin Pitt"  wrote:

> Hello all,
>
> yesterday I landed [1] in Yakkety which changes how DNS resolution
> works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> address like 1.2.3.4.
>
> Until now, we used two different approaches for this:
>
>  * On desktops and touch, NetworkManager launched "dnsmasq" configured
>as effectively a local DNS server which forwards requests to the
>"real" DNS servers that get picked up usually via DHCP. Thus
>/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
>non-obvious to show the real DNS servers. (This was one of the
>complaints/triggers that led to creating this blueprint).  But
>dnsmasq does proper rotation and fallback between multiple
>nameservers, i. e. if one does not respond it uses the next one
>without long timeouts.
>
>  * On servers, cloud images etc. we did not have any local DNS server.
>Configured DNS servers (via DHCP or static configuration in
>/etc/network/interfaces) were put into /etc/resolv.conf, and
>every program (via glibc's builtin resolver) directly contacted
>those.
>
>This had the major drawback that if the first DNS server does not
>respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
>timeout, which makes every network operation awfully slow.
>Addressing this was the main motivation for the blueprint. On top
>of that, there was no local caching, thus requesting the same name
>again would do another lookup.
>
> As of today, we now have one local resolver service for all Ubuntu
> products; we picked "resolved" as that is small and lightweight,
> already present (part of the systemd package), does not require D-Bus
> (unlike dnsmasq), supports DNSSEC, provides transparent fallback to
> contacting the real DNS servers directly (in case anything goes wrong
> with the local resolver), and avoids the first issue above that
> /etc/resolv.conf always shows 127.0.0.1.
>
> Now DNS resolution goes via a new "libnss-resolve" NSS module which
> talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
> broken name servers are handled efficiently, and we have local DNS
> caching. NetworkManager now stops launching a dnsmasq instance.
>
> I've had this running on my laptop for about three weeks now without
> noticing problems, but there may well be some corner cases where this
> causes problems. If you encounter a regression that causes DNS names
> to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
> with the details.
>
> Thanks,
>
> Martin
>
> [1]
> https://blueprints.launchpad.net/ubuntu/+spec/foundations-y-local-resolver
> [2] This is configured in /etc/nsswitch.conf ("hosts: files ... resolve
> dns")
> --
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)
>
> --
> ubuntu-devel mailing list
> ubuntu-devel@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel
>
>


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Dave Morley
On Tue, 31 May 2016 11:34:41 +0200
Martin Pitt  wrote:

> Hello all,
> 
> yesterday I landed [1] in Yakkety which changes how DNS resolution
> works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> address like 1.2.3.4.
> 
> Until now, we used two different approaches for this:
> 
>  * On desktops and touch, NetworkManager launched "dnsmasq" configured
>as effectively a local DNS server which forwards requests to the
>"real" DNS servers that get picked up usually via DHCP. Thus
>/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
>non-obvious to show the real DNS servers. (This was one of the
>complaints/triggers that led to creating this blueprint).  But
>dnsmasq does proper rotation and fallback between multiple
>nameservers, i. e. if one does not respond it uses the next one
>without long timeouts.
> 
>  * On servers, cloud images etc. we did not have any local DNS server.
>Configured DNS servers (via DHCP or static configuration in
>/etc/network/interfaces) were put into /etc/resolv.conf, and
>every program (via glibc's builtin resolver) directly contacted
>those.
> 
>This had the major drawback that if the first DNS server does not
>respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
>timeout, which makes every network operation awfully slow.
>Addressing this was the main motivation for the blueprint. On top
>of that, there was no local caching, thus requesting the same name
>again would do another lookup.
> 
> As of today, we now have one local resolver service for all Ubuntu
> products; we picked "resolved" as that is small and lightweight,
> already present (part of the systemd package), does not require D-Bus
> (unlike dnsmasq), supports DNSSEC, provides transparent fallback to
> contacting the real DNS servers directly (in case anything goes wrong
> with the local resolver), and avoids the first issue above that
> /etc/resolv.conf always shows 127.0.0.1.
> 
> Now DNS resolution goes via a new "libnss-resolve" NSS module which
> talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
> broken name servers are handled efficiently, and we have local DNS
> caching. NetworkManager now stops launching a dnsmasq instance.
> 
> I've had this running on my laptop for about three weeks now without
> noticing problems, but there may well be some corner cases where this
> causes problems. If you encounter a regression that causes DNS names
> to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
> with the details.
> 
> Thanks,
> 
> Martin
> 
> [1]
> https://blueprints.launchpad.net/ubuntu/+spec/foundations-y-local-resolver
> [2] This is configured in /etc/nsswitch.conf ("hosts: files ...
> resolve dns")

How will this work on the phone if it is only enabled in yakkety? How
will this affect landing phone silos? Have you tested with a 3G/4G
dongle so you have two DNS servers up at the same time? How are
suspend, reboot and flight mode scenarios handled?

I think that is all the questions I can think of :)

-- 
You Make It, I'll Break It!

I Love My Job :)

http://www.canonical.com
http://www.ubuntu.com




Re: Go shared libraries are coming

2016-05-31 Thread Michael Hudson-Doyle
On 31 May 2016 at 12:48, Martin Packman  wrote:
> On 25/05/2016, Michael Hudson-Doyle  wrote:
>>
>> I've attempted to document the new world at
>> https://docs.google.com/document/d/1IOlBWWgcDeB9PfRORENESYj8iJt4W2EwsbYcpg4akBE/edit#
>
> Thank you for the clear write-up.

I'm glad it came across clearly!

> Is the thought that for instance all the -dev packages juju currently
> depends on should move to providing a shared library? I presume our
> current position of not splitting up the github.com/juju namespace
> into multiple packages until we expect multiple consumers still
> stands.

Yes, that sounds right to me. TBH, I've not really thought super hard
about juju as it is still in this strange "some deps from source
package / some deps from archive" middle ground, but I assume that
some sanity will emerge here (or at least stability). There's no point
making separate packages for the different github.com/juju things until
there is an actual need for them.

Cheers,
mwh
PS: can we stop including jujud in the juju-2.0 binary packages yet?



ANN: DNS resolver changes in yakkety

2016-05-31 Thread Martin Pitt
Hello all,

yesterday I landed [1] in Yakkety which changes how DNS resolution
works -- i. e. how names like "www.ubuntu.com" get translated to an IP
address like 1.2.3.4.

Until now, we used two different approaches for this:

 * On desktops and touch, NetworkManager launched "dnsmasq" configured
   as effectively a local DNS server which forwards requests to the
   "real" DNS servers that get picked up usually via DHCP. Thus
   /etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
   non-obvious to show the real DNS servers. (This was one of the
   complaints/triggers that led to creating this blueprint).  But
   dnsmasq does proper rotation and fallback between multiple
   nameservers, i. e. if one does not respond it uses the next one
   without long timeouts.

 * On servers, cloud images etc. we did not have any local DNS server.
   Configured DNS servers (via DHCP or static configuration in
   /etc/network/interfaces) were put into /etc/resolv.conf, and
   every program (via glibc's builtin resolver) directly contacted
   those.

   This had the major drawback that if the first DNS server does not
   respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
   timeout, which makes every network operation awfully slow.
   Addressing this was the main motivation for the blueprint. On top
   of that, there was no local caching, thus requesting the same name
   again would do another lookup.

As of today, we now have one local resolver service for all Ubuntu
products; we picked "resolved" as that is small and lightweight,
already present (part of the systemd package), does not require D-Bus
(unlike dnsmasq), supports DNSSEC, provides transparent fallback to
contacting the real DNS servers directly (in case anything goes wrong
with the local resolver), and avoids the first issue above that
/etc/resolv.conf always shows 127.0.0.1.

Now DNS resolution goes via a new "libnss-resolve" NSS module which
talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
broken name servers are handled efficiently, and we have local DNS
caching. NetworkManager now stops launching a dnsmasq instance.

I've had this running on my laptop for about three weeks now without
noticing problems, but there may well be some corner cases where this
causes problems. If you encounter a regression that causes DNS names
to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
with the details.

Thanks,

Martin

[1] https://blueprints.launchpad.net/ubuntu/+spec/foundations-y-local-resolver
[2] This is configured in /etc/nsswitch.conf ("hosts: files ... resolve dns")
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)

