Re: pastebinit default target on Ubuntu

2024-04-15 Thread Stéphane Graber
On Mon, Apr 15, 2024 at 4:14 PM Steve Langasek
 wrote:
>
> On Mon, Apr 15, 2024 at 04:48:17PM +0100, Robie Basak wrote:
> > Prior to Noble, the pastebinit command defaulted to paste.ubuntu.com. In
> > Noble, this has changed to dpaste.com due to an upstream change[1].
>
> > What do Ubuntu developers think the default should be? If it should
> > remain paste.ubuntu.com, we can ask upstream to change it back, or add a
> > delta for now.
>
> > Reason to keep it dpaste.com:
>
> > People have complained that the login requirement makes it unusable for
> > helping Ubuntu users at large who don't necessarily have an Ubuntu SSO
> > account.
>
> > Reason to keep it paste.ubuntu.com:
>
> > I'm not keen on relying on third party services when not necessary,
> > especially ad-supported ones. I have no reason to distrust the current
> > operator, but in general, these things tend to go wrong sooner or later.
>
> > There was more discussion on IRC[2].
>
> > [1] 
> > https://github.com/pastebinit/pastebinit/commit/5c668fb3ed9b4a103eb22b16e603050a539951e0
> > [2] https://irclogs.ubuntu.com/2024/04/15/%23ubuntu-devel.html#t14:17
>
> I was not pleased to see the switch to dpaste.com.  I've found that it's
> pretty unusable on mobile, and I don't like this pointing to a service we
> don't control.
>
> And if there are issues with the usability of paste.ubuntu.com, uh, we own
> that service?  So let's work with our IS team to make it fit for purpose.
> (I don't know why it currently requires a login to *view* paste contents;
> that seems straightforwardly a bug that we should just get sorted.)

That's because pastebin servers are frequently abused as a way to get
free mass storage.

It's not very practical to require a login to post to a pastebin, as
the whole point is for a tool like "pastebinit" to work without any
user configuration; it's commonly used as a debug tool on cloud
instances and other random servers rather than on a user's personal
system.
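
For illustration, typical usage looks something like this (the -b flag
for picking a target is documented in the pastebinit manpage; the exact
target URL here is just an example):

  # Post a log with whatever the default service is, no account needed:
  dmesg | pastebinit

  # Or explicitly pick a target:
  pastebinit -b https://dpaste.com /var/log/syslog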

With that in mind, a bunch of folks noticed that you could abuse a
service like paste.ubuntu.com by pushing large files (base64 encoded
or the like) and then retrieving them with a very trivial amount of
HTML parsing (if no raw option is offered directly).


There are obviously alternatives to this, but they tend to require a
bunch more server-side logic, basically trying to find the right set
of restrictions on both poster and reader so that legitimate users can
use the service normally while abusers get sufficiently annoyed to
stay away from it.

> --
> Steve Langasek   Give me a lever long enough and a Free OS
> Debian Developer   to set it on, and I can move the world.
> Ubuntu Developer   https://www.debian.org/
> slanga...@ubuntu.com vor...@debian.org
> --
> ubuntu-devel mailing list
> ubuntu-devel@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel



-- 
Stéphane

-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Should we be reverting iptables to iptables-legacy for eoan?

2019-09-10 Thread Stéphane Graber
On Tue, Sep 10, 2019 at 8:12 PM Jamie Strandboge  wrote:
>
> On Tue, 10 Sep 2019, Julian Andres Klode wrote:
>
> > Hi folks,
> >
> > it turns out that lxd is broken by iptables now using the nft
> > based stuff, because lxd is still using the legacy one from
> > inside the snap.
> >
> > This provides a terrible experience because networking in lxd
> > is not working at all once you enable ufw.
>
> Is this because ufw uses whatever is on the system (ie, nft default) and
> lxd is using whatever is in the snap (ie, historic iptables which with
> iptables 1.8 translates to 'iptables-legacy') and this means that both
> backends are in use, which results in a mixed view of things?

Right, ufw goes through nft, setting up a bunch of rejects, then lxd
starts up and loads its own rules, which would normally allow its
traffic. That doesn't work here because the nft ruleset runs in its
entirety ahead of the legacy iptables rules, so a final reject
decision has already been made. There are in theory some ways to have
both nft and legacy rules working at the same time, though that ends
up being a debugging/management nightmare as there are no tools that
query both and give you a unified view of what's going on.
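
To see the problem concretely, with iptables 1.8 each backend has to
be queried separately; something like:

  sudo iptables-legacy -S   # rules loaded through the legacy backend
  sudo iptables-nft -S      # rules loaded through the nft-based wrapper
  sudo nft list ruleset     # the full native nft view

No single listing tells you what a packet will actually go through.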

> The man pages for iptables-nft/iptables-legacy don't call this out as a
> problem, but README.Debian does:
>
>   NOTE: make sure you don't mix iptables-legacy and iptables-nft
>   rulesets in the same machine at the same time just for sanity and to
>   avoid unexpected behaviours in your network.
>
> > I'd suggest we increase the priority of iptables-legacy for eoan,
> > so that it is the default, and move the switch to xtables-nft-based
> > one to next release.
> >
> > This will allow us to have working lxd networking, and gives
> > the lxd team some breathing room.
>
> This makes sense to me. Besides, there are still bugs trickling in where
> the nft backend isn't acting fully compatibly anyway. However, it would
> be nice to use the nft backend by default by 20.04, ie, at the opening
> of the cycle.

Agreed. We've added an item on the LXD roadmap to deal with nft next
cycle. We've had requests coming in from users of other distros
already and I've done a fair amount of research around nft ahead of
this work, but it will require quite a lot of changes on our side as
we use a fair amount of traditional iptables/ip6tables rules and a
bunch of very weird ebtables ones (the latter being what failed when
run with the compat wrappers in our case).

> That said, this is a very real immediate problem for snaps on 19.10+,
> Buster+ and Ubuntu Core (CC'ing Samuele and Michael specifically) since
> the only available bases are based on xenial and bionic and snaps can
> only stage-packages from xenial and bionic, so snaps that plugs
> 'firewall-control' (or classic snaps) will by definition by far tend to
> use 'legacy' (unless they build iptables from source). Even though the
> system might have the nft backend enabled. Furthermore, the man page for
> iptables-nft says that kernel >= 4.17 is required for the nft backend
> (but ISTR issues in Debian with > 4.19) and its entirely possible
> someone could be running a system with an older kernel with a newer user
> space (eg, core20 snap with bionic kernel snap).
>
> I'm not sure how to fix this. The only thing that seems reasonable is
> for snaps to only be able to use the correct one that matches the
> host/kernel. To pull that off snapd would need to examine the
> capabilities and configuration of the running system then bind mount
> iptables/etc from the host into runtime of the snap (the review-tools
> could flag if snaps ship these binaries). Alternatively, an iptables
> snapcraft part could be created that at runtime can interrogate whether
> to use the nft or legacy backends (or snapd exposes which backend is in
> use to the snap so it can decide).
>
> I suspect that Michael and Samuele will want to move the snap-specific
> parts of this discussion to the snapcraft forum, but +1 from me to
> prefer 'legacy' for the time being in Ubuntu.

I opened https://bugs.launchpad.net/ubuntu/+source/iptables/+bug/1843468
to track this on Launchpad.
If we go with the revert, that bug should be closed and another one
opened to track the work to properly switch to nft early next cycle.


Stéphane

-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Should we be reverting iptables to iptables-legacy for eoan?

2019-09-10 Thread Stéphane Graber
Hi,

I'm strongly in the revert camp. This change landed in the release
pocket after Feature Freeze and effectively causes the majority of
software in the Ubuntu archive to rely on compatibility wrappers to
function.
Those wrappers aren't a perfect match for the tools they replace,
causing failures or, worse, apparent success while loading
non-matching rules into nft.

With both legacy iptables and nft functioning at the same time, we may
also end up in the very confusing situation of having some rules
loaded into nft while older tools load theirs directly into legacy
iptables, leaving only half the rules visible to any given tool.

I agree that nft is the future, but this needs coordination. We need
to figure out which packages in main use
iptables/ip6tables/ebtables and make sure that they're either
perfectly compatible with the compat wrappers OR have native nft
support.
And software that grows direct nft support will need to detect whether
nft is actually in use rather than just check whether the kernel
supports it; that's necessary to avoid ending up with rules in both
backends.

For LXD specifically, we think it would take us about 3 weeks of
engineering work to sort this in a way that can work on all
distributions, properly detecting and supporting the following cases
(roughly sketched below):
 - no nft present
 - nft present but old iptables used
 - nft present and used
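
A rough sketch of that detection, assuming an empty nft ruleset is a
good enough signal for "present but unused" (the real logic will
likely need to be smarter than this):

  #!/bin/sh
  if ! command -v nft >/dev/null 2>&1; then
      echo "no nft present: use iptables/ip6tables/ebtables"
  elif [ -z "$(nft list ruleset 2>/dev/null)" ]; then
      echo "nft present but unused: stick with legacy iptables"
  else
      echo "nft present and used: load native nft rules"
  fi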

The other problem we'll need to look into is that many of the nft
tools and user guides appear to start by doing a full flush of nft
before populating it. That may cause some ordering headaches, as LXD
needs to add some rules of its own and obviously doesn't want them
reset underneath it. nft is actually more flexible than iptables for
updating the ruleset, but we need to ensure it's set up properly so we
don't end up with surprises.
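
As an example of that flexibility, nft lets a program keep its rules
in a dedicated table and replace them atomically without touching the
rest of the ruleset (table name hypothetical):

  nft add table inet lxd
  nft flush table inet lxd     # clears only this table
  nft -f /tmp/lxd-rules.nft    # atomically loads rules from a file

That still doesn't protect against a guide's "nft flush ruleset"
though, which is why the ordering issue above needs solving.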

Stéphane


On Tue, Sep 10, 2019 at 5:12 PM Julian Andres Klode
 wrote:
>
> Hi folks,
>
> it turns out that lxd is broken by iptables now using the nft
> based stuff, because lxd is still using the legacy one from
> inside the snap.
>
> This provides a terrible experience because networking in lxd
> is not working at all once you enable ufw.
>
> I'd suggest we increase the priority of iptables-legacy for eoan,
> so that it is the default, and move the switch to xtables-nft-based
> one to next release.
>
> This will allow us to have working lxd networking, and gives
> the lxd team some breathing room.
>
> --
> debian developer - deb.li/jak | jak-linux.org - free software dev
> ubuntu core developer  i speak de, en
>
> --
> ubuntu-devel mailing list
> ubuntu-devel@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel



-- 
Stéphane

-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Consider switching to systemd-coredump for default kernel core dumps

2019-02-05 Thread Stéphane Graber
On Wed, Feb 06, 2019 at 04:57:32AM +0530, Prasanna V. Loganathar wrote:
> >
> > So having all the dumps centralized on the host when you don't know what
> > container it came from and may no longer have any of the environment
> > information is completely useless.
> >
> 
> Adding on to my previous reply, perhaps there's a misunderstanding on this
> one? Who's forwarding dumps to the host to centralize them? I don't
> understand what you mean here. If the default is either a plain file, or
> systemd-coredump (with it being installed inside the container), the
> container can handle it on its own, leaving the dump file inside the
> crashed container. Today, with ubuntu's default configuration and apport,
> it's just gone -- Post-mortem? Well, too bad, apport inside the container
> just burned the corpse - Good luck! -- whereas non-ubuntu distros mostly
> leave it intact.

You don't understand how core dump handlers work, unfortunately.

When using a core dump handler (rather than a plain core file), any
crash causes the core to be piped to the stdin of the program set in
the core_pattern.

This path isn't container aware or context aware in any way. So when
it's set to /bin/whatever, it's always the host's /bin/whatever that
receives the crash, and that command is spawned by the kernel as real
root with full privileges (rather than the limited set that the
crashed program might have had).

The only way to have a crash in a container result in a helper being
run inside that container is for the helper on the host to be
container aware and to have a safe way to call its counterpart in the
container (and let me tell you that doing that safely isn't easy!).

> Places where forwarding dumps to a host takes place, I'd assume it's a
> sophisticated enough infrastructure where the core dump handler is
> overridden in the first place.

Nope, everything landing in the helper on the host is exactly what
happens whenever you have a core pattern that starts with a "|".
Similarly, any pattern which isn't just a file name (like "core") but
an absolute path will be treated as an absolute path on the host, not
in the container where the crash occurred.
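
For reference, the two forms side by side (the first value is Ubuntu's
current default as quoted later in this thread, the second is just an
illustration):

  # Pipe form: the kernel spawns this program on the host, as real
  # root, with the core streamed to its stdin:
  kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P

  # Path form: resolved against the host's filesystem, not the
  # container's:
  kernel.core_pattern = /var/crash/core.%e.%p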

> 
> On Wed, Feb 6, 2019 at 4:44 AM Prasanna V. Loganathar 
> wrote:
> 
> > Except for the part where a coredump for an unknown binary is useless.
> >
> >
> > If I have a service that crashes in a container. I'd like to get back into
> > the container and inspect why. Simply throwing out unknown binary crashes
> > doesn't exactly seem like a stellar decision to me. And unknown binary to
> > whom? It may not be known in the ubuntu/debian repos - that doesn't mean
> > the devs don't have debug symbols and reproducible binary builds which
> > they can correlate with in post-mortem circumstances, with or without the
> > actual environment. In fact, I'd presume that's true in 100% of container
> > usages. So, I'm not entirely sure what you're talking about in terms of a
> > small user base.
> >
> > That's why our current setup is to forward the crash to the containers
> >> when the containers are detected to know how to handle coredumps and
> >> otherwise throw them out as they're not going to be of any use.
> >
> >
> > This raises all the alarms for me, and is the biggest problem I have with
> > apport as the default. Debian simply dumps to a 'core' file.
> > RHEL/Fedora/CentOS does either that, or handles it with systemd-coredump
> > which ends up with a dump in "/var/lib/systemd/coredump". Apport cannot
> > auto report. But throwing out the whole dump is just plain wrong to me. With
> > what I'm proposing, everyone's happy. Any of the potential solutions I
> > state leaves the dump intact, and apport can just sit on top of it,
> > analyse the files and throw away the dumps that it handles, or even just
> > become a pure reporter instead of handling the dumps on its own.
> >
> >
> > On Wed, Feb 6, 2019 at 4:02 AM Stéphane Graber 
> > wrote:
> >
> >> On Wed, Feb 06, 2019 at 03:35:23AM +0530, Prasanna V. Loganathar wrote:
> >> > >
> >> > > You need to have apport itself installed in the container, I suspect
> >> > > that the Docker containers do not have it.
> >> >
> >> >
> >> > This will make no difference. Doing an `apt update && apt install -y
> >> > apport` will not do any good, as apport is set to disable itself on
> >> > containers.
> >>
> >> And indeed Docker doesn't even have an init system running so that'd be
> >> pretty useless as the apport forwarding service wouldn't work.
> >>
>

Re: Consider switching to systemd-coredump for default kernel core dumps

2019-02-05 Thread Stéphane Graber
On Wed, Feb 06, 2019 at 03:35:23AM +0530, Prasanna V. Loganathar wrote:
> >
> > You need to have apport itself installed in the container, I suspect
> > that the Docker containers do not have it.
> 
> 
> This will make no difference. Doing an `apt update && apt install -y
> apport` will not do any good, as apport is set to disable itself on
> containers.

And indeed Docker doesn't even have an init system running so that'd be
pretty useless as the apport forwarding service wouldn't work.

> > Having the dump handled by apport on the host would be useless as the
> > host doesn't know what binary was used to produce the dump (and so can't
> > be used with retracers) nor can it capture any of the crash environment
> > information as the relevant data is in the container, not in the host.
> >
> 
> Precisely my point. But this is not a problem with systemd-coredump, as it
> simply just manages the dumps. Doesn't need retracing. All
> RHEL/Fedora/CentOS based distros work without any modifications, as
> expected. With containers everywhere, I think reconsidering the
> default and unifying things around systemd-coredump simplifies things
> for everyone. coredumpctl is also much nicer to work with. apport should
> easily be able to just watch over the dumps and do its reporting on
> desktop systems from there.

Except for the part where a coredump for an unknown binary is useless.

Having all the dumps centralized on the host when you don't know which
container each came from, and may no longer have any of the
environment information, is completely useless.

That's why our current setup is to forward the crash into the
container when the container is detected to be capable of handling
coredumps, and to otherwise throw it out as it's not going to be of
any use.


For people who are actively debugging a container that's getting
crashes and know which binary to use with the dumped core file, it's
not a huge deal to change the core dump handler (stopping the apport
service will even do that for you).
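
Something like this, for instance (relying on the service behaviour
described above):

  systemctl stop apport.service
  sysctl kernel.core_pattern   # back to a plain file pattern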

Those people make up a fraction of a percent of Ubuntu's userbase;
everyone else strongly benefits from crash reports being handed over
to apport, where extra data is captured and the resulting crashes can
be sent to errors.ubuntu.com for retracing and aggregation.


> On Wed, Feb 6, 2019 at 3:22 AM Stéphane Graber  wrote:
> 
> > On Wed, Feb 06, 2019 at 03:05:20AM +0530, Prasanna V. Loganathar wrote:
> > > Hi Stephane,
> > >
> > > Ah. I had overlooked this one. It does work well in lxc. Thanks for
> > > pointing that out.
> > > However, apport's default is to do nothing in containers.
> > >
> > > docker run --name testbox --rm -it ubuntu bash
> > > > sleep 10 & kill -ABRT $(pgrep sleep)
> > >
> > > This has no /var/crash directory. There are no dumps. Doesn't help
> > > with "--ulimit core=x:x" to docker option either.
> >
> > You need to have apport itself installed in the container, I suspect
> > that the Docker containers do not have it.
> >
> > Having the dump handled by apport on the host would be useless as the
> > host doesn't know what binary was used to produce the dump (and so can't
> > be used with retracers) nor can it capture any of the crash environment
> > information as the relevant data is in the container, not in the host.
> >
> > >
> > > On Tue, Feb 5, 2019 at 10:32 PM Stéphane Graber 
> > wrote:
> > > >
> > > > On Tue, Feb 05, 2019 at 06:36:48AM +0530, Prasanna V. Loganathar wrote:
> > > > > Hi Stephane,
> > > > >
> > > > > Thanks for the reply. Please correct me if I'm wrong.
> > > > >
> > > > > lxc launch ubuntu:18:04 testbox
> > > > > lxc exec testbox bash
> > > > > sleep 100 & kill -ABRT $(pgrep sleep)
> > > >
> > > > stgraber@castiana:~$ lxc launch ubuntu:18.04 testbox
> > > > Creating testbox
> > > > Starting testbox
> > > > stgraber@castiana:~$ lxc exec testbox bash
> > > > root@testbox:~# sleep 10 &
> > > > [1] 303
> > > > root@testbox:~# kill -ABRT 303
> > > > root@testbox:~#
> > > > [1]+  Aborted (core dumped) sleep 10
> > > > root@testbox:~# ls -lh /var/crash
> > > > total 512
> > > > -rw-r- 1 root root 34K Feb  5 17:02 _bin_sleep.0.crash
> > > > root@testbox:~#
> > > >
> > > > > There's no crash dump anywhere.
> > > > >
> > > > > /var/crash is empty. No `core` file, etc. The default i

Re: Consider switching to systemd-coredump for default kernel core dumps

2019-02-05 Thread Stéphane Graber
On Wed, Feb 06, 2019 at 03:05:20AM +0530, Prasanna V. Loganathar wrote:
> Hi Stephane,
> 
> Ah. I had overlooked this one. It does work well in lxc. Thanks for
> pointing that out.
> However, apport's default is to do nothing in containers.
> 
> docker run --name testbox --rm -it ubuntu bash
> > sleep 10 & kill -ABRT $(pgrep sleep)
> 
> This has no /var/crash directory. There are no dumps. Doesn't help
> with "--ulimit core=x:x" to docker option either.

You need to have apport itself installed in the container, I suspect
that the Docker containers do not have it.

Having the dump handled by apport on the host would be useless as the
host doesn't know what binary was used to produce the dump (and so can't
be used with retracers) nor can it capture any of the crash environment
information as the relevant data is in the container, not in the host.

> 
> On Tue, Feb 5, 2019 at 10:32 PM Stéphane Graber  wrote:
> >
> > On Tue, Feb 05, 2019 at 06:36:48AM +0530, Prasanna V. Loganathar wrote:
> > > Hi Stephane,
> > >
> > > Thanks for the reply. Please correct me if I'm wrong.
> > >
> > > lxc launch ubuntu:18:04 testbox
> > > lxc exec testbox bash
> > > sleep 100 & kill -ABRT $(pgrep sleep)
> >
> > stgraber@castiana:~$ lxc launch ubuntu:18.04 testbox
> > Creating testbox
> > Starting testbox
> > stgraber@castiana:~$ lxc exec testbox bash
> > root@testbox:~# sleep 10 &
> > [1] 303
> > root@testbox:~# kill -ABRT 303
> > root@testbox:~#
> > [1]+  Aborted (core dumped) sleep 10
> > root@testbox:~# ls -lh /var/crash
> > total 512
> > -rw-r- 1 root root 34K Feb  5 17:02 _bin_sleep.0.crash
> > root@testbox:~#
> >
> > > There's no crash dump anywhere.
> > >
> > > /var/crash is empty. No `core` file, etc. The default is that
> > > crashes just vaporise themselves - that's my biggest concern. While, for
> > > instance, on Debian you can rely on a 'core' file, and on
> > > CentOS/RHEL you can always rely on the systemd-coredump infrastructure,
> > > addressed so nicely by the coredumpctl command.
> > >
> > > With systemd being the init, and coredumpctl being very well
> > > architected to manage this, I just don't see why Ubuntu shouldn't
> > > make use of that and have apport only do what it does best - which is
> > > primarily reporting.
> > >
> > >
> > >
> > > On Tue, Feb 5, 2019 at 1:57 AM Stéphane Graber  
> > > wrote:
> > > >
> > > > On Sat, Feb 02, 2019 at 03:34:22AM +0530, Prasanna V. Loganathar wrote:
> > > > > Hi folks,
> > > > >
> > > > > Currently, `$ sysctl kernel.core_pattern` gives
> > > > > `kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P`
> > > > >
> > > > > This is usually fine, however, when run from containers or lxc this
> > > > > will just error out, with no core dump being produced, due to it being
> > > > > disabled. Adding to the problem: with containers or lxc, you can't just
> > > > > change it per container; it has to be changed on the host. And it's
> > > > > not reasonable to expect apport in all containers either. Since, this
> > > > > is a common problem, I think it's important to consider a change in
> > > > > the default handling.
> > > > >
> > > > > There are multiple options to deal with this:
> > > > >
> > > > > 1. Drop apport as default core_pattern handler and switch to a simple
> > > > > file in either /var/crash (this requires creating /var/crash as a part
> > > > > of the installation as it's currently created by apport), or /tmp and
> > > > > let apport handle the core dump after the fact, and report it.
> > > > >
> > > > > 2. Switch to systemd-coredump, and default to it, since it already
> > > > > does this very well and provides "coredumpctl" which is much nicer to
> > > > > work with. systemd-coredump also is a part of the systemd suite of
> > > > > utils and doesn't pull in as large a dependency as apport -- which to
> > > > > date isn't as robust (I still have "core" files being left all over
> > > > > the place due to the fact that apport resets itself and crashes
> > > > > during startup aren't handled properly). This also has a nice
> > > > > advantage of unifying the OSS community in terms of coredump handle

Re: Consider switching to systemd-coredump for default kernel core dumps

2019-02-05 Thread Stéphane Graber
On Tue, Feb 05, 2019 at 06:36:48AM +0530, Prasanna V. Loganathar wrote:
> Hi Stephane,
> 
> Thanks for the reply. Please correct me if I'm wrong.
> 
> lxc launch ubuntu:18:04 testbox
> lxc exec testbox bash
> sleep 100 & kill -ABRT $(pgrep sleep)

stgraber@castiana:~$ lxc launch ubuntu:18.04 testbox
Creating testbox
Starting testbox
stgraber@castiana:~$ lxc exec testbox bash
root@testbox:~# sleep 10 &
[1] 303
root@testbox:~# kill -ABRT 303
root@testbox:~# 
[1]+  Aborted (core dumped) sleep 10
root@testbox:~# ls -lh /var/crash
total 512
-rw-r- 1 root root 34K Feb  5 17:02 _bin_sleep.0.crash
root@testbox:~# 

> There's no crash dump anywhere.
> 
> /var/crash is empty. No `core` file, etc. The default is that
> crashes just vaporise themselves - that's my biggest concern. While, for
> instance, on Debian you can rely on a 'core' file, and on
> CentOS/RHEL you can always rely on the systemd-coredump infrastructure,
> addressed so nicely by the coredumpctl command.
> 
> With systemd being the init, and coredumpctl being very well
> architected to manage this, I just don't see why Ubuntu shouldn't
> make use of that and have apport only do what it does best - which is
> primarily reporting.
> 
> 
> 
> On Tue, Feb 5, 2019 at 1:57 AM Stéphane Graber  wrote:
> >
> > On Sat, Feb 02, 2019 at 03:34:22AM +0530, Prasanna V. Loganathar wrote:
> > > Hi folks,
> > >
> > > Currently, `$ sysctl kernel.core_pattern` gives
> > > `kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P`
> > >
> > > This is usually fine, however, when run from containers or lxc this
> > > will just error out, with no core dump being produced, due to it being
> > > disabled. Adding to the problem: with container or lxc, you can't just
> > > change it per container, and should be changed on the host. And it's
> > > not reasonable to expect apport in all containers either. Since, this
> > > is a common problem, I think it's important to consider a change in
> > > the default handling.
> > >
> > > There are multiple options to deal with this:
> > >
> > > 1. Drop apport as default core_pattern handler and switch to a simple
> > > file in either /var/crash (this requires creating /var/crash as a part
> > > of the installation as it's currently created by apport), or /tmp and
> > > let apport handle the core dump after the fact, and report it.
> > >
> > > 2. Switch to systemd-coredump, and default to it, since it already
> > > does this very well and provides "coredumpctl" which is much nicer to
> > > work with. systemd-coredump also is a part of the systemd suite of
> > > utils and doesn't pull in as large a dependency as apport -- which to
> > > date isn't as robust (I still have "core" files being left all over
> > > the place due to the fact that apport resets itself and crashes
> > > during startup aren't handled properly). This also has a nice
> > > advantage of unifying the OSS community in terms of coredump handler.
> > > apport can handle things from the core dumps that systemd generates,
> > > further on desktops.
> > >
> > > 3. Employ a tiny helper script, as the default core dump handler,
> > > which looks for specified programs such as "apport", "abrt",
> > > systemd-coredump" and pipes to them, or pipes it to /var/crash, or
> > > /tmp during it's absence. This does have the disadvantage of growing
> > > with it's own config, rather quickly.
> > >
> > > That being said, I highly suggest option 2 be used in the upcoming
> > > versions, and apport be a layer sitting on top of the coredumps
> > > generated by systemd-coredump, by either hooking into it or by
> > > watching its folders.
> > >
> > > I've also filed this as a bug here:
> > > https://bugs.launchpad.net/ubuntu/+bug/1813403
> >
> > Are you aware that apport is aware of containers (LXC & LXD) and will
> > just forward your crash to apport inside the container?
> >
> > That actually tends to be a better way to handle things than
> > centralizing all crashes on the host: monitoring tools running inside
> > the containers will operate as normal, reporting on the presence of a
> > crash file, and sending the bug report to Launchpad will then work
> > properly, capturing the details from the container rather than
> > confusingly mixing package details of the host with a crash that may
> > have come from a completely different release.
> >
> > Ther

Re: Consider switching to systemd-coredump for default kernel core dumps

2019-02-04 Thread Stéphane Graber
On Sat, Feb 02, 2019 at 03:34:22AM +0530, Prasanna V. Loganathar wrote:
> Hi folks,
> 
> Currently, `$ sysctl kernel.core_pattern` gives
> `kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P`
> 
> This is usually fine, however, when run from containers or lxc this
> will just error out, with no core dump being produced, due to it being
> disabled. Adding to the problem: with container or lxc, you can't just
> change it per container, and should be changed on the host. And it's
> not reasonable to expect apport in all containers either. Since, this
> is a common problem, I think it's important to consider a change in
> the default handling.
> 
> There are multiple options to deal with this:
> 
> 1. Drop apport as default core_pattern handler and switch to a simple
> file in either /var/crash (this requires creating /var/crash as a part
> of the installation as it's currently created by apport), or /tmp and
> let apport handle the core dump after the fact, and report it.
> 
> 2. Switch to systemd-coredump, and default to it, since it already
> does this very well and provides "coredumpctl" which is much nicer to
> work with. systemd-coredump also is a part of the systemd suite of
> utils and doesn't pull in as large a dependency as apport -- which to
> date isn't as robust (I still have "core" files being left all over
> the place due to the fact that apport resets itself and crashes
> during startup aren't handled properly). This also has a nice
> advantage of unifying the OSS community in terms of coredump handler.
> apport can handle things from the core dumps that systemd generates,
> further on desktops.
> 
> 3. Employ a tiny helper script, as the default core dump handler,
> which looks for specified programs such as "apport", "abrt",
> systemd-coredump" and pipes to them, or pipes it to /var/crash, or
> /tmp during it's absence. This does have the disadvantage of growing
> with it's own config, rather quickly.
> 
> That being said, I highly suggest option 2 be used in the upcoming
> versions, and apport be a layer sitting on top of the coredumps
> generated by systemd-coredump, by either hooking into it or by
> watching its folders.
> 
> I've also filed this as a bug here:
> https://bugs.launchpad.net/ubuntu/+bug/1813403

Are you aware that apport is aware of containers (LXC & LXD) and will
just forward your crash to apport inside the container?

That actually tends to be a better way to handle things than
centralizing all crashes on the host: monitoring tools running inside
the containers will operate as normal, reporting on the presence of a
crash file, and sending the bug report to Launchpad will then work
properly, capturing the details from the container rather than
confusingly mixing package details of the host with a crash that may
have come from a completely different release.

There's obviously nothing wrong with someone opting out of apport and
switching to whatever core dump handler they want, but for Ubuntu,
apport has much better integration with the distribution, has hooks in
many packages and was designed with proper container support.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Transition of LXD from deb to snap

2018-09-17 Thread Stéphane Graber
On Wed, Sep 12, 2018 at 02:42:00PM -0400, Stéphane Graber wrote:
> On Tue, Aug 21, 2018 at 04:39:46PM -0400, Stéphane Graber wrote:
> > On Tue, Aug 21, 2018 at 03:35:11PM -0400, Stéphane Graber wrote:
> > > On Tue, Aug 21, 2018 at 12:25:33PM -0700, Steve Langasek wrote:
> > > > On Tue, Aug 21, 2018 at 03:03:48PM -0400, Stéphane Graber wrote:
> > > > > > > A branch of the stable channel of those tracks will be created and
> > > > > > > closed per policy on seeded snaps (allowing us to push emergency
> > > > > > > snaps to those users, bypassing the upstream).
> > > > 
> > > > > > Excellent!
> > > > 
> > > > > I actually had a question about that part, the wiki says to create an
> > > > > ubuntu-18.10 branch and use that during snap installation.
> > > > 
> > > > > But then what's responsible for switching this to a ubuntu-19.04 
> > > > > branch
> > > > > during the next upgrade?
> > > > 
> > > > Support for this has landed in ubuntu-release-upgrader 1:18.10.8 in 
> > > > cosmic;
> > > > LP: #1748581.  Note that this is based on a whitelist of known seeded 
> > > > snaps
> > > > that are encoded in u-r-u as part of the quirks handling (which is not 
> > > > ideal
> > > > since it duplicates the package seeds), so this will need to be updated 
> > > > to
> > > > include the lxd snap here.
> > > 
> > > Hmm, interesting, though it looks like the logic in the upgrader here is
> > > a bit lacking and may lead to data corruption or at least broken snaps.
> > > 
> > > It appears to just run "snap refresh 
> > > --channel=stable/ubuntu-18.10" which means a potential track switch and
> > > channel switch for users that have since decided to switch to another
> > > channel or track.
> > > 
> > > 
> > > Commented in the bug, I suspect this bug needs to be reopened and the
> > > logic revisited.
> > > 
> > > https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1748581
> > > 
> > > > > Since the same version of the snap will be pushed to all Ubuntu 
> > > > > series,
> > > > > wouldn't it make more sense to have it just be "ubuntu", saving us the
> > > > > trouble of having to figure out what to do on Ubuntu release upgrades
> > > > > and reflecting the fact that the snap is the same for all series.
> > > > 
> > > > This escape hatch exists precisely for the case that the upstream stable
> > > > snap does not integrate correctly in a release-agnostic fashion and
> > > > per-Ubuntu-release quirking is needed.  Better to have it and never use 
> > > > it
> > > > than to need it and not have it.
> > > 
> > > Yeah, if we fix the upgrader to handle the above properly
> > > (and as suggested in the LP bug), then that should be fine.
> > 
> > 
> > I've now updated the PPA with version ~ppa5.
> > 
> > This includes:
> >  - Using the ubuntu-XX.XX branches (created and closed)
> >  - Debconf prompts are now high except for the unreachable store one
> >which is critical
>  - Added logic to select the "3.0" track for LTS releases
>  - Added noninteractive logic to print a message at 0, 10 and 20 minutes,
>    retrying the connection every minute and giving up after 30.
> >  - Stop & disable the old systemd services to avoid any conflicts
> > 
> > I've validated all combinations of those variables:
> >  - series: 18.04 and 18.10
> >  - debconf: interactive (curses), interactive (text) and noninteractive
> >  - connectivity: available, unavailable, available after a few minutes
> 
> All rdepends of lxd and lxd-client have been tested to behave properly
> with the LXD snap and our deb shim.
> 
> I will now be uploading the transitional LXD deb to the archive, holding
> it in -proposed until Monday so anyone interested can test it that way
> and we can sort out any autopkgtest issue that may arise.
> 
> On Monday, I'll remove the blocker tag which should have it migrate to
> the release pocket, I'll land a seed change at the same time, removing
> lxd and lxd-client from the supported server seed and replacing lxd with
> snap:lxd in the server ship seed.
> 
> 
> This will then cause our images to start shipping the LXD snap rather
> than the deb; we can then sort out any issues that show up as a result
> of that change (should there be any).
> 
> 
> Note that at this time, the LXD snap isn't using socket activation yet.
> We have code in place for that in the edge channel but want to do more
> tests on it before we roll it out to all our users. This means that
> starting on Monday, LXD will be starting up unconditionally on 18.10
> images. This is a temporary situation and will be corrected by release.

The package is now in the release pocket and the seeds have been updated.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-09-12 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 04:39:46PM -0400, Stéphane Graber wrote:
> On Tue, Aug 21, 2018 at 03:35:11PM -0400, Stéphane Graber wrote:
> > On Tue, Aug 21, 2018 at 12:25:33PM -0700, Steve Langasek wrote:
> > > On Tue, Aug 21, 2018 at 03:03:48PM -0400, Stéphane Graber wrote:
> > > > > > A branch of the stable channel of those tracks will be created and
> > > > > > closed per policy on seeded snaps (allowing us to push emergency
> > > > > > snaps to those users, bypassing the upstream).
> > > 
> > > > > Excellent!
> > > 
> > > > I actually had a question about that part, the wiki says to create an
> > > > ubuntu-18.10 branch and use that during snap installation.
> > > 
> > > > But then what's responsible for switching this to a ubuntu-19.04 branch
> > > > during the next upgrade?
> > > 
> > > Support for this has landed in ubuntu-release-upgrader 1:18.10.8 in 
> > > cosmic;
> > > LP: #1748581.  Note that this is based on a whitelist of known seeded 
> > > snaps
> > > that are encoded in u-r-u as part of the quirks handling (which is not 
> > > ideal
> > > since it duplicates the package seeds), so this will need to be updated to
> > > include the lxd snap here.
> > 
> > Hmm, interesting, though it looks like the logic in the upgrader here is
> > a bit lacking and may lead to data corruption or at least broken snaps.
> > 
> > It appears to just run "snap refresh 
> > --channel=stable/ubuntu-18.10" which means a potential track switch and
> > channel switch for users that have since decided to switch to another
> > channel or track.
> > 
> > 
> > Commented in the bug, I suspect this bug needs to be reopened and the
> > logic revisited.
> > 
> > https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1748581
> > 
> > > > Since the same version of the snap will be pushed to all Ubuntu series,
> > > > wouldn't it make more sense to have it just be "ubuntu", saving us the
> > > > trouble of having to figure out what to do on Ubuntu release upgrades
> > > > and reflecting the fact that the snap is the same for all series.
> > > 
> > > This escape hatch exists precisely for the case that the upstream stable
> > > snap does not integrate correctly in a release-agnostic fashion and
> > > per-Ubuntu-release quirking is needed.  Better to have it and never use it
> > > than to need it and not have it.
> > 
> > Yeah, if we fix the upgrader to handle the above properly
> > (and as suggested in the LP bug), then that should be fine.
> 
> 
> I've now updated the PPA with version ~ppa5.
> 
> This includes:
>  - Using the ubuntu-XX.XX branches (created and closed)
>  - Debconf prompts are now high except for the unreachable store one
>which is critical
>  - Added logic to select the "3.0" track for LTS releases
>  - Added noninteractive logic to print a message at 0, 10 and 20 minutes,
>    retrying the connection every minute and giving up after 30.
>  - Stop & disable the old systemd services to avoid any conflicts
> 
> I've validated all combinations of those variables:
>  - series: 18.04 and 18.10
>  - debconf: interactive (curses), interactive (text) and noninteractive
>  - connectivity: available, unavailable, available after a few minutes

All rdepends of lxd and lxd-client have been tested to behave properly
with the LXD snap and our deb shim.

I will now be uploading the transitional LXD deb to the archive, holding
it in -proposed until Monday so anyone interested can test it that way
and we can sort out any autopkgtest issue that may arise.

On Monday, I'll remove the blocker tag which should have it migrate to
the release pocket, I'll land a seed change at the same time, removing
lxd and lxd-client from the supported server seed and replacing lxd with
snap:lxd in the server ship seed.


This will then cause our images to start shipping the LXD snap rather
than the deb; we can then sort out any issues that show up as a result
of that change (should there be any).


Note that at this time, the LXD snap isn't using socket activation yet.
We have code in place for that in the edge channel but want to do more
tests on it before we roll it out to all our users. This means that
starting on Monday, LXD will be starting up unconditionally on 18.10
images. This is a temporary situation and will be corrected by release.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 03:35:11PM -0400, Stéphane Graber wrote:
> On Tue, Aug 21, 2018 at 12:25:33PM -0700, Steve Langasek wrote:
> > On Tue, Aug 21, 2018 at 03:03:48PM -0400, Stéphane Graber wrote:
> > > > > A branch of the stable channel of those tracks will be created and
> > > > > closed per policy on seeded snaps (allowing us to push emergency snaps
> > > > > to those users, bypassing the upstream).
> > 
> > > > Excellent!
> > 
> > > I actually had a question about that part, the wiki says to create an
> > > ubuntu-18.10 branch and use that during snap installation.
> > 
> > > But then what's responsible for switching this to a ubuntu-19.04 branch
> > > during the next upgrade?
> > 
> > Support for this has landed in ubuntu-release-upgrader 1:18.10.8 in cosmic;
> > LP: #1748581.  Note that this is based on a whitelist of known seeded snaps
> > that are encoded in u-r-u as part of the quirks handling (which is not ideal
> > since it duplicates the package seeds), so this will need to be updated to
> > include the lxd snap here.
> 
> Hmm, interesting, though it looks like the logic in the upgrader here is
> a bit lacking and may lead to data corruption or at least broken snaps.
> 
> It appears to just run "snap refresh 
> --channel=stable/ubuntu-18.10" which means a potential track switch and
> channel switch for users that have since decided to switch to another
> channel or track.
> 
> 
> Commented in the bug, I suspect this bug needs to be reopened and the
> logic revisited.
> 
> https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1748581
> 
> > > Since the same version of the snap will be pushed to all Ubuntu series,
> > > wouldn't it make more sense to have it just be "ubuntu", saving us the
> > > trouble of having to figure out what to do on Ubuntu release upgrades
> > > and reflecting the fact that the snap is the same for all series.
> > 
> > This escape hatch exists precisely for the case that the upstream stable
> > snap does not integrate correctly in a release-agnostic fashion and
> > per-Ubuntu-release quirking is needed.  Better to have it and never use it
> > than to need it and not have it.
> 
> Yeah, if we fix the upgrader to handle the above properly
> (and as suggested in the LP bug), then that should be fine.


I've now updated the PPA with version ~ppa5.

This includes:
 - Using the ubuntu-XX.XX branches (created and closed)
 - Debconf prompts are now high except for the unreachable store one
   which is critical
 - Added logic to select the "3.0" track for LTS releases
 - Added noninteractive logic to print a message at 0, 10 and 20 minutes,
   retrying the connection every minute and giving up after 30.
 - Stop & disable the old systemd services to avoid any conflicts

I've validated all combinations of those variables:
 - series: 18.04 and 18.10
 - debconf: interactive (curses), interactive (text) and noninteractive
 - connectivity: available, unavailable, available after a few minutes


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 12:25:33PM -0700, Steve Langasek wrote:
> On Tue, Aug 21, 2018 at 03:03:48PM -0400, Stéphane Graber wrote:
> > > > A branch of the stable channel of those tracks will be created and
> > > > closed per policy on seeded snaps (allowing us to push emergency snaps
> > > > to those users, bypassing the upstream).
> 
> > > Excellent!
> 
> > I actually had a question about that part, the wiki says to create an
> > ubuntu-18.10 branch and use that during snap installation.
> 
> > But then what's responsible for switching this to a ubuntu-19.04 branch
> > during the next upgrade?
> 
> Support for this has landed in ubuntu-release-upgrader 1:18.10.8 in cosmic;
> LP: #1748581.  Note that this is based on a whitelist of known seeded snaps
> that are encoded in u-r-u as part of the quirks handling (which is not ideal
> since it duplicates the package seeds), so this will need to be updated to
> include the lxd snap here.

Hmm, interesting, though it looks like the logic in the upgrader here is
a bit lacking and may lead to data corruption or at least broken snaps.

It appears to just run "snap refresh 
--channel=stable/ubuntu-18.10" which means a potential track switch and
channel switch for users that have since decided to switch to another
channel or track.


Commented in the bug, I suspect this bug needs to be reopened and the
logic revisited.

https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1748581
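
A sketch of what safer logic might look like, only moving snaps that
are still on the default channel (channel names are illustrative;
"tracking:" is what "snap info" reports):

  current=$(snap info lxd | awk '/^tracking:/ {print $2}')
  case "$current" in
      stable|stable/ubuntu-*)
          snap refresh lxd --channel=stable/ubuntu-19.04 ;;
      *)
          echo "lxd is tracking $current, leaving it alone" ;;
  esac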

> > Since the same version of the snap will be pushed to all Ubuntu series,
> > wouldn't it make more sense to have it just be "ubuntu", saving us the
> > trouble of having to figure out what to do on Ubuntu release upgrades
> > and reflecting the fact that the snap is the same for all series.
> 
> This escape hatch exists precisely for the case that the upstream stable
> snap does not integrate correctly in a release-agnostic fashion and
> per-Ubuntu-release quirking is needed.  Better to have it and never use it
> than to need it and not have it.

Yeah, if we fix the upgrader to handle the above properly
(and as suggested in the LP bug), then that should be fine.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 11:41:18AM -0700, Steve Langasek wrote:
> Hi Stéphane,

Hi Steve,

> On Mon, Aug 20, 2018 at 05:13:11PM -0400, Stéphane Graber wrote:
> 
> 
> 
> The package in its preinst will attempt to contact the snap store,
> holding the package upgrade until it's reachable, then ask the user
> what track they'd like to use, install the LXD snap, wait for it to
> be online and automatically run the migration tool.
> > 
> > Should any error happen, the deb content will remain on disk (as all
> > this is done in preinst). We can then provide individual instructions
> for failed migrations and update lxd.migrate to account for any cases
> that it couldn't handle.
> 
> 
> 
> > The current plan is to have Ubuntu LTS releases default to installing
> > from the most recent LTS track (currently "3.0") while non-LTS Ubuntu
> > releases should default to installing from the "latest" track.
> 
> > The debconf prompt will always be shown, so this will only affect the
> > default selection.
> 
> It is impossible to enforce that debconf prompts are shown in all cases. 
> Unattended-upgrades are a thing.  Manually setting
> DEBIAN_FRONTEND=noninteractive is a thing.  DEBIAN_PRIORITY=critical is a
> thing (and while it's probably reasonable for the "cannot talk to the store"
> error prompt to be shown at critical priority, I think track selection
> fits the definition of a high-priority debconf prompt, not critical).

Fair enough, moved all prompts to "high" except for the unreachable
store one which remains "critical".

> This is a general rule anyway, but please be sure that the package behavior
> is sane when the debconf prompts are not shown.

The preinst both attempts to show debconf prompts AND prints relevant
messages to the terminal.

> - What should the behavior be if the store cannot be reached and the prompt
>   cannot be shown?  Should the preinst loop indefinitely (causing
>   unattended-upgrades to hold the apt lock forever until the admin
>   intervenes), or should the preinst abort (causing an apt transaction
>   failure)?

My current logic is to trigger the debconf prompt and, if unsuccessful,
to print "Unable to contact the store, trying again in 1 minute" to the
terminal and wait a minute before trying again.

> - If it should abort, you may find /usr/sbin/update-secureboot-policy in
>   shim-signed to be useful prior art. ("$seen_key")
> - If it should loop forever, please ensure the maintainer script outputs
>   sufficient information to stdout/stderr for the apparent hang to be
>   debuggable from just the apt log.  (But maybe don't generate output for
>   every loop iteration, since making /var/log ENOSPC is not the ideal way
>   for the admin to discover the problem either.)

Good point, I'll change the logic to be a bit less aggressive logging-wise.

I think I'll actually go with a hybrid of the two options (roughly
sketched below):
 - If possible, show the debconf critical prompt
 - If that doesn't work, print to the terminal that we'll retry every
   minute for the next 30 minutes.
 - Log something again after 10 and 20 minutes
 - Abort at 30
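
In preinst terms, roughly this (a sketch only; store_reachable and the
debconf template name are hypothetical):

  . /usr/share/debconf/confmodule
  for i in $(seq 1 30); do
      store_reachable && break    # hypothetical helper
      db_input critical lxd/store-unreachable || true
      db_go || true
      case "$i" in
          1)     echo "Unable to contact the snap store, retrying every minute" ;;
          10|20) echo "Still unable to contact the snap store ($i minutes)" ;;
          30)    echo "Snap store unreachable after 30 minutes, giving up" >&2
                 exit 1 ;;
      esac
      sleep 60
  done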

> > A branch of the stable channel of those tracks will be created and
> > closed per policy on seeded snaps (allowing us to push emergency snaps
> > to those users, bypassing the upstream).
> 
> Excellent!

I actually had a question about that part, the wiki says to create an
ubuntu-18.10 branch and use that during snap installation.

But then what's responsible for switching this to a ubuntu-19.04 branch
during the next upgrade?


Since the same version of the snap will be pushed to all Ubuntu series,
wouldn't it make more sense to have it just be "ubuntu", saving us the
trouble of having to figure out what to do on Ubuntu release upgrades
and reflecting the fact that the snap is the same for all series.

Stéphane


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 05:07:47PM +0200, Matthias Klose wrote:
> On 21.08.2018 17:01, Stéphane Graber wrote:
> > On Tue, Aug 21, 2018 at 04:58:21PM +0200, Matthias Klose wrote:
> >> On 21.08.2018 16:56, Stéphane Graber wrote:
> >>> On Tue, Aug 21, 2018 at 11:46:34AM +0200, Matthias Klose wrote:
> >>>> On 20.08.2018 23:13, Stéphane Graber wrote:
> >>>>> If you're running downstream software which interacts with LXD, I'd
> >>>>> strongly recommend you try switching to the snap, either using the
> >>>>> package in that PPA or manually by installing the LXD snap and then
> >>>>> running "lxd.migrate".
> >>>>
> >>>> there are currently (and have been for a while) failing lxd autopkg tests
> >>>> triggered by other packages (see update_excuses).  What's the future of 
> >>>> these?
> >>>> Short term fixing those please, and long term?
> >>>
> >>> LXD 3.0.2 which we'll be uploading this week has a "fix" for those
> >>> errors (effectively shellcheck becoming more verbose).
> >>>
> >>> The empty LXD deb will not contain any autopkgtest so once the
> >>> transition to the snap is done, autopkgtest will effectively always be a
> >>> no-op.
> >>
> >> ... which is interpreted as a failing autopkg test.  You have to add an 
> >> always
> >> succeeding autopkg test.
> > 
> > Sure but as I said just a paragraph above this, LXD 3.0.2, which we'll
> > be uploading this week, has a "fix" for this.
> 
> so if we have to wait several weeks for every autopkg test fix, that doesn't
> work in the archive.  We have a feature freeze tomorrow, and other packages
> depend on that.  From my point of view there is something very wrong if we
> have to wait that long.  Multiply that time by other needed autopkg test
> fixes, and we are at this point again accumulating packages in the -proposed
> pocket for no reason ...

Can you relax a bit maybe? LXD 3.0.2 was tagged last Thursday, we're
finishing the release announcement so that it can go in the changelog
and then we'll upload it.

The LXD testsuite takes quite a while to run, so it'd be a waste of
time to have to figure out what to cherry-pick, test-build, upload and
get things to the release pocket, only to do it all over again a few
hours later for the upstream bugfix release.

> sorry, but that's not pro-active archive work.

> Matthias

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 04:58:21PM +0200, Matthias Klose wrote:
> On 21.08.2018 16:56, Stéphane Graber wrote:
> > On Tue, Aug 21, 2018 at 11:46:34AM +0200, Matthias Klose wrote:
> >> On 20.08.2018 23:13, Stéphane Graber wrote:
> >>> If you're running downstream software which interacts with LXD, I'd
> >>> strongly recommend you try switching to the snap, either using the
> >>> package in that PPA or manually by installing the LXD snap and then
> >>> running "lxd.migrate".
> >>
> >> there are currently (and have been for a while) failing lxd autopkg tests
> >> triggered by other packages (see update_excuses).  What's the future of 
> >> these?
> >> Short term fixing those please, and long term?
> > 
> > LXD 3.0.2 which we'll be uploading this week has a "fix" for those
> > errors (effectively shellcheck becoming more verbose).
> > 
> > The empty LXD deb will not contain any autopkgtest so once the
> > transition to the snap is done, autopkgtest will effectively always be a
> > no-op.
> 
> ... which is interpreted as a failing autopkg test.  You have to add an always
> succeeding autopkg test.

Sure but as I said just a paragraph above this, LXD 3.0.2, which we'll
be uploading this week, has a "fix" for this.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Transition of LXD from deb to snap

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 11:46:34AM +0200, Matthias Klose wrote:
> On 20.08.2018 23:13, Stéphane Graber wrote:
> > If you're running downstream software which interacts with LXD, I'd
> > strongly recommend you try switching to the snap, either using the
> > package in that PPA or manually by installing the LXD snap and then
> > running "lxd.migrate".
> 
> there are currently (and have been for a while) failing lxd autopkg tests
> triggered by other packages (see update_excuses).  What's the future of these?
> Short term fixing those please, and long term?

LXD 3.0.2 which we'll be uploading this week has a "fix" for those
errors (effectively shellcheck becoming more verbose).

The empty LXD deb will not contain any autopkgtest so once the
transition to the snap is done, autopkgtest will effectively always be a
no-op.

> Matthias

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Transition of LXD from deb to snap

2018-08-20 Thread Stéphane Graber
Hello,

As some of you may know, we've slowly been pushing LXD users to use the
snap rather than the deb, whenever their environment allows it.

Starting with Ubuntu 18.10, we will be replacing the deb package
traditionally shipped in Ubuntu by the LXD snap.


There are a few pieces to that migration so I'll go into details about
each of them:


== Dependency management ==
LXD currently has the following reverse dependencies:

 - adapt (depends)
 - nova-compute-lxd (depends)
 - autopkgtest (suggests)
 - snapcraft (suggests)

We'll make sure that each of those is capable of handling the LXD snap.

That typically means either using the command line tool directly or, if
using the API, looking at both /var/lib/lxd/unix.socket and
/var/snap/lxd/common/lxd/unix.socket

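As a rough shell sketch of that socket detection (variable name
illustrative):

  for sock in /var/lib/lxd/unix.socket /var/snap/lxd/common/lxd/unix.socket; do
      [ -S "$sock" ] && LXD_SOCKET="$sock" && break
  done
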
We will keep a deb in the archive, which will provide the "lxd" binary
package and act as a shim which installs the snap and proceeds with data
migration if required.


== Data migration ==
The LXD snap ships with a "lxd.migrate" command which handles moving of
all data from the deb path (/var/lib/lxd) over to the snap path
(/var/snap/lxd/common/lxd).

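Done by hand, the equivalent is roughly (assuming a system with snapd
available):

  snap install lxd
  lxd.migrate   # moves /var/lib/lxd over to /var/snap/lxd/common/lxd
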
The package in its preinst will attempt to contact the snap store,
holding the package upgrade until it's reachable, then ask the user
which track they'd like to use, install the LXD snap, wait for it to be
online and automatically run the migration tool.

Should any error happen, the deb content will remain on disk (as all
this is done in preinst). We can then provide individual instructions
for failed migrations and update lxd.migrate to account for any cases
that it couldn't handle.


== Seed management ==
LXD is currently seeded in the server seed, making it included in all
Ubuntu Server installations, including cloud images.

We expect that this will continue to happen and have landed experimental
socket activation support in our edge snap to allow for this (as always
running LXD in all cloud instances is obviously unacceptable).

The main blocker we have right now is that snapd socket activation is
misbehaving on Fedora, making it impractical for us to enable socket
activation in the stable channel.

If we don't have a resolution on the Fedora side within a couple of
weeks, I expect we'll temporarily unseed LXD from Ubuntu so we can move
forward with the rest of the plan, then seed LXD as a snap as soon as
socket activation works properly for all our users.


== Channels and tracks ==
LXD has a track per upstream LTS release as well as a "latest" track.

The current plan is to have Ubuntu LTS releases default to installing
from the most recent LTS track (currently "3.0") while non-LTS Ubuntu
releases should default to installing from the "latest" track.

The debconf prompt will always be shown, so this will only affect the
default selection.


A branch of the stable channel of those tracks will be created and
closed per the policy on seeded snaps (allowing us to push emergency
snaps to those users, bypassing the upstream).

== LXD in the LTSes ==
Nothing is going to change for LXD in existing Ubuntu releases.

We'll keep maintaining those debs in both -updates and -backports until
such time as the Ubuntu release becomes EOL.

This change applies ONLY to Ubuntu 18.10 and later.


== Timeline ==
I was hoping to have all of this done prior to Feature Freeze, but it's
clear that this will not happen.

The current plan is to have the reverse dependencies and remaining
socket activation problems sorted within the next 2 weeks, at which
point we can upload the migration deb to the archive and transition the
seeds to pointing to the snap.

This should still provide us with plenty of time to deal with any
issues that show up, and Ubuntu 18.10 feels like the right time to do
this work so we can be very confident that the 18.04 to 20.04 upgrade
will go smoothly down the line.

== How to help ==
We have a Launchpad bug open to track the reverse dependencies part of
this here:

https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1788040

And have a PPA which contains the current version of the upgrade deb here:

ppa:stgraber/experimental-devirt


If you're running downstream software which interacts with LXD, I'd
strongly recommend you try switching to the snap, either using the
package in that PPA or manually by installing the LXD snap and then
running "lxd.migrate".


Should you have any questions or issues, feel free to respond here or
contact me directly on IRC or by e-mail.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: RFC: baseline requirements for Ubuntu rootfs: xattrs and fscaps

2018-08-05 Thread Stéphane Graber
On Sun, Aug 05, 2018 at 11:18:49AM -0400, Stéphane Graber wrote:
> On Wed, Aug 01, 2018 at 05:58:56PM -0700, Steve Langasek wrote:
> > A recent customer bug report revealed that we have packages in the standard
> > Ubuntu system (mtr-tiny) which are making use of filesystem capabilities, to
> > reduce the need for suid binaries on the system:
> > 
> > $ getcap /usr/bin/mtr-packet 
> > /usr/bin/mtr-packet = cap_net_raw+ep
> > $
> > 
> > The customer bug report arose because today, we are not handling all Ubuntu
> > images in an xattr-safe manner.  E.g., on a freshly-launched cosmic lxd
> > container here:
> > 
> > $ lxc exec caring-calf -- getcap /usr/bin/mtr-packet
> > $
> > 
> > This prevents the software from working as intended by the Debian
> > maintainer; the command will only succeed as root in such an environment,
> > where it is intended to be runnable as a non-root user.
> > 
> > We have previously dealt with such an incompatibility in the iputils package
> > by introducing an Ubuntu delta
> > (https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1302192), restoring
> > use of suid in place of fscaps.  This is suboptimal because:
> > 
> >  - It violates the principle of least privilege; why give processes full
> >root privs if cap_net_raw is all they need?
> >  - It's a game of whack-a-mole.  We fixed iputils in response to bug
> >reports, but the wrong privileges on mtr-packet went unnoticed.  There
> >may be other bugs in the future again caused by failing to preserve
> >xattrs.
> > 
> > I am therefore proposing that we explicitly raise the requirements for
> > Ubuntu root filesystems to always be transported in an xattr-preserving
> > manner.
> > 
> > This will require bugfixes in various places, but ideally on a one-time
> > basis only.  The primary areas of concern are:
> > 
> >  - Where root filesystems are distributed as tarballs, they are not
> >currently created with --xattrs; this will need to be changed.
> >  - Users who are unpacking root tarballs need to take care to pass
> >--xattrs-include=* to tar.
> >  - Users who are backing up or streaming Ubuntu root filesystems with tar or
> >rsync will need to take care to pass non-default xattr-preserving options
> >(tar --xattrs; rsync -X).
> >  - GNU tar's xattrs format is incompatible with other unpack implementations
> >(e.g. libarchive)[1].  Anyone using another unpacker will necessarily
> >end up without fscaps.
> >  - lxd will require some additional work in order for fscaps to work as
> >expected in unprivileged containers[2].
> 
> We've taken care of this part now:
>  - https://github.com/lxc/lxd/pull/4861
>  - https://github.com/lxc/lxd/pull/4868
>  - https://github.com/lxc/lxd/pull/4870
>  - https://github.com/lxc/lxd/pull/4875
> 
> Note that this requires a 4.14 or higher kernel to work as it requires
> support for unprivileged file capabilities.
> 
> For Ubuntu specifically, this particular patchset (written by Serge
> Hallyn) is getting backported to our 4.4 kernel, but that won't be true
> for other distributions.
> 
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1778286

And just noticed that we have another problem on Xenial as our
squashfs-tools there has broken xattr support in unsquashfs.

I filed a bug and will follow-up with a SRU shortly:

https://bugs.launchpad.net/ubuntu/+source/squashfs-tools/+bug/1785499


> > The parts of this that require changes to Ubuntu seem doable within the
> > 18.10 timeframe, and backportable to 18.04 (where the handling of mtr-tiny
> > is also buggy), after which we could drop the Ubuntu delta for iputils.
> > 
> > The parts of this that involve changes to user behavior are less
> > controllable; hence raising visibility on this question in a public forum.
> > 
> > Thoughts?
> > 
> > -- 
> > Steve Langasek   Give me a lever long enough and a Free OS
> > Debian Developer   to set it on, and I can move the world.
> > Ubuntu Developer   https://www.debian.org/
> > slanga...@ubuntu.com         vor...@debian.org
> > 
> > [1] https://github.com/opencontainers/image-spec/issues/725
> > [2] https://github.com/lxc/lxd/issues/4862
> 
> 
> 
> > -- 
> > ubuntu-devel mailing list
> > ubuntu-devel@lists.ubuntu.com
> > Modify settings or unsubscribe at: 
> > https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel
> 
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com



-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: RFC: baseline requirements for Ubuntu rootfs: xattrs and fscaps

2018-08-05 Thread Stéphane Graber
On Wed, Aug 01, 2018 at 05:58:56PM -0700, Steve Langasek wrote:
> A recent customer bug report revealed that we have packages in the standard
> Ubuntu system (mtr-tiny) which are making use of filesystem capabilities, to
> reduce the need for suid binaries on the system:
> 
> $ getcap /usr/bin/mtr-packet 
> /usr/bin/mtr-packet = cap_net_raw+ep
> $
> 
> The customer bug report arose because today, we are not handling all Ubuntu
> images in an xattr-safe manner.  E.g., on a freshly-launched cosmic lxd
> container here:
> 
> $ lxc exec caring-calf -- getcap /usr/bin/mtr-packet
> $
> 
> This prevents the software from working as intended by the Debian
> maintainer; the command will only succeed as root in such an environment,
> where it is intended to be runnable as a non-root user.
> 
> We have previously dealt with such an incompatibility in the iputils package
> by introducing an Ubuntu delta
> (https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1302192), restoring
> use of suid in place of fscaps.  This is suboptimal because:
> 
>  - It violates the principle of least privilege; why give processes full
>root privs if cap_net_raw is all they need?
>  - It's a game of whack-a-mole.  We fixed iputils in response to bug
>reports, but the wrong privileges on mtr-packet went unnoticed.  There
>may be other bugs in the future again caused by failing to preserve
>xattrs.
> 
> I am therefore proposing that we explicitly raise the requirements for
> Ubuntu root filesystems to always be transported in an xattr-preserving
> manner.
> 
> This will require bugfixes in various places, but ideally on a one-time
> basis only.  The primary areas of concern are:
> 
>  - Where root filesystems are distributed as tarballs, they are not
>currently created with --xattrs; this will need to be changed.
>  - Users who are unpacking root tarballs need to take care to pass
>--xattrs-include=* to tar.
>  - Users who are backing up or streaming Ubuntu root filesystems with tar or
>rsync will need to take care to pass non-default xattr-preserving options
>(tar --xattrs; rsync -X).
>  - GNU tar's xattrs format is incompatible with other unpack implementations
>(e.g. libarchive)[1].  Anyone using another unpacker will necessarily
>end up without fscaps.
>  - lxd will require some additional work in order for fscaps to work as
>expected in unprivileged containers[2].

We've taken care of this part now:
 - https://github.com/lxc/lxd/pull/4861
 - https://github.com/lxc/lxd/pull/4868
 - https://github.com/lxc/lxd/pull/4870
 - https://github.com/lxc/lxd/pull/4875

Note that this requires a 4.14 or higher kernel to work as it requires
support for unprivileged file capabilities.

For Ubuntu specifically, this particular patchset (written by Serge
Hallyn) is getting backported to our 4.4 kernel, but that won't be true
for other distributions.

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1778286

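As a concrete illustration of the xattr-preserving transport quoted
above (a sketch, run as root; exact behavior varies with tar/rsync
versions):

  # create a rootfs tarball keeping xattrs/fscaps (GNU tar)
  tar --xattrs --xattrs-include='*' -cpf rootfs.tar -C rootfs .

  # unpack it, again keeping xattrs
  tar --xattrs --xattrs-include='*' -xpf rootfs.tar -C /target

  # rsync equivalent: -X preserves xattrs
  rsync -aX rootfs/ /target/

  # verify that the capability survived
  getcap /target/usr/bin/mtr-packet
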
> 
> The parts of this that require changes to Ubuntu seem doable within the
> 18.10 timeframe, and backportable to 18.04 (where the handling of mtr-tiny
> is also buggy), after which we could drop the Ubuntu delta for iputils.
> 
> The parts of this that involve changes to user behavior are less
> controllable; hence raising visibility on this question in a public forum.
> 
> Thoughts?
> 
> -- 
> Steve Langasek   Give me a lever long enough and a Free OS
> Debian Developer   to set it on, and I can move the world.
> Ubuntu Developer   https://www.debian.org/
> slanga...@ubuntu.com vor...@debian.org
> 
> [1] https://github.com/opencontainers/image-spec/issues/725
> [2] https://github.com/lxc/lxd/issues/4862



> -- 
> ubuntu-devel mailing list
> ubuntu-devel@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: autopkgtest-build-lxd failing with bionic

2018-02-15 Thread Stéphane Graber
On Thu, Feb 15, 2018 at 04:10:01PM +0100, Martin Pitt wrote:
> Hello Timo,
> 
> Timo Aaltonen [2018-02-15 16:50 +0200]:
> > On 14.02.2018 22:03, Dimitri John Ledkov wrote:
> > > Hi,
> > > 
> > > I am on bionic and managed to build bionic container for testing using:
> > > 
> > > $ autopkgtest-build-lxd ubuntu-daily:bionic/amd64
> > > 
> > > Note this uses Ubuntu Foundations provided container as the base,
> > > rather than the third-party image that you are using from "images"
> > > remote.
> > > 
> > > Why are you using images: remote?
> > 
> > Because that's what the manpage suggests :)
> 
> Right, and quite deliberately. At least back in "my days", the ubuntu: and
> ubuntu-daily: images had a lot of fat in them which made them both
> unnecessarily slow (extra download time, requires more RAM/disk, etc.) and 
> also
> undesirable for test correctness, as having all of the unnecessary bits
> preinstalled easily hides missing dependencies.
> 
> The latter can be alleviated by purging stuff of course, and that's what
> happens for the cloud VM images in OpenStack:
> 
>   
> https://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/tree/setup-commands/setup-testbed#n242
> 
> But this takes even more time, and so far just hasn't been necessary as the
> images: ones were just right - they contain exactly what a generic container
> image is supposed to contain and are pleasantly small and fast.
> 
> > > Is the failure reproducible with ubuntu-daily:bionic?
> > > 
> > > If you can build images with ubuntu-daily:bionic, then you need to
> > > contact and file an issue with images: remote provider.
> > 
> > ubuntu-daily: works, images: fails for artful and bionic while xenial
> > works, and the image server is:
> > 
> > https://images.linuxcontainers.org/
> 
> These are being advertised and used a lot, so maybe Stephane's LXD team can
> help with fixing these? Them having no network at all sounds like a grave bug
> which should be fixed either way.
> 
> That said, it could of course be that the setup script just needs some
> adjustments for the netplan changes:
> https://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/tree/setup-commands/setup-testbed
> As this doesn't know about netplan at all, just ifupdown.
> 
> Martin

stgraber@castiana:~$ lxc launch images:ubuntu/bionic/amd64 bionic
Creating bionic
Starting bionic

stgraber@castiana:~$ lxc launch images:ubuntu/artful/amd64 artful
Creating artful
Starting artful

stgraber@castiana:~$ lxc list
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
| artful | RUNNING | 10.204.119.187 (eth0) | 2001:470:b368:4242:216:3eff:fe27:799b (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
| bionic | RUNNING | 10.204.119.248 (eth0) | 2001:470:b368:4242:216:3eff:fe8c:7741 (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+


And confirmed that networking inside both of them works fine here.

I wonder if it's a netplan vs ifupdown thing hitting autopkgtest in this case?

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Unblocking SRU uploader permissions

2017-02-02 Thread Stéphane Graber
self-police?
> 
> * Should we require them to have some wider experience, but less than we
>   might for a more generalist core dev applicant, on the basis that
>   we're bending to unblock the SRUs as we have no other suitable ACL
>   method? In other words, because we don't have an SRU-specific upload
>   ACL, should we lower the bar to make progress?

We can create an ACL which allows archive upload to all released series.

That'd be identical to the "ubuntu-sru" ACL but for upload rather than
queue admin.

> * Should we maintain the bar and require potential SRU uploaders to
>   obtain a more wide breadth of experience appropriate for a core dev,
>   and only then grant them core dev to unblock their SRU uploads?
>   Perhaps requiring them to go through MOTU and make substantial
>   contributions to universe, or through other more limited packagesets
>   first?
> 
> * Based on the tooling the DMB uses to grant upload access, it seems to
>   me that Launchpad may be able to allow the DMB to adjust ACLs to
>   permit upload to stable releases but not the development release.
>   Would this work? It wouldn't help with the initial development release
>   upload often required in an SRU, but would help with the subsequent
>   SRU uploads, so would at least relieve some of the burden.

Right, and that seems to me like the way to go to deal with those applications.

> I'd appreciate feedback from the wider Ubuntu developer community on
> what the DMB should do here.
> 
> Thanks,
> 
> Robie (acting for himself as a single DMB member)

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Using the dummy0 interface for a local-only service to be broadcasted by Avahi

2016-12-29 Thread Stéphane Graber
On Thu, Dec 29, 2016 at 04:38:05PM -0200, Till Kamppeter wrote:
> On 12/29/2016 04:31 PM, Stéphane Graber wrote:
> > On Thu, Dec 29, 2016 at 04:14:52PM -0200, Till Kamppeter wrote:
> > > On 12/29/2016 02:37 PM, Stéphane Graber wrote:
> > > > > How can I assign a different name to a dummy interface? Can I freely 
> > > > > choose
> > > > > a name somehow, for example "ippusbxd"? Or have I to use "dummy1", 
> > > > > "dummy2",
> > > > > ... (loading the dummy kernel module with an option to support more 
> > > > > than one
> > > > > interface)?
> > > > 
> > > > root@castiana:~# ip link add ippusbxd type dummy
> > > > root@castiana:~# ip link set ippusbxd up
> > > > root@castiana:~# ifconfig ippusbxd
> > > > ippusbxd: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1500
> > > > inet6 fe80::3004:2dff:feb6:b5c7  prefixlen 64  scopeid 0x20<link>
> > > > ether 32:04:2d:b6:b5:c7  txqueuelen 1000  (Ethernet)
> > > > RX packets 0  bytes 0 (0.0 B)
> > > > RX errors 0  dropped 0  overruns 0  frame 0
> > > > TX packets 2  bytes 140 (140.0 B)
> > > > TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> > > > 
> > > > 
> > > > Which gets you your own dummy device with its IPv6 link-local address.
> > > 
> > > Thank you very much. I copied and pasted the commands and got an ifconfig
> > > output similar to yours, only with different IP and MAC addresses and
> > > different values in the statistics.
> > > 
> > > Then I tried to bind to the IPv6 IP address of this entry, on port 6 
> > > and
> > > this did not work.
> > > 
> > > Do I have to create an additional IP address? If yes, how? Do I have to 
> > > run
> > > additional commands (route?)? Which ones?
> > > 
> > >Till
> > 
> > Link-local addresses are slightly special in that they are indeed link 
> > local.
> > 
> > So you can't bind fe80::3004:2dff:feb6:b5c7 as you could in theory have
> > the same address on multiple interfaces. Instead, you need to tell
> > bind() what interface to bind on. This is typically indicated as
> > fe80::3004:2dff:feb6:b5c7%ippusbxd.
> > 
> > 
> > For example:
> > 
> > stgraber@castiana:~$ nc -l fe80::3004:2dff:feb6:b5c7 1234
> > nc: Invalid argument
> > 
> > ^ Fails because the kernel doesn't know what interface you want.
> > 
> > stgraber@castiana:~$ nc -l fe80::3004:2dff:feb6:b5c7%ippusbxd 1234
> > 
> > ^ Works
> > 
> 
> Thank you. I want to bind with the bind(2) function in C. How do I supply
> the interface here or what function do I need to call instead?
> 
>Till

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <net/if.h>

int main(int argc, char *argv[])
{
   int s;
   struct sockaddr_in6 ip6;

   // Basic inet6 socket
   s = socket(AF_INET6, SOCK_DGRAM, 0);

   // Initialize the ip6 struct (zeroed so flowinfo etc. are clean)
   memset(&ip6, 0, sizeof(ip6));
   ip6.sin6_family=AF_INET6;
   ip6.sin6_addr=in6addr_any;
   ip6.sin6_port=htons(1234);
   ip6.sin6_scope_id=if_nametoindex("ippusbxd");
   inet_pton(AF_INET6, "fe80::3004:2dff:feb6:b5c7", (void *)&ip6.sin6_addr.s6_addr);

   // Bind
   bind(s, (struct sockaddr *)&ip6, sizeof(struct sockaddr_in6));
}

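To sanity-check a sketch like the above (assuming it is saved as
bindtest.c and the ippusbxd interface from earlier exists):

  gcc -o bindtest bindtest.c
  strace -e bind ./bindtest   # bind(...) = 0 on success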

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Using the dummy0 interface for a local-only service to be broadcasted by Avahi

2016-12-29 Thread Stéphane Graber
On Thu, Dec 29, 2016 at 04:14:52PM -0200, Till Kamppeter wrote:
> On 12/29/2016 02:37 PM, Stéphane Graber wrote:
> > > How can I assign a different name to a dummy interface? Can I freely 
> > > choose
> > > a name somehow, for example "ippusbxd"? Or have I to use "dummy1", 
> > > "dummy2",
> > > ... (loading the dummy kernel module with an option to support more than 
> > > one
> > > interface)?
> > 
> > root@castiana:~# ip link add ippusbxd type dummy
> > root@castiana:~# ip link set ippusbxd up
> > root@castiana:~# ifconfig ippusbxd
> > ippusbxd: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1500
> > inet6 fe80::3004:2dff:feb6:b5c7  prefixlen 64  scopeid 0x20<link>
> > ether 32:04:2d:b6:b5:c7  txqueuelen 1000  (Ethernet)
> > RX packets 0  bytes 0 (0.0 B)
> > RX errors 0  dropped 0  overruns 0  frame 0
> > TX packets 2  bytes 140 (140.0 B)
> > TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> > 
> > 
> > Which gets you your own dummy device with its IPv6 link-local address.
> 
> Thank you very much. I copied and pasted the commands and got an ifconfig
> output similar to yours, only with different IP and MAC addresses and
> different values in the statistics.
> 
> Then I tried to bind to the IPv6 IP address of this entry, on port 6 and
> this did not work.
> 
> Do I have to create an additional IP address? If yes, how? Do I have to run
> additional commands (route?)? Which ones?
> 
>Till

Link-local addresses are slightly special in that they are indeed link local.

So you can't bind fe80::3004:2dff:feb6:b5c7 as you could in theory have
the same address on multiple interfaces. Instead, you need to tell
bind() what interface to bind on. This is typically indicated as
fe80::3004:2dff:feb6:b5c7%ippusbxd.


For example:

stgraber@castiana:~$ nc -l fe80::3004:2dff:feb6:b5c7 1234
nc: Invalid argument

^ Fails because the kernel doesn't know what interface you want.

stgraber@castiana:~$ nc -l fe80::3004:2dff:feb6:b5c7%ippusbxd 1234

^ Works

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Using the dummy0 interface for a local-only service to be broadcasted by Avahi

2016-12-29 Thread Stéphane Graber
On Thu, Dec 29, 2016 at 01:46:30PM -0200, Till Kamppeter wrote:
> On 12/29/2016 01:12 PM, Stéphane Graber wrote:
> > On Thu, Dec 29, 2016 at 01:02:29PM -0200, Till Kamppeter wrote:
> > > Is there no way to dynamically (with checking what is currently in use)
> > > select a small free IPv4 address space? For example in the 10.0.0.0/8 
> > > range
> > > there are probably only some 10.X.Y.0/24 subranges used. If not, which 
> > > IPv6
> > > range is free for such a dummy0 interface? As it is local only and current
> > > Linux supports IPv6 by default it would be no problem to be IPv6-only. It
> > > would also need a host name as IPv6 IP addresses are awkward.
> > 
> > There is no way to do so for IPv4 as even if you check your local
> > interfaces and routing tables, you can't know what subnets are hidden
> > behind your router.
> > 
> 
> Are addresses in the 169.254.0.0/16 not suitable?

It's not suitable because the whole 169.254.0.0/16 subnet is typically
routed to your primary network device. Having a second route for it or a
route for a subset of it on another device would effectively mask part
of it.

> 
> > For IPv6, you can generate a random ULA subnet which is near guaranteed
> > to be unique and conflict free.
> > 
> 
> How does one do this? Which interface will it use, can I Bonjour-broadcast
> it only on the local machine?

ip -6 addr add fd00:xxxx:xxxx:xxxx::1/64 where all the x's are random
values should be fine. There are more officially documented ways to come
up with a 48-bit or 64-bit ULA subnet, mentioned in the various RFCs.

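For instance, a rough way to generate such a prefix from random bytes
(a sketch; RFC 4193 describes the proper algorithm):

  # fd + 40 random bits gives a random ULA /48; use subnet 0 of it
  suffix=$(od -An -N5 -tx1 /dev/urandom | tr -d ' ' | \
           sed 's/^\(..\)\(....\)\(....\)$/\1:\2:\3/')
  ip -6 addr add "fd${suffix}::1/64" dev ippusbxd
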
> 
> > Depending on exactly what you want to do, a link-local IPv6 address may
> > also be a better fit as it then absolutely cannot conflict with
> > anything.
> > 
> 
> Also how does one do this? Which interface will it use, can I
> Bonjour-broadcast it only on the local machine?

Every network interface with IPv6 enabled comes up with one; those are the
fe80::/64 subnets you see on your machine.

The loopback device doesn't have one, but a dummy device would.

> 
> > > > Making avahi work on 'lo' certainly sounds even nicer.
> > > > 
> > > 
> > > Would this be very complicated (would need upstream work on Avahi 
> > > probably)?
> > > It is said that multicast is needed and "lo" does not support multicast. 
> > > Is
> > > that true?
> > 
> > I sure wouldn't recommend using "dummy0". Using a differently named
> > device using the dummy driver would probably be fine though.
> > 
> > The reason to stay away from the "dummy0" name is that it's used in test
> > suites and other networking tools that simply call to "ip link add
> > dummy" and then (and that's the problem), call "ip link del dummy"
> > afterwards.
> > 
> 
> How can I assign a different name to a dummy interface? Can I freely choose
> a name somehow, for example "ippusbxd"? Or have I to use "dummy1", "dummy2",
> ... (loading the dummy kernel module with an option to support more than one
> interface)?

root@castiana:~# ip link add ippusbxd type dummy
root@castiana:~# ip link set ippusbxd up
root@castiana:~# ifconfig ippusbxd
ippusbxd: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1500
inet6 fe80::3004:2dff:feb6:b5c7  prefixlen 64  scopeid 0x20<link>
ether 32:04:2d:b6:b5:c7  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2  bytes 140 (140.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


Which gets you your own dummy device with its IPv6 link-local address.

> 
>Till
> 

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Using the dummy0 interface for a local-only service to be broadcasted by Avahi

2016-12-29 Thread Stéphane Graber
On Thu, Dec 29, 2016 at 01:02:29PM -0200, Till Kamppeter wrote:
> On 12/29/2016 09:42 AM, Martin Pitt wrote:
> > lxc/lxd used to hardcode an IP range like this, and it had to be dropped
> > because it caused conflicts on "real" existing networks. The 10.0.0.0/8 
> > range
> > is reserved and very actively being used for local networks, including
> > Canonical's own VPN, and thus prone to create conflicts. If you need to do
> > something like that, I don't see a way to get away with a static IPv4 
> > address.
> > Maybe IPv6 has some clever address schema that makes conflicts improbable.
> > 
> 
> Is there no way to dynamically (with checking what is currently in use)
> select a small free IPv4 address space? For example in the 10.0.0.0/8 range
> there are probably only some 10.X.Y.0/24 subranges used. If not, which IPv6
> range is free for such a dummy0 interface? As it is local only and current
> Linux supports IPv6 by default it would be no problem to be IPv6-only. It
> would also need a host name as IPv6 IP addresses are awkward.

There is no way to do so for IPv4 as even if you check your local
interfaces and routing tables, you can't know what subnets are hidden
behind your router.

For IPv6, you can generate a random ULA subnet which is near guaranteed
to be unique and conflict free.

Depending on exactly what you want to do, a link-local IPv6 address may
also be a better fit as it then absolutely cannot conflict with
anything.

> 
> > > Does it cause any problems using "dummy0" for a production purpose? Is 
> > > there
> > > any better way? Perhaps even one which would allow me to work with
> > > localhost?
> > 
> > Making avahi work on 'lo' certainly sounds even nicer.
> > 
> 
> Would this be very complicated (would need upstream work on Avahi probably)?
> It is said that multicast is needed and "lo" does not support multicast. Is
> that true?

I sure wouldn't recommend using "dummy0". Using a differently named
device using the dummy driver would probably be fine though.

The reason to stay away from the "dummy0" name is that it's used in test
suites and other networking tools that simply call to "ip link add
dummy" and then (and that's the problem), call "ip link del dummy"
afterwards.

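In other words, the failure mode being avoided looks roughly like this:

  ip link add dummy0 type dummy   # a service claims the generic name
  # ... later, an unrelated test suite cleans up after itself:
  ip link del dummy0              # and silently removes the service's device
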
> 
> > > How should I implement this? Simply run above commands from maintainer
> > > scripts of ippusbxd? Get them run when the first IPP-over-USB printer is
> > > detected via UDEV? Or implementation in network-manager or so?
> > 
> > Creating the interface in postinst is a no-go. It won't survive a reboot 
> > and it
> > can't/must not be done when installing into chroots. It could ship or 
> > generate
> > an ifupdown/netplan/systemd-networkd configuration file, though.
> 
> OK.
> 
> Are there packages which I could take as an example?
> 
> Another thought is an architecture extension to cups-browsed (which I
> already plan for the phone):
> 
> cups-browsed will open a socket (only root can access) to receive commands.
> 
> The client which sends the commands will also be cups-browsed.
> 
> When cups-browsed is started (as root) and there is already a cups-browsed
> running, it will send its command line options through the socket to the
> already running cups-browsed and exit (so that only one daemon instance is
> running at any time).
> 
> When cups-browsed is started (as root) and there is no cups-browsed already
> running it keeps running as the current daemon instance, also applying all
> its command line options.
> 
> On the Ubuntu phone this way the print dialog could tell to cups-browsed to
> create a print queue to a given Bonjour service (printer) which the user has
> selected, instead of cups-browsed creating automatically queues for all
> printers and waking up all these printers.
> 
> For an IPP-over-USB printer UDEV would not directly call the systemd service
> of ippusbxd and then cups-browsed be informed by a Bonjour broadcast from
> ippusbxd, but instead, UDEV calls the systemd service o cups-browsed with
> the info about the USB printer and cups-browsed calls ippusbxd, this way
> knowing the printer's data and the fact that the printer is there and so it
> also creates the queue. With the socket architecture UDEV does not need to
> take care whether cups-browsed is already running.
> 
> WDYT?
> 
> My favorite is clearly that UDEV creates ippusbxd and ippusbxd does a
> local-only Bonjour broadcast so that we get a complete emulation of a
> network printer. This does not require changes in cups-browsed and it allows
> the use of the printer also without cups-browsed.
> 
> So I would very mu

Re: Installation Media and supportability of i386 in 18.04 LTS Re: Ubuntu Desktop on i386

2016-06-29 Thread Stéphane Graber
On Wed, Jun 29, 2016 at 09:33:24PM +, Robert Ancell wrote:
> On Thu, Jun 30, 2016 at 4:41 AM Steve Langasek <steve.langa...@ubuntu.com>
> wrote:
> 
> >
> > On Wed, Jun 29, 2016 at 08:18:54AM +0100, Martin Wimpress wrote:
> > > Excuse the top posting, only have a phone available.
> >
> > > Ubuntu MATE works with a few organisations around the world, one in my
> > own
> > > country, that refurbish donated computers, install them with Ubuntu MATE
> > > and give (or sell them for next to nothing) to schools, disadvantaged
> > > families and people who otherwise wouldn't be able to afford a computer.
> >
> > > I'll get in touch with them and find how, or if, this decision would
> > affect
> > > them.
> >
> > For various reasons (e.g. compatibility with legacy 32-bit apps on the
> > desktop; IoT devices running Ubuntu Core), we should not expect to be
> > dropping i386 as an architecture from the archive before 18.04.
> >
> > Individual flavor communities should therefore feel comfortable making
> > their
> > own decision about whether to continue providing i386 images in 18.04,
> > independent of what we decide for the Ubuntu Desktop and Server flavors -
> > with the caveat that, since the forcing function for dropping these flavors
> > is security supportability of key applications, community flavors should
> > avoid representing to their users a level of support that they are in no
> > position to deliver on.
> >
> >
> It may be worth considering disabling i386 builds for individual packages
> to reduce the support costs. That way the core packages can build for i386
> and be used in IoT systems while the graphical application stacks can stop
> building for i386. There would be some challenges to negotiate overlap with
> the flavours (i.e. MATE might want their stack i386 and Unity not) and a
> practical way to do this (we don't want to have an Ubuntu version of the
> packages that come from Debian with only a change to the "Architecture"
> field in debian/control).
> 
> --Robert

Doing so would prevent desktop users from installing binary 32bit
packages that rely on Ubuntu's multi-arch support.

I'm not sure how much of an issue this still is, given that a bunch of
them finally have 64bit builds now, but it may still be a problem for a
number of commercial software packages.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-06-09 Thread Stéphane Graber
On Thu, Jun 09, 2016 at 10:44:32AM +0200, Martin Pitt wrote:
> Stéphane Graber [2016-06-07 16:47 -0400]:
> > > > And so long as having a common solution can be done without regressions
> > > > and without hand wavy answers like "web browsers will just have to
> > > > switch to some new systemd DBUS API", I don't mind the change.
> > > 
> > > Oh, come on.. NSS is neither a systemd API nor is it "new" in any
> > > sense of the word.. it's decades old, and with not doing it you have a
> > > lot of other breakage/restrictions. But, as Go is apparently our new
> > > hotness, we have to live with it I guess.
> > 
> > I wasn't talking about NSS. I was talking about web browsers or any other
> > piece of software that needs the complete DNS reply and still should use
> > the per-domain DNS servers as setup by Network Manager.
> 
> Well, I *was* talking about NSS.. If browsers do the above, that's
> still incomplete, as every other NSS module is still being
> disregarded. So my sympathies are limited, but I know I can't win a
> war with "Use our default browser Firefox then" :-)
> 
> If a program wants to ignore NSS and reimplement DNS lookups, then
> indeed they either need a local DNS server or do the resolved lookup
> over D-Bus directly.
> 
> > Today everything works because /etc/resolv.conf points to 127.0.1.1
> > which then does the per-domain DNS correctly. So whether you hit it
> > directly or through NSS, you get the same result.
> 
> Still not entirely true, but certainly closer to the actual result
> than without a DNS server.
> 
> > We don't have anything right now that lets you manage such a dnsmasq, so
> > I'd expect some integration code would have to be written for resolvconf
> > to set up and manage that dnsmasq instance.
> 
> Right, and it would also require changes to
> NetworkManager/networkd/ifupdown to actually push changes into that,
> which is quite some amount of work.
> 
> So that, or we wait until resolved actually offers the option to run a
> local DNS server (upstream is planning to do that soon actually, for
> precisely the Chromium case). Then we can put "127.0.0.N:53" into
> /etc/resolv.conf for programs which don't do NSS (Chromium & Go), and
> keep libnss-resolve for everything else, which will then completely
> ignore /etc/resolv.conf.
> 
> In both cases we lose the feature that /etc/resolv.conf shows the
> "real" nameservers, but as we can't have that without Chromium &
> friends doing proper NSS lookups or D-Bus calls, this seems to be
> infeasible at the moment. (But also not that important -- we can still
> put a comment into it that shows the command to see the real DNS
> servers).
> 
> So, how about this as a plan of action:
> 
>  - Revert the NetworkManager change for now, and do start a local
>dnsmasq again, so that /etc/resolv.conf will point back to
>127.0.1.1.
> 
>This will keep the current behaviour on the desktop for chromium's
>sake, but do proper NSS resolution for everything else, including
>on the server.
> 
>The main downside is that we temporarily have one extra process on
>the desktop (resolved *and* dnsmasq).
> 
>  - Once resolved gets capable of listening to 127.0.0.53:53, configure
>it like that and drop NetworkManager's dnsmasq again.
> 
>This should be "weeks" rather than "releases". If not, we can
>always choose to disable resolved around 16.10 beta too, if the
>extra process on desktop is a concern.
> 
> Does that sound acceptable? If not, I'd just revert the whole thing
> for now, as I'm afraid I won't have time to rework
> resolvconf/networkd/NetworkManager etc. to maintain a local dnsmasq
> instance that also applies to servers. (But we could still keep the
> adjusted spec for the future, then).

Yep, that sounds reasonable to me.

> As for security issues, AFAICS the remaining ones are:
> 
>  - Investigate the DNS cache poisoning corner case.  With 16 bit ID
>and 16 bit port randomization this is considerably hard to do, the
>attach scenario only makes sense on a non-sniffable network (wifi
>is always sniffable, and I'm not convinced that ethernet networks
>are completely unsniffable!). So I think we should continue to
>discuss this.
> 
>  - The issue of being able to determine whether another user on the
>same computer visited a particular address. That's not relevant
>for home setups, but it is for universities, companies etc. where a
>lot of people use the same DNS server. OTOH local caching gives you
>a lot of performance increase. 

Re: ANN: DNS resolver changes in yakkety

2016-06-07 Thread Stéphane Graber
On Tue, Jun 07, 2016 at 09:30:49PM +0200, Martin Pitt wrote:
> Stéphane Graber [2016-06-07 12:22 -0400]:
> > So, minus the security problems that have been mentioned so far, I can't
> > think of any major problems with using resolved on servers.
> 
> It would be as you said that some Go programs and python-dns don't use
> NSS and thus won't respect per-domain DNS servers (nor mdns, or
> nss-ldap, nss-winbind, nss-mymachines, etc.)

Right, but that's status quo on the server as there currently is nothing
using per-domain DNS there.

> > We'll definitely want to make sure that it doesn't start in containers
> > by default as that'd significantly increase the process count for no
> > good reason on systems with hundreds or thousands of containers.
> 
> Yes, simple enough to add ConditionVirtualization=!lxc to resolved or
> dnsmasq.
> 
> > And it'd be nice if there was a way to only have resolved run when we
> > have multiple DNS servers as otherwise, with caching disabled (and I
> > suspect we will turn it off), it'd just make things slower.
> 
> It still provides DNSSEC validation, if only in "allow downgrade" mode
> for now (we have to start slow). And there's no problem in caching
> these.
> 
> > Yeah, we certainly don't want dnsmasq running on servers.
> 
> So if we can't use resolved because of Go programs, and shouldn't use
> dnsmasq (or similar DNS servers) on servers, then what *should* we use
> on servers then?

As I said, I don't have a problem with resolved on servers, so long as
we don't degrade our security story (so having caching off and the other
issues investigated).

> 
> > And so long as having a common solution can be done without regressions
> > and without hand wavy answers like "web browsers will just have to
> > switch to some new systemd DBUS API", I don't mind the change.
> 
> Oh, come on.. NSS is neither a systemd API nor is it "new" in any
> sense of the word.. it's decades old, and with not doing it you have a
> lot of other breakage/restrictions. But, as Go is apparently our new
> hotness, we have to live with it I guess.

I wasn't talking about NSS. I was talking about web browsers or any other
piece of software that needs the complete DNS reply and still should use
the per-domain DNS servers as setup by Network Manager.

Today everything works because /etc/resolv.conf points to 127.0.1.1
which then does the per-domain DNS correctly. So whether you hit it
directly or through NSS, you get the same result.

With resolved, if you query things directly through DNS, you'll only hit
the upstream forwarders as defined in /etc/resolv.conf which will not
reflect per-domain DNS servers.
You'd then have to either query it through NSS, which would mean losing
access to the complete DNS reply or query it through some direct DBUS
API which would mean extra work for downstreams.

> > In order to reach your goal without breaking anything, I suspect we'd
> > either need to have resolved offer a local DNS server which can be put
> > ahead of everything else in /etc/resolv.conf, similar to what we do with
> > dnsmasq.
> 
> Turning resolved into a DNS server (which it doesn't aim/want to be)
> would be rather pointless IMHO. Then we can just as well use dnsmasq
> everywhere, I see little point in having two different solutions on
> desktop/server. Are there any objections to that? (It makes using
> networkd significantly harder, but we can patch this in principle)

We don't have anything right now that lets you manage such a dnsmasq, so
I'd expect some integration code would have to be written for resolvconf
to set up and manage that dnsmasq instance.

> Martin


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-06-07 Thread Stéphane Graber
On Tue, Jun 07, 2016 at 10:16:37AM +0200, Martin Pitt wrote:
> Hello all,
> 
> Stéphane Graber [2016-06-06 12:27 -0400]:
> > > There's a thread here on Ubuntu and systemd-resolved:
> > > https://lists.dns-oarc.net/pipermail/dns-operations/2016-June/014964.html
> 
> I skipped over the first bunch of noise, I'm now in the middle of it
> where some actual meat comes in. Ondřej contacted me privately last
> week about it already, and said that he'll follow up with some
> consolidated and objective criticism here. Much appreciated!
> 
> >  - Anything which doesn't use the C library resolving functions, which
> >would include any static binary bundling its own copy of those, will
> >fall back to /etc/resolv.conf and not get split DNS information or the
> >desired fallback mechanism.
> 
> This isn't new, though. Anything not using NSS has behaved differently
> all the time already, such as not doing LLMNR (libnss-mdns4, for
> resolving names in the local network). This mainly affects tools like
> "dig", but these probably have a good reason to do things by
> themselves.
> 
> >This is likely to affect a whole bunch of Go binaries and similar
> >statically built pieces of software. It will also, probably more visibly,
> >affect web browsers which have recently all switched to doing their own
> >DNS resolving.
> 
> Being statically built doesn't mean that these programs can't/don't
> use libc or can't do NSS (it's done via dlopen() anyway, not via
> shared libs). So I take it Go's runtime library doesn't currently do
> this then?

Go can build in two modes depending on version and flags:
 - Completely static mode, in which case there isn't a single line of
   glibc being used in the binary. That means that the resolver code just
   parses /etc/resolv.conf and uses those servers directly.

 - Static for everything except for gethostbyname & getaddrinfo which
   are called through the C library using cgo. That's the default when
   building Go on Ubuntu, with our version of Go anyway.
   Go folks however do have a tendency to ship pre-built binaries and
   all bets are off on those.

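For illustration, the two modes above roughly correspond to these build
invocations (a sketch, not the exact Ubuntu packaging flags):

  CGO_ENABLED=0 go build -o app .   # pure-Go resolver, parses /etc/resolv.conf itself
  go build -o app .                 # default: getaddrinfo through cgo and the C library
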
Then there is the concern brought by Scott that even in languages which
don't encourage shipping static binaries that aren't using C library
functions, there are a number of commonly used libraries which do DNS
lookups directly (python-dns).

Those libraries usually do such lookups directly rather than through NSS
(or through whatever new API systemd came up with) because they want to
get the raw reply. This is for example needed if you want to know the
value of a DNS record even if its DNSSEC validation failed (while also
knowing that it failed).

> >  - This breaks downstream DNSSEC validation. Mail servers and some web
> >browsers require the ability to read the DNSSEC validation result from
> >the DNS reply. Those therefore don't use the libc resolving functions
> >and instead do the DNS request themselves, they'd then fall into the
> >above problem where they'd use /etc/resolv.conf and miss any split DNS
> >or similar configuration done inside resolved.
> 
> This is essentialy the same issue as above indeed.
> 
> >  - Some concerns about it broadcasting queries to all DNS servers rather
> >than just the one it's supposed to use for a given domain. Hopefully
> >this was just mis-configuration and not how resolved actually works, as
> >this would be a pretty big privacy issue.
> 
> Right, this is a current bug, I filed it as
> https://github.com/systemd/systemd/issues/3421 .
> 
> >  - Not having resolved offer a DNS service itself means we can't
> >properly daisy-chain our other DNS/DHCP servers like the dnsmasq
> >instances we use for LXC, LXD and libvirt. That means that the
> >containers and virtual machines will not be getting the same DNS view as
> >the host, being only restricted to hitting the servers in the host
> >/etc/resolv.conf without any awareness of split view DNS.
> > 
> > Unless the above can be fixed somehow, and I very much doubt resolved
> > will grow a DNS server any time soon,
> 
> There actually was talk about it, but I don't think that'd be
> unambiguously good, see below.
> 
> > ... the switch to resolved mostly feels like a regression over the
> > existing resolvconf+dnsmasq setup we've got right now and which in
> > my experience at least, has been working pretty well for us.
> 
> Note that this is *only* a setup on desktops with NetworkManager. On
> servers, cloud instances etc. we haven't set up any local DNS server;
> thus we have bad handling multiple servers with failures, no way to do
> split DNS or DNSSEC. 

Re: ANN: DNS resolver changes in yakkety

2016-06-06 Thread Stéphane Graber
On Mon, Jun 06, 2016 at 05:41:06PM +0100, Dimitri John Ledkov wrote:
> On 6 June 2016 at 17:27, Stéphane Graber <stgra...@ubuntu.com> wrote:
> > On Mon, Jun 06, 2016 at 03:17:51PM +0100, Robie Basak wrote:
> >
> > Unless the above can be fixed somehow, and I very much doubt resolved
> > will grow a DNS server any time soon, the switch to resolved mostly
> > feels like a regression over the existing resolvconf+dnsmasq setup we've
> > got right now and which in my experience at least, has been working
> > pretty well for us.
> >
> 
> I have in the past tried to drop all config files from /etc.
> 
> Dropping /etc/nsswitch.conf is trivial. Apart from libc and shadow
> very little else parses that, so that has minimal breakage so things
> that do call into libc end up doing the right thing.
> Dropping /etc/resolv.conf is hard, and in essence a bunch of stuff
> parses and uses it, for right and wrong reasons (e.g. even when doing
> shared linking with glibc and having it available).
> In those cases, things do go wrong. If there is no split routing,
> everything is fine and the change is mostly harmless. With split
> routing things will break.
> Ideally I would like to still see 127.0.0.1 specified in resolv.conf,
> and I'll be fine with that being implemented on top systemd-resolvd
> api, I don't think that would be hard, however it seems to me like a
> re-implementation of resolvconf+dnsmasq solution.
> 
> I have heard before that it was requested as desirable to have
> plaintext view of the dns config. can somebody point out how can I
> get dns info out of current stable resolvconf+dnsmasq? E.g. what are
> my current dns servers, default, per- interface, etc? I guess i'm a
> dnsmasq n00b.

Sending SIGUSR1 will dump the list of servers in syslog.

Jun  6 12:48:09 castiana dnsmasq[3429]: time 1465231689
Jun  6 12:48:09 castiana dnsmasq[3429]: cache size 0, 0/0 cache insertions re-used unexpired cache entries.
Jun  6 12:48:09 castiana dnsmasq[3429]: queries forwarded 188289, queries answered locally 4888
Jun  6 12:48:09 castiana dnsmasq[3429]: queries for authoritative zones 0
Jun  6 12:48:09 castiana dnsmasq[3429]: server 2607:f2c0:f00f:2720:216:3eff:fe19:6f91#53: queries sent 945, retried or failed 0
Jun  6 12:48:09 castiana dnsmasq[3429]: server 2607:f2c0:f00f:2720:216:3eff:fec3:3e8d#53: queries sent 1183, retried or failed 0


This isn't exactly user friendly though.

In the past, "nm-tool" would dump you a nice view of your network
configuration, including DNS servers and VPNs but that went away with NM 1.x.

Looks like the nmcli way of doing it nowadays is:

root@castiana:~# nmcli dev show | grep DNS
IP6.DNS[1]: 2607:f2c0:f00f:2720:216:3eff:fe19:6f91
IP6.DNS[2]: 2607:f2c0:f00f:2720:216:3eff:fec3:3e8d


I'd definitely be in favor of a change to dnsmasq to write and maintain
its current DNS configuration as comments in its resolvconf file. That
way a good old "cat /etc/resolv.conf" would show that 127.0.1.1 is the
DNS server but the actual configuration of that server would be included
above it as nice user-readable comments.

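Something along these lines (purely illustrative; not an existing
dnsmasq feature today):

  # dnsmasq at 127.0.1.1 currently forwards to:
  #   2607:f2c0:f00f:2720:216:3eff:fe19:6f91
  #   2607:f2c0:f00f:2720:216:3eff:fec3:3e8d
  nameserver 127.0.1.1
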
-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-06-06 Thread Stéphane Graber
On Mon, Jun 06, 2016 at 03:17:51PM +0100, Robie Basak wrote:
> There's a thread here on Ubuntu and systemd-resolved:
> https://lists.dns-oarc.net/pipermail/dns-operations/2016-June/014964.html
> 
> It looks like there is some credible criticism here that is worth
> considering.

They do have some very very good points, my main concerns after reading
the e-mail above are:

 - Anything which doesn't use the C library resolving functions, which
   would include any static binary bundling its own copy of those, will
   fall back to /etc/resolv.conf and not get split DNS information or the
   desired fallback mechanism.

   This is likely to affect a whole bunch of Go binaries and similar
   statically built pieces of software. It will also, probably more visibly,
   affect web browsers which have recently all switched to doing their own
   DNS resolving.

 - This breaks downstream DNSSEC validation. Mail servers and some web
   browsers require the ability to read the DNSSEC validation result from
   the DNS reply. Those therefore don't use the libc resolving functions
   and instead do the DNS request themselves, they'd then fall into the
   above problem where they'd use /etc/resolv.conf and miss any split DNS
   or similar configuration done inside resolved.

 - Some concerns about it broadcasting queries to all DNS servers rather
   than just the one it's supposed to use for a given domain. Hopefully
   this was just mis-configuration and not how resolved actually works, as
   this would be a pretty big privacy issue.

 - Not having resolved offer a DNS service itself means we can't
   properly daisy-chain our other DNS/DHCP servers like the dnsmasq
   instances we use for LXC, LXD and libvirt. That means that the
   containers and virtual machines will not be getting the same DNS view as
   the host, being only restricted to hitting the servers in the host
   /etc/resolv.conf without any awareness of split view DNS.


Unless the above can be fixed somehow, and I very much doubt resolved
will grow a DNS server any time soon, the switch to resolved mostly
feels like a regression over the existing resolvconf+dnsmasq setup we've
got right now and which in my experience at least, has been working
pretty well for us.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-06-02 Thread Stéphane Graber
On Thu, Jun 02, 2016 at 11:26:17PM +0200, Martin Pitt wrote:
> Hello Stéphane,
> 
> to conclude the lose end of this thread..
> 
> Stéphane Graber [2016-05-31 15:52 -0400]:
> > > >  1) Does resolved now support split DNS support?
> > > > That is, can Network Manager instruct it that only *.example.com
> > > > should be sent to the DNS servers provided by a given VPN?
> > > 
> > > resolved has a D-Bus API SetLinkDomains(), similar in spirit to
> > > dnsmasq. However, NM does not yet know about this, and only indirectly
> > > talks to resolved via writing /etc/resolv.conf (again indirectly via
> > > resolvconf). So the functionality on the resolved is there, but we
> > > don't use it yet. This is being tracked in the blueprint.
> > 
> > Ok and does it support configuring this per-domain thing through
> > configuration files?
> > 
> > That's needed so that LXC, LXD, libvirt, ... can ship a file defining a
> > domain for their bridge which is then forwarded to their dnsmasq
> > instance.
> 
> In my other reply I said that resolved doesn't have this kind of
> fine-grained configuration files, as it mostly expects network
> management software to tell it about these things. But what you *can*
> do is to use networkd for this:
> 
>   $ cat /lib/systemd/network/lxdbr0.network
>   [Match]
>   Name=lxdbr0
> 
>   [Network]
>   DNS=127.0.0.1
>   Domains= ~lxd
> 
> With this, networkd won't actually set up the bridge (as there is no
> DCHP=, Address=, corresponding .netdev  etc.), but as soon as it comes
> up via auto-activation of lxd-bridge.service, it will poke that
> information into resolved (via the above SetLinkDomains() call). I
> just tested that in a VM, and it does what you expect.
> 
> The main drawback is that you need to start systemd-networkd.service
> for this (at least as a Requires= of lxd-bridge.service). Now, on
> server/cloud we want to move to networkd anyway, but on a desktop we'd
> usually only have NetworkManager running. So this overhead would
> mainly be justified if you would consider replacing lxd-bridge.service
> by a "full" networkd config, i. e. let the above file actually set up
> and configure the full bridge (But this doesn't go that well with the
> existing /etc/default/lxd-bridge format).
> 
> If using a configuration *file* is not a tight requirement, but you
> only actually care about this working OOTB, then a less intrusive
> approach might be to just add a dbus-send/gdbus/busctl ExecStartPost=
> to lxd-bridge.service that does the SetLinkDomains() call.
> 
> I initially thought about lxd just dropping a resolvconf hook, but
> that doesn't work I think: /etc/resolv.conf has no syntax for
> domain-specific DNS servers, so we need to use a richer API like
> dnsmasq or resolved for these.
> 
> Would either approach work for you, or do we need something different?

We'd probably do it through dbus-send then in the bridge configuration script.
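
For illustration, a rough sketch of such a call (untested; method and
signature as documented for resolved's org.freedesktop.resolve1 D-Bus
interface, with the bridge and domain names being just examples):

  # Sketch: route the "lxd" domain on lxdbr0 to resolved via D-Bus.
  $ ifindex=$(cat /sys/class/net/lxdbr0/ifindex)
  $ busctl call org.freedesktop.resolve1 /org/freedesktop/resolve1 \
      org.freedesktop.resolve1.Manager SetLinkDomains \
      'ia(sb)' "$ifindex" 1 lxd true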

Does the resolved configuration persist? That is, if resolved gets a
package update and is restarted, will it lose the information it knows
about .lxd, .lxc, .libvirt, ...?

> 
> Thanks,
> 
> Martin

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-06-01 Thread Stéphane Graber
On Wed, Jun 01, 2016 at 09:37:51AM +0200, Martin Pitt wrote:
> Stéphane Graber [2016-05-31 17:26 -0400]:
> > So yes, the random transaction ID sure helps, so long as it's actually
> > random and so long as you get a DNS reply reasonably quickly.
> 
> It's reading from /dev/urandom, so pretty much "as random as you can
> get".
> 
> > I think your estimate of a minute isn't anywhere near accurate. One
> > could pretty easily pre-generate all 32768 packets in pcap format, just
> > replace the source address and port once known and then inject the whole
> > pcap onto the network much much faster than that.
> 
> No, that doesn't work, as I explained in my other reply. You only have
> one shot and then have to wait for the cached entry to time out until
> you get another one. I. e. a chance of 1/65536 in say half an hour,
> which is much worse than having a chance every few minutes or seconds
> if the client does not cache DNS queries.

Indeed, my way of doing it wouldn't work, however you can flip it around
and make it work.

Since any packet with the wrong transaction id invalidates the whole
request and because you don't randomize the source port, all I have to
do is have my helper machine flood the target with fake responses for
all the DNS records I care about using transaction ID 0.

Any request to those records which doesn't use transaction ID 0 will
immediately fail, so I just need to query it in a loop on the target
machine until I hit the right transaction ID and there you go, cache
poisoned.


Source port randomization would make things slightly more difficult in
that the target and helper machines would need to communicate which
source port to use. Still very far from impossible though.


So this attack would effectively DoS the record until the bad value
makes it to the cache, ensuring that the local user can only connect to
the poisoned record. And again, with doing a query loop on the target,
you should be able to hit the right transaction id reasonably quickly,
once you do, your fake record can have as big a TTL as you want, so
you'll be good for quite a while.

> 
> > > > 1- a privacy issue. It is trivial for a local user to probe if a site 
> > > > was
> > > > visited by another local user.
> > > 
> > > I assume by looking at the time that it takes to get a response?
> > 
> > stgraber@dakara:~$ dig www.ubuntu.com @172.17.20.30
> > www.ubuntu.com. 600 IN  A   91.189.89.118
> >
> > stgraber@dakara:~$ dig www.ubuntu.com @127.0.0.1
> > www.ubuntu.com. 594 IN  A   91.189.89.118
> >
> > 
> > The first query shows you the TTL for the record from the recursive
> > server used by the local resolver, here we see it's 600 seconds, the
> > second request hits the local cache which returns a TTL of 594 seconds.
> > Meaning that the DNS record was accessed by someone on the machine
> > within the last 6 seconds.
> > 
> > Do that with some sensitive website and you can know when someone on the
> > machine accessed it.
> > 
> > Note that the above wasn't done through resolved.
> 
> Exactly :-) So this is unrelated as dig doesn't use nsswitch and thus
> isn't affected by how we configure resolved.
> 
> However, you can still look at the time it takes. When I try this with
> a site that I haven't called in ages:
> 
>   $ time host www.cnn.com
>   www.cnn.com is an alias for turner.map.fastly.net.
>   turner.map.fastly.net has address 185.31.19.73
>   real    0m0.170s
> 
>   $ time host www.cnn.com
>   www.cnn.com is an alias for turner.map.fastly.net.
>   turner.map.fastly.net has address 185.31.19.73
>   real    0m0.155s
> 
>   $ time host www.cnn.com
>   www.cnn.com is an alias for turner.map.fastly.net.
>   turner.map.fastly.net has address 185.31.19.73
>   real    0m0.155s
> 
> So there is a measurable time difference when a lookup happens (the
> first time) and when it's cached (the two others).
> 
> However: as you demonstrated with dig, you don't necessarily need to
> get this information from the local resolver -- you can look at the
> TTL at the real DNS servers in /etc/resolv.conf with dig.
> 
> So, the current status is:
> 
>  1) I'm not convinced yet [1] that disabling caching helps against
> injecting false responses. resolved implements enough of
> https://tools.ietf.org/html/rfc5452 to make local attacks
> impossible, and remote attacks actually harder than without a
> cache.
> 
>  2) I acknowledge the timing difference between recently visited and
> unvisited addresses, but you can get the same information from
> your real DNS server with more precision.
> 
> T

Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:33:43PM +0200, Martin Pitt wrote:
> Martin Pitt [2016-05-31 22:45 +0200]:
> > Can you please give a sketch how to look up the source port that the
> > resolver uses? That'd be a good piece of information for the upstream
> > bug report too, as it's not at all obvious.
> 
> Look up, and also how to forge it -- as creating a RAW_SOCKET requires
> root privileges, so I suppose it can be done with a normal UDP socket
> somehow?

You can forge the source port very easily by just calling bind() with
the wanted source port.

The difficulty is with forging the source address. You can use any IP
which the machine already has, but you can't typically use anything
else.


That's why such attacks usually involve a second computer (or container
or VM) on which you have root access and which is attached to the same
subnet as the first. It doesn't need to be in the path (so no MITM),
just to be close to the target and have a route to it.

As you have root access to that second computer, you can write a tiny
bit of code that runs on it and will send any raw packet that you need.


So if I was to perform such an attack, I'd have a tiny service on my
laptop which listens on a port for a string containing the IP address of
the DNS server to impersonate and its port.

Then I'd have another piece of software on the machine I want to poison
which does the DNS query for the record I want to poison, immediately
looks up the source port and DNS server IP which was used and sends those
to my laptop. My laptop then immediately replaces those two in a
pre-generated PCAP containing 32768 UDP packets (one for each of the
transaction IDs) and dumps the generated pcap onto the wire.


This entirely avoids having to go through the whole kernel stack to
generate a real UDP connection. You just dump all 32768 packets into the
network card in one shot.

Then even if it takes a while for the target to process them all, you
are almost guaranteed to have them all ahead of the real reply in the
queue and so have a pretty good chance to indeed poison the cache.
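
To give a sense of scale, replaying such a pre-generated capture is a
one-liner on the helper machine (a sketch, assuming the tcpreplay tool
and a pcap already rewritten with the victim's address and source
port):

  # Sketch: blast the pre-forged replies onto the wire at line rate.
  $ sudo tcpreplay --intf1=eth0 --topspeed forged-replies.pcap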


> 
> Thanks!
> 
> Martin

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 10:45:24PM +0200, Martin Pitt wrote:
> Hello Marc,
> 
> Stéphane, Marc, thanks for these!
> 
> Marc Deslauriers [2016-05-31 16:08 -0400]:
> > > I seem to remember it being a timing attack. If you can control when the
> > > initial DNS query happens, which as an unprivileged user you can by just
> > > doing a local DNS query and you know what upstream server is being hit,
> > > which you also know by being able to look at /etc/resolv.conf, then you
> > > can generate fake DNS replies locally (DNS is UDP so the source can
> > > trivially be spoofed) which will arrive before the real reply and end up
> > > in your cache, letting you override any record you want.
> 
> > 2- a cache poisoning attack. Because the resolver is local, source port
> > randomization is futile when a local user can trivially look up which source
> > port was selected when a particular request was made and can respond with a
> > spoofed UDP packet faster than the real dns server. No MITM required.
> 
> ATM resolved uses randomized ID fields (16 bits), which means that you
> need an average of 32.768 tries to get an acceptable answer into
> resolved, which you can probably do in the order of a minute. It does
> not use source port randomization though, which would lift the average
> time to the magnitude of a month.
> 
> Can you please give a sketch how to look up the source port that the
> resolver uses? That'd be a good piece of information for the upstream
> bug report too, as it's not at all obvious.

stgraber@dakara:~$ netstat -nAinet | grep 53
udp        0      0 172.17.0.51:50662       172.17.20.30:53         ESTABLISHED

That gives me the source ip and port for the current DNS query as an
unprivileged user. I can then spoof a DNS reply that matches this.
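
A small polling loop is enough to catch the query while it's in flight
(a sketch; the ss filter syntax is from memory and reports the same
information as the netstat call above):

  # Sketch: wait for a resolver query to be in flight, then print its
  # source address and port (no privileges needed).
  while :; do
      out=$(ss -nu state established '( dport = :53 )' | tail -n +2)
      [ -n "$out" ] && echo "$out" && break
  done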

The rest then depends on how random the transaction ID is, there have
been attacks related to that in the past:
  
https://blogs.technet.microsoft.com/srd/2008/04/09/ms08-020-how-predictable-is-the-dns-transaction-id/

Note that in the case of those attacks, they didn't have nearly as much
information as you would by controlling when the query happens and being
able to check the source port immediately.


So yes, the random transaction ID sure helps, so long as it's actually
random and so long as you get a DNS reply reasonably quickly.

I think your estimate of a minute isn't anywhere near accurate. One
could pretty easily pre-generate all 32768 packets in pcap format, just
replace the source address and port once known and then inject the whole
pcap onto the network much much faster than that.


> > 1- a privacy issue. It is trivial for a local user to probe if a site was
> > visited by another local user.
> 
> I assume by looking at the time that it takes to get a response?

stgraber@dakara:~$ dig www.ubuntu.com @172.17.20.30
; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.ubuntu.com @172.17.20.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24839
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.ubuntu.com.            IN      A

;; ANSWER SECTION:
www.ubuntu.com. 600 IN  A   91.189.89.118

;; Query time: 123 msec
;; SERVER: 172.17.20.30#53(172.17.20.30)
;; WHEN: Tue May 31 17:06:19 EDT 2016
;; MSG SIZE  rcvd: 59

stgraber@dakara:~$ dig www.ubuntu.com @127.0.0.1
; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.ubuntu.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63104
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.ubuntu.com.            IN      A

;; ANSWER SECTION:
www.ubuntu.com. 594 IN  A   91.189.89.118

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue May 31 17:06:25 EDT 2016
;; MSG SIZE  rcvd: 59



The first query shows you the TTL for the record from the recursive
server used by the local resolver, here we see it's 600 seconds, the
second request hits the local cache which returns a TTL of 594 seconds.
Meaning that the DNS record was accessed by someone on the machine
within the last 6 seconds.

Do that with some sensitive website and you can know when someone on the
machine accessed it.



Note that the above wasn't done through resolved.
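
For illustration, the probe is trivially scripted (a sketch; it assumes
the upstream resolver's address is known and that the TTL is the second
field of dig's answer line):

  # Sketch: estimate how recently someone on this machine resolved a
  # name by comparing the upstream TTL with the locally cached TTL.
  name=www.ubuntu.com
  t_up=$(dig +noall +answer "$name" @172.17.20.30 | awk 'NR==1 {print $2}')
  t_loc=$(dig +noall +answer "$name" @127.0.0.1 | awk 'NR==1 {print $2}')
  echo "last local lookup roughly $((t_up - t_loc))s ago"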

> Thanks,
> 
> Martin
> 
> -- 
> Martin Pitt| http://www.piware.de
> Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)
> 
> -- 
> ubuntu-devel mailing list
> ubuntu-devel@lists.ubuntu.com
> Modify settings or unsubscribe at: 
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 09:50:03PM +0200, Martin Pitt wrote:
> Hello Stéphane,
> 
> Stéphane Graber [2016-05-31 11:31 -0400]:
> > One more thing on that point which was just brought up in:
> > https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967
> > 
> > In the past, with dnsmasq on desktop we could ship a .d file which would
> > instruct the system dnsmasq to forward all ".lxc" or ".lxd" queries to
> > the LXC or LXD dnsmasq instance.
> 
> Per-domain DNS servers can't be configured globally via files in
> resolved, only per network device. However, you said in the bug that
> this isn't working on the host anyway, only from within containers.
> And for those lxc sets up its own dnsmasq which the containers use
> as DNS server, so nothing should change in that regard, unless you are
> planning to replace lxc's dnsmasq as well.
> 
> FYI, this can be made to work on the host if lxc/lxd would register
> containers in machined, then libnss-mymachines will resolve those
> names.
> 
> Thanks,
> 
> Martin


We were hoping to ship a dnsmasq.d file this cycle that would make .lxc,
.lxd and .libvirt point to their respective dnsmasq instance.

It's not the case right now which is why it only works from inside
containers, but it's something we were hoping to change.


As far as registering containers with systemd, it's my understanding
that unprivileged processes cannot do that. As the upstream of LXC and
LXD, I'm also not very keen on having to implement yet another
systemd-specific feature anyway when we already run a standard service
(DNS server) which exports that data in a normal, perfectly usable
form.


Anyway, that part isn't particularly critical for me.

Not regressing split DNS support for VPNs and not compromising system
security with unsafe cache settings are way more important.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 09:38:51PM +0200, Martin Pitt wrote:
> Hello Stéphane,
> 
> Stéphane Graber [2016-05-31 11:23 -0400]:
> > So in the past there were two main problems with using resolved, I'd
> > like to confirm both of them have now been taken care of:
> > 
> >  1) Does resolved now support split DNS support?
> > That is, can Network Manager instruct it that only *.example.com
> > should be sent to the DNS servers provided by a given VPN?
> 
> resolved has a D-Bus API SetLinkDomains(), similar in spirit to
> dnsmasq. However, NM does not yet know about this, and only indirectly
> talks to resolved via writing /etc/resolv.conf (again indirectly via
> resolvconf). So the functionality on the resolved is there, but we
> don't use it yet. This is being tracked in the blueprint.

Ok and does it support configuring this per-domain thing through
configuration files?

That's needed so that LXC, LXD, libvirt, ... can ship a file defining a
domain for their bridge which is then forwarded to their dnsmasq
instance.

I don't believe we do this automatically anywhere but it was planned to
do it this cycle for LXD and quite possibly for LXC and libvirt too (so
you can resolve .lxd or .libvirt).

> 
> >  2) Does resolved now maintain a per-uid cache or has caching been
> > disabled entirely?
> 
> No, it uses a global cache.
> 
> > In the past, resolved would use a single shared cache for the whole
> > system, which would allow for local cache poisoning by unprivileged
> > users on the system. That's the reason why the dnsmasq instance we spawn
> > with Network Manager doesn't have caching enabled and that becomes even
> > more critical when we're talking about doing the same change on servers.
> 
> Indeed Tony mentioned this in today's meeting with Mathieu and me --
> this renders most of the efficiency gain of having a local DNS
> resolver moot. Do you have a link to describing the problem? This was
> requested in LP: #903854, but neither that bug nor the referenced
> blueprint explain that.
> 
> How would an unprivileged local user change the cache in resolved? The
> only way how to get a result into resolvconf's cache is through a
> response from the forwarding DNS server. If a user can do that, what
> stops her from doing the same for non-cached lookups?
> 
> The caches certainly need to be dropped whenever the set of
> nameservers *changes*, but this already happens. (But this is required
> for functioning correctly, not necessarily a security guard).
> 
> If you have some pointers to the attack, I'm happy to forward this to
> an upstream issue and discuss it there (or file an issue yourself,
> tha'd be appreciated). If this is an issue, it should be fixed
> upstream, not downstream by disabling caching completely.

I seem to remember it being a timing attack. If you can control when the
initial DNS query happens, which as an unprivileged user you can by just
doing a local DNS query and you know what upstream server is being hit,
which you also know by being able to look at /etc/resolv.conf, then you
can generate fake DNS replies locally (DNS is UDP so the source can
trivially be spoofed) which will arrive before the real reply and end up
in your cache, letting you override any record you want.

For entries that are already cached, you can just query their TTL and
time the attack to begin exactly as the cached record expires.


This would then let an unprivileged user hijack just about any DNS
record unless you have a per-uid cache, in which case they'd only hurt
themselves.



Anyway, you definitely want to talk to the security team :)

> 
> > Additionally, what's the easiest way to undo this change on a server?
> 
> Uninstall libnss-resolve, or systemctl disable systemd-resolved, I'd
> say.
> 
> > I have a few deployments where I run upwards of 4000 containers on a
> > single system. Such systems have a main DNS resolver on the host and all
> > containers talking to it. I'm not too fond of adding an extra 4000
> > processes to such systems.
> 
> I don't actually intend this to be in containers, particularly as
> LXC/LXD already sets up its own dnsmasq on the host. That's why I only
> seeded it to ubuntu-standard, not to minimal. The
> images.linuxcontainers.org images (rightfully) don't have
> ubuntu-standard, so they won't get libnss-resolve and an enabled
> resolved.

But our recommended images are the cloud images and they sure do include
ubuntu-standard:

root@xenial:~# dpkg -l | grep ubuntu-standard
ii  ubuntu-standard  1.361  amd64  The Ubuntu standard system


The images.linuxcontainers.org images are tiny images which some of our
users prefer over the recommended official one

Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:23:01AM -0400, Stéphane Graber wrote:
> On Tue, May 31, 2016 at 11:34:41AM +0200, Martin Pitt wrote:
> > Hello all,
> > 
> > yesterday I landed [1] in Yakkety which changes how DNS resolution
> > works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> > address like 1.2.3.4.
> > 
> > Until now, we used two different approaches for this:
> > 
> >  * On desktops and touch, NetworkManager launched "dnsmasq" configured
> >as effectively a local DNS server which forwards requests to the
> >"real" DNS servers that get picked up usually via DHCP. Thus
> >/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
> >non-obvious to show the real DNS servers. (This was one of the
> >complaints/triggers that led to creating this blueprint).  But
> >dnsmasq does proper rotation and fallback between multiple
> >nameservers, i. e. if one does not respond it uses the next one
> >without long timeouts.

One more thing on that point which was just brought up in:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967

In the past, with dnsmasq on desktop we could ship a .d file which would
instruct the system dnsmasq to forward all ".lxc" or ".lxd" queries to
the LXC or LXD dnsmasq instance.

We were planning on doing so by default this cycle, so it'd be good to
confirm that resolved doesn't regress things in this regard.

> > 
> >  * On servers, cloud images etc. we did not have any local DNS server.
> >Configured DNS servers (via DHCP or static configuration in
> >/etc/network/interfaces) were put into /etc/resolv.conf, and
> >every program (via glibc's builtin resolver) directly contacted
> >those.
> > 
> >This had the major drawback that if the first DNS server does not
> >respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
> >timeout, which makes every network operation awfully slow.
> >Addressing this was the main motivation for the blueprint. On top
> >of that, there was no local caching, thus requesting the same name
> >again would do another lookup.
> > 
> > As of today, we now have one local resolver service for all Ubuntu
> > products; we picked "resolved" as that is small and lightweight,
> > already present (part of the systemd package), does not require D-Bus
> > (unlike dnsmasq), supports DNSSEC, provides transparent fallback to
> > contacting the real DNS servers directly (in case anything goes wrong
> > with the local resolver), and avoids the first issue above that
> > /etc/resolv.conf always shows 127.0.0.1.
> > 
> > Now DNS resolution goes via a new "libnss-resolve" NSS module which
> > talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
> > broken name servers are handled efficiently, and we have local DNS
> > caching. NetworkManager now stops launching a dnsmasq instance.
> > 
> > I've had this running on my laptop for about three weeks now without
> > noticing problems, but there may well be some corner cases where this
> > causes problems. If you encounter a regression that causes DNS names
> > to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
> > with the details.
> > 
> > Thanks,
> > 
> > Martin
> 
> 
> So in the past there were two main problems with using resolved, I'd
> like to confirm both of them have now been taken care of:
> 
>  1) Does resolved now support split DNS support?
> That is, can Network Manager instruct it that only *.example.com
> should be sent to the DNS servers provided by a given VPN?
> 
> That's a very important feature that the current dnsmasq integration
> gives us which amongst other things avoids leaking DNS queries to your
> employer when you're not routing all your traffic to the VPN and also
> greatly reduces the overall network latency when using a VPN with a far
> away endpoint.
> 
> It's also a critical feature for anyone who wants to run multiple
> VPNs in parallel, which NetworkManager 1.2 now supports.
> 
>  2) Does resolved now maintain a per-uid cache or has caching been
> disabled entirely?
> 
> In the past, resolved would use a single shared cache for the whole
> system, which would allow for local cache poisoning by unprivileged
> users on the system. That's the reason why the dnsmasq instance we spawn
> with Network Manager doesn't have caching enabled and that becomes even
> more critical when we're talking about doing

Re: ANN: DNS resolver changes in yakkety

2016-05-31 Thread Stéphane Graber
On Tue, May 31, 2016 at 11:34:41AM +0200, Martin Pitt wrote:
> Hello all,
> 
> yesterday I landed [1] in Yakkety which changes how DNS resolution
> works -- i. e. how names like "www.ubuntu.com" get translated to an IP
> address like 1.2.3.4.
> 
> Until now, we used two different approaches for this:
> 
>  * On desktops and touch, NetworkManager launched "dnsmasq" configured
>as effectively a local DNS server which forwards requests to the
>"real" DNS servers that get picked up usually via DHCP. Thus
>/etc/resolv.conf said "nameserver 127.0.0.1" and it was rather
>non-obvious to show the real DNS servers. (This was one of the
>complaints/triggers that led to creating this blueprint).  But
>dnsmasq does proper rotation and fallback between multiple
>nameservers, i. e. if one does not respond it uses the next one
>without long timeouts.
> 
>  * On servers, cloud images etc. we did not have any local DNS server.
>Configured DNS servers (via DHCP or static configuration in
>/etc/network/interfaces) were put into /etc/resolv.conf, and
>every program (via glibc's builtin resolver) directly contacted
>those.
> 
>This had the major drawback that if the first DNS server does not
>respond (or is slow), then *every* DNS lookup suffers from a ~ 10s
>timeout, which makes every network operation awfully slow.
>Addressing this was the main motivation for the blueprint. On top
>of that, there was no local caching, thus requesting the same name
>again would do another lookup.
> 
> As of today, we now have one local resolver service for all Ubuntu
> products; we picked "resolved" as that is small and lightweight,
> already present (part of the systemd package), does not require D-Bus
> (unlike dnsmasq), supports DNSSEC, provides transparent fallback to
> contacting the real DNS servers directly (in case anything goes wrong
> with the local resolver), and avoids the first issue above that
> /etc/resolv.conf always shows 127.0.0.1.
> 
> Now DNS resolution goes via a new "libnss-resolve" NSS module which
> talks to resolved [2]. /etc/resolv.conf has the "real" nameservers,
> broken name servers are handled efficiently, and we have local DNS
> caching. NetworkManager now stops launching a dnsmasq instance.
> 
> I've had this running on my laptop for about three weeks now without
> noticing problems, but there may well be some corner cases where this
> causes problems. If you encounter a regression that causes DNS names
> to not get resolved correctly, please do "ubuntu-bug libnss-resolve"
> with the details.
> 
> Thanks,
> 
> Martin


So in the past there were two main problems with using resolved, I'd
like to confirm both of them have now been taken care of:

 1) Does resolved now support split DNS support?
That is, can Network Manager instruct it that only *.example.com
should be sent to the DNS servers provided by a given VPN?

That's a very important feature that the current dnsmasq integration
gives us which amongst other things avoids leaking DNS queries to your
employer when you're not routing all your traffic to the VPN and also
greatly reduces the overall network latency when using a VPN with a far
away endpoint.

It's also a critical feature for anyone who wants to run multiple
VPNs in parallel, which NetworkManager 1.2 now supports.

 2) Does resolved now maintain a per-uid cache or has caching been
disabled entirely?

In the past, resolved would use a single shared cache for the whole
system, which would allow for local cache poisoning by unprivileged
users on the system. That's the reason why the dnsmasq instance we spawn
with Network Manager doesn't have caching enabled and that becomes even
more critical when we're talking about doing the same change on servers.

If not done already, I'd very strongly suggest a full audit of
resolved by the security team with a focus on its caching mechanism.


Additionally, what's the easiest way to undo this change on a server?

I have a few deployments where I run upwards of 4000 containers on a
single system. Such systems have a main DNS resolver on the host and all
containers talking to it. I'm not too fond of adding an extra 4000
processes to such systems.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Support status of nginx in Ubuntu 14.04 LTS expired in February 2015?

2016-04-25 Thread Stéphane Graber
On Mon, Apr 25, 2016 at 02:57:22PM -0400, Marc Deslauriers wrote:
> Hi,
> 
> On 2016-04-25 02:45 PM, Andreas Wundsam wrote:
> > Hello Ubuntu Maintainers,
> > 
> > I was surprised to see that ubuntu-support-status shows the support of 
> > package
> > nginx expired in February 2015?
> > 
> > ---
> > $ ubuntu-support-status --show-all
> > []
> > Supported until February 2015 (9m):
> > [...] *nginx nginx-common *
> > ---
> > 
> > apt show shows the package as being in main, but receiving only 9 months of 
> > support:
> > 
> > ---
> > Supported: 9m
> > APT-Sources: http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
> > 
> > 
> > So far, it has been my world view that packages that reside in the main
> > repository would receive the full 5 years of LTS support.
> > 
> > What am I missing?
> > 
> 
> Short answer: don't use ubuntu-support-status, it doesn't work.
> 
> Long answer: ubuntu-support-status is a deprecated tool that used to be used
> when we had a 3y/5y split on desktop and server packages. It returns the
> contents of the "Supported:" tag which hasn't been updated since Ubuntu 10.04
> LTS. I've filed a bug to get it removed:
> https://bugs.launchpad.net/ubuntu/+source/update-manager/+bug/1574670
> 
> Marc.

The Supported: field logic actually got updated on release week for
16.04, so it's absolutely meant to be meaningful.

The code for that logic can be found at:
http://bazaar.launchpad.net/~ubuntu-archive/ubuntu-archive-publishing/trunk/view/head:/scripts/maintenance-check.py

If the logic doesn't match reality, then someone should send a branch to
fix the logic.

Note that it's long been the case that the fact that a package is in
main or in universe doesn't necessarily indicate support length. We have
plenty of packages in universe with support for 3 years or 5 years
during LTS cycles and there are a number of packages that are in main
but aren't part of a product and so aren't supported past the 9 months
mark.
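
For anyone wanting to check what the archive metadata currently claims
for a given package (a sketch):

  # Show the support length advertised in the archive metadata:
  $ apt-cache show nginx | grep -E '^(Package|Version|Supported):'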

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Edubuntu 16.04 and beyond

2016-03-21 Thread Stéphane Graber
Hello,

I'm sending this e-mail on behalf of the current Edubuntu project
leaders, Jonathan Carter and myself.

Jonathan and I have both been involved in Edubuntu for a long time
(almost 10 years for Jonathan and almost 9 for me). We were at first
just contributors, then became council members and, after the council
got dissolved due to lack of candidates, the two project leaders.


A lot changes in that many years and while at the start we both had a
considerable amount of spare time to invest in making Edubuntu great,
even getting paid for it at times, that is simply not the case anymore.

We've both moved on to new projects, with the hope that we would one day
find some time to work on Edubuntu again. That's why we decided to make
Edubuntu LTS-only after the 14.04 release, hoping that over the course
of two years we would find the needed time to make a good Edubuntu 16.04 LTS.

This plan didn't quite work out as we're now a month away from the 16.04
release with little to no work having been done on Edubuntu.

We could of course patch things up a bit, drop the things that don't
work and call it good enough. But we don't think that would be fair to
our users who are expecting a well thought through distribution where
all details have been taken care of.


That's why I'm announcing today that Edubuntu will NOT be releasing a
16.04 LTS version. Instead, Jonathan and I will focus on ongoing support
of Edubuntu 14.04 LTS until it goes EOL in April 2019.


That's not to say that Edubuntu is dead, at least not yet.

While Jonathan and I will solely focus on fulfilling our promise of
support for Edubuntu 14.04 LTS, new contributors are absolutely welcome
to take over the Edubuntu project and shape it to their liking.

The two of us will be happy to sponsor any Edubuntu related uploads,
will help new contributors get Edubuntu membership and then hold
elections to set up a new Edubuntu Council which would finally take
the whole project over from us.


Should none of that happen by the time Ubuntu 17.10 is released,
Jonathan and I will ask the Technical Board to revoke Edubuntu's status
as an official flavour, and we will remove any leftover packages from
the archive, our seeds and any cdimage build integration, effectively
removing Edubuntu from the Ubuntu release process.



While a bit late as far as announcing this, I think our plan fulfills
the "Step down considerately" clause of the Ubuntu Code of Conduct,
allowing for new contributors to pick things where we left them or to
come up with a completely new vision if they prefer.



It's been a fun ride for the two of us and we very much hope that this
won't be the end of Edubuntu but instead a new beginning for this great
Ubuntu flavour!

Sincerely,

The retiring Edubuntu project leaders, Jonathan Carter and Stéphane Graber.


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Vivid Vervet (15.04) Final Freeze

2015-04-16 Thread Stéphane Graber
Hello everyone,

So as of a bit over an hour ago, we've entered the Final Freeze in
preparation for the release of Ubuntu 15.04.


As usual, this means that you should refrain from uploading any seeded
package which doesn't contain a release critical fix. If you're unsure
about whether you should be uploading a package or not, please get in
touch with the release team in #ubuntu-release.

Unseeded packages will still get auto-accepted until the bot is turned
off shortly before release.


As for images, we're expecting the first run of candidate images to
happen late tomorrow before most of us leave for a weekend of travel.


On behalf of the Ubuntu Release Team,

Stéphane Graber

-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Re: Please retire extras.ubuntu.com

2015-01-11 Thread Stéphane Graber
On Sun, Jan 11, 2015 at 06:26:54PM +, Dimitri John Ledkov wrote:
 At the moment do-release-upgrade -d from utopic -> vivid is failing
 because extras.ubuntu.com repository does not exist for vivid, it's
 failing to fetch and thus upgrade is aborted because of possible
 network problems.
 
 Can we please stop adding extras.ubuntu.com by default? Not consider
 its absence as failure to upgrade Ubuntu? Have it in apt.sources.d/*
 instead of apt.sources?
 
 Quantal appears to be the last release that had extras.ubuntu.com.
 
 Unless there are any objections I'll remove extras.ubuntu.com from new
 installations and will seek SRU to trusty and up to drop
 extras.ubuntu.com from the apt.sources.
 
 -- 
 Regards,
 
 Dimitri.

This was discussed by the Technical Board earlier this cycle and we
indeed agreed to retire extras.ubuntu.com.

I believe I've got the action to do the actual changes to get it off new
installations and upgrades to vivid but have been pretty busy with other
things so far, so any help is very much welcome!

For the record, the plan is to remove it from the default sources.list,
from software-properties and from any sources.list template that we
generate at installation time. Then do a change to the upgrader to
remove it for people upgrading to vivid.

SRUs would be nice but aren't strictly needed (other than what's needed
for the upgrader) as extras.ubuntu.com will remain online until trusty
goes EOL in April 2019 at which point we'll kill off the web server entirely.
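
For anyone wanting to drop it manually in the meantime, it's just a
matter of removing the entry and refreshing (a sketch; whether a
sources.list.d fragment exists on a given system, and its name, will
vary):

  # Sketch: drop extras.ubuntu.com from an existing system.
  $ sudo sed -i '/extras\.ubuntu\.com/d' /etc/apt/sources.list
  $ sudo rm -f /etc/apt/sources.list.d/extras.list  # name is hypothetical
  $ sudo apt-get update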

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Considering change of Architecture: all builders for next release cycle

2014-10-03 Thread Stéphane Graber
On Fri, Oct 03, 2014 at 02:42:06PM +0100, Colin Watson wrote:
 Hi,
 
 We've always built Architecture: all packages as part of our i386
 builds.  This is beginning to look a bit outdated.  A few packages have
 had difficulty building their architecture-independent components on
 amd64 (usually due to address space limits, I think).  Debian is working
 on Architecture: all autobuilding - it's traditionally been done by the
 developer, but source-only uploads require autobuilding - and my
 understanding is that this is likely to be done on amd64 buildds.
 
 Building these on amd64 would make no difference to our capacity - all
 our amd64 and i386 builders are shared nowadays anyway - but it's more
 forward-looking and might help a few more packages build cleanly.
 
 Launchpad only lets us set this when initialising a new series, so we're
 coming close to a decision point.  What would people think about
 switching this to amd64 when we initialise the 15.04 series?

+1

 
 Thanks,
 
 -- 
 Colin Watson   [cjwat...@ubuntu.com]
 
 -- 
 ubuntu-devel mailing list
 ubuntu-devel@lists.ubuntu.com
 Modify settings or unsubscribe at: 
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Point of reviews

2014-05-23 Thread Stéphane Graber
On Fri, May 23, 2014 at 08:14:57PM +0400, Dmitry Shachnev wrote:
 On Fri, May 23, 2014 at 8:01 PM, Scott Kitterman ubu...@kitterman.com wrote:
  Particularly since the list of people that can upload to the relevant PPAs 
  is
  not constrained to Ubuntu developers.
 
 No, I meant: is it possible to bypass the queue with only relevant
 PPAs or with any PPA?

To skip binNEW entirely, you need a devirt PPA (building on the distro
builders instead of the PPA builders) and have all architectures
enabled. Otherwise the binary packages will get rebuilt post-copy and
will hit the queue at that point.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: [Ubuntu-phone] Image build infrastructure revamp status

2014-05-11 Thread Stéphane Graber
 (16, 19 and 20th of May) so if you have cdimage setup by
then, it's possible I can help setting things up for system-image
builds, if you need more time, then it's just a matter of sitting
together with Paul in Malta and getting the remaining bits sorted out (I
expect we'll need some minor tweaks to the code but nothing too scary).

 
 Cheers,
 
 -- 
 Colin Watson   [cjwat...@ubuntu.com]
 
 -- 
 Mailing list: https://launchpad.net/~ubuntu-phone
 Post to : ubuntu-ph...@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~ubuntu-phone
 More help   : https://help.launchpad.net/ListHelp

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: remove i386 from foreign arch on server install?

2014-02-15 Thread Stéphane Graber
On Sat, Feb 15, 2014 at 02:56:13PM +, Colin Watson wrote:
 On Fri, Feb 14, 2014 at 02:19:10PM +, Robie Basak wrote:
  On Fri, Feb 14, 2014 at 09:03:17AM -0500, Scott Moser wrote:
   Fair.  However, the 40M of space *is* permanent waste.
  
  In addition, we're promoting doing things in containers more and more -
  particularly in server deployments. The 40M gets multiplied by the
  number of containers you have on a system. And currently, we end up
  fetching this for each container, too.
 
 These are separate problems; there's no intrinsic reason why the
 defaults for a container should have to be the same as the defaults on a
 bare-metal installation.  In fact, given how LXC templates work, if they
 are the same it's because somebody made an explicit choice to that
 effect.

It's indeed an explicit choice. Since 12.04, we've tried very hard to
keep our templates as close to a standard install as possible.

Someone should be able to copy an existing Ubuntu system into a
container or copy a container rootfs onto a real machine and both should
work exactly as expected (well, except for you needing to install a
kernel and bootloader in the second case, obviously).

We expect anyone following software installation instructions to be able
to use those identically on a standard install or in a container.
Dropping i386 by default in containers would break that.

Note also that a lot of users are using containers specifically to
contain proprietary software they don't trust and those are very often
i386 multi-arch packages, so the i386-on-amd64 use case in containers
is a fairly common one.


(If our main concern here is JuJu and the way it deploys services, it may
be worth having JuJu just customize its image using cloud-init userdata
or similar to turn off i386 if it won't need it, but I'd be opposed to
doing this by default unless it's a project wide change.)
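
(For reference, opting out on a given system is already trivial; a
sketch, noting that dpkg will refuse to drop the architecture while any
i386 packages are still installed:)

  # Sketch: check for, then drop, the i386 foreign architecture.
  $ dpkg --print-foreign-architectures
  $ sudo dpkg --remove-architecture i386
  $ sudo apt-get update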

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: [RFC] 12.04.5

2014-02-07 Thread Stéphane Graber
On Fri, Feb 07, 2014 at 09:20:06AM -0700, Adam Conrad wrote:
 On Fri, Feb 07, 2014 at 08:00:12AM -0800, Leann Ogasawara wrote:
  
  With 12.04.4 having just released, I wanted to propose the idea of having a
  12.04.5 point release for Precise.
 
 FWIW, I think the engineering burden for doing this is worth the trade
 off for it being The Right Thing To Do.

+1

 ... Adam

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: [RFC] 12.04.5

2014-02-07 Thread Stéphane Graber
On Fri, Feb 07, 2014 at 05:24:23PM -0800, Steve Langasek wrote:
 On Fri, Feb 07, 2014 at 08:00:12AM -0800, Leann Ogasawara wrote:
  With 12.04.4 having just released, I wanted to propose the idea of having a
  12.04.5 point release for Precise.
 
  As many are aware, recent 12.04.x point releases have shipped with a newer
  kernel and X stack by default for hardware enablement purposes.
   Maintainers of these enablement stacks have agreed to support these until
  a Trusty based enablement stack is supported in Precise.  Once a Trusty
  enablement stack is supported, all previous enablement stacks would EOL and
  be asked to migrate to the final Trusty based enablement stack which would
  continue to be supported for the remaining life of Precise.
 
  Currently, 12.04.4 is our final point release for Precise.  12.04.4 shipped
  with a Saucy enablement stack by default.  This Saucy enablement stack in
  Precise will eventually EOL in favor of the Trusty enablement stack.  Once
  that happens, our final point release for Precise will be delivering an
  EOL'd enablement stack.  This seems unfortunate and inappropriate.  I would
  like to propose having a 5th point release for Precise which would deliver
  the Trusty enablement stack for Precise.
 
  Providing a 12.04.5 point release will add no additional maintenance burden
  upon teams supporting enablement stacks in Precise.  It would require some
  extra effort on part of the Canonical Foundations Team as well as the
  Ubuntu Release Team to spin up an additional set of images and testing
  coordination etc.  However, I informally discussed this with a few members
  of each of those teams and the tentative agreement was that 12.04.5 was a
  reasonable request which could be accommodated.  Collectively we could find
  no compelling reason to not provide 12.04.5.  We also discussed that a
  12.04.5 release should be optional for the Flavors to participate in.
   Additionally, we would want to purposely avoid clashing the 14.04.1 and
  12.04.5 release dates and would suggest releasing 14.04.1 first and 12.04.5
  after (exact date TBD).
 
  What are other's thoughts here?  Does anyone have a compelling reason for
  not providing a 12.04.5 point release?
 
 For the record, this has the Foundations Team's support as well (we've
 already discussed the resourcing considerations).  So unless someone knows
 of a reason why we *shouldn't* go ahead with this, I think the main question
 here is whether the flavors want to participate.

Speaking with my Edubuntu flavor lead hat on, we'd be happy to participate.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Include samba and libpam-smbpass by default in Ubuntu

2014-01-05 Thread Stéphane Graber
On Sun, Jan 05, 2014 at 03:08:32AM +, Dimitri John Ledkov wrote:
 On 27 December 2013 00:59, pabloalmeida...@gmail.com
 pabloalmeida...@gmail.com wrote:
  As suggested by a triager of a bug I reported on this issue, I'm bringing
  this idea for discussion on this list. If this is the wrong place, feel free
  to point me to the right one.
 
  When one tries to share a folder in the network via Nautilus for the
  first time, a dialog asks for installation of two packages and then a
  session restart is required. This used to make sense when we had only
  the space in a CD, but now that Ubuntu doesn't fit in a CD anymore, it
  makes sense to include these packages by default, so that no extra steps
  or reboots are required to complete this task. Besides, this would
  resolve a bug in Ubuntu 13.10 which prevents the installation of libpam-
  smbpass via the GUI offered by Nautilus.
 
 
 This is a good enough mailing list, but I guess holidays are also
 affecting the response here. I haven't been in Ubuntu long enough to
 know if we used to ever have samba in the default install or not. I
 think we'd still want to fix the bug of installation, as users may not
 have it installed (e.g. if removed, or upgrading from previous
 versions of ubuntu under some conditions). And forwarding this email
 to a wider ubuntu-devel mailing list.
 
 In very busy networks, e.g. a public wifi cafe, I think it will be
 undesirable to have samba installed and enabled out of the box, since
 it would be easy to leak / share things beyond what one intended to
 (share on my home wifi, not cafe wifi), or to otherwise incur a
 performance impact.
 
 Also the unlimited cd size for desktop is actually not entirely
 true once again. We are indeed beyond a 700MB iso, as the media factor
 was no longer a relevant constraint. On the other hand we are still
 limited in what ends up in the default installs due to ubuntu-touch and
 from an iso size e.g. 900MB, as to what can be flashed on the devices
 and we have started to aim for ~200-300MB highly compressed base
 system tarballs, or use incremental system-image updates for ubuntu on
 touch devices.
 
 Do we or do we not want samba in the default install?
 
 -- 
 Regards,
 
 Dimitri.

Ubuntu has a no open port by default policy at least for the Desktop
installation. If you look at a default Ubuntu Desktop system the only
exceptions you should see to that rule are the DHCP client (which needs
to listen on udp/68) and avahi-daemon (which needs to listen on
udp/5353).

So having samba installed and running by default isn't an option and
would be a potential security risk for millions of systems which do not
need the service at all anyway.
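
That policy is easy to verify on a fresh desktop install (a sketch
using ss; expect essentially empty output apart from those two
exceptions):

  # Sketch: list listening TCP sockets and open UDP sockets.
  $ sudo ss -tlnp   # TCP listeners: expect none by default
  $ sudo ss -ulnp   # UDP: expect dhclient (udp/68) and avahi (udp/5353)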

I think having nautilus prompt the user for those packages to be
installed is perfectly reasonable; having to restart the session,
however, seems a bit odd to me and shouldn't be a requirement.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Minutes for the Developer Membership Board meeting (2013/12/16)

2013-12-16 Thread Stéphane Graber
 * Chair: stgraber
 * Present: Barry Warsaw, Benjamin Drung, Iain Lane and Stefano Rivera
 * Log: 
http://ubottu.com/meetingology/logs/ubuntu-meeting/2013/ubuntu-meeting.2013-12-16-15.17.html

=== Review Previous Action Items ===

 * The December 30th meeting will be canceled, next meeting to be on the 13th 
of January.
 * Rest of the actions were carried to next meeting.

=== Dmitry Shachnev for PPU of: python-markdown, python-keyring, 
python-secretstorage, python-docutils, python-qt4, sip4, sphinx and the QT5 
packageset ===

 * https://wiki.ubuntu.com/DmitryShachnev/PPUApplication
 * Approved with 4 votes for, 0 vote against and 0 abstention.

=== Graham Inggs for MOTU ===

 * https://wiki.ubuntu.com/GrahamInggs/MOTUApplication
 * Approved with 5 votes for, 0 vote against and 0 abstention.

=== Any other business ===
 * Chair for the next meeting will be ScottK.
 * December 30th meeting skipped due to end of year holidays.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Re: Giving developers access to requeue package imports [Was: Ubuntu Platform developers BOF session?]

2013-11-13 Thread Stéphane Graber
On Wed, Nov 13, 2013 at 03:33:59PM -0500, Barry Warsaw wrote:
 On Nov 13, 2013, at 11:32 AM, Steve Langasek wrote:
 
 But I think it would be more interesting to get a permanent fix for this
 bug:
 
   https://bugs.launchpad.net/udd/+bug/714622
 
 This accounts for the problem people have mentioned, that core packages are
 much more likely to have failed imports.  The importer badly needs fixed to
 not throw up its hands when the revision ID of a tag has changed; it should
 only care about this when the branch contents are wrong.
 
 This single bug accounts for just under half of all importer failures, and
 is a failure scenario that the importer *could*, with sufficient smarts,
 resolve automatically.
 
 This may be controversial, but (except for trying to fix error conditions), I
 think we should disallow all developer pushes to UDD branches and only let the
 importer write to them.  It's simply too error prone otherwise, and there's no
 good reason for it.
 
 One possible reason for developers to push to UDD branches is to share the
 code with other people, or to avoid the lag in importer runs.  Of course the
 former can be easily handled by pushing to a personal branch.  The latter?  Oh
 well, I can live with that for error-free branches. ;)
 
 A long time ago I decided never to push UDD branches and always let the
 importer update them.  I've never regretted that or encountered problems with
 that discipline.
 
 Cheers,
 -Barry

Hmm, so if we can't push planned changes to UDD branches and have to use a
separate user-owned branch for that, then what's the use of the UDD
branch?

It sounds to me like it'd then be much easier for me to just maintain my
own branch on the side and upload from there, ignoring UDD entirely,
which surely isn't what we want there.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Giving developers access to requeue package imports [Was: Ubuntu Platform developers BOF session?]

2013-11-13 Thread Stéphane Graber
On Wed, Nov 13, 2013 at 04:19:19PM -0500, Barry Warsaw wrote:
 On Nov 13, 2013, at 03:43 PM, Stéphane Graber wrote:
 
  Hmm, so if we can't push planned changes to UDD branches and have to use a
 separate user-owned branch for that, then what's the use of the UDD
 branch?
 
 It sounds to me like it'd then be much easier for me to just maintain my
 own branch on the side and upload from there, ignoring UDD entirely,
 which surely isn't what we want there.
 
 We're all familiar with workflows where landings to the master branch are
 guarded by robots like Jenkins or Tarmac.  I think of this exactly the same
 way, with the 'bot being the importer.
 
 -Barry

Sure, except that for those projects, there's an incentive to push a MP
and wait for the bot to process it as otherwise the change won't land.

For UDD, if we can't commit to the branch, then there's zero benefit in
even using it as the source branch as I could just as well use apt-get
source, which will get me the current package from the archive (which
UDD doesn't necessarily give me...), then apply changes and push that.

Not having commit rights to the UDD branch would make UDD a simple
archiving service and based on its current state, a pretty bad one at
that.


To clarify my position, I really like UDD and I think that kind of VCS
integration is what we want for the distro, but it's never been working
reliably enough to gain sufficient momentum.

In a perfect world, I'd have a VCS repository containing branches for
all Ubuntu series and pockets, for the various Debian releases and for
the upstream code, making any rebase/cherry-pick trivial and having LP
trigger builds on either a special signed commit or a signed tag.

Also in that perfect world, the inner workings of our upload process
would be tied to that, so that it wouldn't be possible for the branch to
be out of sync with the archive. This could be achieved by either making
this the only way to upload or by making the legacy upload path commit
to the branch and export from there prior to processing.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Giving developers access to requeue package imports [Was: Ubuntu Platform developers BOF session?]

2013-11-13 Thread Stéphane Graber
On Wed, Nov 13, 2013 at 04:38:17PM -0500, Barry Warsaw wrote:
 On Nov 13, 2013, at 04:28 PM, Stéphane Graber wrote:
 
 For UDD, if we can't commit to the branch, then there's zero benefit in
 even using it as the source branch as I could just as well use apt-get
 source, which will get me the current package from the archive (which
 UDD doesn't necessarily give me...), then apply changes and push that.
 
 For simple package changes, you could have a point, but I rarely encounter
 simple package changes specifically in Ubuntu.  Usually I'm merging a new
 upstream, or Debian version, and then local version control is often a
 godsend.  Sometimes the merge goes badly, or you have conflicts, or it's not
 enough to get a working Ubuntu package.  I can do the merge, commit that local
 change, and then do further commits locally as I refine the package to build
 and work properly.  I can diff between revisions, revert changes, etc.
 E.g. all the benefits of version control.  I can create side branches of my
 main branch to try things out, then merge them back into my main branch.  All
 this is especially useful if you are working on a package over some span of
 time.
 
 apt-get source is like performing without a net.  Let's say you head down the
 wrong path while updating a package.  It's very difficult to backup and try
 again, take side travels for experimentation, etc.  Oh, and chdist is nice,
 but I prefer having ubuntu:series{,-proposed}/package branches.

Well, to be fair my fallback process when not doing UDD is:
 - pull-lp-source package series
 - cd */
 - bzr init && bzr add && bzr commit -m current
 - Do whatever changes, commit when needed, revert, ...
 - bzr bd -S
 - dput

Which, based on what you described about commitless UDD, seems pretty
much identical, with the significant improvement that I don't have to
grab the whole branch on top of that :)

I could also push and share that temporary branch with others and
there'd be no downside to this since I wouldn't be able to merge that
branch back into the UDD one anyway.

At least for me, UDD without commit rights would mean lost granularity
in some changes I'm doing in the archive. For example, for some of the
edubuntu packages I've had dozens of commits before an actual upload,
and I quite enjoy having that present in the UDD history; losing that
ability would mean losing much of UDD's benefits.

 
 Not having commit rights to the UDD branch would make UDD a simple
 archiving service and based on its current state, a pretty bad one at
 that.
 
 To clarify my position, I really like UDD and I think that kind of VCS
 integration is what we want for the distro, but it's never been working
 reliably enough to gain sufficient momentum.
 
 In a perfect world, I'd have a VCS repository containing branches for
 all Ubuntu series and pockets, for the various Debian releases and for
 the upstream code, making any rebase/cherry-pick trivial and having LP
 trigger builds on either a special signed commit or a signed tag.
 
 Also in that perfect world, the inner workings of our upload process
 would be tied to that, so that it wouldn't be possible for the branch to
 be out of sync with the archive. This could be achieved by either making
 this the only way to upload or making the legacy upload path, commit
 to the branch and export from there prior to processing.
 
 I'll agree with you there.  I'd love to live in this world. :)
 
 -Barry



-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Introducing sbuild-launchpad-chroot

2013-10-22 Thread Stéphane Graber
On Tue, Oct 22, 2013 at 06:42:16PM +0200, Andreas Moog wrote:
 On 21.10.2013 16:31, Stéphane Graber wrote:
 
  With trusty now open, I uploaded a tool I've been using for a few months 
  now.
  
  It's called sbuild-launchpad-chroot and pretty much does exactly what
  the name says.
 
 That sounds like a useful tool. Is there a way to have it use LVM
 volumes instead of directories?

Should be easy enough, yes, patches are welcome :)

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Introducing sbuild-launchpad-chroot

2013-10-21 Thread Stéphane Graber
That's pretty much my plan: find a way to get schroot to interface with
LXC (or just unshare the netns directly). We need something a bit more
clever than just blocking access completely, though, since you still want
to grab the build-depends. Passing a socket to a small proxy would be
one way; creating a veth pair would be another (using iptables to
block non-archive traffic).
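
To make the veth idea a bit more concrete, the host side could look
roughly like this (just a sketch; the interface names, $BUILD_PID and
$ARCHIVE_NET are placeholders, nothing the tool ships yet):

 # create a veth pair and move one end into the build's network namespace
 ip link add veth-host type veth peer name veth-build
 ip link set veth-build netns $BUILD_PID
 # only allow traffic from the build to the archive, reject the rest
 iptables -A FORWARD -i veth-host -d $ARCHIVE_NET -j ACCEPT
 iptables -A FORWARD -i veth-host -j REJECT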

On Tue, Oct 22, 2013 at 11:33:19AM +1300, Robert Collins wrote:
 Cool. Using lxc rather than a chroot will let you cut internet off hard :)
 
 -Rob
 
 On 22 October 2013 03:31, Stéphane Graber stgra...@ubuntu.com wrote:
  Hey everyone,
 
  With trusty now open, I uploaded a tool I've been using for a few months 
  now.
 
  It's called sbuild-launchpad-chroot and pretty much does exactly what
  the name says.
 
  The package contains 3 things:
   - 1 tool to create/update/delete sbuild chroots
   - 1 schroot hook to update the chroot at the beginning of a build
   - 1 schroot hook to generate the right sources.list for the build
 
  That last hook was written by Andy Whitcroft and some of you may already
  be using it.
 
  With the package installed, you can then do:
   sudo sbuild-launchpad-chroot create -n trusty-amd64-sbuild -s trusty -a 
  amd64
 
  This will define a new chroot in schroot called trusty-amd64-sbuild, set
  some extra launchpad.* options for the series and architecture on
  Launchpad, download the current Launchpad chroot and also set up the
  following aliases:
   - trusty-security-amd64-sbuild
   - trusty-security+main-amd64-sbuild
   - trusty-security+restricted-amd64-sbuild
   - trusty-security+universe-amd64-sbuild
   - trusty-security+multiverse-amd64-sbuild
   - trusty-updates-amd64-sbuild
   - trusty-updates+main-amd64-sbuild
   - trusty-updates+restricted-amd64-sbuild
   - trusty-updates+universe-amd64-sbuild
   - trusty-updates+multiverse-amd64-sbuild
   - trusty-proposed-amd64-sbuild
   - trusty-proposed+main-amd64-sbuild
   - trusty-proposed+restricted-amd64-sbuild
   - trusty-proposed+universe-amd64-sbuild
   - trusty-proposed+multiverse-amd64-sbuild
 
  Once done, you can then trigger a build with something like:
   sbuild --dist=trusty --arch=amd64 -c 
  trusty-proposed+restricted-amd64-sbuild dsc
 
  This will print the following:
   I: 01launchpad-chroot: [trusty-amd64-sbuild] Processing config
   I: 01launchpad-chroot: [trusty-amd64-sbuild] Already up to date.
   I: 90apt-sources: setting apt pockets to 'release security updates 
  proposed' in sources.list
   I: 90apt-sources: setting apt components to 'main restricted' in 
  sources.list
 
  Confirming that the hook has checked that the chroot currently matches
  what Launchpad uses, and telling you that the sources.list in the build
  environment contains all the pockets (but backports) and the main and
  restricted components.
 
 
  In theory the only noticeable difference between a build environment
  created by sbuild-launchpad-chroot and the real thing is that you'll
  have internet connectivity from inside the chroot (but I'm working on
  also emulating that part of the LP build environment) and that you'll be
  running with a newer version of sbuild than what's used on the real
  buildds.
 
 
  --
  Stéphane Graber
  Ubuntu developer
  http://www.ubuntu.com
 
  --
  ubuntu-devel mailing list
  ubuntu-devel@lists.ubuntu.com
  Modify settings or unsubscribe at: 
  https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel
 
 
 
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Minutes of the Developer Membership Board meeting 2013-07-01

2013-07-15 Thread Stéphane Graber
Sorry it took me so long to get around to sending those.

Present: stgraber, Laney, rbasak, bdrung, barry, tumbleweed, Daviey

Log:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2013/ubuntu-meeting.2013-07-01-15.00.log.html

== 2013-07-01 ==

 * Review of previous action items
  * Laney: everyone read and amend
http://pad.ubuntu.com/dmb-ppu-membership-proposal, and sign up for the
implementation tasks
  * Discussed at the end of the meeting
  * tumbleweed: add louis-bouchard to universe-contributors
  * done


 * MOTU Applications - Robie Basak
   * https://wiki.ubuntu.com/RobieBasak/ServerDeveloperApplication
   * This was granted with 5/0/0 (yes/no/abstention)

 * PPU upload rights - Robie Basak
   * https://wiki.ubuntu.com/RobieBasak/ServerDeveloperApplication
   * This was specifically for the Ubuntu Server packageset
   * The request was granted with 5/0/0 (yes/no/abstention)


 * PPU membership proposal
   * http://pad.ubuntu.com/dmb-ppu-membership-proposal
   * We don't seem to unanimously agree on one of the options, so Laney
 will set up a CIVS poll and we'll go with the winner.

== Action items ==
 * Laney to set up a CIVS poll to choose between the two options
   (and NOTA) for PPU membership

== Next meeting ==
The next meeting will be on Monday 15th of July at 19:00 UTC in
#ubuntu-meeting on freenode and will be chaired by ScottK.



Please join me in welcoming our new MOTU and PPU uploader, Robie Basak!

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Minutes from the Technical Board meeting, 2013-06-24

2013-06-24 Thread Stéphane Graber
Technical Board meeting, 2013-06-24

= Attendees =
 * Colin Watson
 * Dave Walker
 * Kees Cook
 * Matt Zimmerman
 * Soren Hansen
 * Stéphane Graber (chair)
 * Steve Langasek

= Notes =
== Development series alias on archive.ubuntu.com and for uploads ==
During the last Ubuntu vUDS Rick Spencer was tasked with finding a good
name for the new development series alias which would allow users to
stick to whatever is the current development release (or to use it as an
upload target). Rick came up with "rolling" as the suggested name. After
a bit of discussion between the present TB members, it appears the
members prefer "next" instead.
The main concern about using "rolling" is the added confusion with regard
to how Ubuntu is developed, as it'd surely be interpreted by some as
Ubuntu being on a rolling release schedule, which it's not.

The Technical Board didn't make a final decision on the name though as
Rick couldn't attend the meeting himself to defend his proposal. As a
result, the Technical Board feels that Colin should go ahead with the
implementation of the feature in Launchpad but not put it live until
after our next meeting (8th of July) so to leave until that time for
Rick to defend his proposal (on our mailing-list or at our next meeting)
or for someone else to propose a better name. In any case, we expect to
make a final decision on the matter on the 8th.

== Packages linking against OpenSSL ==
Discussion is on-going on the Technical Board mailing-list about whether
OpenSSL should be considered as system library allowing packages to
link against it without the need to include a special exception in the
license of the software.
We expect to vote on this at our next meeting, discussing the various
interpretations of the license on our mailing-list until then.

== Micro release exceptions ==
Stefan Bader asked for a micro release exception for Xen; unfortunately,
nobody from the Technical Board had time to look into this and reply to
his request, but we expect this to move forward soon. There are some
concerns regarding how much regression testing happens on Xen and
whether this is enough for us or if we need to do more on our side. This
discussion will be continued on our mailing-list.

Kees also brought up that we have a few provisional exceptions that
should be reviewed to either become full MREs or be withdrawn.

= Topics for the next TB meeting =
Based on the above, the following will be discussed at our next
scheduled meeting:
 * Final vote on the development series alias name
 * Final vote on the OpenSSL system library discussion
 * Review our current provisional Micro Release Exceptions

Meeting log:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2013/ubuntu-meeting.2013-06-24-20.10.log.html


== Next meeting ==
 * 2013-07-08 21:00 London time (mdz to chair).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Re: Listing the licenses used on an installed system?

2013-06-14 Thread Stéphane Graber
On Fri, Jun 14, 2013 at 04:24:44PM +0200, Sebastien Bacher wrote:
 Hey,
 
 I'm looking at the about this phone panel [1] for the touch image,
 one of the features included is a Software licenses screen which:
 should navigate to a single “Software Licenses” screen that
 consists of a single text view listing all the licenses for included
 software (since there is no other way to access that information). 
 
 Does anyone know if we already have tools doing that, or what would
 be the best way to get that information? From a quick discussion
 on IRC yesterday the idea that came out was basically to read/dump
 /usr/share/doc/*/copyright ... does that seem a reasonable
 approach? Checking on an android device, their equivalent panel is
 slow and dumps tons of information as well...
 
 Cheers,
 Sebastien Bacher
 
 [1] https://wiki.ubuntu.com/AboutThisDevice#Phone


Hi,

Short of having every package that's part of the Touch image use
machine-parsable copyright files, I think dumping
/usr/share/doc/*/copyright is your best bet.
As you say, it won't be pretty or really readable, but that doesn't appear to
be much of a concern on most if not all current devices.

One thing you probably want to do though is generate a list of unique
real-paths (non-symlinks) and use that as the list of copyright files to
display. In theory that should avoid any duplication for source packages
producing multiple binaries.
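
Something like this one-liner would do it (a quick sketch, untested):

 for f in /usr/share/doc/*/copyright; do readlink -f "$f"; done | sort -u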

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: [Ubuntu-phone] Status of 100 scopes and touch landing to saucy

2013-06-07 Thread Stéphane Graber
On 06/07/2013 12:48 PM, Dave Morley wrote:
 On 07/06/13 14:46, Didier Roche wrote:
 38 components MIRed and promoted to main.
 Man, we are gonna have to call MIRed something new now. I thought you
 had uploaded Mir display 38 times :D

I'm looking forward to the MIR for Mir, not confusing at all ;)

For those not following too closely:
 - MIR = Main Inclusion Request
 - Mir = The new display server

Exact spelling is critical to avoid confusion (so is careful reading)!

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Systems with invalid locales

2013-06-04 Thread Stéphane Graber
On 06/04/2013 06:18 AM, Adam Conrad wrote:
 On Tue, Jun 04, 2013 at 12:03:10PM +0200, Martin Pitt wrote:

 I have always considered this default behaviour of ssh unexpected and
 wrong. It blindly applies the host locale to the remote ssh session
 without any checks whether that locale is actually valid. In
 particular because it only seems to do that if the remote server does
 not have any default locale from /etc/default/locale
 
 I wonder if it might be high time to discuss slapping C.UTF-8 in the
 default locale in pretty much every minimal installation scenario we
 can think of (obviously, still overriding in more complex installers
 that allow a user to choose a properly local locale).
 
 ... Adam

+1

In a perfect world (yeah, I know...) we should be able to run Ubuntu
systems without language-pack-en, as we wouldn't have any hardcoded
en_US.UTF-8 but would instead use C.UTF-8.

That'd let us save some space on some images (think of any localized
image), on the actual target systems and on updates (langpacks are big).

In any case, C.UTF-8 is usually vastly better than just C, so I'm all
for having that change done.
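
For reference, making C.UTF-8 the default on an existing system is a
one-liner (update-locale rewrites /etc/default/locale):

 sudo update-locale LANG=C.UTF-8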


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Systems with invalid locales

2013-06-04 Thread Stéphane Graber
On 06/04/2013 06:33 AM, Adam Conrad wrote:
 On Tue, Jun 04, 2013 at 04:18:40AM -0600, Adam Conrad wrote:

 I wonder if it might be high time to discuss slapping C.UTF-8 in the
 default locale in pretty much every minimal installation scenario we
 can think of (obviously, still overriding in more complex installers
 that allow a user to choose a properly local locale).
 
 And on a vague tangent from that, it might also be getting close to a
 point where we should discuss switching buildds to C.UTF-8 too (they
 currently force C).  My gut feeling is that this shouldn't have much
 effect, with the following assumptions:
 
 1) Most packages and testsuites shouldn't care what locale they're
run in in the first place (but it doesn't seem to make sense to
test in a locale almost no one uses in production, while a UTF-8
locale will at least trip a few curious issues real people see)
 
 2) Most packages that do require being in C for their builds or for
their testsuite probably already force this, because maintainers
have been working in fancy locales for years now, and they've
had to force C where needed to make things work outside chroots.
 
 Pretty sure we *would* run into some packages (especially older ones)
 that would fail in a UTF-8 locale, but I think the general benefit of
 building in a locale more similar to 99% of people's runtime locales
 would be a net win, even if we have to fix builds and testsuites to
 cope.
 
 Thoughts?

+1

I stopped counting the number of packages I had to fix because they blew
up mid-build while trying to parse their own changelog and exploded on
my name, so I'd be very happy to finally have a UTF-8 locale on our buildds.

I'd also be very surprised if we had actual cases where C.UTF-8 gives a
different result than C. Unless some tests actually attempt to print
non-ASCII characters under C, but that'd be a pretty weird test to have.

So I think it's a change worth doing and any issue we find will clearly
be bugs that should simply get fixed.
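
And for any package that genuinely needs a fixed locale to build or to
run its tests, the usual fix is a one-liner near the top of debian/rules
(illustrative):

 export LC_ALL=C.UTF-8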

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Intention to drop Wubi from 13.04 release

2013-04-02 Thread Stéphane Graber
On 04/01/2013 04:08 PM, Stéphane Graber wrote:
 On 04/01/2013 03:59 PM, Steve Langasek wrote:
 Dear developers,

 Recent bug reports suggest that the Ubuntu installer for Windows, Wubi, is
 not currently in very good shape for a release:

   13.04 installer doesn't create user account 
   https://bugs.launchpad.net/wubi/+bug/1155704

   Wubi fails to detect 12.04.2 and 13.04 AMD64 ISO
   https://bugs.launchpad.net/wubi/+bug/1134770


 Combined with the fact that Wubi has not been updated to work with Windows 8
 (bug #1125604), and the focus on mobile client over desktop, the Foundations
 team does not expect Wubi to be in a releasable state for 13.04.

 I am therefore proposing to drop Wubi from the 13.04 release, starting
 immediately with the upcoming Beta.  This will save our testers from
 spending their time testing an image that will not have developers working
 on fixing the bugs they find, and spares our users from using an image for
 13.04 that is not up to Ubuntu's standards of quality.
 
 I think this will save us quite a bit of the usual troubles we have
 around release time and the fact that Wubi is nearly unusable on Windows
 8 makes it a lot less relevant.
 
 As we discussed, there may be some interest from some of the flavours
 targeting people who tend to use older versions of Windows, but even
 then current Wubi has a bunch of bugs that need fixing first before we
 can realistically ship it even for a limited set of flavours.
 
 So anyway, +1.
 
 I'll take care of disabling any remaining wubi builds and the matching
 products on the tracker (and remove wubi from the 13.04 manifest).

Done. I have disabled daily builds of Wubi for everything but Ubuntu
12.04, removed any existing build from the QA Tracker and removed Wubi
from the 13.04 manifest.

All the bits are technically still there as we need those for 12.04, so
if someone wants to spend the time to get Wubi back in shape and
convince some of the flavours to keep on shipping it, it'll be easy to
re-enable.

 If someone is interested in taking over the maintenance of Wubi so that it
 can be released with 13.04 (or if not with 13.04, then with a future
 release), I would encourage them to start by looking at the abovementioned
 bugs and preparing patches, then talking to the release team.

 Thanks,


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Minutes of the Developer Membership Board meeting 2013-03-25

2013-04-02 Thread Stéphane Graber
Present: bdrung, micahg, ScottK, stgraber, tumbleweed

Log:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2013/ubuntu-meeting.2013-03-25-19.01.log.html

== 2013-03-25 ==

 * Review of previous action items
  * Micah to send summary of PPU decoupling discussion to the DMB list
    * DONE


 * PerPackageUploader Applications - Paul Gevers
   * https://wiki.ubuntu.com/Elbrus/PerPackageUploaderApplication
   * Was granted upload rights to:
 winff, daisy-player, ebook-speaker and cacti.

 * MOTU Applications - Vibhav Pant
   * https://wiki.ubuntu.com/VibhavPant/MOTUApplication
   * Although Vibhav has demonstrated a lot of good work and technical
knowledge, the board feels that Vibhav is a bit overenthusiastic and
that he sometimes should stop and ask other developers before acting. As
a result, we do not believe he's currently fit for MOTU and invite him
to come back a bit later.

 * Chair for next meeting: Barry


== Votes ==
 * PPU for Paul Gevers (winff, daisy-player, ebook-speaker and cacti)
   For: 5 Against: 0 Abstained: 0

 * Vibhav Pant for MOTU
   For: 1 Against: 1 Abstained: 3

== Action items ==
 * Micah to urgently send feedback on Bjorn's PPU application

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Re: Intention to drop Wubi from 13.04 release

2013-04-01 Thread Stéphane Graber
On 04/01/2013 03:59 PM, Steve Langasek wrote:
 Dear developers,
 
 Recent bug reports suggest that the Ubuntu installer for Windows, Wubi, is
 not currently in very good shape for a release:
 
   13.04 installer doesn't create user account 
   https://bugs.launchpad.net/wubi/+bug/1155704
 
   Wubi fails to detect 12.04.2 and 13.04 AMD64 ISO
   https://bugs.launchpad.net/wubi/+bug/1134770
 
 
 Combined with the fact that Wubi has not been updated to work with Windows 8
 (bug #1125604), and the focus on mobile client over desktop, the Foundations
 team does not expect Wubi to be in a releasable state for 13.04.
 
 I am therefore proposing to drop Wubi from the 13.04 release, starting
 immediately with the upcoming Beta.  This will save our testers from
 spending their time testing an image that will not have developers working
 on fixing the bugs they find, and spares our users from using an image for
 13.04 that is not up to Ubuntu's standards of quality.

I think this will save us quite a bit of the usual troubles we have
around release time and the fact that Wubi is nearly unusable on Windows
8 makes it a lot less relevant.

As we discussed, there may be some interest from some of the flavours
targeting people who tend to use older versions of Windows, but even
then current Wubi has a bunch of bugs that need fixing first before we
can realistically ship it even for a limited set of flavours.

So anyway, +1.

I'll take care of disabling any remaining wubi builds and the matching
products on the tracker (and remove wubi from the 13.04 manifest).

 If someone is interested in taking over the maintenance of Wubi so that it
 can be released with 13.04 (or if not with 13.04, then with a future
 release), I would encourage them to start by looking at the abovementioned
 bugs and preparing patches, then talking to the release team.
 
 Thanks,
 
 
 


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Upstart user session available for testing

2013-02-19 Thread Stéphane Graber
Hello,

The foundations team has been working on Upstart User Sessions for a
good part of this cycle. An early preview is now available for testing.

Upstart user sessions basically mean that instead of having lightdm
spawn Xsession, which in turn spawns gnome-session, upstart is now
inserted between Xsession and gnome-session.
This means that bits of gnome-session can now be turned into user jobs,
can react on any event coming from the system wide upstart daemon and
can be monitored/respawned just like system jobs are.

The idea behind this work is to allow for fewer long-lasting processes
on the Ubuntu desktop by having more event-driven, short-lived
processes. It should also help with the perceived stability of the
desktop thanks to upstart's respawn feature.

Details can be found at:
https://wiki.ubuntu.com/FoundationsTeam/Specs/RaringUpstartUserSessions

The test PPA is at:
https://launchpad.net/~stgraber/+archive/foundation-build


This PPA contains a build of upstart trunk that's been tested for the
past week by members of the Foundations team and Desktop team. So we're
reasonably sure that it's stable. We however recommend that you keep a
working upstart package around just in case we missed something and you
need to do an emergency revert of upstart on your system.


What that package does is install a new version of upstart containing
all our user session changes. It also ships with a custom Xsession hook
and a new X session called ubuntu-upstart.

Once you have that package installed, you'll notice a new session in
lightdm called Ubuntu (upstart). Selecting this session will start a
standard Ubuntu Unity session that'll be running inside an upstart user
session.

After using this for the past few days, I haven't noticed any side
effects. On the upside, you can now dump extra jobs in ~/.init, list
running user jobs with initctl list, start/stop them with initctl
start/stop and do pretty much anything you'd expect from standard
upstart jobs.

As an example, write the following to ~/.init/mumble.conf:

# start mumble once pulseaudio is up and a sound device gets plugged in
start on started pulseaudio and :sys:sound-device-added
# stop it again when the sound device goes away
stop on :sys:sound-device-removed
exec mumble


Then plug a USB headset and you'll see mumble pop up on your desktop.


The system wide jobs can be found under /usr/share/upstart/sessions/
Those are examples that I wrote last week. I expect a final version of
those to end up in the individual source packages.

Log output from the jobs can be found under ~/.cache/upstart/


As it stands today, we only support a standard Unity session; however,
adding support for any other session based on gnome-session should be
trivial.

I plan on spending a bit of time looking at whether we can get the
system generic enough to apply to any Xsession but I think we'll want to
keep this opt-in for the moment and not force everything to run on top
of upstart (thinking of the flavours that ship their own desktop
environment).

As far as the actual release is concerned, we are two branches away from
having everything that's in my PPA merged upstream, so we're well on
track to have a new upstream release before FeatureFreeze so we can have
this landed and integrated by then.

I also have a branch with an experimental dconf bridge which lets you
react to keys being set/modified/removed. It still needs a bit of
cleanup though and it's not clear whether we'll want that running by
default or not.

Feedback, suggestions, bug reports would be much appreciated.
We're available in #upstart (on freenode) and can be contacted on
upstart-devel@lists.u.c (Cced).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Call for votes: Developer Membership Board restaffing

2013-01-31 Thread Stéphane Graber
On 01/30/2013 07:22 PM, Steve Langasek wrote:
 Hello,
 
 On Wed, Jan 30, 2013 at 09:36:19AM -0600, Micah Gersten wrote:
 The Developer Membership Board has started a vote to restaff for the
 four members whose terms are expiring. The Developer Membership Board is
 responsible for reviewing and approving new Ubuntu developers. It
 evaluates prospective Ubuntu developers and decides when to entrust them
 with developer privileges. There are seven candidates:
 
 Benjamin Drung (bdrung) https://wiki.ubuntu.com/BenjaminDrung
 Bhavani Shankar (coolbhavi) https://wiki.ubuntu.com/BhavaniShankar
 Cody Somerville (cody-somerville) https://wiki.ubuntu.com/CodySomerville
 Dmitrijs Ledkovs (xnox) 
 https://wiki.ubuntu.com/DmitrijsLedkovs/DMBApplication
 Iain Lane (Laney) https://wiki.ubuntu.com/IainLane/DMB2013
 Scott Kitterman (ScottK) https://wiki.ubuntu.com/ScottKitterman
 Stéphane Graber (stgraber) https://wiki.ubuntu.com/stgraber
 
 At the January DMB meeting, there were two applicants, both of whom were
 rejected.  It doesn't say that on paper; on paper it says that Adam Stokes's
 application was changed to contributing member during the meeting and was
 approved.  But the long and the short of it is that two people with a
 substantial history of contributing to Ubuntu in their respective domains
 applied for upload rights in January, were recommended by existing Ubuntu
 developers, and were denied upload rights by the DMB.
 
 I understand that the DMB won't always agree with their fellow Ubuntu
 Developers about whether a particular applicant is ready for a particular
 uploader status.  But I do think it's important that when the DMB disagrees
 with the developers who are recommending someone for uploader status, there
 be transparency about the reasons for this disagreement.  Currently, the
 wiki says:
 
   It can be difficult to know when you are ready to apply for uploader team
   membership.
 
   (https://wiki.ubuntu.com/DeveloperMembershipBoard/ApplicationProcess)
 
 That's certainly true, but I think this is something that the DMB has a duty
 to correct.  Frankly, I think there's no reason that Adam and Björn couldn't
 have been ready for upload rights by January, *if* the DMB's expectations
 were made clearer.  If there were documented standards that at least tried
 to be objective, people who are aiming to get upload rights can be working
 to those standards in advance, instead of being told in the DMB meeting that
 the work they've been doing doesn't tick the right boxes on the DMB's
 invisible checklist.
 
 So my question to each of the candidates is this.  As a member of the DMB,
 what would you do to remove this uncertainty around when people are ready to
 apply, reducing the number of rejections (whether those are hard rejects, or
 soft redirects) at DMB meetings?

First of all, I'd like to start by looking at some numbers because I
know people on this list love those.
Based on my quick grepping through the IRC logs, over the past year
we've had a total of 29 applications, of which 22 were granted, giving
our applicants a success rate of over 75%.

Note that those numbers look at what was originally asked for by the
applicant, so any case where the application is downgraded by the DMB
isn't part of those 22.

Detailed stats are:
Coredev applications: 6 / 8 = 75%
MOTU applications: 2 / 4 = 50%
PPU: 14 / 16 = 87.5% (of which 5 included the creation of a new set)
UCD: 0 / 1 = 0%



Now to try to improve those numbers a bit, I think we can do a few things:
 - Improve our wiki documentation. I think we've rewritten it from
scratch 2-3 times since I first joined the DMB, but maybe the next time
will be perfect. As others said, we can't and won't publish a checklist
of things to do to become a coredev, MOTU or get PPU, but what we can
publish is a list of important areas of work related to the membership.
   Essentially we'd mention that coredevs work on the whole archive,
have to deal with a variety of different packaging methods, help with
library transitions, SRUs, merges from Debian and coordinate with the
release team. That should give enough pointers as to what we typically
look at when we get one of those applications.

 - I like the idea of having the Developer Advisory Board contact us and
in general I feel that applicants and sponsors should equally feel free
to contact DMB members to check if they feel they are ready or what kind
of work is still expected.
   DMB members don't always agree with each other, so unfortunately
even if you're being told that you seem ready, it's no guarantee that
you'll be successful at the meeting. Those cases are fairly rare, I
think, and unfortunately Adam Stokes' application was one of those
(where I was the DMB member contacted pre-meeting).

 - Attempt to better describe what we expect on the social/distro side.
   The DMB doesn't only grant upload privileges, it also grants Ubuntu
Membership along with those privileges. That's

Minutes from the Technical Board meeting, 2013-01-21

2013-01-21 Thread Stéphane Graber
Technical Board meeting, 2013-01-21

= Attendees =
 * Colin Watson
 * Kees Cook
 * Matt Zimmerman
 * Martin Pitt
 * Soren Hansen
 * Stéphane Graber (chair)

= Notes =
Really short meeting (< 10 min) with a completely empty agenda, as all
the recent items had already been discussed directly by e-mail on the
Technical Board mailing-list.

For those not subscribed to our mailing-list, those items were:
 - MicroReleaseException for sssd = granted
 - Addition of Laura Czajkowski to ~launchpad-buildd-admins = done

Meeting log:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2013/ubuntu-meeting.2013-01-21-21.01.html


== Next meeting ==
 * 2013-02-04 21:00 London time (cjwatson to chair).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com





signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel-announce mailing list
ubuntu-devel-announce@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce


Re: Desktop sharing - security issue

2013-01-19 Thread Stéphane Graber
On 01/12/2013 05:13 AM, James Harris wrote:
 This is a security issue that allowed someone to get remote desktop
 access to my Ubuntu machine even though the machine is behind a
 firewall. I was going to report it as a bug but from the Launchpad
 instructions it seems it is more a policy issue so am reporting it to
 the mailing list that the page directed me to.
 
 Context:
  * Recent upgrade to 12.04 LTS. (May or may not be related.)
  * Home network behind NAT firewall.
  * Home router configured to reject all incoming connections.
 
 Problem: Someone on the Internet gained access to my Ubuntu machine.
 
 Cause: Desktop Sharing preferences and other.
 
 Since the upgrade I found intermittent text on screen that I hadn't
 written. It was the same attack as is mentioned at
 
   http://www.bleepingcomputer.com/forums/topic314188.html
 
 The router was configured to be completely locked down and reject all
 connections from the internet, even ping, but after a lot of looking
 for viruses etc I eventually found what I think is the cause.
 
 Desktop Sharing has a setting: "Automatically configure UPnP router to
 open and forward ports". This setting was selected. I don't know when
 it was turned on but it is not something I would want to use. The
 router turned out to be UPnP configurable. This, I think, meant that
 the desktop sharing software told the router to open up access. This
 is not something I was aware of and I had not selected it.
 
 How is it best to protect Ubuntu users from unintentionally opening up
 access as described above? (If it helps, my other desktop sharing
 settings were completely open but nothing warned me of the danger.)
 
 James

Hi,

I just had a quick look here at what the default values for those
settings are on a perfectly clean Ubuntu installation.

Desktop sharing itself is disabled by default.
When enabled, any connection will require explicit user confirmation
through a popup message showing on your desktop.

UPnP auto-configuration is never done automatically and requires the
user to explicitly tick the "Automatically configure UPnP router to open
and forward ports" option.


So unless someone explicitly enables desktop sharing, then unticks "You
must confirm each access to this machine" and ticks "Automatically
configure UPnP router to open and forward ports", what you described
above simply isn't possible on an Ubuntu machine.

As for clearly stating the risks, here is a copy/paste from the help
message as can be accessed from the configuration dialog:

== Security ==
It is important that you consider the full extent of what each security
option means before changing it.

=== Confirm access to your machine ===
If you want to be able to choose whether to allow someone to access your
desktop, select "You must confirm each access to this machine". If you
disable this option, you will not be asked whether you want to allow
someone to connect to your computer.
This option is enabled by default.

=== Enable password ===
To require other people to use a password when connecting to your
desktop, select "Require the user to enter this password". If you do not
use this option, anyone can attempt to view your desktop.
This option is disabled by default, but you should enable it and set a
secure password.

=== Allow access to your desktop over the Internet ===
If your router supports UPnP Internet Gateway Device Protocol and it is
enabled, you can allow other people who are not on your local network to
view your desktop. To allow this, select "Automatically configure UPnP
router to open and forward ports". Alternatively, you can configure your
router manually.
This option is disabled by default.


So my best guess here is that for some reason you at some point changed
those settings and didn't realize what the UPnP option would do and
apparently didn't read the help before changing those settings.
Then some time later, someone scanned your router's IP address and
discovered that the VNC port was open and then either brute-forced any
password you may have set or directly connected if you didn't set one.


You say you didn't select that setting, but obviously somebody or
something did and somebody or something also unset the other setting
forcing the confirmation prompt.

As a conclusion, I believe the settings we ship Ubuntu with are
perfectly sane and safe. It's not impossible that some external software
you downloaded may have tampered with those settings, but there's really
little we can do about this (as if that's indeed the case, that software
may just as well have bundled its own copy of a VNC server).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Application Review Board restaffing results

2013-01-18 Thread Stéphane Graber
Hello everyone,

The Application Review Board[0] has been seeking new members for the
past month and closed the nomination period on the 14th of January.

Only a single nominee, Jonathan Carter[1], came out of this for the 3
seats that were available on the board.


The usual procedure is for the Technical Board to set up a CIVS poll for
such restaffing, though in this particular case, we do not feel it'd
serve any real purpose and so decided to simply confirm Jonathan's
membership on the Application Review Board.


Please join me in welcoming Jonathan back on the ARB!


PS: Note that the ARB is still understaffed by 2 members so if anyone
would like to join, please get in touch with them.

[0] https://wiki.ubuntu.com/AppReviewBoard
[1] https://launchpad.net/~jonathan

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Subset of the Ubuntu Archive frozen for alpha-1 release

2012-12-05 Thread Stéphane Graber
Hello,

As was discussed and agreed on at UDS, flavours participating in the
opt-in milestones may ask for a subset of the Ubuntu archive to be
temporarily frozen while they get ready for release.

Edubuntu and Kubuntu decided to participate in alpha-1 which is due for
release tomorrow.

As of a few minutes ago, at Kubuntu's request, all packages that ship on
the Kubuntu images have been frozen.

You can find the exact lists of source packages being frozen here:
http://bazaar.launchpad.net/~ubuntu-release/britney/hints-ubuntu/files

The freeze is done in britney, the tool we use to promote packages from
the proposed pocket to the release pocket. That means that developers
can continue to upload packages as usual, any frozen package will simply
be temporarily held in the proposed pocket.
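
For the curious, a freeze of this kind is expressed as block hints, one
line per set of sources, roughly like this (illustrative only; the hint
files linked above are the authoritative reference):

 block <source-package> [<source-package> ...]

Any source listed that way is held in -proposed until the hint is
dropped again after the release.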


Stéphane Graber, on behalf of the Ubuntu Release Team.



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Patch Pilot

2012-11-27 Thread Stéphane Graber
On 11/27/2012 11:29 AM, Dmitrijs Ledkovs wrote:
 I have tried to process the oldest merges first:
 
 = Uploaded into raring with minor tweaks =
 
 https://code.launchpad.net/~kampka/ubuntu/quantal/fwknop/upstart-support/+merge/124683
 https://code.launchpad.net/~kampka/ubuntu/quantal/zabbix/upstart-support/+merge/124660
 https://code.launchpad.net/~pitti/ubiquity/pygobject-fixes/+merge/136327
 https://code.launchpad.net/~sonia/ubuntu/quantal/vim-scripts/fix-for-31204/+merge/126966
 https://bugs.launchpad.net/ubuntu/+source/gst-plugins-bad0.10/+bug/973014
 
 = Uploaded SRU =
 https://bugs.launchpad.net/ubuntu/+source/portmap/+bug/688550
 
 = Not a complete / valid SRU bug =
 Commented on what to do next & unsubscribed sponsors:
 https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1061244
 https://bugs.launchpad.net/myunity/+bug/999771
 
 == Please mark as rejected ==
 
 # Already fixed in lp:kubuntu-docs
 * 
 https://code.launchpad.net/~m-alaa8/ubuntu/quantal/kubuntu-docs/fix-for-1066132/+merge/129598

Done. In the future, feel free to ping me on IRC with a list of status
changes you want done and I'll be happy to do them.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Minutes from the Developer Membership Board meeting of the 1st of November (at UDS)

2012-11-20 Thread Stéphane Graber
On 11/20/2012 01:28 PM, Bryce Harrington wrote:
 On Sun, Nov 18, 2012 at 07:55:33PM -0500, Stéphane Graber wrote:
 A new Xorg packageset was created:
 http://people.canonical.com/~stgraber/package_sets/raring/xorg and
 Maarten was added to it as an uploader.
 
 The list of packages ubuntu-x tends will generally vary over time.  We
 track the packages we tend via the package subscriptions of the ubuntu-x
 team.  Will someone be periodically comparing that list with the package
 set, and updating the set when there are discrepancies?
 
 Bryce

Hi Bryce,

The standard process is for one of the members of the packageset to send
us an e-mail to ubuntu-devel-permissi...@lists.ubuntu.com

If the request matches the criteria for the packageset as defined when
the set was created, the packages will be added to the packageset, if
not, you'll be asked to attend a Developer Membership Board meeting to
discuss a change of the criteria.

We haven't yet defined a strict set of criteria for the xorg packageset,
though I expect anything that's closely related to xorg/wayland and
related drivers and GL libraries to be added without requiring you to
attend a meeting.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Minutes from the Developer Membership Board meeting of the 1st of November (at UDS)

2012-11-20 Thread Stéphane Graber
On 11/20/2012 01:37 PM, Stéphane Graber wrote:
 On 11/20/2012 01:28 PM, Bryce Harrington wrote:
 On Sun, Nov 18, 2012 at 07:55:33PM -0500, Stéphane Graber wrote:
 A new Xorg packageset was created:
 http://people.canonical.com/~stgraber/package_sets/raring/xorg and
 Maarten was added to it as an uploader.

 The list of packages ubuntu-x tends will generally vary over time.  We
 track the packages we tend via the package subscriptions of the ubuntu-x
 team.  Will someone be periodically comparing that list with the package
 set, and updating the set when there are discrepencies?

 Bryce
 
 Hi Bryce,
 
 The standard process is for one of the members of the packageset to send
 us an e-mail to ubuntu-devel-permissi...@lists.ubuntu.com

That'd be: devel-permissi...@lists.ubuntu.com

 If the request matches the criteria for the packageset as defined when
 the set was created, the packages will be added to the packageset, if
 not, you'll be asked to attend a Developer Membership Board meeting to
 discuss a change of the criteria.
 
 We haven't yet defined a strict set of criteria for the xorg packageset,
 though I expect anything that's closely related to xorg/wayland and
 related drivers and GL libraries to be added without requiring you to
 attend a meeting.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Minutes from the Developer Membership Board meeting of the 1st of November (at UDS)

2012-11-18 Thread Stéphane Graber
Hello,

Sorry for being so late sending these minutes.

The Developer Membership Board met at UDS (9am on Thursday 1st of
November) to review Maarten Lankhorst and Andy Whitcroft's applications.

=== Maarten Lankhorst ===

Maarten originally applied for Ubuntu Core Development membership but
after discussion, this was changed into the creation of an Xorg
packageset and upload rights to that packageset.

https://wiki.ubuntu.com/MaartenLankhorst/DeveloperApplication

The application was discussed and voted on: For: 6 Against: 0 Abstained: 0

A new Xorg packageset was created:
http://people.canonical.com/~stgraber/package_sets/raring/xorg and
Maarten was added to it as an uploader.

=== Andy Whitcroft ===

Andy applied for Ubuntu Core Development membership.
https://wiki.ubuntu.com/AndyWhitcroft/CoreDevApplication

The application was discussed and voted on: For: 6 Against: 0 Abstained: 0

Andy Whitcroft was added to the Ubuntu Core Development team.


The next meeting will be tomorrow (18th of November) at 14:00 UTC in
#ubuntu-meeting and I believe the chair will be Barry (sorry, my memory
is a bit fuzzy after almost a month ;)).


Congratulations to Maarten and Andy!

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: DNS caching disabled for 12.10...still

2012-10-07 Thread Stéphane Graber
On 10/07/2012 04:32 AM, Benjamin Kerensa wrote:
 
 On Oct 7, 2012 12:28 AM, Daniel J Blueman dan...@quora.org wrote:

 DNS caching was previously disabled [1] when dnsmasq was introduced in
 12.04 (one of the benefits), to prevent privacy issues, and to
 prevent local users from spying on source ports and trivially
 performing a birthday attack in order to poison the cache.

 Since dnsmasq eg introduced the standard port-randomisation
 mitigations [2] for Birthday attacks in 2008 and related hardening,
 what are the other technical reasons we should still keep this
 disablement, despite upstream keeping DNS caching enabled? (ie should
 upstream also disable DNS caching?)

 Of course, the impact of disabling DNS caching is considerable.

 Thanks!
   Daniel

 [1] https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/903854
 [2]
 http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2008q3/002148.html
 --
 Daniel J Blueman

 
 Good points. It does look like hardening and addressing some of the
 concerns has occurred; it's possible that enabling caching was just
 overlooked, but either way it would be nice to see it enabled in 13.04.

dnsmasq still doesn't support per-user caching, so it still doesn't meet
the criteria we discussed with the security team last cycle and as such
is kept in its current configuration.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Proposing a New App Developer Upload Process

2012-09-06 Thread Stéphane Graber
On 12-09-06 12:12 PM, Steve Langasek wrote:
 On Thu, Sep 06, 2012 at 03:35:50PM +0200, Stefano Rivera wrote:
 Hi ubuntu-devel (2012.09.06_15:31:14_+0200)
 The hooks just run a script provided by another package (in the
 archive). It makes the decisions on how to collate things.
 
 A (hopefully) clearer attempt to articulate this:
 
 We make the extras packages entirely self-contained and namespaced.
 
  Then, we provide some machinery outside them that handles collating
 things across extras packages. If there's some kind of conflict here,
 (although it should be avoidable), it's not an issue. It just results in
 a broken extras package. Not broken in a way that stops apt from
 working. And it doesn't break anything in the Ubuntu archive, only the
 conflicting extras packages.
 
 There's no reason that any of this should be done in a postinst hook.  If we
 already have a scheme to make the extras packages properly namespaced with
 no conflicts, the same class of namespacing should be used as well for the
 integration points (the shared directories), and the files should all be
  shipped in the package.  If there's a conflict, it means it's designed wrong,
 because this should be done in a way that there's never a conflict.
 
 It's completely achievable to have our packaging helper create the correct
 symlinks automatically.  Compared to the work of getting the package
 installed correctly in /opt/extras.u.c/$pkg, it's a piece of cake, even.


I agree that we shouldn't generate the symlinks from the maintainer
scripts, instead we should just be enforcing the same filename
requirement as the ARB is currently using for files outside of /opt.

That's prefixing any file outside of /opt/extras.ubuntu.com/package by
package_. As dpkg ensures we can't have two binary packages
installed with the same name on the system, there's no potential
conflict possible.

My opinion would be to have the following:
 - All files in /opt/extras.ubuntu.com/package
 - Have a reserved directory in /opt/extras.ubuntu.com or directly in /opt/
 - Use /opt/extras.ubuntu.com/reserved/ or /opt/reserved/ to contain
symlinks to the desktop files, dbus services, unity lenses/scopes, ...
adding that path at the end of the XDG_DATA_DIRS list. Any file in that
path needs to be prefixed by package_.
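
As an illustration of that layout (the package name "myapp" and the file
names are made up):

 /opt/extras.ubuntu.com/myapp/            <- everything the package ships
 /opt/extras.ubuntu.com/reserved/applications/myapp_myapp.desktop
     -> symlink to a .desktop file under /opt/extras.ubuntu.com/myapp/

with /opt/extras.ubuntu.com/reserved added at the end of XDG_DATA_DIRS
so the desktop environment picks those entries up.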

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



signature.asc
Description: OpenPGP digital signature
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: Proposal to drop Ubuntu alternate CDs for 12.10

2012-08-28 Thread Stéphane Graber
On 08/28/2012 01:56 AM, Alkis Georgopoulos wrote:
 For the LTSP use case, another possible workaround for post-12.10 releases:
  * In the last stages of installation, copy the whole /target system to
 /target/opt/ltsp/i386,
  * Chroot to /target/opt/ltsp/i386 and install ltsp-client and ldm,
  * Run /target/opt/ltsp/i386/usr/share/ltsp/cleanup to remove the user
 account that was created, regenerate dbus machine id etc,
  * Install ltsp-server to /target,
  * And run /target/ltsp-update-image to generate a squashfs image in
 /target/opt/ltsp/images/i386.img out of the fat chroot in
 /target/opt/ltsp/i386.

You seem to be assuming that the Ubuntu desktop media will contain
LTSP, which it won't.
Even then, that wouldn't solve the RAID question, as no media letting you
set up RAID will contain LTSP if the alternate is removed.

Being able to re-use the squashfs/target as a bootable LTSP system is
indeed quite nice and in some cases (Edubuntu i386) will help save a lot
of space on the media; however, it won't help with the RAID problem.

 This changes the default LTSP chroot to one that supports fat+thin
 clients (instead of only thins), but with the current trends that
 require 3d acceleration on desktops, that's probably a good thing.
 
 And it only requires minimal network connectivity to generate the
 chroot, or a couple of MB of packages in the installation media
 (ltsp-server, ltsp-client, ldm).
 


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com





Re: Proposal to drop Ubuntu alternate CDs for 12.10

2012-08-28 Thread Stéphane Graber
On 08/28/2012 03:25 AM, Dmitrijs Ledkovs wrote:
 On 28 August 2012 06:56, Alkis Georgopoulos alk...@gmail.com wrote:
 For the LTSP use case, another possible workaround for post-12.10 releases:
  * In the last stages of installation, copy the whole /target system to
 /target/opt/ltsp/i386,
  * Chroot to /target/opt/ltsp/i386 and install ltsp-client and ldm,
  * Run /target/opt/ltsp/i386/usr/share/ltsp/cleanup to remove the user
 account that was created, regenerate dbus machine id etc,
  * Install ltsp-server to /target,
  * And run /target/ltsp-update-image to generate a squashfs image in
 /target/opt/ltsp/images/i386.img out of the fat chroot in
 /target/opt/ltsp/i386.

 This changes the default LTSP chroot to one that supports fat+thin
 clients (instead of only thins), but with the current trends that
 require 3d acceleration on desktops, that's probably a good thing.

 And it only requires minimal network connectivity to generate the
 chroot, or a couple of MB of packages in the installation media
 (ltsp-server, ltsp-client, ldm).

 
 This sounds very ubiquity friendly. We would probably install
 ltsp-server, ltsp-client  ldm in the squashfs, and remove them as
 needed (this is what ubiquity currently does). Then an ltsp
 installation plugin needs to be added that can do the rest of the
 steps =)

No need, the code already exists in edubuntu-live and has been around
for years ;)
The only difference is that with what Alkis describes, it's possible to
re-use the livefs squashfs instead of using a separate ltsp squashfs
like Edubuntu currently does.

The biggest concern we found with that, and the reason why we didn't
switch to it for 12.10 in Edubuntu, is that most thin clients are unable
to run amd64 code, yet the server ideally should be amd64.

That's why the Edubuntu media contains both a regular livefs for the
desktop/ltsp server installations and a separate ltsp squashfs that's
always i386 so even when installing with the amd64 image you get an i386
LTSP chroot.

 
 Copied to prevent disappearing:
 https://blueprints.launchpad.net/ubuntu/+spec/foundations-r-ubiquity-ltsp
 
 Regards,
 
 Dmitrijs.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com





Re: Proposal to drop Ubuntu alternate CDs for 12.10

2012-08-27 Thread Stéphane Graber
On 08/27/2012 05:50 PM, Steve Langasek wrote:
 Dear developers,
 
 As part of ongoing efforts to reduce the number of images we ship for
 Ubuntu, and to make the desktop image more useful in a variety of scenarios,
 Dmitrijs Ledkovs has been hard at work in quantal adding support for LVM,
 cryptsetup, and RAID to ubiquity.
 
 The good news is that this means today we already have support in ubiquity
 for cryptsetup and LVM in the guided partitioner, with manual partitioning
 support soon to follow.  The somewhat bad news is that we will not have
 support for RAID setup in ubiquity this cycle.
 
 I would like to propose that, in spite of not reaching 100% feature parity,
 we drop the Ubuntu alternate installer for 12.10 anyway.
 
 The arguments that I see in favor of this are:
 
  - RAID is relatively straightforward to turn on post-install.  You install
to one disk, boot to the system, assemble a degraded RAID with the other
disks, copy your data, reboot to the degraded RAID, and finally merge
your install disk into the array.  It's not quick, but it's *possible*.
  - Desktop installs on RAID will still be supported by other paths: using
either netboot or server CDs and installing the desktop task.
  - RAID on the desktop really is a minority use case.  Laptops almost never
have room for more than one hard drive; desktops can but are rarely
equipped with them.  So the set of affected users is very small.  Some
rough analysis of bug data in launchpad suggests a very liberal upper
bound of .8% of desktop users.
  - RAID on the desktop correlates with conservatism in other areas:  we can
probably continue to recommend 12.04 instead of 12.10 for the affected
users.
  - It lets us tighten our focus on making the desktop CD shine: fewer images
to QA, fewer different paths to get right (like the CD apt upgrader case)
means more time to focus on the things that matter.
 
 So my opinion is that we should drop the Ubuntu alternate CDs with Beta 1. 
 Other flavors are free to continue building alternate CDs (i.e.,
 debian-installer CDs) according to their preference, but we would drop
 them for Ubuntu and direct users to one of the above-mentioned alternatives
 if they care about RAID on desktop installs.
 
 Please note one implication here that, with the possibility of not having
 i386 server CDs for 12.10, the only install option for an i386 user wanting
 RAID on a desktop would be to install via netboot or with the mini ISO.
 
 Do any of you see reasons for not making this change, and dropping the
 alternate CDs?  Are there shortcomings to the proposed fallback solutions
 that we haven't identified here?

Another use case that would be dropped when dropping alternate is LTSP.
At the moment LTSP is installable from the Ubuntu alternate CD by
pressing F4 and selecting "Install LTSP server"; it's my understanding
that this is the most widely used way of installing an LTSP server today.

In order to install LTSP from media, you need ltsp-server and its
dependencies plus a full Ubuntu desktop (for the application server), so
it can't simply be moved to the server media, as that would require
adding several hundred megabytes of packages.

What we were planning for LTSP, knowing that alternate would disappear,
was to keep it in main (by adding it to the supported seed) as it's
currently supported by Canonical and quite popular in some governments
and educational networks.
We'd then recommend using Edubuntu as the easy way of getting LTSP.
However, Edubuntu is a live-only product, so that plan isn't possible
until we get RAID support in ubiquity...

Almost all LTSP setups that I know of use RAID1 or RAID5 (or a mix of
both for system/home); no longer having the alternate and not having
RAID support in ubiquity would likely prevent a significant part of the
LTSP user base from installing it on 12.10.


That being said, we could certainly try to get these people to install
using the server media, then get LTSP post-install. This will still
require them to download a substantial number of packages, and my
experience is that setting this all up properly by hand scares a lot
of people away.


I'd certainly expect most large scale deployments of LTSP to stick with
12.04 LTS, so I can't provide any numbers on how big a user base we're
talking about, but I'm sure we'll get quite a few questions and
complaints should we drop the alternate media without a viable
alternative for these users.


I don't think this use case alone is enough to keep the alternate
images, but it's surely something to keep in mind and to communicate
clearly to the users, telling them that this is a temporary situation
that will be resolved in 13.04.


 Thanks,


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




Minutes from the 12.04.1 team meeting (Thursday 9th of August)

2012-08-16 Thread Stéphane Graber
(Sorry for sending this so late, I thought I already did ...)

The eighth 12.04.1 team meeting took place at 14:00 UTC on Thursday the
9th of August 2012.

== Attendees ==

 - arges
 - jamespage
 - jibel
 - seb128
 - skaet
 - slangasek
 - smoser
 - stgraber (chair)
 - xnox

== Notes ==
=== Deadlines ===
The upcoming deadlines are:
 * 2012/08/09: KernelFreeze, LanguageTranslationDeadline, SRU Fix
Validation Testing
 * 2012/08/16: FinalFreeze, ReleaseNoteFreeze
 * 2012/08/23: Ubuntu 12.04.1

=== Bugs ===
Verification and fixes of any detected regression should be the main
focus at this point.
Work on media for upgrading from 10.04 to 12.04 without internet access
is ongoing.

=== Oversizedness ===
A few live-build fixes have made all the images fit again.

== Team status updates ==
 - Desktop: moving bugs to 12.04.2, discussing whoopsie on the
ubuntu-release mailing list, compiz SRU fixed to work on arm, the rest
looks good
 - Release: working on release notes at
https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes
 - Foundations: working on upgrade fixes, fixed remaining oversizedness
bugs, doing SRU verification
 - L3:
http://people.canonical.com/~arges/point-release/milestone-12.04.1.html
is now running from a cronjob
 - Server: waiting for walinuxagent, been moving other bugs to 12.04.2.
There won't be a MaaS update for 12.04.1.
 - QA: focus is on upgrade testing, found a few obsolete conffiles being
left around.

== Actions ==
 * xnox to liaise with balloons, gema and jibel w.r.t. fs/storage testing
(carried)


== Next meeting ==

Full meeting log is available here:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-08-09-14.02.log.html

The next meeting will be on Thursday the 16th of August at 14:00 UTC
(#ubuntu-meeting on freenode).

Useful resources and the meeting agenda are available at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Minutes from the 12.04.1 team meeting (Thursday 2nd of August)

2012-08-06 Thread Stéphane Graber
The seventh 12.04.1 team meeting took place at 14:00 UTC on Thursday the
2nd of August 2012.

== Attendees ==
 * arges
 * Daviey
 * jibel
 * NCommander
 * seb128
 * skaet
 * smartboyhw
 * smoser
 * stgraber (chair)

== Notes ==
=== Deadlines ===
The upcoming deadlines are:
 * last Thursday 21:00 UTC: Beginning of PointReleaseProcess and
DesktopInfrastructureFreeze
 * 2012/08/09: KernelFreeze, LanguageTranslationDeadline, SRU Fix
Validation Testing
 * 2012/08/16: FinalFreeze, ReleaseNoteFreeze
 * 2012/08/23: Ubuntu 12.04.1

=== Bugs ===
The number of targeted bugs went from 106 last week to 75!
Of these 75, 32 are in fix committed state (likely waiting in -proposed)
and 9 are marked in progress (waiting to be approved).

So that's 34 bugs that still need fixing, of which 12 aren't assigned to
anybody.

The full list of bugs can be found at:
https://bugs.launchpad.net/ubuntu/precise/+bugs?field.milestone%3Alist=49926
The list of remaining bugs to fix can be found at: http://goo.gl/RUiZe
Some of these aren't assigned to someone yet: http://goo.gl/22zV2

Bugs that haven't been directly discussed and agreed on with the release
team will be moved to 12.04.2 at this point.
If a fix that's not in -proposed or in the queue at this point needs to
get into 12.04.1, please get in touch with seb128, skaet or stgraber
(by e-mail or in #ubuntu-release on IRC).

=== Oversizedness ===
A live-build fix is currently in precise-proposed and needs testing on
the builders. This should help reduce the size of the live media by
around 2MB, though up to 7MB may still need freeing. Work on this will
continue this week.

== Team status updates ==
 - Server: Still no clear plan for MaaS, might see something landing
this week (will require an exception).
 - Desktop: Mostly done for .1, some bugs didn't make it but that's not
the end of the world and these weren't important for the iso. nux and
some other updates are still in the queue. The Unity update was delayed
as it was considered too risky for .1. The desktop team would like to
see whoopsie turned off; a conversation will be started on the release
mailing list.
 - QA: Found bug 1029531, continued doing SRU verification, found a
problem with wubi image generation overwriting the quantal ones. Been
sprinting all week.
 - PES: highbank enablement landed in -updates, armadaxp is being tested
now.
 - Foundations: Cleaned up the bug list, eglibc update should be
uploaded very soon, some verification work, uploaded ubiquity and
base-files in preparation for .1.
 - Release: tracking the translation work, precise builds are now
happening for every image taking part in .1, will now start work on the
release notes.
 - L1: Some more work on the point release bug report.

== Actions ==
 * xnox to liaise with balloons, gema and jibel w.r.t. fs/storage testing
(carried)
 * Flavor leads participating, please verify that the images are as you
expect, and start smoke testing tomorrow to make sure all the right
12.04.1 bits are in place.


== Next meeting ==

Full meeting log is available here:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-08-02-14.02.log.html

The next meeting will be on Thursday the 9th of August at 14:00 UTC
(#ubuntu-meeting on freenode).

Useful resources and the meeting agenda are available at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Minutes from the 12.04.1 team meeting (Thursday 26th of July)

2012-07-26 Thread Stéphane Graber
The sixth 12.04.1 team meeting took place at 14:00 UTC on Thursday the
26th of July 2012.

== Attendees ==
 * arges
 * jibel_
 * NCommander
 * seb128
 * skaet
 * smoser
 * stgraber (chair)

== Notes ==
=== Deadlines ===
The upcoming deadlines are:
 * 2012/08/02: Beginning of PointReleaseProcess and
DesktopInfrastructureFreeze
 * 2012/08/09: KernelFreeze, LanguageTranslationDeadline, SRU Fix
Validation Testing
 * 2012/08/16: FinalFreeze, ReleaseNoteFreeze
 * 2012/08/23: Ubuntu 12.04.1

=== Bugs ===
The number of targeted bugs went from 120 last week to 106.
Of these 106, 45 are in fix committed state (likely waiting in -proposed)
and 14 are marked in progress (waiting to be approved).

So that's 47 bugs that still need fixing, of which 19 aren't assigned to
anybody.

The full list of bugs can be found at:
https://bugs.launchpad.net/ubuntu/precise/+bugs?field.milestone%3Alist=49926
The list of remaining bugs to fix can be found at: http://goo.gl/RUiZe
Some of these aren't assigned to someone yet: http://goo.gl/22zV2

xnox offered to help arges and stokachu with the required rdepends
rebuild for their multiarch packages by using his rebuild scripts on the
HP cloud.

If someone has fixes that will be uploaded after the deadline and this
was agreed with the release team, please let me know; otherwise I'll
move all remaining bugs to 12.04.2 on the 2nd of August at 21:00 UTC.

=== Oversizedness ===
The daily-live media for all architectures but powerpc are currently
oversized; this is at least partly caused by duplicate kernel headers on
the media. A fix should be uploaded shortly, so we'll know whether
further action is required to get 12.04.1 to fit on the media.

== Team status updates ==
 - Foundations: Uploaded a few more bugfixes, cleaned up the bug list,
updated statuses, did some verification and reminded people about the
deadline.
 - L3: Worked on point-release dashboard.
 - Server: Gone through the bug list, not sure what to expect for maas
in 12.04.1, some other SRUs going through.
 - PES: highbank enablement is in -updates and being verified
 - QA: Doing SRU verification, no regressions found so far; problems
with alternate and server images not being installable due to a kernel
mismatch, and an OEM install missing ubiquity.
 - Desktop: unity SRU is in proposed, compiz should be uploaded soon,
some smaller gnome fixes too and that should be it for .1
 - Release: Discussed the late cedartrail* landing with vanhoof, looking
into a bug with the slideshow on a fresh install, settled with martin
and david that david will be doing the translation updates. Doing
calculations to add the rest of the LTS dailies to the cron job, will
add them after A3 publishes.

== Actions ==
 * xnox to liaise with balloons, gema and jibel w.r.t. fs/storage testing
(carried)

== Next meeting ==

Full meeting log is available here:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-07-26-14.01.log.html

The next meeting will be on Thursday the 2nd of August at 14:00 UTC
(#ubuntu-meeting on freenode).

Useful resources and the meeting agenda are available at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Minutes from the 12.04.1 team meeting (Thursday 12th of July)

2012-07-12 Thread Stéphane Graber
The fourth 12.04.1 team meeting took place at 14:00 UTC on Thursday the
12th of July 2012.

*Note: the meetings are now moving to a weekly schedule until the
release of 12.04.1 on the 23rd of August*

== Attendees ==
 * arges
 * bdmurray
 * jamespage
 * ScottK
 * seb128
 * skaet
 * smoser
 * stgraber (chair)
 * stokachu

== Notes ==
arges worked on a bug report for point releases; it's available at:
http://people.canonical.com/~arges/point-release/milestone-12.04.1.html
This report isn't currently built automatically and will likely be moved
to reports.qa.ubuntu.com in the near future.
Quite a few suggestions have been made to improve that report and ensure
that all the relevant bugs are on it (quite a few seem to be missing).

The upcoming deadlines are:
 * 2012/08/02: Beginning of PointReleaseProcess and
DesktopInfrastructureFreeze
 * 2012/08/09: KernelFreeze, LanguageTranslationDeadline, SRU Fix
Validation Testing
 * 2012/08/16: FinalFreeze, ReleaseNoteFreeze
 * 2012/08/23: Ubuntu 12.04.1

There are still quite a few concerns about the length of the Unapproved
queue for -proposed, sitting at around 30 packages and representing
almost two weeks of backlog. This is making it harder for the team to
track exactly what's going on.

The bug list went from 106 bugs targeted to 12.04.1 to 112.
26 of these are currently marked fix committed (vs 27 two weeks ago).
50 of the 112 bugs aren't currently assigned to anyone.

This is quite bad, as the number of bugs is supposed to go down, not up,
at this point... All 12.04.1 team members will go through the list to
ensure all the bugs are assigned and that any bug that won't make it to
12.04.1 is moved to another milestone.

The meeting cadence is also changing from fortnightly to weekly in an
effort to keep everyone focused and aware of the state of 12.04.1 in
that last month before release.

== Team status updates ==
 * L3: Working on multi-arch and the point-release report.
 * Release: Looking at QA daily testing, will try to get SRUs to land
faster.
 * Desktop: A bit behind on compiz/unity SRUs, Unity should be uploaded
next week, compiz was reverted at the last minute and will be pushed
again real soon. The team is concerned about the SRU backlog, which
makes it more difficult to track issues on errors.ubuntu.com
 * Server: Went through the whole list, will follow up on juju and maas
to make sure the fixes land on time.
http://status.qa.ubuntu.com/reports/kernel-bugs/reports/rls-p-tracking-bugs.html
 * Foundations: Going through the whole list of bugs, will fix the
status/target/importance for any that are obviously wrong, trying to
get things assigned to people.

== Actions ==
 * xnox to liaise with balloons, gema and jibel w.r.t. fs/storage testing
(carried)
 * stgraber to review and sponsor bug 977947, bug 977952 and bug 977940
 * skaet to poke the SRU team and see what can be done to process the
current backlog
 * stgraber to change the meeting to a weekly meeting

Full meeting log is available here:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-07-12-14.00.log.html

The next meeting will be on Thursday the 19th of July at 14:00 UTC
(#ubuntu-meeting on freenode).

Useful resources and the meeting agenda are available at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Enabling Connectivity Checking in NetworkManager

2012-07-10 Thread Stéphane Graber
On 07/10/2012 03:06 PM, Ted Gould wrote:
 On Tue, 2012-07-10 at 14:48 -0400, Scott Kitterman wrote:
 On Tuesday, July 10, 2012 02:41:35 PM Mathieu Trudel-Lapierre wrote:
 As for the actual change, it is limited to the
 /etc/NetworkManager/NetworkManager.conf file; to which the following
 will be added:

 [connectivity]
 uri=http://start.ubuntu.com/connectivity-check.html
 response=Lorem ipsum

 See the manual page for NetworkManager.conf(5) for the details of what
 these settings do.

 Please let me know if you have questions or think there are good
 reasons not to enable this feature. If there is no response by the end
 of the week, I'd like to proceed with a enabling this in Quantal and
 making sure it gets well tested.

 I think that a significant fraction of Ubuntu's user base is (reasonably) 
 very 
 sensitive about privacy issues.  While this is no worse the the NTP check 
 that 
 already exists (that is controversial), I don't think it  should be enabled 
 by 
 default.
 
 I think that for those who are concerned, this is trivial to disable.
 But, I think what happens for those who are, is that Ubuntu does the
 right thing by default.  If you're at a hotel or other location that
 captures for a login page, you won't get your mail and apt and ... all
 downloading bogus stuff.
 
   --Ted

There are other ways to detect such cases without having the machine
connect to an external service.

Someone suggested on IRC implementing a doesnt-exist.ubuntu.com, which
is essentially a record that Canonical would guarantee never to exist
in the ubuntu.com. zone.

If you can resolve or even access that host, then you are behind some
kind of captive portal/proxy.
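
As a minimal sketch (assuming such a record existed), the check could
look like this; note that, as pointed out below, it only catches
portals/proxies that spoof DNS:

import socket

def behind_captive_portal(host="doesnt-exist.ubuntu.com"):
    # The probe record is hypothetical: a name guaranteed never to
    # exist in the ubuntu.com zone. On an honest network the lookup
    # fails with NXDOMAIN; if it resolves, something is intercepting
    # DNS.
    try:
        socket.getaddrinfo(host, 80)
    except socket.gaierror:
        return False
    return True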

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Enabling Connectivity Checking in NetworkManager

2012-07-10 Thread Stéphane Graber
On 07/10/2012 03:20 PM, Marc Deslauriers wrote:
 On Tue, 2012-07-10 at 15:11 -0400, Stéphane Graber wrote:
 On 07/10/2012 03:06 PM, Ted Gould wrote:
 On Tue, 2012-07-10 at 14:48 -0400, Scott Kitterman wrote:
 On Tuesday, July 10, 2012 02:41:35 PM Mathieu Trudel-Lapierre wrote:
 As for the actual change, it is limited to the
 /etc/NetworkManager/NetworkManager.conf file; to which the following
 will be added:

 [connectivity]
 uri=http://start.ubuntu.com/connectivity-check.html
 response=Lorem ipsum

 See the manual page for NetworkManager.conf(5) for the details of what
 these settings do.

 Please let me know if you have questions or think there are good
 reasons not to enable this feature. If there is no response by the end
 of the week, I'd like to proceed with a enabling this in Quantal and
 making sure it gets well tested.

 I think that a significant fraction of Ubuntu's user base is (reasonably) 
 very 
 sensitive about privacy issues.  While this is no worse the the NTP check 
 that 
 already exists (that is controversial), I don't think it  should be 
 enabled by 
 default.

 I think that for those who are concerned, this is trivial to disable.
 But, I think what happens for those who are, is that Ubuntu does the
 right thing by default.  If you're at a hotel or other location that
 captures for a login page, you won't get your mail and apt and ... all
 downloading bogus stuff.

 --Ted

 There are other ways to detect such cases without having the machine
 connect to an external service.

 Someone suggested on IRC to implement a doesnt-exist.ubuntu.com which is
 essentially a record that Canonical would guarantee never to exist in
 the ubuntu.com. zone.

 If you can resolve or even access that host, then you are behind some
 kind of captive portal/proxy.

 
 That only works if the portal/proxy spoofs DNS. Some don't do that.
 
 Seriously, there's a whole slew of software on the desktop that connects
 to the Internet regularly, I don't see how this is any different. It's
 easy to change for paranoid people, and enabling it would make Ubuntu so
 much better for a majority of users.
 
 Marc.

Just to clarify, I'm not at all against that change, being one of the
ones who asked Mathieu to put it on his todo list after looking at 2-3
implementations of that check in ubiquity alone that I'd love to get
rid of.

I'm not sure I like the idea of having NM poke that same address every 5
minutes, as it sounds like a pretty easy way for anyone to accurately
count the number of Ubuntu machines currently running on any given
network.

Sadly that's not how it was implemented in Network Manager, but I'd have
preferred to have this check exposed over DBUS so that applications
like ubiquity can use that call to query connectivity on demand.
This would also have allowed extending the check to work with other
protocols, letting the client application query a specific host and
protocol if it wants to (with the default being whatever is defined in
NetworkManager.conf).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Enabling Connectivity Checking in NetworkManager

2012-07-10 Thread Stéphane Graber
On 07/10/2012 03:39 PM, Marc Deslauriers wrote:
 On Tue, 2012-07-10 at 15:29 -0400, Stéphane Graber wrote:
 On 07/10/2012 03:20 PM, Marc Deslauriers wrote:
 On Tue, 2012-07-10 at 15:11 -0400, Stéphane Graber wrote:
 On 07/10/2012 03:06 PM, Ted Gould wrote:
 On Tue, 2012-07-10 at 14:48 -0400, Scott Kitterman wrote:
 On Tuesday, July 10, 2012 02:41:35 PM Mathieu Trudel-Lapierre wrote:
 As for the actual change, it is limited to the
 /etc/NetworkManager/NetworkManager.conf file; to which the following
 will be added:

 [connectivity]
 uri=http://start.ubuntu.com/connectivity-check.html
 response=Lorem ipsum

 See the manual page for NetworkManager.conf(5) for the details of what
 these settings do.

 Please let me know if you have questions or think there are good
 reasons not to enable this feature. If there is no response by the end
 of the week, I'd like to proceed with a enabling this in Quantal and
 making sure it gets well tested.

 I think that a significant fraction of Ubuntu's user base is 
 (reasonably) very 
 sensitive about privacy issues.  While this is no worse the the NTP 
 check that 
 already exists (that is controversial), I don't think it  should be 
 enabled by 
 default.

 I think that for those who are concerned, this is trivial to disable.
 But, I think what happens for those who are, is that Ubuntu does the
 right thing by default.  If you're at a hotel or other location that
 captures for a login page, you won't get your mail and apt and ... all
 downloading bogus stuff.

   --Ted

 There are other ways to detect such cases without having the machine
 connect to an external service.

 Someone suggested on IRC to implement a doesnt-exist.ubuntu.com which is
 essentially a record that Canonical would guarantee never to exist in
 the ubuntu.com. zone.

 If you can resolve or even access that host, then you are behind some
 kind of captive portal/proxy.


 That only works if the portal/proxy spoofs DNS. Some don't do that.

 Seriously, there's a whole slew of software on the desktop that connects
 to the Internet regularly, I don't see how this is any different. It's
 easy to change for paranoid people, and enabling it would make Ubuntu so
 much better for a majority of users.

 Marc.

 Just to clarify, I'm not at all against that change, being one of the
 ones who asked Mathieu to put that on this todo after looking at 2-3
 implementation of that check in ubiquity alone that I'd love to get rid off.

 I'm not sure I like the idea of having NM poke that same address every 5
 minutes as it sounds like a pretty easy way for anyone to accurately
 count the number of Ubuntu machines currently running in any given network.
 
 Meh, there are countless other things that can be used for that
 currently...apt requests, ntp, browser user-agent strings, etc.

None that is guaranteed to happen at a fixed interval.
NTP happens at boot and whenever an interface is brought online, so you
can't really tell how many machines that represents.

With the connectivity check running exactly every 5 minutes, you can
take a one-hour sample of the HTTP traffic on a network, divide the
number of checks by 12, and have a pretty accurate estimate of the
number of machines on it.

Given a longer log, you could probably get an even more accurate count
by looking at the exact time difference between checks to detect new
machines being turned on or machines disappearing.
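
As a back-of-the-envelope sketch of that estimate (illustrative only):

def estimate_machines(check_times, window_minutes=60, interval_minutes=5):
    # check_times: timestamps of connectivity-check HTTP requests seen
    # on the wire during the sampling window. Each machine contributes
    # one request per interval, i.e. 12 per hour.
    return len(check_times) / (window_minutes / interval_minutes)

# 120 requests observed over an hour => roughly 10 Ubuntu machines
# print(estimate_machines(range(120)))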

 Sadly it's not how it was implemented in Network Manager, but I think
 I'd have preferred to have this check be exposed over DBUS so that
 applications like ubiquity can use that call to query the connectivity
 on demand.
 
 I'm confused...Network Manager already exposes connectivity information
 over dbus, and that's what apps are supposed to use...

What I'm saying is that I'd rather have a function exported over DBUS
than a status/event, so that when something needs to know whether it
has connectivity, it triggers that test and possibly passes some more
information so that Network Manager can test it properly.

Querying the page in the background and poking the application back is
the difficult part of that process, not having a test service up and
running. So I could see quite a few software developers wanting to use
the capability in Network Manager but with their own test service and
possibly with a different protocol.

 This would also have allowed to extend the check to work with other
 protocols, letting the client application query for a specific host and
 protocol if it wants to (with the default being whatever is defined in
 NetworkManager.conf).
 
 Well, the idea is apps ask Network Manager, so it can be configured in a
 central location, and not have every app try and override the default...

Sure, in most cases they won't have to and so shouldn't mess with the
default, though I still think being able to override the default is
valuable, as it'd let some developers have a way of preventing expensive
API calls when something is wrong on their side.

Minutes from the 12.04.1 team meeting (Thursday 28th of June)

2012-07-04 Thread Stéphane Graber
The third 12.04.1 team meeting took place at 14:00 UTC on Thursday the
28th of June 2012.

== Attendees ==
 - arges
 - cjwatson
 - cyphermox
 - jibel
 - ScottK
 - seb128
 - skaet
 - smoser
 - stgraber (chair)
 - stokachu
 - xnox

== Notes ==
We currently have 106 bugs targeted to 12.04.1.
Of these, 27 are currently marked as fix committed, leaving 79 needing
fixing. Of these 79, 40 aren't currently assigned to anyone.

The 12.04.1 team would appreciate it if all the teams could review the
list at:
https://bugs.launchpad.net/ubuntu/precise/+bugs?field.milestone%3Alist=49926
and assign the bugs to their team members, or get in touch if the fixes
won't make it to the point release.

It'd be appreciated if any multi-arch change targeted to 12.04.1
could be uploaded ASAP, as these will need quite a bit of verification;
now that we are a month away from the 12.04.1 freeze, please make sure
these get reviewed and uploaded.

Some members expressed concerns regarding the processing delays for new
SRUs. Since then, Kate has let me know that the SRU team has implemented
shifts to help address that issue.
Details can be found here: https://wiki.ubuntu.com/StableReleaseUpdates

The daily images for precise are now built using -proposed, making it
easier for people to verify fixes going through the SRU process.
These are available at http://cdimage.ubuntu.com/precise/
(or http://cdimage.ubuntu.com/product/precise/ for !ubuntu)

== Team status updates ==
 - Desktop: Desktop SRUs are going well; compiz, libunity and bamf are
now in -proposed. A Unity SRU is expected within the next 2 weeks, and
then potentially another one at the end of July.
 - Foundations: doing some more SRU verification, expecting a few
network related fixes to go as SRU, mdadm and e2fsprogs have been uploaded.
 - Kubuntu: KDE for 12.04.1 is currently in -proposed and going through
testing.
 - L3: Working on multi-arch SRUs and some scripts to help track 12.04.1.
 - QA: Automated testing reported one broken OEM installation, caused by
a livefs/pool mismatch, fixed by Colin last week. Some post-upgrade
tests started reporting obsolete config files; this might be related to
changes in the auto-upgrade-tester, and jibel is investigating.
 - Release: Going through bugs and adding some more to the 12.04.1 list,
based on fixes hitting quantal. Started discussing the schedule for 12.04.2.
 - Server: Quite a lot of bugs to fix, Scott will spend some time making
sure the team is making progress on these.


== Actions ==
 - arges: to work on a 12.04.1 bug report, showing targeted bugs and
information on status in development release, patches attached and
branches linked
 - xnox: to liaise with balloons, gema and jibel w.r.t. fs/storage
testing (blocked on UTAH)



Full meeting log is available here:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-06-28-14.00.log.html

The next meeting will be on Thursday the 12th of July at 14:00 UTC
(#ubuntu-meeting on freenode).

Useful resources and the meeting agenda are available at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Releasing Alphas and Betas without freezing

2012-06-21 Thread Stéphane Graber
On 06/21/2012 03:34 PM, Robbie Williamson wrote:
 On 06/21/2012 02:00 PM, Stéphane Graber wrote:
 On 06/21/2012 02:34 PM, Robbie Williamson wrote:
 So we've clearly heard the opinion of Kubuntu...are there any other
 derivatives who wish to contribute to this discussion.  I for one, would
 be interested in knowing/hearing how these suggested changes impact them.

 -Robbie

 Starting with just a bit of nitpicking but it's something we agreed at
 UDS that we'd try to get fixed in everyone's mind :)
 We're flavours not derivatives, flavours are fully integrated in
 the Ubuntu project and have been recognized as such by the various
 governance boards. I usually consider derivatives as referring to our
 downstreams like mint where they're indeed a derived product.
 Fair enough, but then this wiki page probably needs changing to reflect
 this:
 https://wiki.ubuntu.com/DerivativeTeam/Derivatives

I also thought that page was wrong initially, but it's actually kind of
right.

You'll notice that the flavours are listed separately under a flavours
heading on that page, and what's listed afterwards are proper
derivatives, as in out-of-archive custom spins based on Ubuntu.

I guess we could argue that the flavours probably shouldn't be listed on
that page at all to make it clearer.

Kate also reminded me on IRC that she has an action item to at least
remove mentions of derivatives across all the official Ubuntu websites.
The wiki will always be trickier, as people (such as that DerivativeTeam
I'd never heard of) can write whatever they want...

 

 Now, speaking for Edubuntu, we don't feel like we could increase our
 testing frequency as we'll be increasing the number of platforms that
 we'll be supporting this cycle, don't have a lot of testers and
 generally don't feel the need for it.

 In the past we were only supporting a desktop installation on i386 and
 amd64. This cycle we're extending the desktop support to i386, amd64 and
 armhf.
 On top of that, we'll be introducing Edubuntu Server this cycle, that'll
 still be installed from our single media but will add a good dozen of
 extra services to test.


 The upstreams Edubuntu is working with are perfectly aware of our
 milestones and freeze periods and make sure their releases land on time
 so we have to ask for very little freeze exceptions or last minute
 upload (I don't think we asked for much more than 2 FFe last cycle).

 Changing the way we work after we agreed on the release schedule for
 this release would confuse our contributors and upstreams with no clear
 benefit for us.


 There are plenty of really good changes to the archive that are planned
 for this cycle as part of the archive reorg and increasing the use of
 -proposed, still with my Edubuntu release team hat on I don't think
 piling up changes is a good idea.
 I'd rather we do what we agreed on at UDS, try to encourage additional
 daily testing (because that never hurts, doesn't cost any development
 time and is beneficial) and discuss the next steps at the next UDS when
 we have concrete feedback on how these changes went.
 
 Thanks for the feedback Stephane. I think you've make some valid and
 reasonable points that we should consider.
 
 


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Drop Alphas and Betas (was Re: Releasing Alphas and Betas without freezing)

2012-06-18 Thread Stéphane Graber
On 06/18/2012 03:46 PM, Scott Kitterman wrote:
 On Monday, June 18, 2012 03:42:49 PM Rodney Dawes wrote:
 On Mon, 18/06/2012 at 15.30 -0400, Scott Kitterman wrote:
 On Monday, June 18, 2012 01:30:34 PM Chris Wilson wrote:
 ...

 I also think adding the Release Candidate (RC) designation towards the
 end of the cycle would encourage more people who are quite wary of
 installing pre-release software on their computer to get involved with
 last
 minute testing since RC indicates that it's pretty much done and all
 that's
 left is to iron out some minor glitches.

 ...
 We used to call what's now Beta 1 a Release Candidate for similar
 reasons, but renamed it because it's not really a release candidate. 
 Generally Release Candidate means Thing that will get released if no
 new significant issues turn up.  Our current usage matches that and we
 should stick with it. I don't like the idea of turning Release Candidate
 into a marketing term in order to encourage more testing.

 And I think the idea here is that every single daily image fits into
 that category of what we're calling a Release Candidate. If we
 maintain high quality throughout the cycle, then at any point after
 the higher level freezes (feature, UI, string, etc…) we could
 theoretically point to any image and say this is the release
 candidate. If we can't do that, then we should at least be able to
 isolate where and why we can't (particular packages not meeting the
 quality standards, introducing problems late in cycle, etc…), and
 work on preventing that from happening in the future.
 
 Up until the last translation import is finished, we know everything is not a 
 release candidate.  After that, I agree.
 
 Scott K

We typically need the last batch of outside-of-langpack translation
updates, a new batch of langpacks, a new base-files (for lsb_release), a
new ubiquity (dropping any remaining alpha/beta warning) and a matching
debian-cd change (to actually mark the images as final).

Currently we consider that any of the above (including the debian-cd
flag change) requires re-testing of the images.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Releasing Alphas and Betas without freezing

2012-06-15 Thread Stéphane Graber
On 06/15/2012 10:12 AM, Rick Spencer wrote:
 Hello all,
 
 At UDS I had some hallway discussions about why we freeze for Alphas
 and Betas, and the fact that I think it is time to drop this practice
 and rather focus on making Ubuntu good quality each day. Sadly, there
 was no session on this, thus this email to this list for discussion.
 
 I think it is time drop our Freeze practices for the alphas and
 betas. Here is my reasoning:
 
 1. We are developing tools that allow us to efficiently use -proposed
 in a way that will ensure we will not have partially built or
 incompatible components in the release pocket ... ever. Including days
 we release Alphas and Betas:
 
 These blueprints tools to ensure that Ubuntu is not uninstallable or
 have other problems due to partially built components and such:
 https://blueprints.launchpad.net/ubuntu/+spec/foundations-p-upload-intermediary
 https://blueprints.launchpad.net/ubuntu/+spec/other-q-freeze-use-of-proposed
 
 I have been assured that the tools necessary to automate the work of
 moving components correctly from -proposed to the release will be
 ready before Alpha 2.
 
 2. We are investing heavily in the daily quality of Ubuntu. For example ...
 We run the same automated tests on an alpha as we run on a daily:
 https://jenkins.qa.ubuntu.com/view/Quantal/view/ISO%20Testing%20Dashboard/
 
 We tend to archive issues each day:
 http://people.canonical.com/~ubuntu-archive/testing/quantal_probs.html
 
 We ran all the manual ISO tests *before* we released Alpha 1, and we
 have the capability of doing this at will:
 http://iso.qa.ubuntu.com/qatracker/milestones/221/builds
 
 In short, freezing the archive before an alpha or beta should not
 actually be contributing to either ensuring the installability of
 Ubuntu images or ensuring the quality of these images. This implies,
 therefore, that all the work around freezing, and all the productivity
 lost during a freeze, actually subtracts from the quality of Ubuntu by
 reducing our overall velocity for both features and bug fixes, since
 every day the image is good quality, and Alpha or Beta should be just
 that day's image tagged appropriately.
 
 AIUI, A1 was delivered in such a manner, though without the tooling to
 ensure that moving from -proposed to the release pocket was efficient
 and automated.
 
 Cheers, Rick

Hi Rick,

We certainly want to allow people to upload stuff to -proposed during a
milestone week, but I don't agree that we should automatically copy from
-proposed to the release pocket during a milestone week.

We usually try to release all our images with the same versions of the
packages; considering it takes us hours to rebuild everything, having
seeded packages land during that time would lead to images having
different versions of packages.

As for what happened with Alpha 1, we simply asked everyone to upload
their packages to -proposed and then cherry-picked the packages we
actually needed for the release from -proposed and copied them into the
release pocket before rebuilding the images (we did that 3 times).


As I understand it, the plan going forward is to have the release pocket
be an alias of -proposed on upload, so that everything always lands into
-proposed.
After something lands in -proposed, is properly built and passes
whatever other criteria we'll have, the package will be automatically
copied to the release pocket.

That last part (copy to the release pocket) would be what we'd block
during a milestone week for any package that's seeded. These would be
copied on a case by case basis by the release team and the images
rebuilt afterwards.

That'd essentially allow any non-seeded package to still flow to the
release pocket and be available for everyone.
All the others will be available to people running with -proposed
enabled, when we manually copy them to the release pocket, or right
after we release the milestone and copy everything left in -proposed to
the release pocket.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Ubuntu 12.04.1 team announcement and initial meeting notes

2012-05-31 Thread Stéphane Graber
Hello,

In preparation for the Ubuntu 12.04.1 point release, to be released on
the 23rd of August, we've created a virtual team made up of members from
the various engineering teams that's dedicated to tracking down the bugs
and crashes that we want to see fixed for 12.04.1.

I wrote a wiki page describing who's currently involved in that team and
what we're currently looking at:
https://wiki.ubuntu.com/PrecisePangolin/12.04.1

The team is meeting every two weeks on the Thursday at 14:00 UTC in
#ubuntu-meeting
Outside of these scheduled meetings, most of the communication will be
done on the ubuntu-release mailing-list.

---

The first meeting was earlier today and was mostly about seeing who's
involved, discussing some bugs and figuring out the structure for our
next meetings.

Attendees:
 - jamespage
 - NCommander
 - ScottK
 - skaet
 - smoser
 - stgraber (chair)
 - stokachu
 - xnox

Meeting notes can be found at:
http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-05-31-14.01.html

The next meeting will be on the 14th of June at 14:00 UTC in
#ubuntu-meeting on freenode.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Pilot Report 2012-05-17

2012-05-17 Thread Stéphane Graber
On 05/17/2012 08:12 PM, Bryce Harrington wrote:
 80 items at start, 70 (actually, 63) by end.
 
 221112 keyboard-config: [SRU] Fix problem with space bar for fr(oss) keymap
   + Uploaded to quantal in 2.5-1ubuntu2
   + Filed SRU
 
 825624 xkeyboard-config: [SRU] Add dead_hook and dead_horn to latin keymap
   + Uploaded to quantal in 2.5-1ubuntu2
   + Filed SRU
 
 1000355 drbd8 - lucid sru
   + Already sponsored by timg-tpi; unsub sponsors
 
 829819 balazarbrothers - spelling fix
   + Already accepted by Debian.  Too minor to diverge; disapproved branch
 
 1000541 ia32-libs - Drop fluendo depends
   + Fix was uploaded for precise but not quantal
   + sponsored quantal upload 
 
 994752 lxc
   + Already sponsored; unsub sponsors
 
 quantal/emacs23/merge-23.4
   + Reviewed two branches from Laney, uploaded the second
 
 1000557, 1000558, 1000560, 1000561 - texlive-* sync requests
   + sync'd
 
 Branches needing set to Work In Progress:
  (I'm unable to set this for some reason; perhaps someone else could?)
  
 https://code.launchpad.net/~logatron/ubuntu/precise/xchat/fix-for-584207/+merge/102993
  
 https://code.launchpad.net/~kroq-gar78/ubuntu/precise/ubuntu-dev-tools/fix-988009/+merge/103595
  
 https://code.launchpad.net/~paolorotolo/rhythmbox/fix-for-991107/+merge/104368
  
 https://code.launchpad.net/~mitya57/app-install-data-ubuntu/unity-mail-fix/+merge/105514
  
 https://code.launchpad.net/~dmitrij.ledkov/ubuntu/quantal/bogl/merge/+merge/104578
  
 https://code.launchpad.net/~kroq-gar78/ubuntu/precise/dkms/fix-989998/+merge/103962
  
 https://code.launchpad.net/~bkerensa/ubuntu/precise/landscape-client/fix-for-962974/+merge/101839

All done with the exception of:
https://code.launchpad.net/~paolorotolo/rhythmbox/fix-for-991107/+merge/104368

This one isn't proposed for merging into a UDD branch, so I have no way
to mark it Work in progress. Whoever owns lp:rhythmbox should do that
instead.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Last-minute uploads

2012-04-17 Thread Stéphane Graber

On 04/17/2012 05:12 PM, Serge Hallyn wrote:
 Quoting Daniel Holbach (daniel.holb...@ubuntu.com):
 Hello Serge,
 
 On 17.04.2012 16:38, Serge Hallyn wrote:
 Quoting Daniel Holbach (daniel.holb...@ubuntu.com):
 lxc lp:~utlemming/lxc/cloud-image-extraction_lp979996
 
 This bug is fixed in the last upload from yesterday.  (The
 tree itself was rejected for technical reasons, but his patch
 is in)
 
 Unfortunately it's not enough to 'disapprove' it - which is just
 a comment status. Somebody needs to set the merge proposal to
 'rejected'.
 
 I'm afraid I don't seem to have the rights to do so.
 
 -serge

Done


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Confusion RE: UI Freeze Rules

2012-03-01 Thread Stéphane Graber

On 03/01/2012 11:25 AM, Scott Kitterman wrote:
 On Thursday, March 01, 2012 11:03:08 AM Rodney Dawes wrote:
 Hi All,
 
 On the wiki page[1], it is stated that the UI and user-visible 
 strings are frozen for applications which are installed by
 default. My own understanding was that this was for all apps in
 the archive, as was the general understanding of others when I
 asked in IRC.
 
 However, as we (Ubuntu One) are currently working on various
 changes in our applications, some of which are not part of the
 default install, this point of confusion has been brought up, and
 I think we need to clarify what it means exactly. We have 2
 options:
 
 1) The Wiki is correct, and we need to communicate to everyone
 that this is the case, and that exceptions are not required for
 applications which are not part of the default install (until
 later in the cycle).
 
 2) The Wiki is wrong, and we need to fix it, and continue on with
 the presumption that many of us have been working with.
 
 What do the rest of you think?
 
 It was my understanding that the primary purpose of U/I and string
 freeze was to allow documentation developers and translators time
 to get their work done.
 
 It is also to enforce a coordination requirement between the person
 proposing the change and the documentation team so that if the
 proposed change affects documentation or (particularly) screen
 shots they can be updated.
 
 None of translations applies to Universe.

My understanding is that with the introduction of:
X-Ubuntu-Use-Langpack: yes

which you can set on a universe source package, the assumption that
universe packages won't be in a langpack is no longer true.
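
For illustration, my understanding is that the flag goes in the source
stanza of debian/control, along these lines (the package name is
hypothetical):

# debian/control (source stanza), illustrative package name
Source: some-universe-app
X-Ubuntu-Use-Langpack: yes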

 There are probably very few, not installed by default applications
 covered in documentation (although I expect Ubuntu One related
 packages would likely be documented).  This aspect of the freeze is
 important and shold apply to all packages even if it's a simple
 Hey, I need to change X, Oh no screen shots of X in the
 documentation so it's fine exchange.
 
 So I'd always thought it included all packages although exception
 approval for some would end up being very light weight.

This was also what I thought and what I think we should do; it's easy
enough to ack a UIFe if there's clearly no documentation affected,
and much easier than having to either revert the change or update a set
of screenshots.


 Scott K
 


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Call for testing ISC DHCP 4.1-ESV-R4

2012-02-20 Thread Stéphane Graber

Hello,

We recently (post-Feature Freeze) discovered that Ubuntu's ISC-DHCP
package isn't exactly up to date.

Ubuntu currently ships ISC-DHCP 4.1.1 (released in June 2010) but
upstream's latest release of the 4.1 branch is 4.1-ESV-R4 (released in
December 2011).
4.1-ESV-R4 is an Extended Support Version of ISC-DHCP, which will be
supported until at least December 2014 and so seems ideal for an LTS.

The delta between 4.1.1 and 4.1-ESV-R4 is fairly big, though apparently
entirely made of bugfixes; if you are interested, details can be found
here: https://deepthought.isc.org/article/AA-00566


I have successfully updated our packaging to work with the new
upstream, which really was just dropping and refreshing some patches.

So I'm now ready for some people to help test it; use cases include:
 - DHCPv4 client
 - DHCPv6 client
 - DHCPv4 relay
 - DHCPv6 relay
 - DHCPv4 server
 - DHCPv6 server

I already did some initial sanity testing of the packages as a DHCPv4
relay, as a DHCPv4 client using Network Manager and as a DHCPv4 client
using ifupdown.
I'll now be running a bunch of tests for the IPv6 side of things, but
the more people testing these, the better.

Unless we discover something critical in there, I'm planning on
uploading these on Wednesday around 20:00 UTC.
Please report any issue as a comment on the bug report below; that
should make tracking and fixing these easier for me.

Thanks for your help!


Launchpad bug:
https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/937169

Test packages:
https://launchpad.net/~stgraber/+archive/experimental/+packages

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: Using biosdevname by default?

2012-02-07 Thread Stéphane Graber

On 02/07/2012 06:08 AM, Colin Watson wrote:

On Mon, Feb 06, 2012 at 04:48:32PM -0500, James M. Leddy wrote:

On 02/06/2012 01:06 PM, Mario Limonciello wrote:

I'm unsure what timeline John's team will be able to commit to for
finding and fixing these things. Feature freeze sounds a bit tight to
me. I've offered, however, to help review and sponsor his team's fixes
as they find them.


I suspect that the hard part of this is installer changes, especially
since the scripts and applications that break as a result of using
biosdevname are probably not too distro-specific.


No, the necessary installer changes are small and easily handled.  It's
the knock-on effects on the rest of the distribution that will take
time.


Correct, and the bits that'd break are very distro-specific: think 
upstart jobs, udev rules, and scripts included in packages like 
ifenslave-2.6, bridge-utils, vlan, and ifupdown.


All of these either come from Debian or are completely Ubuntu-specific; 
as far as I know, none of them come directly from upstream, so it's 
unlikely that anything fixed for Fedora can be applied as-is to 
Ubuntu.
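
As a concrete sketch of the kind of breakage involved (a hypothetical
but typical configuration), an ifupdown bridge stanza hard-codes the
kernel interface name:

  # /etc/network/interfaces fragment written against kernel naming
  auto br0
  iface br0 inet dhcp
      bridge_ports eth0

  # With biosdevname, the onboard NIC shows up as em1 instead, so the
  # bridge comes up with no ports until the stanza is changed to:
  #     bridge_ports em1

The same pattern applies to options like vlan-raw-device (vlan) and
bond-slaves (ifenslave), which is why each of these packages needs
checking individually.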




--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: tcp_mtu_probing on by default?

2012-02-07 Thread Stéphane Graber

On 02/07/2012 09:40 AM, Mathieu Trudel-Lapierre wrote:

On Mon, Feb 6, 2012 at 6:15 PM, Martin Pool <m...@canonical.com> wrote:

I have helped a few people recently who were having path MTU discovery
problems, causing bulk TCP transfers to hang quasi-intermittently.
Once you know the likely cause it's fairly easy but it's a fairly
annoying problem for someone who doesn't recognize it.

There is a kernel sysctl, "sudo sysctl -w net.ipv4.tcp_mtu_probing=1",
that seems fairly effective at detecting when the problem is occurring
and automatically fixing it.  This implements RFC 4821.  It is off by
default in the kernel.  I haven't seen any reports of problems caused
by turning it on, but there may be some.

I wonder if Ubuntu should turn it on in /etc/sysctl.d?


Admittedly I haven't really looked much into this and whether it's
likely to cause issues in some environments, but setting it to 1
indeed seems relatively safe.

   0 - Disabled
** 1 - Disabled by default, enabled when an ICMP black hole detected
   2 - Always enabled, use initial MSS of tcp_base_mss.

This should help those network paths for which fragmentation is
required. On the other hand, enabling this will cause more
retransmissions of segments in this case, which would mean an increase
in traffic. I don't think it's likely to be huge, but just something
to keep in mind.

The question would be: how many people would benefit from this change?
I'd be tempted to say it probably doesn't affect all that many people
in general. If you've found a lot of people who had this issue, maybe
it's worth also trying to figure out if they have the same ISP, if
they try to connect to the same place, etc. in case it's an issue
outside their network.

Mathieu Trudel-Lapierre <mathieu...@gmail.com>
Freenode: cyphermox, Jabber: mathieu...@gmail.com
4096R/EE018C93 1967 8F7D 03A1 8F38 732E  FF82 C126 33E1 EE01 8C93


I'm surprised this is actually still a problem for IPv4; MTU 
discovery/probing is a very important part of IPv6 (where it's enabled 
by default). Anyone actually requiring this with IPv4 has something 
seriously broken in their own or their ISP's network, typically a sign 
that ICMP is blocked somewhere.


However, looking at the above description from Matt, setting the value 
to 1 doesn't seem too risky. Though if we choose to do it, I'd suggest 
we do it as early as possible and carefully look for bugs.
I'm guessing these bugs will be just as difficult to find and debug 
as those that triggered this discussion.
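
If we did flip the default, a minimal sketch of the change would be a
drop-in like the following (the file name is just an example):

  # /etc/sysctl.d/30-tcp-mtu-probing.conf
  net.ipv4.tcp_mtu_probing = 1

  # Apply without a reboot, then verify:
  sudo sysctl -p /etc/sysctl.d/30-tcp-mtu-probing.conf
  sysctl net.ipv4.tcp_mtu_probing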



--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


