Re: Summary of the current state of the tag2upload discussion

2024-06-25 Thread Simon McVittie
On Tue, 25 Jun 2024 at 10:02:03 +0200, Thomas Goirand wrote:
> I do not think it is reasonable that a particular (git?) workflow, specific
> to the way *YOU* prefer working, gains special upload rights.

This seems to be based on a misconception. tag2upload specifically
doesn't require one specific git workflow! It supports several common git
workflows, including "manage the patches in debian/patches using quilt".

> I've read
> about the deb-rebase workflow, I would hate it, and prefer managing patches
> with quilt directly.

For what it's worth, I also prefer a "patches unapplied" workflow
equivalent to the one used in pkg-perl, pkg-gnome and similar teams.

I use gbp-pq rather than quilt, but the principle is the same, and they
interoperate with each other - I can use gbp-pq for the same source package
and git repo where my GNOME team colleagues use quilt, and it isn't a
problem.

> Does this mean your tag2upload doesn't work for me?

No, tag2upload does not require you to use git-debrebase. git-debrebase
is one of several workflows that it supports. Using quilt or gbp-pq
is another.

I believe tag2upload supports all of the source tree layouts that dgit
does, and I regularly upload gbp-pq-based packages with `dgit push-source`,
which works fine.

smcv



Re: [RFC] General Resolution to deploy tag2upload

2024-06-19 Thread Simon McVittie
On Wed, 19 Jun 2024 at 07:54:45 +0200, Ansgar wrote:
> Just include a hash
> similar to [1] in the signed tag data

Prior art: this is conceptually the same as git-evtag from
src:git-evtag. You can see real-world use of git-evtag in the upstream
tags (e.g. v0.9.0) of src:bubblewrap.

> it might need minor changes if
> one cares about file permissions[2].

If this is something that will be used as a security mechanism, then
I think it probably needs to represent symbolic links as well. I think
git-evtag does (it checksums all git "blobs" and I believe that includes
symlinks), but it seems sumdb/dirhash behaves as though symlinks didn't
exist.

git specifically *doesn't* care about file permissions, beyond a 1-bit
representation of whether it's executable or not, so anything like
tag2upload that is based on git-as-source will have to cope with mtimes
and detailed permissions possibly differing between what was obtained
from git and what's in the .dsc. When people have talked about code being
"treesame" elsewhere in this thread, I believe they mean "all facts that
git tracks in its tree are the same, facts that git does not track might
not be".
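As a minimal sketch of which facts git does and doesn't track (assuming git
is installed; all file names here are invented for illustration): git records
only a 1-bit executable flag plus symlinks as their own object type, while
detailed permissions and mtimes are simply not part of the tree.

```shell
set -e
cd "$(mktemp -d)"
git init -q demo
cd demo
echo hi > plain.txt
echo hi > tool.sh
chmod 0754 tool.sh            # deliberately unusual permissions
ln -s plain.txt link.txt
git add .
git -c user.name=t -c user.email=t@example.com commit -qm init
# tool.sh shows up as mode 100755 (the 0754 detail is lost), plain.txt
# as 100644, and link.txt as a mode-120000 blob whose content is the
# link target:
git ls-tree HEAD
```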

smcv



Re: How is the original tarball obtained in tag2upload

2024-06-14 Thread Simon McVittie
On Fri, 14 Jun 2024 at 12:26:50 +0100, Ian Jackson wrote:
> Note however: right now you can't do a source-only upload of a new
> upstream version, and tag2upload only supports source-only uploads.
> So this situation generally won't arise: someone will have had to
> upload the orig.tar.gz the old way.

I think I must be misunderstanding you, because as far as I can see,
I do source-only uploads of new upstream versions rather frequently
(for example flatpak_1.14.8-1 was a source-only upload, for which
I think I used dgit). To achieve that, I need to arrange to have a
suitable orig.tar.* where dpkg-buildpackage or dgit are going to find it
(currently achieved with uscan and/or pristine-tar), which I recognise
is non-trivial to achieve for tag2upload.

You can't currently do a source-only upload of a completely new *package*,
or of a new upstream release that bumps SONAME or otherwise adds new
binary packages, because the ftp team requires a binary upload[1] for
anything that will go into the NEW queue. Is that what you're thinking of?

But most new upstream releases (at least for relatively mature packages)
don't break backwards compatibility and don't need to go through NEW.

smcv

[1] technically the upload needs at least one binary package, but
presumably the ftp team usually prefer uploaders to follow the spirit
of the rules as well as the letter, and the spirit of the rule is that
they want to see a comprehensive set of binary packages for at least
one architecture so that Lintian can flag any obvious mis-packaging



Re: [RFC] General Resolution to deploy tag2upload

2024-06-14 Thread Simon McVittie
On Fri, 14 Jun 2024 at 08:52:38 +0200, Gerardo Ballabio wrote:
> As I understand, the proper way to resolve disagreement over technical
> issues is to bring the matter to the Technical Committee. Why are you
> proposing a GR instead?

The TC can overrule individual developers (§6.1.4 in the Debian
constitution), but it can't overrule a position delegated by the DPL,
and the ftp team is such a position. We have had situations in the past
where an issue involving the ftp team was brought to the TC, and the
most we could do about it was to "offer advice" (agree among ourselves
on a non-binding opinion, §6.1.5) and hope that the ftp team might
reconsider their decisions on the basis of that advice.

To the best of my understanding, the only mechanism the project has for
overruling a DPL delegate is a GR.

smcv
(former TC member)



Re: Archive support for *.orig.bundle.* and *.debian.bundle.*

2024-06-14 Thread Simon McVittie
On Fri, 14 Jun 2024 at 09:12:56 +0200, Simon Josefsson wrote:
> I don't think a shallow copy will work generally.  Instead you want
> to upload the entire upstream git repository as a bundle.

I believe this has been specifically ruled out by the ftp team in the past.

With a tarball or a shallow bundle, the maintainer and the ftp team are
"only" responsible for ensuring and verifying that everything in the
current state of the tree is Free according to the DFSG (which is already
a significant task, but we usually manage to achieve it). If the upstream
project contains non-Free content, the maintainer can simply delete it
and repack the tarball (or export a new shallow bundle, in git world)
and the ftp team will be happy with that.
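The "delete it and repack" step can be sketched like this (a toy example
with hypothetical file and package names): the maintainer removes the
non-free file and exports only the current tree state, not its history.

```shell
set -e
top=$(mktemp -d)
cd "$top"
git init -q upstream
cd upstream
echo 'int main(void){return 0;}' > main.c
echo 'proprietary data' > blob.bin          # stand-in for non-free content
git add .
git -c user.name=t -c user.email=t@example.com commit -qm 'upstream 1.0'
# Maintainer deletes the non-free content...
git rm -q blob.bin
git -c user.name=t -c user.email=t@example.com commit -qm 'Remove non-free blob.bin'
# ...and exports a repacked ("+dfsg") orig tarball from the current tree only;
# past commits still containing blob.bin are not shipped:
git archive --prefix=pkg-1.0+dfsg/ -o "$top/pkg_1.0+dfsg.orig.tar.gz" HEAD
tar -tzf "$top/pkg_1.0+dfsg.orig.tar.gz"
```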

With the whole git history as a bundle, and our current policies around
Freeness, the maintainer and the ftp team would be responsible for
ensuring and verifying that every past commit reachable from the bundle
is *also* Free, which is a much, much larger task - and every time some
past commit contained non-Free content, the maintainer would have to
amend that commit to remove it, and then rebase the rest of the history
from that point onward (including merges!) onto the amended commit.

I don't think shipping full history in the archive would be feasible
unless we were willing to (do a GR to) relax our policies on the handling
of non-Free content in packages' history.

Having full history would also make it harder to detect and remove
non-distributable (e.g. copyright-infringing) content, in the rarer cases
where that gets into a project.

smcv



Re: [RFC] General Resolution to deploy tag2upload [and 1 more messages]

2024-06-13 Thread Simon McVittie
On Thu, 13 Jun 2024 at 15:08:15 +0100, Ian Jackson wrote:
> I think it is possible that there will be a handful of packages where
> things are significantly more awkward, which might not be able to
> adopt tag2upload.

This would presumably be the same minority of packages where maintainers
use a debian/-only workflow (even if they normally prefer to keep upstream
source in git) and avoid dgit (even if they normally prefer to use it),
because the upstream source is too bulky to be convenient to track in git?
Such as the openarena-data family and other large game assets?

Those packages are already exceptional and already need to be handled
specially. They'd only be a problem if dgit and/or tag2upload became
mandatory, which (as far as I understand it) is not the plan.

smcv



Re: Security review of tag2upload

2024-06-12 Thread Simon McVittie
On Wed, 12 Jun 2024 at 13:03:04 -0400, Antoine Beaupré wrote:
> On 2024-06-11 18:39:04, Russ Allbery wrote:
> > - Someone in the keyring (either a Debian Developer or a Debian Maintainer
> >   for a package) uploads a malicious source package but makes it appear
> >   that the package was uploaded by someone else in the keyring.
...
> > Neither the existing upload mechanism nor tag2upload attempt to prevent or
> > detect (as opposed to trace) the upload of a malicious source package by
> > someone in full possession of a key in the keyring, so this threat is not
> > considered in this document, although tracing for this threat is
> > discussed briefly.
> 
> I'm actually curious as to why that is treated as a separate
> possibility, because it kind of overlaps with the second model ("someone
> uploads a malicious package appearing from someone else")...

Using "victim" as a shorter name for the "someone else" that the attacker
wants to blame for the upload:

For the "makes it appear that..." scenario, the upload would have the
victim's name in all of its non-cryptographic metadata (debian/changelog,
git tag GIT_COMMITTER_NAME, etc.), but it would have to be signed by
the attacker's key - because the threat model in this scenario is that
the attacker's own private key is the only one in the keyring that they
have access to, so they can't sign it with the victim's key, or with
some third developer's key.

(Or I suppose the attacker could also generate a new keypair under their
control and sign with that, but it seems fairly obvious that it would
be rejected, because tag2upload wouldn't find it in the keyring -
hopefully there is a test-case for that!)

For the "full possession of a key" scenario, the threat model is that the
attacker *would* have access to the victim's private key, and therefore a
rational attacker would validly sign the upload with that, making it
indistinguishable from a legitimate upload by the victim (except for the
fact that, afterwards, the victim would hopefully say "wait, I didn't
upload that?" and start raising the alarm).

I think that's the intended difference here?

> For me, that case and the "xz-utils" case are actually quite pressing
> matters

The bottom line is that if the attacker has the victim's private key
material, then the attacker can do anything that the victim can do,
because in our key-based security model the only way we can authenticate
the victim remotely is that they have control of their own private key(s),
and the attacker (we assume) doesn't.

smcv



Re: [RFC] General Resolution to deploy tag2upload

2024-06-12 Thread Simon McVittie
On Wed, 12 Jun 2024 at 16:04:34 -, Marco d'Itri wrote:
> >Is your position here that if your upstream releases source tarballs
> >that intentionally differ from what's in git (notably this is true
> >for Autotools `make dist`), then any Good™ maintainer must generate
> >their own .orig.tar.* from upstream git and use those in the upload,
> >disregarding upstream's source tarball entirely?
>
> It is mine, and this is what I have been doing for a long time for all
> my packages.

If there is consensus that devref is lagging behind best-practice and
actually this is fine (or preferable, or should-be-required), perhaps
someone who advocates this model could propose a replacement for devref
§6.8.8?

smcv



Re: [RFC] General Resolution to deploy tag2upload

2024-06-12 Thread Simon McVittie
On Wed, 12 Jun 2024 at 15:20:45 +0100, Ian Jackson wrote:
> tag2upload, like dgit, ensures and insists that the git tree you are
> uploading corresponds precisely [1] to the generated source package.
> 
> If you base your Debian git maintainer branch on the upstream git (as
> you should) and there is a discrepancy between the contents of the
> upstream git branch, and the .orig.tar.gz you're using, the upload
> will fail.

Is your position here that if your upstream releases source tarballs
that intentionally differ from what's in git (notably this is true
for Autotools `make dist`), then any Good™ maintainer must generate
their own .orig.tar.* from upstream git and use those in the upload,
disregarding upstream's source tarball entirely?

That approach has many advantages, but it flatly contradicts what devref
claims a Good™ maintainer would do, which is to always use the pristine
source tarball as released by upstream (unless it's non-free) - which
implies that if they're using dgit, then the upstream tree must match
an import of the tarball.

Or are you saying that we should simply not package software produced
by such upstreams? (If that, we're going to need replacements for most
of GNU...)

I'd prefer not to be in a situation where whatever a maintainer does,
some segment of the project will consider them to be failing to meet
the project's basic expectations - that seems like a recipe for burnout.

> In the xz case, if the .orig.tar.gz is upstream's, that would have
> detected the attack.

xz-utils is built using Autotools, so if the .orig.tar.gz is upstream's
`make dist`, it is *always* going to differ from what's in git[1]: for
example ./configure exists in the tarball but not in git. The xz attacker
was counting on that - the glue code to activate their malicious payload
was hidden in a diff that was already expected to be inconveniently
large for reasons that are usually considered to be benign.
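A toy reconstruction of that mismatch (all names hypothetical, not the real
xz-utils layout): the `make dist` tarball carries generated files such as
./configure that are absent from the repository, so it can never be treesame
with the upstream git tag.

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q upstream
cd upstream
echo 'AC_INIT([pkg], [1.0])' > configure.ac
git add .
git -c user.name=t -c user.email=t@example.com commit -qm 'release 1.0'
git tag v1.0
cd "$work"
mkdir from-git from-dist
git -C upstream archive v1.0 | tar -x -C from-git
git -C upstream archive v1.0 | tar -x -C from-dist
# Simulate the extra generated file that `make dist` would add:
printf '#!/bin/sh\n# generated by autoconf\n' > from-dist/configure
# The tarball and the git tag now differ, through no fault of the maintainer:
diff -r from-git from-dist || true
```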

It would be easy to say "well your upstream shouldn't behave like that"
but, realistically, many of them do, and they are not all going to change
just because we say so.

In the projects where I'm an upstream maintainer, I *am* trying to move
towards the official source release being equivalent to a `git archive`
(including replacing Autotools with Meson, replacing submodules with
subtrees, etc.), but I don't have the resources or social capital to do
that instantaneously, even in the few projects where I have influence.

smcv

[1] unless your upstream is one of the rare upstreams that is happy to
commit all the Autotools-generated files to upstream git, which
comes with its own problems



Re: source tarballs vs. source from git (was: tag2upload)

2024-06-12 Thread Simon McVittie
On Wed, 12 Jun 2024 at 08:53:40 -0400, Scott Kitterman wrote:
> On Tuesday, June 11, 2024 6:25:02 PM EDT Sean Whitton wrote:
> > - it improves the traceability and auditability of our source-only
> >   uploads, in ways that are particularly salient in the wake of xz-utils.
> 
> As I understand it, Debian was affected by the xz-utils hack, in part, 
> because 
> some artifacts were inserted into an upstream tarball that were not 
> represented in the upstream git.  Please explain how use of tag2upload is 
> relevant to this scenario?  I'm afraid I don't follow.

I think the claim here might be that Debian should stop dealing with
upstream source tarball releases, and instead have the packaging be
branched from upstream git? It isn't explicit in the proposal, and is not
*necessarily* mandatory for tag2upload, but the mentions of generating
".orig" tarballs for consumption by the ftp archive via `git-deborig`
(which is an adjusted git-archive) would seem to imply that the proponents
of tag2upload would like to go in the direction of not redistributing
upstreams' official source-code archives as 1:1 binary blobs.

As a concrete example, for bubblewrap_0.9.0 (a convenient example
of a relatively small package), that would mean that instead
of having our packaged version of bubblewrap be based on the
bubblewrap-0.9.0.tar.xz with sha256 c6347eac... which can be downloaded
from https://github.com/containers/bubblewrap/releases/tag/v0.9.0, our
packaged version of bubblewrap would be based on the tree that forms part
of the tagged commit 8e51677a... in upstream git.

If we did that for xz-utils, then the xz-utils attacker would have
had to include the glue code to activate their malicious payload in
the upstream git history, and not just the official tarball release -
which would hopefully have made it more likely that it would have been
discovered before we integrated the malicious version.

I think that's going to be a harder sell for some packages than for
others. For packages that build with Meson or CMake, the official
upstream source artifact is often just a `git archive` *anyway* (albeit
with submodules replaced by their content, e.g. by `meson dist`); for
example, in bubblewrap[1], `git diff v0.9.0..upstream/0.9.0` is empty,
where upstream/0.9.0 is a `gbp import-orig` of the upstream source
artifact and v0.9.0 is the upstream tag. So there is little difference
between taking the upstream source artifact or making our own
`git archive`. For bubblewrap 0.9.0, the one advantage of the upstream
source artifact is that the upstream release manager (which happens to
have been me) has signed it, with a stronger-than-SHA1 hash.

However, for packages like xz-utils that build with Autotools, the `make
dist` output can include a significant amount of source that is not always
straightforward to obtain any other way (for example modules vendored
from gnulib at a specified version, with no guarantee that a different
gnulib version would be compatible), together with a significant volume
of derived/non-source files that makes a meaningful review of the diff
between the git repo and the official source release difficult to achieve
(you'll see what I mean if you take a look at an older bubblewrap release
`git diff v0.8.0..upstream/0.8.0` [1]), and often, some ambiguous
not-quite-source not-quite-derived content that makes it difficult to say
with confidence what is source and what is not.

This is not unique to Autotools: in the Python packaging team we have
a similar tension between maintainers who say we should always use
upstream git as our basis for source packages, and maintainers who say
we should always use the "sdist" tarball that upstream released to PyPI
(usually not identical).

Of course, the xz-utils attacker was counting on it being difficult to
do a meaningful review of the diff between the git repo and the official
tarball release that they produced, and that diff is exactly where they
hid the glue code to activate their malicious payload.  So I think it's
valid to hope that key upstreams will move towards producing releases
that are as transparent and "nothing up my sleeve" as possible.

(However, many of the most key upstreams are overworked, presented
with incompatible demands such as modernizing their codebases but also
minimizing change and remaining compatible with obsolete platforms, and
in no position to change how they do their releases quickly; so it would be
easy to end up in the paradoxical situation where small/irrelevant/"toy"
projects have an easy audit trail, but the projects that we depend on
for our security remain difficult to audit.)

Our colleagues in other distributions often have workflows that have a
git-based code path and a tarball-based code path, usually preferring the
former: for instance Arch Linux PKGBUILDs usually start from a shallow clone
of a specified commit if possible, only falling back to tarballs if no
suitable git repo is available.
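For illustration, a hypothetical PKGBUILD fragment in that style (URLs and
names invented): Arch's VCS `source=` syntax pins an exact upstream tag or
commit, with a plain tarball URL as the fallback form.

```shell
# Preferred: fetch from git, pinned to a specific tag:
source=("pkg::git+https://example.org/pkg.git#tag=v1.0")
sha256sums=('SKIP')   # 'SKIP' is conventional for VCS sources
# Fallback, when no suitable git repo is available:
# source=("https://example.org/releases/pkg-1.0.tar.xz")
```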

devref currently demands that we use the pristine upstream source tarball.

Re: General Resolution: non-free firmware: results

2022-10-05 Thread Simon McVittie
On Wed, 05 Oct 2022 at 16:34:27 +0200, Philip Hands wrote:
> I didn't want to inflict work on the debian-cd
> team, and I assume that nobody will object if volunteers turn up to help
> build/test the free images. If they're built and tested, I'm pretty sure
> they'll be published.

As one of the people who sometimes helps Steve to test CD releases (but
not a debian-cd team member):

I suspect that one of the motivations for not wanting two sets of images
on an equal footing is that building and testing images is a time- and
resource-intensive process: by its very nature, building and testing an
installation image or a live image involves shovelling a lot of data
around, and there's a policy of requiring at least the live images to
be tested on real hardware, because there have been cases in the past
where the images worked fine on a VM but failed on real hardware. Last
time we did bullseye and buster point releases, I left Steve's house
well after midnight, and I don't think that's atypical.

It would be a much less draining process if the combinatorial explosion
of things to test was smaller: at the moment we have netinsts, CDs,
DVDs, 16G images for USB sticks, Blu-Ray images, a live image per major
desktop environment (for some value of "major"), various paths through the
installer, amd64/i386, UEFI/BIOS, non-firmware/firmware and so on. Not
producing separate firmware and non-firmware images is one way to speed
this up by making the critical path shorter. I suspect the debian-cd
team might also be seriously considering discontinuing the larger
installation images like the Blu-Ray and 16G USB stick - certainly I
would be, if I was them.

So if volunteers turn up to help build/test images without non-free
firmware, I'm sure nobody is going to object to them doing that work,
but it might come with some limitations in order to take that work off the
critical path of building and testing the primary deliverable on release
day, which will now be the version with firmware: perhaps something like
"yes, but only after the primary images are ready" or "yes, but please
build only the most useful 1-3 variants per architecture" or something
along those lines.

smcv



Re: Possible draft non-free firmware option with SC change

2022-09-12 Thread Simon McVittie
On Mon, 12 Sep 2022 at 19:20:29 +0200, Simon Josefsson wrote:
> Steve McIntyre  writes:
> > Many common laptops in the last 5-10 years don't come with wired
> > ethernet; it's becoming rarer over time. They ~all need firmware
> > loading to get onto the network with wifi. Many now need firmware for
> > working non-basic video, and audio also needs firmware on some of the
> > very latest models. The world has changed here, and I think your
> > perceptions may be out of date.
> 
> I recall that it took ~5 years until hardware (usually audio, video,
> network cards) was well supported with stable releases of free software
> distributions in the 1990's.

I don't think this is the same thing at all. In the 1990s, it took that
length of time to get Free drivers that run on the main CPU, communicating
with firmware that the devices already had on-board (in ROM or flash) - but
it was usually unnecessary for the OS/driver vendor (in our case Debian)
to supply the device's firmware, because that was already on-board in
permanent storage, updated rarely or not at all.

In the 2020s, independent of how quickly or slowly we get Free drivers
that run on the main CPU, it's often the case that those Free drivers
will need to know how to upload firmware that will run on the device
itself into the device's RAM, and the OS/driver vendor is expected to
supply that firmware, because the device simply does not have permanent
storage on-board where it could keep its firmware any more - at power-on,
all it knows how to do is accept a firmware upload, and it doesn't know
how to play audio or join a network or whatever is its real job until
that firmware arrives.

This design isn't really any less Free than what you had in the 1990s:
in both cases, if you're using a Free OS and Free drivers, you get
total control over what's running on the main-CPU side of the bus, and
no control over what's running on the peripheral device side of the bus
(other than to the extent that it can be controlled by requests sent
by the Free driver). However, the presence of non-Free firmware is a
lot more visible now, because the hardware manufacturer now expects the
OS/driver vendor to be involved in providing it to the hardware.

For devices that *do* still have on-board storage (notably those that
need to start up before the OS kernel), often there is a version of
the non-Free firmware in permanent storage, which is enough to make the
device basically work, but is likely to contain bugs or even security
vulnerabilities (because firmware is software written by fallible humans,
and therefore has bugs). In these cases, being able to upgrade it to a
newer non-Free firmware blob is obviously less desirable than having a
Free replacement, but it's better than being stuck with the version that
originally shipped on your device, and doesn't really give you any less
freedom than if you were stuck with the original version.

In some ways the new situation is better for people who want to
reverse-engineer device firmware - the proprietary firmware blob is right
there for you to look at (rather than being hidden away), and if the
device isn't checking a signature, modifying the Free driver to upload
your replacement firmware blob into the device's RAM is going to be a
lot simpler than reflashing the device using some out-of-band mechanism.

I think you mentioned elsewhere in the thread that you're using a Lenovo
X200? If that's the case, then I'm sorry to inform you that you're
relying on a non-free BIOS (unless you replaced it with Coreboot, which
most of our users are not going to be willing or able to do), non-free
embedded-controller firmware (for the keyboard and battery charging,
among other things), a CPU with non-free microcode, and probably a bunch
of miscellaneous ROMs in things like audio hardware. They might never
have gone through Debian's web servers, and you might never have upgraded
them, but they're there (and you probably *should* upgrade some of them,
particularly the BIOS and the CPU microcode, because otherwise there
are likely to be unfixed security vulnerabilities).

smcv



Re: Draft ballot voting secrecy GR

2022-03-12 Thread Simon McVittie
On Sat, 12 Mar 2022 at 18:09:20 +0100, Kurt Roeckx wrote:
> Choice 3: Reaffirm public voting
> 
> 
> ince we can either have [...]

I assume this was meant to start with "Since"?

smcv



Re: GR: Change the resolution process (corrected)

2021-11-25 Thread Simon McVittie
I've lost track of who wrote:
> > > Suggest making this "None of the above" instead of "Further discussion"
> > > to avoid two different default options for TC decisions vs project
> > > decisions.

On Thu, 25 Nov 2021 at 10:28:55 -0600, Gunnar Wolf wrote:
>   I would prefer the change to extend also to the TC votes. I think
>   it's clear that "further discussion" means we would not have an
>   outcome to present, but I feel "none of the above" to be
>   clearer.

Also a TC member but writing only on my own behalf. I agree with Gunnar
that NOTA seems fine as a default for TC decisions (except for choosing
the TC chair, which is special-cased to have no default).

When we're voting for a DPL, the default is already NOTA, which we've
always interpreted to mean the same thing as "re-open nominations"
(RON) in some other organizations' systems for electing officers:
constitutionally, we need a DPL, so NOTA winning the DPL election would
mean we need to find a different candidate who enough people can agree on
(and the way we would achieve that is by reopening nominations and hoping
someone more popular will volunteer). I think the equivalence between NOTA
and RON has always been uncontroversial. The only reason we don't have
this for the TC chair is that the TC chair has a fixed set of candidates
(the TC members) so reopening nominations would have no practical effect.

Similarly, when the TC has been asked for a decision, a win for NOTA
would mean none of our draft resolutions were accepted, so the decision
is unresolved and we would need to (loosely) "re-open nominations"
to get a better draft resolution that enough TC members can agree on.

In practice, it seems like NOTA/FD is unlikely to win a TC vote: the
only way I can think of for that to happen is if someone called the vote
prematurely, before we got close enough to consensus to be able to write
at least one option that a majority would vote above NOTA/FD.

If the TC is voting to *not* do something (for example if we have been
asked to overrule the foobar maintainer, but we have consensus that
the foobar maintainer was correct), then it seems we implement that by
voting on the resolution we have consensus for (in this case it would be
"formally decline to overrule foobar maintainer" > FD), rather than
putting up a resolution "overrule the foobar maintainer" that none of us
agree with and then voting FD > "overrule the foobar maintainer".
We could equally well do that by voting
"formally decline to overrule foobar maintainer" > NOTA.

smcv



Re: Draft proposal for resolution process changes

2021-09-28 Thread Simon McVittie
On Tue, 28 Sep 2021 at 13:56:21 +0200, Karsten Merker wrote:
>   In this case the chair surely wouldn't vote to overrule
>   themselves as that would be a completely nonsensical behaviour,

The casting vote cannot be used to select an option that is not in the
Schwartz set (loosely: it can only be used to select an option that
could have won if it had one extra vote). Suppose the TC chair wants
to paint a bike shed green, but this is unacceptable for some reason,
and we have a vote among the TC members with these options:

R: overrule the TC chair: the bike shed must be red
B: overrule the TC chair: the bike shed must be blue
Y: overrule the TC chair: the bike shed must be yellow
FD: further discussion

If the TC membership (excluding the chair in this case) has voted
R = B > FD > Y, with some members preferring R > B and an equal number
preferring B > R, so that both R and B are in the Schwartz set, then the
chair is forced to use their casting vote to overrule themselves. They
can use the casting vote to choose whether the bike shed must be red or
blue, but they cannot choose to paint it green or yellow, because those
options were not in the Schwartz set.

Also, I believe the rationale for this casting vote is the same as for
the existence of a casting vote in general: to make sure that the TC is
always able to make a decision, one way or another, and that there is
never an unresolved situation where the outcome of the vote is "there is
no decision". Even if the chair is not placed in the bizarre situation
of choosing precisely how to overrule themselves, they should still be
stating a decision.

To put that another way, if the TC is voting on options R, B, Y and
Further Discussion, we would like the outcome to be either R, B, Y
or FD. It seems bad if the outcome can, in rare cases, be a strange
indeterminate state that is distinct from FD, but is also not R, B or Y.

> - There is an exemption for the chair in the rule about having
>   to abstain from the vote and the chair makes use of the casting
>   vote.

In this case, the TC has (narrowly) made a decision: namely, not to
overrule the chair.

> - There is no special exemption for the chair in the rule about
>   having to abstain from the vote, so the tie isn't resolved and 
>   as a result the TC doesn't overrule the chair.

In this case, the TC has not made any decision - we have not decided to
overrule the chair, we have not decided to decline to overrule the chair,
and we have not even decided on Further Discussion! Once we get to the
point of holding a vote, I don't think we want this to be procedurally
possible.

smcv



Re: New option for the RMS/FSF GR: reaffirm the values of the majority

2021-04-03 Thread Simon McVittie
On Sat, 03 Apr 2021 at 21:46:08 +0200, someone claiming to be Enrico Zini wrote:
> We explicitly refuse to acknowledge irrelevant political issues

I was surprised to read this apparently coming from Enrico, particularly
since it doesn't seem consistent with what Enrico has said in other
threads regarding RMS. It turns out not to be signed with the same key.

Key fingerprint used to sign
, matching
Enrico's key in the official Debian keyring:
1793 D6AB 7566 3E6B F104  953A 634F 4BD1 E7AD 5568

Key fingerprint used to sign the message to which I'm replying:
2490 211A D036 087E 6D1D  9A92 D0FF 49CB E3F4 FB68

It seems highly likely that the message to which I'm replying was not
sent or authorized by Enrico, and that its sender is trying to mislead
Debian members by impersonating a prominent and respected developer.

I don't think passing off a message as coming from a different author is
consistent with taking any sort of moral high ground (for example based
on a position of free speech, meritocracy or opposing "cancel culture")
and I'm disappointed to see it on Debian mailing lists.

smcv




Re: Q to all candidates: NEW queue

2020-03-28 Thread Simon McVittie
On Fri, 27 Mar 2020 at 23:06:55 +, Holger Levsen wrote:
> another option would be 'unstable-proposed' (or whatever) where packages get
> uploaded to, and which only gets moved to 'unstable' if they don't fail
> piuparts, autopkgtests (plain build tests) and so forth...

Do you mean this to be for all packages, like Ubuntu does (they run the
unstable->testing migration with a 0-day delay, all packages get uploaded
into the equivalent of unstable, and users of their development branch
are actually getting the equivalent of testing), or do you mean just for
source-NEW and/or binary-NEW packages?

smcv



Re: Some thoughts about Diversity and the CoC

2019-12-13 Thread Simon McVittie
On Thu, 12 Dec 2019 at 23:54:16 +, Scott Kitterman wrote:
> I think when people personally feel excluded/diminished/pick your term
> then it's appropriate to work on how to frame things to see how to make
> them feel welcome (e.g. if someone is more comfortable being referred
> to by they, then I think it's appropriate to use it).  That's not how
> I read Simon's request.  I read as being speculative that maybe the
> wording might make someone uncomfortable, not about anything they were
> directly experiencing.

For what it's worth, I didn't intend to imply that phrasing it as "init
diversity" was offensive, exclusionary or a code-of-conduct issue, it
hadn't occurred to me that someone might interpret my message that way,
and I'm sorry that my phrasing didn't make that clear.

Chris and Ian expressed what I was trying to say better than I could.

smcv



Re: If we're Going to Have Alternate Init Systems, we need to Understand Apt Dependencies

2019-12-08 Thread Simon McVittie
On Sun, 08 Dec 2019 at 09:37:07 +, Anthony DeRobertis wrote:
> it seems like there is an alternative — have them provided by
> a different package. Probably one package providing quite a few of
> them. It'd need some way to only try to start installed daemons, but
> that sounds solvable.

Not only solvable, but already required to be solved: any LSB init script
that doesn't start with something like

test -x "$DAEMON" || exit 0

(or use init-d-script(5) which does this for you) is already (RC-?)buggy,
because LSB init scripts are conffiles, and conffiles usually get left
behind when a package is removed but not purged.
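A minimal sketch of that guard (the daemon name and path here are
hypothetical; a real script would go on to the usual start/stop handling
or delegate to init-d-script(5)):

```shell
#!/bin/sh
# Hypothetical LSB init script fragment for a daemon /usr/sbin/exampled.
DAEMON=/usr/sbin/exampled

# If the package was removed but not purged, this conffile stays behind
# in /etc/init.d/ while $DAEMON is gone; the script must notice that
# and exit 0 instead of failing.
if test -x "$DAEMON"; then
    guard=present    # fall through to normal start/stop handling
else
    guard=absent     # a real script would "exit 0" at this point
fi
echo "daemon binary: $guard"
```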

smcv



Re: Proposal: Reaffirm our commitment to support portability and multiple implementations

2019-12-03 Thread Simon McVittie
On Mon, 02 Dec 2019 at 00:28:54 +0100, Simon Richter wrote:
> Wasn't there a plan to add support for containers managed through
> systemd that have filtered access to the system dbus, or is that just a
> special case of a service unit?

As a general rule, "heavyweight" containers with their own init system
that behave like a lighter-weight alternative to VMs (notably lxd and
some uses of systemd-nspawn) have their own set of D-Bus buses managed
by their own init system and process tree, the same as if they were a VM;
while "lightweight" containers that behave more like a chroot (like Docker
and some uses of systemd-nspawn) or a restricted view of the host system
(like most bubblewrap-based containers) either share the host system's
buses, or don't have any D-Bus access at all.

Containers managed as system services by systemd-as-pid-1 are outside
any login session or user-session, so it would not be appropriate for
them to access anyone's session bus. They could access the D-Bus system
bus if desired (with or without filtering). If they access the system
bus, I would expect it to be conceptually the same system bus used by
non-contained system services like NetworkManager, but maybe with fewer
things that they are allowed to do.

Flatpak apps in containers have filtered access to the D-Bus session
and/or system bus from the host system. This is conceptually the same
as if they weren't in a container, but with a firewall-style filter
(xdg-dbus-proxy) between the client and the bus. Not everything is
allowed, but everything that *is* allowed behaves the same as if there
was no container: the same number of buses exist, and their scopes are
the same.

As far as I'm aware, Snap apps are approximately the same shape as Flatpak
in this respect: filtered access to the D-Bus session and/or system
bus from the host system (if you're running Ubuntu's kernel patchset
with AppArmor enhancements), or unfiltered access (otherwise). Again,
this doesn't change the number of buses that exist or what they mean.

smcv




Re: Proposal: Reaffirm our commitment to support portability and multiple implementations

2019-12-02 Thread Simon McVittie
On Mon, 02 Dec 2019 at 04:26:53 +0100, Simon Richter wrote:
> My expectation was that with systemd, dbus activation functionality
> would have moved into the main systemd binary for better process
> tracking and to avoid code duplication with the other activation methods.

Yes ish, but on an opt-in basis (the D-Bus service integration file has to
refer to the corresponding systemd unit file via the SystemdService key).
Many (most?) system services and some session/user services do this.

For session/user services, this is only done when dbus-user-session is
in use - otherwise, the scope of the dbus-daemon and the scope of
`systemd --user` would be different, making it inappropriate to turn
D-Bus services into systemd user services.

On a systemd system, running systemd-cgls will show you which D-Bus
services have been delegated to systemd (they have their own cgroup
alongside dbus.service) and which ones have not (they are part of
the same cgroup as dbus-daemon).
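To make the opt-in concrete, the activation file for a D-Bus system
service that delegates to systemd looks roughly like this (the service
name, executable path and unit name are hypothetical, not a real
package):

```
# Hypothetical file: /usr/share/dbus-1/system-services/org.example.Foo.service
[D-BUS Service]
Name=org.example.Foo
# Fallback: dbus-daemon runs this itself on systems without systemd
Exec=/usr/libexec/example-foo
User=root
# Opt-in: ask systemd to start this unit instead of forking Exec= directly
SystemdService=example-foo.service
```

On a systemd system, such a service then shows up in systemd-cgls in its
own cgroup rather than under dbus.service.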

smcv



Re: Proposal: Reaffirm our commitment to support portability and multiple implementations

2019-12-01 Thread Simon McVittie
On Sun, 01 Dec 2019 at 22:14:06 +0100, Laurent Bigonville wrote:
> It's bin:libpam-systemd that pulls bin:systemd-sysv (the package that makes
> systemd the init on the system), not bin:systemd. Here it's dbus-user-session
> that pulls it because it needs a logind (dunno if it works with elogind)
> session opened to have the session bus started when the user logs-in.

dbus-user-session needs the `systemd --user` per-uid service manager to be
run whenever a uid has at least one login session, which is functionality
provided by the combination of the systemd service manager (init system)
with systemd-logind. An implementation of the logind interfaces, such
as elogind, is not sufficient. If I remember correctly, libpam-systemd +
systemd-shim was not sufficient either: that configuration provided the
logind interfaces but not `systemd --user`, just like elogind.

The changelog for dbus_1.12.16-2 documents this as the reason why
this dependency was not changed to default-logind | logind when those
virtual packages were added to Policy.

smcv



Re: Proposal: Reaffirm our commitment to support portability and multiple implementations

2019-12-01 Thread Simon McVittie
On Sun, 01 Dec 2019 at 22:02:31 +0100, Simon Richter wrote:
> In that particular case, the user session must be available to allow
> activation of gsettingsd via dbus

There is no such thing as gsettingsd. Presumably you mean dconf-service
(which is conceptually one of many backends, although in practice it's
the one that is nearly always used).

> dbus-x11 is not a complete solution -- it makes a "session" dbus
> instance available, but without dbus activation of services

This is not true. Any dbus-daemon[1], including the one started by
dbus-x11, provides D-Bus activation (services started on-request as
children of dbus-daemon, which acts as a crude service manager). This
functionality has been present since 2003-2004 for session buses,
depending where you draw the line for it being feature-complete, and
since 2007 for the system bus.

The dbus-x11 problem that is most relevant here[2] is that because the
session bus is run as a child process of (a somewhat arbitrary process
inside) an X11 desktop session, those D-Bus-activated services cannot
be managed by `systemd --user`, even on systems that have it; and
the session bus and its session services are not visible/available to
programs that run as `systemd --user` services, such as gpg-agent. This
is because `systemd --user` is "bigger than" the per-X11-display session
bus provided by dbus-x11, so anything in its scope that tries to connect
to "the" session bus is faced with the question: which session?

Conceptual model of dbus-user-session on systems with `systemd --user`:

"the system" (system services, etc.)
  \- uid 1000
\- systemd --user
\- dbus-daemon
\- D-Bus-activated services without SystemdService=
\- D-Bus-activated services with SystemdService=
\- non-D-Bus-based user services
\- login session 1 (X11)
\- gnome-session, startkde, ~/.xinitrc or equivalent
\- window manager
\- applications
\- login session 2 (sshd)
\- bash or equivalent
\- CLI applications
  \- uid 1001
  \- ... similar tree ...

Login sessions 1 and 2, `systemd --user` services, and `systemd --user`
itself can share and use the dbus-daemon, because it is part of a scope
that is "as large as" both of them.

Conceptual model of dbus-x11 without dbus-user-session:

"the system" (system services, etc.)
  \- uid 1000
\- systemd --user (if present)
\- non-D-Bus-based user services
\- login session 1 (X11)
\- gnome-session, startkde, ~/.xinitrc or equivalent
\- window manager
\- applications
\- dbus-daemon
\- D-Bus-activated services from login session 1
\- login session 2 (sshd)
\- bash or equivalent
\- CLI applications
\- in principle you could have another dbus-daemon here
   if you run dbus-launch manually
 \- D-Bus-activated services from login session 2
  \- uid 1001
  \- ... similar tree ...

Here, login session 1 owns its own dbus-daemon and is the only one that
can see it. `systemd --user` and login session 2 cannot.

The system bus does not have this duality, because there's only one
system bus per system, the same as the init system; so there is no issue
about the init system's service manager role being in a "larger" or
"smaller" scope than the D-Bus system bus.

"uid 1000" and "uid 1001" in the above refer to any uid that currently
has at least one, but perhaps more than one, login session. When I refer
to a "user session" in D-Bus documentation and terminology, this is
what I mean. uid 1000 has a user session; login sessions 1 and 2 are
part of that user session; and so is its `systemd --user`. uid 1001
has a separate user session, containing its own login session(s) and
`systemd --user`. If we imagine uid 1002 exists but does not currently
have any login sessions, then it also does not have a user-session.

I would suggest that dependency system representations for D-Bus services
should probably not be designed by developers for whom the contents of
this message are new information.

smcv

[1] Assuming you have not configured your dbus-daemon with
--disable-traditional-activation at build time; this makes sense for
some constrained embedded systems, which might be Debian derivatives,
but is unlikely to be suitable for a general-purpose distribution
like Debian, and I have no plans to do so.
[2] There are others.



Re: Proposal: Reaffirm our commitment to support portability and multiple implementations

2019-12-01 Thread Simon McVittie
On Sun, 01 Dec 2019 at 11:13:46 -0800, Russ Allbery wrote:
> Simon Richter  writes:
> > Right, but the dependency chain is there to make sure the package is
> > usable on systemd systems
>
> My recollection is that these dependencies are mostly about either making
> sure user sessions are available or that D-Bus is available, right?  (I'm
> fairly sure about user sessions and less sure about D-Bus.)

"Making sure D-Bus is available" is not particularly meaningful, because
the D-Bus system and session buses are distinct. Depending on the package
in question, it might want one or more of:

* the D-Bus system bus is available at all times (except early boot)
  - it's a fairly straightforward system service
  - depend on dbus (one day I might break this out into a dbus-system-bus
    package, which at the moment is a Provides in dbus)
  - any init system can work

* the D-Bus session bus is available in at least X11 login sessions
  - it's a per-user or per-X11-display service
  - depend on default-dbus-session-bus | dbus-session-bus
  - any init system is fine, *if* you use dbus-x11 to implement
    dbus-session-bus
  - the default implementation (see below) requires systemd

* the D-Bus session bus is available in all login sessions, *and* has the
  semantics that it is per-uid rather than per-login-session (the "user
  bus", which is "larger than" a single login session)
  - it's a per-user service
  - depend on dbus-user-session
  - this currently requires `systemd --user`, which requires systemd
    as pid 1 *and* systemd-logind (elogind is not enough) and I don't
    see this changing any time soon
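As a concrete (and entirely hypothetical) illustration, a package that
needs the third option, the per-uid user bus, would declare something
like this in debian/control:

```
Package: example-daemon
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, dbus-user-session
Description: hypothetical example needing user-bus semantics
```

whereas a plain system service would depend on dbus, and a
per-X11-display one on default-dbus-session-bus | dbus-session-bus.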

dbus-user-session is not, and probably will not be, usable on non-systemd
systems. If per-user service managers other than `systemd --user` exist
and can be configured to provide equivalent semantics, I'd be happy
to review the necessary integration files, but at the moment there
is no way to have the semantics represented by dbus-user-session on a
non-systemd system.

dbus-user-session is not implied by systemd, or even by systemd --user.
Some `systemd --user` services work badly, or not at all, without
dbus-user-session (represented by a Recommends or Depends on it);
but I've gone to some lengths to make sure that if systemd users who
do not rely on those services want multiple parallel X11 sessions,
each with its own per-X11-display D-Bus session bus, they can have that
(by removing dbus-user-session and installing dbus-x11).

> > It wouldn't be a problem in practice to break that dependency chain, as
> > systemd based installations tend not to be curated on a
> > package-by-package basis

It's true that non-systemd-based installations need to be curated to
remain non-systemd-based; and it's true that because systemd is the
default init, systems that accept defaults will be systemd-based and
not strongly curated; but I don't think either of those implies that
there are no strongly curated systemd-based systems.

> Is it possible to have a systemd system that doesn't have these
> properties?  In other words, do these dependencies only matter with other
> init systems, or do they also matter in container scenarios?

These categories exist:

* No service manager at all
  - typical for Docker (pid 1 is usually a simple process reaper)
  - typical for chroots (pid 1 is outside the chroot)
  - systemd --user isn't run
  - dbus-user-session doesn't work
* Various non-systemd service managers (sysv-rc, OpenRC, etc.)
  - systemd --user isn't run
  - dbus-user-session doesn't work
* systemd as pid 1, but no pam_systemd
  - systemd --user isn't run
  - dbus-user-session doesn't work
* systemd as pid 1, and pam_systemd is used
  - typical for "full" systems (bare metal, VM, lxd)
  - systemd --user is run
  - dbus-user-session works
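A rough shell heuristic for telling these categories apart (my sketch,
not an official interface: it relies only on sd_booted(3)'s documented
check for /run/systemd/system, plus the conventional user-bus socket
path under $XDG_RUNTIME_DIR):

```shell
#!/bin/sh
# Heuristically classify the running system into the categories above.
if [ -d /run/systemd/system ]; then
    # systemd is pid 1; is there a per-uid user bus for this user?
    if [ -S "${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/bus" ]; then
        category="systemd as pid 1 with pam_systemd: dbus-user-session works"
    else
        category="systemd as pid 1, but no user bus for this uid"
    fi
else
    category="no systemd as pid 1 (other init, container or chroot)"
fi
echo "$category"
```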

I hope that clarifies the situation.

smcv



Re: Please drop/replace the use of the term "diversity"

2019-11-27 Thread Simon McVittie
On Wed, 27 Nov 2019 at 11:27:13 +, Chris Lamb wrote:
> May I gently request we replace the use of the word "diversity"
> throughout the "init systems and systemd" General Resolution prior to
> it being subject to a plebiscite?

Thank you for raising this, Chris.

I agree. I have been uncomfortable with this in the context of "init
diversity" efforts, but I didn't raise it in the past because I couldn't
articulate clearly why I felt that it was a problem.  Since it's now
on-topic, here's my best attempt at that:

The diversity team, and wider efforts around diversity in Debian and
in software in general, have used "diversity" as a catch-all term for
personal characteristics of our contributors and community members when
discussing inclusion and how we treat people, as a way to avoid having
to enumerate specific characteristics (which would tend to lead to focus
on those characteristics at the expense of others).

If we use the same word in discussions around technical decisions, this
raises some concerns for me. Jokes about the emacs and vi religions
aside, technical preferences are not really the same thing as the
characteristics we normally refer to by "diversity". Of course, we
should treat the people who hold those preferences with respect, but
that isn't the same as considering implementation of their preference
to be an ethical imperative for Debian.

To take a deliberately slightly absurd example, preferring Gentoo over
Debian is not an inclusion or diversity issue; we welcome constructive
contributions to Debian from people who would prefer to be using Gentoo
(notably some of our upstreams!), but we do not consider it to be an
ethical imperative to expand the scope of Debian to encompass everything
Gentoo does.

I would hate to see diversity and inclusion of people (the meaning of
the word used in the name of the Diversity Team) harmed by creating a
perception that the term "diversity" has been devalued by stretching
it to encompass technical preferences, because I think diversity and
inclusion of people is much too important to let that happen.

Conflating diversity of people with diversity of implementation could
easily also harm our technical decisions, in either direction:

* it could influence technical decisions away from making a choice as
  a project, and towards creating infrastructure to make that choice on
  individual systems, by developers who do not wish to be perceived to
  be opposing "diversity" in the interpersonal/Diversity Team sense of
  the word;

* conversely, it could influence technical decisions *towards* making a
  choice as a project, and *away from* making that choice on individual
  systems, by developers who might believe this use of "diversity" is
  disingenuous (even if it was not intended as such).

The extent to which we make choices project-wide, and the amount of
technical cost we are willing to accept to be able to make those choices
onto individual systems, seem like something that we should decide based
on their merits. Whatever the result of the imminent vote might be,
I would like it to be chosen for the right reasons.

smcv