An early report this month, as I've run out of work hours earlier than expected...
GnuPG & Enigmail
================

To get Enigmail working properly with the Thunderbird upload from last
week, we need GnuPG 2.1 in jessie. I [backported GnuPG 2.1][1] to Debian
jessie directly, using work already done to backport the required
libraries from jessie-backports. It was [proposed][2] to ship the
libraries as private libraries or to statically link GnuPG itself. I
believe this is the wrong approach and, besides, I'm unsure how this
would work in practice, so I recommend going forward with the libraries
backport. I provided a [summary][3] of the conversation to try to bring
it to a conclusion.

[1]: https://lists.debian.org/87r2fqnja0....@curie.anarc.at
[2]: https://lists.debian.org/0c03ba38-26f5-8e45-d792-5b1871a4d...@gmail.com
[3]: https://lists.debian.org/87sgzw7q7k....@curie.anarc.at

Spamassassin
============

Once Spamassassin 3.4.2 was [accepted][4] in the latest stable point
release, I went back to work on the jessie upgrade I [proposed last
month][5] and uploaded the resulting package as [DLA-1578-1][].

[4]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=912198#36
[5]: https://lists.debian.org/87h8h39she....@curie.anarc.at
[DLA-1578-1]: https://lists.debian.org/20181113190619.ga15...@curie.anarc.at

Security tracker
================

I worked on a few sticky parts of the security tracker.

Automatic unclaimer
-------------------

After an internal discussion about work procedures, a friend pointed me
at the [don't lick the cookie][6] article, which I found really
interesting. The basic idea is that our procedure for work distribution
is based on "claims", which means some packages remain claimed for
extended periods of time.

[6]: https://www.benday.com/2016/10/21/scrum-dont-lick-the-cookie/

For some packages this makes sense: the kernel updates, for example,
have been consistently and diligently performed by Ben Hutchings for as
long as I can remember, and I would be very hesitant to claim that
package myself. In that case it makes sense for the package to remain
claimed for a long time. But for some other packages, it's just an
oversight: you claim the package, work on it for a while, then get
distracted by more urgent work. It happens all the time, to everyone.
The problem is that the work is then stalled and, in the meantime, other
people looking for work are faced with a long list of claimed packages.
In theory, we are allowed to "unclaim" a package that's been idle for
too long but, as the article describes, there's a huge "emotional cost"
associated with making such a move.

So I looked at automating this process and wrote a script to [unclaim
packages automatically][7]. This was originally rejected by the security
team, which might have confused the script implementation with a
separate [proposal][8] to add a cron job on the security tracker servers
to automate the process there.

[7]: https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/23
[8]: https://salsa.debian.org/security-tracker-team/security-tracker-service/merge_requests/2

After some tweaking and bugfixing, I believe the script is ready for
use, and our new LTS coordinator will give it a try, in what I would
describe as a "manually triggered automatic process", while we figure
out if the process will work for us.
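To give an idea of what "unclaiming automatically" means in practice,
here is a minimal sketch in Python. It is *not* the script from the
merge request: the `dla-needed.txt` file name, the assumed one-line
entry format ("package (Claimant Name) YYYY-MM-DD") and the two-week
threshold are all assumptions made for illustration only.

    #!/usr/bin/python3
    """Rough sketch of an automatic unclaimer (illustration only)."""

    import datetime
    import re
    import sys

    MAX_AGE = datetime.timedelta(days=14)  # how long a claim may stay idle

    # hypothetical entry format: "package (Claimant Name) YYYY-MM-DD"
    CLAIM = re.compile(r'^(?P<pkg>\S+) \((?P<who>[^)]+)\) (?P<date>\d{4}-\d{2}-\d{2})\s*$')

    def unclaim_stale(lines, now=None):
        """Yield lines, dropping the claim from entries idle longer than MAX_AGE."""
        now = now or datetime.datetime.now()
        for line in lines:
            m = CLAIM.match(line)
            if m:
                claimed = datetime.datetime.strptime(m.group('date'), '%Y-%m-%d')
                if now - claimed > MAX_AGE:
                    # stale claim: keep the package, drop the claimant
                    yield m.group('pkg') + '\n'
                    continue
            yield line

    if __name__ == '__main__':
        with open('dla-needed.txt') as f:
            sys.stdout.writelines(unclaim_stale(f))

Run by hand against a copy of the claims file, something along those
lines is what I mean by a "manually triggered automatic process".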
Splitting huge files in the repository
--------------------------------------

I once again looked at splitting the large (17MB and counting)
`data/CVE/list` file in the security tracker. While my [first
attempt][9] was just about improving performance in my own checkouts,
the heaviness of the repository has now been noticed by the Salsa
administrators (bug #908678), as it triggers several performance issues
in GitLab.

[9]: https://salsa.debian.org/security-tracker-team/security-tracker/issues/2

And while my first attempt (splitting each CVE into its own file) was
clearly not a good tradeoff and made performance worse, the new proposal
(splitting by year) actually brings significant performance
improvements. Clones take 11 times less space (145MB vs 1.6GB) and
resolve ten times faster (2 vs 21 minutes, local only). Running annotate
on one year takes 26 seconds, while running it over the whole file takes
around 10 minutes. That is arguably less impressive: there are, after
all, twenty years of history in that repository, so to be fair, we'd
need to run annotate against all of those. But earlier years are smaller
than the latest ones, so the total is also faster (2 minutes). And
besides, we don't really need to run annotate against the *entire* file:
when I do this, I usually want to know who to contact about a comment in
the file, which is usually a recent change.

The conversion itself was an interesting exercise in optimisation. The
original tool was a simple bash script, which split the file in 15
seconds; that is fine if we are ready to lose history in the repository,
but that is probably unacceptable. So I rewrote the script in Python,
which gave a huge performance improvement, processing the file in less
than a second. This was still a bit slow, so I rewrote it in Go, which
gave another leap in performance, until a colleague noticed the
resulting files were all empty. After fixing that shameful bug,
performance of the Go implementation actually became worse than the
Python one, something I was quite surprised about, considering Python is
not known for its fast startup times or raw performance. I have yet to
explain this discrepancy.
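The split itself is conceptually simple; here is a minimal sketch in
Python, not the actual conversion tool. It assumes the usual layout of
`data/CVE/list`, where each entry starts at column 0 with the CVE
identifier and continuation lines are indented; the output naming scheme
(`data/CVE/list.<year>`) is also an assumption for illustration.

    #!/usr/bin/python3
    """Minimal sketch of a year-based split of data/CVE/list (illustration only)."""

    import re
    import sys
    from collections import defaultdict

    # entries start at column 0 with a CVE identifier; indented lines
    # (package annotations, notes) belong to the preceding entry
    HEADER = re.compile(r'^CVE-(\d{4})-')

    def split_by_year(path='data/CVE/list'):
        years = defaultdict(list)
        year = 'unknown'
        with open(path) as f:
            for line in f:
                m = HEADER.match(line)
                if m:
                    year = m.group(1)
                years[year].append(line)
        for year, lines in years.items():
            with open('%s.%s' % (path, year), 'w') as out:
                out.writelines(lines)

    if __name__ == '__main__':
        split_by_year(*sys.argv[1:])

The hard part, of course, is not the split but rewriting the entire
repository history so that annotate and log still make sense afterwards,
which is where the performance of the splitter matters.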
Unfortunately, the split proposal doesn't seem to match the workflow of
the security team, which still seems strongly attached to having the
entire history of CVE identifiers in a single file. Instead, a bug
report against git (#913124) was opened in the hope that git could fix
the issue. Considering how git is designed and how renowned it is for
not dealing well with large files, however, I have very little hope
something like that could happen, and I do not see why we are trying to
fit that proverbial round peg into the square hole that is git.

Other reviews and fixes
-----------------------

While I was working on the security tracker, I also fixed a trivial
issue with the test pipeline, which was [promptly merged][24].

[24]: https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/24

I also provided reviews of merge requests [20][], [21][] and [22][],
some of which were eventually merged.

[20]: https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/20
[21]: https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/21
[22]: https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/22

I also participated in informal discussions surrounding the DLA issuance
process to make sure DLAs are reflected on the security website, as part
of bug #859122 which, I was surprised to realize, I opened more than a
year ago. I will continue working on this next month, unless someone
beats me to it. :)

systemd
=======

Finally, I took a deep dive into systemd, trying to address the
worrisome security issues that came up recently. Many of those are
mitigated by the way Debian uses systemd: for example, systemd-networkd
is not used by default, so there's no remote root execution (!). The
issues fixed were:

 * CVE-2018-1049 - automounter race condition, easy backport
 * CVE-2018-15688 - dhcp6 client buffer overflow, trivial backport
 * CVE-2018-15686 - deserialization privilege escalation, a more
   involved backport which required changes to the logging and error
   reporting, as `log_error_errno` doesn't exist in v215 and is part of
   a large tangle of macros that was unwieldy to backport

Regarding the latter patch, I [asked upstream][10] if this was the
correct patch to backport, but haven't received an answer yet.

[10]: https://github.com/systemd/systemd/pull/10519#issuecomment-438419900

Finally, I also worked on the tmpfiles issues, which were marked as not
affecting wheezy back then, but which *do* affect jessie. This is
CVE-2018-6954, but also CVE-2017-18078, which is actually trivial to
fix. The problem is that upstream first fixed the issue with a [small
PR][8358], and that fix shipped in `229-4ubuntu21.8`. Unfortunately,
that fix was found to be incomplete, and a massive rewrite of the
tmpfiles handling was done in a [much larger PR][8822]. Because that
touches many more parts of the code, it was much more difficult to
backport. I ended up giving up: it will probably be easier to simply
backport the entire `tmpfiles.c` from upstream, removing the parts that
are not currently supported, than to try to backport each of the 26
upstream commits into the jessie release.

[8358]: https://github.com/systemd/systemd/pull/8358
[8822]: https://github.com/systemd/systemd/pull/8822

So after uploading a [test package][11], which provided a welcome backup
to the above mess I introduced in my source package, I uploaded the test
package unchanged to jessie and announced it as [DLA-1580-1][].

[11]: https://lists.debian.org/8736s0dcm2....@curie.anarc.at
[DLA-1580-1]: https://lists.debian.org/

-- 
Only that which can be refuted is scientific. What is not refutable
belongs to magic or mysticism.
                        - Karl Popper