Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Thu, 2015-04-09 at 18:17 +0200, Ansgar Burchardt wrote:
> as I don't think we'll currently shorten the time Release is valid for, I'm closing the bug report as suggested by the submitter[1].

For the record, my suggestion was not based on the issue being fixed or not being an issue, but rather on the fact that Debian has apparently decided to provide no or only weak security in this area.

Cheers,
Chris.

-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Santiago Vila writes (Re: Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files):
> I have a laptop with testing which I use mostly on weekends. I have a partial mirror there, which I try to update as soon as I log into the system.

Firstly, I think this is an important use case which we should cater to. But I also think we should try to reduce the risk of the kinds of attacks that Christoph Mitterer is worried about.

I think we should approach this problem in the best Debian tradition and try to invent some kind of technical approach that works, as well as we can arrange, for everyone. Certainly, just changing the validity period for the signatures is too blunt a tool. As demonstrated, there is no right value for this configuration parameter. IMO that shows that we have a design problem. We should fix that, rather than having an argument about which use case is more important.

Indeed, the time interval between vulnerabilities being known and being widely exploited is becoming very short. We need to speed up our distribution of security patches, if we can, and that means we need to reduce the rollback vulnerability window.

I don't have a complete recipe for this, but here are some possible pieces:

* The computer knows when it last polled for updates from whatever its mirror is. Perhaps this information is of use.
* We could run a lightweight polling service on Debian infrastructure which the computer could use to find out how out of date it is.
* We could provide a separate command or tool or option to check for security updates - a tool which would _fail_ if the network and infrastructure were not sufficiently working.
* We could provide a configurable addition to the validity period.
* The security archive might want a different validity period.
* We might want automation which was capable of automatically shutting a server down into some kind of minimal maintenance mode, when it is unable to verify its own security support status.
* Some people here have already suggested that `desktop' and `server' configurations might want different defaults. `Laptop' probably wants yet different defaults.

> This is a real pain and it reminds me of subscription services or DRM stuff, like those games that fail to work if the player is not online.

As someone who is running various servers, I would love it if my server shut itself down if it thought it was `offline' because it couldn't do its security updates. Of course, conversely, that would be incredibly annoying for my netbook!

Ian.
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hi Ian.

In principle I've left this discussion for previously stated reasons, but since you're one of the few who actually seems willing to discuss this on a technical level, with real arguments and ideas, some replies:

On Mon, 2014-11-03 at 17:56 +0000, Ian Jackson wrote:
> IMO that shows that we have a design problem.

The thing is, IMHO, that the issues have these problems inherently:
- when one wants short validity times (for the reasons pointed out several times now), then one may run into availability problems *unless* one deals with them on a technical level, as I've proposed several times now
- when one makes the whole thing optional for security-paranoid users, then it's likely that it won't work very well for them (due to only few people taking care of it)... and the majority loses any possible benefit.

Now to your ideas:

> * The computer knows when it last polled for updates from whatever its mirror is. Perhaps this information is of use.

In principle yes,... though this may be a complex/error-prone piece of information,... thinking about multiple different repos with different last-checked times (some of them may have been down, or simply not checked)...

> * We could run a lightweight polling service on Debian infrastructure which the computer could use to find out how out of date it is.

I personally would probably strongly vote against such a model. It's basically the same as with OCSP, and we've seen how well that works. Decoupling certain security assertions from the actual object being secured leads to all kinds of problems,... And it opens the question again: what to do when this polling service is down (or DoS-attacked)? Which is basically the same question as with my basic proposal of just changing the validity times. Either one would ignore the polling service being down (which makes the whole thing as useless as OCSP is for most browsers), or one would make it a failure communicated to the client - which is no different from what I've proposed should happen when clients encounter an expired Release file.

> * We could provide a separate command or tool or option to check for security updates - a tool which would _fail_ if the network and infrastructure was not sufficiently working.

Intuitively I'd say that this just makes everything harder to use. One would always need to use that tool in addition, and 3rd-party programs would need to be adapted for it. And either you wouldn't enable that tool per default (which makes it again useless for the majority)... or you would, which would bring back all the points from my criticism.

> * We could provide a configurable addition to the validity period.

I think we already have that, don't we? But it doesn't solve the problem, at least not unless the Release files were re-signed much more frequently. And even if that were done (e.g. hourly),... if the valid-until time still stays long, the majority wouldn't benefit from any of this.

> * The security archive might want a different validity period.

Sure,... as I proposed in the beginning :)

> * We might want automation which was capable of automatically shutting a server down into some kind of minimal maintenance mode, when it is unable to verify its own security support status.

I think this is surely rather a long-term goal... it would definitely need to be configurable what should be done:
- doing nothing
- just shutting down services that listen on the network
- loading a different set of firewall rules (perhaps ones that block all network access from the internet, but allow the services to still be reachable from the trustworthy intranet)
- completely stopping all networking (i.e. systemctl stop network.service or something like this)
- really shutting the node down (this may actually be desired by some people, because a security hole could have been found in the kernel that allows remote attacks and even bypassing netfilter)
- or a mixture of these, combined perhaps with a timeout: automatically try to reach the admin via SMS, and if he doesn't take action within 20 minutes, do xyz.

> * Some people here have already suggested that `desktop' and `server' configurations might want different defaults. `Laptop' probably wants yet different defaults.

I personally wouldn't see why. We've seen these fast security holes in programs affecting all of these kinds of systems... and people on all of these systems may use automatic upgraders,...

> As someone who is running various servers, I would love it if my server shut itself down if it thought it was `offline' because it couldn't do its security updates. Of course, conversely, that would be incredibly annoying for my netbook!

I think no one said your server should shut down completely... it would be enough if - in such a case - network services were shut down, or strict firewall rules enabled. So I don't think that your notebook would suddenly power off just because it couldn't get its updates.
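The configurable-action list above can be sketched as a tiny freshness watchdog. This is only an illustration: the stamp-file path matches what APT's daily cron job touches on many systems, but both the path and the threshold should be treated as assumptions, and the actions are placeholders for the options listed in the thread.

```shell
#!/bin/sh
# Sketch of a "maintenance mode" watchdog: if the last verified archive
# update is older than MAX_AGE seconds, take a configurable action.
# Path and threshold are illustrative assumptions, not policy.
STAMP=${STAMP:-/var/lib/apt/periodic/update-success-stamp}
MAX_AGE=${MAX_AGE:-86400}

updates_are_fresh() {
    [ -e "$STAMP" ] || return 1
    # age of the stamp file in seconds (GNU stat assumed)
    age=$(( $(date +%s) - $(stat -c %Y "$STAMP") ))
    [ "$age" -le "$MAX_AGE" ]
}

if ! updates_are_fresh; then
    echo "WARNING: could not verify security update status" >&2
    # Here one of the actions discussed above would run, e.g.:
    #   systemctl stop network.service
    # or loading a restrictive firewall rule set.
fi
```

Whether the right reaction is a warning, a firewall change, or a full shutdown would, as discussed, have to stay configurable per machine class.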
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Dear ftpmasters:

Contrary to what this report suggests, I believe the current validity of 7 days for testing and unstable is extremely low and should be increased.

I have a laptop with testing which I use mostly on weekends. I have a partial mirror there, which I try to update as soon as I log into the system. My sources.list points to the local mirror, and very often it happens that I want to install a package from the partial mirror before the mirror update has finished. Well, many times it happens that the system refuses to install anything because of an expired Release file, sometimes by a day or two, sometimes by just a few hours or minutes.

This is a real pain and it reminds me of subscription services or DRM stuff, like those games that fail to work if the player is not online.

Please consider increasing the expiration time. IMHO, an expiration time of 30 days or something similar would be a lot better than the current 7 days for testing and unstable. Moreover, one day testing will become stable, and the expiration date will probably be set to infinity. I think it would make sense if this happened not suddenly but gradually, during the frozen state of the current testing distribution.

Thanks.
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
More to the point: if we want testing to be constantly usable (as opposed to mostly useless if you don't apt-get update within a week), the expiration time for the testing Release file should be a lot closer to the one used for stable, and further from the one used for unstable.

Thanks.
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hey.

I'd think everything about this attack vector has been exhaustively discussed now, from both security and technical points of view, and all concerns and their possible solutions lie quite plainly on the table.

Jörg, AFAICS you've been the only one of the FTP masters contributing to the discussion,... would you therefore agree if I close the ticket?

Santiago, if you wish the validity times to be extended or abolished, could you please open another ticket for that? It just clutters this one, and since it contradicts security, it also wouldn't match this ticket's security tag :)

Thanks,
Chris.

smime.p7s Description: S/MIME cryptographic signature
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hey.

Some more technical notes on this: right now, we get the validity via the fields in the Release files. I'm not sure whether the following could actually help with the technical issues (i.e. the speed of distributing re-signed Release files across the mirrors), but perhaps basing the validity on the OpenPGP signature could help a tiny bit. That way one would just need to distribute the detached signatures, and perhaps one could also place multiple signatures along with the Release files to assist the turn-over. Not sure though whether this would still work with InRelease - I guess OpenPGP itself would probably support it, but I'm not sure whether GnuPG does.

Also, this doesn't help with the point that one rather needs fast distribution of all the Release/Packages/Sources files for shorter validity times, at least if my analysis from message #60 is more or less correct.

Cheers,
Chris
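For illustration of what "the fields in the Release files" means here: the expiry check driven by the Valid-Until field is conceptually a one-line date comparison. A minimal sketch (the function name and structure are mine, not apt's actual code; GNU date's -d flag is assumed):

```shell
#!/bin/sh
# Sketch of a client-side Valid-Until expiry check, as a plain shell
# function. Returns 0 while still valid, 1 when expired, 2 on parse error.
release_is_valid() {
    # $1: the Valid-Until value, e.g. "Sat, 21 Jun 2014 00:00:00 UTC"
    until_secs=$(date -d "$1" +%s) || return 2
    now_secs=$(date +%s)
    [ "$now_secs" -le "$until_secs" ]
}
```

Signature-based validity, as suggested above, would replace this field comparison with a check on the signature's own creation/expiry metadata instead.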
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hey Henrique, et al.

I've lost my interest a bit, since it feels like fighting windmills... but one month has passed and it's perhaps a good time to revisit this.

On Mon, 2014-09-29 at 08:08 -0300, Henrique de Moraes Holschuh wrote:
> On Mon, 29 Sep 2014, Christoph Anton Mitterer wrote:
> > Now to deal with your concern of larger outages: 2) Just because there are no valid [In]Release* files, it doesn't mean that those mirrors and their repositories can't be used any longer. The data is still there as it was before. An application like apt/aptitude/etc. could simply give the user an error, telling that the files have been expired for hh:mm, and could give the user an option to nevertheless trust them. And the same options could be provided for batch modes.
> This is not making any sense anymore. Step back and think about your threat model in the first place.

I do not quite understand what you mean... the attack model is clear, the technical questions (i.e. distribution to mirrors) as well, and I gave several ways to work around such issues:
- there's the way of re-signing the Release files and mirroring them with priority, which should work fast enough
- adding functionality to the clients to adequately warn about what's happening and allowing them to override the expiration.

> The *entire* threat model, not whatever small part of it that looks easily fixable by a severe reduction to the InRelease validity period

Well, I guess you should perhaps look at e.g.: https://www.drupal.org/PSA-2014-003
People had roughly 7 hours (estimated) before that hole was exploited massively all over the net. As far as I understand, the security team uploaded a fixed version (of stable) at Wed, 15 Oct 2014 11:43:08 -0500... can we see from some logs when this became really available on the mirrors? Anyway, this should demonstrate quite practically how fast attackers are these days, and that severely reducing the validity times doesn't just help against completely unrealistic attack vectors. Even if the security team is as fast as above, a victim may be compromised by a downgrade attack, thus not even being notified about new upgrades.

> (which you have already been told by several Debian archive ops _and_ mirror ops people to be very much a Bad Idea). Now, if you want us to add per-repository validity overrides to sources.list that can *reduce* the range APT will accept, so that the local admin can tighten things, that's fine.

Well, we have that anyway, don't we? But it's probably not something used by the typical majority. Conceptually, the trust lies in the server. Even when the client reduces its validity times, a server could still simply distribute old packages, just newly signed. So the right place for reducing the validity is on the server / repo-metadata side, not on the client side. If the client side (i.e. apt, aptitude) sets its own maximum validity times, then this should rather serve either to override the servers, or to identify accidentally misconfigured servers which give out e.g. files that have a validity of years or so.

> If you're going to propose some sort of tiered system and a way for apt to actually know it is OK to use this "updates not often at all" fallback mirror as long as it also has a mirror from the "fresh stuff only" tier, that would be at least sensible... Would those help? I don't know, that's what the full threat model analysis is for.

Hmm, I'd rather say that this sounds like an overly complex solution that is error-prone. And in principle it doesn't help, because in that case a MitM-capable attacker would simply block the fresh server,... if the clients then fall back to the fallback mirror in silence, things are useless again.

> So, can we get now some alternative proposals that address the fact that some mirrors need 48h validity, and many leaf mirrors really want at least a week? Or to help apt detect it is using a mirror that is more outdated than expected, which *is* the reason 99.999% of our users ever suffer an unintended downgrade attack?

I don't think that there is a way around it, because the whole issue is about the requirement to have up-to-date repository information. From a security point of view, a mirror model in which mirrors lag that far behind is simply not adequate. One can have such slow mirrors for repos which don't change often (stable, oldstable, etc.), but for anything which is used to deliver security updates to the users (security.d.o, unstable)... slow mirrors are simply broken from a security POV.

On Mon, 2014-09-29 at 09:05 -0300, Henrique de Moraes Holschuh wrote:
> Sure, 48h or 24h refresh requirements for anything that is mirroring s.d.o is a restriction we could deploy.

Well, I guess the Drupal example shows that even ranges of about 1 hour would be appropriate.

> But there's the DoS concern if there is a problem refreshing s.d.o from ftp-master.

At least, s.d.o. is a lot more
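The "help apt detect it is using a mirror that is more outdated than expected" idea mentioned above can be sketched independently of Valid-Until, by comparing the Release file's Date: field against the local clock. The field name is real; the helper name is mine, and any warning threshold applied to the result would be an arbitrary policy choice (GNU date/sed assumed):

```shell
#!/bin/sh
# Sketch: report how far behind the local clock a Release file's Date:
# field is, in seconds. A client could warn when this exceeds, say,
# one day, regardless of whether Valid-Until has passed.
mirror_lag_seconds() {
    # $1: path to a Release file
    release_date=$(sed -n 's/^Date: //p' "$1" | head -n 1)
    stamp=$(date -d "$release_date" +%s) || return 2
    echo $(( $(date +%s) - stamp ))
}
```

Note this only detects *staleness*; as argued above, it does not by itself stop an attacker who serves consistently old but internally valid metadata.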
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Package: snapshot.debian.org
Severity: wishlist
X-Debbugs-CC: debian-de...@lists.debian.org

On Sun, Sep 28, 2014 at 4:34 AM, Peter Palfrader wrote:
> On Fri, 26 Sep 2014, Paul Wise wrote:
> > On Thu, Sep 25, 2014 at 11:21 PM, Christoph Anton Mitterer wrote:
> > > Well I think snapshot is its own construction site, isn't it?
> > snapshot is a read-only (modulo cosmic rays and removal of non-redistributable things) historical record; files in it will not be modified to re-sign with newer keys nor to update Valid-Until.
> That doesn't mean one couldn't consider providing an overlay of sorts, that provides re-signed Release files if the original ones verified. Under a different path, obviously. We could look at patches if they somehow appeared.

Excellent idea, documenting it in the bug tracker.

-- 
bye, pabs
https://wiki.debian.org/PaulWise
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Fri, 2014-09-26 at 11:20 +0800, Paul Wise wrote:
> snapshot is a read-only (modulo cosmic rays and removal of non-redistributable things) historical record; files in it will not be modified to re-sign with newer keys nor to update Valid-Until.

So what would you do, then, when one of the past keys was compromised or simply got too weak to be trustworthy anymore? This would mean that stuff shipped by snapshot.d.o is no longer secure (in the sense of preventing MitM during the download, not in the sense that the packages themselves would be secured otherwise).

Actually, having another APT key for just snapshot.d.o sounds somehow appealing to me from a design POV.

Cheers,
Chris.
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Thu, 2014-09-25 at 21:56 +0200, Joerg Jaspert wrote:
> It also sounds quite stupid that suddenly all users have no working mirror anymore, should there be an outage of ftp-master or security-master longer than the signing time.

Well, I don't see why this must necessarily happen. Even if ftp-master and/or the signing services went down for a longer time period, nothing has really changed,... of course the Release files will no longer be valid, but what should then happen is simply this:

1) Programs that use secure APT fail if nothing else has been specified manually by the user/admin - and this is exactly what we (should) want. If a critical hole like Shellshock comes out and we fix it quickly, then no attacker should be able to keep systems from reacting, either via our current long validity times OR by simply [D]DoSing ftp-master. If secure APT is used manually, then the user/admin will immediately see that something is fishy and can react appropriately. If secure APT is somehow used automatically, then a properly configured system should send the user/admin notifications that updates/upgrades no longer work, and again, appropriate measures can be taken.

Now to deal with your concern of larger outages:

2) Just because there are no valid [In]Release* files, it doesn't mean that those mirrors and their repositories can't be used any longer. The data is still there as it was before. An application like apt/aptitude/etc. could simply give the user an error, telling that the files have been expired for hh:mm, and could give the user an option to nevertheless trust them. And the same options could be provided for batch modes. The only difference is that now users can notice and decide themselves (like trusting files expired for 1 hour, but not for 29 days), or can take appropriate measures (like looking around in the news or at debian.org/security for whether anything big is going on).

I mean, the validity is just like a flag that allows software to decide - but the default should be that, if something is fishy, the software gives an error/warning; and the validity periods should be short enough to match the typical package update cycle for each repo.

> Or a release going on, during which we commonly turn off the archive and ALL cronjobs. Until we are sure that it is all fine again. No, a full release doesn't go through in less than a dinstall's time. Even down to two dinstall intervals is short and would require us to add one more level of complexity to our working.

Well, I don't fully understand this; to my understanding, it would be mainly the security and sid archives which should have very short validity periods of a few hours, since those are the archives where people expect their security updates on a fast track.

Apart from that: the validity period should IMHO be considered mainly a security-related property, and not be tied to the technical periods in which the repo data is distributed. The appropriate value should be aligned to the typical time that it takes for updates to go into that repo (on the master - not on all mirrors). I.e. if our security team is so fast that it sometimes provides updates within 1 hour of a hole becoming public, then that is the appropriate validity time. If this leads to technical problems, then either there is a design problem in the current distribution model,... or we simply need our clients (apt/aptitude) to notify the user and leave the choice up to him.

IMHO it's quite dangerous if people start to negotiate security away for technical reasons, for the wellness-factor of users, or for historical reasons. Attackers simply don't care about this. And yes, security always comes at a price, also for the end-user (like in our case, that he could sometimes face expired meta-data and would have to decide what to do),... This is why we have a lousy X.509-based security infrastructure on the web, instead of a properly meshed PKI like e.g. OpenPGP would provide. This is why Debian still enables network services per default after installing them, even though this is quite stupid from a security POV. And I'm sure there were already people in the past who wondered about the stupid features that bash provides, but for sure they would have failed to convince upstream to remove them. Security is not negotiable.

Cheers,
Chris.

btw: I did some PoCs on some of my own servers (or rather ones under my control at the university) with Shellshock. As soon as you can MitM, you can do such a downgrade attack and hide any updates to bash from the systems (which in our case run check_apt via Icinga)... so no one would notice and take appropriate action. OK, that this works was probably clear to everyone. But you don't even need to be able to do a direct MitM. Many systems use netselect-apt, so all you need to do is: run a mirror that is considered the closest to your attack target, wait for a suitable security hole, stop that
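A hold-back attack of the kind described in the PoC can at least be *detected* on the client with a simple monotonicity check on the Release Date: field: remember the newest timestamp ever seen and refuse anything older. This is only a sketch of the idea; the state-file path and function name are mine, and nothing in the thread says apt implements this:

```shell
#!/bin/sh
# Sketch of a client-side rollback check: accept a Release "Date:" value
# only if it is not older than the newest one previously recorded.
# Returns 0 on acceptance, 1 on suspected rollback, 2 on parse error.
STATE=${STATE:-/var/lib/apt/last-release-date}

release_not_rolled_back() {
    new=$(date -d "$1" +%s) || return 2
    last=$(cat "$STATE" 2>/dev/null || true)
    [ -n "$last" ] || last=0
    if [ "$new" -lt "$last" ]; then
        return 1                 # older than what we already saw
    fi
    echo "$new" > "$STATE"       # remember the newest timestamp
}
```

This catches an attacker who serves yesterday's metadata after today's has been seen, but not one who freezes a victim on current metadata from the start - which is exactly why short validity periods are being argued for above.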
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hey Joerg.

On Sun, 2014-09-14 at 21:52 +0200, Joerg Jaspert wrote:
> Technically we could go down to 1 second, validtime is expressed in seconds in our setup. ;-)
>
> > My proposal would be something like that:
> > unstable/testing: 4-12 hours
> > [wheezy|squeeze]/updates at security.d.o: 1-6 hours
>
> I'm not sure going below a dinstall cycle is useful. Probably even two. Have it expire before the new stuff even got a chance to get out is not a good idea, IMO.

Are there any numbers on how long it actually takes for the stuff to get distributed? But apart from that, even if it takes a while for the actual package files to propagate through all the mirrors, once that is done, only the re-signed meta-data files would need to be distributed, which should be quite fast. So if the copying is done smartly, I'd guess this could be made to work.

Anyway, even if there are technical issues: don't you think it sounds kinda stupid if all the distros and security guys try to orchestrate the publication of important issues (like the apt or bash stuff we've seen these days), so that fixed packages are basically available for all distros at nearly the same time, while we still leave our users basically vulnerable by having far too long validity times?

> Also, going down to such small intervals means we MUST resign, even if there is no update at all in the archive (so an extra cronjob, just to be sure).

Sure; I mean, that's the whole point of constantly and frequently assuring that the package meta-data (including its information about security things) is current... doing these re-signs often is basically what prevents downgrade/blocking attacks.

> That's no problem in the main archive, there is always enough going on, but security can go way longer without an update (which is why such a (weekly) cronjob exists on security).

So does this mean that it *would* be a problem for e.g. non-main archives?

> That is technically not a big problem. Unless you happen to look at services like snapshot, which run an import on every trigger. No idea how much it hurts them if only the Release files change, need to find out.

Well, I think snapshot is its own construction site, isn't it? IIRC, snapshot ships the old Release files, and thus everything older than a few weeks is anyway considered invalid, right? And doesn't it also use the old GPG keys? IMHO that makes it a bit difficult to use snapshot.d.o. I think it would be better if there was a special GPG key for snapshot.d.o. As for the Release files and their validity, one could probably go two ways:
- constantly re-sign them (or give them basically infinite validity), thereby making snapshot easy to use (i.e. no warnings from APT), assuming it is clear to everyone who uses snapshot.d.o that these packages are not current and probably full of security issues;
- leave it as is, i.e. keep the original expiration times in the Release files, and people must manually tell their APT/aptitude/etc. to ignore this.

Cheers,
Chris.
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Mon, 2014-09-15 at 00:04 +0200, Stefan Lippers-Hollmann wrote:
> Please consider that too short intervals (24h might still work, but it's hard on the limit) make non-primary (cron based) mirrors basically impossible. Including local mirrors used for systems that can't connect to the internet directly, or potentially even setups used for (personal) archive-wide rebuilds or debian-cd related tasks. Intervals below an hour, besides probably even invalidating most primary mirrors, are likely to cause apt-get update to expire before it has finished downloading the meta data for all repos on slower internet connections.

Well, as I've said in my post before: such slow systems would just need to re-sync the (re-signed) metadata files at a more current point. But of course you're right, extremely slow mirrors and/or pull models are likely to cause trouble (or break) - but their working model is simply broken, at least with respect to security.

> Decreasing these validity cycles too much would force many of these uses to ignore expiry times altogether - or to re-sign a local archive mirror with longer periods (which is not exactly a reasonable task for most users, or for anything that involves debootstrapping). I guess most uses would opt for the first option instead, which won't help anyone...

Well, users can always take a gun and shoot themselves. That's no reason to expose *all* users to security issues, just because there is a (small?) fraction of users who would completely deactivate secure APT if it means too much effort for them. I mean, that's basically always the problem with security, isn't it? You don't get it for free, which is why we have crap like the Internet's X.509 certificate hierarchy that basically fails and breaks at all points. If a really strong (i.e. meshed) system had been used, many end-users would have refused to use it altogether - which is why we now have a system that is basically useless for all.

> Personally I think 24 hours (better something like 26-28 hours, to allow some overlap for secondary/tertiary/local mirrors only updated daily) would be the technical limit that might be possible, but anything shorter than a week (or at the very least three days) would already significantly impact many valid use cases where local mirroring and/or a fixed archive state is required.

Don't you think it would be possible for mirrors to have a faster re-sync of just the meta-data files? I mean, these are really small, and actually it should only be necessary to frequently re-sync the [In]Release* files; if everything else stayed the same, only they will have changed, for the new validity times.

> While there might be an argument to decrease the expiry times for security.d.o, perhaps even down to a day or at least half a day[1], the negative net impact for all normal archives (especially testing and unstable) would imho far outweigh any potential security improvements.

Why? I guess many people use e.g. sid as their main system, so all of them will get their security upgrades via that, and not via security.d.o. So they would still be left vulnerable to downgrade attacks in case of things like the bash/apt holes from these days, even though Debian might have reacted perfectly and already supplied fixed packages.

> Just look at the common advice given for expired signatures on snapshot.d.o; most suggest to use a global apt-get -o Acquire::Check-Valid-Until=false update, or Acquire::Check-Valid-Until "0"; for apt.

Well sure, but snapshot.d.o is a completely different story, isn't it? Everyone using it should be aware that these are old archived packages, and that it's not so unlikely that they contain security issues. In other words, there is no implicit guarantee in any way that snapshot.d.o gives you secure stuff - therefore we don't need to defend against things like blocking/downgrade attacks there.

> For these reasons I would suggest against changing the current intervals, especially not into the hour- or single-day regions.

Well, I think the current intervals (what were they? 1 month?) are *far* too long. If there really are unsolvable technical issues (and I guess most of them could be solved by quickly re-syncing the [In]Release* files, which should be trivial)... then one day is probably the interval I'd go for.

Cheers,
Chris.
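For reference, the client-side knobs quoted in this exchange (Acquire::Check-Valid-Until, and the "local admin can tighten things" direction mentioned earlier in the thread) exist as apt configuration. A hypothetical /etc/apt/apt.conf.d fragment - option names as documented in apt.conf(5), values purely illustrative:

```
// Tightening direction: treat Release files as expired once they are
// older than one day, even if their own Valid-Until allows more.
// The value is in seconds; 0 means no extra client-side limit.
Acquire::Max-ValidTime "86400";

// Loosening direction: the much-quoted escape hatch that disables the
// expiry check entirely (as in the snapshot.d.o advice above).
// Acquire::Check-Valid-Until "false";
```

As argued in the reply, though, client-side tightening only protects the admins who opt in; it does not change what the archive signs.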
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On 13710 March 1977, Christoph Anton Mitterer wrote:
> > I'm not sure going below a dinstall cycle is useful. Probably even two. Have it expire before the new stuff even got a chance to get out is not a good idea, IMO.
> Are there any numbers on how long it actually takes for the stuff to get distributed?

Maybe somewhere, don't know.

> Anyway, even if there are technical issues, don't you think that it sounds kinda stupid, if all the distros and security guys try to orchestrate the publication of important issues (like the apt or bash stuff we've seen these days), so that basically fixed packages could be available for all distros at nearly the same time, while we still leave our users basically vulnerable by having far too long validity times.

It also sounds quite stupid that suddenly all users have no working mirror anymore, should there be an outage of ftp-master or security-master longer than the signing time. Or a release going on, during which we commonly turn off the archive and ALL cronjobs, until we are sure that it is all fine again. No, a full release doesn't go through in less than a dinstall's time. Even down to two dinstall intervals is short and would require us to add one more level of complexity to our workings.

> > That is technically not a big problem. Unless you happen to look at services like snapshot, which run an import on every trigger. No idea how much it hurts them if only the Release files change, need to find out.
> Well I think snapshot is its own construction site, isn't it? IIRC, snapshot ships the old Release files, and thus everything older than a few weeks is anyway considered invalid, right? And doesn't it also use the old GPG keys?

I don't care here what snapshot ships. Wrong point to look at. Its import runs are costly, and it gets ALL of the mirror runs.

-- 
bye, Joerg
liw: er, *not* what I meant, is what I meant

signature.asc Description: PGP signature
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Thu, Sep 25, 2014 at 11:21 PM, Christoph Anton Mitterer wrote:
> Well, I think snapshot is its own construction site, isn't it?

snapshot is a read-only (modulo cosmic rays and removal of
non-redistributable things) historical record; files in it will not be
modified to re-sign with newer keys nor to update Valid-Until. Updating
the Release files more often will simply mean slightly more disk space
used for the extra Release files. Depending on the update frequency,
the quantity of data is probably too little to make any significant
difference in the disk usage of the snapshot service, so nothing to
worry about IMO.

-- 
bye, pabs
https://wiki.debian.org/PaulWise
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On 13616 March 1977, Christoph Anton Mitterer wrote:

[ Not doing a full quote, but keeping quite a bit of context for
debian-devel readers ]

> As Jakub Wilk pointed out[1], these are the current validity periods
> for Release files:
>
>   unstable, experimental:            7 days
>   testing:                           7 days
>   wheezy:                            no limit
>   wheezy(-proposed)-updates:         7 days
>   wheezy/updates at security.d.o:    10 days
>   wheezy-backports:                  7 days
>   squeeze:                           no limit
>   squeeze(-proposed)-updates:        7 days
>   squeeze/updates at security.d.o:   10 days
>   squeeze-lts:                       7 days
>
> IMHO all of them are far too long. Maintainers and our Security Team
> usually do a great job of providing fixes for security issues ASAP.
> But even if fixes are incorporated only hours after an issue is
> disclosed, an attacker can mount a downgrade attack for 7-10 days and
> trick a system into not seeing these new packages. Such a downgrade
> attack is very easy to perform as soon as one can MitM, and we must
> expect that not only powerful groups like the NSA and friends are
> able to do this. Since many unattended systems (especially on the
> stable branches) are more or less automatically updated, and since an
> attacker who can MitM can likely also block any security announcement
> mails, users/admins have no chance to take note of such updates being
> available for 7-10 days.
>
> I'd suggest reducing the validity to at most 1 day in all cases.
> Actually I'd choose much smaller values if this causes no other
> problems.

Technically we could go down to 1 second; validtime is expressed in
seconds in our setup.

> Many users run unstable/testing as their normal system, so it's not
> enough to only tighten the periods for the stable branches. My
> proposal would be something like this:
>
>   unstable/testing:                         4-12 hours
>   [wheezy|squeeze]/updates at security.d.o: 1-6 hours

I'm not sure going below a dinstall cycle is useful. Probably even two.
Having it expire before the new stuff even got a chance to get out is
not a good idea, IMO.
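For context, the rollback window under discussion is simply the span between a Release file's Date and Valid-Until fields. A minimal sketch of the check apt performs (the header excerpt and field values below are illustrative, not apt's actual implementation):

```python
from datetime import datetime, timedelta, timezone

# Toy excerpt of a Release file header (values are made up for illustration).
release_header = """\
Origin: Debian
Suite: unstable
Date: Sat, 13 Sep 2014 14:51:22 UTC
Valid-Until: Sat, 20 Sep 2014 14:51:22 UTC
"""

def parse_fields(text):
    """Parse 'Field: value' lines into a dict."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    return fields

def parse_release_date(value):
    # Release files use RFC 2822-style dates with an explicit 'UTC' zone.
    dt = datetime.strptime(value, "%a, %d %b %Y %H:%M:%S %Z")
    return dt.replace(tzinfo=timezone.utc)

fields = parse_fields(release_header)
date = parse_release_date(fields["Date"])
valid_until = parse_release_date(fields["Valid-Until"])

# The rollback window: how long an attacker who can MitM may keep
# replaying this stale-but-still-valid Release file to a victim.
window = valid_until - date
assert window == timedelta(days=7)
```

With the 7-day value currently used for unstable, a captured Release file stays replayable for a full week after it was generated.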
Also, going down to such small intervals means we MUST re-sign even if
there is no update at all in the archive (so an extra cronjob, just to
be sure). That's no problem for the main archive, where there is always
enough going on, but security can go much longer without an update
(which is why such a (weekly) cronjob exists on security).

That is technically not a big problem. Unless you happen to look at
services like snapshot, which run an import on every trigger. No idea
how much it hurts them if only the Release files change; need to find
out. The same goes for any service that starts $whatever-heavy-action
on a mirror trigger: if they don't check that nothing else changed,
they may waste loads of CPU. (Of course we would also force users to
update way more often.)

So, now, CC-ing debian-devel to get more input on what a good validity
time for the Release files would be. I'm happy to change it to whatever
is deemed best, but instead of just changing it and waiting for
fallout, I would like some more input up front.

-- 
bye, Joerg
Yeah, patching debian/rules sounds like changing shoes while running
the 100 meters track. -- Michael Koch
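The "don't waste CPU when only Release was re-signed" check for trigger-driven services could be as simple as comparing a digest of everything except the re-signed files between runs. A hypothetical sketch (nothing like this is confirmed to exist in snapshot; all names here are invented for illustration):

```python
import hashlib
from pathlib import Path

# Files that change on every re-sign even when no package was uploaded.
RESIGN_ONLY = {"Release", "Release.gpg", "InRelease"}

# Where the digest from the previous import run is remembered.
STATE_FILE = Path("last-import.digest")

def archive_digest(dist_dir: Path) -> str:
    """Digest of all metadata files except the periodically re-signed ones."""
    h = hashlib.sha256()
    for path in sorted(dist_dir.rglob("*")):
        if path.is_file() and path.name not in RESIGN_ONLY:
            h.update(path.name.encode())
            h.update(path.read_bytes())
    return h.hexdigest()

def import_needed(dist_dir: Path) -> bool:
    """True if anything other than the Release files changed since last run."""
    digest = archive_digest(dist_dir)
    previous = STATE_FILE.read_text() if STATE_FILE.exists() else ""
    STATE_FILE.write_text(digest)
    return digest != previous
```

A trigger handler would then call `import_needed()` first and return early when only the signatures were refreshed, so more frequent re-signing would not multiply the cost of the heavy import runs.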
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Hi

On Sunday 14 September 2014, Joerg Jaspert wrote:
> On 13616 March 1977, Christoph Anton Mitterer wrote:
>> As Jakub Wilk pointed out[1], these are the current validity periods
>> for Release files: [...]
>> I'd suggest reducing the validity to at most 1 day in all cases.
>> Actually I'd choose much smaller values if this causes no other
>> problems.
> Technically we could go down to 1 second; validtime is expressed in
> seconds in our setup.

Please consider that too short intervals (24h might still work, but
that is hard at the limit) make non-primary (cron-based) mirrors
basically impossible. That includes local mirrors used for systems that
can't connect to the internet directly, and potentially even setups
used for (personal) archive-wide rebuilds or debian-cd related tasks.
Intervals below an hour, besides probably invalidating even most
primary mirrors, are likely to make apt-get update expire before it has
finished downloading the metadata for all repos on slower internet
connections.

Decreasing these validity cycles too much would force many of these
uses to ignore expiry times altogether - or to re-sign a local archive
mirror with longer periods (which is not exactly a reasonable task for
most users, nor for anything that involves debootstrapping). I guess
most users would opt for the first option instead, which won't help
anyone...

Personally I think 24 hours (better something like 26-28 hours, to
allow some overlap for secondary/tertiary/local mirrors that are only
updated daily) would be the technical limit that might be possible, but
anything shorter than a week (or at the very least three days) would
already significantly impact many valid use cases where local mirroring
and/or a fixed archive state is required.
While there might be an argument for decreasing the expiry times for
security.d.o, perhaps even down to a day or at least half a day[1], the
negative net impact on all normal archives (especially testing and
unstable) would IMHO far outweigh any potential security improvements.
Just look at the common advice given for expired signatures on
snapshot.d.o: most suggestions are to use a global

  apt-get -o Acquire::Check-Valid-Until=false update

or to set

  Acquire::Check-Valid-Until 0;

in the apt configuration. For these reasons I would advise against
changing the current intervals, and especially against anything in the
hour or single-day range.

Regards
	Stefan Lippers-Hollmann

[1] and even for security.d.o I don't believe that anything below at
least 2-3 days would be a good idea.
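For what it's worth, apt offers a middle ground between honouring Valid-Until strictly and disabling the check outright: apt.conf(5) documents Acquire::Max-ValidTime and Acquire::Min-ValidTime, which cap or extend the accepted validity window locally, in seconds relative to the Release file's Date field. A sketch of such a local override (the file name and the 3-day value are illustrative, not recommendations):

```
// /etc/apt/apt.conf.d/99validity (illustrative name)

// The blunt workaround cited above -- it also disables all
// rollback protection, so it is commented out here:
// Acquire::Check-Valid-Until "false";

// Gentler: accept Release files for up to 3 days (259200 s) after
// their Date field, even if the archive's Valid-Until is shorter.
Acquire::Min-ValidTime "259200";
```

Something along these lines would let users of daily-synced local mirrors add their own slack without the archive having to pick one validity period that fits everyone.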
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
On Sun, Sep 14, 2014 at 09:52:00PM +0200, Joerg Jaspert wrote:
> Also, going down to such small intervals means we MUST re-sign, even
> if there is no update at all in the archive (so an extra cronjob,
> just to be sure). That's no problem in the main archive, there is
> always enough going on, but security can go way longer without an
> update (which is why such a (weekly) cronjob exists on security).

Unless ftp-master is down because of breakage, and then all Debian
systems will start showing warnings because the re-sign did not take
place. Resilience through not serving the world directly has some
value. I guess these days we have a mirror that's fairly up to date
which we could potentially start serving from once the key is restored
(I assume it is not synced). Not sure if that's true for
security-master.

Kind regards
Philipp Kern
Bug#752450: ftp.debian.org: please consider to strongly tighten the validity period of Release files
Package: ftp.debian.org
Severity: important
Tags: security

Dear ftp masters,

I've thought about this before but then forgot it again, and it came
back to my mind during the recent thread[0] about security that I
started on debian-devel.

As Jakub Wilk pointed out[1], these are the current validity periods
for Release files:

  unstable, experimental:            7 days
  testing:                           7 days
  wheezy:                            no limit
  wheezy(-proposed)-updates:         7 days
  wheezy/updates at security.d.o:    10 days
  wheezy-backports:                  7 days
  squeeze:                           no limit
  squeeze(-proposed)-updates:        7 days
  squeeze/updates at security.d.o:   10 days
  squeeze-lts:                       7 days

IMHO all of them are far too long. Maintainers and our Security Team
usually do a great job of providing fixes for security issues ASAP.
But even if fixes are incorporated only hours after an issue is
disclosed, an attacker can mount a downgrade attack for 7-10 days and
trick a system into not seeing these new packages. Such a downgrade
attack is very easy to perform as soon as one can MitM, and we must
expect that not only powerful groups like the NSA and friends are able
to do this. Since many unattended systems (especially on the stable
branches) are more or less automatically updated, and since an attacker
who can MitM can likely also block any security announcement mails,
users/admins have no chance to take note of such updates being
available for 7-10 days.

I'd suggest reducing the validity to at most 1 day in all cases.
Actually I'd choose much smaller values if this causes no other
problems. Many users run unstable/testing as their normal system, so
it's not enough to only tighten the periods for the stable branches. My
proposal would be something like this:

  unstable/testing:                         4-12 hours
  [wheezy|squeeze]/updates at security.d.o: 1-6 hours

For the others, it depends how security updates are distributed, i.e.
since they come via [wheezy|squeeze]/updates at security.d.o, it
probably does not make much sense to have such short times for wheezy
and squeeze themselves. Not sure about wheezy(-proposed)-updates,
squeeze(-proposed)-updates, wheezy-backports and squeeze-lts.

Cheers,
Chris.

btw: I'll CC the security team and the debian-lts guys, and mark the
bug as affecting release.debian.org... at least these are hopefully
the responsible people according to [1].

[0] https://lists.debian.org/debian-devel/2014/06/msg00171.html
[1] https://lists.debian.org/debian-devel/2014/06/msg00407.html