Re: Bug#923300: ITP: golang-github-openshift-imagebuilder -- Builds Dockerfile using the Docker client
On Thu, 28 Feb 2019 at 17:53:37 -0500, Reinhard Tartler wrote:
> Upstream doesn't appear to be willing to upgrade to a new version (quote
> from the bug above: "[...] I really don't want to [...]"). Looking at how
> this package is using the vendored library, it seems openshift/imagebuilder
> is using a rather specific subset of the docker code, some of which possibly
> shouldn't have been exposed in the first place. Therefore, I'm inclined to
> follow upstream and vendor this library.

I agree that this sounds like a de facto fork of the vendored library, more than a convenience code copy.

> I wonder whether it wouldn't actually be more appropriate to create a
> tarball with the vendored library and ship it in the debian/ subdirectory.

You could consider using a multiple-.orig-tarball package in 3.0 (quilt) format? See for example yquake2 (a relatively simple example which bundles together https://github.com/yquake2/yquake2 and https://github.com/yquake2/ctf) or llvm-toolchain-7 (a more elaborate example with multiple subprojects).

    smcv
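Simon's suggestion can be illustrated concretely: in a 3.0 (quilt) source package, a secondary upstream tarball is named with an `orig-<component>` suffix, and dpkg-source unpacks it into a subdirectory named after the component. A minimal sketch with placeholder files (the component name `docker` is a hypothetical choice for the vendored library, not something from the thread):

```shell
# Hypothetical file layout for a multiple-.orig-tarball 3.0 (quilt) package.
# dpkg-source would unpack the orig-docker tarball into a docker/
# subdirectory of the unpacked source tree.
PKG=golang-github-openshift-imagebuilder
VER=1.0+git20190212.3682349
mkdir -p demo
touch "demo/${PKG}_${VER}.orig.tar.gz" \
      "demo/${PKG}_${VER}.orig-docker.tar.gz"
ls demo
```

With that layout, `dpkg-source -x` reassembles both tarballs into a single source tree, so the vendored code stays clearly separated from the main upstream tarball.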
Work-needing packages report for Mar 1, 2019
The following is a listing of packages for which help has been requested through the WNPP (Work-Needing and Prospective Packages) system in the last week.

Total number of orphaned packages: 1393 (new: 1)
Total number of packages offered up for adoption: 165 (new: 3)
Total number of packages requested help for: 57 (new: 0)

Please refer to https://www.debian.org/devel/wnpp/ for more information.

The following packages have been orphaned:

   freeradius (#923034), orphaned 5 days ago
     Description: high-performance and highly configurable RADIUS server
     Reverse Depends: freeradius freeradius-config freeradius-dbg freeradius-dhcp freeradius-iodbc freeradius-krb5 freeradius-ldap freeradius-memcached freeradius-mysql freeradius-postgresql (6 more omitted)
     Installations reported by Popcon: 1595
     Bug Report URL: https://bugs.debian.org/923034

1392 older packages have been omitted from this listing, see https://www.debian.org/devel/wnpp/orphaned for a complete list.

The following packages have been given up for adoption:

   numlockx (#923349), offered 2 days ago
     Description: enable NumLock in X11 sessions
     Reverse Depends: ltsp-client
     Installations reported by Popcon: 2943
     Bug Report URL: https://bugs.debian.org/923349

   osc (#923351), offered 2 days ago
     Description: OpenSUSE (buildsystem) commander
     Reverse Depends: obs-utils osc-plugins-dput
     Installations reported by Popcon: 136
     Bug Report URL: https://bugs.debian.org/923351

   rpm (#923352), offered 2 days ago
     Description: package manager for RPM
     Reverse Depends: alien createrepo debugedit dose-distcheck dose-extra dtrx git-buildpackage-rpm grr-server koji-common koji-servers (23 more omitted)
     Installations reported by Popcon: 18681
     Bug Report URL: https://bugs.debian.org/923352

162 older packages have been omitted from this listing, see https://www.debian.org/devel/wnpp/rfa_bypackage for a complete list.
For the following packages help is requested:

   autopkgtest (#846328), requested 820 days ago
     Description: automatic as-installed testing for Debian packages
     Reverse Depends: autodeb-worker debci-worker
     Installations reported by Popcon: 1179
     Bug Report URL: https://bugs.debian.org/846328

   balsa (#642906), requested 2713 days ago
     Description: An e-mail client for GNOME
     Installations reported by Popcon: 699
     Bug Report URL: https://bugs.debian.org/642906

   broadcom-sta (#886599), requested 416 days ago (non-free)
     Description: Broadcom STA Wireless driver (non-free)
     Installations reported by Popcon: 1894
     Bug Report URL: https://bugs.debian.org/886599

   cargo (#860116), requested 688 days ago
     Description: Rust package manager
     Reverse Depends: cargo-vendor dh-cargo
     Installations reported by Popcon: 891
     Bug Report URL: https://bugs.debian.org/860116

   cyrus-sasl2 (#799864), requested 1254 days ago
     Description: authentication abstraction library
     Reverse Depends: 389-ds-base 389-ds-base-legacy-tools 389-ds-base-libs adcli autofs-ldap cairo-dock-mail-plug-in claws-mail claws-mail-acpi-notifier claws-mail-address-keeper claws-mail-archiver-plugin (114 more omitted)
     Installations reported by Popcon: 200063
     Bug Report URL: https://bugs.debian.org/799864

   dee (#831388), requested 958 days ago
     Description: model to synchronize multiple instances over DBus
     Reverse Depends: dee-tools gir1.2-dee-1.0 libdee-dev zeitgeist-core
     Installations reported by Popcon: 60074
     Bug Report URL: https://bugs.debian.org/831388

   developers-reference (#759995), requested 1643 days ago
     Description: guidelines and information for Debian developers
     Installations reported by Popcon: 10749
     Bug Report URL: https://bugs.debian.org/759995

   devscripts (#800413), requested 1248 days ago
     Description: scripts to make the life of a Debian Package maintainer easier
     Reverse Depends: apt-build apt-listdifferences aptfs arriero autodeb-worker brz-debian bzr-builddeb customdeb debci debian-builder (30 more omitted)
     Installations reported by Popcon: 12582
     Bug Report URL: https://bugs.debian.org/800413

   docker.io (#908868), requested 166 days ago
     Description: Linux container runtime
     Reverse Depends: golang-docker-dev golang-github-fsouza-go-dockerclient-dev golang-github-google-cadvisor-dev golang-github-samalba-dockerclient-dev kubernetes-node subuser whalebuilder
     Installations reported by Popcon: 1555
     Bug Report URL: https://bugs.debian.org/908868

   ed (#886643), requested 416 days ago
     Description: classic UNIX line edit
Bug#923300: ITP: golang-github-openshift-imagebuilder -- Builds Dockerfile using the Docker client
Package: wnpp
Severity: wishlist
Owner: Reinhard Tartler

* Package name    : golang-github-openshift-imagebuilder
  Version         : 1.0+git20190212.3682349-1
  Upstream Author : OpenShift
* URL             : https://github.com/openshift/imagebuilder
* License         : Apache-2.0
  Programming Lang: Go
  Description     : Builds Dockerfile using the Docker client

This library supports using the Dockerfile syntax to build OCI- and Docker-compatible images, without invoking a container build command such as buildah bud or docker build. It is intended to give clients more control over how they build container images, including:

• Instead of building one layer per line, run all instructions in the same container
• Set HostConfig settings like network and memory controls that are not available when running container builds
• Mount external files into the build that are not persisted as part of the final image (i.e. "secrets")
• If there are no RUN commands in the Dockerfile, the container is created and committed, but never started.

The final image should be 99.9% compatible with regular container builds, but bugs are always possible. This is a build dependency of the buildah tool.

A particular challenge with this package is that it vendors in an old version of the "docker" library. This has been reported by one of the maintainers of the buildah project as https://github.com/openshift/imagebuilder/issues/116. Upstream doesn't appear to be willing to upgrade to a new version (quote from the bug above: "[...] I really don't want to [...]"). Looking at how this package is using the vendored library, it seems openshift/imagebuilder is using a rather specific subset of the docker code, some of which possibly shouldn't have been exposed in the first place. Therefore, I'm inclined to follow upstream and vendor this library. My question here is what is the best way of implementing this.

I could update the Files-Excluded field in debian/copyright to exclude all entries but openshift/imagebuilder, and use mk-origtargz to strip the tarball. That would, however, lead to a *very* elaborate Files-Excluded field. I wonder whether it wouldn't actually be more appropriate to create a tarball with the vendored library and ship it in the debian/ subdirectory.

Has anyone else encountered this issue and/or could point to other packages solving the same or a similar issue?

Cheers,
-rt
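For comparison, the Files-Excluded route described above would look roughly like this in debian/copyright (the paths below are illustrative only; the real field would have to enumerate every vendored tree except the one being kept, which is exactly why it gets elaborate):

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Source: https://github.com/openshift/imagebuilder
Files-Excluded:
 vendor/github.com/docker
 vendor/github.com/example-dependency-one
 vendor/github.com/example-dependency-two
```

mk-origtargz (as run by uscan) reads Files-Excluded from debian/copyright and removes the listed paths when repacking the upstream tarball.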
Re: FYI/RFC: early-rng-init-tools
> "Thorsten" == Thorsten Glaser writes:

Thorsten> It’s not about what we feed to the kernel, but about the
Thorsten> property of it distributing the input evenly across the
Thorsten> output. The basic tenet here is that, if I have 128 bytes
Thorsten> of random input from the seed file, then, if I write the
Thorsten> output of an SRNG to both the kernel and back to the seed
Thorsten> file, each has about 128 bytes worth of entropy iff only
Thorsten> one is used (and somewhat less otherwise, but, again,
Thorsten> according to tytso and I believe even Ben in some Debbug
Thorsten> against OpenSSL, 16 or 32 bytes of input “should be
Thorsten> enough”).

Not exactly. The entropy is also capped at the internal state of your SRNG whatever it is. I'd strongly recommend finding a design where you don't use your own RNG and just use the kernel's RNG entirely. I think it complicates the analysis and may introduce problems to have multiple PRNGs involved: your security is the weakest link in this instance.

Let me see if I understand the issue that makes this hard for you. You want to avoid crediting the entropy before the new seed is written. You cannot call getrandom before you credit the entropy. So you cannot use getrandom to produce the new seed file. That's ugly, and if I've got it right, I at least understand your constraints. I'd probably do something like:

1) read the seed file
2) remove the seed file; fsync the directory.
3) write to kernel; credit the entropy
4) mix in other stuff
5) call getrandom and generate a new seed file.

You're in a bad position if you crash between reading and writing the seed file, but I think your other options are also bad.
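The five steps above can be sketched against the Linux interfaces involved. The RNDADDENTROPY ioctl takes a struct rand_pool_info (an int entropy count in bits, an int buffer size, then the bytes) and requires CAP_SYS_ADMIN, so only the struct packing below is exercised without privileges; the seed path is a placeholder, and crediting the full bit count is an assumption, not part of the proposal:

```python
import fcntl
import os
import struct

RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) from linux/random.h

def pack_rand_pool_info(seed: bytes, credit_bits: int) -> bytes:
    """Build the struct rand_pool_info payload for the RNDADDENTROPY ioctl."""
    return struct.pack("ii", credit_bits, len(seed)) + seed

def reseed(seed_path: str) -> None:
    """Sketch of the read / remove / credit / rewrite ordering (needs root)."""
    with open(seed_path, "rb") as f:        # 1) read the seed file
        seed = f.read()
    os.unlink(seed_path)                    # 2) remove it; fsync the directory
    dfd = os.open(os.path.dirname(seed_path) or ".", os.O_RDONLY)
    os.fsync(dfd)
    os.close(dfd)
    rfd = os.open("/dev/random", os.O_WRONLY)
    try:                                    # 3) write to kernel; credit entropy
        fcntl.ioctl(rfd, RNDADDENTROPY,
                    pack_rand_pool_info(seed, 8 * len(seed)))
    finally:
        os.close(rfd)
    # 4) other sources could be mixed in here by writing to /dev/urandom
    new_seed = os.getrandom(len(seed))      # 5) generate and persist a new seed
    with open(seed_path, "wb") as f:
        f.write(new_seed)
        f.flush()
        os.fsync(f.fileno())
```

Note the crash window mentioned above: between os.unlink() and the final write there is no seed file on disk, which is the trade-off accepted in exchange for never replaying a seed.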
Re: FYI/RFC: early-rng-init-tools
Ben Hutchings dixit:

>On Thu, 2019-02-28 at 14:52 +, Ian Jackson wrote:
>>Thorsten Glaser writes ("FYI/RFC: early-rng-init-tools"):
>>> • during postinst, creates a 128-byte random seed file in /
>>
>>Can you confirm that this is done with data from
>>getrandom(,,GRND_RANDOM) ? (Presumably with GRND_NONBLOCK too.)

No. The mechanism is to initialise a local arc4random instance, which is then used to stretch the seed into the kernel and the new content of the seed file.

>>This latter stuff is fine but not really critical IMO.

It’s merely used to attempt to make the boots more dissimilar.

>>> to initialise a stretching RNG (arc4random)
>>
>>Why are you feeding this through a separate hashing function rather
>>than letting the kernel PRNG's hasher do it ? I am seriously
>>unconvinced that arc4random is a good idea here.
>
>I agree.

This is the basic design decision (see above). I’m not reading from the kernel to write to the seed file because ① the entropy given might not be enough to make it initialise, depending on future kernel changes; this should not block boot, just make it better if it can ② I only let the kernel attribute the random bytes AFTER the seed file has been successfully written to disc with the new content, in order to guarantee that either the next boot differs or that the script does not make the system think it has more entropy than it really has

>>> ‣ writes between 32 and 256 bytes to /dev/urandom (but does not
>>> accredit them yet, just remembers the amount written)
>>
>>Why not write the whole lot ?

The whole lot of what? My only constraint here is that I need to write enough bits to justifiably accredit at least 128 bits, and that the total number of bits I accredit is at most the seed file size (128 bytes) times a trust factor smaller than 1 (using ⅞ here).

>>IMO you should promise to the kernel an amount of entropy exactly
>>equal to the size of the saved seed.

Why is it a problem to use less? This is commonly called a security margin.
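The accounting described here is easy to state as a function: credit six bits per byte written, capped at ⅞ of the 1024-bit seed (128 × 7 = 896 bits). A small sketch of that arithmetic:

```python
SEED_BYTES = 128
CAP_BITS = SEED_BYTES * 7  # 1024 seed bits * 7/8 trust factor = 896

def credited_bits(bytes_written: int) -> int:
    """Entropy credit: at best six bits per byte, capped at 7/8 of the seed."""
    return min(bytes_written * 6, CAP_BITS)

# The minimum write of 32 bytes credits 192 bits (comfortably above the
# 128 bits that must be justifiable); the maximum write of 256 bytes would
# be 1536 bits, cut down to the 896-bit cap.
```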
>>> I am fully aware that it is not suitable for everyone:
>>> • it’s Linux-only (the RNG on kFreeBSD is very different, and
>>> I didn’t even look into Hurd’s urandom translator)
>>
>>I think the same principles will apply. If your utility on BSD uses
>>BSD's /dev/random instead of Linux's getrandom() syscall, then it will

Some of the BSDs don’t even have /dev/random. It’s also not about getrandom(), which is not used (read() from /dev/urandom is), but about the RNDADDTOENTCNT ioctl, which is pretty much specific to the Linux RNG and the ones deriving from it (OpenBSD/MirBSD).

>>> • it means you trust a seed file and the arc4random algorithm (to make
>>> a uniform enough stream from the various seeds)
>>
>>The question is nothing to do with its uniformity. The kernel PRNG
>>will hash its input. I think you can feed it whatever.

It’s not about what we feed to the kernel, but about the property of it distributing the input evenly across the output. The basic tenet here is that, if I have 128 bytes of random input from the seed file, then, if I write the output of an SRNG to both the kernel and back to the seed file, each has about 128 bytes worth of entropy iff only one is used (and somewhat less otherwise, but, again, according to tytso and I believe even Ben in some Debbug against OpenSSL, 16 or 32 bytes of input “should be enough”).

>>> • it prevents you from booting with / mounted read-only
>>
>>I think this is an undesirable side effect. There is a tradeoff here:
>>[…]
>>I missed the beginning part. Is it not possible to defer all of this
>>to make it run just after / is mounted rw ?

No, because we’re already in parallel boot by then, and this would mean extremely invasive patching of all init scripts/systems. Do you see any practical concern with mounting / read-write before init, as opposed to rather quickly after init by one of the init scripts?
>> If the RC4 were critical to the security properties of your scheme,
>> then I would be making a much stronger complaint, because RC4 is (of
>> course) broken (when used as a supposedly cryptographically secure
>> pseudorandom bitstream generator).

“of course”? Not all applications of RC4 are broken. If you use it as a generator of random bytes after seeding it with suitably large, suitably random input (as opposed to an 8-byte IV that’s reused *cough*WEP*cough*) and throw away some amount of initial keystream (which cannot be done in things like TLS, because that would change the protocol), then, no, RC4 is not broken. (There’s additional throwing-away-stuff-randomly-during-execution in place. UTSL if interested.)

>The "arc4random" functions really use ChaCha20 today, anyway.

Not all of them. This utility ships its own implementation, which is derived from the original BSD one, so it’s still using aRC4, although with all the changes from newer security research applied. (I had hopes for Spritz, but it turns out it misbehaves wo
Re: FYI/RFC: early-rng-init-tools
Sebastian Andrzej Siewior dixit:

>so I have one older box that suffers from that. I installed haveged and
>it seemed to go away:

I tried that, after the suggestion to use haveged went up, but…

>As far as I understand, it would reach the "init done" state before
>systemd took over, right?

… this was not true for me. Not before init takes over, anyway (as haveged does not have any initramfs integration), but we’re talking about “crng init done” here, not “fast init done”. In my scenario, haveged was started much too late in the boot to be useful (after tomcat, even). But then, I use a non-parallel sysvinit startup. It’s fragile anyway; if you install more daemons, for example, your parallel systemd setup might also suddenly block before reaching the stage where haveged starts.

>So what is the advantage over using haveged?

haveged tries to use CPU jitter, in a way similar to jytter but on a much more massive scale, to gather entropy-ish data and write that to the kernel RNG. It, however, does that all the time, and not just a little bit. Basically, it’s an attempt to gather entropy, while early-rng-init-tools just takes what’s there during normal system runtime (which you have to provide yourself, at the very least before installing it, but sensibly also normally) and makes it available to the kernel earlier (this really ought to be done in the bootloader, even, but this at least improves on what we currently have). So, different concept (even though early-rng-init-tools also has a *small* gather function which, on x86, gathers a few bytes using the same mechanism… but the majority of randomness comes from the seed file). From what I’ve read about haveged, statements from its author, and looking at the source code (which begs to be customised for the exact CPU setup your machine has, as if it were a FORTRAN library), I’d prefer not to use haveged on my systems even if it would help.
I’m the owner of several Simtec EntropyKey sticks and use them, plus an entropy-distributing scheme over the network (with SSL/SSH), to add runtime entropy to machines lacking local sources (disk/keyboard/mouse). But, as I said, that’s just at runtime; early-rng-init-tools isn’t about that (except it updates the seed file later during runtime to mix in at least some more runtime entropy that the next boot will be able to use).

bye,
//mirabilos
-- 
“It is inappropriate to require that a time represented as
 seconds since the Epoch precisely represent the number of
 seconds between the referenced time and the Epoch.”
	-- IEEE Std 1003.1b-1993 (POSIX) Section B.2.2.2
Re: FYI/RFC: early-rng-init-tools
> "Ben" == Ben Hutchings writes:

>> > If the seed files used in two different boots are somewhat
>> > correlated, and the entropy estimation doesn't account for
>> > that, the output of /dev/random may also be somewhat correlated
>> > between the boots, which is not supposed to happen.
>>
>> I'm not sure what you mean by `somewhat correlated'.

Ben> I meant that they're not completely independent, so that
Ben> knowing one allows you to make some predictions about the
Ben> other. But if I've understood rightly, that doesn't matter as
Ben> long as the entropy estimation is right.

If the seed is secret and there is enough entropy, and some data (no matter how low entropy) is added to distinguish the boots, then no, you should not be able to make such predictions. Doing so is sufficient to prove the kernel PRNG is not a PRNG (at least assuming you can do so in polynomial time).

I think that may be what you mean when you say that if you've understood rightly, that doesn't matter. If so, then your understanding is correct.
Re: FYI/RFC: early-rng-init-tools
On Thu, 2019-02-28 at 14:52 +, Ian Jackson wrote:
[...]
> > to initialise a stretching RNG (arc4random)
>
> Why are you feeding this through a separate hashing function rather
> than letting the kernel PRNG's hasher do it ? I am seriously
> unconvinced that arc4random is a good idea here.

I agree.

[...]
> > • it means you trust a seed file and the arc4random algorithm (to make
> > a uniform enough stream from the various seeds)
>
> The question is nothing to do with its uniformity. The kernel PRNG
> will hash its input. I think you can feed it whatever.

Yes.

> If the RC4 were critical to the security properties of your scheme,
> then I would be making a much stronger complaint, because RC4 is (of
> course) broken (when used as a supposedly cryptographically secure
> pseudorandom bitstream generator).

The "arc4random" functions really use ChaCha20 today, anyway.

Ben.

> I hope you have found this review helpful.

-- 
Ben Hutchings
This sentence contradicts itself - no actually it doesn't.
Re: FYI/RFC: early-rng-init-tools
On Thu, 2019-02-28 at 11:56 +, Ian Jackson wrote:
> Ben Hutchings writes ("Re: FYI/RFC: early-rng-init-tools"):
> > On Mon, 2019-02-25 at 19:37 +0200, Uoti Urpala wrote:
> > > Generally you don't ever
> > > need to use /dev/random instead of /dev/urandom unless you make
> > > assumptions about cryptography failing.
> >
> > I think I agree with that, but there is no way to add entropy that
> > unblocks getrandom() without also unblocking /dev/random. If the seed
> > files used in two different boots are somewhat correlated, and the
> > entropy estimation doesn't account for that, the output of /dev/random
> > may also be somewhat correlated between the boots, which is not
> > supposed to happen.
>
> I'm not sure what you mean by `somewhat correlated'.

I meant that they're not completely independent, so that knowing one allows you to make some predictions about the other. But if I've understood rightly, that doesn't matter as long as the entropy estimation is right.

> Assuming that the random seed file is not copied, there is no weakness
> in copying entropy out of the kernel random pool and reinserting it on
> next boot, assuming that either (i) the entropy estimate provided on
> next boot is no bigger than the kernel's entropy counter at shutdown
> OR (ii) the kernel's PRNG was at any time properly seeded so that
> /dev/random unblocked.

I think this is right.

Ben.

-- 
Ben Hutchings
This sentence contradicts itself - no actually it doesn't.
Re: freeze and security fixes
Hi Jérémy,

On 28-02-2019 16:05, Jérémy Lal wrote:
> the documentations:
> https://release.debian.org/buster/freeze_policy.html
> https://www.debian.org/doc/manuals/developers-reference/ch05.html#bug-security
>
> leave me unsettled about what to do during freeze w.r.t. security
> uploads in testing.
> (for a known and upstream-fixed CVE):
> - should i just go ahead and upload to unstable ?

You can do that, but as the first link states, you'll have to get approval from the release team to have it migrate to testing:

"""
Note that when considering a request for an unblock, the changes between the (proposed) new version of the package in unstable and the version currently in testing are taken into account. If there is already a delta between the package in unstable and testing, the relevant changes are all of those between testing and the new package, not just the incremental changes from the previous unstable upload. This is also the case for changes that were already in unstable at the time of the freeze, but didn't migrate at that point. We strongly prefer changes that can be done via unstable instead of testing-proposed-updates. If there are unrelated changes in unstable, you should consider reverting these instead of making an upload to testing-proposed-updates.
"""

and

"""
Targeted fixes

A targeted fix is one with only the minimum necessary changes to resolve a bug. The freeze process is designed to make as few changes as possible to the forthcoming release. Uploading unrelated changes is likely to result in a request for you to revert them if you want an unblock.
"""

So make sure the only change in unstable with respect to testing is the fix for the security issue (and possibly other bugs that qualify for an unblock).

> - should i set urgency=high ?

This has no effect during the freeze period, so it doesn't matter (but it is consistent with normal behavior).

> - should i send a debdiff and wait for ack from security team first ?
Waiting for the ack from the security team is only needed for the upload to stable. For the fix in testing you need to make the same kind of request to the release team, except you don't have to wait for their ack before uploading to unstable (keeping in mind the requirements mentioned and linked above).

Paul
freeze and security fixes
Hi,

the documentations:
https://release.debian.org/buster/freeze_policy.html
https://www.debian.org/doc/manuals/developers-reference/ch05.html#bug-security

leave me unsettled about what to do during freeze w.r.t. security uploads in testing. For a known and upstream-fixed CVE:

- should i just go ahead and upload to unstable ?
- should i set urgency=high ?
- should i send a debdiff and wait for ack from security team first ?

Jérémy
Re: FYI/RFC: early-rng-init-tools
Thorsten Glaser writes ("FYI/RFC: early-rng-init-tools"):
> • during postinst, creates a 128-byte random seed file in /

Can you confirm that this is done with data from getrandom(,,GRND_RANDOM) ? (Presumably with GRND_NONBLOCK too.)

> – after the root filesystem is read-write,
>   ‣ reads from the seed file (128 bytes)
>   ‣ uses that and a number of other things (to make it differ)…
>     ← md5sum of dmesg
>     ← 3 random bytes written into initramfs during update-initramfs
>     ← the current time
>     ← a bit of kernel entropy (from AT_RANDOM auxvec, set anyway)
>     ← on x86, Jytter and the TSC

This latter stuff is fine but not really critical IMO.

>     to initialise a stretching RNG (arc4random)

Why are you feeding this through a separate hashing function rather than letting the kernel PRNG's hasher do it ? I am seriously unconvinced that arc4random is a good idea here.

>   ‣ writes between 32 and 256 bytes to /dev/urandom (but does not
>     accredit them yet, just remembers the amount written)

Why not write the whole lot ?

>   ‣ updates the seed file with 128 bytes from the SRNG
>   ‣ fsync(2)s and close(2)s the seed file to ensure the next run
>     will be different

Why not update the seed file with information from getrandom(GRND_RANDOM), instead ? (You would have to do this after the next step.)

>   ‣ now tells the kernel X random bits were added, where X is…
>     → the number of bytes written earlier…
>     → times 6 (so we count at best six bits per byte)…
>     → capped at 128*7 bits, to both not overwhelm and because our
>       seed is only 128 bytes in size, estimated conservatively

IMO you should promise to the kernel an amount of entropy exactly equal to the size of the saved seed.

> I am fully aware that it is not suitable for everyone:
> • it’s Linux-only (the RNG on kFreeBSD is very different, and
>   I didn’t even look into Hurd’s urandom translator)

I think the same principles will apply. If your utility on BSD uses BSD's /dev/random instead of Linux's getrandom() syscall, then it will
> • it prevents you from booting with / mounted read-only

I think this is an undesirable side effect. There is a tradeoff here: If you defer updating the seed to later, after you have already set off the kernel RNG, then any situation with multiple partial boots will reuse the seed, which is bad. Whether that's a practical problem depends on what exactly might be run in the meantime, with / still ro.

I missed the beginning part. Is it not possible to defer all of this to make it run just after / is mounted rw ?

> • it means you trust a seed file and the arc4random algorithm (to make
>   a uniform enough stream from the various seeds)

The question is nothing to do with its uniformity. The kernel PRNG will hash its input. I think you can feed it whatever.

If the RC4 were critical to the security properties of your scheme, then I would be making a much stronger complaint, because RC4 is (of course) broken (when used as a supposedly cryptographically secure pseudorandom bitstream generator).

I hope you have found this review helpful.

Regards,
Ian.

-- 
Ian Jackson   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is a private address which bypasses my fierce spamfilter.
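The seed-file creation asked about at the top of this review can be expressed with Python's binding for the syscall; os.GRND_NONBLOCK makes the call raise BlockingIOError instead of blocking when the pool is not yet initialised. This is a sketch of the idea under discussion, not the tool's actual code, and the path is a placeholder:

```python
import os

def write_seed_file(path: str, size: int = 128) -> bool:
    """Create a random seed file, failing fast if the kernel pool isn't ready."""
    try:
        # Adding os.GRND_RANDOM would draw from the blocking pool instead;
        # the suggestion above is getrandom(size, GRND_RANDOM | GRND_NONBLOCK).
        seed = os.getrandom(size, os.GRND_NONBLOCK)
    except BlockingIOError:
        return False  # entropy pool not initialised yet; caller may retry
    with open(path, "wb") as f:
        f.write(seed)
        f.flush()
        os.fsync(f.fileno())
    return True
```

The fsync matters for the same reason given elsewhere in the thread: the seed must be durably on disk before the next boot can be allowed to rely on it.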
Re: FYI/RFC: early-rng-init-tools
> "Ian" == Ian Jackson writes:

Ian> Sam Hartman writes ("Re: FYI/RFC: early-rng-init-tools"):
>> Ben Hutchings writes:
>> > [Someone:]
>> > > The additional entropy gathered is for extra safety; it is not
>> > > *depended* on for basic security assumptions.
>> > [...] It is, because the kernel is told to treat it as providing
>> > a certain number of bits of entropy.
>>
>> I see no problem crediting the secret stored across the reboot
>> with the entropy in the pool at the time of shutdown.

Ian> Indeed.

Ian> AIUI the reason given for not doing this by default is that
Ian> nowadays many installations are VMs of some kind which may be
Ian> cloned between shutdown and startup.

Right, and I'm not talking about changing the default, simply saying that having a simple way to change that default on a given system by installing a package is important.

>> I agree that the credits for the entropy of the additional
>> information added may be too high.
>>
>> I'm skeptical that the actual entropy credits matter much once
>> you have *enough*, but I agree that the /dev/random interface
>> does depend on that, and the proposal as described may be
>> violating that assumption.

Ian> Linux /dev/random's notion that there is any difference between
Ian> `enough' entropy and `more' is wrong. In particular its idea
Ian> that taking PRNG output out of /dev/random could cause
Ian> degradation of any kind is wrong.

I absolutely agree with you, and I think a lot of people in the Linux community agree with you, thus the getrandom syscall.

I'm not a designer of cryptographic primitives, but I am qualified to evaluate their use in protocols. Having learned from my own mistakes and others', I'm very nervous of violating the explicit security assumptions or claims of an interface. If you asked me whether we should make available an interface like /dev/random that cared about entropy beyond "enough," I'd say "absolutely not."
However, given that we have such an interface, should we violate its assumptions? I can't see the harm. I've been burned by the harm I didn't see too many times before. I'd do it on my system. I might or might not flag it in a security review of someone else's design depending on circumstances. Once someone flags it--and Ben has flagged it--I'm reluctant to dismiss it without careful consideration. But since we think good enough is good enough, what's the big deal? Credit the secret seed across boot but don't credit much for the low entropy stuff we're mixing in additionally.
Re: FYI/RFC: early-rng-init-tools
Ben Hutchings writes ("Re: FYI/RFC: early-rng-init-tools"):
> On Mon, 2019-02-25 at 19:37 +0200, Uoti Urpala wrote:
> > Generally you don't ever
> > need to use /dev/random instead of /dev/urandom unless you make
> > assumptions about cryptography failing.
>
> I think I agree with that, but there is no way to add entropy that
> unblocks getrandom() without also unblocking /dev/random. If the seed
> files used in two different boots are somewhat correlated, and the
> entropy estimation doesn't account for that, the output of /dev/random
> may also be somewhat correlated between the boots, which is not
> supposed to happen.

I'm not sure what you mean by `somewhat correlated'.

Assuming that the random seed file is not copied, there is no weakness in copying entropy out of the kernel random pool and reinserting it on next boot, assuming that either (i) the entropy estimate provided on next boot is no bigger than the kernel's entropy counter at shutdown OR (ii) the kernel's PRNG was at any time properly seeded so that /dev/random unblocked.

Ian.

-- 
Ian Jackson   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is a private address which bypasses my fierce spamfilter.
Re: FYI/RFC: early-rng-init-tools
Sam Hartman writes ("Re: FYI/RFC: early-rng-init-tools"):
> Ben Hutchings writes:
> > [Someone:]
> > > The additional entropy gathered is for extra safety; it is not
> > > *depended* on for basic security assumptions.
> > [...]
> > It is, because the kernel is told to treat it as providing
> > a certain number of bits of entropy.
>
> I see no problem crediting the secret stored across the reboot with the
> entropy in the pool at the time of shutdown.

Indeed.

AIUI the reason given for not doing this by default is that nowadays many installations are VMs of some kind which may be cloned between shutdown and startup. This makes some kind of sense, although I am not sure that it is right to break a lot of embedded systems for the benefit of people who do that without care for cryptographic seeds. Those are only one of several things that need fixing up or working around when cloning a VM image.

> I agree that the credits for the entropy of the additional information
> added may be too high.
>
> I'm skeptical that the actual entropy credits matter much once you have
> *enough*, but I agree that the /dev/random interface does depend on
> that, and the proposal as described may be violating that assumption.

Linux /dev/random's notion that there is any difference between `enough' entropy and `more' is wrong. In particular its idea that taking PRNG output out of /dev/random could cause degradation of any kind is wrong. IMO this is a serious and longstanding design defect in Linux's /dev/random (which incidentally is not present in FreeBSD's /dev/random, which has had contact with actual cryptographers; and it is not present in at least one of the newer syscall APIs, but for reasons best known to Linux developers, there is no simple /dev device with the correct semantics).

So, it is of no consequence if Thorsten's scheme violates this /dev/random entropy-counting principle. I have a PhD in computer security - in particular, in cryptographic protocol design. But my knowledge is rather out of date since I left academia and I have not reviewed Thorsten's design in detail.

Thanks,
Ian.

-- 
Ian Jackson   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is a private address which bypasses my fierce spamfilter.