Hi all, while we were on the topic of tagging the beta tomorrow, I wanted to raise a concern with tarball generation. Specifically, the various parallel tarball compressors (pixz, pbzip2) seem to append extraneous data to their output.
tar is smart enough to ignore this extra data, but it can break decompressing our tarballs in a pipeline (e.g. xz --decompress --stdout kdelibs-4.foo.tar.xz | tar xf -): tar closes its stdin once it has read the end-of-archive marker, which causes xz to write its excess data to a broken pipe.

This probably doesn't annoy many people (aside from the obvious problem for source-based distros like Gentoo, e.g. https://bugs.gentoo.org/show_bug.cgi?id=410861), but if the speedup is not very substantial, it would be better to use plain xz or bzip2 and avoid the problem entirely. (This is done by adjusting the value of "compressors" in the pack release script, in case you're wondering.)

It might still be possible to get some concurrency benefit by batching up modules to "pack" and then running 4 or 8 (or however many CPUs are around) separate pack scripts at once, or by firing off a pack while starting to tag the next module, etc.

Thoughts? To be very clear, I don't think this should affect creating the beta tarballs at all, but if we choose to avoid the parallelizing compressors, hopefully that would happen in time for the release candidates.

Regards,
 - Michael Pyne
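P.S. A rough sketch of the batching idea, in case it helps the discussion: run several single-threaded xz pack jobs concurrently via xargs -P instead of one parallel compressor. The "modA"/"modB" directories below are stand-ins for real KDE modules, and -P is a GNU/BSD xargs extension (not strict POSIX); this is only an illustration, not what the pack script actually does.

```shell
# Stand-in "modules" so the example is self-contained.
mkdir -p modA modB
echo example > modA/file
echo example > modB/file

# Run up to 4 single-threaded pack jobs at once; each produces a plain,
# padding-free .tar.xz using ordinary xz.
printf '%s\n' modA modB |
    xargs -P 4 -I {} sh -c 'tar cf - {} | xz > {}.tar.xz'

# The result decompresses cleanly in a pipeline, unlike pixz output:
xz --decompress --stdout modA.tar.xz | tar tf -
```

The same pattern extends to however many CPUs are around by changing the -P value.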
_______________________________________________
release-team mailing list
release-team@kde.org
https://mail.kde.org/mailman/listinfo/release-team