This bug was fixed in the package apt - 1.2.29

---------------
apt (1.2.29) xenial; urgency=medium
  * Set DPKG_FRONTEND_LOCKED when running {pre,post}-invoke scripts.
    Some post-invoke scripts install packages, which fails because the
    environment variable is not set. This sets the variable for all
    three kinds of scripts, {pre,post}-invoke and pre-install-pkgs, but
    we will only allow post-invoke at a later time. (LP: #1796808)

apt (1.2.28) xenial; urgency=medium

  [ Julian Andres Klode ]
  * apt.conf.autoremove: Add linux-cloud-tools to list (LP: #1698159)
  * Add support for dpkg frontend lock (Closes: #869546) (LP: #1781169)
  * Set DPKG_FRONTEND_LOCKED as needed when doing selection changes
  * http: Stop pipeline after close only if it was not filled before
    (LP: #1794957)
  * pkgCacheFile: Only unlock in destructor if locked before
    (LP: #1794053)
  * Update libapt-pkg5.0 symbols for frontend locking

  [ David Kalnischkies ]
  * Support records larger than 32kb in 'apt show' (Closes: #905527)
    (LP: #1787120)

 -- Julian Andres Klode <juli...@ubuntu.com>  Tue, 09 Oct 2018 12:35:03 +0200

** Changed in: apt (Ubuntu Xenial)
       Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apt in Ubuntu.
https://bugs.launchpad.net/bugs/1794957

Title:
  pipelining on archive.u.c aborts after 101 packages

Status in apt package in Ubuntu:
  Fix Released
Status in apt source package in Xenial:
  Fix Released
Status in apt source package in Bionic:
  Fix Released

Bug description:
  [Impact]
  Downloading many packages from archive.ubuntu.com or some other
  mirrors seems to close the connection after every 100 or so packages.
  APT prior to 1.7.0~rc1 (commit
  df696650b7a8c58bbd92e0e1619e956f21010a96) treats a connection closure
  with a 200 response as meaning that the server does not support
  pipelining, and hence disables it for any further downloads. On
  high-speed connections with higher latency, this can cause a severe
  reduction in usable bandwidth. For example, I saw speeds drop from
  40 MB/s to 15 MB/s due to this.
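  The decision at issue can be sketched as follows. This is an
  illustrative Python model only, not apt's actual C++ implementation;
  the function and parameter names are hypothetical:

  ```python
  # Hedged sketch of the pipelining-retention heuristic described in this
  # bug. All names here are illustrative, not taken from apt's source.

  def keep_pipelining(server_closed_early: bool,
                      files_pipelined_ok: int,
                      fixed: bool) -> bool:
      """Decide whether to pipeline on the next connection to a mirror."""
      if not server_closed_early:
          # The server never closed mid-transfer; keep pipelining.
          return True
      if fixed:
          # Fixed behaviour: a close after at least 3 successfully
          # pipelined files is treated as routine connection recycling,
          # not as proof that the server cannot pipeline.
          return files_pipelined_ok >= 3
      # Old behaviour: any connection closure with a 200 response
      # disabled pipelining for all further downloads.
      return False

  # A mirror that recycles the connection after ~101 packages:
  print(keep_pipelining(True, 101, fixed=False))  # old apt gives up: False
  print(keep_pipelining(True, 101, fixed=True))   # fixed apt continues: True
  ```

  Under this model, the old heuristic permanently disables pipelining
  after the first recycled connection, which matches the slowdown
  observed after the first 101 packages.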
  The fix ensures that we continue pipelining if the previous
  connection to the server successfully retrieved at least 3 files with
  pipelining enabled.

  [Test case]
  Pick a package that would cause a large number (200-300) of packages
  to be installed; I used plasma-desktop and xubuntu-desktop, for
  example. Run apt install -d $package. Ensure that after the first 101
  packages the progress does not slow down - you should not see a lot
  of "working" in the progress output, and the speed should be
  substantially higher. Don't run this in a container set up by
  cloud-init, as cloud-init disables pipelining; or remove
  /etc/apt/apt.conf.d/90cloud-init-pipelining first (see bug 1794982).

  Requirements:
  * A high-speed, medium-to-high-latency connection (e.g. 400 Mbit/s at
    30 ms RTT is enough); or just increase latency, e.g.
    sudo tc qdisc add dev wlp61s0 root netem delay 300ms
    until you see the slowdown
  * Not a terribly slow CPU, as we'd get slowed down by hashing
    otherwise

  [Regression potential]
  The fix is isolated to the code enabling/disabling pipelining on
  subsequent connections. It could cause more pipelining to be tried on
  servers that are not particularly good at it, but that can deal with
  3 items correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1794957/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp