[Touch-packages] [Bug 2057671] Re: Rename the ubuntu-advantage-tools package
Subscribing ubuntu-release as per FFE policy. This bug affects the noble cloud image release.

** Also affects: software-properties (Ubuntu Noble)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to software-properties in
Ubuntu.
https://bugs.launchpad.net/bugs/2057671

Title:
  Rename the ubuntu-advantage-tools package

Status in software-properties package in Ubuntu:
  New
Status in software-properties source package in Noble:
  New

Bug description:
  From ubuntu-advantage-tools v31, the package has been renamed to
  ubuntu-pro-client. The current package is now a transitional package
  pointing to ubuntu-pro-client. software-properties depends on
  ubuntu-advantage-tools, so we should rename the dependency to
  ubuntu-pro-client to avoid having the transitional package as a
  dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/software-properties/+bug/2057671/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
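For context, the proposed fix boils down to swapping one dependency name in software-properties' packaging. The fragment below is purely illustrative (the actual field, neighbouring packages, and any version constraints in the real debian/control may differ):

```
 Package: software-properties-common
-Depends: ${misc:Depends}, ..., ubuntu-advantage-tools
+Depends: ${misc:Depends}, ..., ubuntu-pro-client
```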
[Touch-packages] [Bug 2055194] [NEW] Invalid version: 'None' when re-installing/upgrading python-apt using pip when wheel module installed
Public bug reported:

[ Impact ]

 * It is not possible to upgrade or re-install python-apt using pip from
   the git+ssh://git.launchpad.net/ubuntu/+source/python-apt git repo if
   it is already installed and the wheel module is installed too.
 * On initial install it is also assigned the incorrect version `0.0.0`.
 * This applies to Mantic, Jammy and earlier releases. It has been fixed
   in Noble.

[ Test Plan ]

 * Attempt a re-install of python-apt.

[ Where problems could occur ]

 * Being unable to install python-apt
 * Being unable to upgrade python-apt
 * Being unable to re-install python-apt

[ Other Info ]

 * This has been resolved upstream @
   https://salsa.debian.org/apt-team/python-apt/-/commit/765772fad2d3611b952b6ad416c6b765739a902a
   and in the `2.4.y` branch @
   https://salsa.debian.org/apt-team/python-apt/-/tree/2.4.y?ref_type=heads
 * You can also work around this by setting the environment variable
   `DEBVER=2.4.11`
 * Reproducer:

```
❯ python3 -m venv venv
❯ source venv/bin/activate
❯ python3 -m pip list
Package    Version
---------- -------
pip        23.2
setuptools 68.1.2
❯ python3 -m pip install --upgrade git+ssh://git.launchpad.net/ubuntu/+source/python-apt@ubuntu/jammy-updates
Collecting git+ssh://git.launchpad.net/ubuntu/+source/python-apt@ubuntu/jammy-updates
  Cloning ssh://git.launchpad.net/ubuntu/+source/python-apt (to revision ubuntu/jammy-updates) to ./pip-req-build-t7t8vnsu
  Running command git clone --filter=blob:none --quiet ssh://git.launchpad.net/ubuntu/+source/python-apt /tmp/pip-req-build-t7t8vnsu
  warning: filtering not recognized by server, ignoring
  warning: filtering not recognized by server, ignoring
  Running command git checkout -b ubuntu/jammy-updates --track origin/ubuntu/jammy-updates
  Switched to a new branch 'ubuntu/jammy-updates'
  branch 'ubuntu/jammy-updates' set up to track 'origin/ubuntu/jammy-updates'.
  Resolved ssh://git.launchpad.net/ubuntu/+source/python-apt to commit f043e6bd4d2842d6693d48e0c68c7a63bd3be9ac
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: python-apt
  Building wheel for python-apt (pyproject.toml) ... done
  Created wheel for python-apt: filename=python_apt-0.0.0-cp311-cp311-linux_x86_64.whl size=1808770 sha256=472ff7d9bcdad50912dc42499d68bc5e958b9843c355c157ccf920021728c205
  Stored in directory: /tmp/pip-ephem-wheel-cache-nyc3gs5x/wheels/c7/db/d7/21a72eaa5719ab3ab1ca242d5d0e08aca4aa67eaddb3c9b8fa
Successfully built python-apt
Installing collected packages: python-apt
Successfully installed python-apt-0.0.0
❯ python3 -m pip list
Package    Version
---------- -------
pip        23.2
python-apt 0.0.0
setuptools 68.1.2

# install wheel
❯ python3 -m pip install wheel
Collecting wheel
  Obtaining dependency information for wheel from https://files.pythonhosted.org/packages/c7/c3/55076fc728723ef927521abaa1955213d094933dc36d4a2008d5101e1af5/wheel-0.42.0-py3-none-any.whl.metadata
  Using cached wheel-0.42.0-py3-none-any.whl.metadata (2.2 kB)
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Installing collected packages: wheel
Successfully installed wheel-0.42.0
❯ python3 -m pip list
Package    Version
---------- -------
pip        23.2
python-apt 0.0.0
setuptools 68.1.2
wheel      0.42.0

# Now try re-install
❯ python3 -m pip install --upgrade git+ssh://git.launchpad.net/ubuntu/+source/python-apt@ubuntu/jammy-updates
Collecting git+ssh://git.launchpad.net/ubuntu/+source/python-apt@ubuntu/jammy-updates
  Cloning ssh://git.launchpad.net/ubuntu/+source/python-apt (to revision ubuntu/jammy-updates) to ./pip-req-build-lk856i6s
  Running command git clone --filter=blob:none --quiet ssh://git.launchpad.net/ubuntu/+source/python-apt /tmp/pip-req-build-lk856i6s
  warning: filtering not recognized by server, ignoring
  warning: filtering not recognized by server, ignoring
  Running command git checkout -b ubuntu/jammy-updates --track origin/ubuntu/jammy-updates
  Switched to a new branch 'ubuntu/jammy-updates'
  branch 'ubuntu/jammy-updates' set up to track 'origin/ubuntu/jammy-updates'.
  Resolved ssh://git.launchpad.net/ubuntu/+source/python-apt to commit f043e6bd4d2842d6693d48e0c68c7a63bd3be9ac
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [34 lines of output]
      Traceback (most recent call last):
        File "/tmp/venv/lib/python3.11/site-packages/setuptools/_normalization.py", line 59, in safe_version
          return str(packaging.version.Version(v))
        File "/tmp/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/version.py", line 198, in __init__
          raise InvalidVersion(f"Invalid version: '{version}'")
      setuptools.extern.packaging.version.InvalidVersion: Invalid version: 'None'
      During handling of the above exception,
```
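The failure mode can be sketched in a few lines of Python. This is a simplified, hypothetical model of the build's version discovery (the real setup.py logic differs): when neither `DEBVER` nor a usable changelog is found, the version ends up as `None`, and normalising it with `str()` yields the literal string `'None'`, which is not a valid PEP 440 version, hence setuptools raising `InvalidVersion: Invalid version: 'None'`.

```python
import os


def deb_version(environ=os.environ):
    """Hypothetical sketch of version discovery: prefer the DEBVER
    environment variable, otherwise parse debian/changelog (elided
    here), otherwise fall back to None."""
    version = environ.get("DEBVER")
    if version:
        return version
    # changelog parsing elided; on the failing path nothing is found
    return None


# str() normalisation turns a missing version into the string 'None'
print(str(deb_version({})))               # prints: None
print(deb_version({"DEBVER": "2.4.11"}))  # prints: 2.4.11
```

This also shows why exporting `DEBVER=2.4.11` works as a workaround: it short-circuits the fallback path entirely.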
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
Thanks all. Marking as "Won't Fix" and marked the MP as Rejected.

** Changed in: ubuntu-meta (Ubuntu Noble)
       Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ubuntu-meta in Ubuntu.
https://bugs.launchpad.net/bugs/2051572

Title:
  Always preseed core and snapd snap in server seed

Status in ubuntu-meta package in Ubuntu:
  New
Status in ubuntu-meta source package in Noble:
  Won't Fix

Bug description:
  In removing the LXD snap from preseeding in the server seed for Ubuntu
  24.04 as part of LP #2051346 [1], we also removed the snapd snap and
  the core22 snap. This means that any subsequent snap install, like
  LXD, will take much longer than expected for a non-minimized image.

  Time taken to install the LXD snap using the lxd-installer package
  without snapd and core22 preinstalled/seeded:

  ```
  ubuntu@cloudimg:~$ time sudo lxd --version
  Installing LXD snap, please be patient.
  5.19

  real	0m29.107s
  user	0m0.006s
  sys	0m0.005s
  ```

  Time taken to install the LXD snap using the lxd-installer package
  with snapd and core22 already installed:

  ```
  ubuntu@cloudimg:~$ time sudo lxd --version
  Installing LXD snap, please be patient.
  5.19

  real	0m15.034s
  user	0m0.005s
  sys	0m0.005s
  ```

  This is a significant difference for a workload we intend to remain a
  core tested and tracked workload. As such I propose we re-introduce
  the core22 and snapd snaps to our seed.

  LXD does intend to move to the core24 snap as their base, as I'm sure
  snapd does too, so when that happens we need to update the preseeded
  core snap.

  This bug is to track the work of making that change in the server seed
  @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/tree/server#n69

  [1] https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051346

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051572/+subscriptions
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
With only the snapd snap preseeded I get boot times very similar to:

```
ubuntu@cloudimg:~$ systemd-analyze
Startup finished in 3.757s (kernel) + 12.458s (userspace) = 16.216s
graphical.target reached after 12.061s in userspace.
```

Which shows we are still taking a boot-time hit of ~1.5 seconds... however, with the snapd snap preseeded we see a much faster "first snap" install: from ~21 seconds down to ~17 seconds.

To summarise:

 * Definitely faster boot without any snaps preseeded: ~1.5 seconds in my tests
 * Still slower boot with only the snapd snap preseeded, but first snap install time is faster by ~4 seconds

I have been unable to get any data on LXD usage numbers on cloud/server to help. So the question remains: do we want a faster boot by ~1.5 seconds (based on my test environment) at the expense of first snap install speed improvements of between ~4 seconds (if only the snapd snap is preseeded) and up to ~15 seconds (if a core snap is also preseeded)?

@vorlon @jchittum @paelzer given the above findings, are you still -1 on any snap preseeding? Based on the data, I vote not to preseed any snaps.
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
> * boot times w/ and w/o preseeded snaps

Without preseeded snaps:

```
ubuntu@cloudimg:~$ systemd-analyze
Startup finished in 3.609s (kernel) + 11.026s (userspace) = 14.636s
graphical.target reached after 10.642s in userspace.
```

With preseeded snapd and core22 snaps:

```
ubuntu@cloudimg:~$ systemd-analyze
Startup finished in 3.733s (kernel) + 12.566s (userspace) = 16.300s
graphical.target reached after 12.175s in userspace.
```

This is on a powerful AMD Threadripper machine using qemu. I ran this 3 times for each image with similar results, so we are definitely seeing a boot speed improvement without the snaps preseeded.

I will try with only the snapd snap too for comparison.
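The boot-time delta quoted in this thread can be recomputed from the `systemd-analyze` summaries above. A small illustrative helper (not part of any Ubuntu tooling) that parses the total and takes the difference:

```python
import re


def total_startup_seconds(summary):
    """Extract the total startup time from a `systemd-analyze` summary
    line such as:
    'Startup finished in 3.733s (kernel) + 12.566s (userspace) = 16.300s'
    """
    match = re.search(r"=\s*([0-9.]+)s", summary)
    if not match:
        raise ValueError(f"unrecognised summary line: {summary!r}")
    return float(match.group(1))


# Figures from the measurements quoted above
without = total_startup_seconds(
    "Startup finished in 3.609s (kernel) + 11.026s (userspace) = 14.636s")
with_preseed = total_startup_seconds(
    "Startup finished in 3.733s (kernel) + 12.566s (userspace) = 16.300s")
print(f"preseeding cost: {with_preseed - without:.3f}s")  # prints: preseeding cost: 1.664s
```

So the "~1.5 seconds" figure discussed in the thread is, for this pair of runs, 1.664s.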
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
@vorlon

> Also, statically seeding a particular base snap is bad form, as soon
> as lxd upgrades its base you lose your performance benefit and have to
> play catch-up in a stable release.

Yes, I don't like this either. Even if we do change it later to core24, the expectations people have for snap startup/install time for snaps other than LXD which use core22 will change/break.

> If "time to initialize lxd" is your metric, I think you're measuring
> the wrong thing :)

Well, it isn't just LXD though - any snap install/init will be affected by this change.

@jchittum

> * the amount of time/slow down for `lxd` booting from `lxd-installer`
>   in a "no snaps pre-seeded" setting

We do have this. In initial tests it was 15 seconds vs 29 seconds.

> * the _usage_ and _expectation_ about the speed here.

I will try to get this info from the LXD team.

> * boot times w/ and w/o preseeded snaps

I will gather this info.

@paelzer

> I never heard someone complaining that LXD takes a bit there, but
> every second of boot time seems to be valued highly.
> Now that we had to reduce this to the lxd-installer everywhere (due to
> LP #2051346) it is really worth to be re-evaluated. Thank you for
> driving this Phil!

Yes, I really want this to be a decision we know we are making and are making consciously.

> IMHO now that your first LXD command will take a bit longer already
> (due to fetching the LXD snap), the exact amount of that "a bit
> longer" (as being more by also fetching snapd and base) seems almost
> irrelevant as long as it is in the same ballpark.

Is 15 seconds vs 30 seconds really in the same ballpark?

> Furthermore as Simon showed (thanks), by snapd being a baseless snap
> we'd not even gain something by having that around already for the
> latter fetch of lxd by lxd-installer.

Not true. It still takes time to fetch and install the snapd snap - 6.738s in my test in qemu.

> I further appreciate John's comment that we should back up some of our
> current assumptions (how much will this slow down lxc interactions,
> how much will the boot speed gain) with some actual data.

I will continue to gather data on the boot speed implications.
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
Thank you for the detail, sdeziel.
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
> "other cloud cases that have preseeded snaps" (thinking like ec2 or oracle that have snapped cloud agent This isn't something we need to worry about as there will be no change in this case. If any agent snaps are preseeded then so too will a core snap and snapd snap. -- You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ubuntu-meta in Ubuntu. https://bugs.launchpad.net/bugs/2051572 Title: Always preseed core and snapd snap in server seed Status in ubuntu-meta package in Ubuntu: New Status in ubuntu-meta source package in Noble: New Bug description: In removing the LXD snap from preseeding in the server seed for Ubuntu 24.04 as part LP #2051346 [1] we also removed the snapd snap and the core22 snap. This means that are subsequent snap install, like LXD, will take much longer than expected for a non minimized image. Time taken to install LXD snap using the lxd-installer package without snapd and core22 preinstalled/seeded ``` ubuntu@cloudimg:~$ time sudo lxd --version Installing LXD snap, please be patient. 5.19 real 0m29.107s user 0m0.006s sys 0m0.005s ``` Time taken to install LXD snap using the lxd-installer package with snapd and core22 already installed. ``` ubuntu@cloudimg:~$ time sudo lxd --version Installing LXD snap, please be patient. 5.19 real 0m15.034s user 0m0.005s sys 0m0.005s ``` This is a significant difference and for a workload we intend to remain as a core tested and tracked workload. As such I propose we re- introduce core22 and snapd snaps to our seed. LXD do intend to move to the core24 snap as their base as I'm sure snapd does too so when that does happen we need to update the preseeded core snap. 
This bug is to track the work of making that change in the server seed @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu- seeds/+git/ubuntu/tree/server#n69 [1] https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051346 To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051572/+subscriptions -- Mailing list: https://launchpad.net/~touch-packages Post to : touch-packages@lists.launchpad.net Unsubscribe : https://launchpad.net/~touch-packages More help : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
> 15 seconds vs 30 seconds, on a thing that won't affect most cloud
> customers

I'll see if I can find the data to back this up.
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
If we don't preseed a core snap and snapd, it feels like we're failing to prioritise the performance of snaps on server/cloud. But if we make that trade-off consciously, knowing that we are prioritising boot speed, then that's fine, and we can add it to the noble release notes.
[Touch-packages] [Bug 2051572] Re: Always preseed core and snapd snap in server seed
Good points. I'll measure boot speed with and without a core snap preseeded and add it here.

Time to initialize any snap was my goal, with lxd as the example because it is such a popular snap.
[Touch-packages] [Bug 2051572] [NEW] Always preseed core and snapd snap in server seed
Public bug reported:

In removing the LXD snap from preseeding in the server seed for Ubuntu 24.04 as part of LP #2051346 [1], we also removed the snapd snap and the core22 snap. This means that any subsequent snap install, like LXD, will take much longer than expected for a non-minimized image.

Time taken to install the LXD snap using the lxd-installer package without snapd and core22 preinstalled/seeded:

```
ubuntu@cloudimg:~$ time sudo lxd --version
Installing LXD snap, please be patient.
5.19

real	0m29.107s
user	0m0.006s
sys	0m0.005s
```

Time taken to install the LXD snap using the lxd-installer package with snapd and core22 already installed:

```
ubuntu@cloudimg:~$ time sudo lxd --version
Installing LXD snap, please be patient.
5.19

real	0m15.034s
user	0m0.005s
sys	0m0.005s
```

This is a significant difference for a workload we intend to remain a core tested and tracked workload. As such I propose we re-introduce the core22 and snapd snaps to our seed.

LXD does intend to move to the core24 snap as their base, as I'm sure snapd does too, so when that happens we need to update the preseeded core snap.

This bug is to track the work of making that change in the server seed @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/tree/server#n69

[1] https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051346

** Affects: ubuntu-meta (Ubuntu)
     Importance: Undecided
       Assignee: Philip Roche (philroche)
         Status: New

** Affects: ubuntu-meta (Ubuntu Noble)
     Importance: Undecided
       Assignee: Philip Roche (philroche)
         Status: New

** Changed in: ubuntu-meta (Ubuntu)
       Assignee: (unassigned) => Philip Roche (philroche)

** Also affects: ubuntu-meta (Ubuntu Noble)
     Importance: Undecided
       Assignee: Philip Roche (philroche)
         Status: New
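To make the comparison in the report concrete, the `real` figures from the two `time` runs can be converted to seconds and compared. This is an illustrative helper, not part of any Ubuntu tooling:

```python
import re


def to_seconds(stamp):
    """Convert a shell `time` value like '0m29.107s' to seconds."""
    minutes, seconds = re.fullmatch(r"(\d+)m([0-9.]+)s", stamp).groups()
    return int(minutes) * 60 + float(seconds)


cold = to_seconds("0m29.107s")  # no snapd/core22 preseeded
warm = to_seconds("0m15.034s")  # snapd and core22 preseeded
print(f"extra wait: {cold - warm:.3f}s ({cold / warm:.2f}x slower)")
# prints: extra wait: 14.073s (1.94x slower)
```

That ~14 second, roughly 2x slowdown on the first `lxd` invocation is what the proposal to re-seed core22 and snapd aims to avoid.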
[Touch-packages] [Bug 2051346] Re: No longer preseed LXD snap to allow for LXD 5.20 release
This has been released in Noble version 1.529

** Changed in: ubuntu-meta (Ubuntu)
   Status: New => Fix Released

** Also affects: ubuntu-meta (Ubuntu Noble)
   Importance: Undecided
   Status: Fix Released

** Changed in: ubuntu-meta (Ubuntu Noble)
   Assignee: (unassigned) => Philip Roche (philroche)

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ubuntu-meta in Ubuntu.
https://bugs.launchpad.net/bugs/2051346

Title:
  No longer preseed LXD snap to allow for LXD 5.20 release

Status in ubuntu-meta package in Ubuntu:
  Fix Released
Status in ubuntu-meta source package in Noble:
  Fix Released

Bug description:
  With LXD 5.20 there is a license change to AGPL and it has been
  decided to no longer seed the snap in Ubuntu 24.04 and later, and to
  seed the lxd-installer package instead.

  This bug is to track the work of making that change in the server seed
  @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/tree/server#n60

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051346/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 2051346] [NEW] No longer preseed LXD snap to allow for LXD 5.20 release
Public bug reported:

With LXD 5.20 there is a license change to AGPL and it has been decided to no longer seed the snap in Ubuntu 24.04 and later, and to seed the lxd-installer package instead.

This bug is to track the work of making that change in the server seed @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/tree/server#n60

** Affects: ubuntu-meta (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  With LXD 5.20 there is a license change to AGPL and it has been decided
- to no longer seed the snap in Ubuntu 22.04 and later and instead seed
+ to no longer seed the snap in Ubuntu 24.04 and later and instead seed
  the lxd-installer package instead.
  
  This bug is to track the work of making that change in the server seed
  @ https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/ubuntu/tree/server#n60

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ubuntu-meta in Ubuntu.
https://bugs.launchpad.net/bugs/2051346

Title:
  No longer preseed LXD snap to allow for LXD 5.20 release

Status in ubuntu-meta package in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/2051346/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 2038894] Re: Ubuntu 23.10 cloud images unexpected UDP listening port 5353
> a) You state that some policy says that no ports other than 22 should be open, which policy is that? Does it apply only to cloud images, or is it an Ubuntu policy in general?

This policy is detailed @ https://wiki.ubuntu.com/Security/Features#ports

> Default installations of Ubuntu must have no listening network services after initial install. Exceptions to this rule on desktop systems include network infrastructure services such as a DHCP client and mDNS (Avahi/ZeroConf, see [ZeroConfPolicySpec](https://wiki.ubuntu.com/ZeroConfPolicySpec) for implementation details and justification). For Ubuntu in the cloud, exceptions include network infrastructure services for the cloud and OpenSSH running with client public key and port access configured by the cloud provider. When installing Ubuntu Server, the administrator can, of course, select specific services to install beyond the defaults (e.g. Apache).

> Testing for this can be done with `netstat -an --inet | grep LISTEN | grep -v 127.0.0.1:` on a fresh install.

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/2038894

Title:
  Ubuntu 23.10 cloud images unexpected UDP listening port 5353

Status in cloud-images:
  New
Status in systemd package in Ubuntu:
  New
Status in systemd source package in Mantic:
  New

Bug description:

[Impact]

In the latest Ubuntu 23.10 cloud images we are seeing unexpected UDP listening port 5353.

By default and by policy, aside from port 22 there should be no other open ports on Ubuntu cloud images. Listening port 5353 is a regression.

[Test Plan]

Check that port 5353 is not open, and in particular that systemd-resolved is not listening on 5353.

This is what it looks like when systemd-resolved *is* listening on 5353:

```
$ ss --listening --no-header --tcp --udp --numeric
udp UNCONN 0 0        127.0.0.54:53     0.0.0.0:*
udp UNCONN 0 0     127.0.0.53%lo:53     0.0.0.0:*
udp UNCONN 0 0  10.154.0.17%ens4:68     0.0.0.0:*
udp UNCONN 0 0         127.0.0.1:323    0.0.0.0:*
udp UNCONN 0 0           0.0.0.0:5353   0.0.0.0:*
udp UNCONN 0 0             [::1]:323       [::]:*
udp UNCONN 0 0              [::]:5353      [::]:*
tcp LISTEN 0 4096  127.0.0.53%lo:53
```
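The [Test Plan] above can be mechanised. A hedged sketch, not the project's actual test: it takes the local-address column of the `ss` listing (here the sample output from the description, with fabricated spacing), drops loopback sockets plus the DHCP client port (68) and ssh (22), and whatever remains is an unexpected listener:

```shell
#!/bin/sh
# Sample from the bug description; a live check would instead use:
#   ss_output="$(ss --listening --no-header --tcp --udp --numeric)"
ss_output='udp UNCONN 0 0 127.0.0.54:53 0.0.0.0:*
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:*
udp UNCONN 0 0 10.154.0.17%ens4:68 0.0.0.0:*
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:*
udp UNCONN 0 0 [::1]:323 [::]:*
udp UNCONN 0 0 [::]:5353 [::]:*'

# Column 5 is the local address:port. Filter out loopback, DHCP and ssh.
unexpected=$(printf '%s\n' "$ss_output" | awk '{print $5}' \
  | grep -v -e '^127\.' -e '^\[::1\]' \
  | grep -v -e ':68$' -e ':22$')
printf 'unexpected listeners:\n%s\n' "$unexpected"
```

On the sample above this flags exactly the two mDNS sockets (`0.0.0.0:5353` and `[::]:5353`); on a policy-compliant image the filtered list should be empty.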
[Touch-packages] [Bug 2038894] Re: Ubuntu 23.10 cloud images unexpected UDP listening port 5353
> a) You state that some policy says that no ports other than 22 should be open, which policy is that? Does it apply only to cloud images, or is it an Ubuntu policy in general?

I will try to find the referenced policy.

> b) This is in mantic release at the moment, and switching that option back to "no" could regress users that were relying on this default. What exactly are we losing when we disable this service in this SRU?

This was added in version 253.5-1ubuntu1 [1] of systemd on 11 Jul 2023 in the devel release. It was not an intentional change to open port 5353. I am not entirely sure what we lose, but based on the systemd-resolved docs [2] we lose the ability to resolve .local domains:

> This resolver has a notion of the special ".local" domain used for MulticastDNS

> c) If this is only about cloud images, is the workaround in comment #4 something that could be added to the cloud image build process, or we really want to avoid that?

CPC are primarily concerned about cloud images, but enabling a new open port was an unintended consequence of the change and, I understand, not one that is desired.

> d) Are there specific security concerns with keeping this service enabled?

Yes. Google/GCE specifically have flagged this as an issue and a regression to have more than port 22 open.

[1] https://launchpad.net/ubuntu/+source/systemd/253.5-1ubuntu1
[2] https://www.freedesktop.org/software/systemd/man/latest/systemd-resolved.service.html

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/2038894

Title:
  Ubuntu 23.10 cloud images unexpected UDP listening port 5353

Status in cloud-images:
  New
Status in systemd package in Ubuntu:
  New
Status in systemd source package in Mantic:
  New
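For reference, `MulticastDNS=no` in resolved.conf is the documented systemd-resolved switch for this feature; whether that matches the workaround referenced in comment #4 is an assumption. A sketch that generates the drop-in in a temporary directory for illustration - on a real system the target would be `/etc/systemd/resolved.conf.d/`, followed by `systemctl restart systemd-resolved`:

```shell
#!/bin/sh
# Demo target directory; on a real system this would be
# /etc/systemd/resolved.conf.d (writing there requires root).
dropin_dir="$(mktemp -d)/resolved.conf.d"
mkdir -p "$dropin_dir"

# MulticastDNS=no disables the mDNS listener on UDP 5353.
cat > "$dropin_dir/disable-mdns.conf" <<'EOF'
[Resolve]
MulticastDNS=no
EOF

cat "$dropin_dir/disable-mdns.conf"
```

The trade-off is the one quoted above: with mDNS off, systemd-resolved can no longer resolve ".local" names.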
[Touch-packages] [Bug 2038567] Re: Disable restricting unprivileged change_profile by default, due to LXD latest/stable not yet compatible with this new apparmor feature
Cloud minimized and non-minimized images have now been tested with the 6.5.0-9 kernel from -proposed and pass our lxd-start-stop test suite, which had been failing and which prompted this whole thread.

+1

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/2038567

Title:
  Disable restricting unprivileged change_profile by default, due to LXD latest/stable not yet compatible with this new apparmor feature

Status in Release Notes for Ubuntu:
  New
Status in apparmor package in Ubuntu:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in lxd package in Ubuntu:
  Triaged
Status in snapd package in Ubuntu:
  New

Bug description:

Following the upgrade to the 6.5.0-7 kernel in mantic cloud images we are seeing a regression in our cloud image tests. The test runs the following:

```
lxd init --auto --storage-backend dir
lxc launch ubuntu-daily:mantic mantic
lxc info mantic
lxc exec mantic -- cloud-init status --wait
```

The `lxc exec mantic -- cloud-init status --wait` times out after 240s and will fail our test as a result.
I have been able to replicate in a local VM:

```
wget http://cloud-images.ubuntu.com/mantic/20231005/mantic-server-cloudimg-amd64.img
wget --output-document=launch-qcow2-image-qemu.sh https://gist.githubusercontent.com/philroche/14c241c086a5730481e24178b654268f/raw/7af95cd4dfc8e1d0600e6118803d2c866765714e/gistfile1.txt
chmod +x launch-qcow2-image-qemu.sh
./launch-qcow2-image-qemu.sh --password passw0rd --image ./mantic-server-cloudimg-amd64.img

cat > "./reproducer.sh" <<'EOF'
#!/bin/bash -eux
lxd init --auto --storage-backend dir
lxc launch ubuntu-daily:mantic mantic
lxc info mantic
lxc exec mantic -- cloud-init status --wait
EOF
chmod +x ./reproducer.sh

sshpass -p passw0rd scp -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -P ./reproducer.sh ubuntu@127.0.0.1:~/
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p ubuntu@127.0.0.1 sudo apt-get update
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p ubuntu@127.0.0.1 sudo apt-get upgrade --assume-yes
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -p ubuntu@127.0.0.1 ./reproducer.sh
```

The issue is not present with the 6.5.0-5 kernel, and the issue is present regardless of the container launched. I tried the jammy container to test this.

From my test VM:

```
ubuntu@cloudimg:~$ uname --all
Linux cloudimg 6.5.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 29 09:14:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@cloudimg:~$ uname --kernel-release
6.5.0-7-generic
```

This is a regression in our test that will block the 23.10 cloud image release next week.
To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu-release-notes/+bug/2038567/+subscriptions -- Mailing list: https://launchpad.net/~touch-packages Post to : touch-packages@lists.launchpad.net Unsubscribe : https://launchpad.net/~touch-packages More help : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online
I have also successfully verified that the -proposed amd64 kernel `6.5.0-7-generic` results in successful network configuration when tested using qemu on an amd64 host with older hardware (ThinkPad T460 with a 6th gen Intel i5, the same hardware on which we were able to reproduce the issue previously).

See https://people.canonical.com/~philroche/20231003-mantic-minimal-proposed-kernel/amd64/ for cloud-init logs, some debug output and a test image.

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in systemd package in Ubuntu:
  Triaged

Bug description:

Following a recent change from the linux-kvm kernel to the linux-generic kernel in the mantic minimized images, there is a reproducible bug where a guest VM does not have an IP address assigned as part of cloud-init provisioning.

This is easiest to reproduce when emulating arm64 on an amd64 host. The bug is a race condition, so there could exist virtualisation fast enough, on hardware fast enough, that the bug is not present, but in all my testing I have been able to reproduce it.

The latest mantic minimized images from http://cloud-images.ubuntu.com/minimal/daily/mantic/ force initrdless boot and have no initrd to fall back to. This bug is not present in the non-minimized/base images @ http://cloud-images.ubuntu.com/mantic/ as these boot with an initrd with the required drivers present for virtio-net.
Reproducer:

```
wget -O "launch-qcow2-image-qemu-arm64.sh" https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh
chmod +x ./launch-qcow2-image-qemu-arm64.sh
wget https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image ./livecd.ubuntu-cpc.img
```

You will then be able to log in with user `ubuntu` and password `passw0rd`.

You can run `ip a` and see that there is a network interface present (separate to `lo`) but no IP address has been assigned:

```
ubuntu@cloudimg:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
```

This is because when cloud-init is trying to configure network interfaces it doesn't find any, so it doesn't configure any. But by the time boot is complete the network interface is present, although cloud-init provisioning has already completed.
You can verify this by running `sudo cloud-init clean && sudo cloud-init init`. You can then see a successfully configured network interface:

```
ubuntu@cloudimg:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
       valid_lft 86391sec preferred_lft 86391sec
    inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
       valid_lft 86393sec preferred_lft 14393sec
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever
```

The bug is also reproducible with an amd64 guest on an amd64 host on older/slower hardware.

The suggested fixes while debugging this issue are:

* to include `virtio-net` as a built-in in the mantic generic kernel
* to understand what needs to change in cloud-init so that it can react to late additions of network interfaces

I will file a separate bug against cloud-init to address the race condition on emulated guests/older hardware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
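The race described in this bug is that cloud-init enumerates interfaces once, before the virtio NIC has registered. A minimal sketch of the poll-before-init idea, with the interface listing simulated (on a real system it would read /sys/class/net and then run `cloud-init clean && cloud-init init`):

```shell
#!/bin/sh
# Simulated interface listing: the NIC only "appears" on the third poll,
# mimicking the late device registration described above.
list_interfaces() {
  if [ "$1" -ge 3 ]; then printf 'lo\nenp0s1\n'; else printf 'lo\n'; fi
}

nic=""
attempt=1
while [ "$attempt" -le 10 ]; do
  # First non-loopback interface, if any has appeared yet.
  nic=$(list_interfaces "$attempt" | grep -v '^lo$' | head -n 1)
  [ -n "$nic" ] && break
  # A real implementation would `sleep 1` between polls.
  attempt=$((attempt + 1))
done
echo "detected interface: ${nic:-none}"
```

This only illustrates the timing problem; the actual fixes pursued in the bug were building virtio-net into the kernel and making cloud-init react to late interface additions.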
[Touch-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online
@xnox I have successfully verified that the -proposed arm64 kernel `6.5.0-7-generic` results in successful network configuration when tested using qemu on an amd64 host.

See https://people.canonical.com/~philroche/20231003-mantic-minimal-proposed-kernel/ for cloud-init logs, some debug output and a test image.

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in systemd package in Ubuntu:
  Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 1656801] Re: ntp: changing the default config from server to pool broke the dhcp hook
I have verified that ntp 1:4.2.8p4+dfsg-3ubuntu5.4 in xenial-proposed passes the test case outlined in the description above:

* Launch GCE instance
* Enable proposed
* Upgrade ntp
* Reboot
* Confirm `ntpq -p` returns only one entry

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ntp in Ubuntu.
https://bugs.launchpad.net/bugs/1656801

Title:
  ntp: changing the default config from server to pool broke the dhcp hook

Status in ntp package in Ubuntu:
  Fix Released
Status in ntp source package in Xenial:
  Fix Committed
Status in ntp package in Debian:
  Fix Released

Bug description:

In 1:4.2.8p3+dfsg-1, the default config was changed to "Use pool instead of server". This needs a corresponding update to /etc/dhcp/dhclient-exit-hooks.d/ntp, since the DHCP specified servers now get added to the default pool config instead of replacing them.

This affects Xenial only as the Yakkety build includes the upstream fix (1:4.2.8p7+dfsg-1).

Original Debian bugs: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809344 & https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806676

[Test Case]

philroche: This surfaced for me initially while testing on GCE. On GCE NTP servers are provided via DHCP, so the easiest test case is to launch an instance on GCE without our workaround configured.

One such image is "daily-ubuntu-ntpdebug-1604-xenial-v20170331" in project "ubuntu-os-cloud-devel".

To start an instance of this image:

`gcloud compute instances create daily-ubuntu-ntpdebug-1604-xenial-v20170331 --image-project ubuntu-os-cloud-devel --image "daily-ubuntu-ntpdebug-1604-xenial-v20170331"`

Then run `ntpq -p` which should, once ntp is patched, return only one entry 'metadata.google' and should not return any of the ubuntu NTP pools.
[Regression Potential] As noted in comments #1 and #2, this SRU might surface an issue if the user is receiving a broken set of NTP servers. And as noted in comment #4, some people might be hackishly using the distinction between spaces and tabs in the config file to trick our current hook. Debian has gotten rid of that distinction, so that hack will stop working on distro-upgrade anyway. The fix is *probably* going to help more people who unintentionally mixed the two up rather than the few that are relying on that bug. But that bit can be easily dropped from the SRU if ya like. To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1656801/+subscriptions -- Mailing list: https://launchpad.net/~touch-packages Post to : touch-packages@lists.launchpad.net Unsubscribe : https://launchpad.net/~touch-packages More help : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 1656801] Re: ntp: changing the default config from server to pool broke the dhcp hook
** Description changed:

  In 1:4.2.8p3+dfsg-1, the default config was changed to "Use pool
  instead of server". This needs a corresponding update to
  /etc/dhcp/dhclient-exit-hooks.d/ntp, since the DHCP specified servers
  now get added to the default pool config instead of replacing them.
  
  This affects Xenial only as the Yakkety build includes the upstream
  fix (1:4.2.8p7+dfsg-1).
  
  Original Debian busg
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809344 &
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806676
  
  [Test Case]
  
- mterry isn't sure on this. Robie or Philip, do you have simple
- instructions?
+ philroche: This surfaced for me initially while testing on GCE. On GCE
+ NTP servers are provided via DHCP so the easiest test case is to launch
+ an instance on GCE without our workaround configured.
+ 
+ One such image is "daily-ubuntu-ntpdebug-1604-xenial-v20170331" in
+ project "ubuntu-os-cloud-devel"
+ 
+ To start an instance of this image:
+ 
+ `gcloud compute instances create daily-ubuntu-ntpdebug-1604-xenial-v20170331 --image-project ubuntu-os-cloud-devel --image "daily-ubuntu-ntpdebug-1604-xenial-v20170331"`
+ 
+ Then run `ntpq -p` which should, once ntp is patched, return only one
+ entry 'metadata.google' and should not return any of the ubuntu NTP
+ pools.
  
  [Regression Potential]
  
  As noted in comments #1 and #2, this SRU might surface an issue if the
  user is receiving a broken set of NTP servers.
  
  And as noted in comment #4, some people might be hackishly using the
  distinction between spaces and tabs in the config file to trick our
  current hook. Debian has gotten rid of that distinction, so that hack
  will stop working on distro-upgrade anyway. The fix is *probably*
  going to help more people who unintentionally mixed the two up rather
  than the few that are relying on that bug. But that bit can be easily
  dropped from the SRU if ya like.
-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ntp in Ubuntu.
https://bugs.launchpad.net/bugs/1656801

Title:
  ntp: changing the default config from server to pool broke the dhcp hook

Status in ntp package in Ubuntu:
  Fix Released
Status in ntp source package in Xenial:
  Triaged
Status in ntp package in Debian:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1656801/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 1656801] Re: ntp: changing the default config from server to pool broke the dhcp hook
Hi mterry. RE: test case steps.

This surfaced for me initially while testing on GCE. On GCE NTP servers are provided via DHCP, so the easiest test case is to launch an instance on GCE without our workaround configured.

One such image is "daily-ubuntu-ntpdebug-1604-xenial-v20170331" in project "ubuntu-os-cloud-devel".

To start an instance of this image:

`gcloud compute instances create daily-ubuntu-ntpdebug-1604-xenial-v20170331 --image-project ubuntu-os-cloud-devel --image "daily-ubuntu-ntpdebug-1604-xenial-v20170331"`

Then run `ntpq -p` which should, once ntp is patched, return only one entry 'metadata.google' and should not return any of the ubuntu NTP pools.

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ntp in Ubuntu.
https://bugs.launchpad.net/bugs/1656801

Title:
  ntp: changing the default config from server to pool broke the dhcp hook

Status in ntp package in Ubuntu:
  Fix Released
Status in ntp source package in Xenial:
  Triaged
Status in ntp package in Debian:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1656801/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
[Touch-packages] [Bug 1656801] Re: ntp: changing the default config from server to pool broke the dhcp hook
Please find attached a patch for this bug. This is the same fix as upstream (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806676 and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809344).

The patch adds "pool" to the "server" and "peer" list as well as handling tabs and spaces in ntp.conf better.

** Bug watch added: Debian Bug tracker #806676
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806676
** Bug watch added: Debian Bug tracker #809344
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809344

** Patch added: "lp-1656801-fix-dhcp-hook.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1656801/+attachment/4806257/+files/lp-1656801-fix-dhcp-hook.debdiff

-- 
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ntp in Ubuntu.
https://bugs.launchpad.net/bugs/1656801

Title:
  ntp: changing the default config from server to pool broke the dhcp hook

Status in ntp package in Ubuntu:
  Fix Released
Status in ntp source package in Xenial:
  Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1656801/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
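The attached debdiff is not reproduced in this thread, but the shape of the fix it describes can be sketched: drop existing `server`, `peer` and, now, `pool` lines (tolerating tabs as well as spaces) before appending the DHCP-supplied servers. This is only an illustration of that idea, not the actual hook from the patch; the ntp.conf contents and server name below are made up:

```shell
#!/bin/sh
# A made-up ntp.conf resembling the Xenial default (pool-based).
ntp_conf="$(mktemp)"
cat > "$ntp_conf" <<'EOF'
driftfile /var/lib/ntp/ntp.drift
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
	server old.example.com
EOF

# Servers as they would arrive via the DHCP option (hypothetical value).
dhcp_ntp_servers="metadata.google.internal"

# Drop server/peer/pool lines whether indented with spaces or tabs, then
# append the DHCP-provided servers. The pre-fix hook only matched "server"
# and "peer", so the default "pool" lines survived and stayed in rotation.
new_conf="$(sed -E '/^[[:space:]]*(server|peer|pool)[[:space:]]/d' "$ntp_conf")"
for s in $dhcp_ntp_servers; do
  new_conf="$new_conf
server $s iburst"
done
printf '%s\n' "$new_conf"
```

After this rewrite only the DHCP-provided server remains, which is exactly what the `ntpq -p` check in the test case looks for.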
[Touch-packages] [Bug 1656801] Re: ntp: changing the default config from server to pool broke the dhcp hook
The issue of having no fallback if a broken set of NTP servers is received also exists in Yakkety now, and existed in Xenial prior to the server/pool changeover in 1:4.2.8p3+dfsg-1. I agree that ideally there would be a fallback if the received NTP servers were broken, but this bug itself is a regression from the functionality prior to 1:4.2.8p3+dfsg-1.
[Touch-packages] [Bug 1656801] [NEW] ntp: changing the default config from server to pool broke the dhcp hook
Public bug reported:

In 1:4.2.8p3+dfsg-1, the default config was changed to "Use pool instead of server". This needs a corresponding update to /etc/dhcp/dhclient-exit-hooks.d/ntp, since the DHCP-specified servers now get added to the default pool config instead of replacing it. This affects Xenial only, as the Yakkety build includes the upstream fix (1:4.2.8p7+dfsg-1).

Original Debian bugs: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809344 & https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=806676

** Affects: ntp (Ubuntu)
   Importance: Undecided
   Status: New
[Touch-packages] [Bug 1652695] Re: GCE default NTP configuration uses both leap-smeared and true UTC sources
This seems to be somewhat related to https://bugs.launchpad.net/cpc-gce/+bug/1639089 (only affects Xenial). In summary, the "pool" entries in ntp.conf should be commented out by gce-cloud-config, but when ntp.conf transitioned from "server" to "pool", gce-cloud-config was not updated. I will be fixing lp:1639089 in the GCE Xenial image build process initially, followed by a fix for gce-cloud-config. Note that gce-cloud-config is being deprecated in favour of gce-compute-image-packages, which does not have this bug.

--
You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to ntp in Ubuntu.
https://bugs.launchpad.net/bugs/1652695

Title: GCE default NTP configuration uses both leap-smeared and true UTC sources

Status in cloud-images: New
Status in ntp package in Ubuntu: Incomplete

Bug description:
A default install of cloud images on GCE results in an NTP configuration which will likely be problematic during leap seconds, because it uses both Google's leap-smeared time source and the public pool, which uses true UTC. These should not be combined in the default configuration. The principle of least astonishment would suggest that a non-standard feature like leap smearing should be disabled by default. This behaviour has been consistent in the 3 different GCE regions in which I've recently created instances.

root@instance-4:~# cat /etc/cloud/build.info
build_name: server
serial: 20161221
root@instance-4:~# cat /etc/ntp.conf
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift

# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# Specify one or more NTP servers.

# Use servers from the NTP Pool Project.
Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool 2.ubuntu.pool.ntp.org iburst
pool 3.ubuntu.pool.ntp.org iburst

# Use Ubuntu's ntp server as a fallback.
pool ntp.ubuntu.com

# ... comments snipped ...

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Needed for adding pool entries
restrict source notrap nomodify noquery

# ... comments snipped ...

# Google NTP source. This was added as part of the gce-cloud-config package.
server metadata.google.internal

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1652695/+subscriptions
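The interim image-build fix described in the comment above amounts to commenting out the Ubuntu pool entries so that metadata.google.internal remains the sole time source. A minimal sketch of that idea, assuming the change is applied as a shell step during image build (`disable_pool_entries` is a hypothetical name; the actual gce-cloud-config change may differ):

```shell
#!/bin/sh
# Sketch: comment out every "pool" line (allowing leading tabs or spaces)
# in the given ntp.conf, and ensure the Google metadata time source is
# present, so leap-smeared and true-UTC sources are not mixed.
disable_pool_entries() {
    conf=$1
    sed -E -i 's/^([[:space:]]*pool[[:space:]])/#\1/' "$conf"
    grep -q '^server metadata\.google\.internal' "$conf" ||
        echo 'server metadata.google.internal iburst' >> "$conf"
}
```

During image build this would be invoked as `disable_pool_entries /etc/ntp.conf`, leaving the commented pool lines in place for users who deliberately want to switch back to true-UTC sources.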