[Kernel-packages] [Bug 2063315] Re: Suspend & Resume functionality broken/times out in GCE

2024-04-24 Thread Philip Roche
** Also affects: linux-gcp (Ubuntu Noble)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-gcp in Ubuntu.
https://bugs.launchpad.net/bugs/2063315

Title:
  Suspend & Resume functionality broken/times out in GCE

Status in Release Notes for Ubuntu:
  New
Status in linux-gcp package in Ubuntu:
  New
Status in linux-gcp source package in Noble:
  New

Bug description:
  [Impact]
   
  Suspend/Resume capability is broken in all Noble images with kernel version 6.8.0-1007-gcp.

  GCE offers the capability to "Suspend" a VM to conserve power and lower
  costs when the instance is not in use [0]. It uses an ACPI S3 signal to
  tell the guest to suspend to RAM. This capability no longer works in the
  latest kernel, failing with the following error:

  ```
  Operation type [suspend] failed with message "Instance suspend failed due to 
guest timeout."
  ```

  which points to the following [1].
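
  A rough reproduction sketch from the host side using the gcloud CLI (the
  instance name and zone below are placeholders, and the guest-side check is
  just a generic way to see what the kernel logged around the ACPI S3 request,
  not the exact commands from our testing):

  ```
  # Ask GCE to suspend the instance; on the affected 6.8.0-1007-gcp kernel this
  # eventually fails with the "guest timeout" error quoted above.
  gcloud compute instances suspend my-noble-vm --zone=us-central1-a

  # Resume is only meaningful once suspend actually succeeds.
  gcloud compute instances resume my-noble-vm --zone=us-central1-a

  # Inside the guest, inspect what the kernel logged around the suspend attempt.
  journalctl -k -b | grep -iE 'acpi|suspend|s3'
  ```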

  

  Refs:

  [0]: https://cloud.google.com/compute/docs/instances/suspend-resume-instance

  [1]: https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-suspend-resume#there_was_a_guest_timeout

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-release-notes/+bug/2063315/+subscriptions




[Kernel-packages] [Bug 2061851] Re: linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new apparmor profiles/features

2024-04-22 Thread Philip Roche
** Changed in: snapd (Ubuntu Noble)
   Status: New => Invalid

** Changed in: linux-aws (Ubuntu Noble)
   Status: New => Fix Released

** Changed in: linux-azure (Ubuntu Noble)
   Status: New => Fix Released

** Changed in: linux-gcp (Ubuntu Noble)
   Status: New => Fix Released

** Changed in: linux-ibm (Ubuntu Noble)
   Status: New => Fix Released

** Changed in: linux-oracle (Ubuntu Noble)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2061851

Title:
  linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new
  apparmor profiles/features

Status in chrony package in Ubuntu:
  Invalid
Status in linux package in Ubuntu:
  Fix Released
Status in linux-aws package in Ubuntu:
  Fix Released
Status in linux-azure package in Ubuntu:
  Fix Released
Status in linux-gcp package in Ubuntu:
  Fix Released
Status in linux-ibm package in Ubuntu:
  Fix Released
Status in linux-oracle package in Ubuntu:
  Fix Released
Status in snapd package in Ubuntu:
  Invalid
Status in chrony source package in Noble:
  Invalid
Status in linux source package in Noble:
  Fix Released
Status in linux-aws source package in Noble:
  Fix Released
Status in linux-azure source package in Noble:
  Fix Released
Status in linux-gcp source package in Noble:
  Fix Released
Status in linux-ibm source package in Noble:
  Fix Released
Status in linux-oracle source package in Noble:
  Fix Released
Status in snapd source package in Noble:
  Invalid

Bug description:
  * Canonical Public Cloud discovered that `chronyc -c sources` now fails with 
`506 Cannot talk to daemon` with the latest kernels. We are seeing this in 
linux-azure and linux-gcp kernels (6.8.0-1005.5)
  * Disabling AppArmor completely (`sudo systemctl stop apparmor`) results in 
no regression and `chronyc -c sources` returns as expected
  * Disabling only the apparmor profile for `chronyd` results in no regression 
and `chronyc -c sources` returns as expected (see the sketch after this list)
  * There are zero entries in dmesg when this occurs
  * There are zero entries in dmesg when this occurs even if the apparmor 
profile for `chronyd` is placed in complain mode instead of enforce mode
  * We changed the time server from the internal GCP metadata.google.internal 
to the Ubuntu time server ntp.ubuntu.com with no change in behaviour
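
  A minimal sketch of the per-profile isolation steps described in the list
  above, assuming the stock profile path shipped by the chrony package
  (/etc/apparmor.d/usr.sbin.chronyd), the apparmor-utils tooling and the
  `chrony` systemd unit; this is illustrative, not the exact commands used in
  our testing:

  ```
  # Reproduce the failure with the profile enforced
  chronyc -c sources                 # fails with "506 Cannot talk to daemon"

  # Put only the chronyd profile into complain mode
  sudo apt-get install --assume-yes apparmor-utils
  sudo aa-complain /etc/apparmor.d/usr.sbin.chronyd
  sudo systemctl restart chrony
  chronyc -c sources                 # returns as expected

  # Or disable that single profile entirely
  sudo ln -s /etc/apparmor.d/usr.sbin.chronyd /etc/apparmor.d/disable/
  sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.chronyd
  sudo systemctl restart chrony
  ```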

  
  We also noted issues with DNS resolution in snaps like `google-cloud-cli` in 
GCE images. 

  * Disabling apparmor completely for snaps too (`sudo systemctl stop
  snapd.apparmor`) results in no regression and calling the snaps
  returns as expected.

  
  The same issues are present in azure kernel `linux-azure` `6.8.0-1005.5` and 
the -proposed `6.8.0-25.25` generic kernel. 

  This is a release blocker for Noble release

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/2061851/+subscriptions




[Kernel-packages] [Bug 2061851] Re: linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new apparmor profiles/features

2024-04-16 Thread Philip Roche
** Also affects: linux-gcp (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: linux-aws (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: linux-azure (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: linux-oracle (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: linux-ibm (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure in Ubuntu.
https://bugs.launchpad.net/bugs/2061851

Title:
  linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new
  apparmor profiles/features

Status in chrony package in Ubuntu:
  New
Status in linux package in Ubuntu:
  New
Status in linux-aws package in Ubuntu:
  New
Status in linux-azure package in Ubuntu:
  New
Status in linux-gcp package in Ubuntu:
  New
Status in linux-ibm package in Ubuntu:
  New
Status in linux-oracle package in Ubuntu:
  New
Status in snapd package in Ubuntu:
  New
Status in chrony source package in Noble:
  New
Status in linux source package in Noble:
  New
Status in linux-aws source package in Noble:
  New
Status in linux-azure source package in Noble:
  New
Status in linux-gcp source package in Noble:
  New
Status in linux-ibm source package in Noble:
  New
Status in linux-oracle source package in Noble:
  New
Status in snapd source package in Noble:
  New

Bug description:
  * Canonical Public Cloud discovered that `chronyc -c sources` now fails with 
`506 Cannot talk to daemon` with the latest kernels. We are seeing this in 
linux-azure and linux-gcp kernels (6.8.0-1005.5)
  * Disabling AppArmor (`sudo systemctl stop apparmor`) completely results in 
no regression and `chronyc -c sources` returns as expected
  * Disabling the apparmor profile for `chronyd` only results in no regression 
and `chronyc -c sources` returns as expected
  * There are zero entries in dmesg when this occurs
  * There are zero entries in dmesg when this occurs if the apparmor profile 
for `chronyd` is placed in complain mode instead of enforce mode
  * We changed the time server from the internal GCP metadata.google.internal 
to the ubuntu time server ntp.ubuntu.com with no change in behaviour

  
  We also noted issues with DNS resolution in snaps like `google-cloud-cli` in 
GCE images. 

  * Disabling apparmor completely for snaps too (`sudo systemctl stop
  snapd.apparmor`) results in no regression and calling the snaps
  returns as expected.

  
  The same issues are present in azure kernel `linux-azure` `6.8.0-1005.5` and 
the -proposed `6.8.0-25.25` generic kernel. 

  This is a release blocker for Noble release

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/2061851/+subscriptions




[Kernel-packages] [Bug 2061851] Re: linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new apparmor profiles/features

2024-04-16 Thread Philip Roche
** Also affects: chrony (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: snapd (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: chrony (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: linux (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: snapd (Ubuntu Noble)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2061851

Title:
  linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new
  apparmor profiles/features

Status in chrony package in Ubuntu:
  New
Status in linux package in Ubuntu:
  New
Status in snapd package in Ubuntu:
  New
Status in chrony source package in Noble:
  New
Status in linux source package in Noble:
  New
Status in snapd source package in Noble:
  New

Bug description:
  * Canonical Public Cloud discovered that `chronyc -c sources` now fails with 
`506 Cannot talk to daemon` with the latest kernels. We are seeing this in 
linux-azure and linux-gcp kernels (6.8.0-1005.5)
  * Disabling AppArmor (`sudo systemctl stop apparmor`) completely results in 
no regression and `chronyc -c sources` returns as expected
  * Disabling the apparmor profile for `chronyd` only results in no regression 
and `chronyc -c sources` returns as expected
  * There are zero entries in dmesg when this occurs
  * There are zero entries in dmesg when this occurs if the apparmor profile 
for `chronyd` is placed in complain mode instead of enforce mode
  * We changed the time server from the internal GCP metadata.google.internal 
to the ubuntu time server ntp.ubuntu.com with no change in behaviour

  
  We also noted issues with DNS resolution in snaps like `google-cloud-cli` in 
GCE images. 

  * Disabling apparmor completely for snaps too (`sudo systemctl stop
  snapd.apparmor`) results in no regression and calling the snaps
  returns as expected.

  
  The same issues are present in azure kernel `linux-azure` `6.8.0-1005.5` and 
the -proposed `6.8.0-25.25` generic kernel. 

  This is a release blocker for Noble release

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/2061851/+subscriptions




[Kernel-packages] [Bug 2061851] [NEW] linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new apparmor profiles/features

2024-04-16 Thread Philip Roche
Public bug reported:

* Canonical Public Cloud discovered that `chronyc -c sources` now fails with 
`506 Cannot talk to daemon` with the latest kernels. We are seeing this in 
linux-azure and linux-gcp kernels (6.8.0-1005.5)
* Disabling AppArmor (`sudo systemctl stop apparmor`) completely results in no 
regression and `chronyc -c sources` returns as expected
* Disabling the apparmor profile for `chronyd` only results in no regression 
and `chronyc -c sources` returns as expected
* There are zero entries in dmesg when this occurs
* There are zero entries in dmesg when this occurs if the apparmor profile for 
`chronyd` is placed in complain mode instead of enforce mode
* We changed the time server from the internal GCP metadata.google.internal to 
the ubuntu time server ntp.ubuntu.com with no change in behaviour


We also noted issues with DNS resolution in snaps like `google-cloud-cli` in 
GCE images. 

* Disabling apparmor completely for snaps too (`sudo systemctl stop
snapd.apparmor`) results in no regression and calling the snaps returns
as expected.


The same issues are present in azure kernel `linux-azure` `6.8.0-1005.5` and 
the -proposed `6.8.0-25.25` generic kernel. 

This is a release blocker for Noble release

** Affects: chrony (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: snapd (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: chrony (Ubuntu Noble)
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu Noble)
 Importance: Undecided
 Status: New

** Affects: snapd (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: block-proposed block-proposed-noble

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2061851

Title:
  linux-gcp 6.8.0-1005.5 (+ others) Noble kernel regression with new
  apparmor profiles/features

Status in chrony package in Ubuntu:
  New
Status in linux package in Ubuntu:
  New
Status in snapd package in Ubuntu:
  New
Status in chrony source package in Noble:
  New
Status in linux source package in Noble:
  New
Status in snapd source package in Noble:
  New

Bug description:
  * Canonical Public Cloud discovered that `chronyc -c sources` now fails with 
`506 Cannot talk to daemon` with the latest kernels. We are seeing this in 
linux-azure and linux-gcp kernels (6.8.0-1005.5)
  * Disabling AppArmor (`sudo systemctl stop apparmor`) completely results in 
no regression and `chronyc -c sources` returns as expected
  * Disabling the apparmor profile for `chronyd` only results in no regression 
and `chronyc -c sources` returns as expected
  * There are zero entries in dmesg when this occurs
  * There are zero entries in dmesg when this occurs if the apparmor profile 
for `chronyd` is placed in complain mode instead of enforce mode
  * We changed the time server from the internal GCP metadata.google.internal 
to the ubuntu time server ntp.ubuntu.com with no change in behaviour

  
  We also noted issues with DNS resolution in snaps like `google-cloud-cli` in 
GCE images. 

  * Disabling apparmor completely for snaps too (`sudo systemctl stop
  snapd.apparmor`) results in no regression and calling the snaps
  returns as expected.

  
  The same issues are present in azure kernel `linux-azure` `6.8.0-1005.5` and 
the -proposed `6.8.0-25.25` generic kernel. 

  This is a release blocker for Noble release

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/2061851/+subscriptions




[Kernel-packages] [Bug 2061079] Re: GTK-ngl (new default backend) rendering issues with the nvidia 470 driver

2024-04-16 Thread Didier Roche-Tolomelli
Confirming that it’s fixed on the same machine with the 550 driver.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-470 in Ubuntu.
https://bugs.launchpad.net/bugs/2061079

Title:
  GTK-ngl (new default backend) rendering issues with the nvidia 470
  driver

Status in GTK+:
  New
Status in gtk4 package in Ubuntu:
  In Progress
Status in nvidia-graphics-drivers-470 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-535 package in Ubuntu:
  Invalid
Status in nvidia-graphics-drivers-545 package in Ubuntu:
  Invalid

Bug description:
  With the nvidia driver, all GTK4 applications have label rendering issues.

  Labels are not refreshed until the cursor passes over them, leaving blank
  windows. The corners are white and not themed. Switching from one app
  screen to another reproduces the issue.

  gnome-control-center or Files, for instance, are blank by default.

  As suggested by seb128, exporting GSK_RENDERER=gl fixes the issue.
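
  A minimal sketch of applying that workaround (the per-application invocation
  and the environment.d approach are assumptions about how one might export
  the variable, not steps taken from this report):

  ```
  # One-off, for a single GTK4 application:
  GSK_RENDERER=gl gnome-text-editor

  # Session-wide via systemd's user environment.d (takes effect after re-login):
  mkdir -p ~/.config/environment.d
  echo 'GSK_RENDERER=gl' > ~/.config/environment.d/90-gsk-renderer.conf
  ```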

  Related upstream bugs and discussions are:
  - https://blog.gtk.org/2024/01/28/new-renderers-for-gtk/
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6574
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6411
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6542

  
  --

  
  $ glxinfo
  name of display: :1
  display: :1  screen: 0
  direct rendering: Yes
  server glx vendor string: NVIDIA Corporation
  server glx version string: 1.4
  server glx extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd, 
  GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear, 
  GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  client glx vendor string: NVIDIA Corporation
  client glx version string: 1.4
  client glx extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, 
  GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control, 
  GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap, 
  GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_multisample_coverage, 
  GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  GLX version: 1.4
  GLX extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree, 
  GLX_EXT_swap_control, GLX_EXT_swap_control_tear, 
  GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  Memory info (GL_NVX_gpu_memory_info):
  Dedicated video memory: 4096 MB
  Total available memory: 4096 MB
  Currently available dedicated video memory: 3041 MB
  OpenGL vendor string: NVIDIA Corporation
  OpenGL renderer string: NVIDIA GeForce GTX 1050/PCIe/SSE2
  OpenGL core profile version string: 4.6.0 NVIDIA 470.239.06
  OpenGL core profile shading language version string: 4.60 NVIDIA
  OpenGL core profile context flags: (none)
  OpenGL core profile profile mask: core profile
  OpenGL core profile extensions:
  GL_AMD_multi_draw_indirect, GL_AMD_seamless_cubemap_per_texture, 
  GL_AMD_vertex_shader_layer, GL_AMD_vertex_shader_viewport_index, 
  GL_ARB_ES2_compatibility, GL_ARB_ES3_1_compatibility, 
  GL_ARB_ES3_2_compatibility, GL_ARB_ES3_compatibility, 
  GL_ARB_arrays_of_arrays, GL_ARB_base_instanc

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-21 Thread Philip Roche
I have reproduced with @kamalmostafa's updated script using a separate
disk too. I see no segfault and no EXT4 errors, but the performance
regression is still present, though not as great as in my previous tests.

```
### Ubuntu 20.04 with 5.4 kernel and data disk

ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdb
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1162MiB/s][w=298k IOPS][eta 00m:00s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=2391: Tue Nov 21 10:34:57 2023
  write: IOPS=284k, BW=1108MiB/s (1162MB/s)(320GiB/295713msec); 0 zone resets
slat (nsec): min=751, max=115304k, avg=8630.05, stdev=124263.77
clat (nsec): min=391, max=239001k, avg=3598764.23, stdev=2429948.87
 lat (usec): min=72, max=239002, avg=3607.70, stdev=2428.75
clat percentiles (usec):
 |  1.00th=[  668],  5.00th=[ 1434], 10.00th=[ 1778], 20.00th=[ 2212],
 | 30.00th=[ 2573], 40.00th=[ 2900], 50.00th=[ 3261], 60.00th=[ 3654],
 | 70.00th=[ 4080], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6587],
 | 99.00th=[ 9110], 99.50th=[10945], 99.90th=[26608], 99.95th=[43779],
 | 99.99th=[83362]
   bw (  MiB/s): min=  667, max= 1341, per=99.98%, avg=1107.88, stdev=13.07, 
samples=4728
   iops: min=170934, max=343430, avg=283618.08, stdev=3346.84, 
samples=4728
  lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
  lat (usec)   : 250=0.04%, 500=0.43%, 750=0.66%, 1000=0.51%
  lat (msec)   : 2=13.23%, 4=53.26%, 10=31.18%, 20=0.53%, 50=0.11%
  lat (msec)   : 100=0.03%, 250=0.01%
  cpu  : usr=3.10%, sys=7.36%, ctx=1105263, majf=0, minf=102
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued rwts: total=0,83886080,0,8 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=1108MiB/s (1162MB/s), 1108MiB/s-1108MiB/s (1162MB/s-1162MB/s), 
io=320GiB (344GB), run=295713-295713msec

Disk stats (read/write):
  sdb: ios=96/33256749, merge=0/50606838, ticks=19/30836895, in_queue=971080, 
util=100.00%
ubuntu@cloudimg:~$ 
ubuntu@cloudimg:~$ uname --kernel-release
5.4.0-164-generic
ubuntu@cloudimg:~$ cat /etc/cloud/build.info 
build_name: server
serial: 20231011
ubuntu@cloudimg:~$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

### Ubuntu 20.04 with 5.15 kernel and data disk

ubuntu@cloudimg:~$ uname --kernel-release 
5.15.0-89-generic
ubuntu@cloudimg:~$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
ubuntu@cloudimg:~$ cat /etc/cloud/build.info 
build_name: server
serial: 20231011

ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdb
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1071MiB/s][w=274k IOPS][eta 00m:00s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=1008: Tue Nov 21 12:19:56 2023
  write: IOPS=258k, BW=1007MiB/s (1056MB/s)(320GiB/325284msec); 0 zone resets
slat (nsec): min=931, max=36726k, avg=7936.15, stdev=120427.45
clat (nsec): min=1963, max=155870k, avg=3959799.65, stdev=2129472.51
 lat (usec): min=55, max=155872, avg=3968.10, stdev=2128.87
clat percentiles (usec):
 |  1.00th=[  562],  5.00th=[ 1319], 10.00th=[ 1811], 20.00th=[ 2376],
 | 30.00th=[ 2835], 40.00th=[ 3294], 50.00th=[ 3720], 60.00th=[ 4113],
 | 70.00th=[ 4621], 80.00th=[ 5211], 90.00th=[ 6390], 95.00th=[ 7439],
 | 99.00th=[10159], 99.50th=[11863], 99.90th=[19268], 99.95th=[23462],
 | 99.99th=[36439]
   bw (  KiB/s): min=715896, max=1190887, per=100.00%, avg=1031613.99, 
stdev=9298.50, samples=5200
   iops: min

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-08 Thread Philip Roche
I am NOT seeing the same on 22.04

```
ubuntu@cloudimg:~$ uname --kernel-release
5.15.0-87-generic
ubuntu@cloudimg:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.28
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1180MiB/s][w=302k IOPS][eta 00m:00s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=2166: Thu Nov  9 07:30:05 2023
  write: IOPS=315k, BW=1230MiB/s (1290MB/s)(320GiB/266343msec); 0 zone resets
slat (nsec): min=662, max=37599k, avg=4806.87, stdev=81368.70
clat (nsec): min=656, max=87622k, avg=3241604.28, stdev=1888945.13
 lat (usec): min=46, max=87623, avg=3246.74, stdev=1888.87
clat percentiles (usec):
 |  1.00th=[  441],  5.00th=[ 1139], 10.00th=[ 1483], 20.00th=[ 1893],
 | 30.00th=[ 2245], 40.00th=[ 2573], 50.00th=[ 2933], 60.00th=[ 3294],
 | 70.00th=[ 3752], 80.00th=[ 4359], 90.00th=[ 5276], 95.00th=[ 6194],
 | 99.00th=[ 9241], 99.50th=[11207], 99.90th=[19530], 99.95th=[24511],
 | 99.99th=[36439]
   bw (  MiB/s): min=  512, max= 1567, per=100.00%, avg=1232.42, stdev=21.42, 
samples=4248
   iops: min=131165, max=401400, avg=315500.53, stdev=5483.45, 
samples=4248
  lat (nsec)   : 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
  lat (usec)   : 250=0.02%, 500=1.41%, 750=0.91%, 1000=1.47%
  lat (msec)   : 2=19.05%, 4=51.85%, 10=24.55%, 20=0.66%, 50=0.09%
  lat (msec)   : 100=0.01%
  cpu  : usr=3.39%, sys=7.68%, ctx=968250, majf=0, minf=108
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued rwts: total=0,83886080,0,0 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=1230MiB/s (1290MB/s), 1230MiB/s-1230MiB/s (1290MB/s-1290MB/s), 
io=320GiB (344GB), run=266343-266343msec

Disk stats (read/write):
  sda: ios=230/34934643, merge=0/48920965, ticks=54/27582950, 
in_queue=27583025, util=100.00%
```

... but I do see the following in journalctl


```
Nov 09 07:33:03 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #80703: comm journalctl: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
__ext4_find_entry:1689: inode #80703: comm journalctl: checksumming directory 
block 0
Nov 09 07:33:03 cloudimg kernel: Aborting journal on device sda1-8.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
ext4_journal_check_start:83: comm rs:main Q:Reg: Detected aborted journal
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
ext4_journal_check_start:83: comm systemd-journal: Detected aborted journal
Nov 09 07:33:03 cloudimg kernel: EXT4-fs (sda1): Remounting filesystem read-only
Nov 09 07:33:03 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #80703: comm journalctl: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
__ext4_find_entry:1689: inode #80703: comm journalctl: checksumming directory 
block 0
Nov 09 07:33:03 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #80703: comm journalctl: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
__ext4_find_entry:1689: inode #80703: comm journalctl: checksumming directory 
block 0
Nov 09 07:33:03 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #80703: comm journalctl: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
__ext4_find_entry:1689: inode #80703: comm journalctl: checksumming directory 
block 0
Nov 09 07:33:03 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #80703: comm journalctl: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 07:33:03 cloudimg kernel: EXT4-fs error (device sda1): 
__ext4_find_entry:1689: inode #80703: comm journalctl: checksumming directory 
block 0
Nov 09 07:33:03 cloudimg kernel: E

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-08 Thread Philip Roche
@kamalmostafa indeed yes. I had missed this.


See below for the output from 20.04 with 5.15 kernel.


```
ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.16
Starting 8 processes
Jobs: 4 (f=4): [W(1),_(2),W(1),_(1),W(2),_(1)][99.6%][w=212MiB/s][w=54.3k 
IOPS][eta 00m:01s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=697: Thu Nov  9 06:30:01 2023
  write: IOPS=319k, BW=1247MiB/s (1307MB/s)(320GiB/262803msec); 0 zone resets
slat (nsec): min=652, max=55305k, avg=4699.55, stdev=76242.73
clat (usec): min=5, max=121658, avg=3187.92, stdev=1886.86
 lat (usec): min=34, max=121660, avg=3192.96, stdev=1886.68
clat percentiles (usec):
 |  1.00th=[  506],  5.00th=[ 1254], 10.00th=[ 1565], 20.00th=[ 1909],
 | 30.00th=[ 2212], 40.00th=[ 2507], 50.00th=[ 2835], 60.00th=[ 3228],
 | 70.00th=[ 3687], 80.00th=[ 4228], 90.00th=[ 5145], 95.00th=[ 5997],
 | 99.00th=[ 8717], 99.50th=[10683], 99.90th=[18744], 99.95th=[25822],
 | 99.99th=[55837]
   bw (  MiB/s): min=  353, max= 1690, per=100.00%, avg=1251.19, stdev=22.89, 
samples=4182
   iops: min=90615, max=432642, avg=320303.28, stdev=5860.14, 
samples=4182
  lat (usec)   : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=0.02%
  lat (usec)   : 500=0.95%, 750=0.55%, 1000=1.01%
  lat (msec)   : 2=20.21%, 4=53.10%, 10=23.55%, 20=0.52%, 50=0.07%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu  : usr=3.42%, sys=7.71%, ctx=960980, majf=0, minf=86
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued rwts: total=0,83886080,0,8 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=1247MiB/s (1307MB/s), 1247MiB/s-1247MiB/s (1307MB/s-1307MB/s), 
io=320GiB (344GB), run=262803-262803msec

Disk stats (read/write):
  sda: ios=251/31989123, merge=0/51876731, ticks=482/27581400, 
in_queue=27581891, util=100.00%
Segmentation fault
ubuntu@cloudimg:~$ uname --kernel-release
5.15.0-88-generic
ubuntu@cloudimg:~$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
ubuntu@cloudimg:~$ 
```

I also see the same details in `journalctl`

```
Nov 09 06:25:38 cloudimg sudo[692]:   ubuntu : TTY=pts/0 ; PWD=/home/ubuntu ; 
USER=root ; COMMAND=/usr/bin/fio --ioengine=libaio --blocksize=4k 
--readwrite=write --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 
--group_reporting --numjobs=8 --name=fiojob1 --filename=/dev/sda
Nov 09 06:25:38 cloudimg sudo[692]: pam_unix(sudo:session): session opened for 
user root by ubuntu(uid=0)
Nov 09 06:30:01 cloudimg kernel: fio[693]: segfault at 7f62e5d78595 ip 
7f62fa672a50 sp 7ffeaecfd2a8 error 6 in 
libglusterfs.so.0.0.1[7f62fa5b+c3000]
Nov 09 06:30:01 cloudimg kernel: Code: f0 0e d0 dd 14 e5 db 95 d9 14 73 00 00 
00 00 00 00 00 c2 11 ee 01 00 00 00 00 00 3a 21 70 00 00 00 00 4d f9 e0 18 e7 
be 9b 09 <29> 9f 75 66 6c eb 9a 03 e5 33 3a bd 32 2a 9c 0e 7c c6 b4 3d a9 26
Nov 09 06:30:01 cloudimg kernel: Core dump to |/usr/share/apport/apport pipe 
failed
Nov 09 06:30:01 cloudimg sudo[692]: pam_unix(sudo:session): session closed for 
user root
Nov 09 06:30:01 cloudimg kernel: Core dump to |/usr/share/apport/apport pipe 
failed
Nov 09 06:39:19 cloudimg systemd[1]: Starting Cleanup of Temporary 
Directories...
Nov 09 06:39:19 cloudimg kernel: EXT4-fs warning (device sda1): 
ext4_dirblock_csum_verify:404: inode #108: comm systemd-tmpfile: No space for 
directory leaf checksum. Please run e2fsck -D.
Nov 09 06:39:19 cloudimg kernel: EXT4-fs error (device sda1): 
htree_dirblock_to_tree:1080: inode #108: comm systemd-tmpfile: Directory block 
failed checksum
Nov 09 06:39:19 cloudimg systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
```
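
(Aside: the "Please run e2fsck -D" hint in those messages can only be acted on
with the filesystem unmounted, e.g. from a rescue environment or with the
image attached to another VM; a sketch only, with the device name taken from
the log above:)

```
sudo e2fsck -f -D /dev/sda1
```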

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2042564

Title:
  Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4
  Ubuntu 20.04 kernel

Status in linux package in Ubuntu:
  New
Status in linux source package in Focal:
  New

Bug description:
  We in the Canonical Public Cloud team have received a report from our
  colleagues in Google regarding a potential performance 

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-08 Thread Philip Roche
Providing exact reproducer steps using QEMU locally.

Launch script: https://gist.github.com/philroche/8242106415ef35b446d7e625b6d60c90
Cloud image I used for testing: http://cloud-images.ubuntu.com/minimal/releases/focal/release/


```
# Download the VM launch script
wget --output-document=launch-qcow2-image-qemu-40G.sh 
https://gist.githubusercontent.com/philroche/8242106415ef35b446d7e625b6d60c90/raw/4a338b92301b6e08608e9345f85a50452ca5fa21/launch-qcow2-image-qemu-40G.sh

chmod +x launch-qcow2-image-qemu-40G.sh

# Download latest  image
wget --output-document=ubuntu-20.04-minimal-cloudimg-amd64.img 
http://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img

# launch the image - note that this provisions a 40GB disk.
./launch-qcow2-image-qemu-40G.sh --password passw0rd --image ./ubuntu-20.04-minimal-cloudimg-amd64.img

# install fio
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo apt-get 
update

sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo apt-
get install --assume-yes fio

# run the synthetic test and gather information
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo fio 
--ioengine=libaio --blocksize=4k --readwrite=write --filesize=40G --end_fsync=1 
--iodepth=128 --direct=1 --group_reporting --numjobs=8 --name=fiojob1 
--filename=/dev/sda

# kill VM (Use 'Ctrl-a x' key combination to exit emulator )

# start new VM to upgrade kernel and run the synthetic test again
# launch the image - note that this provisions a 40GB disk.
./launch-qcow2-image-qemu-40G.sh --password passw0rd --image ./ubuntu-20.04-minimal-cloudimg-amd64.img

# install fio
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo apt-get 
update

sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo apt-
get install --assume-yes fio

# upgrade the kernel
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo apt-get 
install --assume-yes linux-generic-hwe-20.04

# reboot the VM so the upgraded kernel is used, then wait for it to come back up
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo reboot

# run the synthetic test again and gather the information
sshpass -p passw0rd ssh -p  -o UserKnownHostsFile=/dev/null -o 
CheckHostIP=no -o StrictHostKeyChecking=no ubuntu@127.0.0.1 -- sudo fio 
--ioengine=libaio --blocksize=4k --readwrite=write --filesize=40G --end_fsync=1 
--iodepth=128 --direct=1 --group_reporting --numjobs=8 --name=fiojob1 
--filename=/dev/sda

```

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2042564

Title:
  Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4
  Ubuntu 20.04 kernel

Status in linux package in Ubuntu:
  New
Status in linux source package in Focal:
  New

Bug description:
  We in the Canonical Public Cloud team have received a report from our
  colleagues in Google regarding a potential performance regression with
  the 5.15 kernel vs the 5.4 kernel on Ubuntu 20.04. Their tests were
  performed using the linux-gkeop and linux-gkeop-5.15 kernels.

  I have verified with the generic Ubuntu 20.04 5.4 linux-generic and
  the Ubuntu 20.04 5.15 linux-generic-hwe-20.04 kernels.

  The tests were run using `fio`

  fio commands:

  * 4k initwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`
  * 4k overwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`

  
  My reproducer was to launch an Ubuntu 20.04 cloud image locally with QEMU; 
the results are below:

  Using 5.4 kernel

  ```
  ubuntu@cloudimg:~$ uname --kernel-release
  5.4.0-164-generic

  ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k 
--readwrite=write --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 
--group_reporting --numjobs=8 --name=fiojob1 --filename=/dev/sda
  fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
  ...
  fio-3.16
  Starting 8 processes
  Jobs: 8 (f=8): [W(8)][99.6%][w=925MiB/s][w=237k IOPS][eta 00m:01s] 
  fiojob1: (groupid=0, jobs=8): err= 0: pid=2443: Thu Nov  2 09:15:22 2023
write: IOPS=317k, BW=1237MiB/s (1297MB/s)(320GiB/264837msec); 0 zone resets
  s

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-07 Thread Philip Roche
Google have provided a non-synthetic (non-`fio`) impact report:

> Performance was severely degraded when accessing Persistent Volumes
provided by Portworx/PureStorage.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2042564

Title:
  Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4
  Ubuntu 20.04 kernel

Status in linux package in Ubuntu:
  New
Status in linux source package in Focal:
  New

Bug description:
  We in the Canonical Public Cloud team have received a report from our
  colleagues in Google regarding a potential performance regression with
  the 5.15 kernel vs the 5.4 kernel on Ubuntu 20.04. Their tests were
  performed using the linux-gkeop and linux-gkeop-5.15 kernels.

  I have verified with the generic Ubuntu 20.04 5.4 linux-generic and
  the Ubuntu 20.04 5.15 linux-generic-hwe-20.04 kernels.

  The tests were run using `fio`

  fio commands:

  * 4k initwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`
  * 4k overwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`

  
  My reproducer was to launch an Ubuntu 20.04 cloud image locally with QEMU; 
the results are below:

  Using 5.4 kernel

  ```
  ubuntu@cloudimg:~$ uname --kernel-release
  5.4.0-164-generic

  ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k 
--readwrite=write --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 
--group_reporting --numjobs=8 --name=fiojob1 --filename=/dev/sda
  fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
  ...
  fio-3.16
  Starting 8 processes
  Jobs: 8 (f=8): [W(8)][99.6%][w=925MiB/s][w=237k IOPS][eta 00m:01s] 
  fiojob1: (groupid=0, jobs=8): err= 0: pid=2443: Thu Nov  2 09:15:22 2023
write: IOPS=317k, BW=1237MiB/s (1297MB/s)(320GiB/264837msec); 0 zone resets
  slat (nsec): min=628, max=37820k, avg=7207.71, stdev=101058.61
  clat (nsec): min=457, max=56099k, avg=340.45, stdev=1707823.38
   lat (usec): min=23, max=56100, avg=3229.78, stdev=1705.80
  clat percentiles (usec):
   |  1.00th=[  775],  5.00th=[ 1352], 10.00th=[ 1647], 20.00th=[ 2024],
   | 30.00th=[ 2343], 40.00th=[ 2638], 50.00th=[ 2933], 60.00th=[ 3261],
   | 70.00th=[ 3654], 80.00th=[ 4146], 90.00th=[ 5014], 95.00th=[ 5932],
   | 99.00th=[ 8979], 99.50th=[10945], 99.90th=[18220], 99.95th=[22676],
   | 99.99th=[32113]
 bw (  MiB/s): min=  524, max= 1665, per=100.00%, avg=1237.72, stdev=20.42, 
samples=4232
 iops: min=134308, max=426326, avg=316855.16, stdev=5227.36, 
samples=4232
lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
lat (usec)   : 250=0.05%, 500=0.54%, 750=0.37%, 1000=0.93%
lat (msec)   : 2=17.40%, 4=58.02%, 10=22.01%, 20=0.60%, 50=0.07%
lat (msec)   : 100=0.01%
cpu  : usr=3.29%, sys=7.45%, ctx=1262621, majf=0, minf=103
IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
   submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
   complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1%
   issued rwts: total=0,83886080,0,8 short=0,0,0,0 dropped=0,0,0,0
   latency   : target=0, window=0, percentile=100.00%, depth=128

  Run status group 0 (all jobs):
WRITE: bw=1237MiB/s (1297MB/s), 1237MiB/s-1237MiB/s (1297MB/s-1297MB/s), 
io=320GiB (344GB), run=264837-264837msec

  Disk stats (read/write):
sda: ios=36/32868891, merge=0/50979424, ticks=5/27498602, in_queue=1183124, 
util=100.00%
  ```

  
  After upgrading to linux-generic-hwe-20.04 kernel and rebooting

  ```
  ubuntu@cloudimg:~$ uname --kernel-release
  5.15.0-88-generic

  ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k 
--readwrite=write --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 
--group_reporting --numjobs=8 --name=fiojob1 --filename=/dev/sda
  fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
  ...
  fio-3.16
  Starting 8 processes
  Jobs: 1 (f=1): [_(7),W(1)][100.0%][w=410MiB/s][w=105k IOPS][eta 00m:00s]
  fiojob1: (groupid=0, jobs=8): err= 0: pid=1438: Thu Nov  2 09:46:49 2023
write: IOPS=155k, BW=605MiB/s (634MB/s)(320GiB/541949msec); 0 zone resets
  slat (nsec): min=660, max=325426k, avg=10351.04, stdev=232438.50
  clat (nsec): min=1100, max=782743k, avg=6595008.67, stdev=6290570.04
   lat (usec): min=86, max=782748, avg=6606.08, stdev=6294.03
  clat percentiles (usec):
   |  1.00th=[   914],  5.00th=[  2180], 10.00th=[  2802], 20.00th=[  3556],
   | 30.00th=[  4178]

[Kernel-packages] [Bug 2042564] Re: Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-02 Thread Philip Roche
For reference, I also tried on a 22.04 cloud image with 5.15 kernel

```
ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.28
Starting 8 processes
Jobs: 1 (f=1): [_(5),W(1),_(2)][100.0%][w=311MiB/s][w=79.7k IOPS][eta 00m:00s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=2198: Thu Nov  2 13:42:07 2023
  write: IOPS=244k, BW=952MiB/s (998MB/s)(320GiB/344252msec); 0 zone resets
slat (nsec): min=655, max=86425k, avg=6303.33, stdev=121079.62
clat (usec): min=5, max=124812, avg=4182.79, stdev=3154.25
 lat (usec): min=61, max=124814, avg=4189.52, stdev=3155.57
clat percentiles (usec):
 |  1.00th=[  553],  5.00th=[ 1319], 10.00th=[ 1713], 20.00th=[ 2180],
 | 30.00th=[ 2638], 40.00th=[ 3064], 50.00th=[ 3523], 60.00th=[ 4015],
 | 70.00th=[ 4621], 80.00th=[ 5538], 90.00th=[ 7177], 95.00th=[ 9110],
 | 99.00th=[15270], 99.50th=[19792], 99.90th=[34866], 99.95th=[42206],
 | 99.99th=[64226]
   bw (  KiB/s): min=197710, max=1710611, per=100.00%, avg=978380.45, 
stdev=38019.38, samples=5483
   iops: min=49424, max=427652, avg=244594.42, stdev=9504.89, 
samples=5483
  lat (usec)   : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=0.79%, 750=0.82%, 1000=0.95%
  lat (msec)   : 2=13.06%, 4=44.04%, 10=36.63%, 20=3.21%, 50=0.45%
  lat (msec)   : 100=0.03%, 250=0.01%
  cpu  : usr=3.34%, sys=7.56%, ctx=978696, majf=0, minf=122
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued rwts: total=0,83886080,0,0 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=952MiB/s (998MB/s), 952MiB/s-952MiB/s (998MB/s-998MB/s), io=320GiB 
(344GB), run=344252-344252msec

Disk stats (read/write):
  sda: ios=336/33974149, merge=0/49890543, ticks=201/35842485, 
in_queue=35842765, util=100.00%
ubuntu@cloudimg:~$ uname --kernel-release
5.15.0-87-generic
```

This shows bw=952MiB/s

Summary:

Ubuntu 22.04 5.15 kernel `bw=952MiB/s`
Ubuntu 20.04 5.4 kernel `bw=1237MiB/s`
Ubuntu 20.04 5.15 kernel `bw=605MiB/s`
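
For a rough sense of scale, the same figures as ratios (simple arithmetic on
the numbers quoted above, nothing measured beyond those runs):

```
echo "scale=2; 605/1237" | bc   # prints .48: 20.04 on 5.15 reaches roughly half of its 5.4 throughput
echo "scale=2; 952/1237" | bc   # prints .76: 22.04 on 5.15 sits at about three quarters of that baseline
```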

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2042564

Title:
  Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4
  Ubuntu 20.04 kernel

Status in linux package in Ubuntu:
  New
Status in linux source package in Focal:
  New

Bug description:
  We in the Canonical Public Cloud team have received a report from our
  colleagues in Google regarding a potential performance regression with
  the 5.15 kernel vs the 5.4 kernel on Ubuntu 20.04. Their tests were
  performed using the linux-gkeop and linux-gkeop-5.15 kernels.

  I have verified with the generic Ubuntu 20.04 5.4 linux-generic and
  the Ubuntu 20.04 5.15 linux-generic-hwe-20.04 kernels.

  The tests were run using `fio`

  fio commands:

  * 4k initwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`
  * 4k overwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`

  
  My reproducer was to launch an Ubuntu 20.04 cloud image locally with QEMU; 
the results are below:

  Using 5.4 kernel

  ```
  ubuntu@cloudimg:~$ uname --kernel-release
  5.4.0-164-generic

  ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k 
--readwrite=write --filesize=40G --end_fsync=1 --iodepth=128 --direct=1 
--group_reporting --numjobs=8 --name=fiojob1 --filename=/dev/sda
  fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
  ...
  fio-3.16
  Starting 8 processes
  Jobs: 8 (f=8): [W(8)][99.6%][w=925MiB/s][w=237k IOPS][eta 00m:01s] 
  fiojob1: (groupid=0, jobs=8): err= 0: pid=2443: Thu Nov  2 09:15:22 2023
write: IOPS=317k, BW=1237MiB/s (1297MB/s)(320GiB/264837msec); 0 zone resets
  slat (nsec): min=628, max=37820k, avg=7207.71, stdev=101058.61
  clat (nsec): min=457, max=56099k, avg=340.45, stdev=1707823.38
   lat (usec): min=23, max=56100, avg=3229.78, stdev=1705.80
  clat percentiles (usec):
   |  1.00th=[  775],  5.00th=[ 1352], 10.00th=[ 1647], 20.00th=[ 2024],
   | 30.00th=[ 2343], 40.00th=[ 2638], 50.00th=[ 2933], 60.00th=[ 3261],
   | 70.00th=[ 3654], 80.00th=[ 4146], 90.00th=[ 5014], 95.00th=[ 5932],
   | 9

[Kernel-packages] [Bug 2042564] [NEW] Performance regression in the 5.15 Ubuntu 20.04 kernel compared to 5.4 Ubuntu 20.04 kernel

2023-11-02 Thread Philip Roche
Public bug reported:

We in the Canonical Public Cloud team have received a report from our
colleagues in Google regarding a potential performance regression with
the 5.15 kernel vs the 5.4 kernel on Ubuntu 20.04. Their tests were
performed using the linux-gkeop and linux-gkeop-5.15 kernels.

I have verified with the generic Ubuntu 20.04 5.4 linux-generic and the
Ubuntu 20.04 5.15 linux-generic-hwe-20.04 kernels.

The tests were run using `fio`

fio commands:

* 4k initwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`
* 4k overwrite: `fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sdc`


My reproducer was to launch an Ubuntu 20.04 cloud image locally with QEMU; 
the results are below:

Using 5.4 kernel

```
ubuntu@cloudimg:~$ uname --kernel-release
5.4.0-164-generic

ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][99.6%][w=925MiB/s][w=237k IOPS][eta 00m:01s] 
fiojob1: (groupid=0, jobs=8): err= 0: pid=2443: Thu Nov  2 09:15:22 2023
  write: IOPS=317k, BW=1237MiB/s (1297MB/s)(320GiB/264837msec); 0 zone resets
slat (nsec): min=628, max=37820k, avg=7207.71, stdev=101058.61
clat (nsec): min=457, max=56099k, avg=340.45, stdev=1707823.38
 lat (usec): min=23, max=56100, avg=3229.78, stdev=1705.80
clat percentiles (usec):
 |  1.00th=[  775],  5.00th=[ 1352], 10.00th=[ 1647], 20.00th=[ 2024],
 | 30.00th=[ 2343], 40.00th=[ 2638], 50.00th=[ 2933], 60.00th=[ 3261],
 | 70.00th=[ 3654], 80.00th=[ 4146], 90.00th=[ 5014], 95.00th=[ 5932],
 | 99.00th=[ 8979], 99.50th=[10945], 99.90th=[18220], 99.95th=[22676],
 | 99.99th=[32113]
   bw (  MiB/s): min=  524, max= 1665, per=100.00%, avg=1237.72, stdev=20.42, 
samples=4232
   iops: min=134308, max=426326, avg=316855.16, stdev=5227.36, 
samples=4232
  lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
  lat (usec)   : 250=0.05%, 500=0.54%, 750=0.37%, 1000=0.93%
  lat (msec)   : 2=17.40%, 4=58.02%, 10=22.01%, 20=0.60%, 50=0.07%
  lat (msec)   : 100=0.01%
  cpu  : usr=3.29%, sys=7.45%, ctx=1262621, majf=0, minf=103
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued rwts: total=0,83886080,0,8 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=1237MiB/s (1297MB/s), 1237MiB/s-1237MiB/s (1297MB/s-1297MB/s), 
io=320GiB (344GB), run=264837-264837msec

Disk stats (read/write):
  sda: ios=36/32868891, merge=0/50979424, ticks=5/27498602, in_queue=1183124, 
util=100.00%
```


After upgrading to linux-generic-hwe-20.04 kernel and rebooting

```
ubuntu@cloudimg:~$ uname --kernel-release
5.15.0-88-generic

ubuntu@cloudimg:~$ sudo fio --ioengine=libaio --blocksize=4k --readwrite=write 
--filesize=40G --end_fsync=1 --iodepth=128 --direct=1 --group_reporting 
--numjobs=8 --name=fiojob1 --filename=/dev/sda
fiojob1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128
...
fio-3.16
Starting 8 processes
Jobs: 1 (f=1): [_(7),W(1)][100.0%][w=410MiB/s][w=105k IOPS][eta 00m:00s]
fiojob1: (groupid=0, jobs=8): err= 0: pid=1438: Thu Nov  2 09:46:49 2023
  write: IOPS=155k, BW=605MiB/s (634MB/s)(320GiB/541949msec); 0 zone resets
slat (nsec): min=660, max=325426k, avg=10351.04, stdev=232438.50
clat (nsec): min=1100, max=782743k, avg=6595008.67, stdev=6290570.04
 lat (usec): min=86, max=782748, avg=6606.08, stdev=6294.03
clat percentiles (usec):
 |  1.00th=[   914],  5.00th=[  2180], 10.00th=[  2802], 20.00th=[  3556],
 | 30.00th=[  4178], 40.00th=[  4817], 50.00th=[  5538], 60.00th=[  6259],
 | 70.00th=[  7177], 80.00th=[  8455], 90.00th=[ 10683], 95.00th=[ 13566],
 | 99.00th=[ 26870], 99.50th=[ 34866], 99.90th=[ 63177], 99.95th=[ 80217],
 | 99.99th=[145753]
   bw (  KiB/s): min=39968, max=1683451, per=100.00%, avg=619292.10, 
stdev=26377.19, samples=8656
   iops: min= 9990, max=420862, avg=154822.58, stdev=6594.34, 
samples=8656
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.05%, 750=0.48%, 1000=0.65%
  lat (msec)   : 2=2.79%, 4=23.00%, 10=60.93%, 20=10.08%, 50=1.83%
  lat (msec)   : 100=0.16%,

[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-10-10 Thread Philip Roche
https://bugs.launchpad.net/cloud-images/+bug/2038894 is a related bug to
track specifically the introduction of listening port 5353

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
  are undergoing some big changes prior to the 23.10 release in October.

  This is a devel release, so it is the perfect time to be making these
  changes, but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
* This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, higher
  number of ports open on an idle system and higher number of processes
  running on an idle system.
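
  As a rough illustration, these are the kind of commands used to compare
  an idle system between images (a sketch only, not the exact test
  harness used for the reports above):

  ```
  free -m                      # idle memory usage
  sudo ss -tulpn               # listening TCP/UDP ports and the owning processes
  ps -e --no-headers | wc -l   # number of running processes
  ```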

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2038567] Re: Disable restricting unprivileged change_profile by default, due to LXD latest/stable not yet compatible with this new apparmor feature

2023-10-09 Thread Philip Roche
Cloud minimized and non-minimized images have now been tested with the
6.5.0-9 kernel from -proposed and they pass our lxd-start-stop test
suite, which had been failing and which prompted this whole thread. +1

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2038567

Title:
  Disable restricting unprivileged change_profile by default, due to LXD
  latest/stable not yet compatible with this new apparmor feature

Status in Release Notes for Ubuntu:
  New
Status in apparmor package in Ubuntu:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in lxd package in Ubuntu:
  Triaged
Status in snapd package in Ubuntu:
  New

Bug description:
  Following upgrade to 6.5.0-7 kernel in mantic cloud images we are
  seeing a regression in our cloud image tests. The test runs the
  following:

  ```
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  ```

  The `lxc exec mantic -- cloud-init status --wait` times out after 240s
  and will fail our test as a result.

  I have been able to replicate in a local VM

  ```
  wget 
http://cloud-images.ubuntu.com/mantic/20231005/mantic-server-cloudimg-amd64.img 
  wget --output-document=launch-qcow2-image-qemu.sh 
https://gist.githubusercontent.com/philroche/14c241c086a5730481e24178b654268f/raw/7af95cd4dfc8e1d0600e6118803d2c866765714e/gistfile1.txt
 
  chmod +x launch-qcow2-image-qemu.sh 

  ./launch-qcow2-image-qemu.sh --password passw0rd --image 
./mantic-server-cloudimg-amd64.img 
  cat <<EOF > "./reproducer.sh"
  #!/bin/bash -eux
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  EOF
  chmod +x ./reproducer.sh
  sshpass -p passw0rd scp -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -P  ./reproducer.sh ubuntu@127.0.0.1:~/
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get update
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get upgrade 
--assume-yes
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 ./reproducer.sh
  ```

  The issue is not present with the 6.5.0-5 kernel and the issue is
  present regardless of the container launched. I tried the jammy
  container to test this.

  From my test VM

  ```
  ubuntu@cloudimg:~$ uname --all
  Linux cloudimg 6.5.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 29 
09:14:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@cloudimg:~$ uname --kernel-release
  6.5.0-7-generic
  ```

  This is a regression in our test that will block 23.10 cloud image
  release next week.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-release-notes/+bug/2038567/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2038567] [NEW] Mantic 6.5.0-7 kernel causes regression in LXD container usage

2023-10-05 Thread Philip Roche
Public bug reported:

Following upgrade to 6.5.0-7 kernel in mantic cloud images we are seeing
a regression in our cloud image tests. The test runs the following:

```
lxd init --auto --storage-backend dir
lxc launch ubuntu-daily:mantic mantic
lxc info mantic
lxc exec mantic -- cloud-init status --wait
```

The `lxc exec mantic -- cloud-init status --wait` times out after 240s
and will fail our test as a result.

I have been able to replicate in a local VM

```
wget 
http://cloud-images.ubuntu.com/mantic/20231005/mantic-server-cloudimg-amd64.img 
wget --output-document=launch-qcow2-image-qemu.sh 
https://gist.githubusercontent.com/philroche/14c241c086a5730481e24178b654268f/raw/7af95cd4dfc8e1d0600e6118803d2c866765714e/gistfile1.txt
 
chmod +x launch-qcow2-image-qemu.sh 

./launch-qcow2-image-qemu.sh --password passw0rd --image 
./mantic-server-cloudimg-amd64.img 
cat <<EOF > "./reproducer.sh"
#!/bin/bash -eux
lxd init --auto --storage-backend dir
lxc launch ubuntu-daily:mantic mantic
lxc info mantic
lxc exec mantic -- cloud-init status --wait
EOF
chmod +x ./reproducer.sh
sshpass -p passw0rd scp -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -P  ./reproducer.sh ubuntu@127.0.0.1:~/
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get update
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get upgrade 
--assume-yes
sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 ./reproducer.sh
```

The issue is not present with the 6.5.0-5 kernel and the issue is
present regardless of the container launched. I tried the jammy
container to test this.

>From my test VM

```
ubuntu@cloudimg:~$ uname --all
Linux cloudimg 6.5.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 29 
09:14:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@cloudimg:~$ uname --kernel-release
6.5.0-7-generic
```

This is a regression in our test that will block 23.10 cloud image
release next week.

** Affects: ubuntu-release-notes
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu)
 Importance: Critical
 Status: Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2038567

Title:
  Mantic 6.5.0-7 kernel causes regression in LXD container usage

Status in Release Notes for Ubuntu:
  New
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Following upgrade to 6.5.0-7 kernel in mantic cloud images we are
  seeing a regression in our cloud image tests. The test runs the
  following:

  ```
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  ```

  The `lxc exec mantic -- cloud-init status --wait` times out after 240s
  and will fail our test as a result.

  I have been able to replicate in a local VM

  ```
  wget 
http://cloud-images.ubuntu.com/mantic/20231005/mantic-server-cloudimg-amd64.img 
  wget --output-document=launch-qcow2-image-qemu.sh 
https://gist.githubusercontent.com/philroche/14c241c086a5730481e24178b654268f/raw/7af95cd4dfc8e1d0600e6118803d2c866765714e/gistfile1.txt
 
  chmod +x launch-qcow2-image-qemu.sh 

  ./launch-qcow2-image-qemu.sh --password passw0rd --image 
./mantic-server-cloudimg-amd64.img 
  cat <<EOF > "./reproducer.sh"
  #!/bin/bash -eux
  lxd init --auto --storage-backend dir
  lxc launch ubuntu-daily:mantic mantic
  lxc info mantic
  lxc exec mantic -- cloud-init status --wait
  EOF
  chmod +x ./reproducer.sh
  sshpass -p passw0rd scp -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -P  ./reproducer.sh ubuntu@127.0.0.1:~/
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get update
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 sudo apt-get upgrade 
--assume-yes
  sshpass -p passw0rd ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o 
StrictHostKeyChecking=no -p  ubuntu@127.0.0.1 ./reproducer.sh
  ```

  The issue is not present with the 6.5.0-5 kernel and the issue is
  present regardless of the container launched. I tried the jammy
  container to test this.

  From my test VM

  ```
  ubuntu@cloudimg:~$ uname --all
  Linux cloudimg 6.5.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 29 
09:14:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@cloudimg:~$ uname --kernel-release
  6.5.0-7-generic
  ```

  This is a regression in our test that will block 23.10 cloud image
  release next week.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-release-note

[Kernel-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online

2023-10-04 Thread Philip Roche
I have also successfully verified that -proposed amd64 kernel
`6.5.0-7-generic` results in successful network configuration when
tested using qemu on an amd64 host with older hardware (a ThinkPad T460
with a 6th gen Intel i5, the same hardware on which we were able to
reproduce the issue previously). See
https://people.canonical.com/~philroche/20231003-mantic-minimal-
proposed-kernel/amd64/ for cloud-init logs, some debug output and test
image.
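
For anyone re-running this verification, a minimal sketch of the checks
involved (an assumption on my part, not the exact commands captured in
the logs linked above):

```
# Rough verification steps (illustrative):
grep CONFIG_VIRTIO_NET "/boot/config-$(uname -r)"  # =y if virtio-net is built in, as suggested in this bug
ip a                                               # the guest NIC should have an address after first boot
cloud-init status --long                           # provisioning should report done without a clean/re-init
```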

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during
  provisioning; systemd regression with wait-online

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in systemd package in Ubuntu:
  Triaged

Bug description:
  Following a recent change from linux-kvm kernel to linux-generic
  kernel in the mantic minimized images, there is a reproducible bug
  where a guest VM does not have an IP address assigned as part of
  cloud-init provisioning.

  This is easiest to reproduce when emulating arm64 on amd64 host. The
  bug is a race condition, so there could exist fast enough
  virtualisation on fast enough hardware where this bug is not present
  but in all my testing I have been able to reproduce.

  The latest mantic minimized images from http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ have force initrdless boot and
  no initrd to fallback to.

  This bug is not present in the non minimized/base images @
  http://cloud-images.ubuntu.com/mantic/ as these boot with initrd with
  the required drivers present for virtio-net.

  Reproducer

  ```
  wget -O "launch-qcow2-image-qemu-arm64.sh" 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh

  chmod +x ./launch-qcow2-image-qemu-arm64.sh
  wget 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
  ./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image 
./livecd.ubuntu-cpc.img
  ```

  You will then be able to log in with user `ubuntu` and password
  `passw0rd`.

  You can run `ip a` and see that there is a network interface present
  (separate to `lo`) but no IP address has been assigned.

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff

  ```

  This is because when cloud-init is trying to configure network
  interfaces it doesn't find any so it doesn't configure any. But by the
  time boot is complete the network interface is present but cloud-init
  provisioning has already completed.

  You can verify this by running `sudo cloud-init clean && sudo cloud-
  init init`

  You can then see a successfully configured network interface

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc pfifo_fast state 
UP group default qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
     valid_lft 86391sec preferred_lft 86391sec
  inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr 
noprefixroute
     valid_lft 86393sec preferred_lft 14393sec
  inet6 fe80::5054:ff:fe12:3456/64 scope link
     valid_lft forever preferred_lft forever

  ```

  The bug is also reproducible with an amd64 guest on an amd64 host on
  older/slower hardware.

  The suggested fixes while debugging this issue are:

  * to include `virtio-net` as a built-in in the mantic generic kernel
  * understand what needs to change in cloud-init so that it can react to late 
additions of network interfaces

  I will file a separate bug against cloud-init to address the race
  condition on emulated guest/older hardware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning; systemd regression with wait-online

2023-10-04 Thread Philip Roche
@xnox I have successfully verified that -proposed arm64 kernel
`6.5.0-7-generic` results in successful network configuration when
tested using qemu on an amd64 host. See
https://people.canonical.com/~philroche/20231003-mantic-minimal-
proposed-kernel/ for cloud-init logs, some debug output and test image.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during
  provisioning; systemd regression with wait-online

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in systemd package in Ubuntu:
  Triaged

Bug description:
  Following a recent change from linux-kvm kernel to linux-generic
  kernel in the mantic minimized images, there is a reproducible bug
  where a guest VM does not have an IP address assigned as part of
  cloud-init provisioning.

  This is easiest to reproduce when emulating arm64 on amd64 host. The
  bug is a race condition, so there could exist fast enough
  virtualisation on fast enough hardware where this bug is not present
  but in all my testing I have been able to reproduce.

  The latest mantic minimized images from http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ have force initrdless boot and
  no initrd to fallback to.

  This bug is not present in the non minimized/base images @
  http://cloud-images.ubuntu.com/mantic/ as these boot with initrd with
  the required drivers present for virtio-net.

  Reproducer

  ```
  wget -O "launch-qcow2-image-qemu-arm64.sh" 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh

  chmod +x ./launch-qcow2-image-qemu-arm64.sh
  wget 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
  ./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image 
./livecd.ubuntu-cpc.img
  ```

  You will then be able to log in with user `ubuntu` and password
  `passw0rd`.

  You can run `ip a` and see that there is a network interface present
  (separate to `lo`) but no IP address has been assigned.

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff

  ```

  This is because when cloud-init is trying to configure network
  interfaces it doesn't find any so it doesn't configure any. But by the
  time boot is complete the network interface is present but cloud-init
  provisioning has already completed.

  You can verify this by running `sudo cloud-init clean && sudo cloud-
  init init`

  You can then see a successfully configured network interface

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc pfifo_fast state 
UP group default qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
     valid_lft 86391sec preferred_lft 86391sec
  inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr 
noprefixroute
     valid_lft 86393sec preferred_lft 14393sec
  inet6 fe80::5054:ff:fe12:3456/64 scope link
     valid_lft forever preferred_lft forever

  ```

  The bug is also reproducible with an amd64 guest on an amd64 host on
  older/slower hardware.

  The suggested fixes while debugging this issue are:

  * to include `virtio-net` as a built-in in the mantic generic kernel
  * understand what needs to change in cloud-init so that it can react to late 
additions of network interfaces

  I will file a separate bug against cloud-init to address the race
  condition on emulated guest/older hardware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2037398] Re: kexec enable to load/kdump zstd compressed zimg

2023-09-26 Thread Philip Roche
I have confirmed that this issue with not being able to capture a kernel
dump with a mantic arm64 kernel is not new. Using the arm64 6.5 kernel
(6.5.0-5) from the release pocket I captured the following during testing.

```
ubuntu@cloudimg:~$ sudo kdump-config show
DUMP_MODE:  kdump
USE_KDUMP:  1
KDUMP_COREDIR:  /var/crash
crashkernel addr: 0xde00
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-6.5.0-5-generic
kdump initrd: 
   /var/lib/kdump/initrd.img: symbolic link to 
/var/lib/kdump/initrd.img-6.5.0-5-generic
current state:Not ready to kdump

kexec command:
  no kexec command recorded
ubuntu@cloudimg:~$ uname --all
Linux cloudimg 6.5.0-5-generic #5-Ubuntu SMP PREEMPT_DYNAMIC Wed Sep  6 
15:36:23 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
ubuntu@cloudimg:~$ sudo cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-6.5.0-5-generic 
root=UUID=e7604ab2-200c-4f34-ab11-47e78ac4b8bd ro console=tty1 console=ttyS0 
crashkernel=2G-4G:320M,4G-32G:512M,32G-64G:1024M,64G-128G:2048M,128G-:4096M
ubuntu@cloudimg:~$ sudo dmesg | grep -i crash
[0.00] crashkernel reserved: 0xde00 - 0xfe00 
(512 MB)
[0.00] Kernel command line: BOOT_IMAGE=/vmlinuz-6.5.0-5-generic 
root=UUID=e7604ab2-200c-4f34-ab11-47e78ac4b8bd ro console=tty1 console=ttyS0 
crashkernel=2G-4G:320M,4G-32G:512M,32G-64G:1024M,64G-128G:2048M,128G-:4096M
[   87.054802] pstore: Using crash dump compression: deflate
ubuntu@cloudimg:~$ sudo cat /proc/sys/kernel/sysrq
176
ubuntu@cloudimg:~$ sudo service kdump-tools status
● kdump-tools.service - Kernel crash dump capture service
 Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled; preset: 
enabled)
 Active: active (exited) since Tue 2023-09-26 13:44:22 UTC; 7min ago
Process: 514 ExecStart=/etc/init.d/kdump-tools start (code=exited, 
status=0/SUCCESS)
   Main PID: 514 (code=exited, status=0/SUCCESS)
CPU: 4min 46.461s

Sep 26 13:38:21 cloudimg systemd[1]: Starting kdump-tools.service - Kernel 
crash dump capture service...
Sep 26 13:38:32 cloudimg kdump-tools[514]: Starting kdump-tools:
Sep 26 13:38:32 cloudimg kdump-tools[535]:  * Creating symlink 
/var/lib/kdump/vmlinuz
Sep 26 13:38:41 cloudimg kdump-tools[577]: kdump-tools: Generating 
/var/lib/kdump/initrd.img-6.5.0-5-generic
Sep 26 13:44:11 cloudimg kdump-tools[535]:  * Creating symlink 
/var/lib/kdump/initrd.img
Sep 26 13:44:22 cloudimg kdump-tools[535]:  * failed to load kdump kernel
Sep 26 13:44:22 cloudimg kdump-tools[5536]: failed to load kdump kernel
Sep 26 13:44:22 cloudimg systemd[1]: Finished kdump-tools.service - Kernel 
crash dump capture service.
ubuntu@cloudimg:~$ sudo ls -al /var/lib/kdump/vmlinuz 
lrwxrwxrwx 1 root root 29 Sep 26 13:38 /var/lib/kdump/vmlinuz -> 
/boot/vmlinuz-6.5.0-5-generic
ubuntu@cloudimg:~$ sudo file /boot/vmlinuz-6.5.0-5-generic
/boot/vmlinuz-6.5.0-5-generic: gzip compressed data, was 
"vmlinuz-6.5.0-5-generic.efi.signed", last modified: Thu Sep  7 10:43:26 2023, 
max compression, from Unix, original size modulo 2^32 54989192
```

... but the `Cannot determine the file type of /var/lib/kdump/vmlinuz`
error in the `sudo service kdump-tools status` output, which @xnox has
also reproduced with `kexec`, is new.

I have also confirmed that capturing kernel crash dump with the amd64
kernel 6.5.0-5 & 6.5.0-6 works as expected.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to kexec-tools in Ubuntu.
https://bugs.launchpad.net/bugs/2037398

Title:
  kexec enable to load/kdump zstd compressed zimg

Status in kdump-tools package in Ubuntu:
  Invalid
Status in kexec-tools package in Ubuntu:
  Triaged
Status in linux package in Ubuntu:
  Triaged

Bug description:
  While testing the 6.5.0-6-generic proposed arm64 generic kernel I
  encountered issues being able to use kdump.

  After enabling -proposed and installing the -proposed 6.5.0-6 kernel
  and rebooting I encountered the following:

  `kdump-config show` shows `current state:Not ready to kdump` and
  looking at the status of the kdump-tools services I see `Cannot
  determine the file type of /var/lib/kdump/vmlinuz`

  Full output:

  ```
  ubuntu@cloudimg:~$ sudo kdump-config show
  DUMP_MODE:kdump
  USE_KDUMP:1
  KDUMP_COREDIR:/var/crash
  crashkernel addr: 0xde00
 /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-6.5.0-6-generic
  kdump initrd: 
 /var/lib/kdump/initrd.img: symbolic link to 
/var/lib/kdump/initrd.img-6.5.0-6-generic
  current state:Not ready to kdump

  kexec command:
no kexec command recorded
  ubuntu@cloudimg:~$ sudo service kdump-tools status
  ● kdump-tools.service - Kernel crash dump capture service
   Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled; 
preset: enabled)
   Active: active (exited) since Tue 2023-09-26 09:21:44 UTC; 5min ago
  Process: 515 ExecStart=/etc/init.d/kdump-tools start (code=exited, 
status=0/SUCCESS)

[Kernel-packages] [Bug 2037398] [NEW] Unable to capture kernel crash dump using arm64 mantic 6.5.0-6-generic -proposed kernel

2023-09-26 Thread Philip Roche
Public bug reported:

While testing the 6.5.0-6-generic proposed arm64 generic kernel I
encountered issues being able to use kdump.

After enabling -proposed and installing the -proposed 6.5.0-6 kernel and
rebooting I encountered the following:

`kdump-config show` shows `current state:Not ready to kdump` and
looking at the status of the kdump-tools services I see `Cannot
determine the file type of /var/lib/kdump/vmlinuz`

Full output:

```
ubuntu@cloudimg:~$ sudo kdump-config show
DUMP_MODE:  kdump
USE_KDUMP:  1
KDUMP_COREDIR:  /var/crash
crashkernel addr: 0xde00
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-6.5.0-6-generic
kdump initrd: 
   /var/lib/kdump/initrd.img: symbolic link to 
/var/lib/kdump/initrd.img-6.5.0-6-generic
current state:Not ready to kdump

kexec command:
  no kexec command recorded
ubuntu@cloudimg:~$ sudo service kdump-tools status
● kdump-tools.service - Kernel crash dump capture service
 Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled; preset: 
enabled)
 Active: active (exited) since Tue 2023-09-26 09:21:44 UTC; 5min ago
Process: 515 ExecStart=/etc/init.d/kdump-tools start (code=exited, 
status=0/SUCCESS)
   Main PID: 515 (code=exited, status=0/SUCCESS)
CPU: 4min 21.329s

Sep 26 09:16:14 cloudimg systemd[1]: Starting kdump-tools.service - Kernel 
crash dump capture service...
Sep 26 09:16:24 cloudimg kdump-tools[515]: Starting kdump-tools:
Sep 26 09:16:24 cloudimg kdump-tools[537]:  * Creating symlink 
/var/lib/kdump/vmlinuz
Sep 26 09:16:32 cloudimg kdump-tools[580]: kdump-tools: Generating 
/var/lib/kdump/initrd.img-6.5.0-6-generic
Sep 26 09:21:42 cloudimg kdump-tools[537]:  * Creating symlink 
/var/lib/kdump/initrd.img
Sep 26 09:21:43 cloudimg kdump-tools[5538]: Cannot determine the file type of 
/var/lib/kdump/vmlinuz
Sep 26 09:21:44 cloudimg kdump-tools[537]:  * failed to load kdump kernel
Sep 26 09:21:44 cloudimg kdump-tools[5539]: failed to load kdump kernel
Sep 26 09:21:44 cloudimg systemd[1]: Finished kdump-tools.service - Kernel 
crash dump capture service.

ubuntu@cloudimg:~$ ls -al /var/lib/kdump/vmlinuz
lrwxrwxrwx 1 root root 29 Sep 26 09:16 /var/lib/kdump/vmlinuz -> 
/boot/vmlinuz-6.5.0-6-generic
ubuntu@cloudimg:~$ file /var/lib/kdump/vmlinuz
/var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-6.5.0-6-generic
ubuntu@cloudimg:~$ sudo file /boot/vmlinuz-6.5.0-6-generic
/boot/vmlinuz-6.5.0-6-generic: PE32+ executable (EFI application) Aarch64 
(stripped to external PDB), for MS Windows, 2 sections
```

The reboot with 6.5.0-6 was successful and the reboot after linux-crashdump 
install was successful too. 
I used the https://ubuntu.com/server/docs/kernel-crash-dump guide for installing 
linux-crashdump and attempting to trigger a dump.
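
For reference, a minimal sketch of the steps from that guide used to
exercise kdump once it reports ready (illustrative; only run on a
disposable VM):

```
sudo kdump-config show                 # confirm the crash kernel is loaded ("ready to kdump")
sudo sysctl -w kernel.sysrq=1          # allow all SysRq functions, including the crash trigger
echo c | sudo tee /proc/sysrq-trigger  # force a kernel crash; the dump should appear under /var/crash
```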

I used arm64 qcow cloud image from http://cloud-
images.ubuntu.com/mantic/20230925/mantic-server-cloudimg-arm64.img to
test the above emulated on amd64.

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2037398

Title:
  Unable to capture kernel crash dump using arm64 mantic 6.5.0-6-generic
  -proposed kernel

Status in linux package in Ubuntu:
  New

Bug description:
  While testing the 6.5.0-6-generic proposed arm64 generic kernel I
  encountered issues being able to use kdump.

  After enabling -proposed and installing the -proposed 6.5.0-6 kernel
  and rebooting I encountered the following:

  `kdump-config show` shows `current state:Not ready to kdump` and
  looking at the status of the kdump-tools services I see `Cannot
  determine the file type of /var/lib/kdump/vmlinuz`

  Full output:

  ```
  ubuntu@cloudimg:~$ sudo kdump-config show
  DUMP_MODE:kdump
  USE_KDUMP:1
  KDUMP_COREDIR:/var/crash
  crashkernel addr: 0xde00
 /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-6.5.0-6-generic
  kdump initrd: 
 /var/lib/kdump/initrd.img: symbolic link to 
/var/lib/kdump/initrd.img-6.5.0-6-generic
  current state:Not ready to kdump

  kexec command:
no kexec command recorded
  ubuntu@cloudimg:~$ sudo service kdump-tools status
  ● kdump-tools.service - Kernel crash dump capture service
   Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled; 
preset: enabled)
   Active: active (exited) since Tue 2023-09-26 09:21:44 UTC; 5min ago
  Process: 515 ExecStart=/etc/init.d/kdump-tools start (code=exited, 
status=0/SUCCESS)
 Main PID: 515 (code=exited, status=0/SUCCESS)
  CPU: 4min 21.329s

  Sep 26 09:16:14 cloudimg systemd[1]: Starting kdump-tools.service - Kernel 
crash dump capture service...
  Sep 26 09:16:24 cloudimg kdump-tools[515]: Starting kdump-tools:
  Sep 26 09:16:24 cloudimg kdump-tools[537]:  * Creating symlink 
/var/lib/kdump/vmlinuz
  Sep 26 09:16:32 cloudimg k

[Kernel-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning

2023-09-21 Thread Philip Roche
cloud-init bug filed @ https://github.com/canonical/cloud-
init/issues/4451

** Bug watch added: github.com/canonical/cloud-init/issues #4451
   https://github.com/canonical/cloud-init/issues/4451

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during
  provisioning

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Following a recent change from linux-kvm kernel to linux-generic
  kernel in the mantic minimized images, there is a reproducible bug
  where a guest VM does not have an IP address assigned as part of
  cloud-init provisioning.

  This is easiest to reproduce when emulating arm64 on amd64 host. The
  bug is a race condition, so there could exist fast enough
  virtualisation on fast enough hardware where this bug is not present
  but in all my testing I have been able to reproduce.

  The latest mantic minimized images from http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ have force initrdless boot and
  no initrd to fallback to.

  This bug is not present in the non minimized/base images @
  http://cloud-images.ubuntu.com/mantic/ as these boot with initrd with
  the required drivers present for virtio-net.

  Reproducer

  ```
  wget -O "launch-qcow2-image-qemu-arm64.sh" 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh

  chmod +x ./launch-qcow2-image-qemu-arm64.sh
  wget 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
  ./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image 
./livecd.ubuntu-cpc.img
  ```

  You will then be able to log in with user `ubuntu` and password
  `passw0rd`.

  You can run `ip a` and see that there is a network interface present
  (separate to `lo`) but no IP address has been assigned.

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff

  ```

  This is because when cloud-init is trying to configure network
  interfaces it doesn't find any so it doesn't configure any. But by the
  time boot is complete the network interface is present but cloud-init
  provisioning has already completed.

  You can verify this by running `sudo cloud-init clean && sudo cloud-
  init init`

  You can then see a successfully configured network interface

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc pfifo_fast state 
UP group default qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
     valid_lft 86391sec preferred_lft 86391sec
  inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr 
noprefixroute
     valid_lft 86393sec preferred_lft 14393sec
  inet6 fe80::5054:ff:fe12:3456/64 scope link
     valid_lft forever preferred_lft forever

  ```

  The bug is also reproducible with an amd64 guest on an amd64 host on
  older/slower hardware.

  The suggested fixes while debugging this issue are:

  * to include `virtio-net` as a built-in in the mantic generic kernel
  * understand what needs to change in cloud-init so that it can react to late 
additions of network interfaces

  I will file a separate bug against cloud-init to address the race
  condition on emulated guest/older hardware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning

2023-09-21 Thread Philip Roche
** Description changed:

  Following a recent change from linux-kvm kernel to linux-generic kernel
- in the mantic minimized images there is a reproducable bug where a guest
- VM does not have an IP address assigned as part of cloud-init
+ in the mantic minimized images, there is a reproducable bug where a
+ guest VM does not have an IP address assigned as part of cloud-init
  provisioning.
  
- This is easiest to reproduce when emulating arm64 on adm64 host. The bug
- is a race condition so there could exist fast enough virtualisation on
+ This is easiest to reproduce when emulating arm64 on amd64 host. The bug
+ is a race condition, so there could exist fast enough virtualisation on
  fast enough hardware where this bug is not present but in all my testing
  I have been able to reproduce.
  
  The latest mantic minimized images from http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ have force initrdless boot and
  no initrd to fallback to.
  
- This bug is not present in the non minimized/base images @ http://cloud-
+ This but is not present in the non minimized/base images @ http://cloud-
  images.ubuntu.com/mantic/ as these boot with initrd with the required
  drivers present for virtio-net.
  
  Reproducer
  
  ```
  wget -O "launch-qcow2-image-qemu-arm64.sh" 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh
  
  chmod +x ./launch-qcow2-image-qemu-arm64.sh
  wget 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
  ./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image 
./livecd.ubuntu-cpc.img
  ```
  
  You will then be able to log in with user `ubuntu` and password
  `passw0rd`.
  
  You can run `ip a` and see that there is a network interface present
  (separate to `lo`) but no IP address has been assigned.
  
  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  
  ```
  
- This is because when cloud-init-local.service is trying to configure
- network interfaces it doesn't find any so it doesn't configure any. But
- by the time boot is complete the network interface is present but cloud-
- init provisioning has already completed.
+ This is because when cloud-init is trying to configure network
+ interfaces it doesn't find any so it doesn't configure any. But by the
+ time boot is complete the network interface is present but cloud-init
+ provisioning has already completed.
  
  You can verify this by running `sudo cloud-init clean && sudo cloud-init
  init`
  
  You can then see a successfully configured network interface
  
  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc pfifo_fast state 
UP group default qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
     valid_lft 86391sec preferred_lft 86391sec
  inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr 
noprefixroute
     valid_lft 86393sec preferred_lft 14393sec
  inet6 fe80::5054:ff:fe12:3456/64 scope link
     valid_lft forever preferred_lft forever
  
  ```
  
- The bug is also reproducable with amd64 guest on adm64 host on
+ The bug is also reproducible with amd64 guest on adm64 host on
  older/slower hardware.
  
  The suggested fixes while debugging this issue are:
  
  * to include `virtio-net` as a built-in in the mantic generic kernel
  * understand what needs to change in cloud-init so that it can react to late 
additions of network interfaces
  
  I will file a separate bug against cloud-init to address the race
  condition on emulated guest/older hardware.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during
  provisioning

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Following a recent change from linux-kvm kernel to linux-generic
  kernel in the mantic minimized images, there is a reproducible bug
  where a guest VM does not have an IP address assigned as part of
  cloud-init provisioning.

  This is easiest to reproduce

[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-09-21 Thread Philip Roche
There is a related bug @
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2036968 which might
have affected boot speed.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
  are undergoing some big changes prior to 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
* This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, higher
  number of ports open on an idle system and higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2036968] Re: Mantic minimized/minimal cloud images do not receive IP address during provisioning

2023-09-21 Thread Philip Roche
** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2036968

Title:
  Mantic minimized/minimal cloud images do not receive IP address during
  provisioning

Status in cloud-images:
  New
Status in linux package in Ubuntu:
  New

Bug description:
  Following a recent change from linux-kvm kernel to linux-generic
  kernel in the mantic minimized images there is a reproducible bug
  where a guest VM does not have an IP address assigned as part of
  cloud-init provisioning.

  This is easiest to reproduce when emulating arm64 on an amd64 host. The
  bug is a race condition so there could exist fast enough
  virtualisation on fast enough hardware where this bug is not present
  but in all my testing I have been able to reproduce.

  The latest mantic minimized images from http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ have force initrdless boot and
  no initrd to fallback to.

  This bug is not present in the non minimized/base images @
  http://cloud-images.ubuntu.com/mantic/ as these boot with initrd with
  the required drivers present for virtio-net.

  Reproducer

  ```
  wget -O "launch-qcow2-image-qemu-arm64.sh" 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/launch-qcow2-image-qemu-arm64.sh

  chmod +x ./launch-qcow2-image-qemu-arm64.sh
  wget 
https://people.canonical.com/~philroche/20230921-cloud-images-mantic-fail-to-provision/livecd.ubuntu-cpc.img
  ./launch-qcow2-image-qemu-arm64.sh --password passw0rd --image 
./livecd.ubuntu-cpc.img
  ```

  You will then be able to log in with user `ubuntu` and password
  `passw0rd`.

  You can run `ip a` and see that there is a network interface present
  (separate to `lo`) but no IP address has been assigned.

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff

  ```

  This is because when cloud-init is trying to configure network
  interfaces it doesn't find any so it doesn't configure any. But by the
  time boot is complete the network interface is present but cloud-init
  provisioning has already completed.

  You can verify this by running `sudo cloud-init clean && sudo cloud-
  init init`

  You can then see a successfully configured network interface

  ```
  ubuntu@cloudimg:~$ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: enp0s1:  mtu 1500 qdisc pfifo_fast state 
UP group default qlen 1000
  link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
  inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s1
     valid_lft 86391sec preferred_lft 86391sec
  inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr 
noprefixroute
     valid_lft 86393sec preferred_lft 14393sec
  inet6 fe80::5054:ff:fe12:3456/64 scope link
     valid_lft forever preferred_lft forever

  ```

  The bug is also reproducible with an amd64 guest on an amd64 host on
  older/slower hardware.

  The suggested fixes while debugging this issue are:

  * to include `virtio-net` as a built-in in the mantic generic kernel
  * understand what needs to change in cloud-init so that it can react to late 
additions of network interfaces

  I will file a separate bug against cloud-init to address the race
  condition on emulated guest/older hardware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2036968/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-09-21 Thread Philip Roche
@paelzer agreed. Good plan.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
  are undergoing some big changes prior to 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
* This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, higher
  number of ports open on an idle system and higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-09-04 Thread Philip Roche
@paelzer given the above findings and discussion, I would like to mark
this as Invalid for cloud-images project and continue the conversation
in the context of kernel only. +1 / -1 ?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
  are undergoing some big changes prior to 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
* This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, higher
  number of ports open on an idle system and higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-08-30 Thread Philip Roche
** Description changed:

- The Mantic (Ubuntu 23.10) images are undergoing some big changes prior
- to 23.10 release in October.
+ The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
+ are undergoing some big changes prior to 23.10 release in October.
  
  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.
  
  This bug is to track the unexpected changes and discuss/resolve these.
  
  The changes that have been made to mantic minimal:
  
  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being trakced @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
+   * This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot
  
  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.
  
  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/
  
  We have had reports of higher memory usage on an idle system, higher
  number of ports open on an idle system and higher number of process
  running on a idle system.
  
  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/
  
  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) download/qcow2 images available @ 
https://cloud-images.ubuntu.com/minimal/
  are undergoing some big changes prior to 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
* This is during image build only and will not affect any subsequent 
package installs
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, a higher
  number of ports open on an idle system and a higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-08-29 Thread Philip Roche
I have uploaded further data now to
https://people.canonical.com/~philroche/20230824-manticl-minimal-
LP2032933/server-metrics/ with kernelmodules, kernelconfig, services,
timers etc. for each of the three images being inspected. This
additional data was gathered with a modified fork of the `server-test-
scripts` repo @ https://github.com/philroche/server-test-
scripts/blob/feature/local-lxc-image-execution-additional-data-
gathering/metric-server-simple/metric-server-simple.sh.
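
For reference, this is roughly the kind of data gathering involved, for
anyone who wants to reproduce it locally (a sketch under my own assumptions -
the authoritative version is the metric-server-simple.sh fork linked above,
and the output file names here are only illustrative):

  #!/bin/sh
  # Sketch: capture the same kind of per-image data on an idle boot.
  outdir="${1:-/tmp/image-metrics}"
  mkdir -p "$outdir"
  lsmod | sort > "$outdir/kernelmodules"                    # loaded kernel modules
  cat "/boot/config-$(uname -r)" > "$outdir/kernelconfig"   # kernel config
  systemctl list-units --type=service --no-pager > "$outdir/services"
  systemctl list-timers --all --no-pager > "$outdir/timers"
  ss -tulpn > "$outdir/ports"                               # listening ports
  ps -eo pid,comm --no-headers | sort -k2 > "$outdir/processes"
  free -m > "$outdir/memory"                                # idle memory usage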

It seems that most of the memory and process increase is attributable to
the kernel change, and we know that this was a conscious decision, with the
following reported bugs supporting that decision.

* https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2006488
* https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1931841 
* https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1685291

Given the above, I feel what we can work on is whether any of the
processes/modules introduced by the switch to the generic kernel should be
omitted for the minimal images.

The best, easiest source of this information is the data gathered from
the latest image with both the generic kernel and the switch to the new
minimal seed - https://people.canonical.com/~philroche/20230824-manticl-
minimal-LP2032933/server-metrics/20230821.1-after-kernel-change-after-
seed-change-mantic-minimal-cloudimg-amd64-data-f93870221eb8/

@seth-arnold You highlighted `ksmd`. Are there any others that concern
you?

@paelzer Are you happy to adjust your regression testing/metrics
gathering to allow for the increased memory required, knowing that it was a
conscious decision to switch kernels and incur the performance hit for
the benefit of using a kernel with more support and fewer reported bugs?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) images are undergoing some big changes prior
  to the 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, a higher
  number of ports open on an idle system and a higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-08-28 Thread Philip Roche
@paelzer

> The change of the image build sadly combined it all

See the description noting
https://people.canonical.com/~philroche/20230824-manticl-minimal-
LP2032933/ which should help in determining where the changes were
introduced, as I have provided three images across the various stages of
changes - no change -> new kernel -> new kernel + new seed.

For example, https://pastebin.ubuntu.com/p/sJ5wGk4G7h/ shows the process
diff between the previous image and the image with only the new kernel.
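
As a sketch of how such a diff can be produced locally (the directory and
file names below are placeholders, not the actual paths of the published
data):

  # Boot each image (e.g. in lxd or QEMU), capture its process list and
  # loaded modules into a per-image directory, then diff the two captures.
  old=before-kernel-change
  new=after-kernel-change
  diff -u "$old/processes" "$new/processes"          # processes only present with linux-generic
  diff -u "$old/kernelmodules" "$new/kernelmodules"  # modules pulled in by the generic kernel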

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) images are undergoing some big changes prior
  to the 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, a higher
  number of ports open on an idle system and a higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2032933] Re: Mantic (23.10) minimal images increase in memory consumption, port usage and processes running

2023-08-28 Thread Philip Roche
The diff in process count from the kernel-change image to the kernel-change
+ seed-change image is actually a reduction in processes - see the diff @
https://pastebin.ubuntu.com/p/PXtQM9gB2K/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2032933

Title:
  Mantic (23.10) minimal images increase in memory consumption, port
  usage and processes running

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The Mantic (Ubuntu 23.10) images are undergoing some big changes prior
  to the 23.10 release in October.

  This is a devel release so this is the perfect time to be making these
  changes but we are noticing some changes that were not expected.

  This bug is to track the unexpected changes and discuss/resolve these.

  The changes that have been made to mantic minimal:

  * Move to the linux-generic kernel from the linux-kvm kernel
    * This also involved removal of the virtio-blk driver, which is the default 
for QEMU and OpenStack, but this is being restored in an upcoming 6.5 mantic 
kernel and is being tracked @ 
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2030745
  * Move to using minimal-cloud seed - see 
https://ubuntu-archive-team.ubuntu.com/seeds/ubuntu.mantic/cloud-minimal
  * No longer installing Recommends packages
  * No initramfs fallback for boot - only initramfsless boot

  The latest mantic minimal images are available @ http://cloud-
  images.ubuntu.com/minimal/daily/mantic/ and are also available in the
  public clouds.

  A package name manifest diff can be seen @
  https://pastebin.ubuntu.com/p/rRd6STnNmK/

  We have had reports of higher memory usage on an idle system, a higher
  number of ports open on an idle system and a higher number of processes
  running on an idle system.

  To help with debugging I have built and uploaded the following images
  and package manifests to
  https://people.canonical.com/~philroche/20230824-manticl-minimal-
  LP2032933/

  * 
20230618-before-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * Before kernel change and before seed change
  * 
20230824-after-kernel-change-before-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and before seed change
  * 
20230821.1-after-kernel-change-after-seed-change-mantic-minimal-cloudimg-amd64
    * After kernel change and after seed change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/2032933/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2017790] Re: Intel Wi-Fi 6 AX201 failing in 23.04

2023-04-29 Thread David Roche
Windows was installed but it’s removed now. I have seen suggestions to
create a bootable USB to disable the WiFi adapter, but I’m not sure if
that would work.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-lowlatency in Ubuntu.
https://bugs.launchpad.net/bugs/2017790

Title:
  Intel Wi-Fi 6 AX201 failing in 23.04

Status in linux-lowlatency package in Ubuntu:
  Confirmed

Bug description:
  After upgrading from 22.04 to 23.04 the Intel wifi driver is not
  loading; I am seeing the following in dmesg:

  [  215.024382] Loading of unsigned module is rejected
  [  262.051998] [ cut here ]
  [  262.052001] WARNING: CPU: 2 PID: 6946 at net/netlink/genetlink.c:570 
genl_validate_ops+0x1cc/0x270
  [  262.052007] Modules linked in: cfg80211(O+) rfcomm xt_conntrack 
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat 
nf_tables libcrc32c nfnetlink br_netfilter bridge stp llc 
vmw_vsock_vmci_transport vsock vmw_vmci snd_seq_dummy snd_hrtimer overlay cmac 
algif_hash algif_skcipher af_alg bnep dell_rbu typec_displayport 
snd_hda_codec_hdmi snd_ctl_led binfmt_misc snd_hda_codec_realtek 
snd_hda_codec_generic snd_sof_pci_intel_tgl snd_sof_intel_hda_common 
soundwire_intel hid_logitech_hidpp soundwire_generic_allocation nls_iso8859_1 
soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof 
snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match 
snd_soc_acpi soundwire_bus snd_soc_core snd_compress ac97_bus 
x86_pkg_temp_thermal r8153_ecm snd_pcm_dmaengine intel_powerclamp cdc_ether 
coretemp snd_hda_intel usbnet snd_usb_audio kvm_intel snd_intel_dspcfg r8152 
snd_intel_sdw_acpi
  [  262.052038]  mii hid_plantronics hid_logitech_dj snd_usbmidi_lib 
snd_hda_codec kvm snd_hda_core snd_hwdep irqbypass mei_hdcp mei_pxp 
intel_rapl_msr dell_laptop snd_pcm crct10dif_pclmul i915 polyval_clmulni 
snd_seq_midi polyval_generic snd_seq_midi_event ghash_clmulni_intel 
sha512_ssse3 snd_rawmidi uvcvideo btusb videobuf2_vmalloc 
hid_sensor_custom_intel_hinge aesni_intel hid_sensor_gyro_3d 
hid_sensor_accel_3d dell_wmi videobuf2_memops snd_seq crypto_simd 
hid_sensor_trigger btrtl drm_buddy snd_seq_device btbcm videobuf2_v4l2 btintel 
processor_thermal_device_pci_legacy cryptd cmdlinepart 
industrialio_triggered_buffer dell_smbios rapl dcdbas snd_timer ttm 
dell_wmi_sysman btmtk videodev kfifo_buf spi_nor processor_thermal_device 
hid_sensor_iio_common processor_thermal_rfim intel_cstate 
firmware_attributes_class ledtrig_audio drm_display_helper dell_wmi_descriptor 
wmi_bmof industrialio mei_me mtd bluetooth snd videobuf2_common mc cec 
processor_thermal_mbox soundcore rc_core mei ecdh_generic
  [  262.052068]  processor_thermal_rapl iwlwifi_compat(O) drm_kms_helper ecc 
ucsi_acpi joydev i2c_algo_bit intel_rapl_common typec_ucsi syscopyarea 
intel_soc_dts_iosf sysfillrect typec sysimgblt igen6_edac int3403_thermal 
soc_button_array int340x_thermal_zone int3400_thermal intel_hid 
acpi_thermal_rel acpi_pad acpi_tad sparse_keymap hid_multitouch input_leds 
mac_hid serio_raw msr parport_pc ppdev drm lp parport efi_pstore dmi_sysfs 
ip_tables x_tables autofs4 usbhid hid_sensor_custom hid_sensor_hub 
intel_ishtp_hid hid_generic nvme nvme_core intel_ish_ipc i2c_hid_acpi 
spi_intel_pci rtsx_pci_sdmmc crc32_pclmul video i2c_i801 i2c_hid intel_lpss_pci 
xhci_pci spi_intel intel_ishtp nvme_common thunderbolt psmouse i2c_smbus 
intel_lpss rtsx_pci idma64 xhci_pci_renesas hid wmi pinctrl_tigerlake
  [  262.052097] CPU: 2 PID: 6946 Comm: modprobe Tainted: GW  O   
6.2.0-1003-lowlatency #3-Ubuntu
  [  262.052098] Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.24.2 
02/24/2023
  [  262.052099] RIP: 0010:genl_validate_ops+0x1cc/0x270
  [  262.052102] Code: 81 c4 d8 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d 31 d2 31 
c9 31 ff c3 cc cc cc cc 49 83 7d 50 00 0f 85 b9 fe ff ff 0f 0b eb bd <0f> 0b eb 
b9 0f 0b eb b5 0f 0b eb b1 45 84 ff 75 04 31 c0 eb ad 4d
  [  262.052103] RSP: 0018:a91bc7103a88 EFLAGS: 00010206
  [  262.052105] RAX: 0003 RBX: a91bc7103af0 RCX: 

  [  262.052106] RDX:  RSI:  RDI: 

  [  262.052106] RBP: a91bc7103b88 R08:  R09: 

  [  262.052107] R10:  R11:  R12: 
0001
  [  262.052107] R13: c1824780 R14: a91bc7103a88 R15: 

  [  262.052108] FS:  7fd67c380040() GS:8fcc7f68() 
knlGS:
  [  262.052109] CS:  0010 DS:  ES:  CR0: 80050033
  [  262.052110] CR2: 7fff38c0e848 CR3: 00010a938006 CR4: 
00770ee0
  [  262.052111] PKRU: 5554
  [  262.052112] Call Trace:
  [  262.052113]  
  [  262.052115]  ? __pfx_nl80211_pre_doit+0x10/0x10 [cfg80211]
  [  262.052146]  

[Kernel-packages] [Bug 2017790] Re: Intel Wi-Fi 6 AX201 failing in 23.04

2023-04-29 Thread David Roche
@Matthew No luck I'm afraid - I removed the module and I'm still getting
the same issue. @Jeremy are there any other steps I can try?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-lowlatency in Ubuntu.
https://bugs.launchpad.net/bugs/2017790

Title:
  Intel Wi-Fi 6 AX201 failing in 23.04

Status in linux-lowlatency package in Ubuntu:
  Confirmed

Bug description:
  After upgrading from 22.04 to 23.04 the Intel wifi driver is not
  loading; I am seeing the following in dmesg:

  [  215.024382] Loading of unsigned module is rejected
  [  262.051998] [ cut here ]
  [  262.052001] WARNING: CPU: 2 PID: 6946 at net/netlink/genetlink.c:570 
genl_validate_ops+0x1cc/0x270
  [  262.052007] Modules linked in: cfg80211(O+) rfcomm xt_conntrack 
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat 
nf_tables libcrc32c nfnetlink br_netfilter bridge stp llc 
vmw_vsock_vmci_transport vsock vmw_vmci snd_seq_dummy snd_hrtimer overlay cmac 
algif_hash algif_skcipher af_alg bnep dell_rbu typec_displayport 
snd_hda_codec_hdmi snd_ctl_led binfmt_misc snd_hda_codec_realtek 
snd_hda_codec_generic snd_sof_pci_intel_tgl snd_sof_intel_hda_common 
soundwire_intel hid_logitech_hidpp soundwire_generic_allocation nls_iso8859_1 
soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof 
snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match 
snd_soc_acpi soundwire_bus snd_soc_core snd_compress ac97_bus 
x86_pkg_temp_thermal r8153_ecm snd_pcm_dmaengine intel_powerclamp cdc_ether 
coretemp snd_hda_intel usbnet snd_usb_audio kvm_intel snd_intel_dspcfg r8152 
snd_intel_
 sdw_acpi
  [  262.052038]  mii hid_plantronics hid_logitech_dj snd_usbmidi_lib 
snd_hda_codec kvm snd_hda_core snd_hwdep irqbypass mei_hdcp mei_pxp 
intel_rapl_msr dell_laptop snd_pcm crct10dif_pclmul i915 polyval_clmulni 
snd_seq_midi polyval_generic snd_seq_midi_event ghash_clmulni_intel 
sha512_ssse3 snd_rawmidi uvcvideo btusb videobuf2_vmalloc 
hid_sensor_custom_intel_hinge aesni_intel hid_sensor_gyro_3d 
hid_sensor_accel_3d dell_wmi videobuf2_memops snd_seq crypto_simd 
hid_sensor_trigger btrtl drm_buddy snd_seq_device btbcm videobuf2_v4l2 btintel 
processor_thermal_device_pci_legacy cryptd cmdlinepart 
industrialio_triggered_buffer dell_smbios rapl dcdbas snd_timer ttm 
dell_wmi_sysman btmtk videodev kfifo_buf spi_nor processor_thermal_device 
hid_sensor_iio_common processor_thermal_rfim intel_cstate 
firmware_attributes_class ledtrig_audio drm_display_helper dell_wmi_descriptor 
wmi_bmof industrialio mei_me mtd bluetooth snd videobuf2_common mc cec 
processor_thermal_mbox soundcore rc_core mei ecdh_
 generic
  [  262.052068]  processor_thermal_rapl iwlwifi_compat(O) drm_kms_helper ecc 
ucsi_acpi joydev i2c_algo_bit intel_rapl_common typec_ucsi syscopyarea 
intel_soc_dts_iosf sysfillrect typec sysimgblt igen6_edac int3403_thermal 
soc_button_array int340x_thermal_zone int3400_thermal intel_hid 
acpi_thermal_rel acpi_pad acpi_tad sparse_keymap hid_multitouch input_leds 
mac_hid serio_raw msr parport_pc ppdev drm lp parport efi_pstore dmi_sysfs 
ip_tables x_tables autofs4 usbhid hid_sensor_custom hid_sensor_hub 
intel_ishtp_hid hid_generic nvme nvme_core intel_ish_ipc i2c_hid_acpi 
spi_intel_pci rtsx_pci_sdmmc crc32_pclmul video i2c_i801 i2c_hid intel_lpss_pci 
xhci_pci spi_intel intel_ishtp nvme_common thunderbolt psmouse i2c_smbus 
intel_lpss rtsx_pci idma64 xhci_pci_renesas hid wmi pinctrl_tigerlake
  [  262.052097] CPU: 2 PID: 6946 Comm: modprobe Tainted: GW  O   
6.2.0-1003-lowlatency #3-Ubuntu
  [  262.052098] Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.24.2 
02/24/2023
  [  262.052099] RIP: 0010:genl_validate_ops+0x1cc/0x270
  [  262.052102] Code: 81 c4 d8 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d 31 d2 31 
c9 31 ff c3 cc cc cc cc 49 83 7d 50 00 0f 85 b9 fe ff ff 0f 0b eb bd <0f> 0b eb 
b9 0f 0b eb b5 0f 0b eb b1 45 84 ff 75 04 31 c0 eb ad 4d
  [  262.052103] RSP: 0018:a91bc7103a88 EFLAGS: 00010206
  [  262.052105] RAX: 0003 RBX: a91bc7103af0 RCX: 

  [  262.052106] RDX:  RSI:  RDI: 

  [  262.052106] RBP: a91bc7103b88 R08:  R09: 

  [  262.052107] R10:  R11:  R12: 
0001
  [  262.052107] R13: c1824780 R14: a91bc7103a88 R15: 

  [  262.052108] FS:  7fd67c380040() GS:8fcc7f68() 
knlGS:
  [  262.052109] CS:  0010 DS:  ES:  CR0: 80050033
  [  262.052110] CR2: 7fff38c0e848 CR3: 00010a938006 CR4: 
00770ee0
  [  262.052111] PKRU: 5554
  [  262.052112] Call Trace:
  [  262.052113]  
  [  262.052115]  ? __pfx_nl80211_pre_doit+0x10/0x10 [cfg80211]
  [  262.052146]  ? __pfx_nl80211_get_w

[Kernel-packages] [Bug 2017790] Re: Intel Wi-Fi 6 AX201 failing in 23.04

2023-04-28 Thread David Roche
I have removed the backports and now I'm getting this:

sudo dmesg |grep -i wifi
[3.412202] Intel(R) Wireless WiFi driver for Linux
[3.412300] iwlwifi :00:14.3: enabling device ( -> 0002)
[3.525445] iwlwifi :00:14.3: CSR_RESET = 0x10
[3.525473] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x0
[3.525525] iwlwifi :00:14.3: value [iter 0]: 0x
[3.525577] iwlwifi :00:14.3: value [iter 1]: 0x
[3.525627] iwlwifi :00:14.3: value [iter 2]: 0x
[3.525667] iwlwifi :00:14.3: value [iter 3]: 0x
[3.525700] iwlwifi :00:14.3: value [iter 4]: 0x
[3.525751] iwlwifi :00:14.3: value [iter 5]: 0x
[3.525796] iwlwifi :00:14.3: value [iter 6]: 0x
[3.525838] iwlwifi :00:14.3: value [iter 7]: 0x
[3.525870] iwlwifi :00:14.3: value [iter 8]: 0x
[3.525919] iwlwifi :00:14.3: value [iter 9]: 0x
[3.525985] iwlwifi :00:14.3: value [iter 10]: 0x
[3.526037] iwlwifi :00:14.3: value [iter 11]: 0x
[3.526075] iwlwifi :00:14.3: value [iter 12]: 0x
[3.526131] iwlwifi :00:14.3: value [iter 13]: 0x
[3.526201] iwlwifi :00:14.3: value [iter 14]: 0x
[3.526225] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x1
[3.526284] iwlwifi :00:14.3: value [iter 0]: 0x
[3.526340] iwlwifi :00:14.3: value [iter 1]: 0x
[3.526381] iwlwifi :00:14.3: value [iter 2]: 0x
[3.526434] iwlwifi :00:14.3: value [iter 3]: 0x
[3.526494] iwlwifi :00:14.3: value [iter 4]: 0x
[3.526543] iwlwifi :00:14.3: value [iter 5]: 0x
[3.526585] iwlwifi :00:14.3: value [iter 6]: 0x
[3.526651] iwlwifi :00:14.3: value [iter 7]: 0x
[3.526702] iwlwifi :00:14.3: value [iter 8]: 0x
[3.526763] iwlwifi :00:14.3: value [iter 9]: 0x
[3.526822] iwlwifi :00:14.3: value [iter 10]: 0x
[3.526875] iwlwifi :00:14.3: value [iter 11]: 0x
[3.526936] iwlwifi :00:14.3: value [iter 12]: 0x
[3.526995] iwlwifi :00:14.3: value [iter 13]: 0x
[3.527041] iwlwifi :00:14.3: value [iter 14]: 0x
[3.527065] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x6
[3.527113] iwlwifi :00:14.3: value [iter 0]: 0x
[3.527151] iwlwifi :00:14.3: value [iter 1]: 0x
[3.527190] iwlwifi :00:14.3: value [iter 2]: 0x
[3.527230] iwlwifi :00:14.3: value [iter 3]: 0x
[3.527270] iwlwifi :00:14.3: value [iter 4]: 0x
[3.527338] iwlwifi :00:14.3: value [iter 5]: 0x
[3.527369] iwlwifi :00:14.3: value [iter 6]: 0x
[3.527405] iwlwifi :00:14.3: value [iter 7]: 0x
[3.527439] iwlwifi :00:14.3: value [iter 8]: 0x
[3.527472] iwlwifi :00:14.3: value [iter 9]: 0x
[3.527513] iwlwifi :00:14.3: value [iter 10]: 0x
[3.527561] iwlwifi :00:14.3: value [iter 11]: 0x
[3.527602] iwlwifi :00:14.3: value [iter 12]: 0x
[3.527636] iwlwifi :00:14.3: value [iter 13]: 0x
[3.527698] iwlwifi :00:14.3: value [iter 14]: 0x
[3.527718] iwlwifi :00:14.3: Host monitor block 0x22 vector 0x0
[3.527774] iwlwifi :00:14.3: value [iter 0]: 0x
[3.527820] iwlwifi: probe of :00:14.3 failed with error -110
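
For completeness, a quick way to double-check that the backport is really
gone and the in-tree module is the one being loaded (a sketch - package
names can differ between installs):

  lsmod | grep -E 'iwlwifi|cfg80211'       # is iwlwifi_compat still listed?
  # in-tree modules live under /lib/modules/$(uname -r)/kernel/, DKMS
  # backports typically under .../updates/dkms/
  modinfo -n iwlwifi
  dpkg -l | grep -i -E 'iwlwifi|backport'  # any backport packages still installed?
  dkms status 2>/dev/null                  # any iwlwifi DKMS modules still registered?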

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-lowlatency in Ubuntu.
https://bugs.launchpad.net/bugs/2017790

Title:
  Intel Wi-Fi 6 AX201 failing in 23.04

Status in linux-lowlatency package in Ubuntu:
  Confirmed

Bug description:
  After upgrading from 22.04 to 23.04 the Intel wifi driver is not
  loading; I am seeing the following in dmesg:

  [  215.024382] Loading of unsigned module is rejected
  [  262.051998] [ cut here ]
  [  262.052001] WARNING: CPU: 2 PID: 6946 at net/netlink/genetlink.c:570 
genl_validate_ops+0x1cc/0x270
  [  262.052007] Modules linked in: cfg80211(O+) rfcomm xt_conntrack 
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat 
nf_tables libcrc32c nfnetlink br_netfilter bridge stp llc 
vmw_vsock_vmci_transport vsock vmw_vmci snd_seq_dummy snd_hrtimer overlay cmac 
algif_hash algif_skcipher af_alg bnep dell_rbu typec_displayport 
snd_hda_codec_hdmi snd_ctl_led binfmt_misc snd_hda_codec_realtek 
snd_hda_codec_generic snd_sof_pci_intel_tgl snd_sof_intel_hda_common 
soundwire_intel hid_logitech_hidpp soundwire_generic_alloca

[Kernel-packages] [Bug 2017790] Re: Intel Wi-Fi 6 AX201 failing in 23.04

2023-04-26 Thread David Roche
Also seeing this:

[2.691388] iwlwifi_compat: loading out-of-tree module taints kernel.
[2.698204] Loading modules backported from iwlwifi
[2.698208] iwlwifi-stack-public:master:9904:0e80336f

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-lowlatency in Ubuntu.
https://bugs.launchpad.net/bugs/2017790

Title:
  Intel Wi-Fi 6 AX201 failing in 23.04

Status in linux-lowlatency package in Ubuntu:
  New

Bug description:
  After upgrading from 22.04 to 23.04 the Intel wifi driver is not
  loading; I am seeing the following in dmesg:

  [  215.024382] Loading of unsigned module is rejected
  [  262.051998] [ cut here ]
  [  262.052001] WARNING: CPU: 2 PID: 6946 at net/netlink/genetlink.c:570 
genl_validate_ops+0x1cc/0x270
  [  262.052007] Modules linked in: cfg80211(O+) rfcomm xt_conntrack 
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat 
nf_tables libcrc32c nfnetlink br_netfilter bridge stp llc 
vmw_vsock_vmci_transport vsock vmw_vmci snd_seq_dummy snd_hrtimer overlay cmac 
algif_hash algif_skcipher af_alg bnep dell_rbu typec_displayport 
snd_hda_codec_hdmi snd_ctl_led binfmt_misc snd_hda_codec_realtek 
snd_hda_codec_generic snd_sof_pci_intel_tgl snd_sof_intel_hda_common 
soundwire_intel hid_logitech_hidpp soundwire_generic_allocation nls_iso8859_1 
soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof 
snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match 
snd_soc_acpi soundwire_bus snd_soc_core snd_compress ac97_bus 
x86_pkg_temp_thermal r8153_ecm snd_pcm_dmaengine intel_powerclamp cdc_ether 
coretemp snd_hda_intel usbnet snd_usb_audio kvm_intel snd_intel_dspcfg r8152 
snd_intel_
 sdw_acpi
  [  262.052038]  mii hid_plantronics hid_logitech_dj snd_usbmidi_lib 
snd_hda_codec kvm snd_hda_core snd_hwdep irqbypass mei_hdcp mei_pxp 
intel_rapl_msr dell_laptop snd_pcm crct10dif_pclmul i915 polyval_clmulni 
snd_seq_midi polyval_generic snd_seq_midi_event ghash_clmulni_intel 
sha512_ssse3 snd_rawmidi uvcvideo btusb videobuf2_vmalloc 
hid_sensor_custom_intel_hinge aesni_intel hid_sensor_gyro_3d 
hid_sensor_accel_3d dell_wmi videobuf2_memops snd_seq crypto_simd 
hid_sensor_trigger btrtl drm_buddy snd_seq_device btbcm videobuf2_v4l2 btintel 
processor_thermal_device_pci_legacy cryptd cmdlinepart 
industrialio_triggered_buffer dell_smbios rapl dcdbas snd_timer ttm 
dell_wmi_sysman btmtk videodev kfifo_buf spi_nor processor_thermal_device 
hid_sensor_iio_common processor_thermal_rfim intel_cstate 
firmware_attributes_class ledtrig_audio drm_display_helper dell_wmi_descriptor 
wmi_bmof industrialio mei_me mtd bluetooth snd videobuf2_common mc cec 
processor_thermal_mbox soundcore rc_core mei ecdh_
 generic
  [  262.052068]  processor_thermal_rapl iwlwifi_compat(O) drm_kms_helper ecc 
ucsi_acpi joydev i2c_algo_bit intel_rapl_common typec_ucsi syscopyarea 
intel_soc_dts_iosf sysfillrect typec sysimgblt igen6_edac int3403_thermal 
soc_button_array int340x_thermal_zone int3400_thermal intel_hid 
acpi_thermal_rel acpi_pad acpi_tad sparse_keymap hid_multitouch input_leds 
mac_hid serio_raw msr parport_pc ppdev drm lp parport efi_pstore dmi_sysfs 
ip_tables x_tables autofs4 usbhid hid_sensor_custom hid_sensor_hub 
intel_ishtp_hid hid_generic nvme nvme_core intel_ish_ipc i2c_hid_acpi 
spi_intel_pci rtsx_pci_sdmmc crc32_pclmul video i2c_i801 i2c_hid intel_lpss_pci 
xhci_pci spi_intel intel_ishtp nvme_common thunderbolt psmouse i2c_smbus 
intel_lpss rtsx_pci idma64 xhci_pci_renesas hid wmi pinctrl_tigerlake
  [  262.052097] CPU: 2 PID: 6946 Comm: modprobe Tainted: GW  O   
6.2.0-1003-lowlatency #3-Ubuntu
  [  262.052098] Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.24.2 
02/24/2023
  [  262.052099] RIP: 0010:genl_validate_ops+0x1cc/0x270
  [  262.052102] Code: 81 c4 d8 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d 31 d2 31 
c9 31 ff c3 cc cc cc cc 49 83 7d 50 00 0f 85 b9 fe ff ff 0f 0b eb bd <0f> 0b eb 
b9 0f 0b eb b5 0f 0b eb b1 45 84 ff 75 04 31 c0 eb ad 4d
  [  262.052103] RSP: 0018:a91bc7103a88 EFLAGS: 00010206
  [  262.052105] RAX: 0003 RBX: a91bc7103af0 RCX: 

  [  262.052106] RDX:  RSI:  RDI: 

  [  262.052106] RBP: a91bc7103b88 R08:  R09: 

  [  262.052107] R10:  R11:  R12: 
0001
  [  262.052107] R13: c1824780 R14: a91bc7103a88 R15: 

  [  262.052108] FS:  7fd67c380040() GS:8fcc7f68() 
knlGS:
  [  262.052109] CS:  0010 DS:  ES:  CR0: 80050033
  [  262.052110] CR2: 7fff38c0e848 CR3: 00010a938006 CR4: 
00770ee0
  [  262.052111] PKRU: 5554
  [  262.052112] Call Trace:
  [  262.052113]  
  [  262.052115]  ? __pfx_nl80211_pre

[Kernel-packages] [Bug 2017790] [NEW] Intel Wi-Fi 6 AX201 failing in 23.04

2023-04-26 Thread David Roche
Public bug reported:

After upgrading from 22.04 to 23.04 the Intel wifi driver is not
loading; I am seeing the following in dmesg:

[  215.024382] Loading of unsigned module is rejected
[  262.051998] [ cut here ]
[  262.052001] WARNING: CPU: 2 PID: 6946 at net/netlink/genetlink.c:570 
genl_validate_ops+0x1cc/0x270
[  262.052007] Modules linked in: cfg80211(O+) rfcomm xt_conntrack 
nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack 
nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat 
nf_tables libcrc32c nfnetlink br_netfilter bridge stp llc 
vmw_vsock_vmci_transport vsock vmw_vmci snd_seq_dummy snd_hrtimer overlay cmac 
algif_hash algif_skcipher af_alg bnep dell_rbu typec_displayport 
snd_hda_codec_hdmi snd_ctl_led binfmt_misc snd_hda_codec_realtek 
snd_hda_codec_generic snd_sof_pci_intel_tgl snd_sof_intel_hda_common 
soundwire_intel hid_logitech_hidpp soundwire_generic_allocation nls_iso8859_1 
soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof 
snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match 
snd_soc_acpi soundwire_bus snd_soc_core snd_compress ac97_bus 
x86_pkg_temp_thermal r8153_ecm snd_pcm_dmaengine intel_powerclamp cdc_ether 
coretemp snd_hda_intel usbnet snd_usb_audio kvm_intel snd_intel_dspcfg r8152 
snd_intel_sd
 w_acpi
[  262.052038]  mii hid_plantronics hid_logitech_dj snd_usbmidi_lib 
snd_hda_codec kvm snd_hda_core snd_hwdep irqbypass mei_hdcp mei_pxp 
intel_rapl_msr dell_laptop snd_pcm crct10dif_pclmul i915 polyval_clmulni 
snd_seq_midi polyval_generic snd_seq_midi_event ghash_clmulni_intel 
sha512_ssse3 snd_rawmidi uvcvideo btusb videobuf2_vmalloc 
hid_sensor_custom_intel_hinge aesni_intel hid_sensor_gyro_3d 
hid_sensor_accel_3d dell_wmi videobuf2_memops snd_seq crypto_simd 
hid_sensor_trigger btrtl drm_buddy snd_seq_device btbcm videobuf2_v4l2 btintel 
processor_thermal_device_pci_legacy cryptd cmdlinepart 
industrialio_triggered_buffer dell_smbios rapl dcdbas snd_timer ttm 
dell_wmi_sysman btmtk videodev kfifo_buf spi_nor processor_thermal_device 
hid_sensor_iio_common processor_thermal_rfim intel_cstate 
firmware_attributes_class ledtrig_audio drm_display_helper dell_wmi_descriptor 
wmi_bmof industrialio mei_me mtd bluetooth snd videobuf2_common mc cec 
processor_thermal_mbox soundcore rc_core mei ecdh_ge
 neric
[  262.052068]  processor_thermal_rapl iwlwifi_compat(O) drm_kms_helper ecc 
ucsi_acpi joydev i2c_algo_bit intel_rapl_common typec_ucsi syscopyarea 
intel_soc_dts_iosf sysfillrect typec sysimgblt igen6_edac int3403_thermal 
soc_button_array int340x_thermal_zone int3400_thermal intel_hid 
acpi_thermal_rel acpi_pad acpi_tad sparse_keymap hid_multitouch input_leds 
mac_hid serio_raw msr parport_pc ppdev drm lp parport efi_pstore dmi_sysfs 
ip_tables x_tables autofs4 usbhid hid_sensor_custom hid_sensor_hub 
intel_ishtp_hid hid_generic nvme nvme_core intel_ish_ipc i2c_hid_acpi 
spi_intel_pci rtsx_pci_sdmmc crc32_pclmul video i2c_i801 i2c_hid intel_lpss_pci 
xhci_pci spi_intel intel_ishtp nvme_common thunderbolt psmouse i2c_smbus 
intel_lpss rtsx_pci idma64 xhci_pci_renesas hid wmi pinctrl_tigerlake
[  262.052097] CPU: 2 PID: 6946 Comm: modprobe Tainted: GW  O   
6.2.0-1003-lowlatency #3-Ubuntu
[  262.052098] Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.24.2 
02/24/2023
[  262.052099] RIP: 0010:genl_validate_ops+0x1cc/0x270
[  262.052102] Code: 81 c4 d8 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d 31 d2 31 
c9 31 ff c3 cc cc cc cc 49 83 7d 50 00 0f 85 b9 fe ff ff 0f 0b eb bd <0f> 0b eb 
b9 0f 0b eb b5 0f 0b eb b1 45 84 ff 75 04 31 c0 eb ad 4d
[  262.052103] RSP: 0018:a91bc7103a88 EFLAGS: 00010206
[  262.052105] RAX: 0003 RBX: a91bc7103af0 RCX: 
[  262.052106] RDX:  RSI:  RDI: 
[  262.052106] RBP: a91bc7103b88 R08:  R09: 
[  262.052107] R10:  R11:  R12: 0001
[  262.052107] R13: c1824780 R14: a91bc7103a88 R15: 
[  262.052108] FS:  7fd67c380040() GS:8fcc7f68() 
knlGS:
[  262.052109] CS:  0010 DS:  ES:  CR0: 80050033
[  262.052110] CR2: 7fff38c0e848 CR3: 00010a938006 CR4: 00770ee0
[  262.052111] PKRU: 5554
[  262.052112] Call Trace:
[  262.052113]  
[  262.052115]  ? __pfx_nl80211_pre_doit+0x10/0x10 [cfg80211]
[  262.052146]  ? __pfx_nl80211_get_wiphy+0x10/0x10 [cfg80211]
[  262.052171]  ? __pfx_nl80211_post_doit+0x10/0x10 [cfg80211]
[  262.052195]  ? __pfx_nl80211_dump_wiphy+0x10/0x10 [cfg80211]
[  262.052217]  ? __pfx_nl80211_dump_wiphy_done+0x10/0x10 [cfg80211]
[  262.052238]  genl_register_family+0x29/0x200
[  262.052240]  ? rtnl_unlock+0xe/0x20
[  262.052244]  nl80211_init+0x16/0xc50 [cfg80211]
[  262.052267]  __init_backport+0x78/0xf0 [cfg80211]
[  262.052288]  ? __pfx_init_module+0x10/0x10 [cfg80211

[Kernel-packages] [Bug 1874241] Re: iwlwifi intel ax201 crashing on intel nuc10i7fnh

2023-03-28 Thread David Roche
I'm on the latest kernel for 22.04 and I still can't get the wifi driver
to load; I keep seeing the following:

[ 3.837973] audit: type=1400 audit(1677506792.544:10): apparmor="STATUS" 
operation="profile_load" profile="unconfined" name="man_gr
off" pid=522 comm="apparmor_parser"
[ 3.960935] iwlwifi :00:14.3: CSR_RESET = 0x10
[ 3.960965] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x0
[ 3.961034] iwlwifi :00:14.3: value [iter 0]: 0x
[ 3.961098] iwlwifi :00:14.3: value [iter 1]: 0x
[ 3.961162] iwlwifi :00:14.3: value [iter 2]: 0x
[ 3.961222] iwlwifi :00:14.3: value [iter 3]: 0x
[ 3.961291] iwlwifi :00:14.3: value [iter 4]: 0x
[ 3.961354] iwlwifi :00:14.3: value [iter 5]: 0x
[ 3.961412] iwlwifi :00:14.3: value [iter 6]: 0x
[ 3.961468] iwlwifi :00:14.3: value [iter 7]: 0x
[ 3.961533] iwlwifi :00:14.3: value [iter 8]: 0x
[ 3.961584] iwlwifi :00:14.3: value [iter 9]: 0x
[ 3.961633] iwlwifi :00:14.3: value [iter 10]: 0x
[ 3.961683] iwlwifi :00:14.3: value [iter 11]: 0x
[ 3.961741] iwlwifi :00:14.3: value [iter 12]: 0x
[ 3.961790] iwlwifi :00:14.3: value [iter 13]: 0x
[ 3.961859] iwlwifi :00:14.3: value [iter 14]: 0x
[ 3.961879] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x1
[ 3.961949] iwlwifi :00:14.3: value [iter 0]: 0x
[ 3.962012] iwlwifi :00:14.3: value [iter 1]: 0x
[ 3.962073] iwlwifi :00:14.3: value [iter 2]: 0x
[ 3.962133] iwlwifi :00:14.3: value [iter 3]: 0x
[ 3.962213] iwlwifi :00:14.3: value [iter 4]: 0x
[ 3.962279] iwlwifi :00:14.3: value [iter 5]: 0x
[ 3.962339] iwlwifi :00:14.3: value [iter 6]: 0x
[ 3.962399] iwlwifi :00:14.3: value [iter 7]: 0x
[ 3.962466] iwlwifi :00:14.3: value [iter 8]: 0x
[ 3.962528] iwlwifi :00:14.3: value [iter 9]: 0x
[ 3.962590] iwlwifi :00:14.3: value [iter 10]: 0x
[ 3.962655] iwlwifi :00:14.3: value [iter 11]: 0x
[ 3.962723] iwlwifi :00:14.3: value [iter 12]: 0x
[ 3.962786] iwlwifi :00:14.3: value [iter 13]: 0x
[ 3.962842] iwlwifi :00:14.3: value [iter 14]: 0x
[ 3.962853] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x6
[ 3.962906] iwlwifi :00:14.3: value [iter 0]: 0x
[ 3.962971] iwlwifi :00:14.3: value [iter 1]: 0x
[ 3.963028] iwlwifi :00:14.3: value [iter 2]: 0x
[ 3.963074] iwlwifi :00:14.3: value [iter 3]: 0x
[ 3.963488] iwlwifi :00:14.3: value [iter 4]: 0x
[ 3.963546] iwlwifi :00:14.3: value [iter 5]: 0x
[ 3.963595] iwlwifi :00:14.3: value [iter 6]: 0x
[ 3.963643] iwlwifi :00:14.3: value [iter 7]: 0x
[ 3.963695] iwlwifi :00:14.3: value [iter 8]: 0x
[ 3.963771] iwlwifi :00:14.3: value [iter 9]: 0x
[ 3.963829] iwlwifi :00:14.3: value [iter 10]: 0x
[ 3.963891] iwlwifi :00:14.3: value [iter 11]: 0x
[ 3.963951] iwlwifi :00:14.3: value [iter 12]: 0x
[ 3.964019] iwlwifi :00:14.3: value [iter 13]: 0x
[ 3.964089] iwlwifi :00:14.3: value [iter 14]: 0x
[ 3.964118] iwlwifi :00:14.3: Host monitor block 0x22 vector 0x0
[ 3.964186] iwlwifi :00:14.3: value [iter 0]: 0x
[ 3.964237] iwlwifi: probe of :00:14.3 failed with error -110
[ 3.974367] AVX2 version of gcm_enc/dec engaged.
[ 3.974401] AES CTR mode by8 optimization enabled

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-firmware in Ubuntu.
https://bugs.launchpad.net/bugs/1874241

Title:
  iwlwifi intel ax201 crashing on intel nuc10i7fnh

Status in linux-firmware package in Ubuntu:
  Incomplete

Bug description:
  The Intel iwlwifi firmware will crash when you insert a device into
  the Thunderbolt 3 port, and this will be triggered probabilistically.

  wifi card model:Intel(R) Wi-Fi 6 AX201 160MHz, REV=0x354
  Firmware version: TLV_FW_FSEQ_VERSION: FSEQ Version: 43.2.23.17
  Linux-firmware version: 1.187
  thunderbolt3 chip model: Intel Corporation JHL7540 Thunderbolt 3 Bridge 
[Titan Ridge 2C 2018] (rev 06)

  dmesg output:
  [  103.777908] pcieport :00:1c.0: PME: Spurious native interrupt!
  [  103.777918] pcieport :00:1c.0: PME: Spurious native interrupt!
  [  104.118148] usb 4-1: new SuperSpeed Gen 1 USB device number 2 using 
xhci_hcd
  [  104.147184] usb 4-1: New USB device found, idVendor=05e3, idProduct=0749, 
bcdDevice=15.32
  [  104.147190] usb 4-1: New USB device strings: Mfr=3, Product=4, 
SerialNumber=2
  [  104.147194] usb 4-1: Product: USB3.0 Card Reader
  [  104.147197] usb 4-1: Manufacturer: Generic
  [  104.147199] usb 4-1: SerialNumber: 1532
  [  104.183374] usb-storage 4-1:1.0: USB Mass Storage device detected
  [  104.183952] scsi host3: usb-storage 4-1:1.

[Kernel-packages] [Bug 1874241] Re: iwlwifi intel ax201 crashing on intel nuc10i7fnh

2023-02-28 Thread David Roche
Can we get confirmation on whether this issue will be fixed at some point,
or can we get a workaround?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-firmware in Ubuntu.
https://bugs.launchpad.net/bugs/1874241

Title:
  iwlwifi intel ax201 crashing on intel nuc10i7fnh

Status in linux-firmware package in Ubuntu:
  Confirmed

Bug description:
  The Intel iwlwifi firmware will crash when you insert a device into
  the Thunderbolt 3 port, and this will be triggered probabilistically.

  wifi card model:Intel(R) Wi-Fi 6 AX201 160MHz, REV=0x354
  Firmware version: TLV_FW_FSEQ_VERSION: FSEQ Version: 43.2.23.17
  Linux-firmware version: 1.187
  thunderbolt3 chip model: Intel Corporation JHL7540 Thunderbolt 3 Bridge 
[Titan Ridge 2C 2018] (rev 06)

  dmesg output:
  [  103.777908] pcieport :00:1c.0: PME: Spurious native interrupt!
  [  103.777918] pcieport :00:1c.0: PME: Spurious native interrupt!
  [  104.118148] usb 4-1: new SuperSpeed Gen 1 USB device number 2 using 
xhci_hcd
  [  104.147184] usb 4-1: New USB device found, idVendor=05e3, idProduct=0749, 
bcdDevice=15.32
  [  104.147190] usb 4-1: New USB device strings: Mfr=3, Product=4, 
SerialNumber=2
  [  104.147194] usb 4-1: Product: USB3.0 Card Reader
  [  104.147197] usb 4-1: Manufacturer: Generic
  [  104.147199] usb 4-1: SerialNumber: 1532
  [  104.183374] usb-storage 4-1:1.0: USB Mass Storage device detected
  [  104.183952] scsi host3: usb-storage 4-1:1.0
  [  104.184172] usbcore: registered new interface driver usb-storage
  [  104.187897] usbcore: registered new interface driver uas
  [  105.217035] scsi 3:0:0:0: Direct-Access Generic  STORAGE DEVICE   1532 
PQ: 0 ANSI: 6
  [  105.217792] sd 3:0:0:0: Attached scsi generic sg1 type 0
  [  105.233978] sd 3:0:0:0: [sdb] Attached SCSI removable disk
  [  109.998995] iwlwifi :00:14.3: Microcode SW error detected. Restarting 
0x0.
  [  109.999102] iwlwifi :00:14.3: Start IWL Error Log Dump:
  [  109.999111] iwlwifi :00:14.3: Status: 0x0040, count: 6
  [  109.999119] iwlwifi :00:14.3: Loaded firmware version: 48.4fa0041f.0
  [  109.999128] iwlwifi :00:14.3: 0x4435 | ADVANCED_SYSASSERT  
  [  109.999135] iwlwifi :00:14.3: 0x008026F4 | trm_hw_status0
  [  109.999142] iwlwifi :00:14.3: 0x | trm_hw_status1
  [  109.999148] iwlwifi :00:14.3: 0x004CA228 | branchlink2
  [  109.999154] iwlwifi :00:14.3: 0x0E26 | interruptlink1
  [  109.999161] iwlwifi :00:14.3: 0x0E26 | interruptlink2
  [  109.999168] iwlwifi :00:14.3: 0x000161A0 | data1
  [  109.999174] iwlwifi :00:14.3: 0xDEADBEEF | data2
  [  109.999180] iwlwifi :00:14.3: 0xDEADBEEF | data3
  [  109.999186] iwlwifi :00:14.3: 0xF90167B5 | beacon time
  [  109.999192] iwlwifi :00:14.3: 0x51938809 | tsf low
  [  109.999199] iwlwifi :00:14.3: 0x0010 | tsf hi
  [  109.999205] iwlwifi :00:14.3: 0x | time gp1
  [  109.999211] iwlwifi :00:14.3: 0x064A1430 | time gp2
  [  109.999217] iwlwifi :00:14.3: 0x0001 | uCode revision type
  [  109.999224] iwlwifi :00:14.3: 0x0030 | uCode version major
  [  109.999231] iwlwifi :00:14.3: 0x4FA0041F | uCode version minor
  [  109.999239] iwlwifi :00:14.3: 0x0351 | hw version
  [  109.999245] iwlwifi :00:14.3: 0x00C89004 | board version
  [  109.999252] iwlwifi :00:14.3: 0x069E001C | hcmd
  [  109.999259] iwlwifi :00:14.3: 0x8002 | isr0
  [  109.999265] iwlwifi :00:14.3: 0x0100 | isr1
  [  109.999271] iwlwifi :00:14.3: 0x08F2 | isr2
  [  109.999278] iwlwifi :00:14.3: 0x04C1FFCC | isr3
  [  109.999284] iwlwifi :00:14.3: 0x | isr4
  [  109.999290] iwlwifi :00:14.3: 0x069D001C | last cmd Id
  [  109.999297] iwlwifi :00:14.3: 0x8B70 | wait_event
  [  109.999304] iwlwifi :00:14.3: 0x4208 | l2p_control
  [  109.999310] iwlwifi :00:14.3: 0x2020 | l2p_duration
  [  109.999317] iwlwifi :00:14.3: 0x033F | l2p_mhvalid
  [  109.999324] iwlwifi :00:14.3: 0x00E6 | l2p_addr_match
  [  109.999331] iwlwifi :00:14.3: 0x0009 | lmpm_pmg_sel
  [  109.999337] iwlwifi :00:14.3: 0x | timestamp
  [  109.999344] iwlwifi :00:14.3: 0xB8DC | flow_handler
  [  109.999391] iwlwifi :00:14.3: Start IWL Error Log Dump:
  [  109.999398] iwlwifi :00:14.3: Status: 0x0040, count: 7
  [  109.999406] iwlwifi :00:14.3: 0x2070 | NMI_INTERRUPT_LMAC_FATAL
  [  109.999413] iwlwifi :00:14.3: 0x | umac branchlink1
  [  109.999420] iwlwifi :00:14.3: 0xC008D49C | umac branchlink2
  [  109.999427] iwlwifi :00:14.3: 0x8048DBD2 | umac interruptlink1
  [  109.999434] iwlwifi :00:14.3: 0x8048DBD2 | umac interruptlink2
  [  109.999441] iwlwifi :00:14.3: 0x0400 | umac data1
  [  109.999447] iwlwifi :00:14.3: 0x8048DBD2 | umac data2
  [  109.999454] iwlwifi :00:14.3: 0x | umac data3
  [ 

[Kernel-packages] [Bug 2008706] Re: Intel Wi-Fi 6 AX201 failing in 22.04.2 LTS

2023-02-27 Thread David Roche
This has been going on for a while, but since the last linux-firmware
update the AX201 card is not working at all.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-5.19 in Ubuntu.
https://bugs.launchpad.net/bugs/2008706

Title:
  Intel Wi-Fi 6 AX201 failing in 22.04.2 LTS

Status in linux-signed-hwe-5.19 package in Ubuntu:
  New

Bug description:
  [3.837973] audit: type=1400 audit(1677506792.544:10): apparmor="STATUS" 
operation="profile_load" profile="unconfined" name="man_gr
  off" pid=522 comm="apparmor_parser"
  [3.960935] iwlwifi :00:14.3: CSR_RESET = 0x10
  [3.960965] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x0
  [3.961034] iwlwifi :00:14.3: value [iter 0]: 0x
  [3.961098] iwlwifi :00:14.3: value [iter 1]: 0x
  [3.961162] iwlwifi :00:14.3: value [iter 2]: 0x
  [3.961222] iwlwifi :00:14.3: value [iter 3]: 0x
  [3.961291] iwlwifi :00:14.3: value [iter 4]: 0x
  [3.961354] iwlwifi :00:14.3: value [iter 5]: 0x
  [3.961412] iwlwifi :00:14.3: value [iter 6]: 0x
  [3.961468] iwlwifi :00:14.3: value [iter 7]: 0x
  [3.961533] iwlwifi :00:14.3: value [iter 8]: 0x
  [3.961584] iwlwifi :00:14.3: value [iter 9]: 0x
  [3.961633] iwlwifi :00:14.3: value [iter 10]: 0x
  [3.961683] iwlwifi :00:14.3: value [iter 11]: 0x
  [3.961741] iwlwifi :00:14.3: value [iter 12]: 0x
  [3.961790] iwlwifi :00:14.3: value [iter 13]: 0x
  [3.961859] iwlwifi :00:14.3: value [iter 14]: 0x
  [3.961879] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x1
  [3.961949] iwlwifi :00:14.3: value [iter 0]: 0x
  [3.962012] iwlwifi :00:14.3: value [iter 1]: 0x
  [3.962073] iwlwifi :00:14.3: value [iter 2]: 0x
  [3.962133] iwlwifi :00:14.3: value [iter 3]: 0x
  [3.962213] iwlwifi :00:14.3: value [iter 4]: 0x
  [3.962279] iwlwifi :00:14.3: value [iter 5]: 0x
  [3.962339] iwlwifi :00:14.3: value [iter 6]: 0x
  [3.962399] iwlwifi :00:14.3: value [iter 7]: 0x
  [3.962466] iwlwifi :00:14.3: value [iter 8]: 0x
  [3.962528] iwlwifi :00:14.3: value [iter 9]: 0x
  [3.962590] iwlwifi :00:14.3: value [iter 10]: 0x
  [3.962655] iwlwifi :00:14.3: value [iter 11]: 0x
  [3.962723] iwlwifi :00:14.3: value [iter 12]: 0x
  [3.962786] iwlwifi :00:14.3: value [iter 13]: 0x
  [3.962842] iwlwifi :00:14.3: value [iter 14]: 0x
  [3.962853] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x6
  [3.962906] iwlwifi :00:14.3: value [iter 0]: 0x
  [3.962971] iwlwifi :00:14.3: value [iter 1]: 0x
  [3.963028] iwlwifi :00:14.3: value [iter 2]: 0x
  [3.963074] iwlwifi :00:14.3: value [iter 3]: 0x
  [3.963488] iwlwifi :00:14.3: value [iter 4]: 0x
  [3.963546] iwlwifi :00:14.3: value [iter 5]: 0x
  [3.963595] iwlwifi :00:14.3: value [iter 6]: 0x
  [3.963643] iwlwifi :00:14.3: value [iter 7]: 0x
  [3.963695] iwlwifi :00:14.3: value [iter 8]: 0x
  [3.963771] iwlwifi :00:14.3: value [iter 9]: 0x
  [3.963829] iwlwifi :00:14.3: value [iter 10]: 0x
  [3.963891] iwlwifi :00:14.3: value [iter 11]: 0x
  [3.963951] iwlwifi :00:14.3: value [iter 12]: 0x
  [3.964019] iwlwifi :00:14.3: value [iter 13]: 0x
  [3.964089] iwlwifi :00:14.3: value [iter 14]: 0x
  [3.964118] iwlwifi :00:14.3: Host monitor block 0x22 vector 0x0
  [3.964186] iwlwifi :00:14.3: value [iter 0]: 0x
  [3.964237] iwlwifi: probe of :00:14.3 failed with error -110
  [3.974367] AVX2 version of gcm_enc/dec engaged.
  [3.974401] AES CTR mode by8 optimization enabled

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.19.0-32-generic 5.19.0-32.33~22.04.1
  ProcVersionSignature: Ubuntu 5.19.0-32.33~22.04.1-generic 5.19.17
  Uname: Linux 5.19.0-32-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  Date: Mon Feb 27 14:11:18 2023
  InstallationDate: Installed on 2022-06-01 (271 days ago)
  InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 
(20220419)
  SourcePackage: linux-signed-hwe-5.19
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notif

[Kernel-packages] [Bug 2008706] [NEW] Intel Wi-Fi 6 AX201 failing in 22.04.2 LTS

2023-02-27 Thread David Roche
Public bug reported:

[3.837973] audit: type=1400 audit(1677506792.544:10): apparmor="STATUS" 
operation="profile_load" profile="unconfined" name="man_gr
off" pid=522 comm="apparmor_parser"
[3.960935] iwlwifi :00:14.3: CSR_RESET = 0x10
[3.960965] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x0
[3.961034] iwlwifi :00:14.3: value [iter 0]: 0x
[3.961098] iwlwifi :00:14.3: value [iter 1]: 0x
[3.961162] iwlwifi :00:14.3: value [iter 2]: 0x
[3.961222] iwlwifi :00:14.3: value [iter 3]: 0x
[3.961291] iwlwifi :00:14.3: value [iter 4]: 0x
[3.961354] iwlwifi :00:14.3: value [iter 5]: 0x
[3.961412] iwlwifi :00:14.3: value [iter 6]: 0x
[3.961468] iwlwifi :00:14.3: value [iter 7]: 0x
[3.961533] iwlwifi :00:14.3: value [iter 8]: 0x
[3.961584] iwlwifi :00:14.3: value [iter 9]: 0x
[3.961633] iwlwifi :00:14.3: value [iter 10]: 0x
[3.961683] iwlwifi :00:14.3: value [iter 11]: 0x
[3.961741] iwlwifi :00:14.3: value [iter 12]: 0x
[3.961790] iwlwifi :00:14.3: value [iter 13]: 0x
[3.961859] iwlwifi :00:14.3: value [iter 14]: 0x
[3.961879] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x1
[3.961949] iwlwifi :00:14.3: value [iter 0]: 0x
[3.962012] iwlwifi :00:14.3: value [iter 1]: 0x
[3.962073] iwlwifi :00:14.3: value [iter 2]: 0x
[3.962133] iwlwifi :00:14.3: value [iter 3]: 0x
[3.962213] iwlwifi :00:14.3: value [iter 4]: 0x
[3.962279] iwlwifi :00:14.3: value [iter 5]: 0x
[3.962339] iwlwifi :00:14.3: value [iter 6]: 0x
[3.962399] iwlwifi :00:14.3: value [iter 7]: 0x
[3.962466] iwlwifi :00:14.3: value [iter 8]: 0x
[3.962528] iwlwifi :00:14.3: value [iter 9]: 0x
[3.962590] iwlwifi :00:14.3: value [iter 10]: 0x
[3.962655] iwlwifi :00:14.3: value [iter 11]: 0x
[3.962723] iwlwifi :00:14.3: value [iter 12]: 0x
[3.962786] iwlwifi :00:14.3: value [iter 13]: 0x
[3.962842] iwlwifi :00:14.3: value [iter 14]: 0x
[3.962853] iwlwifi :00:14.3: Host monitor block 0x0 vector 0x6
[3.962906] iwlwifi :00:14.3: value [iter 0]: 0x
[3.962971] iwlwifi :00:14.3: value [iter 1]: 0x
[3.963028] iwlwifi :00:14.3: value [iter 2]: 0x
[3.963074] iwlwifi :00:14.3: value [iter 3]: 0x
[3.963488] iwlwifi :00:14.3: value [iter 4]: 0x
[3.963546] iwlwifi :00:14.3: value [iter 5]: 0x
[3.963595] iwlwifi :00:14.3: value [iter 6]: 0x
[3.963643] iwlwifi :00:14.3: value [iter 7]: 0x
[3.963695] iwlwifi :00:14.3: value [iter 8]: 0x
[3.963771] iwlwifi :00:14.3: value [iter 9]: 0x
[3.963829] iwlwifi :00:14.3: value [iter 10]: 0x
[3.963891] iwlwifi :00:14.3: value [iter 11]: 0x
[3.963951] iwlwifi :00:14.3: value [iter 12]: 0x
[3.964019] iwlwifi :00:14.3: value [iter 13]: 0x
[3.964089] iwlwifi :00:14.3: value [iter 14]: 0x
[3.964118] iwlwifi :00:14.3: Host monitor block 0x22 vector 0x0
[3.964186] iwlwifi :00:14.3: value [iter 0]: 0x
[3.964237] iwlwifi: probe of :00:14.3 failed with error -110
[3.974367] AVX2 version of gcm_enc/dec engaged.
[3.974401] AES CTR mode by8 optimization enabled

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: linux-image-5.19.0-32-generic 5.19.0-32.33~22.04.1
ProcVersionSignature: Ubuntu 5.19.0-32.33~22.04.1-generic 5.19.17
Uname: Linux 5.19.0-32-generic x86_64
ApportVersion: 2.20.11-0ubuntu82.3
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Mon Feb 27 14:11:18 2023
InstallationDate: Installed on 2022-06-01 (271 days ago)
InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 (20220419)
SourcePackage: linux-signed-hwe-5.19
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux-signed-hwe-5.19 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug jammy

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-5.19 in Ubuntu.
https://bugs.launchpad.net/bugs/2008706

Title:
  Intel Wi-Fi 6 AX201 failing in 22.04.2 LTS

Status in linux-signed-hwe-5.19 package in Ubuntu:
  New

Bug description:
  [3.837973] audit: type=1400 audit(1677506792.544:10): apparmor="STATUS" 
operation="profile_load" profile="unconfined

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-17 Thread Daniel Roche
- 5.16.0-051600-generic  = CRASH also

Should I test the 5.16-rc versions?

Best regards.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
particular:
  - older generation: Intel Xeon E5-2640 v4 2.40GHz
  - newer generation: Intel Xeon Gold 5215 2.50GHz

  I recently re-installed all these servers with Ubuntu Server 22.04.1
  and since then, when I live migrate a VM from a newer generation processor to an
older generation processor,
  the migrated guest freezes without generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU) it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  present the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problem.
  Migration between 2 servers with the same generation CPU also works perfectly.

  The CPU configuration of the guest is generic:

  qemu64
  
  
  
  


  I have tried several (almost all) other virtual CPU configurations, always
with the same problem.
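
  As a diagnostic sketch (the commands are illustrative and reuse the host and
  guest names from above), the guest CPU definition can be checked against the
  older host with virsh:

  # run on the older destination host
  virsh -c qemu+ssh://root@new_server/system dumpxml guest_name > guest.xml
  awk '/<cpu[ >]/,/<\/cpu>/' guest.xml > cpu.xml   # extract just the <cpu> element
  virsh cpu-compare cpu.xml                        # identical / superset / incompatible with this host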
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-17 Thread Daniel Roche
Hello Again,

I have done some tests with mainline kernels:

- 5.9.16-050916-generic   = OK 
- 5.12.19-051219-generic  = OK
- 5.15.94-051594-generic  = OK
- 5.16.20-051620-generic  = CRASH 

I will do some more tests with intermediate 5.16.xx versions.
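
For reference, a sketch of how a mainline test kernel is typically installed
on Ubuntu for this kind of bisection (the version number and file names below
are placeholders, not the exact builds tested above):

  ver=5.16.10   # placeholder version, only to illustrate the procedure
  # Mainline builds are published under kernel.ubuntu.com/~kernel-ppa/mainline/
  wget -r -l1 -nd -A '*_amd64.deb' \
    "https://kernel.ubuntu.com/~kernel-ppa/mainline/v${ver}/amd64/"
  sudo dpkg -i linux-modules-*.deb linux-image-unsigned-*.deb
  sudo reboot   # then select the new kernel and re-run the migration test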

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/de

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-16 Thread Daniel Roche
So far, kernel 5.4.0-139-generic (the last one from Ubuntu 20.04) does not
have the problem.
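
When the frozen state is reproduced, it can also help to capture what the
destination host and QEMU report at that moment. A small sketch of the kind
of checks that are useful here (assumed commands, reusing the host and guest
names from the description above):

  # On the destination (older) host, after a migration that ends in a freeze:
  uname -r                                   # kernel the host is actually running
  virsh domstate guest_name                  # libvirt's view of the guest state
  virsh qemu-monitor-command guest_name --hmp 'info status'   # QEMU's own view
  dmesg | tail -n 50                         # recent KVM/kernel messages, if any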

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (

[Kernel-packages] [Bug 2003226] WifiSyslog.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647841/+files/WifiSyslog.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.0

[Kernel-packages] [Bug 2003226] UdevDb.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/2003226/+attachment/5647840/+files/UdevDb.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  Insta

[Kernel-packages] [Bug 2003226] ProcModules.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647839/+files/ProcModules.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22

[Kernel-packages] [Bug 2003226] ProcInterrupts.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647838/+files/ProcInterrupts.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubu

[Kernel-packages] [Bug 2003226] ProcCpuinfoMinimal.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647837/+files/ProcCpuinfoMinimal.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRele

[Kernel-packages] [Bug 2003226] ProcCpuinfo.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcCpuinfo.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647836/+files/ProcCpuinfo.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22

[Kernel-packages] [Bug 2003226] Lsusb-v.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lsusb-v.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647835/+files/Lsusb-v.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  In

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-16 Thread Daniel Roche
apport-collect 2003226 has been run for both systems.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Ser

[Kernel-packages] [Bug 2003226] acpidump.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647842/+files/acpidump.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  exhibit the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is the generic qemu64 model.

  I have tried several (almost all) other virtual CPU configurations, always
  with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  

[Kernel-packages] [Bug 2003226] Lspci.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/2003226/+attachment/5647833/+files/Lspci.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation: Intel Xeon E5-2640 v4 2.40GHz
  - newer generation: Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live-migrate a VM from a newer-generation
  processor to an older-generation processor, the migrated guest freezes
  without generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did
  not show the problem.

  The live migration is done with the following command (issued from a
  third server playing the role of a 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problem.
  Migration between two servers with the same CPU generation also works
  perfectly.
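
  A minimal diagnostic sketch (not from the original report; host and
  guest names are the same placeholders used above): libvirt's
  cpu-compare can tell whether the older destination host is able to
  provide the CPU features the guest's definition asks for. 'dumpxml
  --xpath' needs a reasonably recent libvirt; otherwise copy the <cpu>
  element out of the full domain XML by hand.

  # Save the guest's <cpu> definition as seen on the source host ...
  virsh -c qemu+ssh://root@new_server/system dumpxml guest_name \
        --xpath '/domain/cpu' > guest_cpu.xml
  # ... and ask the destination host whether its CPU is identical to or
  # a superset of it; an "incompatible" result points at a feature
  # mismatch between the two generations.
  virsh -c qemu+ssh://root@old_server/system cpu-compare guest_cpu.xml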

  The guest CPU configuration is generic:

  qemu64

  I have tried several (almost all) of the other virtual CPU
  configurations, always with the same problem.
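
  A minimal workaround sketch (again not from the original report, same
  host-name placeholders): instead of relying on qemu64, compute a named
  CPU model that both hosts support with cpu-baseline and pin the guest
  to it, so that migration in either direction only exposes features
  both generations actually have.

  # cpu-baseline extracts every <cpu> element it finds in the file, so
  # the concatenated 'virsh capabilities' dumps of both hosts can be fed
  # to it directly.
  virsh -c qemu+ssh://root@old_server/system capabilities > old_caps.xml
  virsh -c qemu+ssh://root@new_server/system capabilities > new_caps.xml
  cat old_caps.xml new_caps.xml > all_caps.xml
  virsh cpu-baseline all_caps.xml > baseline_cpu.xml

  # Then pin the guest to the reported model via 'virsh edit guest_name';
  # the model name below is only an illustration, use whatever
  # cpu-baseline printed:
  #   <cpu mode='custom' match='exact'>
  #     <model fallback='forbid'>Broadwell-noTSX-IBRS</model>
  #   </cpu>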
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU

[Kernel-packages] [Bug 2003226] Lspci-vt.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647834/+files/Lspci-vt.txt

[Kernel-packages] [Bug 2003226] KernLog.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "KernLog.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647832/+files/KernLog.txt

[Kernel-packages] [Bug 2003226] ProcCpuinfoMinimal.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647825/+files/ProcCpuinfoMinimal.txt

[Kernel-packages] [Bug 2003226] ProcModules.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647827/+files/ProcModules.txt

[Kernel-packages] [Bug 2003226] CurrentDmesg.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647831/+files/CurrentDmesg.txt

[Kernel-packages] [Bug 2003226] ProcCpuinfo.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcCpuinfo.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647824/+files/ProcCpuinfo.txt

[Kernel-packages] [Bug 2003226] WifiSyslog.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647829/+files/WifiSyslog.txt

[Kernel-packages] [Bug 2003226] acpidump.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647830/+files/acpidump.txt

** Description changed:

+ --- 
+ ProblemType: Bug
+ AlsaDevices:
+  total 0
+  crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
+  crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
+ AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
+ ApportVersion: 2.20.11-0ubuntu82.3
+ Architecture: amd64
+ ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
+ AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
+ CRDA: N/A
+ CasperMD5CheckResult: pass
+ DistroRelease: Ubuntu 22.04
+ InstallationDate: Installed on 2023-02-15 (0 days ago)
+ InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
+ IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
+ Lsusb:
+  Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
+  Bus 001 Device 004: ID 04b3

[Kernel-packages] [Bug 2003226] ProcInterrupts.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647826/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 2003226] UdevDb.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/2003226/+attachment/5647828/+files/UdevDb.txt

[Kernel-packages] [Bug 2003226] Lsusb-v.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lsusb-v.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647823/+files/Lsusb-v.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  i have several libvirt hosts servers with differents CPU generation, in 
particular :
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz

  i recently re-install all this servers into ubuntu server 22.04.1
  and since, when i live migrate a VM from a new generation processor to older 
generation processor
  the migrated guest freeze without generating any error logs.

  if i migrate the opposite way  ( older cpu to newer cpu ) it works
  perfectly.

  previous version of hosts ( same hardware on ubuntu 16.04 ) did not
  present the problem

  the live migration is done with the following command ( issued from a
  third server playing the role of 'virtual-center' ) :

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  this one freeze the guest

  while the opposite migration :

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problem
  migrate between  2 servers with same generation CPU also works perfectly 

  the cpu configuration of guest is generic : 

  qemu64
  
  
  
  


  i have tried several ( almost all ) other virtual cpu configuration , always 
with the same problem.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
   crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu82.3
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2023-02-15 (0 days ago)
  InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2530 M2
  Package: linux (not installed)
  PciMultimedia:
   
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=fr_FR.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
  ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-60-generic N/A
   linux-backports-modules-5.15.0-60-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.10
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  jammy
  Uname: Linux 5.15.0-60-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 09/29/2016
  dmi.bios.release: 1.10
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
  dmi.board.name: D3279-B1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3279-B12 WGS03 GS02
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2530M2R1
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2530 M2
  dmi.product.sku: ABN:K1565-V101-236
  dmi.product.version: GS01
  dmi.sys.vendor: FUJITSU

[Kernel-packages] [Bug 2003226] Lsusb.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/2003226/+attachment/5647821/+files/Lsusb.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] Lspci.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/2003226/+attachment/5647819/+files/Lspci.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] Lsusb-t.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lsusb-t.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647822/+files/Lsusb-t.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] Lspci-vt.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647820/+files/Lspci-vt.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] KernLog.txt

2023-02-16 Thread Daniel Roche
apport information

** Attachment added: "KernLog.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647818/+files/KernLog.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-16 Thread Daniel Roche
apport information

** Tags added: apport-collected jammy

** Description changed:

  Hi,
  
  i have several libvirt hosts servers with differents CPU generation, in 
particular :
  - older generation : Intel Xeon E5-2640 v4 2.40GHz
  - newer generation : Intel Xeon Gold 5215 2.50GHz
  
  i recently re-install all this servers into ubuntu server 22.04.1
  and since, when i live migrate a VM from a new generation processor to older 
generation processor
  the migrated guest freeze without generating any error logs.
  
  if i migrate the opposite way  ( older cpu to newer cpu ) it works
  perfectly.
  
  previous version of hosts ( same hardware on ubuntu 16.04 ) did not
  present the problem
  
  the live migration is done with the following command ( issued from a
  third server playing the role of 'virtual-center' ) :
  
  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system
  
  this one freeze the guest
  
  while the opposite migration :
  
  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system
  
  works without problem
  migrate between  2 servers with same generation CPU also works perfectly 
  
  the cpu configuration of guest is generic : 

  qemu64
  
  
  
  

  
- i have tried several ( almost all ) other virtual cpu configuration ,
- always with the same problem.
+ i have tried several ( almost all ) other virtual cpu configuration , always 
with the same problem.
+ --- 
+ ProblemType: Bug
+ AlsaDevices:
+  total 0
+  crw-rw 1 root audio 116,  1 févr. 16 12:39 seq
+  crw-rw 1 root audio 116, 33 févr. 16 12:39 timer
+ AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
+ ApportVersion: 2.20.11-0ubuntu82.3
+ Architecture: amd64
+ ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
+ AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
+ CRDA: N/A
+ CasperMD5CheckResult: pass
+ DistroRelease: Ubuntu 22.04
+ InstallationDate: Installed on 2023-02-15 (0 days ago)
+ InstallationMedia: Ubuntu-Server 22.04.1 LTS "Jammy Jellyfish" - Release 
amd64 (20220809)
+ IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
+ MachineType: FUJITSU PRIMERGY RX2530 M2
+ Package: linux (not installed)
+ PciMultimedia:
+  
+ ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
+ ProcEnviron:
+  TERM=xterm-256color
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=fr_FR.UTF-8
+  SHELL=/bin/bash
+ ProcFB: 0 mgag200drmfb
+ ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-60-generic 
root=UUID=13e84e97-ad18-49ed-8050-c8f7293e5e7d ro
+ ProcVersionSignature: Ubuntu 5.15.0-60.66-generic 5.15.78
+ RelatedPackageVersions:
+  linux-restricted-modules-5.15.0-60-generic N/A
+  linux-backports-modules-5.15.0-60-generic  N/A
+  linux-firmware 20220329.git681281e4-0ubuntu3.10
+ RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
+ Tags:  jammy
+ Uname: Linux 5.15.0-60-generic x86_64
+ UpgradeStatus: No upgrade log present (probably fresh install)
+ UserGroups: N/A
+ _MarkForUpload: True
+ dmi.bios.date: 09/29/2016
+ dmi.bios.release: 1.10
+ dmi.bios.vendor: FUJITSU // American Megatrends Inc.
+ dmi.bios.version: V5.0.0.11 R1.10.0 for D3279-B1x
+ dmi.board.name: D3279-B1
+ dmi.board.vendor: FUJITSU
+ dmi.board.version: S26361-D3279-B12 WGS03 GS02
+ dmi.chassis.asset.tag: System Asset Tag
+ dmi.chassis.type: 23
+ dmi.chassis.vendor: FUJITSU
+ dmi.chassis.version: RX2530M2R1
+ dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.11R1.10.0forD3279-B1x:bd09/29/2016:br1.10:svnFUJITSU:pnPRIMERGYRX2530M2:pvrGS01:rvnFUJITSU:rnD3279-B1:rvrS26361-D3279-B12WGS03GS02:cvnFUJITSU:ct23:cvrRX2530M2R1:skuABNK1565-V101-236:
+ dmi.product.family: SERVER
+ dmi.product.name: PRIMERGY RX2530 M2
+ dmi.product.sku: ABN:K1565-V101-236
+ dmi.product.version: GS01
+ dmi.sys.vendor: FUJITSU

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/2003226/+attachment/5647817/+files/CurrentDmesg.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

[Kernel-packages] [Bug 2003226] Re: libvirt live migrate to a lower generation processor freeze the migrated vm

2023-02-16 Thread Daniel Roche
Hello again,

thank you for your response.
I plan to test different kernels, and I will post the apport-collect results
very soon.

Meanwhile, I can confirm some new information:

I re-installed the two systems with Ubuntu 22.04.1 (kernel 5.15.0-60-generic);
with this, I reproduce the problem every time.
I downgraded the kernel to 4.15.0-204 on both systems, without changing
anything else, and then the problem is gone; I have done dozens of virsh
migrations without any issue.

I guess this confirms the kernel issue.

I will come back soon with apport-collect results and more kernel tests.
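
To pick a specific installed kernel for a single boot without changing the
default, something like this should work (a sketch: the menu entry title is
illustrative, and grub-reboot only takes effect when GRUB_DEFAULT=saved is set
in /etc/default/grub):

dpkg --list 'linux-image-*' | grep ^ii     # list the kernels installed
sudo grub-reboot 'Advanced options for Ubuntu>Ubuntu, with Linux 4.15.0-204-generic'
sudo reboot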

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2003226

Title:
  libvirt live migrate to a lower generation processor freeze the
  migrated vm

Status in libvirt package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Hi,

  I have several libvirt host servers with different CPU generations, in
  particular:
  - older generation: Intel Xeon E5-2640 v4 2.40GHz
  - newer generation: Intel Xeon Gold 5215 2.50GHz

  I recently reinstalled all of these servers with Ubuntu Server 22.04.1,
  and since then, when I live migrate a VM from a newer-generation processor
  to an older-generation processor, the migrated guest freezes without
  generating any error logs.

  If I migrate the opposite way (older CPU to newer CPU), it works
  perfectly.

  The previous version of the hosts (same hardware on Ubuntu 16.04) did not
  have this problem.

  The live migration is done with the following command (issued from a
  third server playing the role of 'virtual-center'):

  virsh -c qemu+ssh://root@new_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@old_server/system

  This one freezes the guest,

  while the opposite migration:

  virsh -c qemu+ssh://root@old_server/system migrate --verbose --live
  --undefinesource --persistent --unsafe guest_name
  qemu+ssh://root@new_server/system

  works without problems.
  Migrating between two servers with the same generation of CPU also works
  perfectly.

  The CPU configuration of the guest is generic:

  qemu64

  I have tried several (almost all) other virtual CPU configurations,
  always with the same problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/2003226/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2022-06-01 Thread Didier Roche
** Changed in: grub2 (Ubuntu)
 Assignee: Didier Roche (didrocks) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi-disk setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - The initramfs crashes because there are several pools with the same name but
different IDs, and it imports the pools by name
  - The same problem exists in the systemd generator, which will try to import all
the rpools.

  == Original Description ==

  I had an Ubuntu old installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.
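
  For reference, importing the second pool by GUID under a different name looks
  roughly like this (the numeric id below is a placeholder; plain `zpool import`
  prints the real one):

  zpool import                                    # list importable pools and their ids
  zpool import -o readonly=on 1234567890123456789 rpool_old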

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875767] Re: When operating install/removal with apt, zed floods log and apparently crashes snapshoting

2022-06-01 Thread Didier Roche
** Changed in: zsys (Ubuntu)
 Assignee: Didier Roche (didrocks) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875767

Title:
  When operating install/removal with apt, zed floods log and apparently
  crashes snapshoting

Status in zfs-linux package in Ubuntu:
  Won't Fix
Status in zsys package in Ubuntu:
  Incomplete

Bug description:
  Hello!

  When I ran an install, it behaved like this:

  ERROR rpc error: code = DeadlineExceeded desc = context deadline exceeded 
  ... etc apt messages ...
  A processar 'triggers' para libc-bin (2.31-0ubuntu9) ...
  ERROR rpc error: code = Unavailable desc = transport is closing 

  The log gets flooded with the following message:

  abr 28 20:41:48 manauara zed[512257]: eid=10429 class=history_event pool_guid=0x7E8B0F177C4DD12C
  abr 28 20:41:49 manauara zed[508106]: Missed 1 events

  And the machine load stays high for an incredibly long time. The workaround
  is:

  systemctl restart zsysd
  systemctl restart zed

  The system also gets a bit slow and the fans spin up for a while (because of
  the load).

  This is a fresh install of ubuntu 20.04 with ZFS on SATA SSD.

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu12
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Tue Apr 28 20:49:14 2020
  InstallationDate: Installed on 2020-04-27 (1 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
   LANGUAGE=pt_BR:pt:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=pt_BR.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875767/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1946808] Re: zsys fail during reboot

2021-10-12 Thread Didier Roche
The issue is in zfs-linux, where the merge from Debian
(http://launchpadlibrarian.net/535966758/zfs-linux_2.0.2-1ubuntu5_2.0.3-8ubuntu1.diff.gz)
once again reverted some of the fixes and rolled back the patch to an earlier
version. The fix was already reverted erroneously in hirsute during the Debian
merge, and we reintroduced it in
https://launchpad.net/ubuntu/+source/zfs-linux/2.0.2-1ubuntu3.

Colin, do you mind having a look and reintroducing the patch as a 0-days
SRU (the first time we introduced it was in
https://launchpad.net/ubuntu/+source/zfs-linux/0.8.4-1ubuntu14)?

Can you check that you haven't mistakenly reverted other parts of the patch, and
fix this one?
As this has now happened in two consecutive releases where the Debian merge does
not seem to start from the latest version in Ubuntu but reintroduces an older
version of the patch, can you have a look at the local setup issue you may have
when doing the merges?

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Summary changed:

- zsys fail during reboot
+ zsys fail reverting to a previous snapshot on reboot

** Summary changed:

- zsys fail reverting to a previous snapshot on reboot
+ zfs fails reverting to a previous snapshot on reboot when selected on grub

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1946808

Title:
  zfs fails reverting to a previous snapshot on reboot when selected on
  grub

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  After creating a snapshot with: zsysctl save 211012-linux13-19 -s
  the reboot fails as shown in the screenshot; the other screenshot shows the
result of the snapshot.

  ProblemType: Bug
  DistroRelease: Ubuntu 21.10
  Package: zsys 0.5.8
  ProcVersionSignature: Ubuntu 5.13.0-19.19-generic 5.13.14
  Uname: Linux 5.13.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu70
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: XFCE
  Date: Tue Oct 12 19:11:43 2021
  InstallationDate: Installed on 2021-10-12 (0 days ago)
  InstallationMedia: Xubuntu 21.10 "Impish Indri" - Release amd64 (20211012)
  Mounts: Error: [Errno 40] Too many levels of symbolic links: '/proc/mounts'
  ProcKernelCmdLine: BOOT_IMAGE=/BOOT/ubuntu_zgtuq6@/vmlinuz-5.13.0-19-generic 
root=ZFS=rpool/ROOT/ubuntu_zgtuq6 ro quiet splash
  RelatedPackageVersions:
   zfs-initramfs  2.0.6-1ubuntu2
   zfsutils-linux 2.0.6-1ubuntu2
  SourcePackage: zsys
  SystemdFailedUnits:
   
  UpgradeStatus: No upgrade log present (probably fresh install)
  ZFSImportedPools:
   NAMESIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAGCAP  DEDUPHEALTH 
 ALTROOT
   bpool   768M  79.2M   689M- - 0%10%  1.00xONLINE 
 -
   rpool14G  3.33G  10.7G- - 1%23%  1.00xONLINE 
 -
  ZFSListcache-bpool:
   bpool/boot   off on  on  off on  off on  
off -   none-   -   -   -   -   -   -   
-
   bpool/BOOT   noneoff on  on  off on  off on  
off -   none-   -   -   -   -   -   -   
-
   bpool/BOOT/ubuntu_zgtuq6 /boot   on  on  on  off on  
off on  off -   none-   -   -   -   -   
-   -   -
  ZSYSJournal:
   -- Journal begins at Tue 2021-10-12 18:10:37 AST, ends at Tue 2021-10-12 
19:11:52 AST. --
   -- No entries --
  modified.conffile..etc.apt.apt.conf.d.90_zsys_system_autosnapshot: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1946808/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-27 Thread Didier Roche
** Description changed:

+ [Impact]
+ 
+  * Users can’t revert to previous snapshots when enabling the hw enablement 
stack kernel on focal or using any more recent version.
+  * The option is available on grub and will let you with a broken system, 
partially cloned.
+ 
+ [Test Case]
+ 
+  * Boot on a system, using ZFS and ZSys.
+  * In grub, select "History" entry
+  * Select one of the "Revert" option: the system should boot after being 
reverted with an older version.
+ 
+ 
+ [Where problems could occur]
+  * The code is in the initramfs, where the generated id suffix for all our 
ZFS datasets was empty due to new coreutils/kernels.
+  * We replace dd with another way (more robust and simple) for generating 
this ID.
+ 
+ 
+ -
+ 
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?
  
  
  
  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me to a repair modus.
  
  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".
  
  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in 
/usr/share/initramfs-tools/scripts/zfs at the end in function
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 
'a-z0-9' | cut -c-6
  }, the dd command fails during boot with the message "process 'dd' started 
with executable stack.
  After this an empty uid is returned which explains the dataset without a 
proper suffix.
  Replacing the function  with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  
  fixes the problem.
  
  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10
  
  zfs-initramfs version is:
  0.8.4-1ubuntu11
  
  With regards,
  
  Usarin Heininga
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  [Impact]

   * Users can’t revert to previous snapshots when enabling the HWE stack kernel
on focal or when using any more recent release.
   * The option is available in GRUB and will leave you with a broken, partially
cloned system.

  [Test Case]

   * Boot a system using ZFS and ZSys.
   * In GRUB, select the "History" entry.
   * Select one of the "Revert" options: the system should boot after being
reverted to an older version.

  
  [Where problems could occur]
   * The code is in the initramfs, where the generated id suffix for all our 
ZFS datasets was empty due to new coreutils/kernels.
   * We replace dd with another way (more robust and simple) for generating 
this ID.

  
  -

  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me into repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in 
/usr/share/initramfs-tools/scripts/zfs at the end in function
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 
'a-z0-9' | cut -c-6
  }, the dd command fails during boot with the message "process 'dd' started 
with executable stack.
  After this an empty uid is returned which explains the d

[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-16 Thread Didier Roche
We will backport your patch to previous releases soon.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me into repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in 
/usr/share/initramfs-tools/scripts/zfs at the end in function
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 
'a-z0-9' | cut -c-6
  }, the dd command fails during boot with the message "process 'dd' started 
with executable stack.
  After this an empty uid is returned which explains the dataset without a 
proper suffix.
  Replacing the function  with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }

  fixes the problem.
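
  For comparison, an equivalent generator that avoids dd entirely could look
  like this sketch (assuming head -c is available in the initramfs busybox):

  uid()
  {
     # keep only [a-z0-9] from /dev/urandom and stop after six characters
     tr -dc 'a-z0-9' < /dev/urandom | head -c6
  }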

  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10

  zfs-initramfs version is:
  0.8.4-1ubuntu11

  With regards,

  Usarin Heininga

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1894329/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-16 Thread Didier Roche
Thanks for the confirmation :)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me into repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in 
/usr/share/initramfs-tools/scripts/zfs at the end in function
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 
'a-z0-9' | cut -c-6
  }, the dd command fails during boot with the message "process 'dd' started 
with executable stack.
  After this an empty uid is returned which explains the dataset without a 
proper suffix.
  Replacing the function  with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }

  fixes the problem.

  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10

  zfs-initramfs version is:
  0.8.4-1ubuntu11

  With regards,

  Usarin Heininga

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1894329/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875767] Re: When operating install/removal with apt, zed floods log and apparently crashes snapshoting

2020-09-02 Thread Didier Roche
Hey! Is this reproducible today? We made some performance improvements
on zsys since then.

Please also use the apport hook to help with debugging:
apport-collect -p zsys 1875767

** Changed in: zsys (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875767

Title:
  When operating install/removal with apt, zed floods log and apparently
  crashes snapshoting

Status in zfs-linux package in Ubuntu:
  New
Status in zsys package in Ubuntu:
  Incomplete

Bug description:
  Hello!

  When I ran an install, it behaved like this:

  ERROR rpc error: code = DeadlineExceeded desc = context deadline exceeded 
  ... etc apt messages ...
  A processar 'triggers' para libc-bin (2.31-0ubuntu9) ...
  ERROR rpc error: code = Unavailable desc = transport is closing 

  The log gets flooded with the following message:

  abr 28 20:41:48 manauara zed[512257]: eid=10429 class=history_event 
pool_guid=0x7E8B0F177C4DD12C
  abr 28 20:41:49 manauara zed[508106]: Missed 1 events

  And machine load gets high for an incredible amount of time. The workaround
  is:

  systemctl restart zsysd
  systemctl restart zed

  The system also gets a bit slow and the fans spin up for a while (because of
  the load).

  This is a fresh install of Ubuntu 20.04 with ZFS on a SATA SSD.

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu12
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Tue Apr 28 20:49:14 2020
  InstallationDate: Installed on 2020-04-27 (1 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
   LANGUAGE=pt_BR:pt:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=pt_BR.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875767/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1891867] Re: zfs not correctly imported at boot

2020-08-18 Thread Didier Roche
Please run apport-collect to attach logs so that we can debug your setup.
@baling: why subscribe zsys to this bug? There is no mention of zsys being
used here; it seems to be a plain manual zfs setup.

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1891867

Title:
  zfs not correctly imported at boot

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  On a fresh and up-to-date Ubuntu 20.04 amd64 installation I configured
  two encrypted partitions on the same HDD. On these I created a
  striped zpool. After login I can import and mount the pool without
  problems, but the at-boot import fails after the first partition becomes
  available and is never retried.

  zpool version:
  zfs-0.8.3-1ubuntu12.2
  zfs-kmod-0.8.3-1ubuntu12.2
  uname -a:
  Linux hostname 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 
x86_64 x86_64 x86_64 GNU/Linux
  systemd --version
  systemd 245 (245.4-4ubuntu3.2)
  +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP 
+GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 
default-hierarchy=hybrid

  Relevant logs:
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte 
logical blocks: (2.00 TB/1.82 TiB)
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Write Protect is off
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read 
cache: enabled, doesn't support DPO or FUA
  Aug 17 07:12:25 hostname kernel:  sdb: sdb1 sdb2
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found ordering 
cycle on cryptsetup.target/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on systemd-cryptsetup@vol\x2dswap_crypt.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on systemd-random-seed.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Job 
cryptsetup.target/start deleted to break ordering cycle starting with 
zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:26 hostname systemd[1]: Starting Cryptography Setup for 
sdb1_crypt...
  Aug 17 07:12:26 hostname systemd[1]: Starting Cryptography Setup for 
sdb2_crypt...
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:32 hostname systemd[1]: Finished Cryptography Setup for 
sdb2_crypt.
  Aug 17 07:12:32 hostname systemd[1]: Reached target Block Device Preparation 
for /dev/mapper/sdb2_crypt.
  Aug 17 07:12:32 hostname zpool[1887]: cannot import 'sdb': no such pool or 
dataset
  Aug 17 07:12:32 hostname zpool[1887]: Destroy and re-create the pool 
from
  Aug 17 07:12:32 hostname zpool[1887]: a backup source.
  Aug 17 07:12:32 hostname systemd[1]: zfs-import-cache.service: Main process 
exited, code=exited, status=1/FAILURE
  Aug 17 07:12:32 hostname systemd[1]: zfs-import-cache.service: Failed with 
result 'exit-code'.
  Aug 17 07:12:34 hostname systemd[1]: Finished Cryptography Setup for 
sdb1_crypt.
  Aug 17 07:12:34 hostname systemd[1]: Reached target Block Device Preparation 
for /dev/mapper/sdb1_crypt.

To manage notifications about this bug go 

[Kernel-packages] [Bug 1882955] Re: LXD 4.2 broken on linux-kvm due to missing VLAN filtering

2020-06-23 Thread Philip Roche
CPC are seeing this issue in testing of _all_ minimal cloud images with LXD
snap version 4.2 or greater. This blocks promotion of all minimal cloud
download images and blocks build and publication of both daily and
release cloud images.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1882955

Title:
  LXD 4.2 broken on linux-kvm due to missing VLAN filtering

Status in linux-kvm package in Ubuntu:
  Triaged

Bug description:
  This is another case of linux-kvm having unexplained differences
  compared to linux-generic in areas that aren't related to hardware
  drivers (see other bug we filed for missing nft).

  This time, CPC is reporting that LXD no longer works on linux-kvm as
  we now set vlan filtering on our bridges to prevent containers from
  escaping firewalling through custom vlan tags.

  This relies on CONFIG_BRIDGE_VLAN_FILTERING which is a built-in on the
  generic kernel but is apparently missing on linux-kvm (I don't have
  any system running that kernel to confirm its config, but the behavior
  certainly matches that).

  We need this fixed in focal and groovy.
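
  As a quick check (assuming the usual Ubuntu config file location), the
  option can be verified against the running kernel flavour:

```
grep CONFIG_BRIDGE_VLAN_FILTERING "/boot/config-$(uname -r)"
# Per the report above, linux-generic should print CONFIG_BRIDGE_VLAN_FILTERING=y,
# while an affected linux-kvm build would show the option absent or =n.
```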

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-kvm/+bug/1882955/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-06-18 Thread Didier Roche
I will have a look (I don’t remember if the grub task is due to the
grub.cfg generation or to grub code itself), but TBH, this is low
priority on my list (downgrading the bug task priority as such, as this
is a multi-system corner-case)

** Changed in: systemd (Ubuntu)
   Importance: Medium => Low

** Changed in: grub2 (Ubuntu)
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but
different IDs, and it imports the pools by name
  - Same problem in the systemd generator, which will try to import all the
rpools.

  == Original Description ==

  I had an Ubuntu old installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.
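
  For reference, a minimal sketch of the GUID-based import described above
  (the GUID value, the search directory and the alternate name "rpool-old"
  are placeholders, not values from this report):

```
# List importable pools together with their numeric GUIDs:
zpool import -d /dev/disk/by-id

# Import one specific pool by GUID under a different name, read-only so the
# damaged pool is not written to:
zpool import -d /dev/disk/by-id -o readonly=on 1234567890123456789 rpool-old
```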

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-06-17 Thread Didier Roche
The patch doesn’t fix all instances of the bug (see upstream report
linked above). I think we should clarify that before backporting it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1882975] Re: crypttab not found error causes boot failure with changes in zfs-initramfs_0.8.4-1ubuntu5

2020-06-10 Thread Didier Roche
Thanks for the bug report and sorry for this, you are right. Uploaded in
-proposed


** Changed in: zfs-linux (Ubuntu)
   Status: New => Fix Committed

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Critical

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Didier Roche (didrocks)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1882975

Title:
  crypttab not found error causes boot failure with changes in zfs-
  initramfs_0.8.4-1ubuntu5

Status in zfs-linux package in Ubuntu:
  Fix Committed

Bug description:
  Boot ends before rpool loads, with a failure to find the crypttab file,
  which doesn't exist.

  Maybe this has a dependency on a package that creates that file?

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu5
  ProcVersionSignature: Ubuntu 5.4.0-34.38-generic 5.4.41
  Uname: Linux 5.4.0-34-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu38
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Wed Jun 10 11:42:55 2020
  InstallationDate: Installed on 2019-10-19 (235 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1882975/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1881541] Re: Prevent segfault immediately after install when zfs kernel module isn't loaded

2020-06-04 Thread Didier Roche
Sorry Colin, this was ZSys and I targeted the wrong component when
filing batch bugs for the ZSys 0.5 upload.

Fixed in https://launchpad.net/ubuntu/+source/zsys/0.5.0.

** Package changed: zfs-linux (Ubuntu) => zsys (Ubuntu)

** Changed in: zsys (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881541

Title:
  Prevent segfault immediately after install when zfs kernel module
  isn't loaded

Status in zsys package in Ubuntu:
  Fix Released

Bug description:
  Installing zsys on a non-ZFS system without the kernel module loaded
  led to a segfault.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1881541/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1881541] [NEW] Prevent segfault immediately after install when zfs kernel module isn't loaded

2020-06-01 Thread Didier Roche
Public bug reported:

Installing zsys on a non-ZFS system without the kernel module loaded led
to a segfault.

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881541

Title:
  Prevent segfault immediately after install when zfs kernel module
  isn't loaded

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Installing zsys on a non-ZFS system without the kernel module loaded
  led to a segfault.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881541/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
Great to hear John! Thanks for confirming and thanks to Richard for the
patch.

I’m happy to SRU it to focal once it’s proposed upstream. (Keep me
posted Richard, you can drop a link here and I will monitor)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
On an installed packaged system, the files are in different directories
(and don’t have the .in extension as they have been built with the
prefix replacement). Their names and locations are:

/lib/systemd/system/zfs-mount.service
/lib/systemd/system-generators/zfs-mount-generator

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
Your patch makes sense Richard, and I think it will be a good upstream
candidate. Of all the approaches you proposed, this is my preferred one
because it is the most flexible IMHO.

Tell me when you get a chance to test it and maybe John, you can confirm
this fixes it for you?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1876052] [NEW] Nvidia driver, default configuration, "Use dedicated card option" for app triggers software acceleration

2020-04-30 Thread Didier Roche
Public bug reported:

Fresh install of 20.04 LTS with nvidia binary driver from our archive.
(dual Intel/Nvidia setup)

No setting change has been done. The card supports "On demand".

Tested with Firefox (about:support) and Chrome (config:cpu). Both are showing 
the same result:
- default launch -> Intel driver, OK
GL_VENDOR   Intel
GL_RENDERER Mesa Intel(R) UHD Graphics 630 (CFL GT2)
GL_VERSION  4.6 (Core Profile) Mesa 20.0.4
- select use dedicated card -> Software acceleration! KO
GL_VENDOR   Google Inc.
GL_RENDERER Google SwiftShader
GL_VERSION  OpenGL ES 3.0 SwiftShader 4.1.0.7


Selecting the option for performance though is worse than not selecting it with 
our default configuration.


If you open nvidia-settings, you have only one tab available (which is showing
Performance mode), which is misleading because this is not the mode you are in.
Note that you are not in On Demand mode either, as selecting it + reboot restores
the expected behavior (multiple tabs in nvidia-settings).

For completeness, here are the other settings:

* On Demand (manually selected): OK
Right click menu option shows the Use dedicated card option: OK
- default launch -> Intel driver, OK
- select use dedicated card -> Nvidia, OK

* Power saving mode (manually selected): OK
- default launch -> Intel driver, OK

* Performance mode (manually selected, meaning choosing another option to change
the default and then selecting it back): KO
Right click menu option shows the Use dedicated card option! KO
- default launch -> Nvidia, OK
- select use dedicated card -> Nvidia, OK, but this option shouldn’t be present.
Reported this one as bug #1876049

2 additional things:
- It would be great for the default to be either Performance mode (the real one) or On
Demand for supported cards (nvidia-settings has the option only if this is
supported AFAIK, so it would be good to default dynamically to this one). Filed
as bug #1876051
- It would be great to have a way to pin an application to "Use dedicated
card". bug #1876050

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: gnome-shell 3.36.1-5ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
Uname: Linux 5.4.0-28-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Thu Apr 30 09:22:04 2020
DisplayManager: gdm3
InstallationDate: Installed on 2020-04-24 (5 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=fr_FR.UTF-8
 SHELL=/bin/bash
RelatedPackageVersions: mutter-common 3.36.1-3ubuntu3
SourcePackage: gnome-shell
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: gnome-shell (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: nvidia-settings (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal nvidia-dedicatedcard-option

** Also affects: nvidia-settings (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-settings in Ubuntu.
https://bugs.launchpad.net/bugs/1876052

Title:
  Nvidia driver, default configuration, "Use dedicated card option" for
  app triggers software acceleration

Status in gnome-shell package in Ubuntu:
  New
Status in nvidia-settings package in Ubuntu:
  New

Bug description:
  Fresh install of 20.04 LTS with nvidia binary driver from our archive.
  (dual Intel/Nvidia setup)

  No setting change has been done. The card supports "On demand".

  Tested with Firefox (about:support) and Chrome (config:cpu). Both are showing 
the same result:
  - default launch -> Intel driver, OK
  GL_VENDOR   Intel
  GL_RENDERER Mesa Intel(R) UHD Graphics 630 (CFL GT2)
  GL_VERSION  4.6 (Core Profile) Mesa 20.0.4
  - select use dedicated card -> Software acceleration! KO
  GL_VENDOR   Google Inc.
  GL_RENDERER Google SwiftShader
  GL_VERSION  OpenGL ES 3.0 SwiftShader 4.1.0.7

  
  Selecting the option for performance though is worse than not selecting it 
with our default configuration.

  
  If you open nvidia-settings, you have only one tab available (which is
showing Performance mode), which is misleading because this is not the mode you
are in. Note that you are not in On Demand mode either, as selecting it + reboot
restores the expected behavior (multiple tabs in nvidia-settings).

  For completeness, here are the other settings:

  * On Demand (manually selected): OK
  Right click menu option shows the Use dedicated card option: OK
  - default launch -> Intel driver, OK
  - select use dedicated card -> Nvidia, OK

  * Power saving mode (manually selected): OK
  - default launch -> Intel driver, OK

  * Performance mode (manually selected, meaning choose another option to 
cha

[Kernel-packages] [Bug 1876051] [NEW] Default acceleration mode option is none of the 3 nvidia settings option

2020-04-30 Thread Didier Roche
Public bug reported:

As stated on bug #1876052, the default acceleration mode option is none
of the 3 nvidia-settings options.

It’s displayed as "Performance mode" when you launch it for the first time, 
however:
- default launch is Intel (so no performance mode)
- there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode" and this one is triggering software acceleration)
- nvidia-settings is only displaying that tab, and selecting another mode, then
selecting this one back after a reboot will display all the other tab options, so
nvidia-settings knows that the default setting is different from Performance
mode.

It seems nvidia-settings only shows the On demand option for cards that
support it. I thus suggest that our default selection represent a better
option for our users:
- If the card supports On demand acceleration -> select that by default
- If the card doesn’t support On demand acceleration -> select Performance mode 
by default
- Remove the current "weird" status it's currently on by default.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvidia-settings 440.64-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
Uname: Linux 5.4.0-28-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Thu Apr 30 09:43:31 2020
InstallationDate: Installed on 2020-04-24 (5 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=fr_FR.UTF-8
 SHELL=/bin/bash
SourcePackage: nvidia-settings
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: nvidia-settings (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal nvidia-dedicatedcard-option

** Description changed:

- As stated on bug #…, the default acceleration mode option is none of the
- 3 nvidia settings option.
+ As stated on bug #1876052, the default acceleration mode option is none
+ of the 3 nvidia settings option.
  
  It’s displayed as "Performance mode" when you launch it for the first time, 
however:
  - default launch is Intel (so no performance mode)
  - there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode" and this one is triggering software acceleration)
  - nvidia settings is only displaying that tab, and selecting another mode, 
then selecting it back this one after reboot will display all other tab 
options, so nvidia settings knows that the default setting is different from 
Performance mode.
  
  It seems nvidia settings is only showing the On demand option for cards that 
support it. I suggest thus that our default selection represents a better 
option for our users:
  - If the card supports On demand acceleration -> select that by default
  - If the card doesn’t support On demand acceleration -> select Performance 
mode by default
  - Remove the current "weird" status it's currently on by default.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: nvidia-settings 440.64-0ubuntu1
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Apr 30 09:43:31 2020
  InstallationDate: Installed on 2020-04-24 (5 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
-  TERM=xterm-256color
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=
-  LANG=fr_FR.UTF-8
-  SHELL=/bin/bash
+  TERM=xterm-256color
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=fr_FR.UTF-8
+  SHELL=/bin/bash
  SourcePackage: nvidia-settings
  UpgradeStatus: No upgrade log present (probably fresh install)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-settings in Ubuntu.
https://bugs.launchpad.net/bugs/1876051

Title:
  Default acceleration mode option is none of the 3 nvidia settings
  option

Status in nvidia-settings package in Ubuntu:
  New

Bug description:
  As stated on bug #1876052, the default acceleration mode option is
  none of the 3 nvidia settings option.

  It’s displayed as "Performance mode" when you launch it for the first time, 
however:
  - default launch is Intel (so no performance mode)
  - there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode" and this one is triggering software acceleration)
  - nvidia settings is only displaying that tab, and selecting another mode, 
then selecting it back this one after reboot will display all other tab 
options, so nvidia settings knows that the default setting is different from 
Performance 

[Kernel-packages] [Bug 1849522] Re: imported non-rpool/bpool zpools are not being reimported after reboot

2020-04-02 Thread Didier Roche
See my previous comment: this is only related to zfs-linux with the
version I mentioned. Also, we didnt’ make any change to grub for ZFS
since 26 February, and if you have an empty grub.cfg, this may be due to
other bugs, like multiple rpool/bpool, which isn’t what this one was
about. Ensure that your bpool was imported before generating the grub
menu and is in the cache. This may be why your grub config is empty.

Just to scope this one:
- have a bootable system (preferably installed with the beta image to not get 
stuck in a previous bug)
- create a pool that you import
- reboot -> the pool should still be there

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1849522

Title:
  imported non-rpool/bpool zpools are not being reimported after reboot

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Installed ubuntu 19.10 onto a zfs bpool/rpool.

  Installed zsys.

  Did a "zpool import" of my existing zfs pools.

  Rebooted.

  The previously imported zpools are not imported at boot!

  I am currently using this hacky workaround:

  https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507

  
  I would expect that local zpools I have manually imported would re-import 
when the system is rebooted.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zsys 0.2.2
  ProcVersionSignature: Ubuntu 5.3.0-19.20-generic 5.3.1
  Uname: Linux 5.3.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  Date: Wed Oct 23 11:40:36 2019
  InstallationDate: Installed on 2019-10-19 (4 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zsys
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1849522/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-04-02 Thread Didier Roche
This is probably because your bpool is not in the zfs cache file.

Either reinstall from the beta image which has a fix in the installer, or:
- clean up any files and directories (after unmounting /boot/grub and 
/boot/efi) under /boot (not /boot itself)
- zpool import bpool
- zpool set cachefile= bpool
- sudo mount -a (to remount /boot/grub and /boot/efi)
- update-grub

-> you shouldn't have any issue on reboot anymore and will be equivalent
to a new install from the beta image (a consolidated sketch of these steps
follows below).
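
A consolidated sketch of the steps above (mount points follow the default
ZFS-on-root layout; adjust to your setup), plus a check that bpool ended up in
the cache file:

```
umount /boot/grub /boot/efi   # then clean up stale files under /boot by hand
zpool import bpool
zpool set cachefile= bpool    # empty value records bpool in the default cache file
mount -a                      # remounts /boot/grub and /boot/efi
update-grub

# Verify bpool is now present in the cache consulted at boot:
zdb -C -U /etc/zfs/zpool.cache | grep -n bpool
```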

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but
different IDs, and it imports the pools by name
  - Same problem in the systemd generator, which will try to import all the
rpools.

  == Original Description ==

  I had an Ubuntu old installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1849522] Re: imported non-rpool/bpool zpools are not being reimported after reboot

2020-04-01 Thread Didier Roche
Thanks for your bug report! This is now fixed in zfs-linux
0.8.3-1ubuntu10 in focal.

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1849522

Title:
  imported non-rpool/bpool zpools are not being reimported after reboot

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Installed ubuntu 19.10 onto a zfs bpool/rpool.

  Installed zsys.

  Did a "zpool import" of my existing zfs pools.

  Rebooted.

  The previously imported zpools are not imported at boot!

  I am currently using this hacky workaround:

  https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507

  
  I would expect that local zpools I have manually imported would re-import 
when the system is rebooted.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zsys 0.2.2
  ProcVersionSignature: Ubuntu 5.3.0-19.20-generic 5.3.1
  Uname: Linux 5.3.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  Date: Wed Oct 23 11:40:36 2019
  InstallationDate: Installed on 2019-10-19 (4 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zsys
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1849522/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-03-31 Thread Didier Roche
Hey Balint. I just added the task post ZFS upload (the upload was
yesterday and I added the task this morning) so indeed, there is some
work needed, part of it being in systemd.

Basically, systemd isn't capable of mounting datasets when pool names are
duplicated on a machine: zfs-mount-generator generates .mount units with the
pool name. For all pools matching the desired name, systemd needs to either:
- prefer the pool id matching zpool.cache,
- check every pool for the desired dataset and import the first matching one
(same dataset path),
- or the .mount unit should be able to import by ID, and zfs-mount-generator
upstream should generate a pool id somewhere in the unit file (see the
illustrative check of the generated units just below).
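
As an illustration (unit names and dataset paths are made up for this sketch),
the units the generator writes can be inspected on a running system; the What=
lines reference datasets through the pool *name*, which is exactly what becomes
ambiguous when two pools share a name:

```
grep -H '^What=' /run/systemd/generator/*.mount
# e.g. (illustrative output only):
#   /run/systemd/generator/-.mount:What=rpool/ROOT/ubuntu_abc123
#   /run/systemd/generator/boot.mount:What=bpool/BOOT/ubuntu_abc123
```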

** Changed in: systemd (Ubuntu)
   Status: Incomplete => Confirmed

** Changed in: systemd (Ubuntu)
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but
different IDs, and it imports the pools by name
  - Same problem in the systemd generator, which will try to import all the
rpools.

  == Original Description ==

  I had an Ubuntu old installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-03-31 Thread Didier Roche
** Also affects: systemd (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but
different IDs, and it imports the pools by name
  - Same problem in the systemd generator, which will try to import all the
rpools.

  == Original Description ==

  I had an Ubuntu old installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-03-27 Thread Didier Roche
** Also affects: grub2 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

Status in grub2 package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Confirmed
Status in grub2 source package in Focal:
  New
Status in zfs-linux source package in Focal:
  Confirmed

Bug description:
  Fresh installation of stock Ubuntu 19.10 Eoan with experimental root on ZFS.
  System has existing zpools with data.

  Installation is uneventful. First boot with no problems. Updates
  applied. No other changes from fresh installation. Reboot.

  External pool 'tank' imports with no errors. Reboot.

  External pool has failed to import on boot. In contrast bpool and
  rpool are ok. Manually re-import 'tank' with no issues. I can see both
  'tank' and its path in /dev/disk/by-id/ in /etc/zfs/zpool.cache.
  Reboot.

  'tank' has failed to import on boot. It is also missing from
  /etc/zfs/zpool.cache. Is it possible that the cache is being re-
  generated on reboot, and the newly imported pools are getting erased
  from it? I can re-import the pools again manually with no issues, but
  they don't persist between re-boots.

  Installing normally on ext4, this is not an issue and data pools import
  automatically on boot with no further effort.
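
  A possible workaround (a sketch, assuming the boot scripts read
  /etc/zfs/zpool.cache and that zfs-initramfs embeds a copy of it) is to
  re-import the pool and pin it to the cache file explicitly:

  ```
  # Import the pool and record it in the cache file so it persists
  sudo zpool import tank
  sudo zpool set cachefile=/etc/zfs/zpool.cache tank

  # Rebuild the initramfs so any embedded copy of the cache is refreshed
  sudo update-initramfs -u
  ```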

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1850130/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862776] Re: [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

2020-03-11 Thread Didier Roche
$ ./change-override -c main -S alsa-ucm-conf
Override component to main
alsa-ucm-conf 1.2.2-1 in focal: universe/misc -> main
alsa-ucm-conf 1.2.2-1 in focal amd64: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal arm64: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal armhf: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal i386: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal ppc64el: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal s390x: universe/libs/optional/100% -> main
Override [y|N]? y
7 publications overridden.
$ ./change-override -c main -S alsa-topology-conf
Override component to main
alsa-topology-conf 1.2.2-1 in focal: universe/misc -> main
alsa-topology-conf 1.2.2-1 in focal amd64: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal arm64: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal armhf: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal i386: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal ppc64el: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal s390x: universe/libs/optional/100% -> main
Override [y|N]? y
7 publications overridden.


** Changed in: alsa-topology-conf (Ubuntu)
   Status: New => Fix Released

** Changed in: alsa-ucm-conf (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to alsa-topology-conf in Ubuntu.
https://bugs.launchpad.net/bugs/1862776

Title:
  [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

Status in alsa-topology-conf package in Ubuntu:
  Fix Released
Status in alsa-ucm-conf package in Ubuntu:
  Fix Released

Bug description:
  * alsa-ucm-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/1.2.1.2-2

  = Rationale =
  It provides data that alsa uses to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-ucm-conf
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-ucm-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-ucm-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

  * alsa-topology-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/1.2.1-2

  = Rationale =
  It provides data that alsa uses to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-topology-conf
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-topology-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf/+bug/1862776/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862776] Re: [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

2020-03-11 Thread Didier Roche
Ack on both. Simple configuration files, simple packaging and build
system. All good +1

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to alsa-topology-conf in Ubuntu.
https://bugs.launchpad.net/bugs/1862776

Title:
  [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

Status in alsa-topology-conf package in Ubuntu:
  Fix Released
Status in alsa-ucm-conf package in Ubuntu:
  Fix Released

Bug description:
  * alsa-ucm-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/1.2.1.2-2

  = Rationale =
  It provides data that alsa uses to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-ucm-conf
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-ucm-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-ucm-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

  * alsa-topology-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/1.2.1-2

  = Rationale =
  It provides data that alsa uses to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-topology-conf
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-topology-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf/+bug/1862776/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1861704] Re: ubuntu-fan recommends netcat package in universe

2020-02-03 Thread Philip Roche
It appears that a bug was already filed against netcat

https://bugs.launchpad.net/ubuntu/+source/netcat-openbsd/+bug/1780316

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to ubuntu-fan in Ubuntu.
https://bugs.launchpad.net/bugs/1861704

Title:
  ubuntu-fan recommends netcat package in universe

Status in ubuntu-fan package in Ubuntu:
  New

Bug description:
  Following some debug of the docker.io package in universe, we
  (Canonical CPC) discovered that the ubuntu-fan package from main and
  the netcat-traditional package from universe were being installed.

  This was due to the following dependency/recommends tree:

  * docker.io (in universe) recommends ubuntu-fan (main)
  * ubuntu-fan (main) recommends netcat (universe)
  * netcat (universe) is a transitional package and depends on 
netcat-traditional (universe)
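
  One way to verify the chain (a sketch; the exact output depends on the
  release and archive state) is to query apt directly:

  ```
  # Show what docker.io and ubuntu-fan recommend
  apt-cache depends docker.io | grep -i recommends
  apt-cache depends ubuntu-fan | grep -i recommends

  # Confirm which component netcat is published in (Section: universe/...)
  apt-cache show netcat | grep -i '^Section'
  ```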

  Our concern is that this might be a packaging violation as ubuntu-fan
  is recommending a package not in main.

  > In addition, the packages in main
  > 
  > must not require a package outside of main for compilation or execution 
(thus, the package must 
  > not declare a "Depends", "Recommends", or "Build-Depends" relationship on a 
non-main package),

  Source: https://people.canonical.com/~cjwatson/ubuntu-
  policy/policy.html/ch-archive.html#s-main

  I will file a bug against netcat too to start a discussion on netcat
  being built from netcat-openbsd (main) instead of netcat-traditional
  (universe).

  
  Our feeling is that netcat is such a frequently depended-on or recommended
  package that its presence in main would benefit Ubuntu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-fan/+bug/1861704/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1861704] [NEW] ubuntu-fan recommends netcat package in universe

2020-02-03 Thread Philip Roche
Public bug reported:

Following some debug of the docker.io package in universe, we (Canonical
CPC) discovered that the ubuntu-fan package from main and the netcat-
traditional package from universe were being installed.

This was due to the following dependency/recommends tree:

* docker.io (in universe) recommends ubuntu-fan (main)
* ubuntu-fan (main) recommends netcat (universe)
* netcat (universe) is a transitional package and depends on netcat-traditional 
(universe)

Our concern is that this might be a packaging violation as ubuntu-fan is
recommending a package not in main.

> In addition, the packages in main
> 
> must not require a package outside of main for compilation or execution 
> (thus, the package must 
> not declare a "Depends", "Recommends", or "Build-Depends" relationship on a 
> non-main package),

Source: https://people.canonical.com/~cjwatson/ubuntu-policy/policy.html
/ch-archive.html#s-main

I will file a bug against netcat too to start a discussion on netcat
being built from netcat-openbsd (main) instead of netcat-traditional
(universe).


Our feeling is that netcat is such a frequently depended-on or recommended
package that its presence in main would benefit Ubuntu.

** Affects: ubuntu-fan (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to ubuntu-fan in Ubuntu.
https://bugs.launchpad.net/bugs/1861704

Title:
  ubuntu-fan recommends netcat package in universe

Status in ubuntu-fan package in Ubuntu:
  New

Bug description:
  Following some debug of the docker.io package in universe, we
  (Canonical CPC) discovered that the ubuntu-fan package from main and
  the netcat-traditional package from universe were being installed.

  This was due to the following dependency/recommends tree:

  * docker.io (in universe) recommends ubuntu-fan (main)
  * ubuntu-fan (main) recommends netcat (universe)
  * netcat (universe) is a transitional package and depends on 
netcat-traditional (universe)

  Our concern is that this might be a packaging violation as ubuntu-fan
  is recommending a package not in main.

  > In addition, the packages in main
  > 
  > must not require a package outside of main for compilation or execution 
(thus, the package must 
  > not declare a "Depends", "Recommends", or "Build-Depends" relationship on a 
non-main package),

  Source: https://people.canonical.com/~cjwatson/ubuntu-
  policy/policy.html/ch-archive.html#s-main

  I will file a bug against netcat too to start a discussion on netcat
  being built from netcat-openbsd (main) instead of netcat-traditional
  (universe).

  
  Our feeling is that netcat is such a frequently depended-on or recommended
  package that its presence in main would benefit Ubuntu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-fan/+bug/1861704/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-06 Thread Didier Roche
One last thing: I think we should test this on rotational disks and
assess the performance impact before pushing it as a default. This will
give us a good baseline to decide if this should be pushed or if we need
to add even more warnings on the ZFS install option.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.
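
  A minimal sketch of that flow (device name, key path and pool layout are
  illustrative only, not the installer's actual commands):

  ```
  # 1. Installer creates the pool encrypted from the start, unlocked by a
  #    well-known, non-secret keyfile
  zpool create -O encryption=on -O keyformat=passphrase \
      -O keylocation=file:///run/keystore/rpool.key rpool /dev/sdb2

  # 2. Later, the user takes ownership of the key and switches to a prompt
  zfs change-key -o keylocation=prompt -o keyformat=passphrase rpool
  ```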

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

