[Kernel-packages] [Bug 2000403] Re: series of i5000 kernel bug reports every boot, latest jammy

2024-09-01 Thread Matthew Bradley
From a cursory look, it seems one of the two drivers (the memory-temperature
driver or the underlying EDAC driver) isn't handling some DIMM configurations
correctly. My DIMM configuration looks like this:

/sys/devices/system/edac/mc/mc0/dimm0/dimm_location:branch 0 channel 0 slot 0 
/sys/devices/system/edac/mc/mc0/dimm12/dimm_location:branch 1 channel 1 slot 0 
/sys/devices/system/edac/mc/mc0/dimm13/dimm_location:branch 1 channel 1 slot 1 
/sys/devices/system/edac/mc/mc0/dimm1/dimm_location:branch 0 channel 0 slot 1 
/sys/devices/system/edac/mc/mc0/dimm4/dimm_location:branch 0 channel 1 slot 0 
/sys/devices/system/edac/mc/mc0/dimm5/dimm_location:branch 0 channel 1 slot 1 
/sys/devices/system/edac/mc/mc0/dimm8/dimm_location:branch 1 channel 0 slot 0 
/sys/devices/system/edac/mc/mc0/dimm9/dimm_location:branch 1 channel 0 slot 1
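
For context on the UBSAN splat quoted below ("index 4 is out of range for type
'u16 [4]'"): it describes a fixed-size four-element register array being
indexed past its end, consistent with the driver trusting a slot count read
back from the chipset. A plausible minimal sketch of that pattern --
hypothetical, heavily simplified names, not the actual driver source:

#include <stdint.h>
#include <stdio.h>

#define NUM_MTRS 4                 /* matches the 'u16 [4]' in the report */

struct i5000_pvt {                 /* hypothetical, heavily simplified */
    uint16_t b0_mtr[NUM_MTRS];     /* per-slot memory technology registers */
    int maxdimmperch;              /* per-channel DIMM count reported by hw */
};

/* Stand-in for the driver's pci_read_config_word() register reads. */
static uint16_t read_mtr(int slot)
{
    return (uint16_t)(0x1000 + slot);
}

static void get_mc_regs(struct i5000_pvt *pvt)
{
    /* If the chipset reports more slots per channel than the array was
     * sized for, slot == 4 writes past b0_mtr[3] -- exactly the access
     * UBSAN flags. Clamping the loop bound to NUM_MTRS would avoid it. */
    for (int slot = 0; slot < pvt->maxdimmperch; slot++)
        pvt->b0_mtr[slot] = read_mtr(slot);
}

int main(void)
{
    struct i5000_pvt pvt = { .maxdimmperch = 5 }; /* > NUM_MTRS: overrun */

    get_mc_regs(&pvt);
    printf("b0_mtr[0] = 0x%04x\n", pvt.b0_mtr[0]);
    return 0;
}

Built with gcc -fsanitize=bounds, this prints a runtime report of the same
shape as the trace below.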

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2000403

Title:
  series of i5000 kernel bug reports every boot, latest jammy

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Kernel bugs on boot, latest jammy, edac/i5000 related?

  [  +0.214131] systemd-journald[502]: Received client request to flush runtime journal.
  [  +2.520374] 

  [  +0.008680] UBSAN: array-index-out-of-bounds in /build/linux-MUxl3y/linux-5.15.0/drivers/edac/i5000_edac.c:956:20
  [  +0.008754] index 4 is out of range for type 'u16 [4]'
  [  +0.008790] CPU: 3 PID: 577 Comm: systemd-udevd Not tainted 5.15.0-56-generic #62-Ubuntu
  [  +0.07] Hardware name: Intel S5000PSL/S5000PSL, BIOS S5000.86B.15.00.0101.110920101604 11/09/2010
  [  +0.03] Call Trace:
  [  +0.03]  <TASK>
  [  +0.05]  show_stack+0x52/0x5c
  [  +0.09]  dump_stack_lvl+0x4a/0x63
  [  +0.09]  dump_stack+0x10/0x16
  [  +0.03]  ubsan_epilogue+0x9/0x49
  [  +0.03]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
  [  +0.04]  ? i5000_get_mc_regs.isra.0+0x14c/0x1c0 [i5000_edac]
  [  +0.07]  i5000_probe1+0x506/0x5c0 [i5000_edac]
  [  +0.04]  ? pci_bus_read_config_byte+0x40/0x70
  [  +0.06]  ? do_pci_enable_device+0xa4/0x110
  [  +0.06]  i5000_init_one+0x26/0x30 [i5000_edac]
  [  +0.04]  local_pci_probe+0x4b/0x90
  [  +0.04]  pci_device_probe+0x119/0x1f0
  [  +0.04]  really_probe+0x222/0x420
  [  +0.04]  __driver_probe_device+0x119/0x190
  [  +0.03]  driver_probe_device+0x23/0xc0
  [  +0.03]  __driver_attach+0xbd/0x1f0
  [  +0.04]  ? __device_attach_driver+0x120/0x120
  [  +0.03]  bus_for_each_dev+0x7f/0xd0
  [  +0.03]  driver_attach+0x1e/0x30
  [  +0.03]  bus_add_driver+0x148/0x220
  [  +0.03]  ? vunmap_range_noflush+0x3d5/0x470
  [  +0.06]  driver_register+0x95/0x100
  [  +0.03]  ? 0xc079
  [  +0.047507] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.050854] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.055477] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.060441] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.062717] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.064828] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.067094] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use >
  [  +0.070663] systemd[1]: /etc/systemd/system/ceph-4067126d-01cb-40af-824a-881c130140f8@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for

[Kernel-packages] [Bug 2000403] Re: series of i5000 kernel bug reports every boot, latest jammy

2024-08-31 Thread Matthew Bradley
I should note: this is ECC RAM.

[Kernel-packages] [Bug 2000403] Re: series of i5000 kernel bug reports every boot, latest jammy

2024-08-31 Thread Matthew Bradley
Hi, I have this EXACT same issue on a system here, down to the same line
number in the kernel source that throws the error.

The system is a Precision 490 workstation with 32 GB of RAM across 8 x
4 GB DIMMs.

Relevant log lines:

[5.088551] loop12: detected capacity change from 0 to 32
[5.088762] loop11: detected capacity change from 0 to 79328
[5.088764] loop10: detected capacity change from 0 to 24416
[5.089226] loop13: detected capacity change from 0 to 27680
[5.673025] i5k_amb: probe of i5k_amb.0 failed with error -16
[5.680900] 

[5.680908] UBSAN: array-index-out-of-bounds in /build/linux-hwe-5.15-AvrTps/linux-hwe-5.15-5.15.0/drivers/edac/i5000_edac.c:956:20
[5.680913] index 4 is out of range for type 'u16 [4]'
[5.680916] CPU: 2 PID: 362 Comm: systemd-udevd Tainted: G  I  5.15.0-119-generic #129~20.04.1-Ubuntu
[5.680919] Hardware name: Dell Inc. Precision WorkStation 490/0GU083, BIOS A08 04/25/2008
[5.680922] Call Trace:
[5.680925]  <TASK>
[5.680929]  dump_stack_lvl+0x4a/0x63
[5.680937]  dump_stack+0x10/0x16
[5.680939]  ubsan_epilogue+0x9/0x36
[5.680943]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[5.680946]  i5000_probe1+0x5ce/0x630 [i5000_edac]
[5.680951]  ? pci_bus_read_config_byte+0x40/0x70
[5.680956]  ? do_pci_enable_device+0x4/0x110
[5.680960]  i5000_init_one+0x27/0x30 [i5000_edac]
[5.680964]  local_pci_probe+0x4b/0x90
[5.680968]  pci_device_probe+0x191/0x200
[5.680971]  really_probe.part.0+0xcb/0x380
[5.680975]  really_probe+0x40/0x80
[5.680977]  __driver_probe_device+0xe8/0x140
[5.680980]  driver_probe_device+0x23/0xb0
[5.680982]  __driver_attach+0xc5/0x180
[5.680984]  ? __device_attach_driver+0x140/0x140
[5.680986]  bus_for_each_dev+0x7e/0xd0
[5.680991]  driver_attach+0x1e/0x30
[5.680995]  bus_add_driver+0x178/0x220
[5.680998]  driver_register+0x74/0xe0
[5.681000]  ? 0xc070
[5.681002]  __pci_register_driver+0x68/0x70
[5.681005]  i5000_init+0x36/0x1000 [i5000_edac]
[5.681009]  do_one_initcall+0x48/0x1e0
[5.681014]  ? __cond_resched+0x19/0x40
[5.681019]  ? kmem_cache_alloc_trace+0x15a/0x420
[5.681024]  do_init_module+0x52/0x230
[5.681029]  load_module+0x12ae/0x1520
[5.681033]  __do_sys_finit_module+0xbf/0x120
[5.681036]  ? __do_sys_finit_module+0xbf/0x120
[5.681040]  __x64_sys_finit_module+0x1a/0x20
[5.681042]  x64_sys_call+0x1ac3/0x1fa0
[5.681045]  do_syscall_64+0x54/0xb0
[5.681050]  ? syscall_exit_to_user_mode+0x2c/0x50
[5.681053]  ? x64_sys_call+0x3b9/0x1fa0
[5.681056]  ? do_syscall_64+0x61/0xb0
[5.681058]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
[5.681062] RIP: 0033:0x7f61860f195d
[5.681065] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 03 35 0d 00 f7 d8 64 89 01 48
[5.681068] RSP: 002b:7ffef18051d8 EFLAGS: 0246 ORIG_RAX: 0139
[5.681072] RAX: ffda RBX: 55f2f7fc1440 RCX: 7f61860f195d
[5.681074] RDX:  RSI: 7f6185fd1ded RDI: 000e
[5.681076] RBP: 0002 R08:  R09: 
[5.681078] R10: 000e R11: 0246 R12: 7f6185fd1ded
[5.681080] R13:  R14: 55f2f7daba20 R15: 55f2f7fc1440
[5.681083]  </TASK>
[5.681084] 

[5.681086] 

[5.681088] UBSAN: array-index-out-of-bounds in /build/linux-hwe-5.15-AvrTps/linux-hwe-5.15-5.15.0/drivers/edac/i5000_edac.c:958:20
[5.681091] index 4 is out of range for type 'u16 [4]'
[5.681093] CPU: 2 PID: 362 Comm: systemd-udevd Tainted: G  I  5.15.0-119-generic #129~20.04.1-Ubuntu
[5.681095] Hardware name: Dell Inc. Precision WorkStation 490/0GU083, BIOS A08 04/25/2008
[5.681097] Call Trace:
[5.681098]  <TASK>
[5.681099]  dump_stack_lvl+0x4a/0x63
[5.681102]  dump_stack+0x10/0x16
[5.681104]  ubsan_epilogue+0x9/0x36
[5.681106]  __ubsan_handle_out_of_bounds.cold+0x44/0x49
[5.681110]  i5000_probe1+0x4a8/0x630 [i5000_edac]
[5.681114]  ? pci_bus_read_config_byte+0x40/0x70
[5.681116]  ? do_pci_enable_device+0x4/0x110
[5.681119]  i5000_init_one+0x27/0x30 [i5000_edac]
[5.681123]  local_pci_probe+0x4b/0x90
[5.681126]  pci_device_probe+0x191/0x200
[5.681129]  really_probe.part.0+0xcb/0x380
[5.681131]  really_probe+0x40/0x80
[5.681134]  __driver_probe_device+0xe8/0x140
[5.681136]  driver_probe_device+0x23/0xb0
[5.681138]  __driver_attach+0xc5/0x180
[5.681140]  ? __device_attach_driver+0

[Kernel-packages] [Bug 2044657] Re: Multiple data corruption issues in zfs

2024-01-12 Thread Matthew Bradley
> Should we quickly do such a minimal backport to get that fix out quickly

Unequivocally, yes. It has been several weeks and users of ZFS across
multiple releases are still exposed to a data corruption bug. The fix
should go out ASAP, especially when it's only a single-line fix.

This bug has already been fixed in every other OS supporting ZFS. Even
if the bug is rare on older versions of ZFS, it's extremely troubling to
users when data corruption fixes aren't rolled out in a timely manner.
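
For reference on how small that fix is: per https://github.com/openzfs/zfs/pull/15571
(linked from the bug description below), the core change widens one dirtiness
check in dnode_is_dirty() so that a dnode with pending dirty records counts as
dirty even before it is linked onto the per-txg dirty list; previously such a
dnode looked "clean", and SEEK_HOLE users like cp could treat unwritten data
as a hole. A simplified standalone sketch of that shape, with stand-in types
rather than the actual OpenZFS source:

#include <stdbool.h>

#define TXG_SIZE 4  /* TXG_SIZE is 4 upstream */

struct dnode {
    /* stand-in for multilist_link_active(&dn->dn_dirty_link[i]) */
    bool dirty_link_active[TXG_SIZE];
    /* stand-in for !list_is_empty(&dn->dn_dirty_records[i]) */
    bool dirty_records_nonempty[TXG_SIZE];
};

static bool dnode_is_dirty(const struct dnode *dn)
{
    for (int i = 0; i < TXG_SIZE; i++) {
        if (dn->dirty_link_active[i] ||
            dn->dirty_records_nonempty[i])  /* the added condition */
            return true;
    }
    return false;
}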

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2044657

Title:
  Multiple data corruption issues in zfs

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Confirmed
Status in zfs-linux source package in Bionic:
  Confirmed
Status in zfs-linux source package in Focal:
  Confirmed
Status in zfs-linux source package in Jammy:
  Confirmed
Status in zfs-linux source package in Lunar:
  Confirmed
Status in zfs-linux source package in Mantic:
  Incomplete
Status in zfs-linux source package in Noble:
  Fix Released

Bug description:
  [ Impact ]

   * Multiple data corruption issues have been identified and fixed in
  ZFS. Some of them, at varying real-life reproducibility frequency, have
  been determined to affect very old zfs releases. The recommendation is to
  upgrade to 2.2.2 or 2.1.14, or to backport the dnode patch alone. This is to
  ensure users get other potentially related fixes and runtime tunables
  to possibly mitigate other bugs that are related and are being fixed
  upstream for future releases.

   * For jammy the 2.1.14 upgrade will bring HWE kernel support and also
  compatibility/support for hardened kernel builds that mitigate SLS
  (straight-line speculation).

  [ Test Plan ]

   * !!! Danger !!! use reproducer from
  https://zfsonlinux.topicbox.com/groups/zfs-discuss/T12876116b8607cdb
  and confirm if that issue is resolved or not. Do not run on production
  ZFS pools / systems.

   * autopkgtest pass (from https://ubuntu-archive-
  team.ubuntu.com/proposed-migration/ )

   * adt-matrix pass (from https://kernel.ubuntu.com/adt-matrix/ )

   * kernel regression zfs testsuite pass (from Kernel team RT test
  results summary, private)

   * zsys integration test pass (upgrade of zsys installed systems for
  all releases)

   * zsys install test pass (for daily images of LTS releases only that
  have such installer support, as per iso tracker test case)

   * LXD (ping LXD team to upgrade vendored in tooling to 2.2.2 and
  2.1.14, and test LXD on these updated kernels)

  
  [ Where problems could occur ]

   * Upgrade to 2.1.14 on jammy with SLS mitigations compatibility will
  introduce a slight slowdown on amd64 (for hw-accelerated assembly code-paths
  only, in the encryption primitives)

   * Uncertain of the performance impact of the extra checks in the dnode
  patch fix itself. Possibly affecting speed of operation, at the benefit of
  correctness.

  [ Other Info ]
   
   * https://github.com/openzfs/zfs/pull/15571 is most current consideration of 
affairs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/2044657/+subscriptions


[Kernel-packages] [Bug 2044657] Re: zfs block cloning file system corruption

2023-12-05 Thread Matthew Bradley
Since this bug has been around since 2006, is there a possibility this
will be backported to the 0.8.3-1ubuntu12.15 version of ZFS? It's the
up-to-date version of ZFS installed on systems here with Ubuntu 20.04.6.

OS: Ubuntu 20.04.6 LTS x86_64
Kernel: 5.4.0-167-generic

uname -a
Linux archive-box 5.4.0-167-generic #184-Ubuntu SMP Tue Oct 31 09:21:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-hwe-6.5 in Ubuntu.
https://bugs.launchpad.net/bugs/2044657

Title:
  zfs block cloning file system corruption

Status in linux-hwe-6.2 package in Ubuntu:
  Confirmed
Status in linux-hwe-6.5 package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  OpenZFS 2.2 reportedly has a bug where block cloning might lead to
  file system corruption and data loss. This was fixed in OpenZFS 2.2.1.

  Original bug report: https://github.com/openzfs/zfs/issues/15526

  and 2.2.1 release notes:
  https://github.com/openzfs/zfs/releases/tag/zfs-2.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-hwe-6.2/+bug/2044657/+subscriptions


[Kernel-packages] [Bug 1962572] Re: Impossible to Delete UEFI Dump Files - CONFIG_EFIVAR_FS & CONFIG_EFI_VARS both =y

2022-03-03 Thread Matthew Bradley
This is related to https://bugs.launchpad.net/ubuntu/+source/acpi-call/+bug/1953261 which HAS NOT BEEN FIXED.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1962572

Title:
  Impossible to Delete UEFI Dump Files - CONFIG_EFIVAR_FS &
  CONFIG_EFI_VARS both =y

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  See this bug for where this issue appears:
  https://bugs.launchpad.net/ubuntu/+source/acpi-call/+bug/1953261

  I am seeing in the Arch UEFI documentation:

  UEFI Runtime Variables Support (efivarfs filesystem -
  /sys/firmware/efi/efivars). This option is important as this is
  required to manipulate UEFI runtime variables using tools like
  /usr/bin/efibootmgr. The configuration option below has been added in
  kernel 3.10 and later.

  CONFIG_EFIVAR_FS=y

  UEFI Runtime Variables Support (old efivars sysfs interface -
  /sys/firmware/efi/vars). This option should be disabled to prevent any
  potential issues with both efivarfs and sysfs-efivars enabled.

  CONFIG_EFI_VARS=n

  In Ubuntu 20.04, both of these are set =y.  This appears to be the
  cause of my inability to delete /sys/firmware/efi/efivars/dump*
  variables.  These variables are mirrored as directories of the same
  name in /sys/firmware/efi/vars/dump*.  If I delete the files
  ../efivars/dump*, they reappear on reboot.
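
  For the mechanics of such a deletion attempt: efivarfs marks variable files
  immutable, so a deletion that sticks normally requires clearing
  FS_IMMUTABLE_FL (what `chattr -i` does) before unlinking. A minimal sketch
  -- the path below is a placeholder, real dump variables carry a GUID suffix
  -- though per this report the variables reappear regardless while the
  legacy sysfs-efivars interface is also enabled:

  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      /* Placeholder name; real dump variables have a GUID suffix. */
      const char *path = "/sys/firmware/efi/efivars/dump-XXXX";

      int fd = open(path, O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      int flags = 0;
      if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
          flags &= ~FS_IMMUTABLE_FL;            /* same effect as chattr -i */
          if (ioctl(fd, FS_IOC_SETFLAGS, &flags))
              perror("FS_IOC_SETFLAGS");
      }
      close(fd);

      if (unlink(path)) { perror("unlink"); return 1; }
      return 0;
  }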

  Systems have been bricked this way, when people thought they had cleared out 
the variable storage, but had not, and then found it full on the next boot.  
Some Thinkpads of the W530 vintage do not have a way to clear out that storage 
at post time.
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu27.21
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 20.04
  InstallationDate: Installed on 2021-02-05 (389 days ago)
  InstallationMedia: Ubuntu 20.04.2 LTS "Focal Fossa" - Release amd64 (20210204)
  MachineType: LENOVO 2436CTO
  Package: linux (not installed)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.13.0-30-generic 
root=UUID=5d036090-3f9e-4adf-a198-e0db3da45582 ro rootflags=subvol=@ 
luks.crypttab=no intel_iommu=on quiet
  ProcVersionSignature: Ubuntu 5.13.0-30.33~20.04.1-generic 5.13.19
  RelatedPackageVersions:
   linux-restricted-modules-5.13.0-30-generic N/A
   linux-backports-modules-5.13.0-30-generic  N/A
   linux-firmware 1.187.26
  RfKill:
   0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
  Tags:  focal
  Uname: Linux 5.13.0-30-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm btrbk cdrom debian-tor dip libvirt lpadmin lxd plugdev 
sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 06/11/2018
  dmi.bios.release: 2.72
  dmi.bios.vendor: LENOVO
  dmi.bios.version: G5ETB2WW (2.72 )
  dmi.board.asset.tag: Not Available
  dmi.board.name: 2436CTO
  dmi.board.vendor: LENOVO
  dmi.board.version: Not Defined
  dmi.chassis.asset.tag: No Asset Information
  dmi.chassis.type: 10
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: Not Available
  dmi.ec.firmware.release: 1.13
  dmi.modalias: 
dmi:bvnLENOVO:bvrG5ETB2WW(2.72):bd06/11/2018:br2.72:efr1.13:svnLENOVO:pn2436CTO:pvrThinkPadW530:rvnLENOVO:rn2436CTO:rvrNotDefined:cvnLENOVO:ct10:cvrNotAvailable:skuLENOVO_MT_2436:
  dmi.product.family: ThinkPad W530
  dmi.product.name: 2436CTO
  dmi.product.sku: LENOVO_MT_2436
  dmi.product.version: ThinkPad W530
  dmi.sys.vendor: LENOVO

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1962572/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp