[Kernel-packages] [Bug 1987711] Re: Ubuntu display get shifted after some time.

2022-08-28 Thread Daniel van Vugt
Thanks. It sounds like the problem is now happening in Wayland sessions.
Since essentially the same bug was first reported in Xorg, that would
make this a kernel bug.

Please try adding this line to /etc/environment:

  MUTTER_DEBUG_ENABLE_ATOMIC_KMS=0

and then reboot. It will change the way the screen is rendered in
Wayland sessions.
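
A minimal sketch of applying this, assuming a stock /etc/environment
(the variable comes from the advice above; tee is just one way to
append it):

$ echo 'MUTTER_DEBUG_ENABLE_ATOMIC_KMS=0' | sudo tee -a /etc/environment
$ grep MUTTER_DEBUG /etc/environment   # verify the line is present
$ sudo reboot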


** Package changed: xorg-server (Ubuntu) => linux (Ubuntu)

** Summary changed:

- Ubuntu display get shifted after some time. 
+ [i915] Ubuntu display get shifted after some time.

** Tags added: i915

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987711

Title:
  [i915] Ubuntu display get shifted after some time.

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  This is a continuation of this bug:
  https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1986583
  where the advised fix was applied.

  Summary: After Ubuntu is turned on, the screen eventually gets shifted
  to the side (and starts to overlap).

  I set up a webcam to record the screen, and the video below shows what
  it looked like after the fix was applied.
  www.ms.mff.cuni.cz/~krasicei/capture-0231.bug.crop.slow.mp4
  (video was cropped, and slowed down 2x)

  Before the fix was applied, the shift looked somewhat like this:
  www.ms.mff.cuni.cz/~krasicei/bug17-50-18-8-2022.c.mp4
  (video is also cropped, and slowed down 2x)

  No one was using the computer when the shift occurred.

  Is there any way to enable more detailed logging? I can probably
  pinpoint the exact moment in time when the shift happened.
  I tried reading the journalctl log, but found nothing interesting.
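
  One way to get more detail, as a sketch (drm.debug is the kernel's
  KMS debug switch; the timestamps are placeholders for the window
  around a shift):

  $ sudo sh -c 'echo 0x4 > /sys/module/drm/parameters/debug'   # enable KMS debug output
  $ journalctl -k --since "2022-08-26 02:00" --until "2022-08-26 03:00"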

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: xorg 1:7.7+23ubuntu2
  ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
  Uname: Linux 5.15.0-46-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  BootLog:
   
  CasperMD5CheckResult: unknown
  Date: Fri Aug 26 02:31:10 2022
  DistUpgraded: 2022-08-02 14:36:41,704 DEBUG Running PostInstallScript: 
'/usr/lib/ubuntu-advantage/upgrade_lts_contract.py'
  DistroCodename: jammy
  DistroVariant: ubuntu
  ExtraDebuggingInterest: Yes, including running git bisection searches
  GraphicsCard:
   Intel Corporation Skylake GT2 [HD Graphics 520] [8086:1916] (rev 07) 
(prog-if 00 [VGA controller])
 Subsystem: Intel Corporation Skylake GT2 [HD Graphics 520] [8086:2015]
  InstallationDate: Installed on 2022-02-17 (189 days ago)
  InstallationMedia: Ubuntu 20.04.3 LTS "Focal Fossa" - Release amd64 (20210819)
  Lsusb:
   Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
   Bus 001 Device 003: ID 222a:0001 ILI Technology Corp. Multi-Touch Screen
   Bus 001 Device 002: ID 1a2c:2d23 China Resource Semico Co., Ltd Keyboard
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: Default string Default string
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic 
root=UUID=582fb3cc-8ef6-460b-b9ae-a16f81c604a7 ro quiet splash vt.handoff=7
  SourcePackage: xorg
  UpgradeStatus: Upgraded to jammy on 2022-08-02 (23 days ago)
  dmi.bios.date: 10/07/2021
  dmi.bios.release: 5.12
  dmi.bios.vendor: American Megatrends Inc.
  dmi.bios.version: GSKU0504.V54
  dmi.board.asset.tag: Default string
  dmi.board.name: SKYBAY
  dmi.board.vendor: Default string
  dmi.board.version: Default string
  dmi.chassis.asset.tag: Default string
  dmi.chassis.type: 3
  dmi.chassis.vendor: Default string
  dmi.chassis.version: Default string
  dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrGSKU0504.V54:bd10/07/2021:br5.12:svnDefaultstring:pnDefaultstring:pvrDefaultstring:rvnDefaultstring:rnSKYBAY:rvrDefaultstring:cvnDefaultstring:ct3:cvrDefaultstring:skuDefaultstring:
  dmi.product.family: Default string
  dmi.product.name: Default string
  dmi.product.sku: Default string
  dmi.product.version: Default string
  dmi.sys.vendor: Default string
  version.compiz: compiz N/A
  version.libdrm2: libdrm2 2.4.110-1ubuntu1
  version.libgl1-mesa-dri: libgl1-mesa-dri 22.0.5-0ubuntu0.1
  version.libgl1-mesa-glx: libgl1-mesa-glx N/A
  version.xserver-xorg-core: xserver-xorg-core 2:21.1.3-2ubuntu2.1
  version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
  version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.1.0-2ubuntu1
  version.xserver-xorg-video-intel: xserver-xorg-video-intel N/A
  version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 
1:1.0.17-2build1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987711/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987711] [NEW] Ubuntu display get shifted after some time.

2022-08-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

This is a continuation of this bug:
https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1986583
where the advised fix was applied.

Summary: After Ubuntu is turned on, the screen eventually gets shifted
to the side (and starts to overlap).

I set up a webcam to record the screen, and the video below shows what
it looked like after the fix was applied.
www.ms.mff.cuni.cz/~krasicei/capture-0231.bug.crop.slow.mp4
(video was cropped, and slowed down 2x)

Before the fix was applied, the shift looked somewhat like this:
www.ms.mff.cuni.cz/~krasicei/bug17-50-18-8-2022.c.mp4
(video is also cropped, and slowed down 2x)

No one was using the computer when the shift occurred.

Is there any way to enable more detailed logging? I can probably
pinpoint the exact moment in time when the shift happened.
I tried reading the journalctl log, but found nothing interesting.

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: xorg 1:7.7+23ubuntu2
ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
Uname: Linux 5.15.0-46-generic x86_64
ApportVersion: 2.20.11-0ubuntu82.1
Architecture: amd64
BootLog:
 
CasperMD5CheckResult: unknown
Date: Fri Aug 26 02:31:10 2022
DistUpgraded: 2022-08-02 14:36:41,704 DEBUG Running PostInstallScript: 
'/usr/lib/ubuntu-advantage/upgrade_lts_contract.py'
DistroCodename: jammy
DistroVariant: ubuntu
ExtraDebuggingInterest: Yes, including running git bisection searches
GraphicsCard:
 Intel Corporation Skylake GT2 [HD Graphics 520] [8086:1916] (rev 07) (prog-if 
00 [VGA controller])
   Subsystem: Intel Corporation Skylake GT2 [HD Graphics 520] [8086:2015]
InstallationDate: Installed on 2022-02-17 (189 days ago)
InstallationMedia: Ubuntu 20.04.3 LTS "Focal Fossa" - Release amd64 (20210819)
Lsusb:
 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 Bus 001 Device 003: ID 222a:0001 ILI Technology Corp. Multi-Touch Screen
 Bus 001 Device 002: ID 1a2c:2d23 China Resource Semico Co., Ltd Keyboard
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
MachineType: Default string Default string
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic 
root=UUID=582fb3cc-8ef6-460b-b9ae-a16f81c604a7 ro quiet splash vt.handoff=7
SourcePackage: xorg
UpgradeStatus: Upgraded to jammy on 2022-08-02 (23 days ago)
dmi.bios.date: 10/07/2021
dmi.bios.release: 5.12
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: GSKU0504.V54
dmi.board.asset.tag: Default string
dmi.board.name: SKYBAY
dmi.board.vendor: Default string
dmi.board.version: Default string
dmi.chassis.asset.tag: Default string
dmi.chassis.type: 3
dmi.chassis.vendor: Default string
dmi.chassis.version: Default string
dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrGSKU0504.V54:bd10/07/2021:br5.12:svnDefaultstring:pnDefaultstring:pvrDefaultstring:rvnDefaultstring:rnSKYBAY:rvrDefaultstring:cvnDefaultstring:ct3:cvrDefaultstring:skuDefaultstring:
dmi.product.family: Default string
dmi.product.name: Default string
dmi.product.sku: Default string
dmi.product.version: Default string
dmi.sys.vendor: Default string
version.compiz: compiz N/A
version.libdrm2: libdrm2 2.4.110-1ubuntu1
version.libgl1-mesa-dri: libgl1-mesa-dri 22.0.5-0ubuntu0.1
version.libgl1-mesa-glx: libgl1-mesa-glx N/A
version.xserver-xorg-core: xserver-xorg-core 2:21.1.3-2ubuntu2.1
version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:19.1.0-2ubuntu1
version.xserver-xorg-video-intel: xserver-xorg-video-intel N/A
version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.17-2build1

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: Incomplete


** Tags: amd64 apport-bug jammy ubuntu
-- 
Ubuntu display get shifted after some time. 
https://bugs.launchpad.net/bugs/1987711
You received this bug notification because you are a member of Kernel Packages, 
which is subscribed to linux in Ubuntu.



[Kernel-packages] [Bug 1987134] Re: Black screen after wake from S3 suspend on Nvidia proprietary driver

2022-08-28 Thread Daniel van Vugt
OK, this sounds like an Nvidia kernel driver bug, or some other kernel
driver bug.

P.S. If you have TLP installed then please remove it, as it is known to
cause bugs like this as well.
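
For reference, checking for and removing it is one command each
(assuming the stock Ubuntu packaging; tlp-rdw is the usual companion
package):

$ dpkg -l tlp   # check whether TLP is installed
$ sudo apt remove tlp tlp-rdw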

** No longer affects: mutter (Ubuntu)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-515 in Ubuntu.
https://bugs.launchpad.net/bugs/1987134

Title:
  Black screen after wake from S3 suspend on Nvidia proprietary driver

Status in nvidia-graphics-drivers-515 package in Ubuntu:
  New

Bug description:
  An Nvidia driver upgrade in Ubuntu 20.04 caused waking from S3 suspend
  to result in a black screen.

  Upgrading to Ubuntu 22.04 with the latest stable Nvidia driver
  515.65.01 did not resolve the issue.

  There are several Nvidia sleep tickets that at first seem related, but
  they mention dmesg entries that I am not seeing, which makes me
  suspect I have a different issue:

  * 
https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-510/+bug/1970088
  * 
https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-470/+bug/1946303

  The workaround dhenry mentions in 1970088, disabling the nvidia
  suspend and hibernate systemd services, makes no difference for me.
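
  (For reference, that workaround is presumably along these lines; the
  unit names match the nvidia-suspend.service seen in the journal
  excerpts below:)

  $ sudo systemctl disable --now nvidia-suspend.service nvidia-hibernate.service nvidia-resume.service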

  Currently running Ubuntu 22.04, Nvidia GTX 970 on driver 515.65.01,
  AMD Ryzen 7 5800X.

  Here are journalctl excerpts during the reboot process, with and
  without the nvidia systemd suspend services. dmesg and
  /var/log/kern.log show none of the errors mentioned in the above
  tickets.

  With disabled nvidia suspend systemd services:

  Aug 19 12:42:37 uf-panacea ModemManager[2731]:   [sleep-monitor] system 
is about to suspend
  Aug 19 12:42:37 uf-panacea NetworkManager[2617]:   [1660938157.8491] 
manager: NetworkManager state>
  Aug 19 12:42:37 uf-panacea systemd[1]: Reached target Sleep.
  Aug 19 12:42:37 uf-panacea systemd[1]: Starting Record successful boot for 
GRUB...
  Aug 19 12:42:37 uf-panacea systemd[1]: Starting System Suspend...
  Aug 19 12:42:37 uf-panacea systemd-sleep[13048]: Entering sleep state 
'suspend'...
  Aug 19 12:42:37 uf-panacea systemd[1]: grub-common.service: Deactivated 
successfully.
  Aug 19 12:42:37 uf-panacea systemd[1]: Finished Record successful boot for 
GRUB.

  With enabled nvidia suspend systemd services:

  Aug 19 12:12:25 uf-panacea systemd[1]: Starting NVIDIA system suspend 
actions...
  Aug 19 12:12:25 uf-panacea suspend[15998]: nvidia-suspend.service
  Aug 19 12:12:25 uf-panacea logger[15998]: <13>Aug 19 12:12:25 suspend: 
nvidia-suspend.service
  Aug 19 12:12:25 uf-panacea systemd[1]: grub-common.service: Deactivated 
successfully.
  Aug 19 12:12:25 uf-panacea systemd[1]: Finished Record successful boot for 
GRUB.
  Aug 19 12:12:25 uf-panacea systemd[1]: Starting GRUB failed boot detection...
  Aug 19 12:12:25 uf-panacea systemd[1]: grub-initrd-fallback.service: 
Deactivated successfully.
  Aug 19 12:12:25 uf-panacea systemd[1]: Finished GRUB failed boot detection.
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"40"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event1  - 
Power Button: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"43"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event0  - 
Power Button: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"44"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event4  - 
USB Mouse OKLIC: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"45"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event5  - 
USB Mouse OKLIC Mouse: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"46"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event6  - 
USB Mouse OKLIC System Control: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"47"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event7  - 
USB Mouse OKLIC Consumer Control: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"53"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"53"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event11 - 
Valve Software Steam Controller: device removed
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"50"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"50"
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (II) event9  - 
Logitech G305: device removed
  Aug 19 12:12:25 uf-panacea rtkit-daemon[3286]: Supervising 10 threads of 4 
processes of 1 users.
  Aug 19 12:12:25 uf-panacea /usr/libexec/gdm-x-session[4092]: (**) Option "fd" 
"51"
  Aug 19 12:12:25 uf-pana

[Kernel-packages] [Bug 1988018] [NEW] [mlx5] Intermittent VF-LAG activation failure

2022-08-28 Thread Frode Nordahl
Public bug reported:

During system initialization there is a specific sequence that must be
followed to enable the use of hardware offload and VF-LAG.

Intermittently one may see that VF-LAG initialization fails:
[Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: lag map port 1:1 port 2:2 shared_fdb:1
[Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_cmd_check:782:(pid 9): CREATE_LAG(0x840) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x7d49cb)
[Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_create_lag:248:(pid 9): Failed to create LAG (-22)
[Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_activate_lag:288:(pid 9): Failed to activate VF LAG
                           Make sure all VFs are unbound prior to VF LAG activation or deactivation

This is caused by rebinding the driver prior to the VF lag being ready.

A sysfs knob has recently been added to the driver [0] and we should
monitor it before attempting to rebind the driver:

$ cat /sys/kernel/debug/mlx5/0000\:08\:00.0/lag/state
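
A rough sketch of gating the rebind on that knob (the "active" string
and the PCI addresses below are assumptions for illustration, not
confirmed values):

$ until grep -q active /sys/kernel/debug/mlx5/0000\:08\:00.0/lag/state; do sleep 1; done
$ echo 0000:08:00.2 | sudo tee /sys/bus/pci/drivers/mlx5_core/bind   # now rebind the VF (example address)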

The kernel feature is available in the upcoming Kinetic 5.19 kernel and
we should probably backport it to the Jammy 5.15 kernel.

0:
https://github.com/torvalds/linux/commit/7f46a0b7327ae261f9981888708dbca22c283900

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: Fix Committed

** Affects: netplan.io (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu Jammy)
 Importance: Undecided
 Status: New

** Affects: netplan.io (Ubuntu Jammy)
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu Kinetic)
 Importance: Undecided
 Status: Fix Committed

** Affects: netplan.io (Ubuntu Kinetic)
 Importance: Undecided
 Status: New

** Also affects: linux (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: linux (Ubuntu Kinetic)
   Importance: Undecided
   Status: New

** Changed in: linux (Ubuntu Kinetic)
   Status: New => Fix Committed

** Also affects: netplan.io (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1988018

Title:
  [mlx5] Intermittent VF-LAG activation failure

Status in linux package in Ubuntu:
  Fix Committed
Status in netplan.io package in Ubuntu:
  New
Status in linux source package in Jammy:
  New
Status in netplan.io source package in Jammy:
  New
Status in linux source package in Kinetic:
  Fix Committed
Status in netplan.io source package in Kinetic:
  New

Bug description:
  During system initialization there is a specific sequence that must be
  followed to enable the use of hardware offload and VF-LAG.

  Intermittently one may see that VF-LAG initialization fails:
  [Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: lag map port 1:1 port 2:2 shared_fdb:1
  [Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_cmd_check:782:(pid 9): CREATE_LAG(0x840) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x7d49cb)
  [Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_create_lag:248:(pid 9): Failed to create LAG (-22)
  [Thu Jul 21 10:54:58 2022] mlx5_core 0000:08:00.0: mlx5_activate_lag:288:(pid 9): Failed to activate VF LAG
                             Make sure all VFs are unbound prior to VF LAG activation or deactivation

  This is caused by rebinding the driver prior to the VF lag being
  ready.

  A sysfs knob has recently been added to the driver [0] and we should
  monitor it before attempting to rebind the driver:

  $ cat /sys/kernel/debug/mlx5/0000\:08\:00.0/lag/state

  The kernel feature is available in the upcoming Kinetic 5.19 kernel
  and we should probably backport it to the Jammy 5.15 kernel.

  0:
  https://github.com/torvalds/linux/commit/7f46a0b7327ae261f9981888708dbca22c283900

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1988018/+subscriptions




[Kernel-packages] [Bug 1799679] Re: Nvidia driver causes Xorg to use 100% CPU and huge lag when dragging OpenGL app windows

2022-08-28 Thread Daniel van Vugt
** Tags added: jammy

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-418 in Ubuntu.
https://bugs.launchpad.net/bugs/1799679

Title:
  Nvidia driver causes Xorg to use 100% CPU and huge lag when dragging
  OpenGL app windows

Status in Mutter:
  Unknown
Status in metacity package in Ubuntu:
  Invalid
Status in mutter package in Ubuntu:
  Invalid
Status in nvidia-graphics-drivers-390 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-410 package in Ubuntu:
  Won't Fix
Status in nvidia-graphics-drivers-418 package in Ubuntu:
  Won't Fix
Status in nvidia-graphics-drivers-510 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-515 package in Ubuntu:
  Confirmed

Bug description:
  Nvidia driver causes Xorg to use 100% CPU and shows high lag and
  stutter... but only when dragging glxgears/glxheads, or any window
  over them. Other apps do not exhibit the problem.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: nvidia-driver-390 390.87-0ubuntu1
  ProcVersionSignature: Ubuntu 4.18.0-10.11-generic 4.18.12
  Uname: Linux 4.18.0-10-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.10-0ubuntu13
  Architecture: amd64
  Date: Wed Oct 24 19:11:15 2018
  InstallationDate: Installed on 2018-05-26 (151 days ago)
  InstallationMedia: Ubuntu 18.10 "Cosmic Cuttlefish" - Alpha amd64 (20180525)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_AU.UTF-8
   SHELL=/bin/bash
  SourcePackage: nvidia-graphics-drivers-390
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/mutter/+bug/1799679/+subscriptions




[Kernel-packages] [Bug 1799679] Re: Nvidia driver causes Xorg to use 100% CPU and huge lag when dragging OpenGL app windows

2022-08-28 Thread Daniel van Vugt
Nvidia's latest assessment is that this is indeed a driver bug, so we
don't need to keep the mutter and metacity tasks open here...

https://gitlab.gnome.org/GNOME/mutter/-/issues/2233#note_1538392

** Also affects: nvidia-graphics-drivers-515 (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: metacity (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: mutter (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: nvidia-graphics-drivers-515 (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-418 in Ubuntu.
https://bugs.launchpad.net/bugs/1799679

Title:
  Nvidia driver causes Xorg to use 100% CPU and huge lag when dragging
  OpenGL app windows

Status in Mutter:
  Unknown
Status in metacity package in Ubuntu:
  Invalid
Status in mutter package in Ubuntu:
  Invalid
Status in nvidia-graphics-drivers-390 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-410 package in Ubuntu:
  Won't Fix
Status in nvidia-graphics-drivers-418 package in Ubuntu:
  Won't Fix
Status in nvidia-graphics-drivers-510 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-515 package in Ubuntu:
  Confirmed

Bug description:
  Nvidia driver causes Xorg to use 100% CPU and shows high lag and
  stutter... but only when dragging glxgears/glxheads, or any window
  over them. Other apps do not exhibit the problem.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: nvidia-driver-390 390.87-0ubuntu1
  ProcVersionSignature: Ubuntu 4.18.0-10.11-generic 4.18.12
  Uname: Linux 4.18.0-10-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.10-0ubuntu13
  Architecture: amd64
  Date: Wed Oct 24 19:11:15 2018
  InstallationDate: Installed on 2018-05-26 (151 days ago)
  InstallationMedia: Ubuntu 18.10 "Cosmic Cuttlefish" - Alpha amd64 (20180525)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_AU.UTF-8
   SHELL=/bin/bash
  SourcePackage: nvidia-graphics-drivers-390
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/mutter/+bug/1799679/+subscriptions




[Kernel-packages] [Bug 1987996] Missing required logs.

2022-08-28 Thread Ubuntu Kernel Bot
This bug is missing log files that will aid in diagnosing the problem.
While running an Ubuntu kernel (not a mainline or third-party kernel)
please enter the following command in a terminal window:

apport-collect 1987996

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable
to run this command, please add a comment stating that fact and change
the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the
Ubuntu Kernel Team.

** Changed in: linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987996

Title:
  boot is failing shows error ACPI BIOS error (bug): could not resolve
  symbol [\_SB.PR00._CPC] I tried reinstalling ubuntu and erase the
  current files however the installer crashed.

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  I have read this is a kernel issue, however I don't know how to
  resolve it.

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: ubiquity 22.04.15
  ProcVersionSignature: Ubuntu 5.15.0-25.25-generic 5.15.30
  Uname: Linux 5.15.0-25-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu82
  Architecture: amd64
  CasperMD5CheckResult: pass
  CasperVersion: 1.470
  CurrentDesktop: ubuntu:GNOME
  Date: Sun Aug 28 17:17:31 2022
  InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
maybe-ubiquity quiet splash ---
  LiveMediaBuild: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 (20220419)
  ProcEnviron:
   LANGUAGE=en_US.UTF-8
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   LC_NUMERIC=C.UTF-8
  SourcePackage: ubiquity
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987996/+subscriptions




[Kernel-packages] [Bug 1987996] Re: boot is failing shows error ACPI BIOS error (bug): could not resolve symbol [\_SB.PR00._CPC] I tried reinstalling ubuntu and erase the current files however the ins

2022-08-28 Thread Chris Guiver
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

By filing this bug against `ubiquity`, you've reported an issue against
the desktop installer, so the details gathered by the apport tool were
geared toward exploring installer issues.

I've changed the filing to be against the kernel (i.e. linux); however,
it may still be difficult to investigate, given the reported detail
includes almost no kernel details (it captured installer details
instead).

** Package changed: ubiquity (Ubuntu) => linux (Ubuntu)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987996

Title:
  boot is failing shows error ACPI BIOS error (bug): could not resolve
  symbol [\_SB.PR00._CPC] I tried reinstalling ubuntu and erase the
  current files however the installer crashed.

Status in linux package in Ubuntu:
  New

Bug description:
  I have read this is a kernel issue, however I don't know how to
  resolve it.

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: ubiquity 22.04.15
  ProcVersionSignature: Ubuntu 5.15.0-25.25-generic 5.15.30
  Uname: Linux 5.15.0-25-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu82
  Architecture: amd64
  CasperMD5CheckResult: pass
  CasperVersion: 1.470
  CurrentDesktop: ubuntu:GNOME
  Date: Sun Aug 28 17:17:31 2022
  InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
maybe-ubiquity quiet splash ---
  LiveMediaBuild: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 (20220419)
  ProcEnviron:
   LANGUAGE=en_US.UTF-8
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   LC_NUMERIC=C.UTF-8
  SourcePackage: ubiquity
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987996/+subscriptions




[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612244/+files/acpidump.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as a data repository for Veeam
  backup. Each machine has 2x 256G XFS volumes that use heavy
  reflinking. This works as intended, except for one issue: our XFS
  volumes freeze once a week, a few minutes after midnight on Monday
  nights. We can only do a reboot to get the servers working again;
  then it works for a week again.

  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process
  and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value

[Kernel-packages] [Bug 1987996] [NEW] boot is failing shows error ACPI BIOS error (bug): could not resolve symbol [\_SB.PR00._CPC] I tried reinstalling ubuntu and erase the current files however the i

2022-08-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I have read this is a kernel issue, however I don't know how to resolve
it.

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: ubiquity 22.04.15
ProcVersionSignature: Ubuntu 5.15.0-25.25-generic 5.15.30
Uname: Linux 5.15.0-25-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu82
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.470
CurrentDesktop: ubuntu:GNOME
Date: Sun Aug 28 17:17:31 2022
InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
maybe-ubiquity quiet splash ---
LiveMediaBuild: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 (20220419)
ProcEnviron:
 LANGUAGE=en_US.UTF-8
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 LC_NUMERIC=C.UTF-8
SourcePackage: ubiquity
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug jammy ubiquity-22.04.15 ubuntu
-- 
boot is failing shows error ACPI BIOS error (bug): could not resolve symbol 
[\_SB.PR00._CPC] I tried reinstalling ubuntu and erase the current files 
however the installer crashed.
https://bugs.launchpad.net/bugs/1987996
You received this bug notification because you are a member of Kernel Packages, 
which is subscribed to linux in Ubuntu.



[Kernel-packages] [Bug 1970127] Re: Ubuntu-22.04 Live CD not booting on HP ENVY X360 notebook (Ryzen 7 3700U)

2022-08-28 Thread kenshir
I'm experiencing almost exactly the same situation as Roxnny on an HP ENVY x360 
15-eusl.
I had boot troubles with the 22.04.1 live USB, and also with an installed 20.04 
after a kernel update (5.15.0-46).

Anyway, I managed to boot the 22.04 live USB by adding the acpi=off parameter 
to the boot options.
After installing and updating 22.04, I can still only boot the system if I 
add acpi=off.
Unfortunately that brings some serious malfunctions: I can't see the battery 
level, the touchpad doesn't work, and it seems from System Monitor that just 
one CPU core is working.
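
A minimal sketch of making acpi=off persistent across reboots, using
the standard GRUB workflow:

$ sudoedit /etc/default/grub   # add acpi=off to GRUB_CMDLINE_LINUX_DEFAULT
$ sudo update-grub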

A couple of further notes:
- during installation I had a GRUB install failure (twice); I had to install 
it manually later
- sometimes Firefox starts with just a black screen; I have to close it and 
launch it again

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1970127

Title:
  Ubuntu-22.04 Live CD not booting on HP ENVY X360 notebook (Ryzen 7
  3700U)

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  The ubuntu-22.04-desktop-amd64.iso live CD does not boot on an HP ENVY
  X360 notebook (Ryzen 7 3700U).

  Model:
  HP ENVY X360
  13-ar0777ng
  9YN58EA#ABD

  After a few minutes the screen simply switches to black. There is no
  way to get a console by pressing CTRL-ALT-F1, F2, ...

  I removed the boot options "quiet splash" and recorded the boot via video.
  (just ask if you need the full video)
  I attach a relevant screenshot from that video, showing a kernel bug
  message.

  
  Currently the notebook runs with Ubuntu-20.04 using the Linux-5.11 HWE kernel.
  But suspend to memory isn't working.
  Related: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1903292

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1970127/+subscriptions




[Kernel-packages] [Bug 1987997] Status changed to Confirmed

2022-08-28 Thread Ubuntu Kernel Bot
This change was made by a bot.

** Changed in: linux (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as a data repository for Veeam
  backup. Each machine has 2x 256G XFS volumes that use heavy
  reflinking. This works as intended, except for one issue: our XFS
  volumes freeze once a week, a few minutes after midnight on Monday
  nights. We can only do a reboot to get the servers working again;
  then it works for a week again.

  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process
  and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
  [562599.837310] RSP: 002b:7f4066ffc850

[Kernel-packages] [Bug 1987997] WifiSyslog.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612243/+files/WifiSyslog.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as a data repository for Veeam
  backup. Each machine has 2x 256G XFS volumes that use heavy
  reflinking. This works as intended, except for one issue: our XFS
  volumes freeze once a week, a few minutes after midnight on Monday
  nights. We can only do a reboot to get the servers working again;
  then it works for a week again.

  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process
  and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP v

[Kernel-packages] [Bug 1987998] [NEW] LSM: Configuring Too Many LSMs Causes Kernel Panic on Boot

2022-08-28 Thread Matthew Ruffell
Public bug reported:

BugLink: https://bugs.launchpad.net/bugs/1987998

[Impact]

The Ubuntu kernel carries an out-of-tree patchset, known as "LSM: Module
stacking for AppArmor" upstream, to enable stackable LSMs for
containers. The revision the Ubuntu kernel carries is an older one, from
2020, and has some slight divergences from the latest revision in
development.

One such divergence, is support for Landlock as a stackable LSM. When
the stackable LSM patchset was applied, Landlock was still in
development and not mainlined yet, and wasn't present in the earlier
revision of the "LSM: Module stacking for AppArmor" patchset. Support
for this was added by us.

There was a minor omission made while enabling support for Landlock:
the LSM slot type was marked as LSMBLOB_NEEDED when it should have been
LSMBLOB_NOT_NEEDED.

Landlock itself does not provide any of the hooks that use a struct
lsmblob, such as secid_to_secctx, secctx_to_secid, inode_getsecid,
cred_getsecid, kernel_act_as, task_getsecid_subj, task_getsecid_obj and
ipc_getsecid.

When we set .slot = LSMBLOB_NEEDED, this indicates that we need an entry
in struct lsmblob, and we need to increment LSMBLOB_ENTRIES by one to
fit the entry into the secid array:

#define LSMBLOB_ENTRIES ( \
   (IS_ENABLED(CONFIG_SECURITY_SELINUX) ? 1 : 0) + \
   (IS_ENABLED(CONFIG_SECURITY_SMACK) ? 1 : 0) + \
   (IS_ENABLED(CONFIG_SECURITY_APPARMOR) ? 1 : 0) + \
   (IS_ENABLED(CONFIG_BPF_LSM) ? 1 : 0))

struct lsmblob {
   u32 secid[LSMBLOB_ENTRIES];
};

Currently, we don't increment LSMBLOB_ENTRIES by one to make an entry
for Landlock, so for the Ubuntu kernel, we can fit a maximum of two
entries, one for Apparmor and one for bpf.

If you try and configure three LSMs like so and reboot:

GRUB_CMDLINE_LINUX_DEFAULT="lsm=landlock,bpf,apparmor"

You will receive the following panic:

LSM: Security Framework initializing
landlock: Up and running.
LSM support for eBPF active
Kernel panic - not syncing: security_add_hooks Too many LSMs registered.
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.15.0-46-generic #49-Ubuntu
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
Call Trace:
 
 show_stack+0x52/0x5c
 dump_stack_lvl+0x4a/0x63
 dump_stack+0x10/0x16
 panic+0x149/0x321
 security_add_hooks+0x45/0x13a
 apparmor_init+0x189/0x1ef
 initialize_lsm+0x54/0x74
 ordered_lsm_init+0x379/0x392
 security_init+0x40/0x49
 start_kernel+0x466/0x4dc
 x86_64_start_reservations+0x24/0x2a
 x86_64_start_kernel+0xe4/0xef
 secondary_startup_64_no_verify+0xc2/0xcb
 
---[ end Kernel panic - not syncing: security_add_hooks Too many LSMs registered. ]---

There is a check added in security_add_hooks() that makes sure that you
cannot configure too many LSMs:

if (lsmid->slot == LSMBLOB_NEEDED) {
	if (lsm_slot >= LSMBLOB_ENTRIES)
		panic("%s Too many LSMs registered.\n", __func__);
	lsmid->slot = lsm_slot++;
	init_debug("%s assigned lsmblob slot %d\n", lsmid->lsm,
		   lsmid->slot);
}

A workaround is to enable no more than 2 LSMs until this is fixed.
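
As a sketch, the workaround amounts to trimming the lsm= list on the
kernel command line to at most two entries (the pair below is just one
example that stays within the limit) and regenerating the GRUB config:

GRUB_CMDLINE_LINUX_DEFAULT="lsm=bpf,apparmor"

$ sudo update-grub
$ sudo reboot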

[Fix]

If you read the following mailing list thread on linux-security-modules
from May 2021:

https://lore.kernel.org/selinux/202105141224.942DE93@keescook/T/

It is explained that Landlock does not provide any of the hooks that use
a struct lsmblob, such as secid_to_secctx, secctx_to_secid,
inode_getsecid, cred_getsecid, kernel_act_as, task_getsecid_subj,
task_getsecid_obj and ipc_getsecid.

I verified this with:

ubuntu-jammy$ grep -Rin "secid_to_secctx" security/landlock/
ubuntu-jammy$ grep -Rin "secctx_to_secid" security/landlock/
ubuntu-jammy$ grep -Rin "inode_getsecid" security/landlock/
ubuntu-jammy$ grep -Rin "cred_getsecid" security/landlock/
ubuntu-jammy$ grep -Rin "kernel_act_as" security/landlock/
ubuntu-jammy$ grep -Rin "task_getsecid_subj" security/landlock/
ubuntu-jammy$ grep -Rin "task_getsecid_obj" security/landlock/
ubuntu-jammy$ grep -Rin "ipc_getsecid" security/landlock/

The fix is to change Landlock from LSMBLOB_NEEDED to LSMBLOB_NOT_NEEDED.

Due to the "LSM: Module stacking for AppArmor" patchset being 25 patches
long, it was impractical to revert just the below patch and reapply it
with the fix, due to the large number of conflicts:

commit f17b27a2790e72198d2aaf45242453e5a9043049
Author: Casey Schaufler <casey@schaufler-ca.com>
Date:   Mon Aug 17 16:02:56 2020 -0700
Subject: UBUNTU: SAUCE: LSM: Create and manage the lsmblob data structure.
Link: 
https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/jammy/commit/?id=f17b27a2790e72198d2aaf45242453e5a9043049

So instead, I wrote up a fix that just changes the Landlock LSM slots to
follow the latest upstream development, from V37 of the patchset:

https://lore.kernel.org/selinux/20220628005611.13106-4-casey@schaufler-ca.com/

I refactored the landlock_lsmid struct to only be in one place, and to
be marked as extern from security/landlock/setup.h.

[Testcase]

Launch a Jammy or Kinetic VM.

1. Edit /etc/default/grub and append the following to 
GRUB_CMDLINE_

[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612245/+files/acpidump.txt

** Description changed:

  We run multiple machines that have the purpose of acting as a data
- repository for Veeam backup. Each machine has 2x 264G XFS volumes that
+ repository for Veeam backup. Each machine has 2x 256G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works for
  a week again.
  
  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process and
  XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
  
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
- --- 
- ProblemType: Bug
- AlsaDevices:
-  total 0
-  crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
-  crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
- AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
- ApportVersion: 2.20.11-0ubuntu27.24
- Architecture: amd64
- ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
- AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
- CasperMD5CheckResult: pass
- DistroRelease: Ubuntu 20.04
- InstallationDate: Installed on 2021-09-16 (346 days ago)
- InstallationMedia: Ubuntu-Server 20.04.3 LTS "Focal Fossa" - Release amd64 
(20210824)
- IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
- MachineType: Dell Inc. PowerEdge M630
- Package: linux (not installed)
- PciMultimedia:
-  
- ProcEnviron:
-  TERM=xterm
-  PATH=(custom, no user)
-  LANG=en_US.UTF-8
-  SHELL=/bin/bash
- ProcFB: 0 mgag200drmfb
- ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
- ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
- RelatedPackageVersions:
-  linux-restricted-modules-5.4.0-124-generic N/A
-  linux-backports-modules-5.4.0-124-generic  N/A
-  linux-firmware 1.187.33
- RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
- Tags:  focal uec-images
- Uname: Linux 5.4.0-124-generic x86_64
- UpgradeStatus: No upgrade log present (probably fresh install)
- UserGroups: N/A
- _MarkForUpload: True
- dmi.bios.date: 07/05/2022
- dmi.bios.vendor: Dell Inc.
- dmi.bios.version: 2.15.0
- dmi.board.name: 0R10KJ
- dmi.board.vendor: Dell Inc.
- dmi.board.version: A05
- dmi.chassis.type: 25
- dmi.chassis.vendor: Dell Inc.
- dmi.chassis.version: PowerEdge M1000e
- dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA05:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
- dmi.product.name: PowerEdge M630
- dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
- dmi.sys.vendor: Dell Inc.

[Kernel-packages] [Bug 1987997] WifiSyslog.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612272/+files/WifiSyslog.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines whose purpose is to act as a data repository
  for Veeam backup. Each machine has 2x 256G XFS volumes that make heavy
  use of reflinks. This works as intended, except for one issue: the XFS
  filesystems freeze once a week, a few minutes after midnight on Monday
  nights. Only a reboot gets the servers working again; then they run
  fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
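
  Given the strictly weekly timing, it is worth checking what the host
  schedules just after midnight on Mondays, and confirming that reflink
  is in fact enabled on the affected volumes. A minimal diagnostic
  sketch (the mount point /backup01 is a placeholder; substitute the
  real one):

    # Confirm the volume was formatted with reflink support (expect reflink=1)
    xfs_info /backup01 | grep -o 'reflink=[01]'

    # List anything scheduled around the Monday 00:00 freeze window
    systemctl list-timers --all
    ls /etc/cron.weekly/

    # When the freeze recurs, dump all blocked task stacks to the kernel log
    # (requires kernel.sysrq to allow it, e.g. sysctl -w kernel.sysrq=1)
    echo w > /proc/sysrq-trigger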

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
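
  One plausible reading of the two traces together: the xfs-conv
  completion worker sleeps in xfs_ilock() waiting for an inode rwsem,
  while the veeamagent fsync path, which takes that same rwsem during
  delayed-allocation conversion, is itself asleep in xfs_buf_lock()
  waiting for the AGF buffer. Identifying which task holds the AGF
  buffer needs stacks from every blocked task, and the hung-task
  reporter stops after a handful of reports by default. A small sysctl
  sketch to loosen that before the next Monday-night window (values are
  illustrative):

    # Keep reporting after the first few hung tasks (-1 = unlimited warnings)
    sysctl -w kernel.hung_task_warnings=-1

    # Optionally tighten the detection window from the default 120 seconds
    sysctl -w kernel.hung_task_timeout_secs=60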

[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612273/+files/acpidump.txt


[Kernel-packages] [Bug 1987997] UdevDb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612271/+files/UdevDb.txt


[Kernel-packages] [Bug 1987997] ProcModules.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612270/+files/ProcModules.txt


[Kernel-packages] [Bug 1987997] ProcCpuinfoMinimal.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612268/+files/ProcCpuinfoMinimal.txt


[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612269/+files/ProcInterrupts.txt


[Kernel-packages] [Bug 1987997] Lsusb-t.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-t.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612265/+files/Lsusb-t.txt


[Kernel-packages] [Bug 1987997] ProcCpuinfo.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfo.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612267/+files/ProcCpuinfo.txt


[Kernel-packages] [Bug 1987997] Lsusb-v.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-v.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612266/+files/Lsusb-v.txt


[Kernel-packages] [Bug 1987997] Lsusb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612264/+files/Lsusb.txt


[Kernel-packages] [Bug 1987997] Lspci-vt.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612263/+files/Lspci-vt.txt


[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612259/+files/acpidump.txt

** Description changed:

  We run multiple machines that have the purpose of acting as a data
  repository for Veeam backup. Each machine has 2x 256G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works for
  a week again.
  
  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
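  
  Given how predictable the timing is, correlating the freeze with
  whatever is scheduled shortly after midnight on Mondays might narrow
  it down; a rough sketch (the timestamps are illustrative):
  
    systemctl list-timers --all                       # timers firing around Mon 00:00
    grep -rn . /etc/crontab /etc/cron.d/ 2>/dev/null  # cron entries near midnight
    journalctl --since "2022-08-21 23:50" --until "2022-08-22 00:30"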
  
  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40
  
  
  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
  [562599.837310] RSP: 002b:7f4066ffc850 EFLAGS: 0293 ORIG_RAX: 004a
  [562599.837313] RAX: ffda RBX: 7f4066ffd650 RCX: 7f4092abd93b
  [562599.837315] RDX: 7f4066ffc800 RSI: 7f4066ffc800 RDI: 0
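
Reading the two traces together: the xfs-conv worker is stuck in
down_write() on the inode lock inside __xfs_setfilesize(), while
veeamagent, converting delayed allocations during fsync(), is stuck in
down() on an AGF buffer lock under xfs_read_agf(). Which task owns those
locks is the open question. Since backup agents commonly freeze
filesystems to take consistent snapshots, one cheap check before
rebooting might be (mountpoint illustrative; PID from the trace above):

  cat /proc/3674754/stack        # kernel stack of the blocked veeamagent, as root
  xfs_freeze -u /backup/repo01   # thaws the volume if something left it frozen
  dmesg | tail -50               # watch for further XFS messages after the thaw attempt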

[Kernel-packages] [Bug 1987997] CurrentDmesg.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CurrentDmesg.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612261/+files/CurrentDmesg.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1987997] Lspci.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612262/+files/Lspci.txt


[Kernel-packages] [Bug 1987997] CRDA.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CRDA.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612260/+files/CRDA.txt


[Kernel-packages] [Bug 1987997] ProcModules.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcModules.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612256/+files/ProcModules.txt


[Kernel-packages] [Bug 1987997] UdevDb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612257/+files/UdevDb.txt


[Kernel-packages] [Bug 1987997] WifiSyslog.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "WifiSyslog.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612258/+files/WifiSyslog.txt


[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612255/+files/ProcInterrupts.txt


[Kernel-packages] [Bug 1987997] ProcCpuinfoMinimal.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612254/+files/ProcCpuinfoMinimal.txt


[Kernel-packages] [Bug 1987997] ProcCpuinfo.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfo.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612253/+files/ProcCpuinfo.txt


[Kernel-packages] [Bug 1987997] Lsusb-t.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-t.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612251/+files/Lsusb-t.txt


[Kernel-packages] [Bug 1987997] Lsusb-v.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-v.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612252/+files/Lsusb-v.txt


[Kernel-packages] [Bug 1987997] Lspci.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612248/+files/Lspci.txt


[Kernel-packages] [Bug 1987997] Lspci-vt.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612249/+files/Lspci-vt.txt


[Kernel-packages] [Bug 1987997] Lsusb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612250/+files/Lsusb.txt


[Kernel-packages] [Bug 1987997] CurrentDmesg.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612247/+files/CurrentDmesg.txt


[Kernel-packages] [Bug 1987997] Re: xfs freeze every week on multiple machines

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Description changed:

  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. A reboot is the only way to get the servers working
  again; then they run fine for another week.
  
  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
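
  Reflink is a mkfs-time feature, so whether these volumes really have
  it enabled can be confirmed with xfs_info, and since the freeze
  always lands a few minutes after midnight on Mondays, the scheduled
  jobs on the box are worth enumerating. A minimal sketch, assuming a
  hypothetical mount point /backup01:

    # reflink=1 means the filesystem was created with reflink support
    xfs_info /backup01 | grep -o 'reflink=[01]'

    # enumerate jobs that could fire in the Monday-midnight window
    systemctl list-timers --all
    ls /etc/cron.weekly/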
  
  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40
  
  
  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
  [562599.837310] RSP: 002b:7f4066ffc850 EFLAGS: 0293 ORIG_RAX: 004a
  [562599.837313] RAX: ffda RBX: 7f4066ffd650 RCX: 7f4092abd93b
  [562599.837315] RDX: 7f4066ffc800 RSI: 7f4066ffc800 RDI: 0b06
  [562599.837316] RBP: 7f40440329e0 R08:  R09: 7f40580008d0
  [562599.837317] R10: 0
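
The "blocked for more than 120 seconds" reports come from the kernel's
hung-task watchdog, and the sysctl interface quoted in them can also be
turned up rather than silenced. A sketch of the standard knobs, plus
the SysRq trigger that dumps every blocked task to the kernel log while
the hang is still in progress:

  # warn after 60s instead of 120s, and never stop warning
  sysctl -w kernel.hung_task_timeout_secs=60
  sysctl -w kernel.hung_task_warnings=-1

  # dump all tasks in uninterruptible (D) state to dmesg
  echo 1 > /proc/sys/kernel/sysrq
  echo w > /proc/sysrq-trigger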

[Kernel-packages] [Bug 1987997] UdevDb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612238/+files/UdevDb.txt


[Kernel-packages] [Bug 1987997] WifiSyslog.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612242/+files/WifiSyslog.txt


[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612239/+files/ProcInterrupts.txt


[Kernel-packages] [Bug 1987997] UdevDb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612241/+files/UdevDb.txt


[Kernel-packages] [Bug 1987997] Lsusb-v.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-v.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612227/+files/Lsusb-v.txt


[Kernel-packages] [Bug 1987997] ProcModules.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612240/+files/ProcModules.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that have the purpose of acting as a data
  repository for Veeam backup. Each machine has 2x 256G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works
  for a week again.

  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process
  and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
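
  Reading the two traces together: veeamagent appears to hold the inode
  lock (taken in the writeback path via xfs_bmapi_convert_delalloc)
  while waiting for an AGF buffer lock during block allocation, and the
  xfs-conv worker is waiting in __xfs_setfilesize for an inode lock,
  plausibly the same one. A minimal way to generate similar
  reflink-plus-fsync load for reproduction attempts (a sketch using
  xfs_io; paths are placeholders) might be:

    # write a base file and fsync it
    xfs_io -f -c "pwrite 0 64m" -c fsync /mnt/veeam-repo/base
    # clone it via reflink, as the backup job would
    xfs_io -f -c "reflink /mnt/veeam-repo/base" -c fsync /mnt/veeam-repo/clone
    # dirty part of the clone to force COW allocation, then fsync again
    xfs_io -c "pwrite 0 8m" -c fsync /mnt/veeam-repo/clone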

[Kernel-packages] [Bug 1987997] ProcModules.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcModules.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612219/+files/ProcModules.txt

[Kernel-packages] [Bug 1987997] ProcCpuinfo.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfo.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612236/+files/ProcCpuinfo.txt

[Kernel-packages] [Bug 1987997] ProcCpuinfoMinimal.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612237/+files/ProcCpuinfoMinimal.txt

[Kernel-packages] [Bug 1987997] ProcCpuinfoMinimal.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612231/+files/ProcCpuinfoMinimal.txt

[Kernel-packages] [Bug 1987997] Lspci-vt.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci-vt.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612228/+files/Lspci-vt.txt

[Kernel-packages] [Bug 1987997] Lsusb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612230/+files/Lsusb.txt

[Kernel-packages] [Bug 1987997] Lspci.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612226/+files/Lspci.txt

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612233/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1987997] Lsusb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612223/+files/Lsusb.txt

[Kernel-packages] [Bug 1987997] ProcModules.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcModules.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612235/+files/ProcModules.txt

[Kernel-packages] [Bug 1987997] Lsusb-v.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-v.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612234/+files/Lsusb-v.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that have the purpose of acting as a data
  repository for Veeam backup. Each machine has 2x 256G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works
  for a week again.

  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process
  and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
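
The hung task watchdog that produced the traces above only scans every 120
seconds and, by default, stops reporting after a limited number of warnings.
A minimal sketch, assuming root on an affected host, of how to keep the
watchdog verbose and capture all blocked tasks while a volume is wedged (the
values are illustrative, not tuned recommendations):

  # Warn sooner and never silence the watchdog (defaults: 120 s, limited count)
  sysctl kernel.hung_task_timeout_secs=60
  sysctl kernel.hung_task_warnings=-1

  # While the freeze is in progress, dump every D-state task to the kernel
  # log (requires sysrq to be enabled, e.g. kernel.sysrq=1)
  echo w > /proc/sysrq-trigger
  dmesg --ctime | tail -n 200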

[Kernel-packages] [Bug 1987997] ProcCpuinfo.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfo.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612229/+files/ProcCpuinfo.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  volumes freeze once a week, a few minutes after midnight on Monday
  nights. Only a reboot gets the servers working again; then they run
  fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
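
Since the workload leans on reflinks, one low-risk check is to confirm the
feature is enabled on the affected volumes and to exercise the same
shared-extent copy path by hand. A sketch; /mnt/veeam and the file names are
placeholders, not paths from this report:

  # reflink=1 means the filesystem was created with reflink support
  xfs_info /mnt/veeam | grep -o 'reflink=[01]'

  # create a reflinked copy, the same mechanism Veeam's fast clone uses
  cp --reflink=always /mnt/veeam/backup.vbk /mnt/veeam/clone.vbk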

  

[Kernel-packages] [Bug 1987997] Lsusb-t.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-t.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612232/+files/Lsusb-t.txt

[Kernel-packages] [Bug 1987997] ProcCpuinfoMinimal.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612215/+files/ProcCpuinfoMinimal.txt

[Kernel-packages] [Bug 1987997] CurrentDmesg.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612224/+files/CurrentDmesg.txt

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612217/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1987997] CRDA.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CRDA.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612216/+files/CRDA.txt

[Kernel-packages] [Bug 1987997] Lspci.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612220/+files/Lspci.txt

[Kernel-packages] [Bug 1987997] CurrentDmesg.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612218/+files/CurrentDmesg.txt

[Kernel-packages] [Bug 1987997] ProcCpuinfo.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcCpuinfo.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612214/+files/ProcCpuinfo.txt

** Description changed:

  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  volumes freeze once a week, a few minutes after midnight on Monday
  nights. Only a reboot gets the servers working again; then they run
  fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
  
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 20.04
  InstallationDate: Installed on 2021-09-16 (346 days ago)
  InstallationMedia: Ubuntu-Server 20.04.3 LTS "Focal Fossa" - Release amd64 
(20210824)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  Package: linux (not installed)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  focal uec-images
  Uname: Linux 5.4.0-124-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A05
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA05:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
+ --- 
+ ProblemType: Bug
+ AlsaDevices:
+  total 0
+  crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
+  crw-rw 1 root audio 116, 33 Aug 2
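
The apport data above records the report at Mon Aug 29 00:34, matching the
"few minutes after midnight on Monday" pattern, and systemd's weekly calendar
timers fire at Monday 00:00. Cross-checking what ran in that window may
narrow the trigger; fstrim.timer is named here only as a common weekly unit,
not a confirmed culprit:

  # list all timers with their last and next trigger times
  systemctl list-timers --all

  # journal activity around the freeze window (date taken from this report)
  journalctl --since "2022-08-29 00:00" --until "2022-08-29 00:40"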

[Kernel-packages] [Bug 1987997] Lsusb-v.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-v.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612213/+files/Lsusb-v.txt

[Kernel-packages] [Bug 1987997] Lsusb-t.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-t.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612225/+files/Lsusb-t.txt

[Kernel-packages] [Bug 1987997] CRDA.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CRDA.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/561/+files/CRDA.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
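
  Given that the freeze lands a few minutes after midnight on Mondays,
  one sensible first check (a sketch, assuming a systemd-based install)
  is to list what is scheduled around that time:

    # Timers and their last/next trigger times
    systemctl list-timers --all
    # Classic cron jobs that could fire late Sunday / early Monday
    ls /etc/cron.weekly/
    crontab -l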

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
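
  When the freeze recurs, the state of all blocked tasks can be
  captured without rebooting (a sketch, assuming sysrq is available):

    # Enable sysrq, then dump blocked (uninterruptible) tasks to dmesg
    sysctl -w kernel.sysrq=1
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 200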

[Kernel-packages] [Bug 1987997] Lspci-vt.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612221/+files/Lspci-vt.txt

** Description changed:

  We run multiple machines that have the purpose of acting as a data
  repository for Veeam backup. Each machine has 2x 264G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works for
  a week again.
  
  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process and
  XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
  
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  DistroRelease: Ubuntu 20.04
  InstallationDate: Installed on 2021-09-16 (346 days ago)
  InstallationMedia: Ubuntu-Server 20.04.3 LTS "Focal Fossa" - Release amd64 
(20210824)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  Package: linux (not installed)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  Tags:  focal uec-images
  Uname: Linux 5.4.0-124-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: N/A
  _MarkForUpload: True
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A05
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA05:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
  --- 
  ProblemType: Bug
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer

[Kernel-packages] [Bug 1987997] CurrentDmesg.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612208/+files/CurrentDmesg.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] Lspci-vt.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci-vt.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612210/+files/Lspci-vt.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] Lsusb-t.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb-t.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612212/+files/Lsusb-t.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] Lsusb.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lsusb.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612211/+files/Lsusb.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] Lspci.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612209/+files/Lspci.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 
seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 
seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] Re: xfs freeze every week on multiple machines

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Description changed:

- xfs freeze
+ We run multiple machines that have the purpose of acting as a data
+ repository for Veeam backup. Each machine has 2x 264G XFS volumes that
+ use heavy reflinking. This works as intended, for one issue: our xfs
+ freeze once a week, a few minutes after midnight on Monday nights. We
+ can only do a reboot to get the servers working again. Then it works for
+ a week again.
+ 
+ We have been interacting with support from Veeam, but this looks like
+ some XFS kernel issue, or a race condition between the Veeam process and
+ XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
-  total 0
-  crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
-  crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
+  total 0
+  crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
+  crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
-  
+ 
  ProcEnviron:
-  TERM=xterm
-  PATH=(custom, no user)
-  LANG=en_US.UTF-8
-  SHELL=/bin/bash
+  TERM=xterm
+  PATH=(custom, no user)
+  LANG=en_US.UTF-8
+  SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
-  linux-restricted-modules-5.4.0-124-generic N/A
-  linux-backports-modules-5.4.0-124-generic  N/A
-  linux-firmware 1.187.33
+  linux-restricted-modules-5.4.0-124-generic N/A
+  linux-backports-modules-5.4.0-124-generic  N/A
+  linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.

** Tags added: apport-collected

** Description changed:

  We run multiple machines that have the purpose of acting as a data
  repository for Veeam backup. Each machine has 2x 264G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works for
  a week again.
  
  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process and
  XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
  
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33

[Kernel-packages] [Bug 1987997] Re: xfs freeze every week on multiple machines

2022-08-28 Thread Olafur Helgi Haraldsson
** Summary changed:

- xfs freeze
+ xfs freeze every week on multiple machines

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  New

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 264G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. Only a reboot gets the servers working again; then
  they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:

  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987997/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987997] [NEW] xfs freeze every week on multiple machines

2022-08-28 Thread Olafur Helgi Haraldsson
Public bug reported:

xfs freeze

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: linux-image-5.4.0-124-generic 5.4.0-124.140
ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
Uname: Linux 5.4.0-124-generic x86_64
AlsaDevices:
 total 0
 crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
 crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
ApportVersion: 2.20.11-0ubuntu27.24
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
CasperMD5CheckResult: pass
Date: Mon Aug 29 00:34:28 2022
InstallationDate: Installed on 2021-02-10 (564 days ago)
InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
MachineType: Dell Inc. PowerEdge M630
PciMultimedia:
 
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcFB: 0 mgag200drmfb
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
RelatedPackageVersions:
 linux-restricted-modules-5.4.0-124-generic N/A
 linux-backports-modules-5.4.0-124-generic  N/A
 linux-firmware 1.187.33
RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 07/05/2022
dmi.bios.vendor: Dell Inc.
dmi.bios.version: 2.15.0
dmi.board.name: 0R10KJ
dmi.board.vendor: Dell Inc.
dmi.board.version: A02
dmi.chassis.type: 25
dmi.chassis.vendor: Dell Inc.
dmi.chassis.version: PowerEdge M1000e
dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
dmi.product.name: PowerEdge M630
dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
dmi.sys.vendor: Dell Inc.

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal uec-images

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  New

Bug description:
  xfs freeze

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 
(20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987997/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987829] Re: LG Gram 12gen high CPU use when USB-C/TB is in use

2022-08-28 Thread rustyx
Here are some ACPI event traces:

echo 0x0f > /sys/module/acpi/parameters/debug_layer
echo 0x0f > /sys/module/acpi/parameters/debug_level
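
The same verbosity can also be requested from boot; acpi.debug_layer and
acpi.debug_level are the kernel parameters matching the sysfs knobs
above (a sketch: edit /etc/default/grub, run update-grub, reboot):

  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi.debug_layer=0x0f acpi.debug_level=0x0f"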

[ 2432.074662]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2432.126632] ACPI Debug:  0x00E0
[ 2432.171756] ACPI Debug:  "UBTC._DSM(2)"
[ 2432.242916] ACPI Debug:  "_Q79"
[ 2432.246137] ACPI Debug:  "_Q79"
[ 2432.247420] ACPI Debug:  "UCEV"
[ 2432.287432] ACPI Debug:  "UBTC._DSM(2)"
[ 2432.394856]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2432.397313] ACPI Debug:  "UCEV"
[ 2432.481271] ACPI Debug:  "UBTC._DSM(2)"
[ 2432.582744]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2432.664302] ACPI Debug:  "UBTC._DSM(2)"
[ 2432.752154] ACPI Debug:  "UBTC._DSM(2)"
[ 2432.840088] ACPI Debug:  "UBTC._DSM(1)"
[ 2433.071576] ACPI Debug:  0x00E0
[ 2433.110414] ACPI Debug:  "UBTC._DSM(1)"
[ 2433.118887] ACPI Debug:  "_Q79"
[ 2433.119819] ACPI Debug:  "UCEV"
[ 2433.198510] ACPI Debug:  "_Q79"
[ 2433.371280] ACPI Debug:  0x00E0
[ 2433.474478]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2433.476980] ACPI Debug:  "UCEV"
[ 2433.502853] ACPI Debug:  "UBTC._DSM(2)"
[ 2433.509903] ACPI Debug:  "_Q79"
[ 2433.594694] ACPI Debug:  "_Q79"
[ 2433.626579]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2433.629063] ACPI Debug:  "UCEV"
[ 2433.697112] ACPI Debug:  "UBTC._DSM(2)"
[ 2433.802857]evmisc-0132 ev_queue_notify_reques: Dispatching Notify on 
[UBTC] (Device) Value 0x80 (Status Change) Node c4abd855
[ 2433.805345] ACPI Debug:  "UCEV"
[ 2433.893565] ACPI Debug:  "UBTC._DSM(2)"


** Attachment added: "lg-gram-acpi-debug.txt"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987829/+attachment/5612174/+files/lg-gram-acpi-debug.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987829

Title:
  LG Gram 12gen high CPU use when USB-C/TB is in use

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  LG Gram laptop 17Z90Q with a Core i7-1260P CPU.

  Whenever an external monitor is connected to USB-C/Thunderbolt 4,
  average load goes above 3.0 and the machine is getting very hot.

  Output from top -H shows a lot of kworker CPU usage:

  top - 11:45:06 up 33 min,  2 users,  load average: 3,30, 3,08, 2,79
  Threads: 1442 total,   2 running, 1440 sleeping,   0 stopped,   0 zombie
  %Cpu(s):  0,1 us,  3,7 sy,  0,0 ni, 96,1 id,  0,0 wa,  0,0 hi,  0,1 si,  0,0 
st
  MiB Mem :  15684,6 total,   8510,2 free,   2580,8 used,   4593,6 buff/cache
  MiB Swap:   3815,0 total,   3815,0 free,  0,0 used.  11326,9 avail Mem 

  PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+ WCHAN  
COMMAND
 7766 root  20   0   0  0  0 R  19,8   0,0   0:56.05 
worker_th+ kworker/0:2-events
  196 root  20   0   0  0  0 D  15,8   0,0   1:18.12 
ec_guard   kworker/u32:2+USBC000:00-con0
10237 root  20   0   0  0  0 I  12,9   0,0   0:26.44 
worker_th+ kworker/0:0-events
 1027 root  20   0   0  0  0 I   6,6   0,0   0:43.30 
worker_th+ kworker/1:3-events
10971 root  20   0   0  0  0 I   4,0   0,0   0:00.20 
worker_th+ kworker/15:0-events
  175 root  20   0   0  0  0 I   2,3   0,0   0:03.24 
worker_th+ kworker/11:1-events
 2410 root  20   0   0  0  0 I   1,7   0,0   0:05.49 
worker_th+ kworker/9:3-events

  Perf shows a lot of time spent inside
  handle_irq_event/acpi_ev_gpe_detect/acpi_hw_gpe_read.
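
  For reference, a profile like the one above can be gathered with the
  following sketch (assuming perf from linux-tools is installed):

    sudo perf record -a -g -- sleep 10
    sudo perf report --stdio | head -n 40
    # GPE activity can be cross-checked in /sys/firmware/acpi/interrupts/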

  Additionally, kernel log is getting spammed with these lines every 4
  seconds (but also without any USB-C device attached):

  [  223.514304] ACPI Error: No handler for Region [XIN1] (f2ad4f1f) 
[UserDefinedRegion] (20210730/evregion-130)
  [  223.514323] ACPI Error: Region UserDefinedRegion (ID=143) has no handler 
(20210730/exfldio-261)

  [  223.514337] 
 Initialized Local Variables for Method [_TMP]:
  [  223.514339]   Local0: 21495082Integer 
0034

  [  223.514349] No Arguments are initialized for method [_TMP]

  [  223.514354] ACPI Error: Aborting method
  \_SB.PC00.LPCB.LGEC.SEN2._TMP due to previous error (AE_NOT_EXIST)
  (20210730/psparse-529)
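
  To see what the aborting method actually does, the firmware tables
  can be dumped and decompiled (a sketch, assuming the acpica-tools
  package is installed):

    sudo acpidump -o acpi.dat
    acpixtract -a acpi.dat
    iasl -d dsdt.dat   # yields dsdt.dsl; search for SEN2._TMP and XIN1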

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.15.0-46-generic 5.15.0-46.49
  ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
  Uname: Linux 5.15.0-46-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:

[Kernel-packages] [Bug 1987829] Re: LG Gram 12gen high CPU use when USB-C/TB is in use

2022-08-28 Thread rustyx
** Attachment added: "LG-Gram-17Z90Q-ACPI-dump.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987829/+attachment/5612173/+files/LG-Gram-17Z90Q-ACPI-dump.tar.gz

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987829

Title:
  LG Gram 12gen high CPU use when USB-C/TB is in use

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  LG Gram laptop 17Z90Q with a Core i7-1260P CPU.

  Whenever an external monitor is connected to USB-C/Thunderbolt 4,
  average load goes above 3.0 and the machine is getting very hot.

  Output from top -H shows a lot of kworker CPU usage:

  top - 11:45:06 up 33 min,  2 users,  load average: 3,30, 3,08, 2,79
  Threads: 1442 total,   2 running, 1440 sleeping,   0 stopped,   0 zombie
  %Cpu(s):  0,1 us,  3,7 sy,  0,0 ni, 96,1 id,  0,0 wa,  0,0 hi,  0,1 si,  0,0 
st
  MiB Mem :  15684,6 total,   8510,2 free,   2580,8 used,   4593,6 buff/cache
  MiB Swap:   3815,0 total,   3815,0 free,  0,0 used.  11326,9 avail Mem 

  PID USER  PR  NIVIRTRESSHR S  %CPU  %MEM TIME+ WCHAN  
COMMAND
 7766 root  20   0   0  0  0 R  19,8   0,0   0:56.05 
worker_th+ kworker/0:2-events
  196 root  20   0   0  0  0 D  15,8   0,0   1:18.12 
ec_guard   kworker/u32:2+USBC000:00-con0
10237 root  20   0   0  0  0 I  12,9   0,0   0:26.44 
worker_th+ kworker/0:0-events
 1027 root  20   0   0  0  0 I   6,6   0,0   0:43.30 
worker_th+ kworker/1:3-events
10971 root  20   0   0  0  0 I   4,0   0,0   0:00.20 
worker_th+ kworker/15:0-events
  175 root  20   0   0  0  0 I   2,3   0,0   0:03.24 
worker_th+ kworker/11:1-events
 2410 root  20   0   0  0  0 I   1,7   0,0   0:05.49 
worker_th+ kworker/9:3-events

  Perf shows a lot of time spent inside
  handle_irq_event/acpi_ev_gpe_detect/acpi_hw_gpe_read.

  Additionally, kernel log is getting spammed with these lines every 4
  seconds (but also without any USB-C device attached):

  [  223.514304] ACPI Error: No handler for Region [XIN1] (f2ad4f1f) 
[UserDefinedRegion] (20210730/evregion-130)
  [  223.514323] ACPI Error: Region UserDefinedRegion (ID=143) has no handler 
(20210730/exfldio-261)

  [  223.514337] 
 Initialized Local Variables for Method [_TMP]:
  [  223.514339]   Local0: 21495082Integer 
0034

  [  223.514349] No Arguments are initialized for method [_TMP]

  [  223.514354] ACPI Error: Aborting method
  \_SB.PC00.LPCB.LGEC.SEN2._TMP due to previous error (AE_NOT_EXIST)
  (20210730/psparse-529)

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.15.0-46-generic 5.15.0-46.49
  ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
  Uname: Linux 5.15.0-46-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  me 1678 F pulseaudio
   /dev/snd/controlC1:  me 1678 F pulseaudio
  CRDA: N/A
  CasperMD5CheckResult: pass
  Date: Fri Aug 26 11:57:05 2022
  InstallationDate: Installed on 2022-08-25 (1 days ago)
  InstallationMedia: Ubuntu 22.04.1 LTS "Jammy Jellyfish" - Release amd64 
(20220809.1)
  MachineType: LG Electronics 17Z90Q-G.AA78N
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic 
root=UUID=e2f96916-a67c-432e-b687-730071271216 ro quiet splash vt.handoff=7
  PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-46-generic N/A
   linux-backports-modules-5.15.0-46-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.4
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/06/2022
  dmi.bios.release: 0.1
  dmi.bios.vendor: Phoenix Technologies Ltd.
  dmi.bios.version: A1ZG0380 X64
  dmi.board.asset.tag: Base Board Asset Tag
  dmi.board.name: 17Z90Q
  dmi.board.vendor: LG Electronics
  dmi.board.version: FAB1
  dmi.chassis.asset.tag: Asset Tag
  dmi.chassis.type: 10
  dmi.chassis.vendor: LG Electronics
  dmi.chassis.version: 0.1
  dmi.ec.firmware.release: 33.0
  dmi.modalias: 
dmi:bvnPhoenixTechnologiesLtd.:bvrA1ZG0380X64:bd07/06/2022:br0.1:efr33.0:svnLGElectronics:pn17Z90Q-G.AA78N:pvr0.1:rvnLGElectronics:rn17Z90Q:rvrFAB1:cvnLGElectronics:ct10:cvr0.1:skuEVO:
  dmi.product.family: LG gram PC
  dmi.product.name: 17Z90Q-G.AA78N
  dmi.product.sku: EVO
  dmi.product.version: 0.1
  dmi.sys.vendor: LG Electronics

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987829/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

[Kernel-packages] [Bug 1946303] Re: No video after wake from S3 due to Nvidia driver crash

2022-08-28 Thread Gerhard Radatz
This also affects me after upgrading from 20.04 to 22.04.
Suspend/resume worked fine on 20.04 with the nvidia-470 driver.
Now it fails with every driver I tried (470, 510, 515) when running an Xorg session.
The suggested workaround also does not work for me.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-470 in Ubuntu.
https://bugs.launchpad.net/bugs/1946303

Title:
  No video after wake from S3 due to Nvidia driver crash

Status in nvidia-graphics-drivers-470 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-510 package in Ubuntu:
  Confirmed

Bug description:
  Since upgrading to Ubuntu 21.10, my computer sometimes fails to properly wake 
from suspend. It does start running again, but there is no video output. I'm 
attaching text for two crashes from kernel log output. First is:
  /var/lib/dkms/nvidia/470.63.01/build/nvidia/nv.c:3967 
nv_restore_user_channels+0xce/0xe0 [nvidia]
  Second is:
  /var/lib/dkms/nvidia/470.63.01/build/nvidia/nv.c:4162 
nv_set_system_power_state+0x2c8/0x3d0 [nvidia]

  Apparently I'm not the only one having this problem with 470 drivers.
  https://forums.linuxmint.com/viewtopic.php?t=354445
  
https://forums.developer.nvidia.com/t/fixed-suspend-resume-issues-with-the-driver-version-470/187150

  Driver 470 uses the new suspend mechanism via /usr/lib/systemd/system-
  sleep/nvidia. But I was using that mechanism with driver 460 in Ubuntu
  21.04 and sleep was reliable then. Right now I'm going back to driver
  460.
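
  As a diagnostic sketch, the driver's suspend integration can be
  inspected with the commands below; the NVreg option is my assumption
  about what the new mechanism relies on, not a confirmed fix:

    # check the driver's suspend/resume units
    systemctl status nvidia-suspend.service nvidia-resume.service
    # ask the driver to preserve video memory across suspend
    echo 'options nvidia NVreg_PreserveVideoMemoryAllocations=1' | \
      sudo tee /etc/modprobe.d/nvidia-power.conf
    sudo update-initramfs -u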

  ProblemType: Bug
  DistroRelease: Ubuntu 21.10
  Package: nvidia-driver-470 470.63.01-0ubuntu4
  ProcVersionSignature: Ubuntu 5.13.0-16.16-generic 5.13.13
  Uname: Linux 5.13.0-16-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.11-0ubuntu70
  Architecture: amd64
  CasperMD5CheckResult: unknown
  CurrentDesktop: KDE
  Date: Wed Oct  6 23:24:02 2021
  SourcePackage: nvidia-graphics-drivers-470
  UpgradeStatus: Upgraded to impish on 2021-10-02 (4 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-470/+bug/1946303/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1978359] Re: boot kernel errors

2022-08-28 Thread Tom Reynolds
This is probably a duplicate of bug 1981783.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1978359

Title:
  boot kernel errors

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: fbcon: Taking over console
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR01._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR02._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR03._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR04._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR05._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR06._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  Jun 10 23:34:40 luke kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  Jun 10 23:34:40 luke kernel: No Local Variables are initialized for Method [_CPC]
  Jun 10 23:34:40 luke kernel: No Arguments are initialized for method [_CPC]
  Jun 10 23:34:40 luke kernel: ACPI Error: Aborting method \_PR.PR07._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)

  This happens while booting a newer kernel than the gm kernel (-25).
  The bug is reported, and a fix has already been posted, at
  https://github.com/intel/linux-intel-lts/issues/22
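
  Until a fixed kernel lands, the errors can at least be confirmed and
  counted from the current boot with something like:

    # count the aborted _CPC method evaluations in this boot's kernel log
    journalctl -kb | grep -c 'Aborting method .*_CPC'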

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.15.0-37-generic 5.15.0-37.39
  ProcVersionSignature: Ubuntu 5.15.0-37.39-generic 5.15.35
  Uname: Linux 5.15.0-37-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl icp
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:
   USER  PID  ACCESS  COMMAND
   /dev/snd/controlC0:  alorenz  1772 F pulseaudio
   /dev/snd/pcmC0D0p:   alorenz  1772 F...m pulseaudio
  CRDA: N/A
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Jun 10 23:45:45 2022
  InstallationDate: Installed on 2022-06-06 (4 days ago)
  InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 
(20220419)
  Lsusb:
   Bus 002 Device 001: ID 1

[Kernel-packages] [Bug 1987987] [NEW] thinkbook 14 Gen4+. screen flickering when logging in.

2022-08-28 Thread 李清伟
Public bug reported:

linux-headers-5.17.0-1015-oem
linux-image-5.17.0-1015-oem
linux-modules-5.17.0-1015-oem
linux-oem-22.04

After I ran `apt install linux-oem-22.04`, my kernel was updated to
version 5.17.0. After rebooting, I get screen flickering when I move my
mouse, but when I boot the 5.15.0-46-generic kernel, no problems occur.

So, is there a problem with kernel 5.17.0 on my new laptop, a ThinkBook
14 Gen4+?
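
As a workaround sketch until this is understood (package names are taken
from the list above; whether removing the OEM kernel is appropriate here
is my assumption):

  # boot the working 5.15 kernel via "Advanced options for Ubuntu" in
  # GRUB, then remove the OEM 5.17 packages so they stop being default
  sudo apt remove linux-oem-22.04 linux-image-5.17.0-1015-oem \
    linux-headers-5.17.0-1015-oem linux-modules-5.17.0-1015-oem
  sudo update-grub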

information about my laptop

uname -a
Linux lqw-ThinkBook-14-G4-IAP 5.15.0-46-generic #49-Ubuntu SMP Thu Aug 4 
18:03:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -rd
Description:Ubuntu 22.04.1 LTS
Release:22.04

apt-cache policy linux-modules-5.17.0-1015-oem
linux-modules-5.17.0-1015-oem:
  Installed: (none)
  Candidate: 5.17.0-1015.16
  Version table:
     5.17.0-1015.16 500
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-security/main amd64 Packages
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-updates/main amd64 Packages

apt-cache policy linux-oem-5.17-headers-5.17.0-1015
linux-oem-5.17-headers-5.17.0-1015:
  Installed: (none)
  Candidate: 5.17.0-1015.16
  Version table:
     5.17.0-1015.16 500
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-security/main amd64 Packages
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-security/main i386 Packages
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-updates/main amd64 Packages
        500 https://mirrors.ustc.edu.cn/ubuntu jammy-updates/main i386 Packages

Expected to happen:

no screen flickering when logging in and moving the mouse.

What happened instead:

screen flickering when logging in and moving the mouse.

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: linux-headers-5.17.0-1015-oem (not installed)
ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
Uname: Linux 5.15.0-46-generic x86_64
ApportVersion: 2.20.11-0ubuntu82.1
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Mon Aug 29 00:26:02 2022
InstallationDate: Installed on 2022-08-28 (0 days ago)
InstallationMedia: Ubuntu 22.04.1 LTS "Jammy Jellyfish" - Release amd64 
(20220809.1)
RebootRequiredPkgs: Error: path contained symlinks.
SourcePackage: linux-oem-5.17
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux-oem-5.17 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug jammy


[Kernel-packages] [Bug 1912880] Re: Touchpad (MSFT) not detected on Lenovo Ideapad Flex 5 AMD

2022-08-28 Thread S. Jared Henley
This has been sitting here dormant for a while. I have a Lenovo 14ARE05
and my trackpad isn't working. Does anyone have a solution for this? I
tried the suspend/reboot method and it's not working for me.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-5.8 in Ubuntu.
https://bugs.launchpad.net/bugs/1912880

Title:
  Touchpad (MSFT) not detected on Lenovo Ideapad Flex 5 AMD

Status in linux-signed-hwe-5.8 package in Ubuntu:
  Confirmed

Bug description:
  The Laptop is an Ideapad Flex 5 14ARE05
  Model 81X2

  My touchpad is not recognized at all in xinput or libinput list-devices.
  There are, however, some lines in dmesg about MSFT, Mouse and PS/2 which
  I think refer to the touchpad.

  [0.004374] ACPI: SSDT 0xC968F000 007216 (v02 LENOVO AmdTable 0002 MSFT 0400)
  [1.009575] i2c_hid i2c-MSFT0001:00: supply vdd not found, using dummy regulator
  [1.009599] i2c_hid i2c-MSFT0001:00: supply vddl not found, using dummy regulator
  [1.010058] i2c_hid i2c-MSFT0001:00: hid_descr_cmd failed

  [0.910718] hid-generic 0018:056A:5214.0001: input,hidraw0: I2C HID v1.00 Mouse [WACF2200:00 056A:5214] on i2c-WACF2200:00

  [0.602905] i8042: PNP: PS/2 Controller [PNP0303:KBD0] at 0x60,0x64 irq 1
  [0.602905] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
  [0.604083] mousedev: PS/2 mouse device common for all mice

  The touchpad is an MSFT0001:00 device.

  The spec sheet for the laptop mentions:

  "Buttonless Mylar® surface multi-touch touchpad"

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.8.0-40-generic 5.8.0-40.45~20.04.1
  ProcVersionSignature: Ubuntu 5.8.0-40.45~20.04.1-generic 5.8.18
  Uname: Linux 5.8.0-40-generic x86_64
  ApportVersion: 2.20.11-0ubuntu27.14
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  Date: Sat Jan 23 09:19:23 2021
  InstallationDate: Installed on 2021-01-06 (16 days ago)
  InstallationMedia: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
  SourcePackage: linux-signed-hwe-5.8
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe-5.8/+bug/1912880/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1983180] Re: ACPI Error _CPC not found

2022-08-28 Thread Tom Reynolds
** Bug watch added: Linux Kernel Bug Tracker #213023
   https://bugzilla.kernel.org/show_bug.cgi?id=213023

** Also affects: linux via
   https://bugzilla.kernel.org/show_bug.cgi?id=213023
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1983180

Title:
  ACPI Error _CPC not found

Status in Linux:
  Unknown
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  just recently (I guess since the last update) I get a few ACPI error
  messages during start up. Those are also visible with dmesg:

  ...
  [0.713907] ACPI: AC: AC Adapter [AC] (on-line)
  [0.713978] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input0
  [0.714011] ACPI: button: Sleep Button [SLPB]
  [0.714040] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input1
  [0.714061] ACPI: button: Lid Switch [LID]
  [0.714087] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
  [0.714105] ACPI: button: Power Button [PWRF]
  [0.714187] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.714199] No Local Variables are initialized for Method [_CPC]
  [0.714201] No Arguments are initialized for method [_CPC]
  [0.714203] ACPI Error: Aborting method \_SB.PR01._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.714395] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.714404] No Local Variables are initialized for Method [_CPC]
  [0.714405] No Arguments are initialized for method [_CPC]
  [0.714407] ACPI Error: Aborting method \_SB.PR02._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.714480] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.714488] No Local Variables are initialized for Method [_CPC]
  [0.714490] No Arguments are initialized for method [_CPC]
  [0.714492] ACPI Error: Aborting method \_SB.PR03._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.714640] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.714651] No Local Variables are initialized for Method [_CPC]
  [0.714653] No Arguments are initialized for method [_CPC]
  [0.714655] ACPI Error: Aborting method \_SB.PR04._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.714940] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.714952] No Local Variables are initialized for Method [_CPC]
  [0.714953] No Arguments are initialized for method [_CPC]
  [0.714955] ACPI Error: Aborting method \_SB.PR05._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.715106] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.715118] No Local Variables are initialized for Method [_CPC]
  [0.715119] No Arguments are initialized for method [_CPC]
  [0.715121] ACPI Error: Aborting method \_SB.PR06._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.715309] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.715321] No Local Variables are initialized for Method [_CPC]
  [0.715322] No Arguments are initialized for method [_CPC]
  [0.715324] ACPI Error: Aborting method \_SB.PR07._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.715611] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.715623] No Local Variables are initialized for Method [_CPC]
  [0.715624] No Arguments are initialized for method [_CPC]
  [0.715626] ACPI Error: Aborting method \_SB.PR08._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.716055] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.716067] No Local Variables are initialized for Method [_CPC]
  [0.716069] No Arguments are initialized for method [_CPC]
  [0.716071] ACPI Error: Aborting method \_SB.PR09._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.716360] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (20210730/psargs-330)
  [0.716371] No Local Variables are initialized for Method [_CPC]
  [0.716373] No Arguments are initialized for method [_CPC]
  [0.716375] ACPI Error: Aborting method \_SB.PR10._CPC due to previous error (AE_NOT_FOUND) (20210730/psparse-529)
  [0.716669] ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PR00._CPC], AE_NOT_FOUND (2021073

[Kernel-packages] [Bug 1987971] Status changed to Confirmed

2022-08-28 Thread Ubuntu Kernel Bot
This change was made by a bot.

** Changed in: linux (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987971

Title:
  package linux-image-5.4.0-117-generic 5.4.0-117.132 failed to
  install/upgrade: package is in a very bad inconsistent state; you
  should  reinstall it before attempting a removal

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Encountered as part of software upgrade.

  ProblemType: Package
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-117-generic 5.4.0-117.132
  ProcVersionSignature: Ubuntu 5.4.0-125.141-generic 5.4.195
  Uname: Linux 5.4.0-125-generic x86_64
  ApportVersion: 2.20.11-0ubuntu27.24
  AptOrdering:
   linux-image-5.4.0-117-generic:amd64: Remove
   linux-modules-5.4.0-117-generic:amd64: Remove
   language-pack-gnome-en:amd64: Install
   language-pack-gnome-en-base:amd64: Install
   NULL: ConfigurePending
  Architecture: amd64
  AudioDevicesInUse:
   USER  PID  ACCESS  COMMAND
   /dev/snd/controlC0:  abhinandpusuluri   2351 F pulseaudio
  CasperMD5CheckResult: skip
  Date: Sun Aug 28 12:44:17 2022
  DpkgTerminalLog:
   dpkg: error processing package linux-image-5.4.0-117-generic (--remove):
package is in a very bad inconsistent state; you should
reinstall it before attempting a removal
   dpkg: too many errors, stopping
  ErrorMessage: package is in a very bad inconsistent state; you should  
reinstall it before attempting a removal
  HibernationDevice: RESUME=UUID=cce28b8d-5190-450f-85d5-5d99369d8c21
  InstallationDate: Installed on 2018-09-10 (1447 days ago)
  InstallationMedia: Ubuntu 16.04.3 LTS "Xenial Xerus" - Release amd64 
(20170801)
  MachineType: LENOVO 80RU
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-125-generic 
root=UUID=a1e2d809-1b3f-4885-8825-d1e0dd8562f6 ro quiet splash vt.handoff=7
  PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
  Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 
3.8.2-0ubuntu2
  PythonDetails: /usr/bin/python2.7, Python 2.7.18, python-is-python2, 2.7.17-4
  RelatedPackageVersions: grub-pc N/A
  SourcePackage: linux
  Title: package linux-image-5.4.0-117-generic 5.4.0-117.132 failed to 
install/upgrade: package is in a very bad inconsistent state; you should  
reinstall it before attempting a removal
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 03/09/2018
  dmi.bios.vendor: LENOVO
  dmi.bios.version: E5CN62WW
  dmi.board.asset.tag: No Asset Tag
  dmi.board.name: Lenovo ideapad 700-15ISK
  dmi.board.vendor: LENOVO
  dmi.board.version: SDK0J40709 WIN
  dmi.chassis.asset.tag: No Asset Tag
  dmi.chassis.type: 10
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: Lenovo ideapad 700-15ISK
  dmi.modalias: 
dmi:bvnLENOVO:bvrE5CN62WW:bd03/09/2018:svnLENOVO:pn80RU:pvrLenovoideapad700-15ISK:rvnLENOVO:rnLenovoideapad700-15ISK:rvrSDK0J40709WIN:cvnLENOVO:ct10:cvrLenovoideapad700-15ISK:
  dmi.product.family: IDEAPAD
  dmi.product.name: 80RU
  dmi.product.sku: LENOVO_MT_80RU_BU_idea_FM_Lenovo ideapad 700-15ISK
  dmi.product.version: Lenovo ideapad 700-15ISK
  dmi.sys.vendor: LENOVO

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987971/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987971] [NEW] package linux-image-5.4.0-117-generic 5.4.0-117.132 failed to install/upgrade: package is in a very bad inconsistent state; you should reinstall it before attempting a removal

2022-08-28 Thread Abhinand Pusuluri
Public bug reported:

Encountered as part of software upgrade.

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: linux-image-5.4.0-117-generic 5.4.0-117.132
ProcVersionSignature: Ubuntu 5.4.0-125.141-generic 5.4.195
Uname: Linux 5.4.0-125-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.24
AptOrdering:
 linux-image-5.4.0-117-generic:amd64: Remove
 linux-modules-5.4.0-117-generic:amd64: Remove
 language-pack-gnome-en:amd64: Install
 language-pack-gnome-en-base:amd64: Install
 NULL: ConfigurePending
Architecture: amd64
AudioDevicesInUse:
 USER  PID  ACCESS  COMMAND
 /dev/snd/controlC0:  abhinandpusuluri   2351 F pulseaudio
CasperMD5CheckResult: skip
Date: Sun Aug 28 12:44:17 2022
DpkgTerminalLog:
 dpkg: error processing package linux-image-5.4.0-117-generic (--remove):
  package is in a very bad inconsistent state; you should
  reinstall it before attempting a removal
 dpkg: too many errors, stopping
ErrorMessage: package is in a very bad inconsistent state; you should  
reinstall it before attempting a removal
HibernationDevice: RESUME=UUID=cce28b8d-5190-450f-85d5-5d99369d8c21
InstallationDate: Installed on 2018-09-10 (1447 days ago)
InstallationMedia: Ubuntu 16.04.3 LTS "Xenial Xerus" - Release amd64 (20170801)
MachineType: LENOVO 80RU
ProcFB: 0 i915drmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-125-generic 
root=UUID=a1e2d809-1b3f-4885-8825-d1e0dd8562f6 ro quiet splash vt.handoff=7
PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 
3.8.2-0ubuntu2
PythonDetails: /usr/bin/python2.7, Python 2.7.18, python-is-python2, 2.7.17-4
RelatedPackageVersions: grub-pc N/A
SourcePackage: linux
Title: package linux-image-5.4.0-117-generic 5.4.0-117.132 failed to 
install/upgrade: package is in a very bad inconsistent state; you should  
reinstall it before attempting a removal
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 03/09/2018
dmi.bios.vendor: LENOVO
dmi.bios.version: E5CN62WW
dmi.board.asset.tag: No Asset Tag
dmi.board.name: Lenovo ideapad 700-15ISK
dmi.board.vendor: LENOVO
dmi.board.version: SDK0J40709 WIN
dmi.chassis.asset.tag: No Asset Tag
dmi.chassis.type: 10
dmi.chassis.vendor: LENOVO
dmi.chassis.version: Lenovo ideapad 700-15ISK
dmi.modalias: 
dmi:bvnLENOVO:bvrE5CN62WW:bd03/09/2018:svnLENOVO:pn80RU:pvrLenovoideapad700-15ISK:rvnLENOVO:rnLenovoideapad700-15ISK:rvrSDK0J40709WIN:cvnLENOVO:ct10:cvrLenovoideapad700-15ISK:
dmi.product.family: IDEAPAD
dmi.product.name: 80RU
dmi.product.sku: LENOVO_MT_80RU_BU_idea_FM_Lenovo ideapad 700-15ISK
dmi.product.version: Lenovo ideapad 700-15ISK
dmi.sys.vendor: LENOVO

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: Confirmed


** Tags: amd64 apport-package focal

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987971

Title:
  package linux-image-5.4.0-117-generic 5.4.0-117.132 failed to
  install/upgrade: package is in a very bad inconsistent state; you
  should  reinstall it before attempting a removal

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Encountered as part of software upgrade.

  ProblemType: Package
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-117-generic 5.4.0-117.132
  ProcVersionSignature: Ubuntu 5.4.0-125.141-generic 5.4.195
  Uname: Linux 5.4.0-125-generic x86_64
  ApportVersion: 2.20.11-0ubuntu27.24
  AptOrdering:
   linux-image-5.4.0-117-generic:amd64: Remove
   linux-modules-5.4.0-117-generic:amd64: Remove
   language-pack-gnome-en:amd64: Install
   language-pack-gnome-en-base:amd64: Install
   NULL: ConfigurePending
  Architecture: amd64
  AudioDevicesInUse:
   USER  PID  ACCESS  COMMAND
   /dev/snd/controlC0:  abhinandpusuluri   2351 F pulseaudio
  CasperMD5CheckResult: skip
  Date: Sun Aug 28 12:44:17 2022
  DpkgTerminalLog:
   dpkg: error processing package linux-image-5.4.0-117-generic (--remove):
package is in a very bad inconsistent state; you should
reinstall it before attempting a removal
   dpkg: too many errors, stopping
  ErrorMessage: package is in a very bad inconsistent state; you should  
reinstall it before attempting a removal
  HibernationDevice: RESUME=UUID=cce28b8d-5190-450f-85d5-5d99369d8c21
  InstallationDate: Installed on 2018-09-10 (1447 days ago)
  InstallationMedia: Ubuntu 16.04.3 LTS "Xenial Xerus" - Release amd64 
(20170801)
  MachineType: LENOVO 80RU
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-125-generic 
root=UUID=a1e2d809-1b3f-4885-8825-d1e0dd8562f6 ro quiet splash vt.handoff=7
  PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
  Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 
3.8.2-0ubuntu2
  PythonDe

[Kernel-packages] [Bug 1987249] Re: Asus ROG Zephyrus GX701L sound problem

2022-08-28 Thread Ivan
Attached is a dump from Windows.

Can someone please help me solve this problem, so that I can finally get
sound from the speakers?

** Attachment added: "RtHDDump.txt"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987249/+attachment/5612084/+files/RtHDDump.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987249

Title:
  Asus ROG Zephyrus GX701L sound problem

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Hello,

  Please, can someone add a kernel fix for ROG Zephyrus S17
  GX701LWS_GX701LWS, Subsystem Id: 0x10431f01?

  ```

  [codec]

  0x10ec0294 0x10431f01 0

  [pincfg]

  0x19 0x03A11050

  0x1a 0x03A11C30

  ```

  This is what a quirk should look like:

  +SND_PCI_QUIRK(0x1043, 0x1f01, "ASUS GX701L", ALC294_FIXUP_ASUS_SPK)
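
  Until a quirk like that is merged, the pincfg above can be tested
  without rebuilding the kernel via the HD-audio firmware patch
  mechanism (the file name hda-gx701l.fw is made up for this example):

    # save the [codec]/[pincfg] block above as
    # /lib/firmware/hda-gx701l.fw, then point snd-hda-intel at it:
    echo 'options snd-hda-intel patch=hda-gx701l.fw' | \
      sudo tee /etc/modprobe.d/hda-patch.conf
    # rebuild the initramfs so the patch applies at boot, then reboot
    sudo update-initramfs -u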

  
  [2.396344] snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC294: line_outs=1 (0x17/0x0/0x0/0x0/0x0) type:speaker
  [2.396348] snd_hda_codec_realtek hdaudioC0D0:speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
  [2.396349] snd_hda_codec_realtek hdaudioC0D0:hp_outs=1 (0x21/0x0/0x0/0x0/0x0)
  [2.396350] snd_hda_codec_realtek hdaudioC0D0:mono: mono_out=0x0
  [2.396351] snd_hda_codec_realtek hdaudioC0D0:inputs:
  [2.396352] snd_hda_codec_realtek hdaudioC0D0:  Headset Mic=0x19
  [2.396353] snd_hda_codec_realtek hdaudioC0D0:  Internal Mic=0x12

  
  If you need any more data, or anything else, just say so.
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:
   USER  PID  ACCESS  COMMAND
   /dev/snd/controlC0:  rakic  1415 F pulseaudio
   /dev/snd/controlC1:  rakic  1415 F pulseaudio
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2022-08-22 (0 days ago)
  InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 
(20220419)
  MachineType: ASUSTeK COMPUTER INC. ROG Zephyrus S17 GX701LWS_GX701LWS
  NonfreeKernelModules: nvidia_modeset nvidia
  Package: linux (not installed)
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic 
root=UUID=cba43497-441a-4919-8141-a95a789a9239 ro quiet splash vt.handoff=7
  ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-46-generic N/A
   linux-backports-modules-5.15.0-46-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.4
  Tags:  jammy
  Uname: Linux 5.15.0-46-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip lpadmin lxd plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 04/19/2021
  dmi.bios.release: 5.17
  dmi.bios.vendor: American Megatrends Inc.
  dmi.bios.version: GX701LWS.310
  dmi.board.asset.tag: ATN12345678901234567
  dmi.board.name: GX701LWS
  dmi.board.vendor: ASUSTeK COMPUTER INC.
  dmi.board.version: 1.0
  dmi.chassis.asset.tag: No Asset Tag
  dmi.chassis.type: 10
  dmi.chassis.vendor: ASUSTeK COMPUTER INC.
  dmi.chassis.version: 1.0
  dmi.ec.firmware.release: 3.7
  dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrGX701LWS.310:bd04/19/2021:br5.17:efr3.7:svnASUSTeKCOMPUTERINC.:pnROGZephyrusS17GX701LWS_GX701LWS:pvr1.0:rvnASUSTeKCOMPUTERINC.:rnGX701LWS:rvr1.0:cvnASUSTeKCOMPUTERINC.:ct10:cvr1.0:sku:
  dmi.product.family: ROG Zephyrus S17
  dmi.product.name: ROG Zephyrus S17 GX701LWS_GX701LWS
  dmi.product.version: 1.0
  dmi.sys.vendor: ASUSTeK COMPUTER INC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987249/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987200] Re: 22.04 is unstable

2022-08-28 Thread Timo Kangas
Solution:
I uninstalled Gjs and removed all of its links and dependencies, then
installed it again. The system is now stable/usable.
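
For reference, the reinstall amounted to something like this (the exact
package set is my assumption, not a transcript):

  sudo apt remove --purge gjs
  sudo apt autoremove
  sudo apt install gjs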

This is just one issue which shows that 22.04 wasn't ready for release.
Akonadi is still not working, but I don't need it.

This can be closed.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987200

Title:
  22.04 is unstable

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  I upgraded yesterday from 20.04 to 22.04. The system became very
unstable, especially when I'm using a lot of communication applications.
Typically I have Telegram, Skype, Thunderbird and Chromium on all of the
time. Thunderbird especially seems to create issues; communication apps
will give a "no response from... Force quit or wait" message.
Occasionally the GUI will freeze. I have seen two OS crashes as well, one
with an Impress presentation on (and a crash while writing this report).

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-5.15.0-46-generic 5.15.0-46.49
  ProcVersionSignature: Ubuntu 5.15.0-46.49-generic 5.15.39
  Uname: Linux 5.15.0-46-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  AudioDevicesInUse:
   USER  PID  ACCESS  COMMAND
   /dev/snd/controlC2:  timo   2219 F pulseaudio
   /dev/snd/controlC0:  timo   2219 F pulseaudio
   /dev/snd/controlC1:  timo   2219 F pulseaudio
  CasperMD5CheckResult: unknown
  CurrentDesktop: ubuntu:GNOME
  Date: Sun Aug 21 13:01:46 2022
  HibernationDevice: RESUME=UUID=21e5e306-bb15-4ad7-904f-e7d2f5d39861
  MachineType: HP HP Notebook
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.15.0-46-generic 
root=UUID=7da986a5-81ea-4ede-83aa-5c5798c46c56 ro quiet splash
  RelatedPackageVersions:
   linux-restricted-modules-5.15.0-46-generic N/A
   linux-backports-modules-5.15.0-46-generic  N/A
   linux-firmware 20220329.git681281e4-0ubuntu3.4
  SourcePackage: linux
  UpgradeStatus: Upgraded to jammy on 2022-08-20 (0 days ago)
  dmi.bios.date: 05/18/2016
  dmi.bios.release: 15.16
  dmi.bios.vendor: Insyde
  dmi.bios.version: F.10
  dmi.board.asset.tag: Type2 - Board Asset Tag
  dmi.board.name: 81DF
  dmi.board.vendor: HP
  dmi.board.version: KBC Version 70.12
  dmi.chassis.asset.tag: 5CG6294Q15
  dmi.chassis.type: 10
  dmi.chassis.vendor: HP
  dmi.chassis.version: Chassis Version
  dmi.ec.firmware.release: 70.12
  dmi.modalias: 
dmi:bvnInsyde:bvrF.10:bd05/18/2016:br15.16:efr70.12:svnHP:pnHPNotebook:pvrCNB1:rvnHP:rn81DF:rvrKBCVersion70.12:cvnHP:ct10:cvrChassisVersion:skuY0B49EA#UUW:
  dmi.product.family: 103C_5335KV G=N L=CON B=HP
  dmi.product.name: HP Notebook
  dmi.product.sku: Y0B49EA#UUW
  dmi.product.version: CNB1
  dmi.sys.vendor: HP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987200/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987964] ProcEnviron.txt

2022-08-28 Thread Fernando Marcelino Muniz
apport information

** Attachment added: "ProcEnviron.txt"
   
https://bugs.launchpad.net/bugs/1987964/+attachment/5612083/+files/ProcEnviron.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987964

Title:
  Read-Only crash on Samsung Galaxy Book S (Intel)

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Ubuntu 22.04 LTS (Kernel updated to version 5.18.19) has chronic read-
  only crashes when using Samsung KLUFG8RHDA-B2D1, the UFS of Samsung
  Galaxy Book S (Intel) - SAMSUNG ELECTRONICS CO., LTD. 767XCL

  And from Kernel version 5.19 onwards, it doesn't even boot, it stops on 
"initramfs".
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2022-08-18 (10 days ago)
  InstallationMedia: Ubuntu 22.04.1 2022.08.17 LTS "Custom Jammy Jellyfish" 
(20220817)
  Package: linux (not installed)
  Tags:  jammy
  Uname: Linux 5.18.19-051819-generic x86_64
  UnreportableReason: The running kernel is not an Ubuntu kernel
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip lpadmin lxd plugdev sambashare sudo
  _MarkForUpload: True

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987964/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1987964] Re: Read-Only crash on Samsung Galaxy Book S (Intel)

2022-08-28 Thread Fernando Marcelino Muniz
apport information

** Tags added: apport-collected jammy

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/1987964/+attachment/5612082/+files/ProcCpuinfoMinimal.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987964

Title:
  Read-Only crash on Samsung Galaxy Book S (Intel)

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Ubuntu 22.04 LTS (Kernel updated to version 5.18.19) has chronic read-
  only crashes when using Samsung KLUFG8RHDA-B2D1, the UFS of Samsung
  Galaxy Book S (Intel) - SAMSUNG ELECTRONICS CO., LTD. 767XCL

  And from Kernel version 5.19 onwards, it doesn't even boot, it stops on 
"initramfs".
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu82.1
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 22.04
  InstallationDate: Installed on 2022-08-18 (10 days ago)
  InstallationMedia: Ubuntu 22.04.1 2022.08.17 LTS "Custom Jammy Jellyfish" 
(20220817)
  Package: linux (not installed)
  Tags:  jammy
  Uname: Linux 5.18.19-051819-generic x86_64
  UnreportableReason: The running kernel is not an Ubuntu kernel
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip lpadmin lxd plugdev sambashare sudo
  _MarkForUpload: True

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1987964/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp