[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612244/+files/acpidump.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on Monday
  nights. Only a reboot gets the servers working again; then they run
  fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel bug, or a race condition between the Veeam process and XFS.
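
  For anyone trying to reproduce a reflink-heavy workload like the one
  described, a minimal sketch (assumptions: GNU coreutils cp, and a
  directory on XFS formatted with reflink=1 — the mkfs.xfs default since
  xfsprogs 5.1; on filesystems without reflink support the clone step
  falls back to a plain copy so the script still runs):

```shell
# Toy reflink workload: create a file, then clone it via shared extents.
set -eu
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/src" bs=1M count=4 status=none
if cp --reflink=always "$dir/src" "$dir/clone" 2>/dev/null; then
    echo "reflink clone created (extents shared until first write)"
else
    # Not on a reflink-capable filesystem; fall back to an ordinary copy.
    cp "$dir/src" "$dir/clone"
    echo "no reflink support here; made a plain copy"
fi
cmp -s "$dir/src" "$dir/clone" && echo "contents identical"
rm -rf "$dir"
```

  Repeating the clone step against large files is roughly what a
  reflink-based backup repository does on every merge.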

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.

[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612245/+files/acpidump.txt

** Description changed:

  We run multiple machines that have the purpose of acting as a data
- repository for Veeam backup. Each machine has 2x 264G XFS volumes that
+ repository for Veeam backup. Each machine has 2x 256G XFS volumes that
  use heavy reflinking. This works as intended, for one issue: our xfs
  freeze once a week, a few minutes after midnight on Monday nights. We
  can only do a reboot to get the servers working again. Then it works for
  a week again.
  
  We have been interacting with support from Veeam, but this looks like
  some XFS kernel issue, or a race condition between the Veeam process and
  XFS.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-124-generic 5.4.0-124.140
  ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
  Uname: Linux 5.4.0-124-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
   crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27.24
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: pass
  Date: Mon Aug 29 00:34:28 2022
  InstallationDate: Installed on 2021-02-10 (564 days ago)
  InstallationMedia: Ubuntu-Server 20.04.2 LTS "Focal Fossa" - Release amd64 (20210201.2)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: Dell Inc. PowerEdge M630
  PciMultimedia:
  
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
  RelatedPackageVersions:
   linux-restricted-modules-5.4.0-124-generic N/A
   linux-backports-modules-5.4.0-124-generic  N/A
   linux-firmware 1.187.33
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 07/05/2022
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.15.0
  dmi.board.name: 0R10KJ
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 25
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: PowerEdge M1000e
  dmi.modalias: dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA02:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
  dmi.product.name: PowerEdge M630
  dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
  dmi.sys.vendor: Dell Inc.
- --- 
- ProblemType: Bug
- AlsaDevices:
-  total 0
-  crw-rw 1 root audio 116,  1 Aug 22 11:51 seq
-  crw-rw 1 root audio 116, 33 Aug 22 11:51 timer
- AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
- ApportVersion: 2.20.11-0ubuntu27.24
- Architecture: amd64
- ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
- AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
- CasperMD5CheckResult: pass
- DistroRelease: Ubuntu 20.04
- InstallationDate: Installed on 2021-09-16 (346 days ago)
- InstallationMedia: Ubuntu-Server 20.04.3 LTS "Focal Fossa" - Release amd64 (20210824)
- IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
- MachineType: Dell Inc. PowerEdge M630
- Package: linux (not installed)
- PciMultimedia:
-  
- ProcEnviron:
-  TERM=xterm
-  PATH=(custom, no user)
-  LANG=en_US.UTF-8
-  SHELL=/bin/bash
- ProcFB: 0 mgag200drmfb
- ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-5.4.0-124-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
- ProcVersionSignature: Ubuntu 5.4.0-124.140-generic 5.4.195
- RelatedPackageVersions:
-  linux-restricted-modules-5.4.0-124-generic N/A
-  linux-backports-modules-5.4.0-124-generic  N/A
-  linux-firmware 1.187.33
- RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
- Tags:  focal uec-images
- Uname: Linux 5.4.0-124-generic x86_64
- UpgradeStatus: No upgrade log present (probably fresh install)
- UserGroups: N/A
- _MarkForUpload: True
- dmi.bios.date: 07/05/2022
- dmi.bios.vendor: Dell Inc.
- dmi.bios.version: 2.15.0
- dmi.board.name: 0R10KJ
- dmi.board.vendor: Dell Inc.
- dmi.board.version: A05
- dmi.chassis.type: 25
- dmi.chassis.vendor: Dell Inc.
- dmi.chassis.version: PowerEdge M1000e
- dmi.modalias: dmi:bvnDellInc.:bvr2.15.0:bd07/05/2022:svnDellInc.:pnPowerEdgeM630:pvr:rvnDellInc.:rn0R10KJ:rvrA05:cvnDellInc.:ct25:cvrPowerEdgeM1000e:
- dmi.product.name: PowerEdge M630
- dmi.product.sku: SKU=NotProvided;ModelName=PowerEdge M630
- dmi.sys.vendor: Dell Inc.

[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612273/+files/acpidump.txt


[Kernel-packages] [Bug 1987997] acpidump.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "acpidump.txt"
   
https://bugs.launchpad.net/bugs/1987997/+attachment/5612259/+files/acpidump.txt

** Description changed:

  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinks. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on Monday
  nights. Only a reboot gets the servers working again; then they run
  fine for another week.
  
  We have been working with Veeam support, but this looks like an XFS
  kernel bug, or a race condition between the Veeam process and XFS.
  
  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.834891] kworker/6:3 D0 3534660  2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40
  
  
  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.836318] veeamagent  D0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: Bad RIP value.
  [562599.837310] RSP: 002b:00007f4066ffc850 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
  [562599.837313] RAX: ffffffffffffffda RBX: 00007f4066ffd650 RCX: 00007f4092abd93b
  [562599.837315] RDX: 00007f4066ffc800 RSI: 00007f4066ffc800 RDI: