[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612269/+files/ProcInterrupts.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1987997

Title:
  xfs freeze every week on multiple machines

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We run multiple machines that act as data repositories for Veeam
  backup. Each machine has 2x 256G XFS volumes that make heavy use of
  reflinking. This works as intended, except for one issue: our XFS
  filesystems freeze once a week, a few minutes after midnight on
  Monday nights. The only way to get the servers working again is a
  reboot; after that they run fine for another week.

  We have been working with Veeam support, but this looks like an XFS
  kernel issue, or a race condition between the Veeam process and XFS.
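
  For reference, a minimal sketch of how to confirm the reflink setup
  and exercise the same shared-extent pattern by hand (the mount point
  and file names below are placeholders, not the actual repository
  paths):

    # check that the filesystem was created with reflink support (reflink=1)
    xfs_info /mnt/veeam-repo | grep reflink

    # create a shared-extent (reflink) copy, similar to what the backup
    # job's fast-clone / synthetic-full operations do on this repository
    cp --reflink=always /mnt/veeam-repo/full.vbk /mnt/veeam-repo/clone.vbk

    # list extents; reflinked extents are reported as "shared"
    filefrag -v /mnt/veeam-repo/clone.vbk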

  
  [562599.834734] INFO: task kworker/6:3:3534660 blocked for more than 120 seconds.
  [562599.834794]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.834832] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.834891] kworker/6:3     D    0 3534660      2 0x80004000
  [562599.834962] Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
  [562599.834964] Call Trace:
  [562599.834975]  __schedule+0x2e3/0x740
  [562599.835026]  ? xfs_log_ticket_put+0x1f/0x30 [xfs]
  [562599.835031]  ? kmem_cache_free+0x288/0x2b0
  [562599.835035]  schedule+0x42/0xb0
  [562599.835041]  rwsem_down_write_slowpath+0x244/0x4d0
  [562599.835045]  ? __switch_to_asm+0x40/0x70
  [562599.835088]  ? __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835091]  down_write+0x41/0x50
  [562599.835137]  xfs_ilock+0x7b/0x110 [xfs]
  [562599.835178]  __xfs_setfilesize+0x31/0x110 [xfs]
  [562599.835181]  ? __switch_to_asm+0x40/0x70
  [562599.835220]  xfs_setfilesize_ioend+0x49/0x60 [xfs]
  [562599.835257]  xfs_end_ioend+0x7b/0x1b0 [xfs]
  [562599.835260]  ? __switch_to_asm+0x34/0x70
  [562599.835298]  xfs_end_io+0xb1/0xe0 [xfs]
  [562599.835304]  process_one_work+0x1eb/0x3b0
  [562599.835309]  worker_thread+0x4d/0x400
  [562599.835312]  kthread+0x104/0x140
  [562599.835316]  ? process_one_work+0x3b0/0x3b0
  [562599.835319]  ? kthread_park+0x90/0x90
  [562599.835322]  ret_from_fork+0x35/0x40


  [562599.836171] INFO: task veeamagent:3674754 blocked for more than 120 seconds.
  [562599.836219]   Not tainted 5.4.0-124-generic #140-Ubuntu
  [562599.836261] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [562599.836318] veeamagent      D    0 3674754 3674651 0x4000
  [562599.836321] Call Trace:
  [562599.836326]  __schedule+0x2e3/0x740
  [562599.836330]  schedule+0x42/0xb0
  [562599.836333]  schedule_timeout+0x10e/0x160
  [562599.836335]  ? schedule_timeout+0x10e/0x160
  [562599.836337]  __down+0x82/0xd0
  [562599.836341]  ? wake_up_q+0x70/0x70
  [562599.836383]  ? xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836391]  down+0x47/0x60
  [562599.836434]  xfs_buf_lock+0x37/0xf0 [xfs]
  [562599.836476]  xfs_buf_find.isra.0+0x3bf/0x610 [xfs]
  [562599.836518]  xfs_buf_get_map+0x43/0x2b0 [xfs]
  [562599.836557]  xfs_buf_read_map+0x2f/0x1d0 [xfs]
  [562599.836610]  xfs_trans_read_buf_map+0xca/0x350 [xfs]
  [562599.836643]  xfs_read_agf+0x97/0x130 [xfs]
  [562599.836664]  ? update_load_avg+0x7c/0x670
  [562599.836700]  xfs_alloc_read_agf+0x45/0x1a0 [xfs]
  [562599.836753]  ? xfs_alloc_space_available+0x4a/0xf0 [xfs]
  [562599.836783]  xfs_alloc_fix_freelist+0x41e/0x4e0 [xfs]
  [562599.836786]  ? check_preempt_curr+0x7a/0x90
  [562599.836788]  ? ttwu_do_wakeup+0x1e/0x150
  [562599.836793]  ? radix_tree_lookup+0xd/0x10
  [562599.836836]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836839]  ? radix_tree_lookup+0xd/0x10
  [562599.836877]  ? xfs_perag_get+0x2d/0xb0 [xfs]
  [562599.836906]  xfs_alloc_vextent+0x19f/0x550 [xfs]
  [562599.836938]  xfs_bmap_btalloc+0x57b/0x940 [xfs]
  [562599.836973]  xfs_bmap_alloc+0x34/0x40 [xfs]
  [562599.837004]  xfs_bmapi_allocate+0xdc/0x2d0 [xfs]
  [562599.837043]  xfs_bmapi_convert_delalloc+0x26f/0x4b0 [xfs]
  [562599.837084]  xfs_map_blocks+0x15a/0x3f0 [xfs]
  [562599.837123]  xfs_do_writepage+0x118/0x420 [xfs]
  [562599.837130]  write_cache_pages+0x1ae/0x4b0
  [562599.837171]  ? xfs_vm_releasepage+0x80/0x80 [xfs]
  [562599.837209]  xfs_vm_writepages+0x6a/0xa0 [xfs]
  [562599.837215]  do_writepages+0x43/0xd0
  [562599.837221]  __filemap_fdatawrite_range+0xd5/0x110
  [562599.837226]  file_write_and_wait_range+0x74/0xc0
  [562599.837268]  xfs_file_fsync+0x5d/0x230 [xfs]
  [562599.837274]  ? __do_sys_newfstat+0x61/0x70
  [562599.837281]  vfs_fsync_range+0x49/0x80
  [562599.837284]  do_fsync+0x3d/0x70
  [562599.837288]  __x64_sys_fsync+0x14/0x20
  [562599.837295]  do_syscall_64+0x57/0x190
  [562599.837298]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [562599.837301] RIP: 0033:0x7f4092abd93b
  [562599.837308] Code: 
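
  When the next freeze hits, the same blocked-task stacks can be
  captured on demand instead of waiting for the 120-second hung-task
  detector; one possible approach, run as root on an affected host:

    # make sure the "show blocked tasks" sysrq command is allowed
    echo 1 > /proc/sys/kernel/sysrq

    # dump the stack of every task stuck in uninterruptible (D) state
    echo w > /proc/sysrq-trigger

    # the stacks are written to the kernel log
    dmesg | tail -n 300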

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612255/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612239/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612233/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1987997] ProcInterrupts.txt

2022-08-28 Thread Olafur Helgi Haraldsson
apport information

** Attachment added: "ProcInterrupts.txt"
   https://bugs.launchpad.net/bugs/1987997/+attachment/5612217/+files/ProcInterrupts.txt
