------- Comment From dougm...@us.ibm.com 2018-04-23 09:58 EDT-------
Regarding rcu_sched stalls, or any other manifestation of hangs: a work 
item in this condition (next and prev pointing to itself) on an active 
worklist would effectively create a linked-list loop. When a kworker 
thread reaches this work item, it will effectively hang, performing the 
same work function over and over. That would likely result in any of 
hard-lockup, soft-lockup, and/or rcu_sched stall warnings. The key 
malfunction here seems to be that this work item had INIT_WORK() 
performed on it while it was still on an (active) pool->worklist. The 
fact that two kworker threads are both running this work item (probably 
both hung in the loop) suggests that the work item got added to two 
pools, but it's not clear why we see the work item in the INIT_WORK 
state - unless a third instance was in qlt_unreg_sess() and had just 
performed INIT_WORK.

A closer look at the kworker pools might be in order, as well as
searching for any tasks currently in qlt_unreg_sess().

The qlt_unreg_sess() function performs two main operations on the work
item: INIT_WORK() and schedule_work(). There are no normal program
breaks between these two operations, so ordinarily they would both get
performed in a very short period of time. However, it is conceivable
that a thread might be performing INIT_WORK() at the same time that
another thread is executing the work item, and that might cause the
panic and leave the work item in this partially-queued state.

It still seems we must be getting multiple threads performing
qlt_unreg_sess() on the same port at the same time. Not sure of the
best way to catch that, but perhaps some way of verifying the state of
the work item before doing INIT_WORK() might do the job. One problem
may be that the work item is left in an indeterminate state after it
is run. I don't see any sort of mutex in the fc_port structure, so I'm
not sure how qla2xxx ensures only one operation is performed at a
time. Also, this may not be two threads actually running in
qlt_unreg_sess() at the same time, but simply two threads successively
running qlt_unreg_sess() without the work item being completed in
between.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1762844

Title:
  ISST-LTE:KVM:Ubuntu1804:BostonLC:boslcp3: Host crashed & enters into
  xmon after moving to 4.15.0-15.16 kernel

Status in The Ubuntu-power-systems project:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete
Status in linux source package in Bionic:
  Incomplete

Bug description:
  Problem Description:
  ===================
  Host crashed & entered xmon after updating to the 4.15.0-15.16 kernel.

  Steps to re-create:
  ==================

  1. boslcp3 is up with BMC:118 & PNOR: 20180330 levels
  2. Installed boslcp3 with latest kernel 4.15.0-13-generic
  3. Enabled "-proposed" kernel in /etc/apt/sources.list file
  4. Ran sudo apt-get update & apt-get upgrade

  5. root@boslcp3:~# ls /boot
  abi-4.15.0-13-generic         retpoline-4.15.0-13-generic
  abi-4.15.0-15-generic         retpoline-4.15.0-15-generic
  config-4.15.0-13-generic      System.map-4.15.0-13-generic
  config-4.15.0-15-generic      System.map-4.15.0-15-generic
  grub                          vmlinux
  initrd.img                    vmlinux-4.15.0-13-generic
  initrd.img-4.15.0-13-generic  vmlinux-4.15.0-15-generic
  initrd.img-4.15.0-15-generic  vmlinux.old
  initrd.img.old

  6. Rebooted & booted with 4.15.0-15 kernel
  7. Enabled xmon by editing file "vi /etc/default/grub" and ran update-grub
  8. Rebooted host.
  9. Booted with 4.15.0-15 & provided root/password credentials at the login prompt

  10. Host crashed & entered XMON state with 'Unable to handle kernel
  paging request'

  root@boslcp3:~# [   66.295233] Unable to handle kernel paging request for data at address 0x8882f6ed90e9151a
  [   66.295297] Faulting instruction address: 0xc00000000038a110
  cpu 0x50: Vector: 380 (Data Access Out of Range) at [c00000000692f650]
      pc: c00000000038a110: kmem_cache_alloc_node+0x2f0/0x350
      lr: c00000000038a0fc: kmem_cache_alloc_node+0x2dc/0x350
      sp: c00000000692f8d0
     msr: 9000000000009033
     dar: 8882f6ed90e9151a
    current = 0xc00000000698fd00
    paca    = 0xc00000000fab7000   softe: 0        irq_happened: 0x01
      pid   = 1762, comm = systemd-journal
  Linux version 4.15.0-15-generic (buildd@bos02-ppc64el-002) (gcc version 7.3.0 (Ubuntu 7.3.0-14ubuntu1)) #16-Ubuntu SMP Wed Apr 4 13:57:51 UTC 2018 (Ubuntu 4.15.0-15.16-generic 4.15.15)
  enter ? for help
  [c00000000692f8d0] c000000000389fd4 kmem_cache_alloc_node+0x1b4/0x350 (unreliable)
  [c00000000692f940] c000000000b2ec6c __alloc_skb+0x6c/0x220
  [c00000000692f9a0] c000000000b30b6c alloc_skb_with_frags+0x7c/0x2e0
  [c00000000692fa30] c000000000b247cc sock_alloc_send_pskb+0x29c/0x2c0
  [c00000000692fae0] c000000000c5705c unix_dgram_sendmsg+0x15c/0x8f0
  [c00000000692fbc0] c000000000b1ec64 sock_sendmsg+0x64/0x90
  [c00000000692fbf0] c000000000b20abc ___sys_sendmsg+0x31c/0x390
  [c00000000692fd90] c000000000b221ec __sys_sendmsg+0x5c/0xc0
  [c00000000692fe30] c00000000000b184 system_call+0x58/0x6c
  --- Exception: c00 (System Call) at 000074826f6fa9c4
  SP (7ffff5dc5510) is in userspace
  50:mon>
  50:mon>

  11. Attached host console logs

  I rebooted the host just to see if it would hit the issue again and
  this time I didn't even get to the login prompt but it crashed in the
  same location:

  50:mon> r
  R00 = c000000000389fd4   R16 = c000200e0b20fdc0
  R01 = c000200e0b20f8d0   R17 = 0000000000000048
  R02 = c0000000016eb400   R18 = 000000000001fe80
  R03 = 0000000000000001   R19 = 0000000000000000
  R04 = 0048ca1cff37803d   R20 = 0000000000000000
  R05 = 0000000000000688   R21 = 0000000000000000
  R06 = 0000000000000001   R22 = 0000000000000048
  R07 = 0000000000000687   R23 = 4882d6e3c8b7ab55
  R08 = 48ca1cff37802b68   R24 = c000200e5851df01
  R09 = 0000000000000000   R25 = 8882f6ed90e67454
  R10 = 0000000000000000   R26 = c000000000b2ec6c
  R11 = c000000000d10f78   R27 = c000000ff901ee00
  R12 = 0000000000002000   R28 = ffffffffffffffff
  R13 = c00000000fab7000   R29 = 00000000015004c0
  R14 = c000200e4c973fc8   R30 = c000200e5851df01
  R15 = c000200e4c974238   R31 = c000000ff901ee00
  pc  = c00000000038a110 kmem_cache_alloc_node+0x2f0/0x350
  cfar= c000000000016e1c arch_local_irq_restore+0x1c/0x90
  lr  = c00000000038a0fc kmem_cache_alloc_node+0x2dc/0x350
  msr = 9000000000009033   cr  = 28002844
  ctr = c00000000061e1b0   xer = 0000000000000000   trap =  380
  dar = 8882f6ed90e67454   dsisr = c000200e40bd8400
  50:mon> t
  [c000200e0b20f8d0] c000000000389fd4 kmem_cache_alloc_node+0x1b4/0x350 (unreliable)
  [c000200e0b20f940] c000000000b2ec6c __alloc_skb+0x6c/0x220
  [c000200e0b20f9a0] c000000000b30b6c alloc_skb_with_frags+0x7c/0x2e0
  [c000200e0b20fa30] c000000000b247cc sock_alloc_send_pskb+0x29c/0x2c0
  [c000200e0b20fae0] c000000000c56ae4 unix_stream_sendmsg+0x264/0x5c0
  [c000200e0b20fbc0] c000000000b1ec64 sock_sendmsg+0x64/0x90
  [c000200e0b20fbf0] c000000000b20abc ___sys_sendmsg+0x31c/0x390
  [c000200e0b20fd90] c000000000b221ec __sys_sendmsg+0x5c/0xc0
  [c000200e0b20fe30] c00000000000b184 system_call+0x58/0x6c
  --- Exception: c01 (System Call) at 00007d16a993a940
  SP (7ffffbee2270) is in userspace

  Mirroring to Canonical to advise them that this might be a possible
  regression. Didn't see any obvious changes in this area in the
  changelog published at
  https://launchpad.net/ubuntu/+source/linux/4.15.0-15.16, but it would
  be good to have Canonical help review the deltas as we try to
  isolate this further.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1762844/+subscriptions
