This seems to be tied to being a PV guest. The HVM guest using exactly the
same installation, memory, VCPUs and so on did not show any issues. But
there is one big difference here: only the PV guest uses the
paravirtualized spinlocks (which, I think, allow a degree of nesting). So
right now I would narrow things down to:

1. PV guest with at least 8 VCPUs.
2. Autogroups active at least at boot (not completely sure why, but I suspect
this leaves some processes in task groups, which then causes slightly
different behaviour in the CFS scheduling).
3. The reproducer causing a lot of threaded IO.
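The third condition can be approximated with a trivial load generator. This is only a hypothetical sketch of the kind of parallel I/O load meant here, not the actual reproducer from the bug report; the file paths and worker count are assumptions:

```shell
#!/bin/sh
# Hypothetical threaded-IO load: spawn 8 parallel writers (one per VCPU
# on an 8-VCPU guest), each forcing data to disk with fsync.
for i in $(seq 1 8); do
    dd if=/dev/zero of="/tmp/io_load_$i" bs=1M count=16 conv=fsync 2>/dev/null &
done
wait    # block until all background writers have finished
```

On an affected guest the real reproducer hangs under this sort of concurrent, fsync-heavy write pressure.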

What is strange is that Arch Linux does not seem to be affected here. If
they are based on upstream, they should have the same PV spinlock code and
autogroup code (unless it is disabled by default in some way ->
/proc/sys/kernel/sched_autogroup_enabled).
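Whether autogroups are active can be checked directly from that sysctl; a quick sketch (the fallback message is my own wording):

```shell
#!/bin/sh
# Print 1 if sched autogroups are enabled, 0 if disabled. The file only
# exists on kernels built with CONFIG_SCHED_AUTOGROUP.
if [ -r /proc/sys/kernel/sched_autogroup_enabled ]; then
    cat /proc/sys/kernel/sched_autogroup_enabled
else
    echo "autogroup support not built into this kernel"
fi
```

It can also be toggled at runtime (as root) with `sysctl kernel.sched_autogroup_enabled=0`, or disabled at boot with the `noautogroup` kernel parameter, which would be one way for a distribution to ship with it off.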

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1011792

Title:
  Kernel lockup running 3.0.0 and 3.2.0 on multiple EC2 instance types

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1011792/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs