lxd-armhf1 (8 CPUs) is again in a state where "lxc list" and even "top"
hang forever. lxd-armhf2 was unfortunately shut down in the previous
days, so I just booted it again.
--
Another data point: I tried to install 3.19 (the kernel that we have on
the buildds) on the xlarge instance, and lxc list now hangs there as
well.
I haven't yet seen lxc list hang on a large (4 CPUs) instance, but the
whole thing (running tests in containers) is still very slow. TBC on
Monday.
--
I did install haveged, which indeed seems to help quite a bit. But now,
after having used an xlarge (8 CPU) instance for a while, I again get
hanging processes, like

ubuntu    2317  0.0  0.0      0     0 pts/0    D+   16:14   0:00 [tail]

I used that tail on /var/log/lxd/lxd.log to see what's going on.
The rcu messages, though annoying, do seem to be benign, as they do not
increase over time.
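For anyone else hitting this, the tail above is stuck in uninterruptible
sleep ("D" state), which is the thing to look for. The sketch below is a
hypothetical helper, not anything shipped with lxd; it just scans /proc
the same way ps does and prints every task currently in that state:

    // dstate.go: hypothetical helper (not part of lxd) that lists every
    // task currently in uninterruptible sleep ("D"), the state the
    // hanging processes above are stuck in.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        stats, _ := filepath.Glob("/proc/[0-9]*/stat")
        for _, path := range stats {
            data, err := os.ReadFile(path)
            if err != nil {
                continue // the task exited while we were scanning
            }
            s := string(data)
            // /proc/<pid>/stat is "pid (comm) state ..."; comm may
            // contain spaces, so split only after the last ')'.
            end := strings.LastIndexByte(s, ')')
            if end < 0 {
                continue
            }
            rest := strings.Fields(s[end+1:])
            if len(rest) > 0 && rest[0] == "D" {
                pid := strings.Fields(s)[0]
                comm := s[strings.IndexByte(s, '(')+1 : end]
                fmt.Printf("%s [%s] is in D state\n", pid, comm)
            }
        }
    }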
--
https://bugs.launchpad.net/bugs/1531768
Title:
  arm64 kernel and multiple CPUs is unusably slow
The lxc hangs component looks to be an lxd-related issue. Specifically,
the Go libraries in use consume a large amount of entropy and hang
waiting for it to become available. Installing haveged seems to resolve
these hangs.
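For reference, one common way a Go binary ends up waiting on entropy is
that crypto/rand goes through getrandom(2) on Linux, which blocks until
the kernel's entropy pool is initialized; /proc/sys/kernel/random/entropy_avail
is worth watching while it happens. The snippet below only illustrates
that blocking read, it is not lxd's actual code path:

    // entropy-block.go: illustration only, not lxd's code. On an
    // entropy-starved VM this read can hang for minutes; with haveged
    // feeding the pool it returns almost immediately.
    package main

    import (
        "crypto/rand"
        "fmt"
        "log"
    )

    func main() {
        buf := make([]byte, 32)
        // crypto/rand uses getrandom(2) on Linux, which blocks until the
        // kernel considers its entropy pool initialized.
        if _, err := rand.Read(buf); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("read %d random bytes\n", len(buf))
    }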
--
I split out the xenial bridge regression into bug 1534545, so that this
bug can keep focusing on the main aspect: processes become slow and hang
after a while.
--
FTR, the "networking broken in containers" issue was an MTU mismatch,
which has been worked around now. Thanks to Andy for figuring this out!
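If it comes back, a quick way to confirm a mismatch is to compare the
MTU on the host bridge with the one on the container's eth0. The sketch
below is just an illustrative check (interface names differ per setup);
Go's net package exposes the MTU directly:

    // mtu-check.go: illustrative check, run on the host and inside a
    // container and compare the values; a bridge MTU that differs from
    // the container's eth0 is the kind of mismatch meant above.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            log.Fatal(err)
        }
        for _, ifc := range ifaces {
            fmt.Printf("%-12s mtu %d\n", ifc.Name, ifc.MTU)
        }
    }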
--
> Can you collect apport information from the host system as well?
Sorry, I can't. I can create Scalingstack instances, but I have no
access to the host systems. The IS team certainly can, though.
> Do you get the same effects with a single vCPU?
So far that test system is holding up and I haven't seen the hangs there
yet.
** Changed in: linux (Ubuntu)
Importance: Undecided => Medium
--
Martin,
Can you collect apport information from the host system as well?
Do you get the same effects with a single vCPU?
--chris
--
I take that back. It does survive for much longer, but after some 15
minutes of running I again run into tons of
[ 2424.611668] INFO: task systemd-udevd:1320 blocked for more than 120 seconds.
[ 2424.613514]       Tainted: G        W       4.2.0-22-generic #27-Ubuntu
[ 2424.615183] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
I retried the same on m1.medium with 2 CPUs and 4 GB RAM, and lxd works
fine there with the 4.2 kernel on wily. Unfortunately that's too small
for my purposes. m1.large with 4 CPUs/8 GB RAM also seems to work well;
I can make do with that.
William points out that the hosts on bos01 only have 8 CPU