FYI still happening to me on 18.04 with the HWE kernel,
similar behavior as #38: kworker with steady high
CPU usage after un-docking; re-docking didn't solve
it, though.
kernel: 4.18.0-15-generic
hardware: Thinkpad x270, Thinkpad Ultra Dock, network: enp0s31f6 (dock eth) and
wlp3s0 up
For other souls facing this "Medium" issue,
a hammer-ish workaround that works for me:
1) Run:
apt-get install cpulimit
2) edit /lib/systemd/system/systemd-resolved.service:
2a) Comment out:
#Type=notify
2b) Replace line (may want to remove the -k to let cpulimit throttle it):
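(The replacement ExecStart line itself is truncated in this snippet. Below is a minimal sketch of the edited unit, assuming a cpulimit wrapper around the daemon; the 10% limit and exact invocation are assumptions, not the original poster's line:)
  [Service]
  #Type=notify
  # assumption: wrap the daemon in cpulimit; drop -k to throttle rather than kill
  ExecStart=/usr/bin/cpulimit -l 10 -k /lib/systemd/systemd-resolved
Then: systemctl daemon-reload && systemctl restart systemd-resolved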
Also happening to me after a 16.04 -> 18.04 LTS upgrade (via do-release-upgrade).
https://bugs.launchpad.net/bugs/1670959
Title:
systemd-resolved using 100% CPU
FYI this is also happening for me, LTS 16.04.3 + HWE (kernel and xorg pkgs),
Thinkpad x270 w/ Integrated Graphics Chipset: Intel(R) HD Graphics 620.
Indeed that had been the case, thanks for replying.
** Changed in: iproute2 (Ubuntu)
Status: Confirmed => Invalid
** Changed in: linux (Ubuntu)
Status: Incomplete => Invalid
Public bug reported:
Context: deploying nodes via MAAS with bonds and VLANs
on top of them; in particular, for the example below:
bond-stg and bond-stg.600
~# dpkg-query -W vlan
vlan    1.9-3.2ubuntu1.16.04.1
* /etc/network/interfaces excerpt as set up by MAAS
(obfuscated for IP and MAC)
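(The excerpt itself is truncated in this snippet; below is a hedged sketch of typical MAAS-style ifupdown stanzas for a bond plus a VLAN on top of it. The slave NICs and addresses are illustrative assumptions:)
  auto bond-stg
  iface bond-stg inet manual
      bond-mode 802.3ad
      bond-slaves eth2 eth3

  auto bond-stg.600
  iface bond-stg.600 inet static
      vlan-raw-device bond-stg
      address 10.60.0.10
      netmask 255.255.255.0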
We really need to get xenial added for its HWE kernels:
we have several BootStacks running with them, mainly for
the latest needed drivers while keeping LTS (Mellanox for
VNFs, as an example) - all of these are now obviously at
risk on the next reboot.
Note also that recovering from this issue does
FYI we're also hitting this on trusty/mitaka for what look
like incompletely deleted instances:
* still running at the hypervisor, ie
virsh dominfo UUID # shows it ok
* deleted from both the nova 'instances' and 'block_device_mapping' tables.
Once certain it's still running at hypervisor,
our
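(For reference, a hedged sketch of the two checks described above; direct DB access and the mitaka-era nova table/column names are assumptions:)
  virsh dominfo $UUID   # still defined/running at the hypervisor?
  mysql nova -e "SELECT uuid, deleted, deleted_at FROM instances WHERE uuid='$UUID'"
  mysql nova -e "SELECT id, deleted FROM block_device_mapping WHERE instance_uuid='$UUID'"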
FYI because of other maintenance I had to do on the affected nodes,
after upgrading to linux-generic-lts-xenial 4.4.0-66-generic this
issue no longer showed up.
I can't make it work even after manually installing squashfuse
(FYI the LXC was created by `juju deploy cs:ubuntu --to lxc:1`).
root@juju-machine-1-lxc-14:~# uname -a
Linux juju-machine-1-lxc-14 4.8.0-34-generic #36~16.04.1-Ubuntu SMP Wed Dec 21
18:55:08 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
FTR/FYI (as per chatter with kamal) we're waiting for >= 4.8.0-28
to be available at https://launchpad.net/ubuntu/+source/linux-hwe-edge
https://bugs.launchpad.net/bugs/1638700
** Tags added: canonical-bootstack
https://bugs.launchpad.net/bugs/1582278
Title:
[SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from
one NUMA node and PCI device
Public bug reported:
Because the linux-image-generic pkg doesn't include mlx5_core,
stock Ubuntu cloud images can't be used by VM guests using
Mellanox VFs, forcing the creation of an ad-hoc cloud image
with linux-image-extra-virtual added.
** Affects: cloud-images
Importance: Undecided
** Also affects: python-cryptography (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1601986
Title:
RuntimeError: osrandom engine already
Some recent, similar finding, in case it helps:
https://github.com/TobleMiner/wintron7.0/issues/2
- worked around with clocksource=tsc; I'd guess that
ntpq should also show a large drift.
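(A hedged sketch of applying that workaround on a GRUB-based install; the GRUB variable and verification path assume a stock Ubuntu setup, not anything from the linked issue:)
  # /etc/default/grub: append clocksource=tsc to the kernel command line
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash clocksource=tsc"
  sudo update-grub && sudo reboot
  # verify the active clocksource; check drift with: ntpq -p
  cat /sys/devices/system/clocksource/clocksource0/current_clocksource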
See https://bugs.launchpad.net/juju-core/+bug/1588403/comments/5
https://bugs.launchpad.net/bugs/1570657
Title:
Bash completion needed for versioned juju commands
See https://github.com/juju/juju/pull/5057 for *beta9*
and above (ie "applications" instead of "services") -
you can try it with:
sudo -i # become root
rm /etc/bash_completion.d/juju2
wget -O /etc/bash_completion.d/juju-2.0 \
See updated https://github.com/juju/juju/pull/5057; in short,
it adds the two files marked "(added)" below to /etc/bash_completion.d/
(sort order == load order):
  juju-2.0 (added)
  juju-core (existing, from juju1)
  juju-version (added)
, so that:
* juju-2.0: completion for `juju-2.0`, but also plain `juju` (ie
** Also affects: python-openstackclient (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1567895
Title:
openstack volume create does not support
For experimentation purposes / to measure the difference against
what would be a better-behaved epoll_wait usage, I created:
https://github.com/jjo/dl-hack-lp1518430
, which implements hooking epoll_wait() and select()
(via LD_PRELOAD) to limit the rate of calls with zero timeouts.
WfM'd on an
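(A minimal C sketch of the general LD_PRELOAD hooking technique described above, not the actual dl-hack-lp1518430 code; the 100-call threshold and the 1ms substitute timeout are illustrative assumptions:)
  /* hook.c - build: gcc -shared -fPIC -o hook.so hook.c -ldl
   * usage: LD_PRELOAD=./hook.so some_busy_program */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <sys/epoll.h>

  typedef int (*epoll_wait_fn)(int, struct epoll_event *, int, int);

  int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout)
  {
      static epoll_wait_fn real;
      static __thread unsigned zero_streak;

      if (!real)
          real = (epoll_wait_fn)dlsym(RTLD_NEXT, "epoll_wait");
      /* after too many consecutive zero-timeout calls, substitute a
       * small timeout so the caller stops busy-spinning */
      if (timeout == 0 && ++zero_streak > 100) {
          zero_streak = 0;
          timeout = 1; /* ms */
      } else if (timeout != 0) {
          zero_streak = 0;
      }
      return real(epfd, events, maxevents, timeout);
  }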
FYI I'm running trusty with linux-generic-lts-xenial 4.4.0.18.10;
I was getting the same thermald spamming until I manually upgraded
to the package as per comment #8 above:
- upgraded thermald:amd64 1.4.3-5~14.04.2 -> 1.4.3-5~14.04.3
, then no more kern.log spamming, thanks!
Chris: confirming this bug is most likely indeed fixed by
1:2014.1.5-0ubuntu3, as there have been no further alerts from missing
tun_ids since it got installed 1 week ago (recall we had been getting
several of those per week).
Thanks! :) --J
Thanks Chris for the updates - FYI we've upgraded all of our compute nodes
to 1:2014.1.5-0ubuntu3 from proposed; no (extra) issues so far after some hours.
FYI this stack has ~30 nodes and 1k+ active instances.
We expect this change to (obviously) stop those KeyError messages in the log,
and likely also
Public bug reported:
Filing this on the ubuntu/neutron package, as neutron itself is EOL'd for
Icehouse.
FYI this is a non-HA icehouse/trusty deploy using the server team's juju
charms.
On one of our production environments with a rather high rate of API
calls (esp. for transient VMs from CI), we
FYI today's openvswitch-switch upgrade triggered a cluster-wide outage
on one (or more) of our production openstacks.
** Tags added: canonical-bootstack
** Also affects: keystone (Juju Charms Collection)
Importance: Undecided
Status: New
** Also affects: cinder (Juju Charms Collection)
Importance: Undecided
Status: New
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: glance
Public bug reported:
Context: openstack juju/maas deploy using 1510 charms release
on trusty, with:
openstack-origin: "cloud:trusty-liberty"
source: "cloud:trusty-updates/liberty"
* Several openstack nova- and neutron- services, at least:
nova-compute, neutron-server, nova-conductor,
Public bug reported:
lxc packages:
*** 1.0.7-0ubuntu0.9 0
500 http://archive.ubuntu.com//ubuntu/ trusty-updates/main amd64
Packages
lxc apparmor profile loading fails with:
root@host:~# apparmor_parser -r /etc/apparmor.d/lxc/lxc-default-with-mounting
Found reference to variable PROC,
** Changed in: lxc (Ubuntu)
Status: New => Invalid
https://bugs.launchpad.net/bugs/1511495
Title:
lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles
w000T! \o/ using @jsalisbury's kernel from comment #7,
3.19.0-30-generic #33~lp1497812,
I can't reproduce the failing behavior under the same host + setup:
- no mirrored frames or similar in dmesg
- containers networking ok
Comparison between stock vivid
3.19.0-30-generic #33~14.04.1-Ubuntu and the above:
-
@gz: I got this at our staging environment, where we re-deploy
HA'd juju + openstacks several times a week (or day); this is the 1st time
I've positively observed this behavior, so I'd guess it's unfortunately
a subtle race condition or similar.
I did save /var/lib/juju/db/, /var/log/syslog and
Confirming I'm _not_ observing the reported issue on an
equivalent setup w/ LXC frames hitting phy interfaces
(bridged towards br0 -> bond0 -> {eth3, eth4}):
* linux 4.2.0-12-generic #14~14.04.1-Ubuntu (from canonical-kernel-team/ppa)
* i40e version 1.3.4-k
# ethtool -i eth3
driver: i40e
version:
FYI we found these issues while deploying openstack via juju/maas
over a pool of 8 nodes having 4x i40e NICs, where we also found
linux-hwe-generic-trusty (lts-utopic) to be unreliable due to its old
i40e driver (0.4.10-k).
Below is a summary of our i40e findings using lts-vivid and lts-utopic
re:
ERRATA on comment #2: the OK i40e driver version is 1.2.48,
as per the original report URL.
The comment #2 table is actually:
#1 3.19.0-28-generic w/stock 1.2.2-k: non-phy mirrored frames (this bug)
#2 3.16.0-49-generic w/stock 0.4.10-k: unreliable deploys
#3 3.19.0-28-generic w/built 1.2.48: OK (*)
#4
Public bug reported:
Using 3.19.0-28-generic #30~14.04.1-Ubuntu with the stock i40e
driver version 1.2.2-k makes every 'non physical' MAC output
frame appear as copied back at input, as if the switch were
doing frame 'mirroring' (and/or hair-pinning).
FYI same setup, with i40e upgraded to 1.2.48 from
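(A hedged sketch of observing the mirroring described in this report; the interface and MAC address are illustrative assumptions:)
  # watch a container's MAC on the physical NIC; with the bug present,
  # each outbound frame shows up again as an inbound copy
  tcpdump -e -n -i eth3 ether host aa:bb:cc:dd:ee:ff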
Thanks for the quick turnaround, could you please backport the fix
to 1507 trunk?
We have several stacks where we need to manually apply
the above workaround for corosync/pacemaker to behave properly,
and several more coming down the line before 1510.
FYI while fixing hacluster trunk (essentially came
After trying several corosync/pacemaker restarts without luck,
I was able to work around this by adding an 'uidgid'
entry for hacluster:haclient:
* from /var/log/syslog:
Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]: [MAIN ] Denied
connection attempt from 108:113
$ getent passwd 108
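(For reference, a minimal sketch of such an uidgid entry as a corosync drop-in; the file name is an illustrative assumption:)
  # /etc/corosync/uidgid.d/hacluster
  uidgid {
      uid: hacluster
      gid: haclient
  }
, followed by a corosync (and pacemaker) restart.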
FYI to re-check the workaround (then a possible actual fix), I kicked corosync+pacemaker
on the cinder and glance services deployed with juju:
$ juju run --service=cinder,glance "service corosync restart; service
pacemaker restart"
, which broke pacemaker start on all of them, with same "Invalid IPC
Fixed by
https://github.com/jjo/nicstat/commit/3c2407da66c2fd2914e7f362f41f729cc21ff1e4;
see the strace comparison (stock vs compiled with the above) at a host
with ~270 interfaces:
http://paste.ubuntu.com/12137566/
Public bug reported:
nicstat falsely assumes that a single read from /proc/net/dev
will return all its content (even if using a ~large buffer, 128K)
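(A minimal C sketch of the read-until-EOF pattern the fix implies, not the actual nicstat patch; the starting buffer size is illustrative and error handling is elided:)
  /* keep read()ing /proc/net/dev until EOF instead of assuming a
   * single read returns the whole file */
  #include <fcntl.h>
  #include <stdlib.h>
  #include <unistd.h>

  char *read_all(const char *path, size_t *len)
  {
      int fd = open(path, O_RDONLY);
      if (fd < 0)
          return NULL;
      size_t cap = 128 * 1024;
      char *buf = malloc(cap);
      ssize_t n;
      *len = 0;
      while (buf && (n = read(fd, buf + *len, cap - *len)) > 0) {
          *len += (size_t)n;
          if (*len == cap)              /* grow and keep reading */
              buf = realloc(buf, cap *= 2);
      }
      close(fd);
      return buf;                        /* caller frees */
  }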
** Affects: nicstat (Ubuntu)
Importance: Undecided
Status: New
** Tags: canonical-bootstack
FYI I'm able to successfully drive netns inside LXC, manually then also
via openstack neutron-gateways, via this crafted aa profile:
/etc/apparmor.d/lxc/lxc-default-with-netns -
https://gist.github.com/jjo/ff32b08e48e4a52bfc36
Public bug reported:
On an openstack HA kilo deployment using charms trunks,
several services fail to properly restart haproxy, leaving
old instances running; showing cinder/0 as an example:
$ juju ssh cinder/0 'pgrep -f haproxy | xargs ps -o pid,ppid,lstart,cmd -p;
egrep St.*ing.haproxy
With maas 1.9 deprecating d-i, what option is left for maas swraid installs?
Please consider re-prioritizing.
https://bugs.launchpad.net/bugs/1356392
Title:
lacks sw raid1 install
Public bug reported:
We're using bcache under the trusty HWE kernel (3.16.0-38-generic)
with bcache-tools 1.0.7-0ubuntu1 (built from src).
As trusty has util-linux 2.20.1, the udev rules for auto-registering
bcache devices are skipped:
# blkid was run by the standard udev rules
# It recognised
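(For reference, a hedged sketch of the manual registration that the skipped udev rules would otherwise perform; the device name is an illustrative assumption:)
  # manually register a bcache backing/cache device with the kernel
  echo /dev/sdb > /sys/fs/bcache/register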
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
https://bugs.launchpad.net/bugs/1426589
Title:
tc class statistics rates are all zero after upgrade to Trusty
With kernels from http://kernel.ubuntu.com/~kernel-ppa/mainline/,
I've narrowed it down to:
* OK: tc-class-stats.3.10.76-031076-generic.txt: rate 1600bit 2pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100rc1-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0
* BAD:
As per comment #13, I've added the following tags:
* kernel-fixed-upstream-3.10
* kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1
kernel-bug-exists-upstream-4.1-rc1
Please correct them if I misunderstood the naming convention.
FYI my narrowed bisect corresponds to:
*** OK ***:
FYI peeking at patch-3.11-rc1 shows
[...]
-    struct gnet_stats_rate_est    tcfc_rate_est;
+    struct gnet_stats_rate_est64  tcfc_rate_est;
with its corresponding addition:
+ * struct gnet_stats_rate_est64 - rate estimator
+ * @bps: current byte rate
+ * @pps: current packet rate
+
@peanlvch: FYI as per comment #4 I already tested v4.1-rc1-vivid, same
bad results.
FYI tried iproute2-3.19.0, same zero rate output.
FYI there are several changes at
https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.10.12
that refer to htb rate handling.
FYI this has been reported to debian also (kernel 3.16):
https://lists.debian.org/debian-kernel/2014/11/msg00288.html
FYI linux-image-4.1.0-040100rc1-generic_4.1.0-040100rc1.201504270235_i386.deb
(from ~kernel-ppa) failed the same way.
By installing different kernel versions (trusty, manual download and dpkg -i),
I narrowed this down to:
- linux-image-3.8.0-44-generic: OK
- linux-image-3.11.0-26-generic: BAD (zero rate counters).
FYI I used this script:
# cat htb.sh
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0
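(The script is truncated in this snippet; below is a hedged sketch of a typical HTB test of this kind. The handle/classid, rate and traffic generator are illustrative assumptions, not the original htb.sh:)
  #!/bin/sh
  # set up a minimal HTB tree, push some traffic, then read the class stats
  /sbin/tc qdisc del dev eth0 root 2>/dev/null
  /sbin/tc qdisc add dev eth0 root handle 1: htb default 10
  /sbin/tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit
  ping -c 20 -i 0.2 192.168.1.1 >/dev/null
  /sbin/tc -s class show dev eth0   # 'rate'/'pps' should be non-zero on OK kernels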
** Also affects: linux-meta (Ubuntu)
Importance: Undecided
Status: New
** Tags added: canonical-bootstack
https://bugs.launchpad.net/bugs/1379567
Title:
maas-proxy is an open proxy with no ACLs and listening on all
interfaces
@sinzui: closing this as invalid, as I later confirmed this to be an MTU
issue.
** Changed in: juju-core
Status: Triaged => Invalid
** Changed in: juju-core (Ubuntu)
Status: New => Invalid
** Tags added: canonical-bootstack
https://bugs.launchpad.net/bugs/1355813
Title:
Interface MTU management across MAAS/juju
This deployment has 2 metal nodes hosting LXC units (machines
0 and 18); 'juju deploy cs:ubuntu --to lxc:0' works ok, while
'--to lxc:18' was consistently failing as described above.
FYI I've worked around this by removing machine 18 down to
'maas ready' and reacquiring it from juju; now all new
Public bug reported:
FYI this is the same environment from lp#1392810 (1.18 -> 1.19 -> 1.20),
juju version: 1.20.11-trusty-amd64
New units deployed (to LXC over maas) stay at agent-state: pending:
http://paste.ubuntu.com/9057045/
#1 TCP connects ok to node0:17070
- at the unit:
/var/log/juju/machine-18-lxc-5.log: http://paste.ubuntu.com/9057287/
NOTE: the repeated log stanzas there are because of my manual restarts.
strace at both sides (grepped for specific sockets):
http://paste.ubuntu.com/9057691/,
mind the subsecond date diff.
https://bugs.launchpad.net/bugs/1393444
Also affected by this issue on 1.7rc1 (upgraded from 1.5.2); from /var/log/syslog:
Nov 6 06:25:01 duck squid3: Cannot open stdio:/var/log/maas/proxy/store.log:
(13) Permission denied
To clarify what's happening with the rabbitmq charm: for its units to be
able to cluster together, they need to refer to each other by hostname; see
[0], which was done based on the observed pattern as per comments #4 and #7 above.
[0] https://code.launchpad.net/~jjo/charms/trusty/rabbitmq-server/fix-
FYI this is preventing the current trusty/rabbitmq charm from deploying on MaaS
1.7beta + LXC; 1.5 had at least PTR resolution for every DHCP'd IP, e.g.:
IN PTR 10-1-57-22.maas. , while 1.7beta has none AFAICT.