Chris: confirming this bug is most likely fixed by
1:2014.1.5-0ubuntu3, as there have been no further alerts from missing
tun_ids since it was installed a week ago (recall we had been getting
several of those per week).
Thanks! :) --J
--
You received this bug notification because you are a me
Thanks Chris for the updates - FYI we've upgraded all of our compute nodes
to 1:2014.1.5-0ubuntu3 from proposed; no extra issues so far after some hours.
FYI this stack has ~30 nodes, ~1k+ active instances.
We expect this change to (obviously) stop those KeyError messages in the logs,
and likely also sto
FYI today's openvswitch-switch upgrade triggered a cluster-wide outage
on one (or more) of our production openstacks.
** Tags added: canonical-bootstack
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.la
** Also affects: keystone (Juju Charms Collection)
Importance: Undecided
Status: New
** Also affects: cinder (Juju Charms Collection)
Importance: Undecided
Status: New
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: glance (U
Public bug reported:
Context: openstack juju/maas deploy using 1510 charms release
on trusty, with:
openstack-origin: "cloud:trusty-liberty"
source: "cloud:trusty-updates/liberty"
* Several openstack nova- and neutron- services, at least:
nova-compute, neutron-server, nova-conductor,
neutron-o
** Changed in: lxc (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1511495
Title:
lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles
Public bug reported:
lxc packages:
*** 1.0.7-0ubuntu0.9 0
500 http://archive.ubuntu.com//ubuntu/ trusty-updates/main amd64
Packages
lxc apparmor profiles loading fails with:
root@host:~# apparmor_parser -r /etc/apparmor.d/lxc/lxc-default-with-mounting
Found reference to variable PROC,
@gz: I got this at our staging environment, where we re-deploy
HA'd juju + openstacks several times a week (or day); this is the 1st time
I've positively observed this behavior, so I'd guess it's unfortunately
a subtle race condition or similar.
I did save /var/lib/juju/db/, /var/log/syslog and /var/log/juju/machin
Thanks for the quick turnaround, could you please backport the fix
to 1507 trunk?
We have several stacks where we need to manually apply
the above workaround for corosync/pacemaker to behave properly,
and several coming down the line before 1510.
FYI while fixing hacluster trunk (essentially came o
FYI to re-check workaround (then possible actual fix), kicked corosync+pacemaker
on cinder, glance services deployed with juju:
$ juju run --service=cinder,glance "service corosync restart; service
pacemaker restart"
, which broke pacemaker start on all of them, with same "Invalid IPC
credentia
After trying several corosync/pacemaker restarts without luck,
I was able to work around this by adding an 'uidgid'
entry for hacluster:haclient:
* from /var/log/syslog:
Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]: [MAIN ] Denied
connection attempt from 108:113
$ getent passwd 108
hacl
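For reference, a minimal sketch of that workaround as a shell helper (the config path is parameterized here purely so it can be exercised outside a real deployment; the uidgid stanza itself is standard corosync.conf syntax):

```shell
# Hedged sketch of the workaround above: let the hacluster user
# (group haclient) past corosync's IPC credential check by appending
# a uidgid stanza. Config path parameterized for illustration only.
add_corosync_uidgid() {
    conf="${1:-/etc/corosync/corosync.conf}"
    cat >>"$conf" <<'EOF'
uidgid {
    uid: hacluster
    gid: haclient
}
EOF
}
```

followed by the same `service corosync restart; service pacemaker restart` as above.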
FYI I'm able to successfully drive netns inside LXC, manually then also
via openstack neutron-gateways, via this crafted aa profile:
/etc/apparmor.d/lxc/lxc-default-with-netns ->
https://gist.github.com/jjo/ff32b08e48e4a52bfc36
Public bug reported:
On an openstack HA kilo deployment using charms trunks,
several services fail to properly restart haproxy, leaving
old instances running; taking cinder/0 as an example:
$ juju ssh cinder/0 'pgrep -f haproxy | xargs ps -o pid,ppid,lstart,cmd -p;
egrep St.*ing.haproxy /var/lo
With maas 1.9 deprecating d-i, what option is left for maas swraid installs?
Please consider re-prioritizing.
--
https://bugs.launchpad.net/bugs/1356392
Title:
lacks sw
Public bug reported:
We're using bcache under the trusty HWE kernel (3.16.0-38-generic)
with bcache-tools 1.0.7-0ubuntu1 (built from src).
As trusty has util-linux 2.20.1, the udev rules for auto-registering
bcache devices are skipped:
# blkid was run by the standard udev rules
# It recognised bcache
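As a stopgap until util-linux/udev handle this, registration can be done by hand through sysfs; a hedged sketch (the register path is parameterized here only so it can be exercised without a real bcache setup, and writing the real path requires root):

```shell
# Hedged sketch: manually register a bcache backing/cache device when
# the udev rules are skipped. Writing a device node path to
# /sys/fs/bcache/register makes the kernel pick the device up.
register_bcache() {
    dev="$1"
    reg="${2:-/sys/fs/bcache/register}"
    echo "$dev" >"$reg"
}
```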
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
--
https://bugs.launchpad.net/bugs/1426589
Title:
tc class statistics rates are all zero after upgrade to Trusty
FYI peeking at patch-3.11-rc1 shows:
[...]
- struct gnet_stats_rate_est tcfc_rate_est;
+ struct gnet_stats_rate_est64 tcfc_rate_est;
with its corresponding addition:
+ * struct gnet_stats_rate_est64 - rate estimator
+ * @bps: current byte rate
+ * @pps: current packet rate
+
As per comment #13, I've added the following tags:
* kernel-fixed-upstream-3.10
* kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1
kernel-bug-exists-upstream-4.1-rc1
Please correct them if I misunderstood the naming convention.
FYI my narrowed bisect corresponds to:
*** OK ***:
lin
With kernels from http://kernel.ubuntu.com/~kernel-ppa/mainline/,
I've narrowed down to:
* OK: tc-class-stats.3.10.76-031076-generic.txt: rate 1600bit 2pps backlog 0b
0p requeues 0
* BAD: tc-class-stats.3.11.0-031100rc1-generic.txt: rate 0bit 0pps backlog 0b
0p requeues 0
* BAD: tc-class-sta
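The OK/BAD comparisons above come down to the estimator line in `tc -s class show dev eth0`; a small hedged sketch of extracting it (this parse helper is my own, not part of iproute2):

```shell
# Hedged sketch: pull the "rate <bytes> <pps>" estimator values out of
# `tc -s class show` output, to spot the all-zero regression quickly.
parse_tc_rate() {
    awk '{ for (i = 1; i < NF; i++)
               if ($i == "rate") print $(i+1), $(i+2) }'
}
```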
@peanlvch: FYI as per comment #4 I already tested v4.1-rc1-vivid, same
bad results.
FYI there are several changes at
https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.10.12
that refer to htb rate handling.
FYI tried iproute2-3.19.0, same zero rate output.
** Also affects: linux-meta (Ubuntu)
Importance: Undecided
Status: New
By installing different kernel versions (trusty, manual download and dpkg -i),
I narrowed this down to:
- linux-image-3.8.0-44-generic: OK
- linux-image-3.11.0-26-generic: BAD (zero rate counters).
FYI I used this script:
# cat htb.sh
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0 ro
FYI linux-image-4.1.0-040100rc1-generic_4.1.0-040100rc1.201504270235_i386.deb
(from ~kernel-ppa) failed the same way.
FYI this has been reported to debian also (kernel 3.16):
https://lists.debian.org/debian-kernel/2014/11/msg00288.html
** Tags added: canonical-bootstack
--
https://bugs.launchpad.net/bugs/1379567
Title:
maas-proxy is an open proxy with no ACLs and listening on all
interfaces
@sinzui: closing this as invalid, as I later confirmed this to be an MTU
issue.
** Changed in: juju-core
Status: Triaged => Invalid
** Changed in: juju-core (Ubuntu)
Status: New => Invalid
** Tags added: canonical-bootstack
--
https://bugs.launchpad.net/bugs/1355813
Title:
Interface MTU management across MAAS/juju
This deployment has 2 metal nodes hosting LXC units (machines
0 and 18); 'juju deploy cs:ubuntu --to lxc:0' works ok, while
'--to lxc:18' was consistently failing as described above.
FYI I've worked around this by removing machine 18 down to
'maas ready' and reacquiring it from juju; now all new
strace at both sides (grepped for specific sockets):
http://paste.ubuntu.com/9057691/,
mind the subsecond date diff.
--
https://bugs.launchpad.net/bugs/1393444
/var/log/juju/machine-18-lxc-5.log: http://paste.ubuntu.com/9057287/
NOTE there the repeated log stanzas are because of my manual restarts.
Public bug reported:
FYI this is the same environment from lp#1392810 (1.18->1.19->1.20),
juju version: 1.20.11-trusty-amd64
New units deployed (to LXC over maas) stay at "agent-state: pending":
http://paste.ubuntu.com/9057045/
#1 TCP connects ok to node0:17070
- at the unit:
ubuntu@juju-machine
Also affected by this issue: 1.7rc1 (upgraded from 1.5.2) at /var/log/syslog:
Nov 6 06:25:01 duck squid3: Cannot open stdio:/var/log/maas/proxy/store.log:
(13) Permission denied
To clarify what's happening with the rabbitmq charm: for its units to be
able to cluster together, they need to refer to each other by hostname; see
[0], which was done based on the observed pattern as per comments #4 and #7 above.
[0] https://code.launchpad.net/~jjo/charms/trusty/rabbitmq-server/fix-
nod
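Where the charm fix isn't in place yet, the manual workaround boils down to making each peer's private-address resolve to its hostname; a hedged sketch (the file path is parameterized purely for illustration, and the helper name is my own):

```shell
# Hedged sketch: make a peer's private-address resolve to its hostname
# (rabbit nodes address each other as rabbit@$HOSTNAME), by appending an
# /etc/hosts entry if one isn't already present.
add_hosts_entry() {
    ip="$1"; name="$2"; file="${3:-/etc/hosts}"
    grep -q "[[:space:]]$name\$" "$file" 2>/dev/null ||
        echo "$ip $name" >>"$file"
}
```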
1.7beta-generated named PTR file (FYI all entries there correspond to metal
hosts):
http://paste.ubuntu.com/8575506/
--
https://bugs.launchpad.net/bugs/1382190
Title:
LXCs
To clarify: comment #12 was for clustered rabbitmq (as already reported
above). FYI trying to force the charm to use plain IPs instead of hostnames
(recall that other units need to be able to refer to each other as
e.g. rabbit@$OTHER_HOST) fails with e.g.:
2014-10-16 18:51:13 INFO config-changed
FYI this is preventing the current trusty/rabbitmq charm from deploying on MaaS
1.7beta + LXC; 1.5 had at least PTR resolution for every dhcp'd IP, e.g.:
IN PTR 10-1-57-22.maas. , while 1.7beta has none AFAICT.
** Tags added: canonical-bootstack
--
https://bugs.launchpad.net/bugs/1376459
Title:
1.5.4+bzr2294-0ubuntu1.1 maas-dhcp package update stops external dhcpd
FYI nova services output also refers to hostnames (which are then
unresolvable):
Below, nova-compute services are deployed to the metal nodes, others
to LXCs on them:
$ nova service-list
++--+--+-+---+-...
| Binary | Host
FYI using the /next branch, r103 on trusty/icehouse, with single nova-c-c and
mysql units, I'm getting:
unit: nova-cloud-controller/0: machine: 0/lxc/8 agent-state: error
details: hook failed: "shared-db-relation-changed"
2014-09-30 19:01:36 INFO shared-db-relation-changed
sqlalchemy.exc.OperationalError
** Tags added: canonical-bootstack
--
https://bugs.launchpad.net/bugs/1372893
Title:
Neutron has an empty database after deploying juno on utopic
** Tags added: canonical-bootstack
--
https://bugs.launchpad.net/bugs/1274947
Title:
juju lxc instances deployed via MAAS don't have resolvable hostnames
This issue makes several HA services fail (or split-brain):
* rabbitmq-server:
Followup from above: this MP[0] forces the rabbit nodename to be the resolvable
hostname for private-address, and fixes its clustering.
* mongodb:
No way: mongod uses gethostname() at rs.initiate() to initialize clust
Hi Dustin, I'll send you the new certs.
--
https://bugs.launchpad.net/bugs/1304777
Title:
entropy.ubuntu.com SSL certificate needs to be updated
(sorry, early comment posting) - i.e. querying the version across
releases now would involve something like:
version=$( (lxc-version 2>/dev/null || lxc-start --version) | sed 's/.* //')
That 'sed' is to cope with lxc-version's textual formatting, while
lxc-start --version shows it straight.
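Wrapped up as a reusable helper, a hedged sketch of that cross-release probe (the function name is my own; it assumes one of the two commands exists on PATH):

```shell
# Hedged sketch: query the LXC version across releases, coping with
# lxc-version (present up to 1.0.0beta2) vs lxc-start --version.
# The sed strips lxc-version's "lxc version: " prefix; lxc-start
# --version already prints the bare number, which sed leaves intact.
lxc_version() {
    { lxc-version 2>/dev/null || lxc-start --version 2>/dev/null; } |
        sed 's/.* //'
}
```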
FYI "lxc-start --version" fails on precise (and saucy), which then
makes this check release-dependent (and/or dependent on the existence of the
lxc-version binary).
Public bug reported:
Until 1.0.0beta2 I was using the lxc-version command; beta3 seems to have
dropped it, leaving no way to programmatically query for it:
* precise, saucy:
$ lxc-version
lxc version: 1.0.0.alpha1
$ lxc-cgroup --version
[ ... ERROR ]
* trusty (as of ~today):
$ lxc-version
lxc-versio