[Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..

2016-03-15 Thread Edward Hope-Morley
** Tags added: sts-sru

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1393391

Title:
  neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-
  update_fanout..

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393391/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1382079] Re: Project selector not working

2016-03-15 Thread Edward Hope-Morley
** Changed in: horizon (Ubuntu Vivid)
   Status: In Progress => Won't Fix

** Changed in: horizon (Ubuntu Vivid)
 Assignee: Liang Chen (cbjchen) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1382079

Title:
  Project selector not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions



[Bug 1382079] Re: Project selector not working

2016-03-15 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1382079

Title:
  Project selector not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1382079/+subscriptions



[Bug 1546445] Re: support vhost user without specifying vhostforce

2016-02-23 Thread Edward Hope-Morley
** Also affects: qemu (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1546445

Title:
  support vhost user without specifying vhostforce

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1546445/+subscriptions



[Bug 1508428] Re: nova image-list failing with SSL enabled on Juno

2016-02-17 Thread Edward Hope-Morley
** Changed in: python-glanceclient (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1508428

Title:
  nova image-list failing with SSL enabled on Juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1508428/+subscriptions



[Bug 1520543] Re: syslog logging is broken in neutron-server

2016-02-03 Thread Edward Hope-Morley
Correct, this should have been resolved by the recent SRU to Wily and
trusty-liberty Cloud Archive -
https://launchpad.net/ubuntu/+source/python-oslo.log/1.11.0-1ubuntu0. Please
let me know if you are still seeing this issue with the latest version.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1520543

Title:
  syslog logging is broken in neutron-server

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-oslo.log/+bug/1520543/+subscriptions



[Bug 1521958] Re: rabbit: starvation of connections for reply

2016-02-02 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging (Ubuntu Vivid)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1521958

Title:
  rabbit: starvation of connections for reply

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1521958/+subscriptions



[Bug 1382079] Re: Project selector not working

2016-01-26 Thread Edward Hope-Morley
** Changed in: horizon (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: horizon (Ubuntu Wily)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1382079

Title:
  Project selector not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382079/+subscriptions



[Bug 1499620] Re: [SRU] Unintended assignment of "syslog"

2016-01-18 Thread Edward Hope-Morley
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  [SRU] Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1460164] Re: restart of openvswitch-switch causes instance network down when l2population enabled

2015-12-20 Thread Edward Hope-Morley
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1460164

Title:
  restart of openvswitch-switch causes instance network down when
  l2population enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460164/+subscriptions



[Bug 1499620] Re: Unintended assignment of "syslog"

2015-12-15 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
+  * Fixes a syslog handler issue which is causing problems with Neutron services
+    in OpenStack Liberty.
+ 
+ [Test Case]
+ 
+  * Deploy neutron services with use_syslog=True in /etc/neutron/neutron.conf
+    and check /var/log/neutron/neutron-server.log for errors. Also check that
+    neutron logs are going to /var/log/syslog.
+ 
+ [Regression Potential]
+ 
+  * None
+ 
  Identifier "syslog" is unintendedly reassigned in _setup_logging_from_conf()
  with OSSysLogHandler in oslo_log/log.py.
  It causes an error in _find_facility() which expects "syslog" as module.

** Summary changed:

- Unintended assignment of "syslog"
+ [SRU] Unintended assignment of "syslog"

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  [SRU] Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1499620] Re: [SRU] Unintended assignment of "syslog"

2015-12-15 Thread Edward Hope-Morley
** Patch added: "lp1499620.debdiff"
   
https://bugs.launchpad.net/ubuntu/wily/+source/python-oslo.log/+bug/1499620/+attachment/4535028/+files/lp1499620.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  [SRU] Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1499620] Re: [SRU] Unintended assignment of "syslog"

2015-12-15 Thread Edward Hope-Morley
** Patch removed: "lp1499620.debdiff"
   
https://bugs.launchpad.net/ubuntu/wily/+source/python-oslo.log/+bug/1499620/+attachment/4535089/+files/lp1499620.debdiff

** Patch added: "lp1499620.debdiff"
   
https://bugs.launchpad.net/ubuntu/wily/+source/python-oslo.log/+bug/1499620/+attachment/4535091/+files/lp1499620.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  [SRU] Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1499620] Re: [SRU] Unintended assignment of "syslog"

2015-12-15 Thread Edward Hope-Morley
** Patch removed: "lp1499620.debdiff"
   
https://bugs.launchpad.net/oslo.log/+bug/1499620/+attachment/4535028/+files/lp1499620.debdiff

** Patch added: "lp1499620.debdiff"
   
https://bugs.launchpad.net/oslo.log/+bug/1499620/+attachment/4535089/+files/lp1499620.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  [SRU] Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1499620] Re: Unintended assignment of "syslog"

2015-12-10 Thread Edward Hope-Morley
** Changed in: python-oslo.log (Ubuntu Wily)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: python-oslo.log (Ubuntu Wily)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.log in Ubuntu.
https://bugs.launchpad.net/bugs/1499620

Title:
  Unintended assignment of "syslog"

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1499620/+subscriptions



[Bug 1355813] Re: Interface MTU management across MAAS/juju

2015-11-25 Thread Edward Hope-Morley
** Tags removed: cts
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1355813

Title:
  Interface MTU management across MAAS/juju

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1355813/+subscriptions



[Bug 1382774] Re: Postgresql installation for MAAS fails on locales missing language packs

2015-11-25 Thread Edward Hope-Morley
** Tags removed: cts
** Tags added: tests

** Tags removed: tests
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to dbconfig-common in Ubuntu.
https://bugs.launchpad.net/bugs/1382774

Title:
  Postgresql installation for MAAS fails on locales missing language
  packs

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1382774/+subscriptions



[Bug 1498370] Re: [SRU] DHCP agent: interface unplug leads to exception

2015-11-17 Thread Edward Hope-Morley
@pitti Apologies, this SRU was incorrectly targeted. It is intended as
an SRU for OpenStack Kilo and therefore should be targeted at Ubuntu
Vivid (and, implicitly, the Trusty Kilo Ubuntu Cloud Archive). The current
Kilo version in Vivid is 1:2015.1.2-0ubuntu1, so this SRU should have
version 1:2015.1.2-0ubuntu2. I will resubmit an updated debdiff.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1498370

Title:
  [SRU] DHCP agent: interface unplug leads to exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions



[Bug 1498370] Re: [SRU] DHCP agent: interface unplug leads to exception

2015-11-17 Thread Edward Hope-Morley
** Patch removed: "lp1498370.patch"
   
https://bugs.launchpad.net/neutron/+bug/1498370/+attachment/4516816/+files/lp1498370.patch

** Patch added: "lp1498370.patch"
   
https://bugs.launchpad.net/neutron/+bug/1498370/+attachment/4520585/+files/lp1498370.patch

** Changed in: neutron (Ubuntu Trusty)
   Status: Incomplete => In Progress

** Changed in: neutron (Ubuntu Vivid)
   Status: Won't Fix => In Progress

** Changed in: neutron (Ubuntu Trusty)
   Status: In Progress => New

** Changed in: neutron (Ubuntu Trusty)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

** Changed in: neutron (Ubuntu Vivid)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1498370

Title:
  [SRU] DHCP agent: interface unplug leads to exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions



[Bug 1498370] Re: [SRU] DHCP agent: interface unplug leads to exception

2015-11-11 Thread Edward Hope-Morley
** Patch removed: "lp1498370.debdiff"
   
https://bugs.launchpad.net/ubuntu/vivid/+source/neutron/+bug/1498370/+attachment/4516815/+files/lp1498370.debdiff

** Patch added: "lp1498370.patch"
   
https://bugs.launchpad.net/ubuntu/vivid/+source/neutron/+bug/1498370/+attachment/4516816/+files/lp1498370.patch

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1498370

Title:
  [SRU] DHCP agent: interface unplug leads to exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions



[Bug 1498370] Re: DHCP agent: interface unplug leads to exception

2015-11-11 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
+ There are edge cases when the DHCP agent attempts to unplug an interface
+ and the device does not exist. This patch ensures that the agent can
+ tolerate this case.
+ 
+ [Test Case]
+ 
+ * create subnet with dhcp enabled
+ * set pdb.set_trace() in neutron.agent.linux.dhcp.DeviceManager.destroy()
+ * manually delete ns- device in tenant namespace
+ * pdb continue and should not raise any error
+ 
+ [Regression Potential]
+ 
+ None
+ 
  2015-09-22 01:23:42.612 ERROR neutron.agent.dhcp.agent [-] Unable to disable dhcp for c543db4d-e077-488f-b58c-5805f63f86b6.
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent getattr(driver, action)(**action_kwargs)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 221, in disable
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent self._destroy_namespace_and_port()
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 226, in _destroy_namespace_and_port
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent self.device_manager.destroy(self.network, self.interface_name)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1223, in destroy
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent self.driver.unplug(device_name, namespace=network.namespace)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 358, in unplug
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent tap_name = self._get_tap_name(device_name, prefix)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 299, in _get_tap_name
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent dev_name = dev_name.replace(prefix or self.DEV_NAME_PREFIX,
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent AttributeError: 'NoneType' object has no attribute 'replace'
- 2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
+ 2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent
  2015-09-22 01:23:42.616 INFO neutron.agent.dhcp.agent [-] Synchronizing state complete
  
  The reason is the device is None
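The failing call chain bottoms out in `_get_tap_name()`, which calls `.replace()` on a device name that can be `None` once the interface is already unplugged. A minimal sketch of the guard (a hypothetical simplification of the real neutron code; the function names and "tap" rewrite here are illustrative):

```python
DEV_NAME_PREFIX = "ns-"  # prefix used for DHCP namespace devices


def get_tap_name_buggy(dev_name, prefix=None):
    # The failing path: dev_name is None when the device no longer
    # exists, and None has no .replace() -> AttributeError.
    return dev_name.replace(prefix or DEV_NAME_PREFIX, "tap")


def get_tap_name_fixed(dev_name, prefix=None):
    # Tolerate a missing device instead of crashing the dhcp-agent.
    if dev_name is None:
        return None
    return dev_name.replace(prefix or DEV_NAME_PREFIX, "tap")


try:
    get_tap_name_buggy(None)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'replace'

print(get_tap_name_fixed("ns-c543db4d-e0"))  # tapc543db4d-e0
```

The actual patch makes the unplug path tolerant of the already-deleted device rather than assuming a name is always present.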

** Summary changed:

- DHCP agent: interface unplug leads to exception
+ [SRU] DHCP agent: interface unplug leads to exception

** Patch added: "lp1498370.debdiff"
   
https://bugs.launchpad.net/ubuntu/vivid/+source/neutron/+bug/1498370/+attachment/4516815/+files/lp1498370.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1498370

Title:
  [SRU] DHCP agent: interface unplug leads to exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions



[Bug 1498370] Re: DHCP agent: interface unplug leads to exception

2015-11-09 Thread Edward Hope-Morley
** Branch linked: lp:~hopem/neutron/kilo-lp1498370

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1498370

Title:
  DHCP agent: interface unplug leads to exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions





[Bug 1506257] Re: [SRU] rpcapi version mismatch possible on upgrade

2015-11-02 Thread Edward Hope-Morley
I've performed the verification tests described above and they all pass.

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1506257

Title:
  [SRU] rpcapi version mismatch possible on upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506257/+subscriptions



[Bug 1508428] Re: nova image-list failing with SSL enabled on Juno

2015-10-22 Thread Edward Hope-Morley
I can confirm that the patch from bug 1362766 does fix this issue. I will
propose the SRU on bug 1362766, so please see that bug for progress.

** Changed in: python-glanceclient (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: python-glanceclient (Ubuntu)
   Status: New => In Progress

** Changed in: python-glanceclient (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1508428

Title:
  nova image-list failing with SSL enabled on Juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1508428/+subscriptions



[Bug 1508428] Re: nova image-list failing with SSL enabled on Juno

2015-10-22 Thread Edward Hope-Morley
Actually, I'll propose the SRU on bug 1347150 instead, since it is the
original bug raised for this issue.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1508428

Title:
  nova image-list failing with SSL enabled on Juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1508428/+subscriptions



[Bug 1508428] Re: nova image-list failing with SSL enabled on Juno

2015-10-21 Thread Edward Hope-Morley
This bug is actually in python-glanceclient and should be fixed by
https://bugs.launchpad.net/python-glanceclient/+bug/1362766, which landed
in 0.14.2 (the Juno UCA has 0.14.0).

** Also affects: python-glanceclient (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1508428

Title:
  nova image-list failing with SSL enabled on Juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1508428/+subscriptions



[Bug 1508428] Re: nova image-list failing with SSL enabled on Juno

2015-10-21 Thread Edward Hope-Morley
I think you are actually hitting a bug in the Juno version of
glanceclient: https://bugs.launchpad.net/python-glanceclient/+bug/1442664.
I need to double-check, but if correct we should backport that patch.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1508428

Title:
  nova image-list failing with SSL enabled on Juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1508428/+subscriptions



[Bug 1506257] Re: [SRU] rpcapi version mismatch possible on upgrade

2015-10-16 Thread Edward Hope-Morley
** Tags added: ubuntu-sponsors

** Patch added: "lp1506257.debdiff"
   
https://bugs.launchpad.net/ubuntu/trusty/+source/nova/+bug/1506257/+attachment/4496906/+files/lp1506257.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1506257

Title:
  [SRU] rpcapi version mismatch possible on upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506257/+subscriptions



[Bug 1506257] Re: rpcapi version mismatch possible on upgrade

2015-10-15 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
+   Resolves issue described below by making rpc client tolerant of incompatible
+   remote agent for reserve_block_device_name() calls which can occur during
+   upgrades if compute services are upgraded after clients e.g. nova-api. The
+   proposed fix will cause the client to fallback to a known good/suppported api
+   version.
+ 
+ [Test Case]
+ 
+   * Deploy openstack with all nova services on the same version and test that
+ volume operations, particularly attach and detach are working correctly.
+ 
+   * Deploy Openstack with only Nova client services upgraded (i.e. don't
+ upgrade nova-compute) and test that volume operations, particularly attach
+ and detach are working correctly.
+ 
+   * Perform same tests as for 1349888 to ensure the fix is still
+ working.
+ 
+ [Regression Potential]
+ 
+   None.
+ 
  The SRU recently landed for https://bugs.launchpad.net/nova/+bug/1349888
  introduced a potential upgrade regression if nova services are not
  upgraded all at once.
  
  2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack RemoteError: Remote error: UnsupportedVersion Endpoint does not support RPC version 3.35
  2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\nincoming.message))\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 185, in _dispatch\nraise UnsupportedVersion(version)\n', u'UnsupportedVersion: Endpoint does not support RPC version 3.35\n'].
  
  Basically, if nova-compute services are updated after nova-api services
  you will hit this issue if you perform volume operations. A simple
  solution, if possible, is to upgrade nova-compute services so that they
  are all in sync but I still want to remove the possibility for
  regression while keeping the fix from 1349888. I will propose an SRU
  shortly to resolve this.

** Summary changed:

- rpcapi version mismatch possible on upgrade
+ [SRU] rpcapi version mismatch possible on upgrade

** Description changed:

  [Impact]
  
-   Resolves issue described below by making rpc client tolerant of incompatible
-   remote agent for reserve_block_device_name() calls which can occur during
-   upgrades if compute services are upgraded after clients e.g. nova-api. The
-   proposed fix will cause the client to fallback to a known good/suppported api
-   version.
+ Resolves issue described below by making rpc client tolerant of incompatible
+ remote agent for reserve_block_device_name() calls which can occur during
+ upgrades if compute services are upgraded after clients e.g. nova-api. The
+ proposed fix will cause the client to fallback to a known good/supported api
+ version.
  
  [Test Case]
  
-   * Deploy openstack with all nova services on the same version and test that
- volume operations, particularly attach and detach are working correctly.
+   * Deploy openstack with all nova services on the same version and test that
+ volume operations, particularly attach and detach are working correctly.
  
-   * Deploy Openstack with only Nova client services upgraded (i.e. don't
- upgrade nova-compute) and test that volume operations, particularly attach
- and detach are working correctly.
+   * Deploy Openstack with only Nova client services upgraded (i.e. don't
+ upgrade nova-compute) and test that volume operations, particularly attach
+ and detach are working correctly.
  
-   * Perform same tests as for 1349888 to ensure the fix is still
+   * Perform same tests as for 1349888 to ensure the fix is still
  working.
  
  [Regression Potential]
  
-   None.
+   None.
  
  The SRU recently landed for https://bugs.launchpad.net/nova/+bug/1349888
  introduced a potential upgrade regression if nova services are not
  upgraded all at once.
  
  2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack RemoteError: Remote error: UnsupportedVersion Endpoint does not support RPC version 3.35
  2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\nincoming.message))\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 185, in _dispatch\nraise UnsupportedVersion(version)\n', u'UnsupportedVersion: Endpoint does not support RPC version 3.35\n'].
  
  Basically, if nova-compute services are updated after nova-api services
  you will hit this issue if you perform volume operations. A simple
  solution, if possible, is to upgrade nova-compute services so that they
  are all in sync but I still want to remove the possibility for
  regression while keeping the fix from 1349888. I will propose an SRU
  shortly to resolve this.


[Bug 1506257] Re: rpcapi version mismatch possible on upgrade

2015-10-15 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
   Status: In Progress => Fix Released

** Changed in: nova (Ubuntu)
     Assignee: Edward Hope-Morley (hopem) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1506257

Title:
  rpcapi version mismatch possible on upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506257/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1506257] [NEW] rpcapi version mismatch possible on upgrade

2015-10-14 Thread Edward Hope-Morley
Public bug reported:

The SRU recently landed for https://bugs.launchpad.net/nova/+bug/1349888
introduced a potential upgrade regression if nova services are not
upgraded all at once.

2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack RemoteError: Remote error: UnsupportedVersion Endpoint does not support RPC version 3.35
2015-10-14 20:45:00.778 10909 TRACE nova.api.openstack [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n    incoming.message))\n', u'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 185, in _dispatch\n    raise UnsupportedVersion(version)\n', u'UnsupportedVersion: Endpoint does not support RPC version 3.35\n'].

Basically, if nova-compute services are updated after nova-api services
you will hit this issue if you perform volume operations. A simple
solution, if possible, is to upgrade nova-compute services so that they
are all in sync but I still want to remove the possibility for
regression while keeping the fix from 1349888. I will propose an SRU
shortly to resolve this.
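The UnsupportedVersion trace above comes from the dispatcher's version check. The following is an illustrative sketch only (the names mirror oslo.messaging conventions but this is not its actual code) of why an upgraded nova-api talking to a not-yet-upgraded nova-compute fails:

```python
class UnsupportedVersion(Exception):
    pass

def version_is_compatible(endpoint_version, message_version):
    """Compatible when major versions match and the endpoint's minor
    version is at least the message's minor version."""
    e_major, e_minor = (int(p) for p in endpoint_version.split('.'))
    m_major, m_minor = (int(p) for p in message_version.split('.'))
    return e_major == m_major and e_minor >= m_minor

def dispatch(endpoint_version, message_version):
    """Mimic the dispatcher check that produced the trace above."""
    if not version_is_compatible(endpoint_version, message_version):
        raise UnsupportedVersion(
            'Endpoint does not support RPC version %s' % message_version)
    return 'dispatched'

# An upgraded nova-api sends a 3.35 message; an older nova-compute whose
# endpoint only knows 3.23 rejects it:
try:
    dispatch('3.23', '3.35')
except UnsupportedVersion as exc:
    print(exc)
```

This is why upgrading all services together (or capping the client-side RPC version until computes are upgraded) avoids the error.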

** Affects: nova (Ubuntu)
 Importance: High
 Assignee: Edward Hope-Morley (hopem)
 Status: In Progress

** Changed in: nova (Ubuntu)
   Status: New => In Progress

** Changed in: nova (Ubuntu)
   Importance: Undecided => High

** Changed in: nova (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1506257

Title:
  rpcapi version mismatch possible on upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506257/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1506257] Re: rpcapi version mismatch possible on upgrade

2015-10-14 Thread Edward Hope-Morley
** Branch linked: lp:~hopem/nova/icehouse-lp1506257

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1506257

Title:
  rpcapi version mismatch possible on upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506257/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1499643] Re: ipv6 mode vip mysql grant not added unless vip configured on iface

2015-09-25 Thread Edward Hope-Morley
** Package changed: charms => keystone (Juju Charms Collection)

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu)
   Importance: Undecided
   Status: New

** Package changed: cinder (Ubuntu) => cinder (Juju Charms Collection)

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: swift-proxy (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: ceilometer (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: ceilometer (Juju Charms Collection)
   Status: New => In Progress

** Changed in: cinder (Juju Charms Collection)
   Status: New => In Progress

** Changed in: glance (Juju Charms Collection)
   Status: New => In Progress

** Changed in: keystone (Juju Charms Collection)
   Status: New => In Progress

** Changed in: neutron-api (Juju Charms Collection)
   Status: New => In Progress

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Status: New => In Progress

** Changed in: swift-proxy (Juju Charms Collection)
   Status: New => In Progress

** Changed in: ceilometer (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: cinder (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: glance (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: keystone (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: neutron-api (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: swift-proxy (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: ceilometer (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: cinder (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: glance (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: keystone (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: neutron-api (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova-cloud-controller (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: swift-proxy (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to cinder in Ubuntu.
https://bugs.launchpad.net/bugs/1499643

Title:
  ipv6 mode vip mysql grant not added unless vip configured on iface

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1499643/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1499435] [NEW] trusty-backports haproxy no longer needed for ipv6 as of liberty

2015-09-24 Thread Edward Hope-Morley
Public bug reported:

As of the OpenStack Liberty release, the Ubuntu Cloud Archive will carry
haproxy (starting with 1.5.*), so we will no longer need to install
haproxy from trusty-backports, i.e. each charm currently has:

# NOTE(xianghui): Need to install haproxy(1.5.3) from trusty-backports
# to support ipv6 address, so check is required to make sure not
# breaking other versions, IPv6 only support for >= Trusty
if ubuntu_rel == 'trusty':
    add_source('deb http://archive.ubuntu.com/ubuntu trusty-backports'
               ' main')
    apt_update()
    apt_install('haproxy/trusty-backports', fatal=True)
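The condition the charms actually need reduces to a small predicate. `needs_backports_haproxy` below is a hypothetical helper name (not part of charm-helpers), sketching when the backports install remains necessary:

```python
def needs_backports_haproxy(ubuntu_rel, openstack_rel):
    """Hypothetical helper: the trusty-backports haproxy is only needed on
    trusty when running an OpenStack release older than liberty (whose
    cloud archive carries haproxy >= 1.5 itself)."""
    releases = ['icehouse', 'juno', 'kilo', 'liberty']
    return (ubuntu_rel == 'trusty' and
            releases.index(openstack_rel) < releases.index('liberty'))
```

On liberty and later the special case can therefore be dropped and a plain `apt_install('haproxy')` used instead.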

** Affects: cinder (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: glance (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: keystone (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: neutron-api (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: nova-cloud-controller (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: openstack-dashboard (Juju Charms Collection)
 Importance: Medium
 Status: New

** Affects: swift-proxy (Juju Charms Collection)
 Importance: Medium
 Status: New


** Tags: openstack sts

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** Package changed: glance (Ubuntu) => glance (Juju Charms Collection)

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: swift-proxy (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Summary changed:

- trusty-backports no longer needed for ipv6 as of liberty
+ trusty-backports haproxy no longer needed for ipv6 as of liberty

** Changed in: cinder (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: glance (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: neutron-api (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: nova-cloud-controller (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: openstack-dashboard (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: swift-proxy (Juju Charms Collection)
Milestone: None => 15.10

** Changed in: cinder (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: glance (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: neutron-api (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: swift-proxy (Juju Charms Collection)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to glance in Ubuntu.
https://bugs.launchpad.net/bugs/1499435

Title:
  trusty-backports haproxy no longer needed for ipv6 as of liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/cinder/+bug/1499435/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-09-20 Thread Edward Hope-Morley
@lmihaiescu Hi, this patch is already in stable/juno and as such will be
included in the next point release of Juno (2014.2.4) but is not yet
targeted for SRU into Juno.

** Changed in: nova (Ubuntu Trusty)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-17 Thread Edward Hope-Morley
Deployed Trusty Icehouse with this nova version, ran the test described
above and lgtm +1.

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1349888

Title:
  [SRU] Attempting to attach the same volume multiple times can cause
  bdm record for existing attachment to be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349888/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1349888] Re: Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-08 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu)
   Status: New => In Progress

** Changed in: nova (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
   Status: New => In Progress

** Branch linked: lp:~hopem/nova/icehouse-lp1349888

** Summary changed:

- Attempting to attach the same volume multiple times can cause bdm record for 
existing attachment to be deleted.
+ [SRU] Attempting to attach the same volume multiple times can cause bdm 
record for existing attachment to be deleted.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1349888

Title:
  [SRU] Attempting to attach the same volume multiple times can cause
  bdm record for existing attachment to be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349888/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-08 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
+  * Ensure attaching already attached volume to second instance does not
+interfere with attached instance volume record.
+ 
+ [Test Case]
+ 
+  * Create cinder volume vol1 and two instances vm1 and vm2
+ 
+  * Attach vol1 to vm1 and check that attach was successful by doing:
+ 
+- cinder list
+- nova show 
+ 
+e.g. http://paste.ubuntu.com/12314443/
+ 
+  * Attach vol1 to vm2 and check that attach fails and, crucially, that the
+first attach is unaffected (as above). You also check the Nova db as
+follows:
+ 
+select * from block_device_mapping where source_type='volume' and \
+(instance_uuid='' or instance_uuid='');
+ 
+from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
+shows that vol1 is attached to vm1 and vm2 attach failed.
+ 
+  * finally detach vol1 from vm1 and ensure that it succeeds.
+ 
+ [Regression Potential]
+ 
+  * none
+ 
+    
+ 
  nova assumes there is only ever one bdm per volume. When an attach is
  initiated a new bdm is created, if the attach fails a bdm for the volume
  is deleted however it is not necessarily the one that was just created.
  The following steps show how a volume can get stuck detaching because of
  this.
- 
  
  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   |     lvm1    |  false   |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+
  
  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)
  
  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  
- 
- 
  2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling:  can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File 

[Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-08 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu)
   Status: In Progress => Fix Released

** Description changed:

  [Impact]
  
-  * Ensure attching already attached volume to second instance does not
-interfere with attached instance volume record.
+  * Ensure attaching already attached volume to second instance does not
+    interfere with attached instance volume record.
  
  [Test Case]
  
-  * Create cinder volume vol1 and two instances vm1 and vm2
+  * Create cinder volume vol1 and two instances vm1 and vm2
  
-  * Attach vol1 to vm1 and check that attach was successful by doing:
+  * Attach vol1 to vm1 and check that attach was successful by doing:
  
-- cinder list
-- nova show 
+    - cinder list
+    - nova show 
  
-e.g. http://paste.ubuntu.com/12314443/
+    e.g. http://paste.ubuntu.com/12314443/
  
-  * Attach vol1 to vm2 and check that attach fails and, crucially, that the
-first attach is unaffected (as above). You also check the Nova db as
-follows:
+  * Attach vol1 to vm2 and check that attach fails and, crucially, that the
+    first attach is unaffected (as above). You can also check the Nova db as
+    follows:
  
-select * from block_device_mapping where source_type='volume' and \
-(instance_uuid='' or instance_uuid='');
+    select * from block_device_mapping where source_type='volume' and \
+    (instance_uuid='' or instance_uuid='');
  
-from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
-shows that vol1 is attached to vm1 and vm2 attach failed.
+    from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
+    shows that vol1 is attached to vm1 and vm2 attach failed.
  
-  * finally detach vol1 from vm1 and ensure that it succeeds.
+  * finally detach vol1 from vm1 and ensure that it succeeds.
  
  [Regression Potential]
  
-  * none
+  * none
  
     
  
  nova assumes there is only ever one bdm per volume. When an attach is
  initiated a new bdm is created, if the attach fails a bdm for the volume
  is deleted however it is not necessarily the one that was just created.
  The following steps show how a volume can get stuck detaching because of
  this.
  
  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   |     lvm1    |  false   |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+
  
  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)
  
  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  

[Bug 1450563] Re: python-nova is missing gettextutils.py breaking nova-compute-flex

2015-08-21 Thread Edward Konetzko
Has anyone found a workaround for this bug on 14.04.2 LTS using repo deb
http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/kilo
main? I just upgraded from juno to kilo and this bug is still an
issue.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1450563

Title:
  python-nova is missing gettextutils.py breaking nova-compute-flex

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1450563/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1453188] Re: Incorrect path to binary in neutron-plugin-linuxbridge-agent

2015-08-14 Thread Edward Hope-Morley
Sam, Openstack Kilo is tracked under Ubuntu Vivid so once
1:2015.1.1-0ubuntu2 has been verified and lands in vivid-updates it will
then be synced into the Kilo Cloud Archive.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1453188

Title:
  Incorrect path to binary in neutron-plugin-linuxbridge-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1453188/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1307445] Re: openstack: instances with multiple nics on the same network don't have deterministic private-addresses

2015-08-11 Thread Edward Hope-Morley
Michael, please see comments in bug 1435283 which, if I understand
correctly, is the same issue (so essentially a duplicate). Comment #11
explains one possible solution
to the problem. Essentially what we want is for Juju to pick a
port/interface on unit start and remember it (e.g. by uuid) so that if
extra ports/interfaces are added to that unit, the private address
should always be derived from that remembered port/interface. Shout or
ping me in irc (dosaboy) if you want more clarification.

** Changed in: juju-core
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1307445

Title:
  openstack: instances with multiple nics on the same network don't have
  deterministic private-addresses

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1307445/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-08-06 Thread Edward Hope-Morley
Finally got to the bottom of this. The issue lies in python-amqp rather
than python-oslo.messaging. The current trusty version of python-amqp
(1.3.3) has a bug that is fixed in 1.4.4 (see
http://amqp.readthedocs.org/en/latest/changelog.html#version-1-4-4). I
tried backporting the Juno/Utopic version (1.4.5) for Trusty and
everything works just fine now. I will shortly propose an SRU to get
python-amqp fixed in Trusty.

** Package changed: python-oslo.messaging (Ubuntu) => python-amqp
(Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1454070] Re: Upgrading to 1.3.0-0ubuntu1.1 causes a large number of connections

2015-08-06 Thread Edward Hope-Morley
I suspect this is caused by the same issue as bug 1472712.

** Package changed: oslo.messaging (Ubuntu) => python-oslo.messaging
(Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1454070

Title:
  Upgrading to 1.3.0-0ubuntu1.1 causes a large number of connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1454070/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1290920] Re: non-default lxc-dir breaks local provider

2015-08-06 Thread Edward Hope-Morley
** Tags removed: cts
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1290920

Title:
  non-default lxc-dir breaks local provider

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1290920/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-08-05 Thread Edward Hope-Morley
OK, upon further investigation I have found some trace of a root cause.
Oslo.messaging always uses a timeout of 1 second when polling queues and
connections. This appears to be too small when using ssl and frequently
results in SSLError/timeout, which causes all threads to fail, reconnect
and fail again repeatedly, thus resulting in the number of connections
rising fast and rpc not working, hence why compute and conductor are not
able to communicate. I've played around with alternative timeout values
and I get much better results even with a value of 2s instead of 1s.
I'll propose an initial workaround patch shortly so we can get out of
this bind for now, but I think we'll ultimately need a more intelligent
solution than what oslo.messaging supports in this version.
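For illustration only (hypothetical names, not the oslo.messaging implementation, with a fake stand-in for the kombu connection): the idea of the workaround is that a poll timeout on a slow SSL connection should be treated as "no messages yet" and retried, rather than as a dead connection to tear down and re-establish:

```python
import socket

class FakeConnection:
    """Stand-in for a kombu-style connection: drain_events raises
    socket.timeout when nothing arrives within the poll window."""
    def __init__(self, events):
        self.events = list(events)

    def drain_events(self, timeout=None):
        if not self.events:
            raise socket.timeout('timed out')
        self.events.pop()

def poll(connection, poll_timeout=2.0):
    """Drain one event; a timeout means 'no work yet', not 'broken'."""
    try:
        connection.drain_events(timeout=poll_timeout)
        return True   # handled a message
    except socket.timeout:
        return False  # nothing within poll_timeout; caller just retries
```

With a 1s window, slow SSL handshakes/reads were frequently misread as failures; widening the window (here 2s) gives the broker time to respond.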

** Changed in: python-oslo.messaging (Ubuntu)
   Status: Confirmed => In Progress

** Changed in: python-oslo.messaging (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: python-oslo.messaging (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-08-04 Thread Edward Hope-Morley
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-08-04 Thread Edward Hope-Morley
A bit more info from my end. I've been trying out different scenarios
and it seems that this is constrained to Trusty Icehouse using
python-oslo.messaging version 1.3.0-0ubuntu1.2 configured to connect to
rabbitmq-server using ssl e.g. my nova.conf has:

rabbit_userid = nova
rabbit_virtual_host = openstack
rabbit_password = 
gr6Mx2FJhC8NH3P4dBRGH8tYT39s6LLcMfJChKM6dtb3rpN5wfkRWVBcMLdhqp58
rabbit_host = 10.5.6.86
rabbit_use_ssl = True
rabbit_port = 5671
kombu_ssl_ca_certs = /etc/nova/rabbit-client-ca.pem

I've played around with reverting back to 1.3.0-0ubuntu1 (which does not
appear to exhibit the issue) and re-adding patches one-by-one, and have
found that simply adding the patch for bug 1400268 causes the issue to
occur. So the question is: what is it about that patch that causes these
issues?

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-07-24 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Trusty)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-24 Thread Edward Hope-Morley
New proposed build was successful. I have now deployed and tested that
this works as expected.

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-23 Thread Edward Hope-Morley
Turns out the problem here was a dodgy python-oslo.messaging in proposed
from bug 1362863. This has now been removed, so hopefully a re-spin will
succeed.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-20 Thread Edward Hope-Morley
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-18 Thread Edward Hope-Morley
The package uploaded to proposed has failed to build and I see a lot of
these messages in the build log:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/sphinx/ext/autodoc.py", line 335, in import_object
    __import__(self.modname)
  File "/«PKGBUILDDIR»/nova/tests/__init__.py", line 37, in <module>
    % os.environ.get('EVENTLET_NO_GREENDNS'))
ImportError: eventlet imported before nova/cmd/__init__ (env var set to None)
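For context, this error comes from an import-order guard in nova's test package of roughly the following shape (a simplified, hypothetical sketch for illustration, not nova's actual code):

```python
import os
import sys

def check_eventlet_not_imported():
    """Sketch of the guard: fail fast if eventlet was already imported
    before the entry point had a chance to configure it."""
    if 'eventlet' in sys.modules:
        raise ImportError(
            'eventlet imported before nova/cmd/__init__ (env var set to %s)'
            % os.environ.get('EVENTLET_NO_GREENDNS'))
```

In the failed build above, the guard fires because Sphinx autodoc imports nova modules in an order that pulls in eventlet before nova/tests/__init__.py runs.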

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-18 Thread Edward Hope-Morley
FWIW I just tried building this myself and it succeeded, so I'm not sure
why the build in the proposed pocket fails:


https://launchpad.net/~hopem/+archive/ubuntu/trusty-sru-testing-lp1459046/+build/7667621

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1196924] Re: Stop and Delete operations should give the Guest a chance to shutdown

2015-07-15 Thread Edward Hope-Morley
** Branch linked: lp:~hopem/nova/icehouse-sru-lp1459046

** Patch removed: "trusty nova patch"
   
https://bugs.launchpad.net/nova/+bug/1196924/+attachment/4423093/+files/nova-2014.1.4-0ubuntu2.2-lp1196924.patch

** Changed in: nova (Ubuntu Trusty)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1196924

Title:
  Stop and Delete operations should give the Guest a chance to shutdown

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196924/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-15 Thread Edward Hope-Morley
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-15 Thread Edward Hope-Morley
** Branch linked: lp:~hopem/nova/icehouse-sru-lp1459046

** Branch linked: lp:~hopem/nova/juno-sru-lp1459046

** No longer affects: oslo.log

** Patch removed: "trusty-icehouse.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+attachment/4425932/+files/trusty-icehouse.debdiff

** Patch removed: "utopic-juno.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+attachment/4425933/+files/utopic-juno.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-15 Thread Edward Hope-Morley
** Also affects: oslo.log
   Importance: Undecided
   Status: New

** Changed in: oslo.log
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-10 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Trusty)
 Assignee: Liang Chen (cbjchen) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
 Assignee: Liang Chen (cbjchen) => Edward Hope-Morley (hopem)

** Changed in: nova
   Status: In Progress => New

** Changed in: nova
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-10 Thread Edward Hope-Morley
T/U/V verified *-proposed builds. Deployed OpenStack I/J/K with these
packages and all seems good.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-10 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Utopic)
   Status: New => In Progress

** Changed in: nova (Ubuntu Utopic)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Utopic)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => High

** Description changed:

  [Impact]
  
-  * Nova services fail to start because they cannot connect to rsyslog
+  * If Nova services are configured to log to syslog (use_syslog=True) they will
+currently fail with ECONNREFUSED if they cannot connect to syslog. This patch
+adds support for allowing nova to retry connecting a configurable number of
+times before print an error message and continuing with startup.
  
  [Test Case]
  
-  * Set user_syslog to True in nova.conf, stop rsyslog service and
- restart nova services.
+  * Configure nova with use_syslog=True in nova.conf, stop rsyslog service and
+restart nova services. Check that upstart nova logs to see retries occuring
+then start rsyslog and observe connection succeed and nova-compute startup.
  
  [Regression Potential]
  
   * None
- 
- 
- When nova services log to syslog, we should make sure the dependency on the upstart jobs is set prior to the nova-* services start.

** Description changed:

  [Impact]
  
-  * If Nova services are configured to log to syslog (use_syslog=True) they will
-currently fail with ECONNREFUSED if they cannot connect to syslog. This patch
-adds support for allowing nova to retry connecting a configurable number of
-times before print an error message and continuing with startup.
+  * If Nova services are configured to log to syslog (use_syslog=True) they
+will currently fail with ECONNREFUSED if they cannot connect to syslog.
+This patch adds support for allowing nova to retry connecting a
+configurable number of times before print an error message and continuing
+with startup.
  
  [Test Case]
  
-  * Configure nova with use_syslog=True in nova.conf, stop rsyslog service and
-restart nova services. Check that upstart nova logs to see retries occuring
-then start rsyslog and observe connection succeed and nova-compute startup.
+  * Configure nova with use_syslog=True in nova.conf, stop rsyslog service and
+restart nova services. Check that upstart nova logs to see retries 
+occurring then start rsyslog and observe connection succeed and 
+nova-compute startup.
  
  [Regression Potential]
  
-  * None
+  * None

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-08 Thread Edward Hope-Morley
** Patch removed: "trusty.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4423971/+files/trusty.debdiff

** Patch added: "trusty.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4426408/+files/trusty.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-07 Thread Edward Hope-Morley
** Patch added: "utopic-juno.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+attachment/4425933/+files/utopic-juno.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-07-07 Thread Edward Hope-Morley
** Patch added: "trusty-icehouse.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+attachment/4425932/+files/trusty-icehouse.debdiff

** Summary changed:

- nova-* services do not start if rsyslog is not yet started
+ [SRU] nova-* services do not start if rsyslog is not yet started

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-06 Thread Edward Hope-Morley
** Patch added: "wily.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4425020/+files/wily.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-04 Thread Edward Hope-Morley
To clarify, the reason I am not using wait-for-state
(http://pastebin.ubuntu.com/11820672/) with WAIT_FOREVER=Y is primarily
that it will attempt to start the WAIT_FOR service once only and then go
into an infinite wait loop. That could be a problem in nova upgrade
scenarios, where we would be waiting for neutron-ovs-cleanup to start
when it (or its dependencies) might never start, resulting in
nova-compute never starting. So what I am implementing here is a repeated
retry with an increasing interval but a finite number of attempts, so
that nova-compute will always eventually start, while still giving
neutron-ovs-cleanup (if installed) plenty of time to start.
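The finite-retry scheme described here - repeated attempts with an increasing interval, then giving up so startup can proceed - can be sketched roughly as follows (an illustrative Python sketch, not the actual upstart job shell code):

```python
import time

def retry_until_up(check, start, max_attempts=12, sleep=time.sleep):
    """Finite retry loop with an increasing interval.

    `check` returns True once the awaited service is running; `start`
    (re)kicks it. Returns True if the service came up, or False after
    the final attempt so the caller can proceed with startup anyway
    rather than waiting forever.
    """
    delay = 1
    for attempt in range(max_attempts):
        if check():
            return True
        start()
        sleep(delay)
        if attempt % 2 == 0:  # every other attempt, increase the delay
            delay += 2
    return False
```

With max_attempts=12 this yields delays of 1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13 seconds: a liberal but bounded total wait.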

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
So it does appear that this is a result of the openvswitch-switch service
taking a while to start up on boot/reboot. I have added some retry logic
to nova-compute and it appears that openvswitch is taking around 80s to
start:

neutron-ovs-cleanup start/pre-start, process 3055
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 1s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 3s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 3s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 5s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 5s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 7s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 7s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 9s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 9s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 11s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 11s
neutron-ovs-cleanup start/pre-start, process 3055
Attempting to start neutron-ovs-cleanup
start: Job is already running: neutron-ovs-cleanup
Recheck neutron-ovs-cleanup status in 13s
neutron-ovs-cleanup start/running
2015-07-03 17:51:40.660 15690 DEBUG nova.servicegroup.api [-] ServiceGroup driver defined as an instance of db __new__ /usr/lib/python2.7/dist-packages/nova/servicegroup/api.py:65
2015-07-03 17:51:43.053 15690 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2015-07-03 17:51:44.320 15690 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-07-03 17:51:44.602 15690 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.7/dist-packages/stevedore/extension.py:156
2015-07-03 17:51:44.603 15690 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'

I'll have an SRU patch up shortly.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
** Patch added: "utopic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4423972/+files/utopic.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
  This issue appears to be a consequence of
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
  added a 'wait-for-state running' to the nova-compute upstart so as to
  ensure that neutron-ovs-cleanup has finished before nova-compute starts.
  
  I have started to spot, however, that on some hosts (metal only) there
  is now a race between the two whereby nova-compute sometimes fails to
  start on system boot/reboot with the following in /var/log/upstart/nova-
  compute.log:
  
  ...
  libvirt-bin stop/waiting
  wait-for-state stop/waiting
  neutron-ovs-cleanup start/pre-start, process 3084
  start: Job failed to start
  
  If I manually restart nova-compute all is fine. So this looks like a
  race between nova-compute's wait-for-state and neutron-ovs-cleanup's
  pre-start -> start/running.
+ 
+ The proposed solution here is to add some retry logic to the nova-compute
+ upstart job to tolerate neutron-ovs-cleanup not being able to start yet.
+ We, therefore, allow a certain number of retries, every other with an
+ incremented delay, before giving up and allowing nova-compute to start
+ anyway. If ovs-cleanup failed to start after what is a fairly liberal
+ retry period, it is assumed to have failed altogether, thus making it
+ safe(ish) to start nova-compute.
+ 
+ [Test Case]
+ 
+ In one terminal (as root) do:
+ service neutron-ovs-cleanup stop; service openvswitch-switch stop; service 
nova-compute restart
+ 
+ In another do:
+ sudo tail -F /var/log/upstart/nova-compute.log
+ 
+ Observe the retries occurring
+ 
+ Then do 'sudo service openvswitch-switch start' and observe nova-compute
+ retry and succeed.
+ 
+ [Regression Potential]
+ 
+  * If openvswitch-switch does not start within the max retries and
+ intervals, nova-compute will start anyway, and if ovs-cleanup were at some
+ point to run one would see the behaviour that LP 1420572 was intended to
+ resolve. It does not seem to make sense to wait indefinitely for ovs-
+ cleanup to be up, and the coded interval is pretty liberal and should be
+ plenty.

** Changed in: nova (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: nova (Ubuntu Utopic)
   Status: New => In Progress

** Changed in: nova (Ubuntu Vivid)
   Status: New => In Progress

** Changed in: nova (Ubuntu Trusty)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu Utopic)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu Vivid)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Description changed:

  [Impact]
  
  This issue appears to be a consequence of
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
  added a 'wait-for-state running' to the nova-compute upstart so as to
  ensure that neutron-ovs-cleanup has finished before nova-compute starts.
  
  I have started to spot, however, that on some hosts (metal only) there
  is now a race between the two whereby nova-compute sometimes fails to
  start on system boot/reboot with the following in /var/log/upstart/nova-
  compute.log:
  
  ...
  libvirt-bin stop/waiting
  wait-for-state stop/waiting
  neutron-ovs-cleanup start/pre-start, process 3084
  start: Job failed to start
  
  If I manually restart nova-compute all is fine. So this looks like a
  race between nova-compute's wait-for-state and neutron-ovs-cleanup's
  pre-start -> start/running.
  
  The proposed solution here is to add some retry logic to the nova-compute
  upstart job to tolerate neutron-ovs-cleanup not being able to start yet.
  We, therefore, allow a certain number of retries, every other with an
  incremented delay, before giving up and allowing nova-compute to start
  anyway. If ovs-cleanup failed to start after what is a fairly liberal
- retry period, it is assumed to have failed altogether this making is
+ retry period, it is assumed to have failed altogether thus making it
  safe(ish) to start nova-compute.
  
  [Test Case]
  
  In one terminal (as root) do:
  service neutron-ovs-cleanup stop; service openvswitch-switch stop; service 
nova-compute restart
  
  In another do:
  sudo tail -F /var/log/upstart/nova-compute.log
  
  Observe the retries occurring
  
  Then do 'sudo service openvswitch-switch start' and observe nova-compute
  retry and succeed.
  
  [Regression Potential]
  
-  * If openvswitch-switch does not start within the max retries and
+  * If openvswitch-switch does not start within the max retries and
  intervals, nova-compute will start anyway, and if ovs-cleanup were at some
  point to run one would see the behaviour that LP 1420572 was intended to
  resolve. It does not seem to make sense to wait indefinitely for ovs-
  cleanup to be up, and the coded interval is pretty liberal and should be
  plenty.

** Description changed:

  [Impact]
  
  This issue appears to be a consequence of
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
  added a 'wait

[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
** Patch added: "vivid.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4423974/+files/vivid.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
** Patch added: "trusty.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+attachment/4423971/+files/trusty.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
** Description changed:

  This issue appears to be a consequence of
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
  added a 'wait-for-state running' to the nova-compute upstart so as to
  ensure that neutron-ovs-cleanup has finished before nova-compute starts.
  
  I have started to spot, however, that on some hosts (metal only) there
- is now a race between the two whereby nova-compute fails to start or
- boot/reboot with the following in /var/log/upstart/nova-compute.log:
+ is now a race between the two whereby nova-compute sometimes fails to
+ start or boot/reboot with the following in /var/log/upstart/nova-
+ compute.log:
  
  ...
  libvirt-bin stop/waiting
  wait-for-state stop/waiting
  neutron-ovs-cleanup start/pre-start, process 3084
  start: Job failed to start
  
  If I manually restart nova-compute all is fine. So this looks like a
  race between nova-compute's wait-for-state and neutron-ovs-cleanup's
  pre-start -> start/running.

** Description changed:

  This issue appears to be a consequence of
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
  added a 'wait-for-state running' to the nova-compute upstart so as to
  ensure that neutron-ovs-cleanup has finished before nova-compute starts.
  
  I have started to spot, however, that on some hosts (metal only) there
  is now a race between the two whereby nova-compute sometimes fails to
- start or boot/reboot with the following in /var/log/upstart/nova-
+ start on system boot/reboot with the following in /var/log/upstart/nova-
  compute.log:
  
  ...
  libvirt-bin stop/waiting
  wait-for-state stop/waiting
  neutron-ovs-cleanup start/pre-start, process 3084
  start: Job failed to start
  
  If I manually restart nova-compute all is fine. So this looks like a
  race between nova-compute's wait-for-state and neutron-ovs-cleanup's
  pre-start -> start/running.

** Changed in: neutron (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Utopic)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Vivid)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu)
   Status: New => In Progress

** Changed in: neutron (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] Re: [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-03 Thread Edward Hope-Morley
So, this is possibly a result of neutron-ovs-cleanup failing to start at
the time nova-compute does the wait-for-state (and implicitly tries to
start neutron-ovs-cleanup), because openvswitch is not ready to start at
that very moment. I am going to attempt to resolve this by making the
nova-compute wait-for-state logic more accommodating of the fact that
neutron-ovs-cleanup may not be ready to start at the time of the check.


** Package changed: neutron (Ubuntu) => nova (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1471022/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1471022] [NEW] [SRU] race between nova-compute and neutron-ovs-cleanup

2015-07-02 Thread Edward Hope-Morley
Public bug reported:

This issue appears to be a consequence of
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572 where we
added a 'wait-for-state running' to the nova-compute upstart so as to
ensure that neutron-ovs-cleanup has finished before nova-compute starts.

I have started to spot, however, that on some hosts (metal only) there
is now a race between the two whereby nova-compute fails to start on
boot/reboot, with the following in /var/log/upstart/nova-compute.log:

...
libvirt-bin stop/waiting
wait-for-state stop/waiting
neutron-ovs-cleanup start/pre-start, process 3084
start: Job failed to start

If I manually restart nova-compute all is fine. So this looks like a
race between nova-compute's wait-for-state and neutron-ovs-cleanup's
pre-start -> start/running transition.

** Affects: neutron (Ubuntu)
 Importance: High
 Status: New

** Package changed: nova (Ubuntu) => neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1471022

Title:
  [SRU] race between nova-compute and neutron-ovs-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1471022/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
The patch you have provided here for SRU (that is also now part of the
SRU submission in bug 1348244) does not appear to have been submitted
upstream. So, does this affect openstack versions greater than Icehouse?
It looks like Nova logging was switched to use oslo.log as of Kilo so
presumably Juno is also affected by this?

Assuming that Kilo+ is not affected, and since this code has been moved
out of tree anyway, I think we should apply this to both I and J and
make sure that the patch commit message references this bug so that we
have a context reference.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
stable/icehouse patch uploaded to
https://review.openstack.org/#/c/193105/

** Changed in: nova (Ubuntu)
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
stable/juno patch uploaded to https://review.openstack.org/#/c/193110/

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
** Also affects: oslo.log
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions



[Bug 1459046] Re: nova-* services do not start if rsyslog is not yet started

2015-06-18 Thread Edward Hope-Morley
** Patch removed: "nova-2014.1.4-0ubuntu3-lp1459046.patch"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1459046/+attachment/4407056/+files/nova-2014.1.4-0ubuntu3-lp1459046.patch

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1459046

Title:
  nova-* services do not start if rsyslog is not yet started

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions



[Bug 1399088] Re: correct the position of the syslog Handler

2015-06-18 Thread Edward Hope-Morley
** Description changed:

  Nova SRU:
  [Impact]
  
   * syslog handler doesn't have the same settings as other handlers
  
  [Test Case]
  
   * Set use_syslog to True in nova.conf, restart nova services. Logs
     written to syslog don't have the same format as the service's own
     log
  
  [Regression Potential]
  
   * none
  
  Cinder SRU:
  [Impact]
  
   * syslog handler doesn't have the same settings as other handlers
  
  [Test Case]
  
   * Set use_syslog to True in cinder.conf, restart cinder services. Logs
     written to syslog don't have the same format as the service's own
     log
  
  [Regression Potential]
  
   * none
- 
- 
- correct the position of the syslog Handler
- 
- syslog Handler should be in front of the line "datefmt = CONF.log_date_format".
- Then syslog Handler can have the same settings as other handlers.
- 
- openstack/common/log.py
- def _setup_logging_from_conf(project, version):
-     log_root = getLogger(None).logger
-     for handler in log_root.handlers:
-         log_root.removeHandler(handler)
- 
-     logpath = _get_log_file_path()
-     if logpath:
-         filelog = logging.handlers.WatchedFileHandler(logpath)
-         log_root.addHandler(filelog)
- 
-     if CONF.use_stderr:
-         streamlog = ColorHandler()
-         log_root.addHandler(streamlog)
- 
-     elif not logpath:
-         # pass sys.stdout as a positional argument
-         # python2.6 calls the argument strm, in 2.7 it's stream
-         streamlog = logging.StreamHandler(sys.stdout)
-         log_root.addHandler(streamlog)
- 
-     if CONF.publish_errors:
-         handler = importutils.import_object(
-             "oslo.messaging.notify.log_handler.PublishErrorsHandler",
-             logging.ERROR)
-         log_root.addHandler(handler)
- 
-     datefmt = CONF.log_date_format
-     for handler in log_root.handlers:
-         # NOTE(alaski): CONF.log_format overrides everything currently.  This
-         # should be deprecated in favor of context aware formatting.
-         if CONF.log_format:
-             handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
-                                                    datefmt=datefmt))
-             log_root.info('Deprecated: log_format is now deprecated and will '
-                           'be removed in the next release')
-         else:
-             handler.setFormatter(ContextFormatter(project=project,
-                                                   version=version,
-                                                   datefmt=datefmt))
- 
-     if CONF.debug:
-         log_root.setLevel(logging.DEBUG)
-     elif CONF.verbose:
-         log_root.setLevel(logging.INFO)
-     else:
-         log_root.setLevel(logging.WARNING)
- 
-     for pair in CONF.default_log_levels:
-         mod, _sep, level_name = pair.partition('=')
-         logger = logging.getLogger(mod)
-         # NOTE(AAzza) in python2.6 Logger.setLevel doesn't convert string name
-         # to integer code.
-         if sys.version_info < (2, 7):
-             level = logging.getLevelName(level_name)
-             logger.setLevel(level)
-         else:
-             logger.setLevel(level_name)
- 
-     if CONF.use_syslog:
-         try:
-             facility = _find_facility_from_conf()
-             # TODO(bogdando) use the format provided by RFCSysLogHandler
-             #   after existing syslog format deprecation in J
-             if CONF.use_syslog_rfc_format:
-                 syslog = RFCSysLogHandler(address='/dev/log',
-                                           facility=facility)
-             else:
-                 syslog = logging.handlers.SysLogHandler(address='/dev/log',
-                                                         facility=facility)
-             log_root.addHandler(syslog)
-         except socket.error:
-             log_root.error('Unable to add syslog handler. Verify that syslog '
-                            'is running.')
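
The fix above boils down to handler-registration order: any handler
added after the formatter loop never receives the shared formatter. A
minimal stdlib-only sketch of that behaviour (the "demo" names are
hypothetical; this is plain `logging`, not the oslo code itself):

```python
import io
import logging

# Stand-in for the shared formatter built in the loop above.
fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")

log_root = logging.getLogger("demo")
log_root.setLevel(logging.INFO)

# A handler added BEFORE the formatter loop (like filelog/streamlog).
early = logging.StreamHandler(io.StringIO())
log_root.addHandler(early)

# The formatter loop from _setup_logging_from_conf: every handler
# registered so far receives the shared formatter and date format.
for handler in log_root.handlers:
    handler.setFormatter(fmt)

# A handler added AFTER the loop (the syslog handler's old position):
# it keeps the library default and so formats differently.
late = logging.StreamHandler(io.StringIO())
log_root.addHandler(late)

print(early.formatter is fmt)  # True
print(late.formatter is fmt)   # False
```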

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1399088

Title:
  correct the position of the syslog Handler

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo-incubator/+bug/1399088/+subscriptions



[Bug 1452312] Re: glance-registry process spins if rsyslog restarted with syslog logging enabled

2015-06-09 Thread Edward Hope-Morley
@george-shuklin the package will remain in trusty-proposed until it has
been verified. If you want to help test this package please enable the
proposed pocket in your /etc/apt/sources.list, run apt-get update and
install the package. Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-eventlet in Ubuntu.
https://bugs.launchpad.net/bugs/1452312

Title:
  glance-registry process spins if rsyslog restarted with syslog logging
  enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-eventlet/+bug/1452312/+subscriptions



[Bug 1462692] Re: Ordering cycle on NetworkManager-wait-online.service/start

2015-06-07 Thread Edward Z. Yang
Yes, that diagnosis looks correct.

ezyang@sabre:~$ ls -l  /lib/systemd/system/screen-cleanup.service
ls: cannot access /lib/systemd/system/screen-cleanup.service: No such file or 
directory
ezyang@sabre:~$ dpkg -s screen
Package: screen
Status: deinstall ok config-files
Priority: optional
Section: misc
Installed-Size: 983
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
Architecture: amd64
Version: 4.2.1-2
Config-Version: 4.2.1-2
Depends: libc6 (>= 2.15), libpam0g (>= 0.99.7.1), libtinfo5
Suggests: iselect (>= 1.4.0-1) | screenie | byobu
Conffiles:
 /etc/init.d/screen-cleanup c1dc791ae42e2ce284cd20aff93e8987
 /etc/screenrc 12c245238eb8b653625bba27dc81df6a
 /etc/init/screen-cleanup.conf 441f4a1c5b41d7f23427be5aa6ccbbcc obsolete
Description: terminal multiplexer with VT100/ANSI terminal emulation
 GNU Screen is a terminal multiplexer that runs several separate screens on
 a single physical character-based terminal. Each virtual terminal emulates a
 DEC VT100 plus several ANSI X3.64 and ISO 2022 functions. Screen sessions
 can be detached and resumed later on a different terminal.
 .
 Screen also supports a whole slew of other features, including configurable
 input and output translation, serial port support, configurable logging,
 and multi-user support.
Original-Maintainer: Axel Beckert a...@debian.org
Homepage: http://savannah.gnu.org/projects/screen

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to screen in Ubuntu.
https://bugs.launchpad.net/bugs/1462692

Title:
  Ordering cycle on NetworkManager-wait-online.service/start

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/screen/+bug/1462692/+subscriptions



[Bug 1304333] Re: [SRU] Instance left stuck in transitional POWERING state

2015-05-26 Thread Edward Hope-Morley
We've been running this a while in multiple Trusty Icehouse deployments
so +1 for SRU.

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1304333

Title:
  [SRU] Instance left stuck in transitional POWERING state

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304333/+subscriptions



[Bug 1382842] Re: pacemaker should have a binary version dependency on pacemaker libs

2015-05-18 Thread Edward Hope-Morley
** Changed in: hacluster (Juju Charms Collection)
   Milestone: None => 15.01

** Changed in: hacluster (Juju Charms Collection)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1382842

Title:
  pacemaker should have a binary version dependency on pacemaker libs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1382842/+subscriptions



[Bug 1366997] Re: Duplicate entry error for primary key following cluster size change

2015-05-06 Thread Edward Hope-Morley
** Changed in: percona-cluster (Juju Charms Collection)
 Assignee: Edward Hope-Morley (hopem) => Mario Splivalo (mariosplivalo)

** Changed in: percona-cluster (Juju Charms Collection)
   Milestone: None => 15.04

** Changed in: percona-cluster (Juju Charms Collection)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1366997

Title:
  Duplicate entry error for primary key following cluster size change

To manage notifications about this bug go to:
https://bugs.launchpad.net/codership-mysql/+bug/1366997/+subscriptions



[Bug 1420572] Re: [SRU] race between neutron-ovs-cleanup and nova-compute

2015-04-07 Thread Edward Hope-Morley
trusty-backports verified

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  [SRU] race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: [SRU] race between neutron-ovs-cleanup and nova-compute

2015-04-07 Thread Edward Hope-Morley
trusty-proposed verified

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  [SRU] race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-22 Thread Edward Hope-Morley
I have tested both the attached Icehouse and Juno patches and can
confirm that they behave as expected i.e.

Installed nova-compute + neutron-plugin-openvswitch-agent (which
installs neutron-ovs-cleanup)

In /var/log/upstart/nova-compute.log I get as expected:

libvirt-bin start/running, process 1409
wait-for-state stop/waiting
neutron-ovs-cleanup stop/waiting
wait-for-state stop/waiting

And if I add a 10 second delay to /usr/bin/neutron-ovs-cleanup I get as
expected:

(time sudo service neutron-ovs-cleanup restart); time sudo service nova-compute restart
nova-compute stop/waiting
neutron-ovs-cleanup stop/waiting
neutron-ovs-cleanup start/running

real    0m10.460s
user    0m0.010s
sys     0m0.015s
nova-compute start/running, process 3026

real    0m10.468s
user    0m0.010s
sys     0m0.014s

So, nova-compute will now always wait for ovs-cleanup to complete and I
tested that if ovs-cleanup is not installed it gets ignored and
nova-compute starts.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-19 Thread Edward Hope-Morley
I think this can be fixed by adding an upstart pre-start rule similar to
the one used in neutron-*-agent.upstart e.g.

pre-start script
  # Check to see if openvswitch plugin in use by checking
  # status of cleanup upstart configuration
  if status neutron-ovs-cleanup; then
    start wait-for-state WAIT_FOR=neutron-ovs-cleanup WAIT_STATE=running WAITER=nova-compute
  fi
end script

** Changed in: neutron
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Project changed: neutron => nova (Ubuntu)

** Changed in: nova (Ubuntu)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-19 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ 
+  * We run neutron-ovs-cleanup in startup if neutron installed. If nova-compute
+does not wait for completion it will try to use veth/bridge devices that 
may
+may be in the process of being deleted.
+ 
+ [Test Case]
+ 
+  * Create neutron (ovs) network and boot an instance with this network
+ as --nic
+ 
+  * Check that creation was successful and network is functional. Also make a
+note of the corresponding veth and bridge devices (ip a).
+ 
+  * Reboot system, check that expected veth and bridge devices are still
+there and that nova-compute is happy e.g. try sshing to your instance. Also
+check /var/log/upstart/nova-compute.log to see if service waited for
+ovs-cleanup to finish.
+ 
+ [Regression Potential]
+ 
+  * None
+ 
+ 
+    
+ 
  There is a race when both neutron-ovs-cleanup and nova-compute try to
  do operations on the qvb*** and qvo*** devices. Below is a scenario I
  recently met:
  
  1. nova-compute was started and creating the veth_pair for VM instances
  running on the host -
  
https://github.com/openstack/nova/blob/stable/icehouse/nova/network/linux_net.py#L1298
  
  2. neutron-ovs-cleanup was kicked off and deleted all the ports.
  
  3. when nova-compute tried to set the MTU at
  
https://github.com/openstack/nova/blob/stable/icehouse/nova/network/linux_net.py#L1280
  , Stderr: u'Cannot find device qvo***\n' was reported. Because the
  device that was just created was deleted again by neutron-ovs-cleanup.
  
  As they both operate on the same resources, there needs to be a way to
  synchronize the operations the two processes do on those resources.

** Description changed:

  [Impact]
  
-  * We run neutron-ovs-cleanup in startup if neutron installed. If nova-compute
-does not wait for completion it will try to use veth/bridge devices that 
may
-may be in the process of being deleted.
+  * We run neutron-ovs-cleanup in startup if neutron installed. If
+nova-compute does not wait for completion it will try to use
+veth/bridge devices that may be in the process of being deleted.
  
  [Test Case]
  
-  * Create neutron (ovs) network and boot an instance with this network
- as --nic
+  * Create neutron (ovs) network and boot an instance with this network
+as --nic
  
-  * Check that creation was successful and network is functional. Also make a
-note of the corresponding veth and bridge devices (ip a).
+  * Check that creation was successful and network is functional. Also make
+a note of the corresponding veth and bridge devices (ip a).
  
-  * Reboot system, check that expected veth and bridge devices are still
-there and that nova-compute is happy e.g. try sshing to your instance. Also
-check /var/log/upstart/nova-compute.log to see if service waited for
-ovs-cleanup to finish.
+  * Reboot system, check that expected veth and bridge devices are still
+there and that nova-compute is happy e.g. try sshing to your instance.
+Also check /var/log/upstart/nova-compute.log to see if service waited
+for ovs-cleanup to finish.
  
  [Regression Potential]
  
-  * None
- 
+  * None
  
     
  
  There is a race when both neutron-ovs-cleanup and nova-compute try to
  do operations on the qvb*** and qvo*** devices. Below is a scenario I
  recently met:
  
  1. nova-compute was started and creating the veth_pair for VM instances
  running on the host -
  
https://github.com/openstack/nova/blob/stable/icehouse/nova/network/linux_net.py#L1298
  
  2. neutron-ovs-cleanup was kicked off and deleted all the ports.
  
  3. when nova-compute tried to set the MTU at
  
https://github.com/openstack/nova/blob/stable/icehouse/nova/network/linux_net.py#L1280
  , Stderr: u'Cannot find device qvo***\n' was reported. Because the
  device that was just created was deleted again by neutron-ovs-cleanup.
  
  As they both operate on the same resources, there needs to be a way to
  synchronize the operations the two processes do on those resources.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-19 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu)
   Importance: Undecided => High

** Changed in: nova (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-19 Thread Edward Hope-Morley
** Patch added: "nova-compute-2014.2-lp1420572.patch"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+attachment/4322631/+files/nova-compute-2014.2-lp1420572.patch

** Changed in: nova (Ubuntu Utopic)
   Status: New => In Progress

** Changed in: nova (Ubuntu Utopic)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: nova (Ubuntu Utopic)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu Trusty)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1420572] Re: race between neutron-ovs-cleanup and nova-compute

2015-02-19 Thread Edward Hope-Morley
** Patch added: "nova-compute-2014.1-lp1420572.patch"
   
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+attachment/4322616/+files/nova-compute-2014.1-lp1420572.patch

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1420572

Title:
  race between neutron-ovs-cleanup and nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1420572/+subscriptions



[Bug 1405588] Re: database connection failed (Protocol error)

2015-01-05 Thread Edward Hope-Morley
** Tags added: cts openstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1405588

Title:
  database connection failed (Protocol error)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1405588/+subscriptions



[Bug 1382842] Re: pacemaker should have a binary version dependency on pacemaker libs

2014-12-26 Thread Edward Hope-Morley
** Changed in: hacluster (Juju Charms Collection)
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1382842

Title:
  pacemaker should have a binary version dependency on pacemaker libs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1382842/+subscriptions



[Bug 1396246] Re: hacluster losing vip grp primitive following upgrade

2014-12-18 Thread Edward Hope-Morley
** Changed in: charmhelpers (Juju Charms Collection)
   Status: In Progress => Fix Committed

** Package changed: hacluster (Juju Charms Collection) => nova-cloud-controller (Juju Charms Collection)

** Also affects: swift-proxy (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: quantum-gateway (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Package changed: quantum-gateway (Juju Charms Collection) => cinder (Juju Charms Collection)

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Status: In Progress => New

** Changed in: cinder (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: keystone (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: swift-proxy (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: nova-cloud-controller (Juju Charms Collection)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Package changed: glance (Ubuntu) => glance (Juju Charms Collection)

** Changed in: glance (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: neutron-api (Juju Charms Collection)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to glance in Ubuntu.
https://bugs.launchpad.net/bugs/1396246

Title:
  hacluster losing vip grp primitive following upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/charmhelpers/+bug/1396246/+subscriptions



[Bug 1382842] Re: SRU breaks pacemaker in 14.04

2014-12-17 Thread Edward Hope-Morley
** Changed in: hacluster (Juju Charms Collection)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Changed in: hacluster (Juju Charms Collection)
   Importance: Undecided => High

** Changed in: hacluster (Juju Charms Collection)
   Status: New => In Progress

** Branch linked: lp:~hopem/charms/trusty/hacluster/dont-allow-pacemaker-upgrade

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1382842

Title:
  SRU breaks pacemaker in 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1382842/+subscriptions



[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic

2014-10-24 Thread Edward Hope-Morley
** Changed in: neutron-api (Juju Charms Collection)
   Status: Fix Committed => Fix Released

** Changed in: nova-cloud-controller (Juju Charms Collection)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1372893

Title:
  Neutron has an empty database after deploying juno on utopic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1372893/+subscriptions


