[Yahoo-eng-team] [Bug 1283146] Re: test_delete_member_with_vip fails AssertionError

2014-07-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283146

Title:
  test_delete_member_with_vip fails AssertionError

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Another random failure:

  http://logs.openstack.org/88/67288/14/check/gate-neutron-
  python26/871584e/

  Stacktrace:

  
  ft1.7230: neutron.tests.unit.services.loadbalancer.drivers.radware.test_plugin_driver.TestLoadBalancerPlugin.test_delete_member_with_vip_StringException:
  Empty attachments:
    stderr
    stdout

  pythonlogging:'': {{{
  2014-02-21 08:54:35,261 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
  2014-02-21 08:54:35,363 INFO [neutron.manager] Loading Plugin: 
neutron.services.loadbalancer.plugin.LoadBalancerPlugin
  2014-02-21 08:54:35,434 INFO [neutron.api.extensions] Initializing 
extension manager.
  2014-02-21 08:54:35,434 ERROR [neutron.api.extensions] Extension path 
'unit/extensions' doesn't exist!
  2014-02-21 08:54:35,435 INFO [neutron.api.extensions] Loading extension 
file: allowedaddresspairs.pyc
  2014-02-21 08:54:35,435 INFO [neutron.api.extensions] Loading extension 
file: metering.pyc
  2014-02-21 08:54:35,435 INFO [neutron.api.extensions] Loading extension 
file: firewall.py
  2014-02-21 08:54:35,436  WARNING [neutron.api.extensions] Extension fwaas not 
supported by any of loaded plugins
  2014-02-21 08:54:35,436 INFO [neutron.api.extensions] Loading extension 
file: portsecurity.py
  2014-02-21 08:54:35,437  WARNING [neutron.api.extensions] Extension 
port-security not supported by any of loaded plugins
  2014-02-21 08:54:35,437 INFO [neutron.api.extensions] Loading extension 
file: metering.py
  2014-02-21 08:54:35,438  WARNING [neutron.api.extensions] Extension metering 
not supported by any of loaded plugins
  2014-02-21 08:54:35,438 INFO [neutron.api.extensions] Loading extension 
file: portbindings.py
  2014-02-21 08:54:35,439  WARNING [neutron.api.extensions] Extension binding 
not supported by any of loaded plugins
  2014-02-21 08:54:35,439 INFO [neutron.api.extensions] Loading extension 
file: servicetype.pyc
  2014-02-21 08:54:35,439 INFO [neutron.api.extensions] Loading extension 
file: routedserviceinsertion.py
  2014-02-21 08:54:35,439  WARNING [neutron.api.extensions] Extension 
routed-service-insertion not supported by any of loaded plugins
  2014-02-21 08:54:35,439 INFO [neutron.api.extensions] Loading extension 
file: l3agentscheduler.pyc
  2014-02-21 08:54:35,439 INFO [neutron.api.extensions] Loading extension 
file: securitygroup.pyc
  2014-02-21 08:54:35,440 INFO [neutron.api.extensions] Loading extension 
file: securitygroup.py
  2014-02-21 08:54:35,441  WARNING [neutron.api.extensions] Extension 
security-group not supported by any of loaded plugins
  2014-02-21 08:54:35,441 INFO [neutron.api.extensions] Loading extension 
file: multiprovidernet.pyc
  2014-02-21 08:54:35,441 INFO [neutron.api.extensions] Loading extension 
file: vpnaas.py
  2014-02-21 08:54:35,442  WARNING [neutron.api.extensions] Extension vpnaas 
not supported by any of loaded plugins
  2014-02-21 08:54:35,442 INFO [neutron.api.extensions] Loading extension 
file: routerservicetype.py
  2014-02-21 08:54:35,442  WARNING [neutron.api.extensions] Extension 
router-service-type not supported by any of loaded plugins
  2014-02-21 08:54:35,443 INFO [neutron.api.extensions] Loading extension 
file: quotasv2.py
  2014-02-21 08:54:35,443  WARNING [neutron.api.extensions] Extension quotas 
not supported by any of loaded plugins
  2014-02-21 08:54:35,443 INFO [neutron.api.extensions] Loading extension 
file: providernet.pyc
  2014-02-21 08:54:35,443 INFO [neutron.api.extensions] Loading extension 
file: routedserviceinsertion.pyc
  2014-02-21 08:54:35,444 INFO [neutron.api.extensions] Loading extension 
file: providernet.py
  2014-02-21 08:54:35,444  WARNING [neutron.api.extensions] Extension provider 
not supported by any of loaded plugins
  2014-02-21 08:54:35,444 INFO [neutron.api.extensions] Loading extension 
file: extra_dhcp_opt.py
  2014-02-21 08:54:35,444  WARNING [neutron.api.extensions] Extension 
extra_dhcp_opt not supported by any of loaded plugins
  2014-02-21 08:54:35,445 INFO [neutron.api.extensions] Loading extension 
file: servicetype.py
  2014-02-21 08:54:35,445 INFO [neutron.api.extensions] Loaded extension: 
service-type
  2014-02-21 08:54:35,445 INFO [neutron.api.extensions] Loading extension 
file: extraroute.py
  2014-02-21 08:54:35,446  WARNING [neutron.api.extensions] Extension 
extraroute not supported by any of loaded plugins
  2014-02-21 08:54:35,446 INFO [neutron.api.extension

[Yahoo-eng-team] [Bug 1319300] Re: icehouse can't create instance

2014-07-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1319300

Title:
  icehouse can't create instance

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  I installed Icehouse on Ubuntu 14.04 following the install guide
  openstack-install-guide-apt-trunk, with keystone, nova, glance,
  neutron, and horizon on one VM, configured as the guide describes. I
  can't create a VM; it fails with "Unexpected vif_type=binding_failed".
  The compute.log details follow.

  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] Traceback (most recent call last):
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] block_device_info)
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2250, in 
spawn
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] write_to_disk=True)
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3431, in 
to_xml
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] disk_info, rescue, block_device_info)
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3247, in 
get_guest_config
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] flavor)
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 384, in 
get_config
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] _("Unexpected vif_type=%s") % 
vif_type)
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] NovaException: Unexpected 
vif_type=binding_failed
  2014-05-14 16:23:49.815 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] 
  2014-05-14 16:23:49.954 2146 AUDIT nova.compute.manager 
[req-197c92d1-e159-4bef-9ae9-9a4d94f784cf 79aa1d5665774b51ba4d80f5b7126e62 
37f8178c977a4cc4b05e93752e71e518] [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] Terminating instance
  2014-05-14 16:23:51.034 2146 ERROR nova.virt.libvirt.driver [-] [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] During wait destroy, instance disappeared.
  2014-05-14 16:23:51.274 2146 ERROR nova.compute.manager 
[req-197c92d1-e159-4bef-9ae9-9a4d94f784cf 79aa1d5665774b51ba4d80f5b7126e62 
37f8178c977a4cc4b05e93752e71e518] [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] Error: Unexpected vif_type=binding_failed
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] Traceback (most recent call last):
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1311, in 
_build_instance
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] set_access_ip=set_access_ip)
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in 
decorated_function
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] return function(self, context, *args, 
**kwargs)
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1723, in _spawn
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager [instance: 
279c6e3f-8317-4a96-9970-05a19bb59eec]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-05-14 16:23:51.274 2146 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1321532] Re: No status report for vpn agent

2014-07-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1321532

Title:
  No status report for vpn agent

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  I understand the agent status table is used in resource scheduling in
  neutron. However, the vpn-agent exists and cannot be ignored, and the
  status report is important from a user's perspective, so there is good
  reason to add all neutron-related agents to this table/list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1321532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323267] Re: Network shouldn't be shared and external at the same time

2014-07-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323267

Title:
  Network shouldn't be shared and external at the same time

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Marking a network as external represents a different usage for that
  specific network than an "ordinary" network. It doesn't make sense to
  connect instances directly to the external network (otherwise you'd use
  the network directly rather than floating IPs).

  For that reason, it also doesn't make sense to mark the network as
  shared (and vice versa).

  Currently it is allowed to mark a network as both shared and external;
  this should be prevented to deter misconfiguration and misuse of the
  network.
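
  The guard would amount to a check at network create/update time, along
  these lines (a minimal sketch with hypothetical names, not actual
  neutron code):

      def validate_network_flags(network):
          # reject networks flagged both shared and external,
          # per the reasoning above
          if network.get('shared') and network.get('router:external'):
              raise ValueError(
                  "A network cannot be both shared and external.")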

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330132] Re: Creation of Member role is no longer required

2014-07-25 Thread Stephen Gordon
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1330132

Title:
  Creation of Member role is no longer required

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  In Progress

Bug description:
  Since Grizzly the Keystone service's SQL creation/migration scripts
  automatically create a role named _member_ for use as the default
  member role. Since Icehouse (backported to Havana) Horizon uses this
  as the default member role.

  Devstack still creates a Member role, as was previously required:

  # The Member role is used by Horizon and Swift so we need to keep it:
  MEMBER_ROLE=$(openstack role create \
      Member \
      | grep " id " | get_field 2)

  As noted above, Horizon no longer uses such a role in the default
  configuration and on investigation the Swift dependency appears to be
  introduced by the way devstack configures Swift.

  As such it should now be possible to stop creating this role (with
  corresponding changes to the Swift setup in devstack) and use _member_
  instead, avoiding the creation (and confusion) of having two member
  roles with different names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1330132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348844] [NEW] Keystone logs auth tokens in URLs at log level info

2014-07-25 Thread Joel Friedly
Public bug reported:

Example:

2014-07-25 22:28:25.352 1458 INFO eventlet.wsgi.server [-]
10.241.1.50,10.241.1.80 - - [25/Jul/2014 22:28:25] "GET
/v2.0/tokens/d5036612660543a3a9b8054c79dea8d3 HTTP/1.1" 200 3174
0.021630

We've found that this regex can catch all of these messages:

/v2.0/tokens/[\da-f]{32}

Keystone also logs a bunch of other sensitive data in debug-level
messages, but this one is still present even if you only keep info-level
messages and above.  We'd like to solve this problem at the source
instead of grepping it out of our log files.
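
As a stopgap until the source is fixed, a logging filter built on that
regex could mask token IDs before they hit the log (a minimal sketch;
the filter name and where it gets attached are our assumptions, not
keystone code):

    import logging
    import re

    TOKEN_RE = re.compile(r'(/v2\.0/tokens/)[\da-f]{32}')

    class TokenScrubFilter(logging.Filter):
        # mask v2.0 token IDs embedded in request-line log messages
        def filter(self, record):
            record.msg = TOKEN_RE.sub(r'\1***', str(record.msg))
            return True

    # hypothetical wiring: attach to the wsgi server logger
    logging.getLogger('eventlet.wsgi.server').addFilter(TokenScrubFilter())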

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348844

Title:
  Keystone logs auth tokens in URLs at log level info

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Example:

  2014-07-25 22:28:25.352 1458 INFO eventlet.wsgi.server [-]
  10.241.1.50,10.241.1.80 - - [25/Jul/2014 22:28:25] "GET
  /v2.0/tokens/d5036612660543a3a9b8054c79dea8d3 HTTP/1.1" 200 3174
  0.021630

  We've found that this regex can catch all of these messages:

  /v2.0/tokens/[\da-f]{32}

  Keystone also logs a bunch of other sensitive data in debug-level
  messages, but this one is still present even if you only keep info-
  level messages and above.  We'd like to solve this problem at the
  source instead of grepping it out of our log files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348840] [NEW] Nova logs iscsi passwords when attaching volumes

2014-07-25 Thread Joel Friedly
Public bug reported:

Example:

2014-07-25 21:50:12.987 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Running cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-10.org.openstack:volume-5940c9ef-ebec-448a-a8eb-971f0ef32a69 -p 
10.191.1.1:3260 --op update -n node.session.auth.password -v 
266nnohUEzTRP5QtPJ47 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-07-25 21:50:13.057 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Result was 0 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-07-25 21:50:13.058 4750 DEBUG nova.virt.libvirt.volume 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] iscsiadm ('--op', 'update', '-n', 
'node.session.auth.password', '-v', u'266nnohUEzTRP5QtPJ47'): stdout= stderr= 
_run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:248

The part after the "-v" is the value to update the open-iscsi record
with, and it is the password used to attach the volume.  We've found
that the following regex can catch these in the logs:

node\.session\.auth\.password.*

It's a debug level log message, so this issue can be avoided by turning
off debug logging in production.  However, since it's a command that
gets executed with sudo, it ends up in /var/log/auth.log by default too.
We'd like to fix this problem at the source by not executing a command
that contains the password.  Is there any other way to update the
record?
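
One way to at least keep the secret out of Nova's own debug log would be
to mask the value before logging the command (a minimal sketch built on
the regex above; it would not help with /var/log/auth.log, since sudo
still sees the real argv):

    import re

    PASSWORD_RE = re.compile(r'(node\.session\.auth\.password.*?-v\s+)\S+')

    def mask_iscsi_password(cmd_line):
        # replace the password argument of an iscsiadm command with ***
        return PASSWORD_RE.sub(r'\1***', cmd_line)

    masked = mask_iscsi_password(
        'iscsiadm --op update -n node.session.auth.password -v secret')
    # 'iscsiadm --op update -n node.session.auth.password -v ***'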

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Example:
  
  2014-07-25 21:50:12.987 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Running cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-10.org.openstack:volume-5940c9ef-ebec-448a-a8eb-971f0ef32a69 -p 
10.191.1.1:3260 --op update -n node.session.auth.password -v 
266nnohUEzTRP5QtPJ47 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
  2014-07-25 21:50:13.057 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Result was 0 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
  2014-07-25 21:50:13.058 4750 DEBUG nova.virt.libvirt.volume 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] iscsiadm ('--op', 'update', '-n', 
'node.session.auth.password', '-v', u'266nnohUEzTRP5QtPJ47'): stdout= stderr= 
_run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:248
  
- The part after the "-v" the value to update the open-iscsi record with,
- and it is the password used to attach the volume.  We've found that the
- following regex can catch  these in the logs:
+ The part after the "-v" is the value to update the open-iscsi record
+ with, and it is the password used to attach the volume.  We've found
+ that the following regex can catch  these in the logs:
  
  node\.session\.auth\.password.*
  
  It's a debug level log message, so this issue can be avoided by turning
  off debug logging in production.  However, since it's a command that
  gets executed with sudo, it ends up in /var/log/auth.log by default too.
  We'd like to fix this problem at the source by not executing a command
  that contains the password.  Is there any other way to update the
  record?

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348840

Title:
  Nova logs iscsi passwords when attaching volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  Example:

  2014-07-25 21:50:12.987 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Running cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-10.org.openstack:volume-5940c9ef-ebec-448a-a8eb-971f0ef32a69 -p 
10.191.1.1:3260 --op update -n node.session.auth.password -v 
266nnohUEzTRP5QtPJ47 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
  2014-07-25 21:50:13.057 4750 DEBUG nova.openstack.common.processutils 
[req-251c525c-b92e-4638-89a0-c77ee887ff17 119a4280aa594405aabc31b4fc0f640c 
ae356b4961204701ae7e89b7495c28bb] Result was 0 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
  2014-07-25 21:50:13.058 4750 DEBUG nov

[Yahoo-eng-team] [Bug 1348838] [NEW] Glance logs password hashes in swift URLs

2014-07-25 Thread Joel Friedly
Public bug reported:

Example:

2014-07-25 20:03:36.346 780 DEBUG glance.registry.api.v1.images
[1c66afef-0bc9-4413-b63a-c81585c2a981 2eae458f42e64420af5e3a2cab07e03a
9bc19f6aabc944c382bf553cb8131b17 - - -] Updating image dfd7e14c-
eb02-487e-8112-d1881ae031d9 with metadata: {u'status': u'active',
'locations':
[u'swift+http://service%3Aimage:GyQLQqJbh3jzBfRvAs8nw8WDQ3xUtO7nw49t33R96WddHww0zJ2CSU7AtgFtf76J@proxy:8770/v2.0
/glance-images/dfd7e14c-eb02-487e-8112-d1881ae031d9']} update
/usr/lib/python2.7/dist-packages/glance/registry/api/v1/images.py:445

We've found that the following regex will catch all of the password
hashes:

r"(swift|swift\+http|swift\+https)://(.*?:)?.*?@"

Since it's a debug-level log message, we can avoid leaking sensitive
data by turning off debug logging, but we often find ourselves needing
the debug logs to diagnose issues.  We'd like to fix this problem at the
source by sanitizing out the password hashes.
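
A scrubber built on that regex might look like this (a minimal sketch;
the function name is our own):

    import re

    SWIFT_CRED_RE = re.compile(r"(swift(?:\+https?)?://[^:@/]*:)[^@]*@")

    def scrub_swift_location(location):
        # mask the credential portion of a swift store URL
        return SWIFT_CRED_RE.sub(r"\1***@", location)

    print(scrub_swift_location(
        "swift+http://service%3Aimage:SECRET@proxy:8770/v2.0/x"))
    # swift+http://service%3Aimage:***@proxy:8770/v2.0/x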

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1348838

Title:
  Glance logs password hashes in swift URLs

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Example:

  2014-07-25 20:03:36.346 780 DEBUG glance.registry.api.v1.images
  [1c66afef-0bc9-4413-b63a-c81585c2a981 2eae458f42e64420af5e3a2cab07e03a
  9bc19f6aabc944c382bf553cb8131b17 - - -] Updating image dfd7e14c-
  eb02-487e-8112-d1881ae031d9 with metadata: {u'status': u'active',
  'locations':
  
[u'swift+http://service%3Aimage:GyQLQqJbh3jzBfRvAs8nw8WDQ3xUtO7nw49t33R96WddHww0zJ2CSU7AtgFtf76J@proxy:8770/v2.0
  /glance-images/dfd7e14c-eb02-487e-8112-d1881ae031d9']} update
  /usr/lib/python2.7/dist-packages/glance/registry/api/v1/images.py:445

  We've found that the following regex will catch all of the password
  hashes:

  r"(swift|swift\+http|swift\+https)://(.*?:)?.*?@"

  Since it's a debug-level log message, we can avoid leaking sensitive
  data by turning off debug logging, but we often find ourselves needing
  the debug logs to diagnose issues.  We'd like to fix this problem at
  the source by sanitizing out the password hashes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1348838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-07-25 Thread Clark Boylan
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Designate:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-07-25 Thread Clark Boylan
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Designate:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-07-25 Thread Clark Boylan
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Designate:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-07-25 Thread Clark Boylan
** Also affects: designate
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348820] [NEW] Token issued_at time changes on /v3/auth/token GET requests

2014-07-25 Thread Lance Bragstad
Public bug reported:

Steps to recreate

1.) Generate a v2.0
token http://pasteraw.com/37q9v3y80tlydltujo7vwfk7gcabggf

2.) Pull token from the body of the response and use the /v3/auth/tokens/ GET 
api call to verify the token
http://pasteraw.com/3oycofc541dil3d7hkzhihlcxlthqg4

Notice that the 'issued_at' time of the token has changed.

3.) Repeat step 2 and notice that the 'issued_at' time of the same token 
changes again.
http://pasteraw.com/9wgyrmawewer1ptv5ct58w7pcrfb7zt

The 'issued_at' time of a token should not change when validating the
token using the /v3/auth/tokens GET API call.

This is because the issued_at time is being overwritten on GET here:
https://github.com/openstack/keystone/blob/83c7805ed3787303f8497bc479469d9071783107/keystone/token/providers/common.py#L319

This seems like it was written strictly for POSTs. In the case of POST,
the issued_at time needs to be generated; in the case of HEAD or GET,
the issued_at time should already exist.
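
The fix would presumably generate issued_at only when it is absent,
along these lines (a minimal sketch with hypothetical names, not the
actual keystone code):

    import datetime

    def ensure_issued_at(token_data):
        # generate issued_at on token creation (POST) only; on
        # validation (GET/HEAD) the original timestamp is preserved
        if not token_data.get('issued_at'):
            token_data['issued_at'] = datetime.datetime.utcnow().isoformat()
        return token_data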

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348820

Title:
  Token issued_at time changes on /v3/auth/token GET requests

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Steps to recreate

  1.) Generate a v2.0
  token http://pasteraw.com/37q9v3y80tlydltujo7vwfk7gcabggf

  2.) Pull token from the body of the response and use the /v3/auth/tokens/ GET 
api call to verify the token
  http://pasteraw.com/3oycofc541dil3d7hkzhihlcxlthqg4

  Notice that the 'issued_at' time of the token has changed.

  3.) Repeat step 2 and notice that the 'issued_at' time of the same token 
changes again.
  http://pasteraw.com/9wgyrmawewer1ptv5ct58w7pcrfb7zt

  The 'issued_at' time of a token should not change when validating the
  token using the /v3/auth/tokens GET API call.

  This is because the issued_at time is being overwritten on GET here:
  
https://github.com/openstack/keystone/blob/83c7805ed3787303f8497bc479469d9071783107/keystone/token/providers/common.py#L319

  This seems like it was written strictly for POSTs. In the case of
  POST, the issued_at time needs to be generated; in the case of HEAD or
  GET, the issued_at time should already exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] [NEW] Unittests do not succeed with random PYTHONHASHSEED value

2014-07-25 Thread Clark Boylan
Public bug reported:

New tox and python3.3 set a random PYTHONHASHSEED value by default.
These projects should support this in their unittests so that we do not
have to override the PYTHONHASHSEED value and potentially let bugs into
these projects.

To reproduce these failures:

# install latest tox
pip install --upgrade tox
tox --version # should report 1.7.2 or greater
cd $PROJECT_REPO
# edit tox.ini to remove any PYTHONHASHSEED=0 lines
tox -epy27

Most of these failures appear to be related to dict entry ordering.
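
For example, any test that compares against a fixed serialization of a
dict is fragile under hash randomization (a minimal illustration; run it
with different PYTHONHASHSEED values and the printed order may differ):

    # PYTHONHASHSEED=1 python ordering.py
    # PYTHONHASHSEED=5 python ordering.py
    d = {'admin': 1, 'member': 2, 'reader': 3}
    print(list(d))                   # iteration order can vary per run
    assert sorted(d) == ['admin', 'member', 'reader']  # order-independent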

** Affects: ceilometer
 Importance: Undecided
 Status: New

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348812] [NEW] l3 agent not using root_helper to check namespace

2014-07-25 Thread Kevin Benton
Public bug reported:

The L3 agent is not using the root helper when checking whether a
namespace already exists. When running under an unprivileged account it
therefore sees no namespaces and tries to create a duplicate namespace,
which then fails because the namespace already exists, raising a runtime
error like the one below.


Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in 
_spawn_n_impl
func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3_agent.py", line 434, 
in process_router
p['ip_cidr'], p['mac_address'])
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3_agent.py", line 710, 
in internal_network_added
prefix=INTERNAL_DEV_PREFIX)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", 
line 195, in plug
namespace_obj = ip.ensure_namespace(namespace)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
137, in ensure_namespace
ip = self.netns.add(name)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
447, in add
self._as_root('add', name, use_root_namespace=True)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
218, in _as_root
kwargs.get('use_root_namespace', False))
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
71, in _as_root
namespace)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
82, in _execute
root_helper=root_helper)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 
76, in execute
raise RuntimeError(m)
RuntimeError:
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-d24d57d0-2155-4011-80d4-f4dbd382c897']
Exit code: 1
Stdout: ''
Stderr: 'Could not create 
/var/run/netns/qrouter-d24d57d0-2155-4011-80d4-f4dbd382c897: File exists\n'
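
The fix is presumably to pass the agent's root_helper down to the
namespace listing so the existence check sees the same namespaces root
does (a minimal sketch of the idea, not the actual patch):

    from neutron.agent.linux import utils

    def namespace_exists(name, root_helper=None):
        # list namespaces with the root helper so an unprivileged
        # agent still sees namespaces created as root
        output = utils.execute(['ip', 'netns', 'list'],
                               root_helper=root_helper)
        return name in [line.strip() for line in output.split('\n')]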

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348812

Title:
  l3 agent not using root_helper to check namespace

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The L3 agent is not using the root helper when checking whether a
  namespace already exists. When running under an unprivileged account
  it therefore sees no namespaces and tries to create a duplicate
  namespace, which then fails because the namespace already exists,
  raising a runtime error like the one below.

  
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3_agent.py", line 
434, in process_router
  p['ip_cidr'], p['mac_address'])
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3_agent.py", line 
710, in internal_network_added
  prefix=INTERNAL_DEV_PREFIX)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", 
line 195, in plug
  namespace_obj = ip.ensure_namespace(namespace)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
137, in ensure_namespace
  ip = self.netns.add(name)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
447, in add
  self._as_root('add', name, use_root_namespace=True)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
218, in _as_root
  kwargs.get('use_root_namespace', False))
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
71, in _as_root
  namespace)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 
82, in _execute
  root_helper=root_helper)
File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 
76, in execute
  raise RuntimeError(m)
  RuntimeError:
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'add', 'qrouter-d24d57d0-2155-4011-80d4-f4dbd382c897']
  Exit code: 1
  Stdout: ''
  Stderr: 'Could not create 
/var/run/netns/qrouter-d24d57d0-2155-4011-80d4-f4dbd382c897: File exists\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303998] Re: vm fails with error vif_type=binding_failed using gre tunnels

2014-07-25 Thread Edgar Magana
This is a duplicate of bug:
https://bugs.launchpad.net/neutron/+bug/1305226

** Changed in: neutron
 Assignee: (unassigned) => Edgar Magana (emagana)

** Changed in: neutron
   Status: Triaged => Invalid

** Changed in: neutron
 Assignee: Edgar Magana (emagana) => (unassigned)

** Changed in: neutron
 Assignee: (unassigned) => Edgar Magana (emagana)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303998

Title:
  vm fails with error vif_type=binding_failed using gre tunnels

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am running Icehouse r-1 on Ubuntu 12.04. Whenever I try to launch a
  VM it immediately goes into error state. The log file for nova-compute
  shows the following:

   http_log_req 
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
  2014-04-07 19:15:32.888 2866 DEBUG neutronclient.client [-] RESP:{'date': 
'Mon, 07 Apr 2014 19:15:32 GMT', 'status': '204', 'content-length
  ': '0', 'x-openstack-request-id': 'req-92a58024-6cd6-4ef3-bd81-f579bd057445'} 
   http_log_resp 
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
  2014-04-07 19:15:32.888 2866 DEBUG nova.network.api 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087cd9e
  412daf2360c0cf83a5c6] Updating cache with info: [] 
update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/api.py:
  74
  2014-04-07 19:15:32.909 2866 ERROR nova.compute.manager 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087
  cd9e412daf2360c0cf83a5c6] [instance: a85f771d-13d2-4cba-88f6-6c26a5cc7f37] 
Error: Unexpected vif_type=binding_failed

  
  <--snip-->

  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 858, in 
unplug_vifs
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
self.vif_driver.unplug(instance, vif)
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 798, in unplug
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
_("Unexpected vif_type=%s") % vif_type)
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
NovaException: Unexpected vif_type=binding_failed
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
  2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] 
Returning exception Unexpected vif_type=binding_failed to caller
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
  2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] 
Returning exception Unexpected vif_type=binding_failed to caller

  full log file for nova-compute at: http://paste.openstack.org/show/75244/
  Log file for /var/log/neutron/openvswitch-agent.log is at: 
http://paste.openstack.org/show/75245/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348788] [NEW] network_device_mtu is not applied to VMs, only to agents

2014-07-25 Thread Ian Wells
Public bug reported:

(This is using the ML2 driver with the Linuxbridge agent, along with
libvirt and KVM.  This is likely to be agent-specific, but I think at
least some other agents will have the problem.)

1. set Neutron's network_device_mtu to (say) 9000, assuming you've set up an 
infrastructure that will pass large MTU packets
2. create a network
3. create two VMs on the network
4. attempt to pass large packets

This won't work.  The reason is that, although the Linuxbridge agent
does attempt to apply network_device_mtu to the tap interfaces, it has
not registered the relevant config option definitions from
neutron.agent.linux.interfaces, so the value is silently ignored when
the config files are read.


Registering the OPTS block from the interfaces.py file certainly fixes
the issue, but is likely to have other effects, since there are several
other config options in there that are at present ignored by the agent.
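
The one-line form of that workaround would be something like the
following (a sketch; whether pulling in the whole block is acceptable is
exactly the open question above):

    from oslo.config import cfg

    from neutron.agent.linux import interface

    # register the interface driver options (which include
    # network_device_mtu) so values from the config files are
    # no longer silently dropped
    cfg.CONF.register_opts(interface.OPTS)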

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348788

Title:
  network_device_mtu is not applied to VMs, only to agents

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  (This is using the ML2 driver with the Linuxbridge agent, along with
  libvirt and KVM.  This is likely to be agent-specific, but I think at
  least some other agents will have the problem.)

  1. set Neutron's network_device_mtu to (say) 9000, assuming you've set up an 
infrastructure that will pass large MTU packets
  2. create a network
  3. create two VMs on the network
  4. attempt to pass large packets

  This won't work.  The reason is that, although the Linuxbridge agent
  does attempt to apply network_device_mtu to the tap interfaces, it has
  not registered the relevant config option definitions from
  neutron.agent.linux.interfaces, so the value is silently ignored when
  the config files are read.

  
  Registering the OPTS block from the interfaces.py file certainly fixes
  the issue, but is likely to have other effects, since there are several
  other config options in there that are at present ignored by the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348766] [NEW] Big Switch: hash shouldn't be updated on unsuccessful calls

2014-07-25 Thread Kevin Benton
Public bug reported:

The configuration hash db is updated on every response from the backend
including errors that contain an empty hash. This is causing the hash to
be wiped out if a standby controller is contacted first, which opens a
narrow time window where the backend could become out of sync. It should
only update the hash on successful REST calls.
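
The intended behavior is roughly the following (a minimal sketch with
hypothetical names; the real plugin's response handling is more
involved):

    def maybe_store_hash(hash_handler, status_code, hash_header):
        # persist the configuration hash only for successful REST
        # calls that actually returned a hash, so an error response
        # from a standby controller cannot wipe the stored value
        if 200 <= status_code < 300 and hash_header:
            hash_handler.put_hash(hash_header)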

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Description changed:

  The configuration hash db is updated on every response from the backend
- including errors that contain an empty hash. It should only update the
- hash on successful REST calls.
+ including errors that contain an empty hash. This is causing the hash to
+ be wiped out if a standby controller is contacted first. It should only
+ update the hash on successful REST calls.

** Description changed:

  The configuration hash db is updated on every response from the backend
  including errors that contain an empty hash. This is causing the hash to
- be wiped out if a standby controller is contacted first. It should only
- update the hash on successful REST calls.
+ be wiped out if a standby controller is contacted first, which opens a
+ narrow time window where the backend could become out of sync. It should
+ only update the hash on successful REST calls.

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348766

Title:
  Big Switch: hash shouldn't be updated on unsuccessful calls

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The configuration hash db is updated on every response from the
  backend including errors that contain an empty hash. This is causing
  the hash to be wiped out if a standby controller is contacted first,
  which opens a narrow time window where the backend could become out of
  sync. It should only update the hash on successful REST calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348760] [NEW] volume create with image doesn't require an image

2014-07-25 Thread Doug Fish
Public bug reported:

1. Login to horizon UI
2. Go to Volumes tab from left menu
3a click on create volume and fill the details select volume-source as Image 
but don't actually specify any image
OR
3b click on create volume and fill the details select volume-source as Volume 
but don't actually specify any volume
4. submit the request

This should give error message that select the image/volume from which
volume needs to be created.  It is inconsistent to allow the form to be
submitted when create from image/create from volume has been selected
but no actual image or volume has been selected.
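
In Horizon terms the fix would be extra validation in the create-volume
form's clean() method, along these lines (a sketch with hypothetical
field names mirroring the dialog):

    from django import forms
    from django.utils.translation import ugettext_lazy as _

    class CreateVolumeForm(forms.Form):
        volume_source_type = forms.ChoiceField(
            choices=[('no_source_type', _("No source")),
                     ('image_source', _("Image")),
                     ('volume_source', _("Volume"))])
        image_source = forms.CharField(required=False)
        volume_source = forms.CharField(required=False)

        def clean(self):
            data = super(CreateVolumeForm, self).clean()
            source = data.get('volume_source_type')
            if source == 'image_source' and not data.get('image_source'):
                raise forms.ValidationError(
                    _("Select an image to create the volume from."))
            if source == 'volume_source' and not data.get('volume_source'):
                raise forms.ValidationError(
                    _("Select a volume to create the volume from."))
            return data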

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348760

Title:
  volume create with image doesn't require an image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Login to horizon UI
  2. Go to Volumes tab from left menu
  3a click on create volume and fill the details select volume-source as Image 
but don't actually specify any image
  OR
  3b click on create volume and fill the details select volume-source as Volume 
but don't actually specify any volume
  4. submit the request

  This should give error message that select the image/volume from which
  volume needs to be created.  It is inconsistent to allow the form to
  be submitted when create from image/create from volume has been
  selected but no actual image or volume has been selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348737] [NEW] Once a Gateway is set L-3 agent attempts to update external Gateway on every router update

2014-07-25 Thread Rajeev Grover
Public bug reported:

Once an external gateway is set, due to a logic error the L-3 agent
incorrectly concludes on every subsequent router update that the
external gateway has changed.  This causes the codepath that sets the
external gateway to be invoked unnecessarily.

In process_router(...):

    ex_gw_port = self._get_ex_gw_port(ri)  # returns ri.router.get('gw_port')
    ...
    if ex_gw_port and ex_gw_port != ri.ex_gw_port:
        self._set_subnet_info(ex_gw_port)  # <---
        ...
        ri.ex_gw_port = ex_gw_port

_set_subnet_info adds an element to ex_gw_port, thus making it different
from the gw_port obtained out of the router dict. Any subsequent
ex_gw_port != ri.ex_gw_port comparison would evaluate to True,
incorrectly.

One way to fix it would be to change

From:

    if ex_gw_port and ex_gw_port != ri.ex_gw_port:

To:

    if (ex_gw_port and (not ri.ex_gw_port
                        or ex_gw_port['id'] != ri.ex_gw_port['id'])):
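
For clarity, the comparison misfires because dict equality covers every
key, including the one _set_subnet_info adds ('ip_cidr'), while
comparing ids does not:

    >>> gw_port = {'id': 'abc123'}
    >>> cached = dict(gw_port)
    >>> cached['ip_cidr'] = '172.24.4.2/24'  # what _set_subnet_info adds
    >>> gw_port != cached
    True
    >>> gw_port['id'] != cached['id']
    False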

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348737

Title:
  Once a Gateway is set L-3 agent attempts to update external Gateway on
  every router update

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Once an external gateway is set, due to a logic error the L-3 agent
  incorrectly concludes on every subsequent router update that the
  external gateway has changed.  This causes the codepath that sets the
  external gateway to be invoked unnecessarily.

  In process_router(...):

      ex_gw_port = self._get_ex_gw_port(ri)  # returns ri.router.get('gw_port')
      ...
      if ex_gw_port and ex_gw_port != ri.ex_gw_port:
          self._set_subnet_info(ex_gw_port)  # <---
          ...
          ri.ex_gw_port = ex_gw_port

  _set_subnet_info adds an element to ex_gw_port, thus making it
  different from the gw_port obtained out of the router dict. Any
  subsequent ex_gw_port != ri.ex_gw_port comparison would evaluate to
  True, incorrectly.

  One way to fix it would be to change

  From:

      if ex_gw_port and ex_gw_port != ri.ex_gw_port:

  To:

      if (ex_gw_port and (not ri.ex_gw_port
                          or ex_gw_port['id'] != ri.ex_gw_port['id'])):

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330786] Re: dhcp agent not sending name server config for ipv6

2014-07-25 Thread Kyle Mestery
Marking as invalid. Some work for the stateful/stateless IPV6 BP will
address this bug. That work is tracked in this patch:

https://review.openstack.org/#/c/106299/

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330786

Title:
  dhcp agent not sending name server config  for ipv6

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  For IPv6, the dhcp agent doesn't configure the option file correctly,
  and therefore the client won't be configured with, for example, a name
  server for IPv6. The option should be written using the option6 tag
  for IPv6.
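
  For reference, in the dnsmasq opts file the IPv6 name server entry
  needs the option6 prefix, roughly like this (the tag and addresses are
  made-up examples):

      # IPv4 entry, as the agent writes today
      tag:subnet-1234,option:dns-server,10.0.0.2
      # IPv6 entry, which needs the option6 tag
      tag:subnet-5678,option6:dns-server,[2001:db8::2]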

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1219658] Re: Wrong image size using rbd backend for libvirt

2014-07-25 Thread Chuck Short
Rafael, this needs to have an SRU testcase as in:

https://wiki.ubuntu.com/StableReleaseUpdates

chuck

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1219658

Title:
  Wrong image size using rbd backend for libvirt

Status in OpenStack Compute (nova) havana series:
  In Progress
Status in “nova” package in Ubuntu:
  In Progress
Status in “nova” source package in Trusty:
  New

Bug description:
  [Impact]

    * [2cebfd2] libvirt: convert cpu features attribute from list to
     a set (LP: #1267191)

   cpu features list which is being sent to libvirt,
   when creating a domain or calling compareCPU, must contain only
   unique entries. Multiple issues arise when we are updating the
   features attribute in LibvirtConfigCPU class (for example during
   migration).

    * [b86a0e5] Fixes rbd backend image size (LP: #1219658) -> THIS!

   The original fix for bug 1219658 introduced a factor of 1024 error
   in the resulting rbd image size, so it is urgent to fix.

  [Test Case]
   
   LP: #1267191
   * systemctl restart openstack-nova-compute
 Observe /var/log/nova/compute.log

   LP: #1219658
   * Testing this fix only requires having an rbd backend.

  [Regression Potential]

   * Tests indicate the fix is running in a large production deployment
  without any problem.

   LP: #1267191
   * A regression would continue to cause nova not to start (as is happening
 today with this bug under described conditions).

   LP: #1219658
   * The rbd backend could stop working (keeping the bug will eventually
  cause an outage for those who use the rbd backend)

  [Other Info]

  For the rbd image backend for libvirt, the root partition uses the
  image size, not the 'disk' size; the code lacks a resize of the root
  volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/havana/+bug/1219658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348720] [NEW] Missing index for expire_reservations

2014-07-25 Thread Vish Ishaya
Public bug reported:

While investigating some database performance problems, we discovered
that there is no index on deleted for the reservations table. When this
table gets large, the expire_reservations code will do a full table scan
and take multiple seconds to complete. Because the expire runs on a
periodic, it can slow down the master database significantly and cause
nova or cinder to become extremely slow.

> EXPLAIN UPDATE reservations SET updated_at=updated_at, deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';
+----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
| id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
+----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
|  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
+----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

An index on (deleted, expire) would be the most efficient.
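
A minimal sketch of what such a migration could look like
(sqlalchemy-migrate style, as assumed for nova/cinder of this era; the
index name is illustrative):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        # Composite index so expire_reservations can seek on
        # deleted = 0 AND expire < :now instead of scanning every row.
        Index('reservations_deleted_expire_idx',
              reservations.c.deleted,
              reservations.c.expire).create(migrate_engine)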

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on deleted for the reservations table. When
  this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expire
  runs on a periodic, it can slow down the master database significantly
  and cause nova or cinder to become extremely slow.

  > EXPLAIN UPDATE reservations SET updated_at=updated_at, deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348706] [NEW] Python errors are not well displayed in instance details

2014-07-25 Thread François Magimel
Public bug reported:

When an instance fails, some information about the error is displayed
in the instance details ("Fault"): message, code, details and creation
date. However, the error details are not displayed well: the Python
error is on one line, for example.

** Affects: horizon
 Importance: Undecided
 Assignee: François Magimel (linkid)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => François Magimel (linkid)

** Description changed:

  When an instance fails, some information of the error is displayed in
  the instance details ("Fault") : message, code, details and creation
  date. However, details of the error are not well displayed : the python
- error is on one line.
+ error is on one line, for example.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348706

Title:
  Python errors are not well displayed in instance details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When an instance fails, some information of the error is displayed in
  the instance details ("Fault") : message, code, details and creation
  date. However, details of the error are not well displayed : the
  python error is on one line, for example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348703] [NEW] LinuxInterfaceDriver plug and unplug methods in derived class already check for device existence

2014-07-25 Thread Rossella Sblendido
Public bug reported:

LinuxInterfaceDriver plug and unplug in derived classes already check if
the device exists. There's no need to duplicate the check in the code
that is calling those methods. See l3_agent.py in internal_network_added
for example:

if not ip_lib.device_exists(interface_name,
                            root_helper=self.root_helper,
                            namespace=ri.ns_name):
    self.driver.plug(network_id, port_id, interface_name, mac_address,
                     namespace=ri.ns_name,
                     prefix=INTERNAL_DEV_PREFIX)

the check "if not ip_lib.device_exists" is a duplicate and it's
expensive.
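
A sketch of the simplified caller, assuming (per this report) that the
derived driver's plug() performs the existence check itself; the class
wrapper is illustrative, not the actual patch:

    INTERNAL_DEV_PREFIX = 'qr-'  # as in the l3 agent

    class L3AgentSketch(object):
        def __init__(self, driver):
            self.driver = driver  # a LinuxInterfaceDriver subclass

        def internal_network_added(self, ri, network_id, port_id,
                                   interface_name, mac_address):
            # No ip_lib.device_exists() guard here: the derived
            # driver's plug() already checks device existence.
            self.driver.plug(network_id, port_id, interface_name,
                             mac_address, namespace=ri.ns_name,
                             prefix=INTERNAL_DEV_PREFIX)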

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: low-hanging-fruit

** Changed in: neutron
   Status: New => Confirmed

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348703

Title:
  LinuxInterfaceDriver plug and unplug methods in derived class already
  check for device existence

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  LinuxInterfaceDriver plug and unplug in derived classes already check
  if the device exists. There's no need to duplicate the check in the
  code that is calling those methods. See l3_agent.py in
  internal_network_added for example:

  if not ip_lib.device_exists(interface_name,
                              root_helper=self.root_helper,
                              namespace=ri.ns_name):
      self.driver.plug(network_id, port_id, interface_name, mac_address,
                       namespace=ri.ns_name,
                       prefix=INTERNAL_DEV_PREFIX)

  the check "if not ip_lib.device_exists" is a duplicate and it's
  expensive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348680] [NEW] Missing headers in cURL examples in federation docs.

2014-07-25 Thread Marek Denis
Public bug reported:

The cURL examples that show how to fetch the accessible projects and
domains should also include the directive for sending the X-Auth-Token
header carrying the unscoped token.

Reference:
https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst#example-curl-1
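
For instance, a hedged sketch of the corrected request (the host is
made up; the OS-FEDERATION path follows the document referenced above):

    import requests

    UNSCOPED_TOKEN = 'token-from-the-federation-auth-flow'  # placeholder

    resp = requests.get(
        'http://keystone.example.com:5000/v3/OS-FEDERATION/projects',
        # The missing directive: the unscoped token must be sent in
        # the X-Auth-Token header.
        headers={'X-Auth-Token': UNSCOPED_TOKEN})
    print(resp.json())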

** Affects: keystone
 Importance: Undecided
 Assignee: Marek Denis (marek-denis)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Marek Denis (marek-denis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348680

Title:
  Missing headers in cURL examples in federation docs.

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The cURL examples that show how to fetch the accessible projects and
  domains should also include the directive for sending the X-Auth-Token
  header carrying the unscoped token.

  Reference:
  https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst#example-curl-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348661] [NEW] nova.tests.api.ec2.test_cloud.CloudTestCase.test_terminate_instances_two_instances race fails with UnexpectedDeletingTaskStateError

2014-07-25 Thread Matt Riedemann
Public bug reported:

This was being masked by bug 1311778 due to the MessagingTimeout
failure, but there are more specific errors.

http://logs.openstack.org/79/108879/1/gate/gate-nova-
python26/283e967/console.html#_2014-07-24_08_14_12_631

2014-07-24 08:14:12.631 | FAIL: 
nova.tests.api.ec2.test_cloud.CloudTestCase.test_terminate_instances_two_instances
2014-07-24 08:14:12.631 | tags: worker-4
2014-07-24 08:14:12.631 | 
--
2014-07-24 08:14:12.631 | Empty attachments:
2014-07-24 08:14:12.631 |   pythonlogging:'boto'
2014-07-24 08:14:12.631 |   stderr
2014-07-24 08:14:12.631 |   stdout
2014-07-24 08:14:12.631 | 
2014-07-24 08:14:12.631 | pythonlogging:'': {{{
2014-07-24 08:14:12.631 | INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'
2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting conductor node (version 
2014.2)
2014-07-24 08:14:12.632 | INFO [nova.virt.driver] Loading compute driver 
'nova.virt.fake.FakeDriver'
2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting compute node (version 
2014.2)
2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Auditing 
locally available compute resources
2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free ram (MB): 
7680
2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free disk (GB): 
1028
2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] PCI stats: []
2014-07-24 08:14:12.632 | INFO [nova.compute.resource_tracker] Compute_service 
record created for 093d0c3802bf440db8f3f839963027c4:fake-mini
2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting scheduler node (version 
2014.2)
2014-07-24 08:14:12.632 | INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'
2014-07-24 08:14:12.633 | AUDIT [nova.service] Starting network node (version 
2014.2)
2014-07-24 08:14:12.633 | AUDIT [nova.service] Starting consoleauth node 
(version 2014.2)
2014-07-24 08:14:12.633 | AUDIT [nova.compute.manager] Starting instance...
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Attempting claim: memory 
2048 MB, disk 20 GB
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total memory: 8192 MB, 
used: 512.00 MB
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] memory limit not 
specified, defaulting to unlimited
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total disk: 1028 GB, 
used: 0.00 GB
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] disk limit not specified, 
defaulting to unlimited
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Claim successful
2014-07-24 08:14:12.633 | AUDIT [nova.compute.manager] Starting instance...
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Attempting claim: memory 
2048 MB, disk 20 GB
2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total memory: 8192 MB, 
used: 2560.00 MB
2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] memory limit not 
specified, defaulting to unlimited
2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] Total disk: 1028 GB, 
used: 20.00 GB
2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] disk limit not specified, 
defaulting to unlimited
2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] Claim successful
2014-07-24 08:14:12.634 | WARNING [nova.compute.manager] Instance is not 
stopped. Calling the stop API.
2014-07-24 08:14:12.634 | ERROR [nova.compute.manager] error during stop() in 
sync_power_state.
2014-07-24 08:14:12.634 | Traceback (most recent call last):
2014-07-24 08:14:12.634 |   File "nova/compute/manager.py", line 5551, in 
_sync_instance_power_state
2014-07-24 08:14:12.634 | self.compute_api.force_stop(context, db_instance)
2014-07-24 08:14:12.634 |   File "nova/compute/api.py", line 1767, in force_stop
2014-07-24 08:14:12.634 | self.compute_rpcapi.stop_instance(context, 
instance, do_cast=do_cast)
2014-07-24 08:14:12.635 |   File "nova/compute/rpcapi.py", line 908, in 
stop_instance
2014-07-24 08:14:12.635 | return rpc_method(ctxt, 'stop_instance', 
instance=instance)
2014-07-24 08:14:12.635 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
2014-07-24 08:14:12.635 | wait_for_reply=True, timeout=timeout)
2014-07-24 08:14:12.635 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/transport.py",
 line 90, in _send
2014-07-24 08:14:12.635 | timeout=timeout)
2014-07-24 08:14:12.635 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 166, in send
2014-07-24 08:14:12.635 | return self._send(target, ctxt, message, 
wait_for_reply, timeout)
2014-07-24 08:14:12.635 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/mes

[Yahoo-eng-team] [Bug 1311778] Re: Unit tests fail with MessagingTimeout errors

2014-07-25 Thread Matt Riedemann
Marking this as closed because the messaging timeout error is masking
new, more specific, unit test failures, e.g.:

http://logs.openstack.org/79/108879/1/gate/gate-nova-
python26/283e967/console.html#_2014-07-24_08_14_12_643

I've already put in a few fixes for the messaging timeout issue, so
let's close this bug and open new ones for specific test failures.

** Changed in: nova
   Status: In Progress => Fix Committed

** No longer affects: oslo.messaging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311778

Title:
  Unit tests fail with MessagingTimeout errors

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  There is an issue that is causing unit tests to fail with the
  following error:

  MessagingTimeout: No reply on topic conductor
  MessagingTimeout: No reply on topic scheduler

  2014-04-23 13:45:52.017 | Traceback (most recent call last):
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-04-23 13:45:52.017 | incoming.message))
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-04-23 13:45:52.017 | return self._do_dispatch(endpoint, method, 
ctxt, args)
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-04-23 13:45:52.017 | result = getattr(endpoint, method)(ctxt, 
**new_args)
  2014-04-23 13:45:52.018 |   File "nova/conductor/manager.py", line 798, in 
build_instances
  2014-04-23 13:45:52.018 | legacy_bdm_in_spec=legacy_bdm)
  2014-04-23 13:51:50.628 |   File "nlibvir:  error : internal error could not 
initialize domain event timer
  2014-04-23 13:54:57.953 | ova/scheduler/rpcapi.py", line 120, in run_instance
  2014-04-23 13:54:57.953 | cctxt.cast(ctxt, 'run_instance', **msg_kwargs)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-04-23 13:54:57.953 | wait_for_reply=True, timeout=timeout)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/transport.py",
 line 90, in _send
  2014-04-23 13:54:57.953 | timeout=timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 166, in send
  2014-04-23 13:54:57.954 | return self._send(target, ctxt, message, 
wait_for_reply, timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 161, in _send
  2014-04-23 13:54:57.954 | 'No reply on topic %s' % target.topic)
  2014-04-23 13:54:57.954 | MessagingTimeout: No reply on topic scheduler

  

  2014-04-23 13:45:52.008 | Traceback (most recent call last):
  2014-04-23 13:45:52.008 |   File "nova/api/openstack/__init__.py", line 125, 
in __call__
  2014-04-23 13:45:52.008 | return req.get_response(self.application)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1320, in send
  2014-04-23 13:45:52.009 | application, catch_exc_info=False)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1284, in call_application
  2014-04-23 13:45:52.009 | app_iter = application(self.environ, 
start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.009 | return resp(environ, start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/p

[Yahoo-eng-team] [Bug 1348642] [NEW] Rebuild does not work with cells

2014-07-25 Thread Christopher Lefelhocz
Public bug reported:

The rebuild command will not work with cells. The command is dropped at
the API layer.

** Affects: nova
 Importance: High
 Assignee: Christopher Lefelhocz (christopher-lefelhoc)
 Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Christopher Lefelhocz (christopher-lefelhoc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348642

Title:
  Rebuild does not work with cells

Status in OpenStack Compute (Nova):
  New

Bug description:
  The rebuild command will not work with cells. The command is dropped
  at the API layer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348640] [NEW] can't launch instance using ceph without authentication

2014-07-25 Thread cristi1979
Public bug reported:

Instances fail to launch when using ceph as a storage backend without
authentication.

The main complaint comes from "/usr/lib/python2.7/site-
packages/nova/virt/libvirt/config.py", line 527.

Putting a print before "if self.auth_secret_type is not None:" shows
that the code is hit twice, first with auth_secret_type=ceph, the
second time with auth_secret_type=None.

We fixed it on our side by commenting out the entire if statement.
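
A standalone sketch of a more defensive shape for that block (assuming
lxml as in the trace below; this expresses the workaround as a guard
and is not the actual nova patch):

    from lxml import etree

    def format_auth(auth_secret_type, auth_username, auth_secret_uuid):
        # Skip the <auth> element entirely when credentials are
        # absent, instead of crashing on auth.set("username", None).
        if auth_secret_type is None or auth_username is None:
            return None
        auth = etree.Element("auth", username=auth_username)
        etree.SubElement(auth, "secret", type=auth_secret_type,
                         uuid=auth_secret_uuid)
        return auth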

Stack trace:

2014-07-25 08:44:17.042 13642 ERROR nova.compute.manager 
[req-1c542f62-1f54-4515-8290-425a62354d95 43d9d5b6c6794f52ab17a90855200436 
034f45eb9e24410db64f7c0c53110f02] [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] Instance failed to spawn
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] Traceback (most recent call last):
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1715, in _spawn
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] block_device_info)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2262, in 
spawn
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] write_to_disk=True)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3444, in 
to_xml
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] xml = conf.to_xml()
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py", line 69, in 
to_xml
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] root = self.format_dom()
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py", line 1240, in 
format_dom
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] self._format_devices(root)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py", line 1217, in 
_format_devices
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] devices.append(dev.format_dom())
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py", line 527, in 
format_dom
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] auth.set("username", 
self.auth_username)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File "lxml.etree.pyx", line 713, in 
lxml.etree._Element.set (src/lxml/lxml.etree.c:39438)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File "apihelpers.pxi", line 520, in 
lxml.etree._setAttributeValue (src/lxml/lxml.etree.c:17627)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]   File "apihelpers.pxi", line 1333, in 
lxml.etree._utf8 (src/lxml/lxml.etree.c:24601)
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] TypeError: Argument must be bytes or 
unicode, got 'NoneType'
2014-07-25 08:44:17.042 13642 TRACE nova.compute.manager [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40]
2014-07-25 08:44:17.097 13642 AUDIT nova.compute.manager 
[req-1c542f62-1f54-4515-8290-425a62354d95 43d9d5b6c6794f52ab17a90855200436 
034f45eb9e24410db64f7c0c53110f02] [instance: 
b3e9b0ee-50d9-4579-9cc7-de36762b2e40] Terminating instance

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348640

Title:
  can't launch instance using ceph without authentication

Status in OpenStack Compute (Nova):
  New

Bug description:
  Instances fail to launch when using ceph as a storage backend without
  authentication.

  The main complaint comes from "/usr/lib/python2.7/site-
  packages/nova/virt/libvirt/config.py", line 527.

  Putting a print before "if self.auth_secret_type is not None:" shows
  that the code is hit twice, first with auth_secret_type=ceph, second
  time w

[Yahoo-eng-team] [Bug 1328288] Re: [mos] openvswitch agent fails with bridges longer than 11 chars

2014-07-25 Thread Alexander Ignatov
** Changed in: fuel/5.0.x
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328288

Title:
  [mos] openvswitch agent fails with bridges longer than 11 chars

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Won't Fix
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The openvswitch agent will try to construct veth pairs with names
  longer than the maximum allowed (15) and fail. VMs will then have no
  external connectivity.

  This happens in cases where the bridge name is very long (e.g. int-br-
  bonded).
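
  As a hedged illustration of one way to stay within the kernel's
  15-character interface-name limit (not necessarily the committed
  fix):

      import hashlib

      MAX_IFNAME_LEN = 15  # Linux IFNAMSIZ (16) minus the trailing NUL

      def veth_name(prefix, bridge):
          name = prefix + bridge
          if len(name) <= MAX_IFNAME_LEN:
              return name
          # Hash long bridge names into a fixed-width suffix.
          digest = hashlib.sha1(bridge.encode()).hexdigest()
          return prefix + digest[:MAX_IFNAME_LEN - len(prefix)]

      print(veth_name('int-', 'br-int'))         # short name kept as-is
      print(veth_name('int-', 'br-bonded-ex1'))  # too long, hashed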

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1328288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348629] [NEW] Baremetal driver reports bogus vm_mode of 'baremetal'

2014-07-25 Thread Daniel Berrange
Public bug reported:

The Baremetal driver reports a 'vm_mode' of 'baremetal' for supported
instance types. This is bogus because the baremetal driver is running
the OS using the native machine ABI, which is represented by
vm_mode.HVM.

** Affects: nova
 Importance: Undecided
 Assignee: Daniel Berrange (berrange)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Daniel Berrange (berrange)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348629

Title:
  Baremetal driver reports bogus vm_mode of 'baremetal'

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Baremetal driver reports a 'vm_mode' of 'baremetal' for supported
  instance types. This is bogus because the baremetal driver is running
  the OS using the native machine ABI, which is represented by
  vm_mode.HVM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348624] [NEW] XenAPI driver uses a bogus architecture type for i686 platforms

2014-07-25 Thread Daniel Berrange
Public bug reported:

The XenAPI driver simply parses the Xen hypervisor capabilities to
report the architecture type in the supported instances list.
Unfortunately the Xen hypervisor uses an architecture name of 'x86_32'
for i686 platforms, which means it won't match the standard OS
'uname'-reported architecture used by other drivers.
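
A minimal sketch of the kind of normalization this implies (the alias
table is illustrative):

    # Map Xen's capability arch names onto the uname-style names used
    # by the other drivers before reporting supported instances.
    XEN_ARCH_ALIASES = {'x86_32': 'i686'}

    def canonical_arch(xen_arch):
        return XEN_ARCH_ALIASES.get(xen_arch, xen_arch)

    print(canonical_arch('x86_32'))  # i686
    print(canonical_arch('x86_64'))  # unchanged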

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348624

Title:
  XenAPI driver uses a bogus architecture type for i686 platforms

Status in OpenStack Compute (Nova):
  New

Bug description:
  The XenAPI driver simply parses the Xen hypervisor capabilities to
  report the architecture type in the supported instances list.
  Unfortunately the Xen hypervisor uses an architecture name of 'x86_32'
  for i686 platforms, which means it won't match the standard OS
  'uname'-reported architecture used by other drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348623] [NEW] XenAPI and Baremetal drivers use bogus hypervisor type for supported instances

2014-07-25 Thread Daniel Berrange
Public bug reported:

The XenAPI driver reports a hypervisor type of 'xapi' for supported
instances. This is confusing the hypervisor type, which should be 'xen',
with the management API type which is 'xapi'.

The Baremetal driver reports a hypervisor type of 'baremetal' for
supported instances. This is confusing the hypervisor type with the nova
driver type. There is no hypervisor concept with the bare metal driver,
things just run natively, so the type should be 'native'.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348623

Title:
  XenAPI and Baremetal drivers use bogus hypervisor type for supported
  instances

Status in OpenStack Compute (Nova):
  New

Bug description:
  The XenAPI driver reports a hypervisor type of 'xapi' for supported
  instances. This is confusing the hypervisor type, which should be
  'xen', with the management API type which is 'xapi'.

  The Baremetal driver reports a hypervisor type of 'baremetal' for
  supported instances. This is confusing the hypervisor type with the
  nova driver type. There is no hypervisor concept with the bare metal
  driver, things just run natively, so the type should be 'native'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348610] [NEW] pep8 errors "horizon" subdirectory

2014-07-25 Thread Pawel Skowron
Public bug reported:

While fixing https://bugs.launchpad.net/horizon/+bug/1347472 ("Re-enable
disabled pep8 errors"), around 2,500 pep8 errors surfaced. Since there
are many of them, this particular bug will cover the "horizon"
subdirectory, and there will be other bugs to fix all the issues before
bug https://bugs.launchpad.net/horizon/+bug/1347472 is closed.

** Affects: horizon
 Importance: Undecided
 Assignee: Pawel Skowron (pawel-skowron)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Pawel Skowron (pawel-skowron)

** Changed in: horizon
   Status: New => In Progress

** Summary changed:

- pep8 errors horizon subdirectory
+ pep8 errors "horizon" subdirectory

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348610

Title:
  pep8 errors "horizon" subdirectory

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  While fixing https://bugs.launchpad.net/horizon/+bug/1347472
  ("Re-enable disabled pep8 errors"), around 2,500 pep8 errors surfaced.
  Since there are many of them, this particular bug will cover the
  "horizon" subdirectory, and there will be other bugs to fix all the
  issues before bug https://bugs.launchpad.net/horizon/+bug/1347472 is
  closed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348610/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214341] Re: Not all db.sqal.session methods are wrapped by wrap_db_error

2014-07-25 Thread Russell Bryant
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1214341

Title:
  Not all db.sqal.session methods are wrapped by wrap_db_error

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  first(), all(), begin(), commit() and other public methods can raise
  a number of exceptions that should be wrapped in any case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1214341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260265] Re: BaremetalHostManager cannot distinguish baremetal hosts from other hosts

2014-07-25 Thread Russell Bryant
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260265

Title:
  BaremetalHostManager cannot distinguish baremetal hosts from other
  hosts

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  BaremetalHostManager used to distinguish baremetal hosts by checking
  whether "baremetal_driver" exists in the capabilities. However, now
  BaremetalHostManager cannot, because capabilities are not reported to
  the scheduler and BaremetalHostManager always receives empty
  capabilities. As a result, BaremetalHostManager just does the same
  thing as the original HostManager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1260265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1320513] Re: IPMI commands are sent / queried too fast

2014-07-25 Thread Russell Bryant
** Changed in: ironic
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320513

Title:
  IPMI commands are sent / queried too fast

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  
  http://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/second-gen-interface-spec-v2.pdf
  has this in it:
  ---
  1.7.32 Configuration Interfaces
  ...
  In some implementations, changes to configuration parameters may take
  effect immediately. Thus, a remote application should be careful when
  setting parameters that could cause the application to become
  disconnected from the BMC.

  For the purpose of conformance checking, up to 5 seconds will be
  allowed between the time a parameter is changed to when it must have
  taken effect.
  ---

  We've seen repeated cases of BMCs locking up or getting confused with
  high-frequency polling - it might be an idea to wait 5 seconds - the
  required maximum time between change and effect - rather than the
  polling interval we use today.
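
  A sketch of the suggested pacing, taking the spec's 5-second
  conformance window as the floor; the setter/getter callables are
  hypothetical wrappers around ipmitool:

      import time

      IPMI_SETTLE_SECONDS = 5  # max time a change may take to apply

      def set_param_and_verify(set_param, get_param, value):
          set_param(value)
          # Do not re-query the BMC before the settle window elapses.
          time.sleep(IPMI_SETTLE_SECONDS)
          return get_param() == value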

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1320513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328997] Re: Unit test failure: openstack_citest" is being accessed by other users\nDETAIL: There are 1 other session(s) using the database.

2014-07-25 Thread Russell Bryant
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328997

Title:
  Unit test failure: openstack_citest" is being accessed by other
  users\nDETAIL:  There are 1 other session(s) using the database.

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  We are periodically seeing this nova unit test failure in our CI
  system:

  openstack_citest" is being accessed by other users\nDETAIL:  There are
  1 other session(s) using the database.

  http://logs.openstack.org/76/98376/1/gate/gate-nova-
  python27/d2a0593/console.html

  
  2014-06-11 06:26:40.002 | FAIL: 
nova.tests.db.test_migrations.TestNovaMigrations.test_postgresql_opportunistically
  2014-06-11 06:26:40.002 | tags: worker-6
  2014-06-11 06:26:40.003 | 
--
  2014-06-11 06:26:40.003 | Empty attachments:
  2014-06-11 06:26:40.003 |   pythonlogging:''
  2014-06-11 06:26:40.003 |   stderr
  2014-06-11 06:26:40.003 |   stdout
  2014-06-11 06:26:40.003 | 
  2014-06-11 06:26:40.004 | Traceback (most recent call last):
  2014-06-11 06:26:40.004 |   File "nova/tests/db/test_migrations.py", line 
139, in test_postgresql_opportunistically
  2014-06-11 06:26:40.004 | self._test_postgresql_opportunistically()
  2014-06-11 06:26:40.004 |   File "nova/tests/db/test_migrations.py", line 
428, in _test_postgresql_opportunistically
  2014-06-11 06:26:40.004 | self._reset_database(database)
  2014-06-11 06:26:40.004 |   File "nova/tests/db/test_migrations.py", line 
335, in _reset_database
  2014-06-11 06:26:40.004 | self._reset_pg(conn_pieces)
  2014-06-11 06:26:40.005 |   File "nova/openstack/common/lockutils.py", line 
249, in inner
  2014-06-11 06:26:40.005 | return f(*args, **kwargs)
  2014-06-11 06:26:40.005 |   File "nova/tests/db/test_migrations.py", line 
244, in _reset_pg
  2014-06-11 06:26:40.005 | self.execute_cmd(droptable)
  2014-06-11 06:26:40.005 |   File "nova/tests/db/test_migrations.py", line 
227, in execute_cmd
  2014-06-11 06:26:40.005 | "Failed to run: %s\n%s" % (cmd, output))
  2014-06-11 06:26:40.005 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  2014-06-11 06:26:40.006 | self.assertThat(observed, matcher, message)
  2014-06-11 06:26:40.006 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  2014-06-11 06:26:40.006 | raise mismatch_error
  2014-06-11 06:26:40.006 | MismatchError: !=:
  2014-06-11 06:26:40.006 | reference = ''
  2014-06-11 06:26:40.006 | actual= '''\
  2014-06-11 06:26:40.007 | Unexpected error while running command.
  2014-06-11 06:26:40.007 | Command: psql -w -U openstack_citest -h localhost 
-c 'drop database if exists openstack_citest;' -d template1
  2014-06-11 06:26:40.007 | Exit code: 1
  2014-06-11 06:26:40.007 | Stdout: ''
  2014-06-11 06:26:40.007 | Stderr: 'ERROR:  database "openstack_citest" is 
being accessed by other users\\nDETAIL:  There are 1 other session(s) using the 
database.\\n\
  2014-06-11 06:26:40.007 | : Failed to run: psql -w -U openstack_citest -h 
localhost -c 'drop database if exists openstack_citest;' -d template1

  
  elastic-search query: message:"Stderr: \'ERROR:  database 
\"openstack_citest\" is being accessed by other users\\nDETAIL:  There are 1 
other session(s) using the database.\\n\'" AND project:"openstack/nova"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1328997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348589] [NEW] dhcp port binding fail

2014-07-25 Thread Xurong Yang
Public bug reported:

When creating a subnet while the ovs agent is down, the subnet and the
corresponding dhcp port are created successfully. However, the vif_type
of the port is "binding_failed" and the segment is NULL, so when the
ovs agent restarts it won't set the vlan tag of the dhcp port, and
there is no chance for the vif_type to be set again.

I think we need a mechanism to validate the dhcp port so the dhcp agent
can recreate or update the port when the vif_type is invalid.
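
A rough sketch of the validation being proposed (the constant matches
neutron's portbindings extension; the resync hook is hypothetical):

    VIF_TYPE_BINDING_FAILED = 'binding_failed'

    def dhcp_port_needs_rebind(port):
        # On resync, the agent could update (or recreate) such a port
        # to give the plugin another chance to bind it.
        return port.get('binding:vif_type') == VIF_TYPE_BINDING_FAILED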

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348589

Title:
  dhcp port binding fail

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating a subnet while the ovs agent is down, the subnet and the
  corresponding dhcp port are created successfully. However, the
  vif_type of the port is "binding_failed" and the segment is NULL, so
  when the ovs agent restarts it won't set the vlan tag of the dhcp
  port, and there is no chance for the vif_type to be set again.

  I think we need a mechanism to validate the dhcp port so the dhcp
  agent can recreate or update the port when the vif_type is invalid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348584] [NEW] KeyError in nova.compute.api.API.external_instance_event

2014-07-25 Thread Salvatore Orlando
Public bug reported:

The fix for bug 1333654 ensured that events for instances without a
host are not accepted. However, instances without a host are still
being passed to the compute API layer.

This is likely to result in KeyErrors such as the one found here:
http://logs.openstack.org/51/109451/2/check/check-tempest-dsvm-neutron-full/ad70f74/logs/screen-n-api.txt.gz#_2014-07-25_01_41_48_068

The fix for this bug should be straightforward.
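
The fix could be little more than a filter before the compute API call,
sketched here with plain dicts rather than instance objects:

    def hosted_instances(instances):
        # Events for host-less instances were already rejected, so such
        # instances must not reach the compute API layer either.
        return [inst for inst in instances if inst.get('host')]

    print(hosted_instances([{'host': 'cmp1'}, {'host': None}]))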

** Affects: nova
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348584

Title:
  KeyError in nova.compute.api.API.external_instance_event

Status in OpenStack Compute (Nova):
  New

Bug description:
  The fix for bug 1333654 ensured that events for instances without a
  host are not accepted. However, instances without a host are still
  being passed to the compute API layer.

  This is likely to result in KeyErrors such as the one found here:
  http://logs.openstack.org/51/109451/2/check/check-tempest-dsvm-neutron-full/ad70f74/logs/screen-n-api.txt.gz#_2014-07-25_01_41_48_068

  The fix for this bug should be straightforward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348576] [NEW] DeprecationWarning: Using mimetype keyword argument is deprecated, use content_type instead

2014-07-25 Thread Matthias Runge
Public bug reported:

During test execution, the following messages pop up:

DeprecationWarning: Using mimetype keyword argument is deprecated, use 
content_type instead
WARNING:py.warnings:DeprecationWarning: Using mimetype keyword argument is 
deprecated, use content_type instead

This should be fixed, since that Django feature is deprecated and will
break with Django-1.7
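
The mechanical fix is a keyword swap, as in this minimal sketch:

    from django.http import HttpResponse

    def make_json_response(payload):
        # Deprecated: HttpResponse(payload, mimetype='application/json')
        return HttpResponse(payload, content_type='application/json')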

** Affects: horizon
 Importance: Low
 Assignee: Matthias Runge (mrunge)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348576

Title:
  DeprecationWarning: Using mimetype keyword argument is deprecated, use
  content_type instead

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  During test execution, the following messages pop up:

  DeprecationWarning: Using mimetype keyword argument is deprecated, use 
content_type instead
  WARNING:py.warnings:DeprecationWarning: Using mimetype keyword argument is 
deprecated, use content_type instead

  This should be fixed, since that Django feature is deprecated and will
  break with Django-1.7

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348561] [NEW] Cinder is not mocked properly

2014-07-25 Thread Tatiana Ovchinnikova
Public bug reported:

Cinder traces return:

DEBUG:cinderclient.client:Connection refused:
HTTPConnectionPool(host='public.nova.example.com', port=8776): Max
retries exceeded with url: /v1/types/1 (Caused by : [Errno -2] Name or service not known)
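
A hedged sketch of stubbing the client in a test so nothing tries to
reach public.nova.example.com (the patched module path is an assumption
about Horizon's api wrapper):

    import mock

    FAKE_TYPE = {'id': '1', 'name': 'lvm'}

    @mock.patch('openstack_dashboard.api.cinder.cinderclient')
    def test_volume_type_lookup(mock_cinderclient):
        mock_cinderclient.return_value.volume_types.get.return_value = \
            FAKE_TYPE
        # Code under test now hits the stub instead of opening a
        # connection to port 8776.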

** Affects: horizon
 Importance: Medium
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1348561

Title:
  Cinder is not mocked properly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cinder traces return:

  DEBUG:cinderclient.client:Connection refused:
  HTTPConnectionPool(host='public.nova.example.com', port=8776): Max
  retries exceeded with url: /v1/types/1 (Caused by : [Errno -2] Name or service not known)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1348561/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336800] Re: neutron firewall-rule-show is not displaying protocol field when set to any

2014-07-25 Thread Koteswara Rao Kelam
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336800

Title:
  neutron firewall-rule-show is not displaying protocol field when set
  to any

Status in Python client library for Neutron:
  In Progress

Bug description:
  DESCRIPTION: 
  neutron firewall-rule-show is not displaying protocol field when set to any

  Steps to Reproduce: 
  create a firewall rule with the protocol option set to any
  check the protocol field in the neutron firewall-rule-show output

  Actual Results: 
  root@IGA-OSC:~# fwrc --name r4 --protocol any  --action allow
  Created a new firewall_rule:
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | allow                                |
  | description            |                                      |
  | destination_ip_address |                                      |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | efa447cd-f411-48b2-a9dc-804b42fd371b |
  | ip_version             | 4                                    |
  | name                   | r4                                   |
  | position               |                                      |
  | protocol               |                                      |
  | shared                 | False                                |
  | source_ip_address      |                                      |
  | source_port            |                                      |
  | tenant_id              | d9481c57a11c46eea62886938b5378a7     |
  +------------------------+--------------------------------------+
  root@IGA-OSC:~# fwrs r4
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | allow                                |
  | description            |                                      |
  | destination_ip_address |                                      |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | efa447cd-f411-48b2-a9dc-804b42fd371b |
  | ip_version             | 4                                    |
  | name                   | r4                                   |
  | position               |                                      |
  | protocol               |                                      |
  | shared                 | False                                |
  | source_ip_address      |                                      |
  | source_port            |                                      |
  | tenant_id              | d9481c57a11c46eea62886938b5378a7     |
  +------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1336800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309055] Re: Post operation of migration fails with "Connection to neutron failed" error

2014-07-25 Thread haruka tanizawa
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309055

Title:
  Post operation of migration fails with "Connection to neutron
  failed" error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The post-migration operation fails after a successful migration, and
  the fields "node", "host" and "task_state" in the "instances" table
  are not updated in the nova database. This happens when nova is
  configured to work with neutron:

  grep neutron /etc/nova/nova.conf :

  network_api_class=nova.network.neutronv2.api.API
  neutron_url=http://controller:9696
  neutron_auth_strategy=keystone
  neutron_admin_tenant_name=service
  neutron_admin_username=neutron
  neutron_admin_password=pass
  neutron_admin_auth_url=http://controller:35357/v2.0
  neutron_metadata_proxy_shared_secret = pass
  service_neutron_metadata_proxy = true

  Latest nova/neutron code in Trusty:
   nova-compute          1:2014.1-0ubuntu1
   python-novaclient     1:2.17.0-0ubuntu1
   python-neutronclient  1:2.3.4-0ubuntu1
   neutron-common        1:2014.1~rc2-0ubuntu4

   /var/log/nova/nova-compute.log has stacktrace:
  TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  TRACE oslo.messaging.rpc.dispatcher incoming.message))
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, 
method, ctxt, args)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, 
method)(ctxt, **new_args)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in 
decorated_function
  TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, 
**kwargs)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  TRACE oslo.messaging.rpc.dispatcher payload)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, 
self.tb)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  8 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 309, in 
decorated_function
  TRACE oslo.messaging.rpc.dispatcher e, sys.exc_info())
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, 
self.tb)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 296, in 
decorated_function
  TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, 
**kwargs)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4669, in 
post_live_migration_at_destination
  TRACE oslo.messaging.rpc.dispatcher migration)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 259, in 
network_migrate_instance_finish
  TRACE oslo.messaging.rpc.dispatcher migration)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 391, in 
network_migrate_instance_finish
  TRACE oslo.messaging.rpc.dispatcher instance=instance_p, 
migration=migration_p)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  TRACE oslo.messaging.rpc.dispatcher wait_for_reply=True, timeout=timeout)
  TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
  TRACE oslo.messaging.rpc.dispatcher timeout=timeout)
   TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
412, in send
   TRACE oslo.messaging.rpc.dispatcher return self._send(target, ctxt, 
message, wait_for_reply, timeout)
   TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
405, in 

[Yahoo-eng-team] [Bug 1348509] [NEW] the volume may be legacy when we delete instance whose task_state is block_device_mapping

2014-07-25 Thread zhangtralon
Public bug reported:

Here, two scenarios may leave a volume behind when we delete an
instance whose task_state is block_device_mapping. The first scenario
is creating an instance from a boot volume that was created from an
image; the other is creating an instance from an image together with a
volume created from an image.

Two examples to reproduce the problem on latest Icehouse:
1. the first scene
(1) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(2) root@devstack:~# nova boot --flavor m1.tiny --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vda,size=1,shutdown=removed,bootindex=0 --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 tralon_test
root@devstack:~# nova list
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State           | Power State | Networks          |
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
| 57cbb39d-c93f-44eb-afda-9ce00110950d | tralon_test | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.20 |
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
(3) root@devstack:~# nova delete tralon_test
root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(4) root@devstack:~# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 3e5579a9-5aac-42b6-9885-441e861f6cc0 | available | None |  1   |     None    |  false   |                                      |
| a4121322-529b-4223-ac26-0f569dc7821e | available |      |  1   |     None    |   true   |                                      |
| a7ad846b-8638-40c1-be42-f2816638a917 |   in-use  |      |  1   |     None    |   true   | 57cbb39d-c93f-44eb-afda-9ce00110950d |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
We can see that the instance 57cbb39d-c93f-44eb-afda-9ce00110950d was
deleted while the volume still exists with the "in-use" status.

2. the second scene
(1) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(2) root@devstack:~# nova boot --flavor m1.tiny --image 61ebee75-5883-49a3-bf85-ad6f6c29fc1b --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vdb,size=1,shutdown=removed tralon_image_instance
root@devstack:~# nova list
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
| ID                                   | Name                  | Status | Task State           | Power State | Networks          |
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
| 25bcfe84-0c3f-40d3-a917-4791e092fa06 | tralon_image_instance | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.26 |
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
(3) root@devstack:~# nova delete 25bcfe84-0c3f-40d3-a917-4791e092fa06
(4) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(5) root@devstack:~# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 3e5579a