[Yahoo-eng-team] [Bug 1376563] [NEW] add unit tests covering full-sync to ODL

2014-10-01 Thread Cédric OLLIVIER
Public bug reported:

Full synchronization to OpenDaylight isn't covered by any unit test.
Bug #1371115, "fix full synchronization between neutron and ODL", would
have been caught by such tests.

This work would complement bug #1325184, which focuses on covering the
individual operations.
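
For illustration, a minimal sketch of such a test, assuming a
hypothetical sync_full() helper that pushes every resource to the
controller, with the REST sender mocked out (the names are
illustrative, not the actual ODL driver API):

    import unittest
    from unittest import mock


    def sync_full(plugin, context, sendjson):
        # Illustrative full sync: POST every known resource to ODL.
        for collection, getter in (('networks', plugin.get_networks),
                                   ('subnets', plugin.get_subnets),
                                   ('ports', plugin.get_ports)):
            for resource in getter(context):
                sendjson('post', collection, resource)


    class TestFullSync(unittest.TestCase):
        def test_full_sync_posts_every_resource(self):
            plugin = mock.Mock()
            plugin.get_networks.return_value = [{'id': 'net-1'}]
            plugin.get_subnets.return_value = [{'id': 'subnet-1'}]
            plugin.get_ports.return_value = [{'id': 'port-1'}]
            sendjson = mock.Mock()

            sync_full(plugin, mock.sentinel.context, sendjson)

            sendjson.assert_any_call('post', 'networks', {'id': 'net-1'})
            sendjson.assert_any_call('post', 'subnets', {'id': 'subnet-1'})
            sendjson.assert_any_call('post', 'ports', {'id': 'port-1'})


    if __name__ == '__main__':
        unittest.main()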

** Affects: neutron
 Importance: Undecided
 Assignee: Cédric OLLIVIER (m.col)
 Status: New


** Tags: icehouse-backport-potential opendaylight unittest

** Changed in: neutron
 Assignee: (unassigned) => Cédric OLLIVIER (m.col)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376563

Title:
  add unit tests covering full-sync to ODL

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Full synchronization to OpenDaylight isn't covered by any unit test.
  Bug #1371115, "fix full synchronization between neutron and ODL",
  would have been caught by such tests.

  This work would complement bug #1325184, which focuses on covering
  the individual operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315495] Re: Openvswitch throwing a table missing error?

2014-10-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315495

Title:
  Openvswitch throwing a table missing error?

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  I am running Icehouse with CentOS 6.4, and I followed the Red Hat
  deployment guide for my install.  It is a 3-node setup with
  controller, network and compute nodes.  My issue is happening on the
  compute node.

  My /var/log/neutron/openvswitch-agent.log file is continually logging
  this error:

  2014-05-02 15:11:44.195 2003 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Agent out of sync with 
plugin!
  2014-05-02 15:11:44.277 2003 WARNING neutron.agent.linux.ovs_lib [-] Found 
failed openvswitch port: [u'qvo93d39ce1-92', [u'map', [[u'attached-mac', 
u'fa:16:3e:37:26:f9'], [u'iface-id', u'93d39ce1-9209-49ee-96b5-01ff479b71c5'], 
[u'iface-status', u'active'], [u'vm-uuid', 
u'74187c71-8fed-4214-abb2-42c301489deb']]], -1]
  2014-05-02 15:11:44.278 2003 WARNING neutron.agent.linux.ovs_lib [-] Found 
failed openvswitch port: [u'qvo02722581-14', [u'map', [[u'attached-mac', 
u'fa:16:3e:1f:59:19'], [u'iface-id', u'02722581-14f4-4e6f-bfd2-8fc6594440da'], 
[u'iface-status', u'active'], [u'vm-uuid', 
u'3402975f-6a90-45d9-86e4-0f4ed00b2762']]], -1]
  2014-05-02 15:11:44.358 2003 INFO neutron.agent.securitygroups_rpc [-] 
Preparing filters for devices set([u'831403fc-8d48-4949-8a4f-4c4f3455d81a', 
u'fd5bb883-a415-49ec-849a-07442e3c7f1c'])
  2014-05-02 15:11:44.796 2003 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 
fd5bb883-a415-49ec-849a-07442e3c7f1c updated. Details: {u'admin_state_up': 
True, u'network_id': u'5e377240-4297-44c5-aec3-90e7db89b7a2', 
u'segmentation_id': 2, u'physical_network': None, u'device': 
u'fd5bb883-a415-49ec-849a-07442e3c7f1c', u'port_id': 
u'fd5bb883-a415-49ec-849a-07442e3c7f1c', u'network_type': u'gre'}
  2014-05-02 15:11:44.854 2003 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing 
VIF ports
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1226, in rpc_loop
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent sync = 
self.process_network_ports(port_info)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1080, in process_network_ports
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent devices_added_updated)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 985, in treat_devices_added_or_updated
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.context, device, 
self.agent_id, cfg.CONF.host)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/rpc.py", line 107, in 
update_device_up
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent topic=self.topic)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/proxy.py", line 
125, in call
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent result = 
rpc.call(context, real_topic, msg, timeout)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", 
line 112, in call
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
_get_impl().call(CONF, context, topic, msg, timeout)
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/impl_qpid.py", 
line 784, in call
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
rpc_amqp.get_connection_pool(conf, Connection))
  2014-05-02 15:11:44.854 2003 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 
575, in call
  2014-05-02 15:11:44.854 2003

[Yahoo-eng-team] [Bug 1376542] [NEW] Translation import for Juno release

2014-10-01 Thread Akihiro Motoki
Public bug reported:

Translation import for Juno release.

As discussed in the Horizon meeting on Sep 30, a patch to import
translations will be proposed around Oct 9.

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376542

Title:
  Translation import for Juno release

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Translation import for Juno release.

  As discussed in the Horizon meeting on Sep 30, a patch to import
  translations will be proposed around Oct 9.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376389] Re: boto/test_ec2_instance_run.py fails with MismatchError, 'error' not in set

2014-10-01 Thread Ghanshyam Mann
*** This bug is a duplicate of bug 1375108 ***
https://bugs.launchpad.net/bugs/1375108

This is a nova bug: https://bugs.launchpad.net/nova/+bug/1375108

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

** This bug has been marked a duplicate of bug 1375108
   Failed to reboot instance successfully with EC2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376389

Title:
  boto/test_ec2_instance_run.py fails with MismatchError, 'error' not in
  set

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  As seen here: 
http://logs.openstack.org/02/125402/1/check/check-tempest-dsvm-postgres-full/24a9390/console.html
  and here: 
http://logs.openstack.org/02/125402/1/check/check-tempest-dsvm-full-icehouse/dae39ea/console.html

   Traceback (most recent call last):
   2014-10-01 17:20:08.496 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 224, in 
test_run_reboot_terminate_instance
   2014-10-01 17:20:08.496 | self.assertInstanceStateWait(instance, 
'_GONE') 
   2014-10-01 17:20:08.496 |   File "tempest/thirdparty/boto/test.py", line 
358, in assertInstanceStateWait
   2014-10-01 17:20:08.496 | state = self.waitInstanceState(lfunction, 
wait_for)
   2014-10-01 17:20:08.496 |   File "tempest/thirdparty/boto/test.py", line 
343, in waitInstanceState
   2014-10-01 17:20:08.497 | self.valid_instance_state)
   2014-10-01 17:20:08.497 |   File "tempest/thirdparty/boto/test.py", line 
334, in state_wait_gone
   2014-10-01 17:20:08.497 | self.assertIn(state, valid_set | 
self.gone_set)
   2014-10-01 17:20:08.497 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 354, in 
assertIn
   2014-10-01 17:20:08.497 | self.assertThat(haystack, 
Contains(needle), message)
   2014-10-01 17:20:08.497 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in 
assertThat
   2014-10-01 17:20:08.497 | raise mismatch_error
   2014-10-01 17:20:08.497 | MismatchError: u'error' not in set(['stopped', 
'stopping', 'paused', 'running', 'terminated', 'shutting-down', '_GONE', 
'pending'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376527] [NEW] big switch plugin doesn't notify agent on sec group update

2014-10-01 Thread Kevin Benton
Public bug reported:

The Big Switch plugin is not sending notifications to the security group
agent after the security groups are updated.
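
A minimal, self-contained sketch of the missing step: after persisting
a security group update, fan a notification out to the agents. The
security_groups_rule_updated() call mirrors neutron's security group
RPC notifier of that era; the surrounding class is illustrative, not
the actual Big Switch plugin code.

    from unittest import mock


    class SecurityGroupUpdateSketch(object):
        notifier = None  # the real plugin wires in an RPC notifier

        def update_security_group(self, context, sg_id, sg):
            updated = self._update_in_db(context, sg_id, sg)
            # The missing step: tell the agents the rules changed so
            # they can rebuild the filters for affected ports.
            self.notifier.security_groups_rule_updated(context, [sg_id])
            return updated

        def _update_in_db(self, context, sg_id, sg):
            return dict(sg, id=sg_id)  # stand-in for the DB layer


    plugin = SecurityGroupUpdateSketch()
    plugin.notifier = mock.Mock()
    plugin.update_security_group(mock.sentinel.ctx, 'sg-1', {'name': 'web'})
    plugin.notifier.security_groups_rule_updated.assert_called_once_with(
        mock.sentinel.ctx, ['sg-1'])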

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376527

Title:
  big switch plugin doesn't notify agent on sec group update

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Big Switch plugin is not sending notifications to the security
  group agent after the security groups are updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240317] Re: can't resize after live migrate (block_migrate)

2014-10-01 Thread Rey Aram A. Alcantara
2014-10-01 22:50:22.612 24766 ERROR nova.compute.manager 
[req-c7c8f643-6b3a-4ca2-99c6-be252dc12b87 fba8a3983484473ebb5caf6af441c282 
04d61c2227bc448397fcbf035dd76890] [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] Setting instance vm_state to ERROR
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] Traceback (most recent call last):
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5538, in 
_error_out_instance_on_exception
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] yield
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3117, in 
_confirm_resize
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] rt = 
self._get_resource_tracker(migration.source_node)
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 613, in 
_get_resource_tracker
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] "compute host.") % nodename)
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] NovaException: compute2 is not a valid 
node managed by this compute host.
2014-10-01 22:50:22.612 24766 TRACE nova.compute.manager [instance: 
fbefc220-0ef0-4c4d-a06d-31709e022e25] 
2014-10-01 22:50:22.747 24766 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: compute2 is not a valid node managed by this compute 
host.
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher payload)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 333, in 
decorated_function
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 309, in 
decorated_function
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 296, in 
decorated_function
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3088, in 
confirm_resize
2014-10-01 22:50:22.747 24766 TRACE oslo.messaging.rpc.dispatcher 
do_confirm_resize(contex

[Yahoo-eng-team] [Bug 1376512] [NEW] hang when disk has little free space

2014-10-01 Thread Tomoki Sekiyama
Public bug reported:

When the disk has little free space (e.g. ~600MB), glance operations
(even glance image-delete) hang and never finish.


This issue is seen in a single-node devstack setup.

% glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| 535f59b9-118b-4960-bfbe-17ac9a7f17c5 | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  | active |
| e8e25ec5-853e-4492-9a21-d6d413dc49ae | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   | active |
| ef56e39a-6c59-4503-9662-dbe2e6f9e794 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   | active |
| dcc47833-1fc5-4c9a-8131-4560ca94f2de | f20-qga                         | qcow2       | bare             | 645332992 | active |
| e82ca789-47cb-4dcd-9fec-737c695d7cb7 | Fedora-x86_64-20-20140618-sda   | qcow2       | bare             | 209649664 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+


(Do something like "nova boot ..." so that the disk space goes below
600MB.)


% glance image-delete f20-qga

(=> hangs. Any other operation, such as creating a new image, also
hangs.)


% rm -rf /tmp/...  (free some disk space)

(=> glance recovers from the hang.)


The expected behavior is that the operation fails with an IO error (no
disk space).


(If the filesystem free space is very low, e.g. 50MB, it does fail with
an IO error, as expected.)
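
One way to get the expected fail-fast behaviour is a free-space check
before the store starts writing; a minimal sketch, where the threshold,
path, and placement are assumptions rather than glance's actual
filesystem store code:

    import errno
    import os


    def ensure_free_space(path, required_bytes):
        # Raise an IOError up front instead of letting writes hang.
        st = os.statvfs(path)
        free_bytes = st.f_bavail * st.f_frsize
        if free_bytes < required_bytes:
            raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC), path)


    # e.g. before accepting a ~600MB image into the store directory:
    # ensure_free_space('/opt/stack/data/glance/images', 600 * 1024 * 1024)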

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1376512

Title:
  hang when disk has little free space

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When the disk has little free space (e.g. ~600MB), glance operations
  (even glance image-delete) hang and never finish.


  This issue is seen in a single-node devstack setup.

  % glance image-list
  +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
  | ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
  +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
  | 535f59b9-118b-4960-bfbe-17ac9a7f17c5 | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  | active |
  | e8e25ec5-853e-4492-9a21-d6d413dc49ae | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   | active |
  | ef56e39a-6c59-4503-9662-dbe2e6f9e794 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   | active |
  | dcc47833-1fc5-4c9a-8131-4560ca94f2de | f20-qga                         | qcow2       | bare             | 645332992 | active |
  | e82ca789-47cb-4dcd-9fec-737c695d7cb7 | Fedora-x86_64-20-20140618-sda   | qcow2       | bare             | 209649664 | active |
  +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+

  
  (Do something like "nova boot ..." so that the disk space goes below
  600MB.)


  % glance image-delete f20-qga

  (=> hangs. Any other operation, such as creating a new image, also
  hangs.)


  % rm -rf /tmp/...  (free some disk space)

  (=> glance recovers from the hang.)


  The expected behavior is that the operation fails with an IO error
  (no disk space).


  (If the filesystem free space is very low, e.g. 50MB, it does fail
  with an IO error, as expected.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1376512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376492] [NEW] Minesweeper failure: tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

2014-10-01 Thread Arnaud Legendre
Public bug reported:

Patch I7598afbf0dc3c527471af34224003d28e64daaff introduces a tempest
failure with Minesweeper due to the fact that the destroy operation can
be triggered by both the user and the revert resize operation. In case
of a revert resize operation, we do not want to delete the original VM.

** Affects: nova
 Importance: Undecided
 Assignee: Arnaud Legendre (arnaudleg)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Arnaud Legendre (arnaudleg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376492

Title:
  Minesweeper failure:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Patch I7598afbf0dc3c527471af34224003d28e64daaff introduces a tempest
  failure with Minesweeper because the destroy operation can be
  triggered both by the user and by the revert resize operation. In the
  case of a revert resize, we do not want to delete the original VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376485] [NEW] Solidify environment_version handling in run_tests.sh

2014-10-01 Thread Doug Fish
Public bug reported:

We've had a number of review problems/escaped bugs related to the
handling of the environment_version in run_tests.sh.

The problem is that its an integer and it is supposed to increase
sequentially.  If there are multiple environment related patches out for
review in parallel only the first one will update the
environment_version.  The others will have their environment_version
successfully merge unless the developer or a reviewer noticed.

I plan to fix this by making the environment_version a string.  It takes
only a trivial change to the script to make this happen.  Then the
environment version can be the gerrit change number, possibly suffixed
with a "/2" or "/3" if there are multiple review patches.  This way the
environment_versions will not secretly resolve and patches can reliably
modify the environment_version.
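
A minimal sketch of the check this implies, written in Python for
illustration (run_tests.sh itself is shell; the stamp filename and the
change number here are assumptions): rebuild the environment whenever
the stored tag differs from the declared one, using string equality
rather than integer ordering.

    import os

    ENVIRONMENT_VERSION = "123456/2"  # e.g. gerrit change number/patchset
    STAMP_FILE = ".environment_version"


    def environment_is_current():
        # Any unique string works; only equality matters.
        if not os.path.exists(STAMP_FILE):
            return False
        with open(STAMP_FILE) as f:
            return f.read().strip() == ENVIRONMENT_VERSION


    def mark_environment_current():
        with open(STAMP_FILE, "w") as f:
            f.write(ENVIRONMENT_VERSION)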

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

** Description changed:

  We've had a number of review problems/escaped bugs related to the
  handling of the environment_version in run_tests.sh.
  
  The problem is that its an integer and it is supposed to increase
  sequentially.  If there are multiple environment related patches out for
  review in parallel only the first one will update the
  environment_version.  The others will have their environment_version
  successfully merge unless the developer or a reviewer noticed.
  
  I plan to fix this by making the environment_version a string.  It takes
  only a trivial change to the script to make this happen.  Then the
  environment version can be the gerrit change number, possibly suffixed
- with a "/1" or "/2" if there are multiple review patches.  This way the
+ with a "/2" or "/3" if there are multiple review patches.  This way the
  environment_versions will not secretly resolve and patches can reliably
  modify the environment_version.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376485

Title:
  Solidify environment_version handling in run_tests.sh

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We've had a number of review problems/escaped bugs related to the
  handling of the environment_version in run_tests.sh.

  The problem is that it's an integer that is supposed to increase
  sequentially.  If there are multiple environment-related patches out
  for review in parallel, only the first one will update the
  environment_version; the others will have their (now stale)
  environment_version merge successfully unless the developer or a
  reviewer notices.

  I plan to fix this by making the environment_version a string.  It
  takes only a trivial change to the script to make this happen.  Then
  the environment version can be the gerrit change number, possibly
  suffixed with a "/2" or "/3" if there are multiple review patches.
  This way conflicting environment_versions will no longer resolve
  silently, and patches can reliably modify the environment_version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374931] Re: Zero-byte image snapshot leads to error launching an instance from snapshot

2014-10-01 Thread Gary W. Smith
Per the discussion in gerrit, the zero-byte snapshot is normal from
nova/glance, as it reflects that fact that no space is being taken in
glance, since the volume snapshots are stored in cinder.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374931

Title:
  Zero-byte image snapshot leads to error launching an instance from
  snapshot

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When I try to launch an instance from a snapshot image (which was a
  snapshot of a volume-backed instance), the 'launch' button does not
  respond.

  How to reproduce:
  1. Launch an instance by using "boot from image (create new volume)".
  2. Take a snapshot of this instance; this operation produces an image
  whose size is 0 bytes.
  3. Launch an instance by using the image generated in step 2; the
  'launch' button does not respond.

  If I open the console of the web browser, I can see the error message
  "An invalid form control with name='volume_size' is not focusable."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1374931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376458] [NEW] filter in instances does not filter on Active or active

2014-10-01 Thread Gloria Gu
Public bug reported:

How to reproduce:

In Project -> Compute -> Instances:

1. Create a couple of instances.
2. Once they are all Active, check one instance and click Soft Reboot
Instances; that instance should now be in Reboot status.
3. Use the filter for status= and enter Active or active; it does not
filter out the instance that is in Reboot status.

Note: however, if you filter on status = Reboot or reboot, it does
filter the instances correctly.

The same filter issue exists in the Admin Instances panel.
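
For illustration, a minimal sketch of the case-insensitive comparison
the filter presumably needs (this is not Horizon's actual filter code):

    def filter_by_status(instances, status_query):
        # Compare case-insensitively so 'Active', 'active' and
        # 'ACTIVE' all match the same instances.
        wanted = status_query.strip().lower()
        return [i for i in instances if i['status'].lower() == wanted]


    instances = [{'name': 'a', 'status': 'ACTIVE'},
                 {'name': 'b', 'status': 'REBOOT'}]
    assert [i['name'] for i in filter_by_status(instances, 'Active')] == ['a']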

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376458

Title:
  filter in instances does not filter on Active or active

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:

  In Project -> Compute -> Instances:

  1. Create a couple of instances.
  2. Once they are all Active, check one instance and click Soft Reboot
  Instances; that instance should now be in Reboot status.
  3. Use the filter for status= and enter Active or active; it does not
  filter out the instance that is in Reboot status.

  Note: however, if you filter on status = Reboot or reboot, it does
  filter the instances correctly.

  The same filter issue exists in the Admin Instances panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376446] [NEW] Neutron LBAAS agent ignores MTU settings used by subnets

2014-10-01 Thread Diego Lima
Public bug reported:

The network interfaces in qlbaas namespaces always use 1500 as their
default MTU. This can cause trouble when communicating with instances
that have a different MTU set (usually through DHCP options in
/etc/neutron/dnsmasq-neutron.conf). Mismatched MTU values can break
communication between instances and load balancers and lead to odd
behaviours, such as some web pages timing out waiting for a reply while
others work.

Manually setting the interface in the lbaas namespace to the same MTU
as the instance corrects the communication problems between balancer
and instance.

Neutron should take the MTU from the subnet's options, or at least
provide a way to set the load balancer's MTU manually.

For reference, here is output from a lbaas namespace showing MTU 1500:

# ip netns exec qlbaas-a38fc742-9c0b-4503-a0b9-64d5a6adfe77 ip a
(...)
234: tape853dd58-66:  mtu 1500 qdisc noqueue state 
UNKNOWN group default 
link/ether fa:16:3e:a2:e0:c7 brd ff:ff:ff:ff:ff:ff
inet 10.134.54.151/22 brd 10.134.55.255 scope global tape853dd58-66
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fea2:e0c7/64 scope link 
   valid_lft forever preferred_lft forever
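
Until then, the MTU can be aligned by hand; a minimal sketch (run as
root, with the namespace and device names taken from the output above,
and the MTU value only an example that must match the instances'
setting):

    import subprocess

    NAMESPACE = 'qlbaas-a38fc742-9c0b-4503-a0b9-64d5a6adfe77'
    DEVICE = 'tape853dd58-66'
    MTU = 1454  # example value; must match the instances' MTU

    # Equivalent to: ip netns exec <ns> ip link set <dev> mtu <mtu>
    subprocess.check_call(['ip', 'netns', 'exec', NAMESPACE,
                           'ip', 'link', 'set', DEVICE, 'mtu', str(MTU)])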

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376446

Title:
  Neutron LBAAS agent ignores MTU settings used by subnets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The network interfaces in qlbaas namespaces always use 1500 as their
  default MTU. This can cause trouble when communicating with instances
  that have a different MTU set (usually through DHCP options in
  /etc/neutron/dnsmasq-neutron.conf). Mismatched MTU values can break
  communication between instances and load balancers and lead to odd
  behaviours, such as some web pages timing out waiting for a reply
  while others work.

  Manually setting the interface in the lbaas namespace to the same MTU
  as the instance corrects the communication problems between balancer
  and instance.

  Neutron should take the MTU from the subnet's options, or at least
  provide a way to set the load balancer's MTU manually.

  For reference, here is output from a lbaas namespace showing MTU
  1500:

  # ip netns exec qlbaas-a38fc742-9c0b-4503-a0b9-64d5a6adfe77 ip a
  (...)
  234: tape853dd58-66:  mtu 1500 qdisc noqueue state 
UNKNOWN group default 
  link/ether fa:16:3e:a2:e0:c7 brd ff:ff:ff:ff:ff:ff
  inet 10.134.54.151/22 brd 10.134.55.255 scope global tape853dd58-66
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fea2:e0c7/64 scope link 
 valid_lft forever preferred_lft forever

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283204] Re: run_test.sh local_lock_path issue

2014-10-01 Thread Henry Gessau
I don't think this is seen by anyone any more. Marked as invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283204

Title:
  run_test.sh local_lock_path issue

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  run_test.sh no longer works and fails with :

  Traceback (most recent call last):
File "neutron/api/v2/resource.py", line 84, in resource
  result = method(request=request, **args)
File "neutron/api/v2/base.py", line 411, in create
  obj = obj_creator(request.context, **kwargs)
File "neutron/plugins/bigswitch/plugin.py", line 729, in create_network
  self._send_create_network(new_net, context)
File "neutron/plugins/bigswitch/plugin.py", line 567, in 
_send_create_network
  self.servers.rest_create_network(tenant_id, mapped_network)
File "neutron/plugins/bigswitch/plugin.py", line 385, in rest_create_network
  self.rest_action('POST', resource, data, errstr)
File "neutron/plugins/bigswitch/plugin.py", line 341, in rest_action
  resp = self.rest_call(action, resource, data, headers, ignore_codes)
File "neutron/openstack/common/lockutils.py", line 246, in inner
  with lock(name, lock_file_prefix, external, lock_path):
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "neutron/openstack/common/lockutils.py", line 186, in lock
  fileutils.ensure_tree(local_lock_path)
File "neutron/openstack/common/fileutils.py", line 37, in ensure_tree
  os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
  makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
  mkdir(name, mode)
  OSError: [Errno 13] Permission denied: '/var/lib/neutron'
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/test_db_plugin.py", line 2021, in 
test_list_networks_with_sort_emulated
  name='net3')
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "/usr/lib/python2.7/contextlib.py", line 112, in nested
  vars.append(enter())
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "neutron/tests/unit/test_db_plugin.py", line 523, in network
  admin_state_up, **kwargs)
File "neutron/tests/unit/test_db_plugin.py", line 403, in _make_network
  raise webob.exc.HTTPClientError(code=res.status_int)
  HTTPClientError: The server could not comply with the request since it is 
either malformed or otherwise incorrect.

  
  ==
  FAIL: process-returncode

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1283204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376424] [NEW] Horizon uses deprecated cinder API

2014-10-01 Thread Gary W. Smith
Public bug reported:

Per https://review.openstack.org/#/c/73856/, the V1 cinder API has been
deprecated in Juno and will be removed in Kilo.  The default version in
openstack_dashboard.api.cinder should be set to 2.
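
For reference, a minimal sketch of selecting the v2 client with
python-cinderclient; the credentials and endpoint are placeholders, and
how this gets wired into openstack_dashboard.api.cinder is not shown
here:

    from cinderclient import client as cinder_client

    VERSION = '2'  # was '1'; v1 is deprecated in Juno

    cinder = cinder_client.Client(VERSION,
                                  'demo',                       # username
                                  'secret',                     # password
                                  'demo',                       # tenant
                                  'http://keystone:5000/v2.0')  # auth URL
    volumes = cinder.volumes.list()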

** Affects: horizon
 Importance: Medium
 Assignee: Gary W. Smith (gary-w-smith)
 Status: New


** Tags: volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376424

Title:
  Horizon uses deprecated cinder API

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Per https://review.openstack.org/#/c/73856/, the V1 cinder API has
  been deprecated in Juno and will be removed in Kilo.  The default
  version in openstack_dashboard.api.cinder should be set to 2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363332] Re: Database migration downgrade to havana fails

2014-10-01 Thread Henry Gessau
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363332

Title:
  Database migration downgrade to havana fails

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The first migration script after havana (e197124d4b9) has a bad
  downgrade.

  INFO  [alembic.migration] Running downgrade e197124d4b9 -> havana, add unique 
constraint to members
  Traceback (most recent call last):
File "/home/henry/Dev/neutron/.tox/py27/bin/neutron-db-manage", line 10, in 

  sys.exit(main())
File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 175, in 
main
  CONF.command.func(config, CONF.command.name)
File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 85, in 
do_upgrade_downgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File "/home/henry/Dev/neutron/neutron/db/migration/cli.py", line 63, in 
do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py",
 line 151, in downgrade
  script.run_env()
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/script.py",
 line 203, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util.py",
 line 215, in load_python_file
  module = load_module_py(module_id, path)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/compat.py",
 line 58, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/env.py", line 
120, in 
  run_migrations_online()
File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/env.py", line 
108, in run_migrations_online
  options=build_options())
File "", line 7, in run_migrations
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/environment.py",
 line 689, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/migration.py",
 line 263, in run_migrations
  change(**kw)
File 
"/home/henry/Dev/neutron/neutron/db/migration/alembic_migrations/versions/e197124d4b9_add_unique_constrain.py",
 line 62, in downgrade
  type_='unique'
File "", line 7, in drop_constraint
File "", line 1, in 
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/util.py",
 line 332, in go
  return fn(*arg, **kw)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/operations.py",
 line 841, in drop_constraint
  self.impl.drop_constraint(const)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 138, in drop_constraint
  self._exec(schema.DropConstraint(const))
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 76, in _exec
  conn.execute(construct, *multiparams, **params)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 729, in execute
  return meth(self, multiparams, params)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 69, in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 783, in _execute_ddl
  compiled
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 958, in _execute_context
  context)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1156, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 199, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 951, in _execute_context
  context)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 436, in do_execute
  cursor.execute(statement, parameters)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages/MySQLdb/cursors.py",
 line 205, in execute
  self.errorhandler(self, exc, value)
File 
"/home/henry/Dev/neutron/.tox/py27/local/lib/python2.7/site-packages

[Yahoo-eng-team] [Bug 1255511] Re: Timeout in test_rescued_vm_add_remove_security_group

2014-10-01 Thread Mauro Sergio Martins Rodrigues
Considering lianhao-lu's logs, I dove into the n-cpu logs and this
caught my attention:

2014-07-31 07:52:42.023 25463 WARNING nova.compute.manager [-]
[instance: 1300d7c9-d35d-46ef-a680-bf295cfdef01] Instance shutdown by
itself. Calling the stop API. (at
http://logs.openstack.org/89/108389/2/check/check-tempest-dsvm-
full/dbfc949/logs/screen-n-cpu.txt.gz#_2014-07-31_07_52_42_023)


Also notice similar behavior here:
http://logs.openstack.org/35/123935/1/check/check-tempest-dsvm-postgres-full/6a870ac/console.html.gz

The failure happened at 2014-09-25 06:45:21.021, after running for
222.964523s (3 minutes and 43 seconds), and the instance shut itself
down at 2014-09-25_06_42_09_579 (link:
http://logs.openstack.org/35/123935/1/check/check-tempest-dsvm-postgres-full/6a870ac/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-09-25_06_42_09_579).
Considering that part of the test ran fine and consumed part of those
3:43 minutes, that sounds like a good lead on what is wrong. I'm adding
nova to the discussion.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255511

Title:
  Timeout in test_rescued_vm_add_remove_security_group

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Confirmed

Bug description:
  2013-11-27 11:08:17.740 | Traceback (most recent call last):
  2013-11-27 11:08:17.741 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 229, in 
test_rescued_vm_add_remove_security_group
  2013-11-27 11:08:17.741 | 
self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE')
  2013-11-27 11:08:17.741 |   File 
"tempest/services/compute/xml/servers_client.py", line 365, in 
wait_for_server_status
  2013-11-27 11:08:17.741 | extra_timeout=extra_timeout)
  2013-11-27 11:08:17.741 |   File "tempest/common/waiters.py", line 82, in 
wait_for_server_status
  2013-11-27 11:08:17.742 | raise exceptions.TimeoutException(message)
  2013-11-27 11:08:17.742 | TimeoutException: Request timed out
  2013-11-27 11:08:17.742 | Details: Server 
2f7a6567-3e5d-44e0-a616-980b4a32eee3 failed to reach ACTIVE status within the 
required time (196 s). Current status: SHUTOFF.

  http://logs.openstack.org/64/56664/13/gate/gate-tempest-devstack-vm-
  full/317991f/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373872] Re: OpenContrail neutron plugin doesn't support portbinding.vnic_type

2014-10-01 Thread Numan Siddique
** This bug is no longer a duplicate of bug 1370077
   Set default vnic_type in neutron.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373872

Title:
  OpenContrail neutron plugin doesn't support portbinding.vnic_type

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The OpenContrail neutron plugin does not support
  portbindings.vnic_type during port creation, while Nova expects
  portbindings.vnic_type to be set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376392] Re: glance using localhost instead of OS_AUTH_URL

2014-10-01 Thread Louis Taylor
This is actually a glance client bug.

** Changed in: glance
   Status: New => Invalid

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1376392

Title:
  glance using localhost instead of OS_AUTH_URL

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Python client library for Glance:
  New

Bug description:
  On OSX and Ubuntu 12.04, using version 0.14.1, when I specify
  OS_AUTH_URL it is not used; instead glance tries to connect to
  localhost, which doesn't work.

  Also, -v and -d don't seem to do anything.

  env settings:
  OS_AUTH_URL=http://my-api-endpoint:5000/v2.0
  OS_TENANT_ID=redacted
  OS_TENANT_NAME=redacted
  OS_USERNAME=jaybocc2
  OS_PASSWORD=redacted

  [1] → # glance image-create --progress --human-readable --name 
win-2008R2-U1.18 --is-public true --container-format bare --disk-format vmdk 
--property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens

  [1] → # glance -v -d --os-auth-url 
http://api-oyla1.client.metacloud.net:5000/v2.0 image-create --progress 
--human-readable --name win-2008R2-U1.18 --is-public true --container-format 
bare --disk-format vmdk --property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens

  [1] → # glance --version
  0.14.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1376392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357677] Re: Instances fail to boot from volume

2014-10-01 Thread Attila Fazekas
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances fail to boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the preceding
  boot message.

  These issues look like an ssh connectivity issue, but the instance
  has not booted, and it happens regardless of the network type.

  message: "Freeing unused kernel memory" AND message: "Initializing
  cgroup subsys cpuset" AND NOT message: "initramfs loading root from"
  AND tags:"console"

  49 incidents/week.

  Example console log:
  http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the 3rd server.
  WARNING: The console.log contains the serial console output of two
  instances; try not to mix them up when reading.

  The failure point in the test code was here:
  https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376392] [NEW] glance using localhost instead of OS_AUTH_URL

2014-10-01 Thread jay bendon
Public bug reported:

On OSX and Ubuntu 12.04, using version 0.14.1, when I specify
OS_AUTH_URL it is not used; instead glance tries to connect to
localhost, which doesn't work.

Also, -v and -d don't seem to do anything.

env settings:
OS_AUTH_URL=http://my-api-endpoint:5000/v2.0
OS_TENANT_ID=redacted
OS_TENANT_NAME=redacted
OS_USERNAME=jaybocc2
OS_PASSWORD=redacted

[1] → # glance image-create --progress --human-readable --name win-2008R2-U1.18 
--is-public true --container-format bare --disk-format vmdk --property 
vmware_disktype="sparse" --file ./windows_2008_r2_U1.18_openstack.vmdk
Unable to establish connection to http://localhost:5000/v2.0/tokens

[1] → # glance -v -d --os-auth-url 
http://api-oyla1.client.metacloud.net:5000/v2.0 image-create --progress 
--human-readable --name win-2008R2-U1.18 --is-public true --container-format 
bare --disk-format vmdk --property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
Unable to establish connection to http://localhost:5000/v2.0/tokens

[1] → # glance --version
0.14.1
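
For illustration, the precedence one would expect, sketched with
argparse (this is not glanceclient's actual argument handling):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument('--os-auth-url',
                        default=os.environ.get('OS_AUTH_URL'))
    args = parser.parse_args()

    # Fall back to localhost only when neither the flag nor the
    # environment supplies an endpoint; the report suggests 0.14.1
    # hits this fallback unconditionally.
    auth_url = args.os_auth_url or 'http://localhost:5000/v2.0'
    print(auth_url)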

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  On OSX and Ubuntu 12.04 using version 0.14.1 when i specify OS_AUTH_URL
  it does not get used and instead glance is trying to resolve to
  localhost... this doesn't work.
  
  Also -v and -d don't seem to do anything.
  
  env settings:
- OS_AUTH_URL=http://api-oyla1.client.metacloud.net:5000/v2.0
+ OS_AUTH_URL=http://my-api-endpoint:5000/v2.0
  OS_TENANT_ID=redacted
  OS_TENANT_NAME=redacted
  OS_USERNAME=jaybocc2
  OS_PASSWORD=redacted
  
- [1] → # glance image-create --progress --human-readable --name 
win-2008R2-U1.18 --is-public true --container-format bare --disk-format vmdk 
--property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk  

+ [1] → # glance image-create --progress --human-readable --name 
win-2008R2-U1.18 --is-public true --container-format bare --disk-format vmdk 
--property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens
  
  [1] → # glance -v -d --os-auth-url 
http://api-oyla1.client.metacloud.net:5000/v2.0 image-create --progress 
--human-readable --name win-2008R2-U1.18 --is-public true --container-format 
bare --disk-format vmdk --property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens
  
- [1] → # glance --version  



 
+ [1] → # glance --version
  0.14.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1376392

Title:
  glance using localhost instead of OS_AUTH_URL

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  On OSX and Ubuntu 12.04, using version 0.14.1, when I specify
  OS_AUTH_URL it is not used; instead glance tries to connect to
  localhost, which doesn't work.

  Also, -v and -d don't seem to do anything.

  env settings:
  OS_AUTH_URL=http://my-api-endpoint:5000/v2.0
  OS_TENANT_ID=redacted
  OS_TENANT_NAME=redacted
  OS_USERNAME=jaybocc2
  OS_PASSWORD=redacted

  [1] → # glance image-create --progress --human-readable --name 
win-2008R2-U1.18 --is-public true --container-format bare --disk-format vmdk 
--property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens

  [1] → # glance -v -d --os-auth-url 
http://api-oyla1.client.metacloud.net:5000/v2.0 image-create --progress 
--human-readable --name win-2008R2-U1.18 --is-public true --container-format 
bare --disk-format vmdk --property vmware_disktype="sparse" --file 
./windows_2008_r2_U1.18_openstack.vmdk
  Unable to establish connection to http://localhost:5000/v2.0/tokens

  [1] → # glance --version
  0.14.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1376392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376384] [NEW] instance stuck in hard_reboot

2014-10-01 Thread Joe Gordon
Public bug reported:

I have an instance (on Rackspace public cloud) that was wedged, so I
tried a reboot and then a hard reboot.  I later found out the machine
my instance was on had some issues and was subsequently rebooted. Now,
as a regular user, I have an instance in

Status:   HARD_REBOOT
Task State: rebooting_hard
Power State: Running


And I cannot try another hard reboot due to
https://review.openstack.org/#/c/28603/, and I cannot reset the state
since I am not an admin user. How do I fix my server without requiring
customer service to do it for me?

I think a hard reboot should always be allowed no matter what, so I have
a way of resetting my server if it gets in any sort of bad state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376384

Title:
  instance stuck in hard_reboot

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have an instance (on Rackspace public cloud) that was wedged, so I
  tried a reboot and then a hard reboot.  I later found out the machine
  my instance was on had some issues and was subsequently rebooted.
  Now, as a regular user, I have an instance in

  Status:   HARD_REBOOT
  Task State: rebooting_hard
  Power State: Running


  And I cannot try another hard reboot due to
  https://review.openstack.org/#/c/28603/, and I cannot reset the state
  since I am not an admin user. How do I fix my server without
  requiring customer service to do it for me?

  I think a hard reboot should always be allowed no matter what, so I
  have a way of resetting my server if it gets in any sort of bad state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374497] Re: change in oslo.db "ping" handling is causing issues in projects that are not using transactions

2014-10-01 Thread Doug Hellmann
** Changed in: oslo.db/juno
   Status: Triaged => Fix Committed

** Changed in: oslo.db/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1374497

Title:
  change in oslo.db "ping" handling is causing issues in projects that
  are not using transactions

Status in OpenStack Identity (Keystone):
  Triaged
Status in Oslo Database library:
  Fix Committed
Status in oslo.db juno series:
  Fix Released

Bug description:
  in https://review.openstack.org/#/c/106491/, the "ping" listener which
  emits "SELECT 1" at connection start was moved from being a connection
  pool "checkout" handler to a "transaction on begin" handler.
  Apparently Keystone and possibly others are using the Session in
  "autocommit" mode, despite that this is explicitly warned against in
  SQLAlchemy's docs (see
  http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#autocommit-
  mode), and for these projects they are seeing failed connections not
  transparently recovered (see
  https://bugs.launchpad.net/keystone/+bug/1361378).

  Alternatives include:

  1. move the ping listener back to being a "checkout" handler

  2. fix downstream projects to not use the session in autocommit mode

  In all likelihood, the fix here should involve both.   I have a longer
  term plan to fix EngineFacade once and for all so that the correct use
  patterns are explicit, but that still has to be blueprinted.
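
  For illustration, the old checkout-time behaviour can be expressed
  directly with SQLAlchemy's event API; this is only a sketch of the
  concept (the engine URL is a placeholder), not the oslo.db code:

      import sqlalchemy
      from sqlalchemy import event, exc

      engine = sqlalchemy.create_engine('mysql://user:secret@localhost/db')

      @event.listens_for(engine, 'checkout')
      def ping_connection(dbapi_conn, conn_record, conn_proxy):
          # Emit "SELECT 1" whenever a connection leaves the pool, so a
          # dead connection is caught even if no transaction is begun.
          cursor = dbapi_conn.cursor()
          try:
              cursor.execute('SELECT 1')
          except Exception:
              # Make SQLAlchemy drop this connection and check out another.
              raise exc.DisconnectionError()
          finally:
              cursor.close()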

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1374497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376363] [NEW] test_live_migration_raises_unsupported_config_exception can fail if VIR_ERR_CONFIG_UNSUPPORTED is not in fakelibvirt

2014-10-01 Thread Matt Riedemann
Public bug reported:

This change introduces a libvirt driver unit test that uses the
VIR_ERR_CONFIG_UNSUPPORTED error code:

https://review.openstack.org/#/c/123811/

If you don't have libvirt installed the unit test will fail:

Traceback (most recent call last):
  File 
"/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/mock.py", 
line 1201, in patched
return func(*args, **keywargs)
  File "nova/tests/virt/libvirt/test_driver.py", line 5722, in 
test_live_migration_raises_unsupported_config_exception
unsupported_config_error.err = (libvirt.VIR_ERR_CONFIG_UNSUPPORTED,)
AttributeError: 'module' object has no attribute 
'VIR_ERR_CONFIG_UNSUPPORTED'

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: libvirt testing

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376363

Title:
  test_live_migration_raises_unsupported_config_exception can fail if
  VIR_ERR_CONFIG_UNSUPPORTED is not in fakelibvirt

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This change introduces a libvirt driver unit test that uses the
  VIR_ERR_CONFIG_UNSUPPORTED error code:

  https://review.openstack.org/#/c/123811/

  If you don't have libvirt installed the unit test will fail:

  Traceback (most recent call last):
File 
"/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/mock.py", 
line 1201, in patched
  return func(*args, **keywargs)
File "nova/tests/virt/libvirt/test_driver.py", line 5722, in 
test_live_migration_raises_unsupported_config_exception
  unsupported_config_error.err = (libvirt.VIR_ERR_CONFIG_UNSUPPORTED,)
  AttributeError: 'module' object has no attribute 
'VIR_ERR_CONFIG_UNSUPPORTED'
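
  The obvious remedy is to make the fake libvirt module used by the tests
  define the constant as well, or to have the test tolerate its absence; a
  sketch (the numeric fallback value is an assumption, not taken from the
  libvirt headers):

      # In the fake libvirt module used by the unit tests:
      VIR_ERR_CONFIG_UNSUPPORTED = 24   # value is an assumption

      # Or, defensively in the test itself, works with real or fake bindings:
      # code = getattr(libvirt, 'VIR_ERR_CONFIG_UNSUPPORTED', 24)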

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376347] [NEW] styling: dropdowns in dialogs are rendered wrong in FireFox, IE

2014-10-01 Thread Doug Fish
Public bug reported:

This happens, for example, in 
Admin->System->Networks->Create Network 
Chrome looks good, but Firefox ESR 31 cuts off the right side of the drop-down 
lists.  IE 11 renders these select boxes needlessly wide.

Other dialogs seem to be affected in the same way.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "FF-cutoff.png"
   
https://bugs.launchpad.net/bugs/1376347/+attachment/4221637/+files/FF-cutoff.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376347

Title:
  styling:  dropdowns in dialogs are rendered wrong in FireFox, IE

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This happens, for example, in 
  Admin->System->Networks->Create Network 
  Chrome looks good, but Firefox ESR 31 cuts off the right side of the 
drop-down lists.  IE 11 renders these select boxes needlessly wide.

  Other dialogs seem to be affected in the same way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374931] Re: Zero-byte image snapshot leads to error launching an instance from snapshot

2014-10-01 Thread Gary W. Smith
** Summary changed:

- Javascript error prevents launching an instance from volume snapshot
+ Zero-byte image snapshot leads to error launching an instance from snapshot

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374931

Title:
  Zero-byte image snapshot leads to error launching an instance from
  snapshot

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  When I try to launch an instance from a snapshot image (which was a
  snapshot of a volume-backed instance), the 'launch' button does not
  respond.

  How to reproduce:
  1. Launch an instance by using "boot from image(create new volume)"
  2. Take a snapshot of this instance; this operation produces an image 
whose size is 0 bytes.
  3. Launch an instance by using the image generated in step 2; the 
'launch' button does not respond.

  If I open the console of web browser, I can see the Error Message
   "An invalid form control with name='volume_size' is not focusable. "

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1374931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376325] [NEW] Cannot enable DVR and IPv6 simultaneously

2014-10-01 Thread Brian Haley
Public bug reported:

While testing out the devstack change to support IPv6,
https://review.openstack.org/#/c/87987/ I tripped over a DVR error, since
I have it enabled by default in local.conf.

I have these two things enabled in local.conf:

Q_DVR_MODE=dvr_snat
IP_VERSION=4+6

After locally fixing lib/neutron to teach it about the DVR snat-
namespace (another bug to be filed for that), stack.sh was able to
complete, but the l3-agent wasn't very happy:

Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:81
2014-09-30 12:53:47.511 21778 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 'from', 
'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 'priority', 
'336294682933583715844663186250927177729'] create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:46
2014-09-30 12:53:47.641 21778 ERROR neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'arping', '-A', '-I', 
'qr-3d0eda6e-54', '-c', '3', 'fd00::1']
Exit code: 2
Stdout: ''
Stderr: 'arping: unknown host fd00::1\n'
2014-09-30 12:53:47.643 21778 ERROR neutron.agent.l3_agent [-] Failed sending 
gratuitous ARP:
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'arping', '-A', '-I', 
'qr-3d0eda6e-54', '-c', '3', 'fd00::1']
Exit code: 2
Stdout: ''
Stderr: 'arping: unknown host fd00::1\n'
2014-09-30 12:53:48.682 21778 ERROR neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 'from', 
'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 'priority', 
'336294682933583715844663186250927177729']
Exit code: 255
Stdout: ''
Stderr: 'Error: argument "336294682933583715844663186250927177729" is wrong: 
preference value is invalid\n\n'
2014-09-30 12:53:48.683 21778 ERROR neutron.agent.l3_agent [-] DVR: error 
adding redirection logic
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1443, in _snat_redirect_add
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
ns_ipr.add_rule_from(sn_port['ip_cidr'], snat_idx, snat_idx)
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 202, in add_rule_from
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent ip = 
self._as_root('', 'rule', tuple(args))
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 74, in _as_root
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
log_fail_as_error=self.log_fail_as_error)
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 86, in _execute
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent 
log_fail_as_error=log_fail_as_error)
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 84, in execute
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent RuntimeError:
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-95b180a4-9623-4ef9-adda-772ca0838253', 'ip', 'rule', 'add', 
'from', 'fd00::1/64', 'lookup', '336294682933583715844663186250927177729', 
'priority', '336294682933583715844663186250927177729']
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Exit code: 255
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Stdout: ''
2014-09-30 12:53:48.683 21778 TRACE neutron.agent.l3_agent Stderr: 'Error: 
argument "336294682933583715844663186250927177729" is wrong: preference value 
is invalid\n\n'

Ignore the ARP failures, there's already an upstream patch proposed for
that.

The fix for now might just be to ignore IPv6 addresses in the SNAT code;
we can look at optimizations later, but we need to get this working so we
can enable both at the same time.

There are other errors that this then triggers, so devstack isn't very
usable until you turn one off.
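
The failing number is simply the gateway address as an integer: the DVR
code derives the rule preference from the subnet's IP, and a 128-bit IPv6
address cannot fit the 32-bit preference field that 'ip rule' accepts. A
quick illustration in plain Python (using netaddr, as neutron does; this
is only a demonstration of the overflow, not the neutron code):

    import netaddr

    snat_idx = int(netaddr.IPAddress('fd00::1'))
    print(snat_idx)            # 336294682933583715844663186250927177729
    print(snat_idx < 2 ** 32)  # False -> "preference value is invalid"

    # An IPv4 gateway stays within the range ip(8) accepts:
    print(int(netaddr.IPAddress('10.0.0.1')) < 2 ** 32)  # True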

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notificatio

[Yahoo-eng-team] [Bug 1370292] Re: Possible SQL Injection vulnerability in hyperv plugin

2014-10-01 Thread Jeremy Stanley
Switched the bug to public and marked the security advisory task wontfix
based on the above explanation.

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370292

Title:
  Possible SQL Injection vulnerability in hyperv plugin

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  On this line:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/agent/utilsv2.py#L190
  a raw SQL query is being made with the parameters 'class_name' and
  'element_name'.  Class name appears to be a hardcoded value in the
  usage that I saw, but element_name looks like it is set from
  "switch_port_name".  If a malicious user is able to tamper with the
  switch port name, a SQL injection vulnerability exists.

  At least this is an unsafe programming practice.  A library such as
  sqlalchemy should be used, or at least prepared statements.

  If there is no way for a user to tamper with these parameters, this
  can be fixed in public and treated as security hardening rather than a
  vulnerability.
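
  As a sketch of that hardening (the table and column names here are
  illustrative, not the plugin's actual schema), the user-controlled value
  should travel as a bound parameter rather than be spliced into the query
  text:

      from sqlalchemy import create_engine, text

      element_name = "evil' OR '1'='1"   # attacker-controlled input

      engine = create_engine('sqlite://')
      with engine.connect() as conn:
          conn.execute(text('CREATE TABLE ports (element_name TEXT)'))
          # Bound parameter: the driver quotes the value, so the injection
          # attempt above is matched literally, never executed as SQL.
          rows = conn.execute(
              text('SELECT * FROM ports WHERE element_name = :name'),
              name=element_name).fetchall()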

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376307] [NEW] nova compute is crashing with the error TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'

2014-10-01 Thread Numan Siddique
Public bug reported:

nova-compute crashes with the error below when it is started


2014-10-01 14:50:26.854 ^[[00;32mDEBUG nova.virt.libvirt.driver 
[^[[00;36m-^[[00;32m] ^[[01;35m^[[00;32mUpdating host stats^[[00m ^[[00;33mfrom 
(pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361^[[00m
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 449, 
in fire_timers
timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, in 
_do_send
waiter.switch(result)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
207, in main
result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in 
run_service
service.start()
  File "/opt/stack/nova/nova/service.py", line 181, in start
self.manager.pre_start_hook()
  File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
self.update_available_resource(nova.context.get_admin_context())
  File "/opt/stack/nova/nova/compute/manager.py", line 5946, in 
update_available_resource
nodenames = set(self.driver.get_available_nodes())
  File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
stats = self.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in 
get_host_stats
return self.host_state.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
self._host_state = HostState(self)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
self.update_status()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in 
update_status
numa_topology = self.driver._get_host_numa_topology()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4828, in 
_get_host_numa_topology
for cell in topology.cells])
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
2014-10-01 14:50:26.989 ^[[01;31mERROR nova.openstack.common.threadgroup 
[^[[00;36m-^[[01;31m] ^[[01;35m^[[01;31munsupported operand type(s) for /: 
'NoneType' and 'int'^[[00m


Seems like the commit 
https://github.com/openstack/nova/commit/6a374f21495c12568e4754800574e6703a0e626f
is the cause.
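
A defensive guard of the following shape would avoid the crash (a sketch
only, not the eventual fix; the real code divides cell memory by a units
constant):

    def cell_memory_mb(cell, kb_per_mb=1024):
        # Some libvirt versions report memory=None for a NUMA cell;
        # dividing None by an int raises the TypeError shown above.
        if cell.memory is None:
            return 0
        return cell.memory / kb_per_mb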

** Affects: nova
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376307

Title:
  nova compute is crashing with the error TypeError: unsupported operand
  type(s) for /: 'NoneType' and 'int'

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  nova-compute crashes with the error below when it is started

  
  2014-10-01 14:50:26.854 ^[[00;32mDEBUG nova.virt.libvirt.driver 
[^[[00;36m-^[[00;32m] ^[[01;35m^[[00;32mUpdating host stats^[[00m ^[[00;33mfrom 
(pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361^[[00m
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
449, in fire_timers
  timer()
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
  cb(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, 
in _do_send
  waiter.switch(result)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
207, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in 
run_service
  service.start()
File "/opt/stack/nova/nova/service.py", line 181, in start
  self.manager.pre_start_hook()
File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/opt/stack/nova/nova/compute/manager.py", line 5946, in 
update_available_resource
  nodenames = set(self.driver.get_available_nodes())
File "/opt/stack/nova/nova/virt/driver.py", line 1237, in 
get_available_nodes
  stats = self.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in 
get_host_stats
  return self.host_state.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
  self._host_state = HostState(self)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
  self.update_status()
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in 
upd

[Yahoo-eng-team] [Bug 1315556] Re: Disabling a domain does not disable the projects in that domain

2014-10-01 Thread Dolph Mathews
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Tags removed: havana-backport-potential icehouse-backport-potential
security

** Changed in: keystone/icehouse
   Status: New => In Progress

** Changed in: keystone/icehouse
   Importance: Undecided => High

** Changed in: keystone/icehouse
 Assignee: (unassigned) => Dolph Mathews (dolph)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1315556

Title:
  Disabling a domain does not disable the projects in that domain

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  In Progress

Bug description:
  A user from an enabled domain can still get a token scoped to a project
  in a disabled domain.

  Steps to reproduce.

  1. create domains "domainA" and "domainB"
  2. create user "userA" and project "projectA" in "domainA"
  3. create user "userB" and project "projectB" in "domainB"
  4. assign "userA" some role for "projectB"
  5. disable "domainB"
  6. authenticate to get a token for "userA" scoped to "projectB". This should 
fail as "projectB"'s domain ("domainB") is disabled.

  Looks like the fix would be to check the project's domain to make
  sure it is also enabled. See

  
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L112
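
  A sketch of that check (simplified, with an illustrative exception
  class; the real controller goes through the assignment API):

      class Unauthorized(Exception):
          pass

      def assert_project_scope_enabled(assignment_api, project_id):
          project = assignment_api.get_project(project_id)
          if not project.get('enabled', True):
              raise Unauthorized('project is disabled')
          # The missing piece: also verify the project's owning domain.
          domain = assignment_api.get_domain(project['domain_id'])
          if not domain.get('enabled', True):
              raise Unauthorized('domain of scoped project is disabled')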

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1315556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319023] Re: TestDashboardBasicOps.test_basic_scenario fails with "HTTP Error 500: INTERNAL SERVER ERROR"

2014-10-01 Thread Mauro Sergio Martins Rodrigues
** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1319023

Title:
  TestDashboardBasicOps.test_basic_scenario  fails with "HTTP Error 500:
  INTERNAL SERVER ERROR"

Status in OpenStack Dashboard (Horizon):
  New
Status in Tempest:
  Invalid

Bug description:
  You can see the full failure at
  http://logs.openstack.org/31/92831/1/check/check-grenade-
  dsvm/ed47c7e/console.html

  excerpted:

  2014-05-13 04:21:42.780 | {2} 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario
 [24.934203s] ... FAILED
  2014-05-13 04:21:42.780 | 
  2014-05-13 04:21:42.780 | Captured traceback:
  2014-05-13 04:21:42.780 | ~~~
  2014-05-13 04:21:42.780 | Traceback (most recent call last):
  2014-05-13 04:21:42.780 |   File "tempest/test.py", line 126, in wrapper
  2014-05-13 04:21:42.780 | return f(self, *func_args, **func_kwargs)
  2014-05-13 04:21:42.780 |   File 
"tempest/scenario/test_dashboard_basic_ops.py", line 75, in test_basic_scenario
  2014-05-13 04:21:42.780 | self.user_login()
  2014-05-13 04:21:42.780 |   File 
"tempest/scenario/test_dashboard_basic_ops.py", line 66, in user_login
  2014-05-13 04:21:42.781 | self.opener.open(req, 
urllib.urlencode(params))
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
406, in open
  2014-05-13 04:21:42.781 | response = meth(req, response)
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
519, in http_response
  2014-05-13 04:21:42.781 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
438, in error
  2014-05-13 04:21:42.781 | result = self._call_chain(*args)
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
378, in _call_chain
  2014-05-13 04:21:42.781 | result = func(*args)
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
625, in http_error_302
  2014-05-13 04:21:42.781 | return self.parent.open(new, 
timeout=req.timeout)
  2014-05-13 04:21:42.781 |   File "/usr/lib/python2.7/urllib2.py", line 
406, in open
  2014-05-13 04:21:42.782 | response = meth(req, response)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
519, in http_response
  2014-05-13 04:21:42.782 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
438, in error
  2014-05-13 04:21:42.782 | result = self._call_chain(*args)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
378, in _call_chain
  2014-05-13 04:21:42.782 | result = func(*args)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
625, in http_error_302
  2014-05-13 04:21:42.782 | return self.parent.open(new, 
timeout=req.timeout)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
406, in open
  2014-05-13 04:21:42.782 | response = meth(req, response)
  2014-05-13 04:21:42.782 |   File "/usr/lib/python2.7/urllib2.py", line 
519, in http_response
  2014-05-13 04:21:42.783 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.783 |   File "/usr/lib/python2.7/urllib2.py", line 
444, in error
  2014-05-13 04:21:42.783 | return self._call_chain(*args)
  2014-05-13 04:21:42.783 |   File "/usr/lib/python2.7/urllib2.py", line 
378, in _call_chain
  2014-05-13 04:21:42.783 | result = func(*args)
  2014-05-13 04:21:42.783 |   File "/usr/lib/python2.7/urllib2.py", line 
527, in http_error_default
  2014-05-13 04:21:42.783 | raise HTTPError(req.get_full_url(), code, 
msg, hdrs, fp)
  2014-05-13 04:21:42.783 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR
  2014-05-13 04:21:42.783 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1319023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368097] Re: UnicodeDecodeError using ldap backend

2014-10-01 Thread Dolph Mathews
*** This bug is a duplicate of bug 1355489 ***
https://bugs.launchpad.net/bugs/1355489

Agree, and the fix has been backported to stable/icehouse and should be
included in 2014.1.3

** This bug has been marked a duplicate of bug 1355489
   authenticate ldap binary fields fail when converting fields to utf8

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1368097

Title:
  UnicodeDecodeError using ldap backend

Status in OpenStack Identity (Keystone):
  Incomplete

Bug description:
  Using the LDAP backend, if any attribute contains accents, the ldap2py
  function in keystone/common/ldap/core.py fails with a
  UnicodeDecodeError.

  
https://github.com/openstack/keystone/blob/1e204483e5feebe489ecca409509ae31bacb0ce2/keystone/common/ldap/core.py#L110-L129

  
  This function was introduced by commit 
cbf805161b84f13f459a19bfd46220c4f298b264 
(https://review.openstack.org/#/c/82398/). That commit encodes and decodes 
the strings to and from UTF-8.
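
  A sketch of a more tolerant conversion (not the keystone code): decode
  only when the bytes are valid UTF-8 and fall back to the raw value
  otherwise, so binary attributes such as photos or GUIDs survive:

      def ldap2py(value):
          if isinstance(value, bytes):
              try:
                  return value.decode('utf-8')
              except UnicodeDecodeError:
                  return value   # binary attribute: leave as raw bytes
          return value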

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1368097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362966] Re: IPv6 two attributes cannot be set to None

2014-10-01 Thread Henry Gessau
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362966

Title:
  IPv6 two attributes cannot be set to None

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Python client library for Neutron:
  New

Bug description:
  The default value of the IPv6 RA and address modes is None (if they are not 
specified when the subnet is created).
  However, we cannot change the two IPv6 modes back to None from other values 
after creating a subnet.
  (ra_mode, address_mode) = (None, None) is a valid combination, but for 
example we cannot change them from (slaac, slaac) to (None, None).

  IMO the two IPv6 modes should accept None in the API to allow users to
  reset the attribute value to None.

  ubuntu@dev02:~/neutron (master)$ neutron subnet-show 4ab34962-b330-4be5-98fe-ac7862f8d511
  +-------------------+---------------------------------------------------+
  | Field             | Value                                             |
  +-------------------+---------------------------------------------------+
  | allocation_pools  | {"start": "fe80:::2", "end": "fe80::ff:::::fffe"} |
  | cidr              | fe80:::/40                                        |
  | dns_nameservers   |                                                   |
  | enable_dhcp       | True                                              |
  | gateway_ip        | fe80:::1                                          |
  | host_routes       |                                                   |
  | id                | 4ab34962-b330-4be5-98fe-ac7862f8d511              |
  | ip_version        | 6                                                 |
  | ipv6_address_mode | slaac                                             |
  | ipv6_ra_mode      | slaac                                             |
  | name              |                                                   |
  | network_id        | 07315dce-0c6c-4c2f-99ec-e8575ffa72af              |
  | tenant_id         | 36c29390faa8408cb9deff8762319740                  |
  +-------------------+---------------------------------------------------+

  ubuntu@dev02:~/neutron (master)$ neutron subnet-update 4ab34962-b330-4be5-98fe-ac7862f8d511 --ipv6_ra_mode action=clear --ipv6_address_mode action=clear
  Invalid input for ipv6_ra_mode. Reason: 'None' is not in ['dhcpv6-stateful', 'dhcpv6-stateless', 'slaac']. (HTTP 400) (Request-ID: req-9431df59-3881-4c85-861e-b25217b8013d)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290486] Re: neutron-openvswitch-agent does not recreate flows after ovsdb-server restarts

2014-10-01 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290486

Title:
  neutron-openvswitch-agent does not recreate flows after ovsdb-server
  restarts

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  The DHCP requests were not being responded to after they were seen on
  the undercloud network interface.  The neutron services were restarted
  in an attempt to ensure they had the newest configuration and knew
  they were supposed to respond to the requests.

  Rather than using the heat stack create (called in
  devtest_overcloud.sh) to test, it was simple to use the following to
  directly boot a baremetal node.

  nova boot --flavor $(nova flavor-list | grep 
"|[[:space:]]*baremetal[[:space:]]*|" | awk '{print $2}) \
--image $(nova image-list | grep 
"|[[:space:]]*overcloud-control[[:space:]]*|" | awk '{print $2}') \
bm-test1

  Whilst the baremetal node was attempting to PXE boot, a restart of the
  neutron services was performed.  This allowed the baremetal node to
  boot.

  It has been observed that a neutron restart was needed for each
  subsequent reboot of the baremetal nodes to succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376247] [NEW] Test "test_get_client_all_creds" fails if OS_XXX variables set

2014-10-01 Thread Stuart McLaren
Public bug reported:

 'test_get_client_all_creds' will fail if any of the typical OpenStack
environment variables have been set (e.g. OS_PASSWORD).

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1376247

Title:
  Test "test_get_client_all_creds" fails if OS_XXX variables set

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
   'test_get_client_all_creds' will fail if any of the typical
  OpenStack environment variables have been set (e.g. OS_PASSWORD).
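
  A conventional way to make the test hermetic is to scrub the OS_*
  variables in setUp; a sketch using the fixtures library (class name is
  illustrative):

      import os

      import fixtures
      import testtools


      class TestGetClient(testtools.TestCase):
          def setUp(self):
              super(TestGetClient, self).setUp()
              # Unset credentials inherited from the developer's shell so
              # the test sees a clean environment.
              for key in [k for k in os.environ if k.startswith('OS_')]:
                  self.useFixture(fixtures.EnvironmentVariable(key, None))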

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1376247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298242] Re: live_migration_uri should be dependant on virt_type

2014-10-01 Thread Alvaro Lopez
Sorry, but I do not think this is Invalid.

The "live_migration_uri" value is dependent on the libvirt type being
used, and therefore, the default values should be dependent on that.

If I as an operator set libivrt_type to whatever I expect that I do not
have to tweak the default values for that connection type (for example
the connection url, the migration url, etc). With the current situation
I will get a broken deployment where I have to figure out that live
migration does not work, just because the default migration_url only
works with one type of libvirt connection type.

** Changed in: nova
   Status: Invalid => New

** Changed in: nova
 Assignee: Alvaro Lopez (aloga) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298242

Title:
  live_migration_uri should be dependant on virt_type

Status in OpenStack Compute (Nova):
  New

Bug description:
  The "live_migration_uri" default should be dependent on the
  "virt_type" flag (this is the same behavior as "connection_uri").

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376227] [NEW] Always use shades of blue for distribution pie charts

2014-10-01 Thread Ana Krivokapić
Public bug reported:

It was suggested that red and orange colors can indicate problems and
therefore should not be used in distribution pie charts. We should
instead always use different shades of blue for this type of pie chart.

** Affects: horizon
 Importance: Undecided
 Assignee: Ana Krivokapić (akrivoka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1376227

Title:
  Always use shades of blue for distribution pie charts

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  It was suggested that red and orange colors can indicate problems and
  therefore should not be used in distribution pie charts. We should
  instead always use different shades of blue for this type of pie
  chart.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1376227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376211] Re: Retry mechanism does not work on startup when used with MySQL

2014-10-01 Thread Salvatore Orlando
This bug affects neutron: after a reboot the service may fail at startup
if it is started before the MySQL service.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376211

Title:
  Retry mechanism does not work on startup when used with MySQL

Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo Database library:
  New

Bug description:
  This is initially revealed as Red Hat bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1144181

  The problem shows up when Neutron or any other oslo.db-based project
  starts while the MySQL server is not up yet. Instead of retrying the
  connection as per max_retries and retry_interval, the service just
  crashes with return code 1.

  This is because during engine initialization, "engine.execute("SHOW
  VARIABLES LIKE 'sql_mode'")" is called, which opens the connection,
  *before* _test_connection() succeeds. So the server just bail out to
  sys.exit() at the top of the stack.

  This behaviour was checked for both oslo.db 0.4.0 and 1.0.1.

  I suspect this is a regression from the original db code from oslo-
  incubator though I haven't checked it specifically.

  The easiest way to reproduce the traceback is:

  1. stop MariaDB.
  2. execute the following Python script:

  '''
  import oslo.db.sqlalchemy.session

  url = 'mysql://neutron:123456@10.35.161.235/neutron'
  engine = oslo.db.sqlalchemy.session.EngineFacade(url)
  '''
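
  For comparison, a minimal retry wrapper around engine creation (a sketch
  only, not the oslo.db API) that would tolerate a late-starting MySQL:

      import time

      import sqlalchemy
      from sqlalchemy import exc

      def create_engine_with_retry(url, max_retries=10, retry_interval=10):
          for attempt in range(max_retries):
              engine = sqlalchemy.create_engine(url)
              try:
                  engine.connect().close()   # probe before handing out
                  return engine
              except exc.OperationalError:
                  if attempt == max_retries - 1:
                      raise
                  time.sleep(retry_interval)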

  The following traceback can be seen in service log:

  2014-10-01 13:46:10.588 5812 TRACE neutron Traceback (most recent call last):
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/bin/neutron-server", 
line 10, in 
  2014-10-01 13:46:10.588 5812 TRACE neutron sys.exit(main())
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/server/__init__.py", line 47, in main
  2014-10-01 13:46:10.588 5812 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 105, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron LOG.exception(_('Unrecoverable 
error: please check log '
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py", line 
82, in __exit__
  2014-10-01 13:46:10.588 5812 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 102, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron service.start()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 73, in start
  2014-10-01 13:46:10.588 5812 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 168, in _run_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
config.load_paste_app(app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/common/config.py", line 182, in 
load_paste_app
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
deploy.loadapp("config:%s" % config_path, name=app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2014-10-01 13:46:10.588 5812 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2014-10-01 13:46:10.588 5812 TRACE neutron return context.create()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
  2014-10-01 13:46:10.588 5812 TRACE neutron return 
self.object_type.invoke(self)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2014-10-01 13:46:10.588 5812 TRACE neutron **context.local_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
  2014-10-01 13:46:10.588 5812 TRACE neutron val = callable(*args, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/urlmap.py", line 25, in urlmap_factory
  2014-10-01 13:46:10.588 5812 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2014-10-01 13:46:10.588 5812 

[Yahoo-eng-team] [Bug 1355777] Re: support for ipv6 nameservers

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355777

Title:
  support for ipv6 nameservers

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The current git version of nova does not fully support IPv6 nameservers, 
despite it being possible to set them during subnet creation.
  This patch adds this support in nova (git) and its interfaces.template. It is 
currently deployed and used in our infrastructure based on icehouse (Nova 
2.17.0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300697] Re: VMWare - Instance with volume attached cannot be resized

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300697

Title:
  VMWare - Instance with volume attached cannot be resized

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The instance has volume attached. While trying to resize that instance, the 
resize operation fails and instance goes to ERROR state.
  Snippet from nova-compute log

  2014-04-01 09:28:28.768 23215 ERROR nova.compute.manager 
[req-390d2d8c-37d1-4689-adcb-e4cf05b86038 a4a3a9f5a25942b5b9a52e86bef6ac5c 
693523ae4a8548f9962dee10df9ccf3b] [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] Setting instance vm_state to ERROR
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] Traceback (most recent call last):
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3163, in 
finish_resize
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] disk_info, image)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3122, in 
_finish_resize
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] context, instance, 
refresh_conn_info=True)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1529, in 
_get_instance_volume_block_device_info
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] self.driver, self.conductor_api)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 290, in 
refresh_conn_infos
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] block_device_mapping)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 191, in 
refresh_connection_info
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] connector)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] res = method(self, ctx, volume_id, 
*args, **kwargs)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in 
initialize_connection
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] connector)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 321, in 
initialize_connection
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] {'connector': 
connector})[1]['connection_info']
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250, in 
_action
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] return self.api.client.post(url, 
body=body)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 217, in post
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] return self._cs_request(url, 'POST', 
**kwargs)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   File 
"/usr/lib/python2.6/site-packages/cinderclient/client.py", line 181, in 
_cs_request
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52] **kwargs)
  2014-04-01 09:28:28.768 23215 TRACE nova.compute.manager [instance: 
983cc3d9-9918-4461-8336-32601b28ea52]   Fi

[Yahoo-eng-team] [Bug 1298420] Re: Libvirt's image caching fetches images multiple times

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298420

Title:
  Libvirt's image caching fetches images multiple times

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When launching several VMs in rapid succession, it is possible that
  libvirt's image caching will fetch the same image several times.  This
  can occur when all of the VMs in question are using the same base
  image and this base image has not been previously fetched. The inline
  fetch_func_sync method prevents multiple threads from fetching the
  same image at the same time, but it does not prevent a thread that is
  waiting to acquire the lock from fetching the image that was being
  fetched while the lock was still in use. This is because the presence
  of the image is checked only before the lock has been acquired, not
  after.
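
  The standard remedy is double-checked locking: test for the image again
  after the lock has been acquired. A sketch of the pattern, independent
  of the nova code:

      import os
      import threading

      _fetch_lock = threading.Lock()

      def fetch_image_cached(base_path, fetch_func):
          if os.path.exists(base_path):      # fast path, lock-free
              return base_path
          with _fetch_lock:
              # Re-check: another thread may have completed the fetch
              # while we were waiting on the lock.
              if not os.path.exists(base_path):
                  fetch_func(base_path)
          return base_path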

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292243] Re: deallocate_for_instance should delete all neutron ports on error

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1292243

Title:
  deallocate_for_instance should delete all neutron ports on error

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When deleting an instance, if the instance has multiple ports and one
  of the deletes fails, nova should log an error and continue trying
  to delete the other ports. Previously, nova would stop deleting ports
  at the first non-404 error.
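
  A sketch of the intended "log and keep going" loop (names are
  illustrative, not the nova API):

      class PortNotFound(Exception):
          pass  # stand-in for the client's 404 exception

      def delete_ports(client, port_ids, log):
          for port_id in port_ids:
              try:
                  client.delete_port(port_id)
              except PortNotFound:
                  pass                     # already gone, nothing to do
              except Exception:
                  # Log and continue: one failure must not orphan the rest.
                  log.exception('Failed to delete port %s', port_id)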

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1292243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288049] Re: Lazy translation fails for complex formatting

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288049

Title:
  Lazy translation fails for complex formatting

Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  When lazy translation is enabled, _() returns a gettextutils.Message
  instance. When formatting (%) is called on the instance, rather than
  applying the replacement values it gathers them so they can be applied
  when the message is lazily translated. This support includes the use
  of keyword replacement as part of formatting (passing a dictionary).
  In order to limit the size of the dictionary, especially in the case
  where locals() is passed, the dictionary of values is limited to only
  those keywords that are actually referenced in the format string.

  The code that extracts the replacement keys (and thus dictionary
  entries to keep) only handles simple formatting.  Currently it does
  not handle things like '%(key).02f', but instead omits them from the
  dictionary, which causes a KeyError when the message is translated and
  the replacements applied.

  Confirmed that regex that extracts the keywords does not handle this case: 
  
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/gettextutils.py#L266
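
  The essence of a fix is a key-extraction pattern that tolerates
  conversion flags, width and precision between the key and the type
  character; a sketch (not the oslo-incubator code):

      import re

      # Matches '%(key)s', '%(key)d', '%(key).02f', '%(key)-8.2f', ...
      _KEY_RE = re.compile(
          r'%\((\w+)\)[-+ #0]*\d*(?:\.\d+)?[diouxXeEfFgGcrs]')

      def format_keys(fmt):
          return set(_KEY_RE.findall(fmt))

      assert format_keys('%(key).02f of %(total)s') == {'key', 'total'}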

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376169] [NEW] ODL MD can't reconnect to ODL after it restarts

2014-10-01 Thread Cédric OLLIVIER
Public bug reported:

The ODL mechanism driver (MD) stops processing requests once it receives
a 401 (Unauthorized) HTTP error, which happens after ODL restarts.
The only way to recover is to restart neutron.

This creates a strong coupling between restarts of neutron and ODL.

To reproduce it:
 - start ODL and neutron
 - create a network
 - restart ODL
 - create another network

The last command raises a 401 HTTP error, and every subsequent operation
will fail.

** Affects: neutron
 Importance: Undecided
 Assignee: Cédric OLLIVIER (m.col)
 Status: New


** Tags: icehouse-backport-potential opendaylight

** Tags added: icehouse-backport-potential opendaylight

** Changed in: neutron
 Assignee: (unassigned) => Cédric OLLIVIER (m.col)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376169

Title:
  ODL MD can't reconnect to ODL after it restarts

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The ODL mechanism driver (MD) stops processing requests once it receives
a 401 (Unauthorized) HTTP error, which happens after ODL restarts.
  The only way to recover is to restart neutron.

  This creates a strong coupling between restarts of neutron and ODL.

  To reproduce it:
   - start ODL and neutron
   - create a network
   - restart ODL
   - create another network

  The last command raises a 401 HTTP error, and every subsequent
  operation will fail.
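
  A sketch of the kind of recovery the driver needs (requests-based, names
  illustrative): on a 401, rebuild the session once and retry the call
  instead of failing every subsequent operation:

      import requests

      class OdlClient(object):
          def __init__(self, url, username, password):
              self.url = url
              self.auth = (username, password)
              self.session = requests.Session()

          def _request(self, method, path, **kwargs):
              resp = self.session.request(method, self.url + path,
                                          auth=self.auth, **kwargs)
              if resp.status_code == 401:
                  # ODL restarted and invalidated our session: start a
                  # fresh one and retry once before giving up.
                  self.session = requests.Session()
                  resp = self.session.request(method, self.url + path,
                                              auth=self.auth, **kwargs)
              resp.raise_for_status()
              return resp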

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351002] Re: live migration (non-block-migration) with shared instance storage and config drive fails

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351002

Title:
  live migration (non-block-migration) with shared instance storage and
  config drive fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Note: the reproduction case below has been fixed by not blocking migration on 
config drives.  However, the underlying issue of NFS not being marked
  as shared storage still stands, since the 'is_shared_block_storage' data
  is used elsewhere as well.

  ---

  To reproduce:

  1. Set up shared instance storage via NFS and use one of the file-based image 
backends
  2. Boot an instance with a config drive
  3. Attempt to live migrate said instance w/o doing a block migration

  The issue is caused by the following lines in nova/virt/libvirt/driver.py:
      if not (is_shared_instance_path and is_shared_block_storage):
          # NOTE(mikal): live migration of instances using config drive is
          # not supported because of a bug in libvirt (read only devices
          # are not copied by libvirt). See bug/1246201
          if configdrive.required_by(instance):
              raise exception.NoLiveMigrationForConfigDriveInLibVirt()

  The issue, I believe, was caused by commit bc45c56f1, which separated
  checks for shared instance directories and shared block storage
  backends like Ceph.  The issue is that if a deployer is not using
  Ceph, the call to
  self.image_backend.backend().is_shared_block_storage() returns False.
  However, is_shared_block_storage should not even be considered if the
  image backend is a file-based one.
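
  A sketch of the distinction the driver needs to make (predicate names
  are illustrative, not the nova API):

      def storage_is_shared(shared_instance_path, backend_is_file_based,
                            backend_shared_block_storage):
          # File-based backends (raw/qcow2 on NFS) live under the instance
          # path, so that check is authoritative for them; only consult
          # the backend itself (e.g. Ceph/RBD) for non-file backends.
          if backend_is_file_based:
              return shared_instance_path
          return backend_shared_block_storage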

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355922] Re: instance fault not created when boot process fails

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355922

Title:
  instance fault not created when boot process fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If the build process makes it to build_and_run_instance in the compute
  manager no instance faults are recorded for failures after that point.
  The instance will be set to an ERROR state appropriately, but no
  information is stored to return to the user.
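
  The shape of the fix is to record a fault in the failure path before
  re-raising; a self-contained sketch (db_api stands in for nova's fault
  APIs, not the exact call):

      import sys
      import traceback

      def record_instance_fault(db_api, context, instance, exc):
          # Persist enough detail for the user to learn why the
          # instance went to ERROR.
          db_api.instance_fault_create(context, {
              'instance_uuid': instance['uuid'],
              'code': 500,
              'message': str(exc),
              'details': ''.join(
                  traceback.format_exception(*sys.exc_info())),
          })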

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351301] Re: Log messages for child cells informing their parents can be more verbose

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351301

Title:
  Log messages for child cells informing their parents can be more
  verbose

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In the nova-cells service, the child cells frequently update their parents
  about their capabilities and capacities. The corresponding log
  messages usually contain a list of capacities and capabilities updated
  by the child.  The names of the parent cells to which this information
  was sent can also be added to the log message for completeness.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356633] Re: use_usb_tablet=true have no effect

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356633

Title:
  use_usb_tablet=true have no effect

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Hello Stackers!

  I'm trying to enable the "usb tablet" to see if it improve my Windows
  guests but, it have not effect.

  Steps to reproduce:

  1- Install Ubuntu 14.04.1 with IceHouse;

  2- Enable the following line at /etc/nova/nova-compute.conf, under
  [libvirt] group:

  ---
  use_usb_tablet=true
  ---

  3- Start a guest (Windows 2k8 R2), go to its compute node to verify
  the VM configuration file with:

  ---
  virsh dumpxml instance-WWWZ
  ---

  ...There is no "usb tablet" there.

  I tried to put "use_usb_tablet=true" under [DEFAULT] in nova.conf;
  still no luck.

  Thanks in advance!

  Regards,
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357372] Re: Race condition in VNC port allocation when spawning a instance on VMware

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357372

Title:
  Race condition in VNC port allocation when spawning a instance on
  VMware

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  When spawning some instances, the nova VMware driver can hit a race
  condition in VNC port allocation. Although the get_vnc_port function
  holds a lock, that does not guarantee that the whole VNC port
  allocation process is locked, so another instance can receive the same
  port if it requests a VNC port before nova has finished allocating the
  port to the first VM.

  If two instances with the same VNC port are allocated on the same
  host, this can lead to improper access to another instance's console.

  Reproduce the problem: launch two or more instances at the same time.
  In some cases one instance executes get_vnc_port and picks a port, but
  before it has finished _set_vnc_config another instance executes
  get_vnc_port and picks the same port.

  How often this occurs: unpredictable.
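
  A self-contained sketch of the race and the obvious repair, with
  stand-in names rather than the driver's real code:

    import threading

    _lock = threading.Lock()
    _used_ports = set()

    def get_vnc_port_racy(start=5900, end=6000):
        # Buggy shape: the lock covers picking a port, but the port is
        # only recorded as used later (by the config step), so two
        # concurrent spawns can pick the same port.
        with _lock:
            for port in range(start, end):
                if port not in _used_ports:
                    return port

    def get_vnc_port_safe(start=5900, end=6000):
        # Fixed shape: pick *and* reserve the port in one critical section.
        with _lock:
            for port in range(start, end):
                if port not in _used_ports:
                    _used_ports.add(port)
                    return port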

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296478] Re: The Hyper-V driver's list_instances() returns an empty result set on certain localized versions of the OS

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296478

Title:
  The Hyper-V driver's list_instances() returns an empty result set on
  certain localized versions of the OS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  This issue is related to the different values that MSVM_ComputerSystem's
  Caption property can have in different locales.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296430] Re: Improve error message when attempting to delete a host aggregate that has hosts

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296430

Title:
  Improve error message when attempting to delete a host aggregate that
  has hosts

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When you attempt to delete a host aggregate that has associated hosts,
  you get a generic and confusing error message saying:
  Error: Unable to delete host aggregate: 

  This error should be more specific and tell the user that the action
  cannot be completed unless the hosts are removed.

  Once you remove them all, the delete action can be completed without
  problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367642] Re: "revertResize/confirmResize" server actions do not work for v2.1 API

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367642

Title:
  "revertResize/confirmResize" server actions does not work for v2.1 API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  "revertResize/confirmResize" server actions does not work for v2.1 API

  Those needs to be converted to V2.1 from V3 base code.
  This needs to be fixed to make V2.1 backward compatible with V2 APIs

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368597] Re: Wrong status code in @wsgi.response decorator in server's `confirmResize` action

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368597

Title:
  Wrong status code in @wsgi.response decorator in server's
  `confirmResize` action

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In the server's `confirmResize` action, the status code in the
  @wsgi.response decorator is set to 202, but this is overridden/ignored
  by the return statement (return exc.HTTPNoContent()), which returns a
  204 status code -
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1080

  This is very confusing; the @wsgi.response decorator should carry the
  expected status code, for consistency with other APIs.

  No change is required in the API's return status code, but the code
  should be fixed to avoid the confusion.
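
  A toy reproduction of the mismatch (the decorator below is a stand-in
  for @wsgi.response; only webob is real):

    import webob.exc

    def response(code):
        # Stand-in for @wsgi.response: records the default status code
        # used when the handler returns a plain body.
        def decorator(func):
            func.wsgi_code = code
            return func
        return decorator

    @response(202)                          # advertised: 202 Accepted
    def confirm_resize(req, id, body):
        return webob.exc.HTTPNoContent()    # actually sent: 204 No Content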

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1201873] Re: dnsmasq does not use -h, so /etc/hosts sends folks to loopback when they look up the machine it's running on

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1201873

Title:
  dnsmasq does not use -h, so /etc/hosts sends folks to loopback when
  they look up the machine it's running on

Status in OpenStack Compute (Nova):
  Fix Released
Status in “nova” package in Ubuntu:
  Triaged

Bug description:
   from dnsmasq(8):

-h, --no-hosts
Don't read the hostnames in /etc/hosts.

  
  I reliably get bit by this during certain kinds of deployments, where my 
nova-network/dns host has an entry in /etc/hosts such as:

  127.0.1.1hostname.example.com hostname

  I keep having to edit /etc/hosts on that machine to use a real IP,
  because juju gets really confused when it looks up certain openstack
  hostnames and gets sent to its own instance!
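
  A hedged sketch of what passing the flag through looks like when nova
  assembles the dnsmasq command line (the function name is illustrative;
  the dnsmasq flags are real):

    def build_dnsmasq_cmd(conf_file, addn_hosts_file):
        # --no-hosts stops dnsmasq from answering with the compute host's
        # own /etc/hosts entries; guests then resolve only the
        # nova-managed names in the --addn-hosts file.
        return ['dnsmasq',
                '--no-hosts',
                '--conf-file=%s' % conf_file,
                '--addn-hosts=%s' % addn_hosts_file]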

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1201873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350355] Re: nova-network: inconsistent parameters in deallocate_for_instance()

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350355

Title:
  nova-network: inconsistent parameters in deallocate_for_instance()

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The client side of the network RPC API sets the 'requested_networks'
  parameter in the deallocate_for_instance() call [1], while the server
  side expects the 'fixed_ips' parameter [2].

  [1] https://github.com/openstack/nova/blob/master/nova/network/rpcapi.py#L183
  [2] https://github.com/openstack/nova/blob/master/nova/network/manager.py#L555
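
  The shape of the bug, reduced to plain Python (the classes below are
  stand-ins for the RPC client and server, not the real plumbing):

    class NetworkRpcClient(object):
        def deallocate_for_instance(self, instance, requested_networks=None):
            # Client sends 'requested_networks' over the wire...
            return {'instance': instance,
                    'requested_networks': requested_networks}

    class NetworkManager(object):
        def deallocate_for_instance(self, instance, fixed_ips=None):
            # ...but the server-side signature only knows 'fixed_ips',
            # so dispatching the client's kwargs raises TypeError.
            return fixed_ips

    manager = NetworkManager()
    try:
        manager.deallocate_for_instance(instance='uuid-1',
                                        requested_networks=['net-1'])
    except TypeError as e:
        print(e)   # unexpected keyword argument 'requested_networks'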

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367633] Re: Server action 'createImage' does not work for v2.1 API

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367633

Title:
  Server action 'createImage' does not work for v2.1 API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The 'createImage' server action does not work for the V2.1 API.

  It needs to be converted to V2.1 from the V3 base code.
  This needs to be fixed to make V2.1 backward compatible with the V2 APIs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350846] Re: add/delete fixed ip fails with nova-network

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350846

Title:
  add/delete fixed ip fails with nova-network

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova add-fixed-ip   fails with the following in the
  compute log:

  2014-07-31 06:01:48.697 ERROR oslo.messaging.rpc.dispatcher 
[req-6e04dd42-1ebe-4aa3-a37b-e84bb60b3413 admin demo] Exception during message 
handling: 'dict' object has no attribute 'get_meta'
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 414, in decorated_function
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 327, in decorated_function
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 315, in decorated_function
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 3737, in 
add_fixed_ip_to_instance
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
self._inject_network_info(context, instance, network_info)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4091, in _inject_network_info
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher network_info)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5383, in inject_network_info
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
self.firewall_driver.setup_basic_filtering(instance, nw_info)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/firewall.py", line 286, in 
setup_basic_filtering
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher 
self.nwfilter.setup_basic_filtering(instance, network_info)
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/firewall.py", line 123, in 
setup_basic_filtering
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher if 
subnet.get_meta('dhcp_server'):
  2014-07-31 06:01:48.697 TRACE oslo.messaging.rpc.dispatcher AttributeError: 
'dict' object has no attribute 'get_meta'

  same happens with remove-fixed-ip call
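
  The failure reduced to a few lines, with a plain dict standing in for
  the legacy subnet (the direction of the fix, as this editor reads it,
  is to hand the firewall driver hydrated model objects, which do have
  get_meta(), rather than raw dicts):

    subnet = {'dhcp_server': '10.0.0.1'}   # legacy dict, not a model object

    try:
        subnet.get_meta('dhcp_server')     # AttributeError on a dict
    except AttributeError:
        print(subnet.get('dhcp_server'))   # dict access works: 10.0.0.1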

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1334903] Re: Should use 403 status code instead of 413 when out of quota

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334903

Title:
  Should use 403 status code instead of 413 when out of quota

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We should fix the remaining API cases where the 413 status code is
  used instead of 403 when the failure is due to running out of quota
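
  A sketch of the target behaviour (webob is real; the helper itself is
  illustrative):

    import webob.exc

    def over_quota_response(overs):
        # 403 signals a policy/authorization limit being hit; 413 wrongly
        # suggests the request body itself was too large.
        msg = 'Quota exceeded for resources: %s' % ', '.join(overs)
        return webob.exc.HTTPForbidden(explanation=msg)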

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357599] Re: race condition with neutron in nova migrate code

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357599

Title:
  race condition with neutron in nova migrate code

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The tempest test that resizes an instance fails from time to time with
  a neutron virtual interface timeout error. This occurs because
  resize_instance() calls:

  disk_info = self.driver.migrate_disk_and_power_off(
  context, instance, migration.dest_host,
  instance_type, network_info,
  block_device_info)

  which calls destroy(), which unplugs the vifs. Then,

  self.driver.finish_migration(context, migration, instance,
   disk_info,
   network_info,
   image, resize_instance,
   block_device_info, power_on)

  is called, which expects a vif_plugged event. Since this happens on
  the same host, the neutron agent is unable to detect that the vif was
  unplugged and then plugged, because it happens so fast. To fix this we
  should check whether we are migrating to the same host; if we are, we
  should not expect to get an event.
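
  A sketch of that check under assumed names (not the committed patch):

    def expected_vif_plug_events(migration, network_info):
        # Same-host resize: the unplug/replug happens too fast for the
        # agent to notice, so no event will arrive -- don't wait for one.
        if migration['source_compute'] == migration['dest_compute']:
            return []
        return [('network-vif-plugged', vif['id']) for vif in network_info]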

  8d1] Setting instance vm_state to ERROR
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] disk_info, image)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] old_instance_type, sys_meta)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] raise 
exception.VirtualInterfaceCreateException()
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual 
Interface creation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308404] Re: Using deleted image as marker does not return bad request with v2 registry

2014-10-01 Thread Erno Kuvaja
It indeed looks like this bug has been fixed on current master. Thanks
Rajesh!

** Changed in: glance
   Status: New => Fix Committed

** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1308404

Title:
  Using deleted image as marker does not return bad request with v2
  registry

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  $ ./run_tests.sh --subunit 
glance.tests.functional.v2.test_images.TestImages.test_images_container
  Running `tools/with_venv.sh python -m glance.openstack.common.lockutils 
python setup.py testr --testr-args='--subunit --concurrency 1  --subunit 
glance.tests.functional.v2.test_images.TestImages.test_images_container'`
  glance.tests.functional.v2.test_images.TestImages
  test_images_container FAIL

  Slowest 1 tests took 12.71 secs:
  glance.tests.functional.v2.test_images.TestImages
  test_images_container 
12.71

  ==
  FAIL: glance.tests.functional.v2.test_images.TestImages.test_images_container
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File "/home/ubuntu/glance/glance/tests/functional/v2/test_images.py", line 
1649, in test_images_container
  self.assertEqual(400, response.status_code)
File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: 400 != 200

  1646 # Ensure bad request for using a deleted image as marker
  1647 path = self._url('/v2/images?marker=%s' % images[0]['id'])
  1648 response = requests.get(path, headers=self._headers())
  1649 self.assertEqual(400, response.status_code)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1308404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285880] Re: force_config_drive should be based on image property

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285880

Title:
  force_config_drive should be based on image property

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently there is a force_config_drive config item for each compute
  node. A VM migrated from a host with force_config_drive to a host
  without it will lose the config drive, which is not so good. Currently
  it's OK because most of the config drive information is only used at
  launch time.

  According to the comments, it would be better to base this on an image
  property.

  One thing to consider is rebuild. An image is changed when rebuilding,
  and I think if the new image has no property for the config drive,
  there should be no config drive for it. Thus we have to distinguish
  between the config_drive requirement from the user and from the image
  property.
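
  A sketch of the suggested precedence (the 'img_config_drive' property
  name and its values are assumptions for illustration):

    def wants_config_drive(image_meta, user_requested, force_flag=False):
        # The image property wins, so the decision travels with the image
        # across migrations and rebuilds; fall back to the user's request
        # and finally to the per-host flag.
        prop = image_meta.get('properties', {}).get('img_config_drive')
        if prop is not None:
            return prop == 'mandatory'
        return user_requested or force_flag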

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1285880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326599] Re: The 'uri' input validation of jsonschema does not work

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326599

Title:
  The 'uri' input validation of jsonschema does not work

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The 'uri' format validation of the jsonschema library should be used
  for API input validation, but it does not work.

  The jsonschema library tries to import the rfc3987 library for 'uri'
  validation, and if the import fails, jsonschema does not check the
  'uri' format at all. Nova's requirements.txt does not contain rfc3987,
  so the 'uri' validation does not work.
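
  This is easy to demonstrate: with jsonschema installed but rfc3987
  absent, the obviously invalid value below validates without complaint.

    import jsonschema

    schema = {'type': 'string', 'format': 'uri'}

    # FormatChecker only enforces formats that have a registered checker;
    # the 'uri' checker is registered only if rfc3987 imports successfully.
    jsonschema.validate('not a uri at all', schema,
                        format_checker=jsonschema.FormatChecker())
    print('accepted')   # reached unless rfc3987 is installed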

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281627] Re: nova/image/download/file.py is missing unit tests

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281627

Title:
  nova/image/download/file.py is missing unit tests

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova/image/download/file.py is missing unit tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308649] Re: Attaching Nova volumes fails if open-iscsi daemon is not already running

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308649

Title:
  Attaching Nova volumes fails if open-iscsi daemon is not already
  running

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova-compute traces on first volume attach if open-iscsi is not
  running:

  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] Traceback (most recent call last):
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/compute/manager.py", line 4142, in 
_attach_volume
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] do_check_attach=False, 
do_driver_attach=True)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/block_device.py", line 44, in 
wrapped
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] ret_val = method(obj, context, *args, 
**kwargs)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/block_device.py", line 248, in 
attach
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] connector)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/excutils.py", line 
68, in __exit__
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] six.reraise(self.type_, self.value, 
self.tb)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] device_type=self['device_type'], 
encryption=encryption)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1224, in 
attach_volume
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] disk_info)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1183, in 
volume_driver_method
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] return method(connection_info, *args, 
**kwargs)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] return f(*args, **kwargs)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] self.gen.throw(type, value, traceback)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
212, in lock
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] yield sem
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] return f(*args, **kwargs)
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481]   File 
"/usr/lib64/python2.6/site-packages/nova/virt/libvirt/volume.py", line 285, in 
connect_volume
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867-67a9-4af7-b7d0-d75dff53b481] for ip, iqn in 
self._get_target_portals_from_iscsiadm_output(out):
  2014-04-16 13:19:44.359 6079 TRACE nova.compute.manager [instance: 
ddc07867

[Yahoo-eng-team] [Bug 1307416] Re: Unshelve instance needs handling exceptions

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307416

Title:
  Unshelve instance needs handling exceptions

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There are some error cases not handled when we unshelve an instance in
  the conductor.

     nova/conductor/manager.py#823

  If the key 'shelved_image_id' is not defined, the current code raises
  an unhandled KeyError. Also, when 'shelved_image_id' is set to None
  (which is not the expected behavior), the error is not correctly
  handled and the message can be confusing.
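
  A sketch of defensive handling for both cases (the exception type and
  messages are illustrative, not the merged fix):

    def get_shelved_image_id(sys_meta):
        try:
            image_id = sys_meta['shelved_image_id']
        except KeyError:
            raise RuntimeError('unshelve failed: shelved_image_id is '
                               'missing from system metadata')
        if image_id is None:
            raise RuntimeError('unshelve failed: shelved_image_id is '
                               'unexpectedly None')
        return image_id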

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1307416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306404] Re: reset_network raises not implemented error on network_reset

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306404

Title:
  reset_network raises not implemented error on network_reset

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  2014-04-11 04:13:48.201 ERROR oslo.messaging.rpc.dispatcher 
[req-801875af-3ef5-486b-8d5d-0794e785301c ServersAdminTestJSON-2137680125 
ServersAdminTestJSON-546442934] Exception during message handling: 
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 281, in decorated_function
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher pass
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 267, in decorated_function
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 310, in decorated_function
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 297, in decorated_function
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3946, in reset_network
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
self.driver.reset_network(instance)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/virt/xenapi/driver.py", line 351, in reset_network
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
self._vmops.reset_network(instance)
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/virt/xenapi/vmops.py", line 1714, in reset_network
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher raise 
NotImplementedError()
  2014-04-11 04:13:48.201 13779 TRACE oslo.messaging.rpc.dispatcher 
NotImplementedError

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304099] Re: link prefixes are truncated

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304099

Title:
  link prefixes are truncated

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The osapi_glance_link_prefix and osapi_compute_link_prefix
  configuration parameters have their paths removed. For instance, if
  nova.conf contains

  osapi_compute_link_prefix = http://127.0.0.1/compute/

  the values displayed in the API response exclude the "compute/"
  component. Other services, such as keystone, retain the path.

  This bit of code is where the bug occurs:

  
https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582
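
  The truncation can be reproduced with urlparse alone (Python 2, to
  match the tree at the time; the URLs are illustrative):

    import urlparse

    prefix = 'http://127.0.0.1/compute/'
    orig = 'http://192.168.0.1:8774/v2/servers/abc'

    # Taking only scheme and netloc from the prefix -- which is what the
    # linked code effectively does -- discards the '/compute' path:
    scheme, netloc = urlparse.urlsplit(prefix)[:2]
    rewritten = urlparse.urlunsplit(
        (scheme, netloc) + tuple(urlparse.urlsplit(orig)[2:]))
    print(rewritten)   # http://127.0.0.1/v2/servers/abc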

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313779] Re: ResourceTracker auditing the wrong amount of free resources for Ironic

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313779

Title:
  ResourceTracker auditing the wrong amount of free resources for Ironic

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I have two nodes available in Ironic; they both have cpus=1,
  memory_mb=512, local_gb=10, cpu_arch=x86_64, but when you look at the
  audit logs the resource tracker seems to be reporting the resources of
  only one of the nodes:

  N-cpu:
  2014-04-28 16:09:47.200 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 512
  2014-04-28 16:09:47.200 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 10
  2014-04-28 16:09:47.200 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1

  If I update the first node in the list and, say, double the RAM, the
  audit will report it:

  N-cpu:
  2014-04-28 16:11:26.885 AUDIT nova.compute.resource_tracker 
[req-8a8a5d53-8cf1-4b9e-9420-5f0e3a6f9b27 None None] Free ram (MB): 1024

  But if I update the second node, no changes are reported back to the
  resource tracker...

  ...

  Worse, if I delete the properties from the first node, the Resource
  Tracker will report:

  $ ironic node-update $NODE remove properties

  N-cpu:
  2014-04-28 16:13:07.735 AUDIT nova.compute.resource_tracker 
[req-c3211bd1-768d-40ea-b2cf-6e73c69e39b1 None None] Free ram (MB): 0
  2014-04-28 16:13:07.735 AUDIT nova.compute.resource_tracker 
[req-c3211bd1-768d-40ea-b2cf-6e73c69e39b1 None None] Free disk (GB): 0
  2014-04-28 16:13:07.735 AUDIT nova.compute.resource_tracker 
[req-c3211bd1-768d-40ea-b2cf-6e73c69e39b1 None None] Free VCPU information 
unavailable

  
  UPD from comment:
  We need to change Nova to understand the Ironic use case better. For
  nova, each n-cpu is responsible for managing some number of machines,
  but when the Ironic driver is loaded the n-cpu is just a thin layer
  that talks to the Ironic API, and every n-cpu manages _all_ the
  machines in the cluster. So in the Ironic use case different n-cpus
  share the same machines, and that's what confuses nova when auditing
  the resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1313779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294556] Re: os-aggregates sample files miss

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294556

Title:
  os-aggregates sample files miss

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  os-aggregates' sample files are different in V2 and V3.

  ~$ vi /opt/stack/nova/doc/api_samples/os-aggregates/aggregates-list-get-resp.json

    {
        "aggregates": [
            {
                "availability_zone": "nova",
                "created_at": "2012-11-16T06:22:23.361359",
                "deleted": false,★
                "deleted_at": null,
                "hosts": [],
                "id": 1,
                "metadata": {
                    "availability_zone": "nova"
                },
                "name": "name",
                "updated_at": null
            }
        ]
    }

  ~$ vi /opt/stack/nova/doc/v3/api_samples/os-aggregates/aggregates-list-get-resp.json

    {
        "aggregates": [
            {
                "availability_zone": "nova",
                "created_at": "2013-08-18T12:17:56.856455",
                "deleted": 0,★
                "deleted_at": null,
                "hosts": [],
                "id": 1,
                "metadata": {
                    "availability_zone": "nova"
                },
                "name": "name",
                "updated_at": null
            }
        ]
    }

  The 'deleted' element is 'false' in V2 but '0' in V3, and the same is
  true of aggregates-get-resp.json.

  I also found that in the response from the API, 'deleted' is 'false':

   curl -i 'http://10.21.42.98:8774/v3/os-aggregates' -X GET -H "X-Auth-
  Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept:
  application/json" -H "X-Auth-Token: MIISPQYJKoZIhvcNA...

  RESP BODY: {"aggregates": [{"name": "agg1", "availability_zone":
  "nova", ★"deleted": false,★ "created_at":
  "2014-03-18T19:38:33.00", "updated_at": null, "hosts": [],
  "deleted_at": null, "id": 1, "metadata": {"availability_zone":
  "nova"}}, {"name": "agg2", "availability_zone": null, "deleted":
  false, "created_at": "2014-03-18T19:41:06.00", "updated_at": null,
  "hosts": [], "deleted_at": null, "id": 2, "metadata": {}}]}

  So I think this is a bug in the V3 sample file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289397] Re: nova instance delete fails if dhcp_release fails

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289397

Title:
  nova  instance delete fails if dhcp_release fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  ssatya@devstack:~$ nova boot --image 1e95fe6b-cec6-4420-97d1-1e7bc8c81c49 
--flavor 1  testdummay
  
+--+---+
  | Property | Value
 |
  
+--+---+
  | OS-DCF:diskConfig| MANUAL   
 |
  | OS-EXT-AZ:availability_zone  | nova 
 |
  | OS-EXT-STS:power_state   | 0
 |
  | OS-EXT-STS:task_state| networking   
 |
  | OS-EXT-STS:vm_state  | building 
 |
  | OS-SRV-USG:launched_at   | -
 |
  | OS-SRV-USG:terminated_at | -
 |
  | accessIPv4   |  
 |
  | accessIPv6   |  
 |
  | adminPass| fK8SPGtHLUds 
 |
  | config_drive |  
 |
  | created  | 2014-03-07T14:33:49Z 
 |
  | flavor   | m1.tiny (1)  
 |
  | hostId   | 
2c1ae30aa2a235d9c0c8b04aae3f4199cd98356e44a03b5c8f878adb  |
  | id   | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e 
 |
  | image| debian-2.6.32-i686 
(1e95fe6b-cec6-4420-97d1-1e7bc8c81c49) |
  | key_name | -
 |
  | metadata | {}   
 |
  | name | testdummay   
 |
  | os-extended-volumes:volumes_attached | []   
 |
  | progress | 0
 |
  | security_groups  | default  
 |
  | status   | BUILD
 |
  | tenant_id| 209ab7e4f3744675924212805db3ad74 
 |
  | updated  | 2014-03-07T14:33:50Z 
 |
  | user_id  | f3756a4910054883b84ee15acc15fbd1 
 |
  
+--+---+
  ssatya@devstack:~$ nova list
  
+--++++-+--+
  | ID   | Name   | Status | Task State | 
Power State | Networks |
  
+--++++-+--+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | BUILD  | spawning   | 
NOSTATE |  |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1| ERROR  | deleting   | 
Shutdown| private=10.0.0.2 |
  
+--++++-+--+
  ssatya@devstack:~$ nova list
  
+--++++-+--+
  | ID   | Name   | Status | Task State | 
Power State | Networks |
  
+--++++-+--+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | ACTIVE | -  | 
Running | private=10.0.0.3 |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1| ERROR  | deleting   | 
Shutdown| private=10.0.0.2 |
  
+--+

[Yahoo-eng-team] [Bug 1323975] Re: do not use default=None for config options

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323975

Title:
  do not use default=None for config options

Status in OpenStack Key Management (Barbican):
  Fix Released
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  In Progress
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Released

Bug description:
  In the cfg module default=None is set as the default value. It's not
  necessary to set it again when defining config options.
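
  For example, with oslo.config's cfg module the two declarations below
  behave identically (the option name is illustrative):

    from oslo.config import cfg

    # Redundant: default=None is already the implicit default.
    verbose_opt = cfg.StrOpt('helper_url', default=None,
                             help='URL of the helper service')

    # Preferred: same behaviour, less noise.
    terse_opt = cfg.StrOpt('helper_url',
                           help='URL of the helper service')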

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1323975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317174] Re: novnc console failure after Icehouse upgrade

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317174

Title:
  novnc console failure after Icehouse upgrade

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  After upgrading my Havana installation to Icehouse, VNC console logins
  are no longer working (1006 error).

  The version in use is:

  openstack-nova-novncproxy-2014.1-2.el6.noarch

  from RDO.

  This is the full error in the logs:

  2014-05-07 17:12:58.003 13074 AUDIT nova.consoleauth.manager 
[req-684f9e8d-3c0a-4647-aa66-44f0bb35c4df None None] Checking Token: 
dbbd1b9b-002f-46b6-bf79-0d90e92c034e, True
  2014-05-07 17:12:58.112 13074 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: tuple index out of range
  Traceback (most recent call last):

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply
  incoming.message))

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  return func(*args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 390, 
in decorated_function
  args = (_load_instance(args[0]),) + args[1:]

  IndexError: tuple index out of range
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/consoleauth/manager.py", line 117, in 
check_token
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher if 
self._validate_token(context, token):
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/consoleauth/manager.py", line 108, in 
_validate_token
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
token['console_type'])
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/rpcapi.py", line 506, in 
validate_console_port
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
console_type=console_type)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
412, in send
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
405, in _send
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher IndexError: 
tuple index out of range
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher 
  2014-05-07 17:12:58.112 13074 TRACE oslo.mess

[Yahoo-eng-team] [Bug 1333494] Re: os-agents api update returns string while index returns integer

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333494

Title:
  os-agents api update returns string while index returns integer

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This bug was found by Dan Smith in
  https://review.openstack.org/#/c/101995/

  First problem: there is an inconsistency in the api samples.

  The create and index actions actually return an integer for the agent
  id, but in the api samples file the agent id is a string. This is
  because the api sample file provides wrong fake data.

  Second problem: the update action returns a string for the agent id.
  For backward compatibility reasons, we can't fix this for the v2 and
  v2.1 apis.

  We can only fix this for the v3 api, and we need to add a translator
  for the v2.1 api later.

  This problem will be fixed once the v3 api feature is exposed by
  microversions.
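
  A sketch of the consistent shape (field names follow the os-agents
  response; the helper itself is invented):

    def format_agent(agent_id, agent):
        # Return the id as an integer so update matches create/index.
        return {'agent': {'agent_id': int(agent_id),
                          'version': agent['version'],
                          'url': agent['url'],
                          'md5hash': agent['md5hash']}}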

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341954

Title:
  suds client subject to cache poisoning by local attacker

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Gantt:
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo VMware library for OpenStack projects:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  
  The suds project appears to be largely unmaintained upstream. The default 
cache implementation stores pickled objects to a predictable path in /tmp. This 
can be used by a local attacker to redirect SOAP requests via symlinks or run a 
privilege escalation / code execution attack via a pickle exploit. 

  cinder/requirements.txt:suds>=0.4
  gantt/requirements.txt:suds>=0.4
  nova/requirements.txt:suds>=0.4
  oslo.vmware/requirements.txt:suds>=0.4

  
  The details are available here - 
  https://bugzilla.redhat.com/show_bug.cgi?id=978696
  (CVE-2013-2217)

  Although this is an unlikely attack vector, steps should be taken to
  prevent this behaviour. Potential ways to fix this are explicitly
  setting the cache location to a directory created via
  tempfile.mkdtemp(), disabling the cache with
  client.set_options(cache=None), or using a custom cache implementation
  that doesn't load / store pickled objects from an insecure location.
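
  A minimal sketch of the first two mitigations, assuming the stock suds
  Client and its ObjectCache class (the WSDL URL is a placeholder):

    import tempfile

    from suds.cache import ObjectCache
    from suds.client import Client

    # Keep the pickled cache in a private, randomly named directory
    # instead of the predictable shared default under /tmp.
    cache_dir = tempfile.mkdtemp()
    client = Client('https://example.org/service?wsdl',
                    cache=ObjectCache(location=cache_dir))

    # Alternatively, disable the cache entirely:
    # client.set_options(cache=None)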

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269990] Re: LXC volume issues

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269990

Title:
  LXC volume issues

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Few issues with LXC and volumes that relate to the same code.

  * Hard rebooting an instance will make attached volumes disappear from
  the libvirt xml

  * Booting an instance specifying an extra volume (passing in
  block_device_mappings on server.create) will result in the volume not
  being in the libvirt xml

  This is due to 2 places in the code where LXC is treated differently

  1. nova.virt.libvirt.blockinfo  get_disk_mapping
  2. nova.virt.libvirt.driver get_guest_storage_config

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271097] Re: [tox] ImportError: No module named config

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271097

Title:
  [tox] ImportError: No module named config

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to run tox on Nova, tests fail because of a missing
  oslo.config package.

  Although oslo.config is specified in the requirements.txt file and pip
  seems to be successful in installing oslo.config, the directory where
  it should be isn't created.

  I'm using Nova trunk version 23a0bea232b42df3c26425d7164c779c82fa7a77

  Here's my trace:

  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ pip freeze
  argparse==1.2.1
  wsgiref==0.1.2
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ git pull
  Already up-to-date.
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ aptitude search oslo
  p   srvadmin-oslog - Dell OpenManage 
Server Administrator OS Logging Control  
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ rm -r .tox/py27/
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ tox -e py27
  py27 create: /opt/stack/nova/.tox/py27
  py27 installdeps: -r/opt/stack/nova/requirements.txt, 
-r/opt/stack/nova/test-requirements.txt
  py27 develop-inst: /opt/stack/nova
  py27 runtests: commands[0] | python -m nova.openstack.common.lockutils python 
setup.py test --slowest --testr-args=
  Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
  "__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
  exec code in run_globals
File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 29, in 

  from oslo.config import cfg
  ImportError: No module named config
  ERROR: InvocationError: '/opt/stack/nova/.tox/py27/bin/python -m 
nova.openstack.common.lockutils python setup.py test --slowest --testr-args='
  __ summary ___
  ERROR:   py27: commands failed
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ ls -d 
.tox/py27/lib/python2.7/site-packages/oslo*
  .tox/py27/lib/python2.7/site-packages/oslo
  .tox/py27/lib/python2.7/site-packages/oslo.rootwrap-1.0.0-py2.7.egg-info
  .tox/py27/lib/python2.7/site-packages/oslo.rootwrap-1.0.0-py2.7-nspkg.pth
  .tox/py27/lib/python2.7/site-packages/oslo.sphinx-1.1-py2.7.egg-info
  .tox/py27/lib/python2.7/site-packages/oslo.sphinx-1.1-py2.7-nspkg.pth
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ ls 
.tox/py27/lib/python2.7/site-packages/oslo
  rootwrap  sphinx
  (cleanenv)ubuntu@flstack2:/opt/stack/nova$ source .tox/py27/bin/activate
  (py27)ubuntu@flstack2:/opt/stack/nova$ pip freeze | grep oslo.config
  -e 
git+https://github.com/openstack/oslo.config.git@03930e31965524cb90279b5d9e793c6825791b54#egg=oslo.config-master
  (py27)ubuntu@flstack2:/opt/stack/nova$ grep oslo.config requirements.txt 
  oslo.config>=1.2.0
  (py27)ubuntu@flstack2:/opt/stack/nova$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361806] Re: requested_network as a tuple now should be converted to an object

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361806

Title:
  requested_network as a tuple now should be converted to an object

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  requested_network is a tuple of (net_id, fixed_ip, port_id). Some of
  the members can be None depending on the user input, for example to
  the nova boot command. When the SR-IOV support tries to use it, it
  needs to add a pci_request_id to it. Concerns were raised about the
  tuple's expandability and about how error-prone packing/unpacking the
  tuple is.
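
  A minimal sketch of the hazard and of an object-based alternative (the
  class below is illustrative only, not the final nova object):

    # Tuple form: meaning lives in position, so growing the tuple to
    # (net_id, fixed_ip, port_id, pci_request_id) breaks every
    # unpacking site that still expects three members.
    requested_network = ('some-net-id', None, 'some-port-id')
    net_id, fixed_ip, port_id = requested_network

    # Object form: fields are named, defaulted, and extensible.
    class NetworkRequest(object):
        def __init__(self, network_id=None, address=None,
                     port_id=None, pci_request_id=None):
            self.network_id = network_id
            self.address = address
            self.port_id = port_id
            self.pci_request_id = pci_request_id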

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340885] Re: Can't unset a flavor-key

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340885

Title:
  Can't unset a flavor-key

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I am able to set a flavor-key but not unset it.  devstack
  sha1=fdf1cffbd5d2a7b47d5bdadbc0755fcb2ff6d52f

  
  ubuntu@d8:~/devstack$ nova help flavor-key
  usage: nova flavor-key <flavor> <action> <key=value> [<key=value> ...]

  Set or unset extra_spec for a flavor.

  Positional arguments:
    <flavor>     Name or ID of flavor
    <action>     Actions: 'set' or 'unset'
    <key=value>  Extra_specs to set/unset (only key is necessary on unset)
  ubuntu@d8:~/devstack$ nova flavor-key m1.tiny set foo=bar
  ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
  +----------------------------+----------------+
  | Property                   | Value          |
  +----------------------------+----------------+
  | OS-FLV-DISABLED:disabled   | False          |
  | OS-FLV-EXT-DATA:ephemeral  | 0              |
  | disk                       | 1              |
  | extra_specs                | {"foo": "bar"} |
  | id                         | 1              |
  | name                       | m1.tiny        |
  | os-flavor-access:is_public | True           |
  | ram                        | 512            |
  | rxtx_factor                | 1.0            |
  | swap                       |                |
  | vcpus                      | 1              |
  +----------------------------+----------------+
  ubuntu@d8:~/devstack$ nova flavor-key m1.tiny unset foo
  ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
  +----------------------------+----------------+
  | Property                   | Value          |
  +----------------------------+----------------+
  | OS-FLV-DISABLED:disabled   | False          |
  | OS-FLV-EXT-DATA:ephemeral  | 0              |
  | disk                       | 1              |
  | extra_specs                | {"foo": "bar"} |
  | id                         | 1              |
  | name                       | m1.tiny        |
  | os-flavor-access:is_public | True           |
  | ram                        | 512            |
  | rxtx_factor                | 1.0            |
  | swap                       |                |
  | vcpus                      | 1              |
  +----------------------------+----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357972] Re: boot from volume fails on Hyper-V if boot device is not vda

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357972

Title:
  boot from volume fails on Hyper-V if boot device is not vda

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The Tempest test
  
"tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern"
  fails on Hyper-V.

  The cause is related to the fact that the root device is "sda" and not
  "vda".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360119] Re: Nova tries to re-define an existing nwfilter with the same name but different uuid

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360119

Title:
  Nova tries to re-define an existing nwfilter with the same name but
  different uuid

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Hello,

  I have successfully compiled libvirt 1.2.7 and qemu 2.1.0 but had
  some trouble with nova-compute. It appears that libvirt throws an
  error if a nwfilter is already present.

  Here is my debug log:
  2014-08-22 08:22:25.032 15354 DEBUG nova.virt.libvirt.firewall 
[req-0959ec86-3939-4e38-9505-48494b44a9fa f1d21892f9a0413c9437b6771e4290ce 
9cad53a0432d4164837b8c0b35d91307] nwfilterDefineXML may have failed with 
(operation failed: filter 'nova-nodhcp' already exists with uuid 
59970732-ca52-4521-ba0c-d001049d8460)! _define_filter 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/firewall.py:239
  2014-08-22 08:22:25.033 15354 DEBUG nova.virt.libvirt.firewall 
[req-0959ec86-3939-4e38-9505-48494b44a9fa f1d21892f9a0413c9437b6771e4290ce 
9cad53a0432d4164837b8c0b35d91307] nwfilterDefineXML may have failed with 
(operation failed: filter 'nova-base' already exists with uuid 
b5aa80ad-ea4a-4633-84ac-442c9270a143)! _define_filter 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/firewall.py:239
  2014-08-22 08:22:25.034 15354 DEBUG nova.virt.libvirt.firewall 
[req-0959ec86-3939-4e38-9505-48494b44a9fa f1d21892f9a0413c9437b6771e4290ce 
9cad53a0432d4164837b8c0b35d91307] nwfilterDefineXML may have failed with 
(operation failed: filter 'nova-vpn' already exists with uuid 
b61eb708-a9a5-4a16-8787-cdc58310babc)! _define_filter 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/firewall.py:239

  Here is the original function:

  def _define_filter(self, xml):
      if callable(xml):
          xml = xml()
      self._conn.nwfilterDefineXML(xml)

  And here is the "patched" function:

  def _define_filter(self, xml):
      if callable(xml):
          xml = xml()
      try:
          self._conn.nwfilterDefineXML(xml)
      except Exception, e:
          LOG.debug(_('nwfilterDefineXML may have failed with (%s)!'), e)

  I'm not a python expert but I think that patch could be adapted to
  raise an error ONLY if the nwfilter rule doesn't already exist.
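
  A minimal sketch of that adaptation, assuming the libvirt python
  bindings (the substring check on the error message is illustrative):

    import libvirt

    def _define_filter(self, xml):
        if callable(xml):
            xml = xml()
        try:
            self._conn.nwfilterDefineXML(xml)
        except libvirt.libvirtError as e:
            if 'already exists' in str(e):
                # The filter is already defined; nothing to do.
                LOG.debug('nwfilter already defined: %s', e)
            else:
                raise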

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362191] Re: Deprecate the libvirt volume_drivers config parameter

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362191

Title:
  Deprecate the libvirt volume_drivers config parameter

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In this thread the topic of deprecating and removing config parameters
  related to extension points for non-public APIs was discussed. The
  consensus was that such extension points should not be exposed and
  that instead people wishing to develop extensions should be doing so
on a nova branch instead of in an entirely separate repository.

https://www.mail-archive.com/openstack-
  d...@lists.openstack.org/msg30206.html

  The vif_drivers parameter is now removed, and this bug is to track
  deprecation & removal of the volume_drivers parameter since that
  serves an identical purpose.

  It will be deprecated in Kilo and deleted in L

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335315] Re: Cells: set_admin_password doesn't work after an objects update

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1335315

Title:
  Cells: set_admin_password doesn't work after an objects update

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Set admin password fails in the child cell service with the following:

  2014-06-24 20:45:08.403 29591 DEBUG nova.openstack.common.policy [req- None] 
Rule compute
  :set_admin_password will be now enforced enforce 
/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/openstack/common/
  policy.py:288
  2014-06-24 20:45:08.404 29591 ERROR nova.cells.messaging [req- None] Error 
processing mes
  sage locally: 'dict' object has no attribute 'task_state'
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ells/messaging.py", line 200, in _process_locally
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ells/messaging.py", line 1289, in _process_message_locally
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ells/messaging.py", line 692, in run_compute_api_method
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging return 
fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ompute/api.py", line 201, in wrapped
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ompute/api.py", line 191, in inner
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging return 
function(self, context, instance, *args, **kwargs)
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ompute/api.py", line 172, in inner
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging File 
"/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/c
  ompute/api.py", line 2712, in set_admin_password
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging instance.task_state 
= task_states.UPDATING_PASSWORD
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging AttributeError: 
'dict' object has no attribute 'task_state'
  2014-06-24 20:45:08.404 29591 TRACE nova.cells.messaging
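
  A sketch of the kind of guard that would avoid this, hydrating a
  legacy dict into an Instance object before calling the compute api
  (names and signature are illustrative only):

    from nova import objects

    def _ensure_instance_object(context, instance):
        # Older cells messages carry the instance as a plain dict, on
        # which attribute access such as instance.task_state fails.
        if isinstance(instance, dict):
            instance = objects.Instance._from_db_object(
                context, objects.Instance(), instance)
        return instance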

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1335315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283987] Re: Query Deadlock when creating >200 servers at once in sqlalchemy

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1283987

Title:
  Query Deadlock when creating >200 servers at once in sqlalchemy

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo Database library:
  Fix Released

Bug description:
  Query Deadlock when creating >200 servers at once in sqlalchemy.

  

  This bug occurred when I test this bug: 
  https://bugs.launchpad.net/nova/+bug/1270725

  The original info is logged here:
  http://paste.openstack.org/show/61534/

  --

  After checking the error log, we can see that the function that
  deadlocks is 'all()' in the sqlalchemy framework.

  Previously, we used the '@retry_on_dead_lock' decorator to retry
  requests when a deadlock occurs.

  But it is only applied to session deadlocks (query/flush/execute); it
  doesn't cover some 'Query' actions in sqlalchemy.

  So we need to add the same protection for 'all()' in sqlalchemy.
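
  A minimal sketch of that kind of protection (the retry policy and the
  names are illustrative, not the actual oslo implementation):

    import time

    from sqlalchemy.exc import OperationalError

    def retry_on_deadlock(fn, retries=5, delay=0.5):
        def wrapped(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except OperationalError as e:
                    # MySQL reports deadlocks as error 1213.
                    if ('Deadlock found' not in str(e)
                            or attempt == retries - 1):
                        raise
                    time.sleep(delay)
        return wrapped

    # e.g. rows = retry_on_deadlock(query.all)()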

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1283987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339462] Re: vmware: cannot use adaptertype Paravirtual

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339462

Title:
  vmware: cannot use adaptertype Paravirtual

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  nova icehouse Ubuntu14.04 
  Version: 1:2014.1-0ubuntu1.2

  The current Nova vmwareapi driver cannot configure a guest VM's SCSI
  controller type as "Paravirtual"
  (though this requires vmwaretools running on the guest VM).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333471] Re: Checking security group in nova immediately after instance is created results in error

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333471

Title:
  Checking security group in nova immediately after instance is created
  results in error

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Environment:
  Openstack Havana with Neutron for networking and security groups

  Error:
  Response from nova:
  The server could not comply with the request since it is either malformed or 
otherwise incorrect.", "code": 400

  In the nova-api log:
  2014-06-19 00:48:39.483 17462 ERROR nova.api.openstack.wsgi 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] Exception handling resource: 'NoneType' 
object is not iterable
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 997, in 
_process_stack
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1078, in 
dispatch
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py",
 line 438, in index
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi for group in 
groups]
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi TypeError: 
'NoneType' object is not iterable
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi 
  2014-06-19 00:48:39.485 17462 INFO nova.osapi_compute.wsgi.server 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] 10.147.22.73,54.225.248.128 "GET 
/v2/2b60ae3ba5bd41d893674d0e57ae4390/servers/c7e5f472-57fb-4a95-95cf-45c6506db0cd/os-security-groups
 HTTP/1.1" status: 400 len: 362 time: 0.0710380

  Steps to reproduce:
  1) Create new instance
  2) Immediately check the security group through nova
  (/v2/$tenant/servers/$server_id/os-security-groups)
  3) Wait several seconds and try again (Works if given a small delay between 
instance creation and checking sec group)

  Notes: This error did not come up in earlier versions of havana, but
  started after a recent upgrade
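
  A sketch of the likely shape of a fix, guarding against the security
  group lookup returning None right after instance creation (names
  follow the traceback above, but the code is illustrative only):

    groups = self.security_group_api.get_instance_security_groups(
        context, instance['uuid'])
    return {'security_groups':
            [self._format_security_group(req, group)
             for group in (groups or [])]}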

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326183] Re: detach interface fails as instance info cache is corrupted

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326183

Title:
  detach interface fails as instance info cache is corrupted

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  
  Performing attach/detach interface on a VM sometimes results in an
  interface that can't be detached from the VM.
  I triaged it to corrupted instance info cache caused by a non-atomic
  update of that information.
  Details on how to reproduce the bug are as follows. Since this is due
  to a race condition, the test can take quite a bit of time before it
  hits the bug.

  Steps to reproduce:

  1) Devstack with trunk with the following local.conf:
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta
  enable_service q-metering
  RECLONE=yes
  # and other options as set in the trunk's local

  2) Create few networks:
  $> neutron net-create testnet1
  $> neutron net-create testnet2
  $> neutron net-create testnet3
  $> neutron subnet-create testnet1 192.168.1.0/24
  $> neutron subnet-create testnet2 192.168.2.0/24
  $> neutron subnet-create testnet3 192.168.3.0/24

  2) Create a testvm in testnet1:
  $> nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic 
net-id=`neutron net-list | grep testnet1 | cut -f 2 -d ' '` testvm

  3) Run the following shell script to attach and detach interfaces for this vm 
in the remaining two networks in a loop until we run into the issue at hand:
  
  #! /bin/bash
  c=1
  netid1=`neutron net-list | grep testnet2 | cut -f 2 -d ' '`
  netid2=`neutron net-list | grep testnet3 | cut -f 2 -d ' '`
  while [ $c -gt 0 ]
  do
 echo "Round: " $c
 echo -n "Attaching two interfaces... "
 nova interface-attach --net-id $netid1 testvm
 nova interface-attach --net-id $netid2 testvm
 echo "Done"
 echo "Sleeping until both those show up in interfaces"
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 if [ $count -eq 7 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 echo "Waited for " $waittime " seconds"
 echo "Detaching both... "
 nova interface-list testvm | grep $netid1 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 nova interface-list testvm | grep $netid2 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 echo "Done; check interfaces are gone in a minute."
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 echo "line count: " $count
 if [ $count -eq 5 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 if [ $waittime -ge 60 ]
 then
echo "bad case"
exit 1
 fi
 echo "Interfaces are gone"
 ((  c-- ))
  done
  -

  Eventually the test will stop with a failure ("bad case") and the
  interface remaining either from testnet2 or testnet3 can not be
  detached at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274183] Re: "nova host-servers-migrate <host>" Migrate instances on free compute in error status

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274183

Title:
  "nova host-servers-migrate " Migrate instances on free compute
  in error status

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  {"build_id": "2014-01-29_13-30-05", "ostf_sha":
  "338ddf840c229918d1df8c6597588b853d02de4c", "build_number": "67",
  "nailgun_sha": "3463912a986465133058a24c615c3548cef53cac",
  "fuelmain_sha": "7d8768f2ac7e1e54d16c135e4ebd64722e49179e",
  "astute_sha": "200f68381327d955428c371582c03a97bfec3154", "release":
  "4.1", "fuellib_sha": "73e74f0c449ad86b3da922c8bd5eb333eac94489"}

  
  "nova host-servers-migrate " Migrate instances on free compute in error 
status

  Steps to reproduce:
  1. Get ISO#67
  2. Cluster configuration: Ubuntu, simple, 1 controller, (2+ceph)
  computes, Ceph for images, Neutron GRE.
  3. Create an instance and check which compute it is on.
  4. On the controller execute the command "nova --debug
  host-servers-migrate <host>", where <host> should be the name of the
  compute with the created instance.

  
  Expected result:
  The instance should migrate and be in Active status.

  Actual result:
  After the migration the instance ends up in Error state.

  The compute node to which this instance should be migrated contains
  the following:

  <180>Jan 29 16:12:16 node-3 nova-nova.compute.manager AUDIT: Migrating
  <182>Jan 29 16:12:18 node-3 nova-nova.virt.libvirt.driver INFO: Creating image
  <179>Jan 29 16:12:19 node-3 nova-nova.virt.libvirt.imagebackend ERROR: error 
opening rbd image 
/var/lib/nova/instances/_base/4e3fb6726c8ee72072724a16179d5e400c712864
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 467, in __init__
  read_only=read_only)
File "/usr/lib/python2.7/dist-packages/rbd.py", line 351, in __init__
  raise make_ex(ret, 'error opening image %s at snapshot %s' % (name, 
snapshot))
  ImageNotFound: error opening image 
/var/lib/nova/instances/_base/4e3fb6726c8ee72072724a16179d5e400c712864 at 
snapshot None
  <179>Jan 29 16:12:19 node-3 nova-nova.compute.manager ERROR: Setting instance 
vm_state to ERROR
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3153, 
in finish_resize
  disk_info, image)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3121, 
in _finish_resize
  block_device_info, power_on)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275267] Re: GuestFS fails to mount image for data injection

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275267

Title:
  GuestFS fails to mount image for data injection

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  A GuestFS error is causing injection to fail. This results in a
  warning for metadata injection but in a spawn error for key injection.

  This is logged with debug level:

  Exception AttributeError: "GuestFS instance has no attribute '_o'" in > ignored
  febootstrap-supermin-helper: ext2: parent directory not found: /lib: File not 
found by ext2_lookup

  
  And causes this error: http://paste.openstack.org/show/62293/

  Full logs available here:  http://logs.openstack.org/58/63558/8/check
  /check-tempest-dsvm-neutron-pg/108e4ca/logs

  Interestingly, it seems guestfs was not actually used when the
  relevant patches went through the gate checks:
  https://review.openstack.org/#/c/70237/
  https://review.openstack.org/#/c/70354/

  This was expected for patch #70354 but sounds strange for patch #70237.

  
  Finally, the traceback seems different from that of bug 1221985.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340552] Re: Volume detach error when use NFS as the cinder backend

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340552

Title:
  Volume detach error when use NFS as the cinder backend

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Tested Environment
  --
  OS: Ubuntu 14.04 LST
  Cinder NFS driver:
  volume_driver=cinder.volume.drivers.nfs.NfsDriver

  Error description
  --
  I used NFS as the cinder storage backend and successfully attached
  multiple volumes to nova instances.
  However, when I tried to detach one of them, I found the following
  error in nova-compute.log.

  ---Error log--
  2014-07-07 17:48:46.175 3195 ERROR nova.virt.libvirt.volume 
[req-a07d077f-2ad1-4558-91fa-ab1895ca4914 c8ac60023a794aed8cec8552110d5f12 
fdd538eb5dbf48a98d08e6d64def73d7] Couldn't unmount the NFS share 
172.23.58.245:/NFSThinLun2
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Traceback (most 
recent call last):
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 675, 
in disconnect_volume
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume 
utils.execute('umount', mount_path, run_as_root=True)
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume return 
processutils.execute(*cmd, **kwargs)
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", 
line 193, in execute
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume cmd=' 
'.join(cmd))
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume 
ProcessExecutionError: Unexpected error while running command.
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf umount 
/var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Exit code: 16
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stdout: ''
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stderr: 
'umount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is 
busy\numount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is 
busy\n'
  2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume

  ---End of the Log--

  
  For NFS volumes, every time you detach a volume, nova tries to umount the 
device path.
  In /nova/virt/libvirt/volume.py:
  Line 632: class LibvirtNFSVolumeDriver(LibvirtBaseVolumeDriver):
  Line 653: def disconnect_volume(self, connection_info, disk_dev):
  Line 661: utils.execute('umount', mount_path, run_as_root=True)

  This works when the device path is not busy.
  If the device path is busy (or in use), it should log a message and
  continue.
  The problem is that, instead of logging a message, it raises an
  exception, and that causes the above error.

  I think the reason is that the ‘if’ statement at Line 663 fails to
  catch the device-busy message in exc.message. It looks for ‘target is
  busy’ in exc.message, but the umount error output says ‘device is
  busy’.
  Therefore, the current code skips the ‘if’ branch, runs the ‘else’ and
  raises the exception.

  How to reproduce
  --
  (1)   Prepare an NFS share storage and set it as the storage backend
  of your cinder
  (refer 
http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/NFS-driver.html)
  In cinder.conf
  volume_driver=cinder.volume.drivers.nfs.NfsDriver
  nfs_shares_config=
  (2)   Create 2 empty volumes from cinder
  (3)   Create a nova instance and attach above 2 volumes
  (4)   Then, try to detach one of them.
  You will get the error in nova-compute.log “Couldn't unmount the NFS share 
”

  Proposed Fix
  --
  I’m not sure whether any other OS outputs ‘target is busy’ in its
  umount error message.
  Therefore, the first fix that comes to my mind is to change the ‘if’
  statement:
  Before fix:
  if 'target is busy' in exc.message:
  After fix:
  if 'device is busy' in exc.message:
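
  A minimal sketch of a variant that tolerates both wordings (purely
  illustrative; the surrounding code is
  LibvirtNFSVolumeDriver.disconnect_volume shown above):

    try:
        utils.execute('umount', mount_path, run_as_root=True)
    except processutils.ProcessExecutionError as exc:
        if ('target is busy' in exc.message
                or 'device is busy' in exc.message):
            # Another volume on the same share still uses the mount.
            LOG.debug("The NFS share at %s is still in use.", mount_path)
        else:
            raise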

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324400] Re: Invalid EC2 instance type for a volume backed instance

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324400

Title:
  Invalid EC2 instance  type for a volume backed instance

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Since nova.virt.driver:LibvirtDriver.get_guest_config prepends the
  instance root_device_name with a 'dev' prefix, root_device_name may
  not coincide with device_name in the block device mapping structure.
  In this case the describe-instances operation reports the wrong
  instance type: instance-store instead of ebs.

  Environment: DevStack

  Steps to reproduce:
  1. Create a volume-backed instance, passing vda as the root device name:
  $ cinder create --image-id xxx 1
  $ nova boot --flavor m1.nano --block-device-mapping vda=yyy:::1 inst
  Note: I used a cirros ami image.

  2. Describe instances:
  $ euca-describe-instances
  Look at the instance type. It should be ebs, but it is instance-store
  in the output.

  Note: if euca-describe-instances crashes on an ebs instance, apply
  https://review.openstack.org/#/c/95580/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  Fix Committed
Status in Trove client binding:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.
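
  A two-line illustration of the change (standard unittest assertions):

    # Before: the failure message talks about equality with None.
    self.assertEqual(None, result)
    # After: the intent is explicit and the failure message is clearer.
    self.assertIsNone(result)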

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301690] Re: should rollback quota if 2 confirm_resize operations are executed concurrently

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301690

Title:
  should rollback quota if 2 confirm_resize operations are executed
  concurrently

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Quota is reserved when confirm_resize is executed; if we find the
  migration is already in 'confirmed' status, we return directly without
  rolling back the quota.

  
  def confirm_resize(self, context, instance, reservations, migration):

      quotas = quotas_obj.Quotas.from_reservations(context,
                                                   reservations,
                                                   instance=instance)

      @utils.synchronized(instance['uuid'])
      def do_confirm_resize(context, instance, migration_id):
          ..
          try:
              # TODO(russellb) Why are we sending the migration object just
              # to turn around and look it up from the db again?
              migration = migration_obj.Migration.get_by_id(
                  context.elevated(), migration_id)
          except exception.MigrationNotFound:
              LOG.error(_("Migration %s is not found during confirmation") %
                        migration_id, context=context, instance=instance)
              return

          if migration.status == 'confirmed':
              LOG.info(_("Migration %s is already confirmed") %
                       migration_id, context=context, instance=instance)
              return
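
  A sketch of the missing rollback (illustrative; the exact rollback
  call on the Quotas object may differ):

          if migration.status == 'confirmed':
              LOG.info(_("Migration %s is already confirmed") %
                       migration_id, context=context, instance=instance)
              quotas.rollback()
              return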

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334965] Re: the headroom infomation is wrong in the method of limit_check

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334965

Title:
  the headroom infomation  is wrong in the method of limit_check

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If multiple resources are beyond their quotas, the headroom
  information about the resources beyond the quotas is incomplete
  (quota.py).
  For example:
  If the lengths of both the path and the content of the injected files
  exceed the limits, the above situation appears.
  2014-06-27 12:14:04.241 4547 INFO nova.quota 
[req-a94185e5-779f-4455-ba45-230eb70fb774 None] overs: 
['injected_file_content_bytes', 'injected_file_path_bytes']
  2014-06-27 12:14:04.241 4547 INFO nova.quota 
[req-a94185e5-779f-4455-ba45-230eb70fb774 None] headroom: 
{'injected_file_path_bytes': 255}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354010] Re: Vmware Driver, need clean this error info relate to opaqueNetwork

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354010

Title:
  Vmware Driver, need clean this error info relate to opaqueNetwork

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  With vCenter versions below 5.5 we always get this error info in
  nova.log.
  As opaqueNetwork was added in vSphere API 5.5, using the API against a
  lower version hits this error.

  Log from nova.log, VMwareVCDriver in use:
  -------------------------------------------------------------------
  2014-08-07 06:34:12.841 25266 ERROR suds.client
  [req-2b1836fb-a3fd-442f-b999-81ff42692a40 ]
  [The request XML was mangled by the mailing-list archive. From the
  surviving fragments it is a suds property-collector retrieval against
  propertyCollector, asking for the HostSystem property
  config.network.opaqueNetwork on object host-410, with skip=false and
  maxObjects=100.]
  -------------------------------------------------------------------

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304790] Re: flavor is not limit by osapi_max_limit when pagenation

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304790

Title:
  flavor is not limit by osapi_max_limit when pagenation

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When paginating flavors, the osapi_max_limit configuration option is
  not applied, so this must be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322926] Re: Hyper-V driver volumes are attached incorrectly when multiple iSCSI servers are present

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322926

Title:
  Hyper-V driver volumes are attached incorrectly when multiple iSCSI
  servers are present

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Hyper-V can change the order of the mounted drives when rebooting a
  host and thus passthrough disks can be assigned to the wrong instance
  resulting in a critical scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214406] Re: nova.virt.block_device.get_swap should have better safeguards

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1214406

Title:
  nova.virt.block_device.get_swap should have better safeguards

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  As per markmc's comments on https://review.openstack.org/#/c/39086/24
  - the function is used to get the swap out of the list context, as the
  block_device_info data structure (used internally by the virt drivers)
  needs the 'swap' field to be either a single dict or None. However, if
  passed something that is not an obvious list of swap-looking things,
  the function will happily return the passed list.

  Safer and more correct behaviour would be to return None (or raise).
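
  A minimal sketch of the safeguard being asked for (illustrative, not
  the final nova implementation; assumes swap devices are dicts carrying
  a 'swap_size' key):

    def get_swap(bdms):
        """Return the single swap device dict, or None."""
        if not bdms:
            return None
        swap = bdms[0]
        # Only hand it back if it actually looks like a swap device.
        if isinstance(swap, dict) and swap.get('swap_size'):
            return swap
        return None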

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1214406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353131] Re: Failed to commit reservations in gate

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353131

Title:
  Failed to commit reservations  in gate

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  From: http://logs.openstack.org/31/105031/14/gate/gate-tempest-dsvm-
  full/c05b927/console.html

  2014-08-05 02:54:01.131 | Log File Has Errors: n-cond
  2014-08-05 02:54:01.132 | *** Not Whitelisted *** 2014-08-05 02:25:47.799 
ERROR nova.quota [req-19feeaa2-e1d4-419b-a7bb-a19bb7000b1d 
AggregatesAdminTestJSON-2075387658 AggregatesAdminTestJSON-270189725] Failed to 
commit reservations [u'ceaa6ce7-db8d-4ba6-871a-b29c59f4a338', 
u'10d7550d-d791-44dd-8396-2fa6eaea7c20', 
u'e7a322e2-948d-45f7-892f-7ea4d9aa0e7c']

  There are a number of errors happening in that file that aren't
  whitelisted.

  This one *seems* to be a possible cause of the others, as there is
  then a number of InstanceNotFound errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350064] Re: Deadlock in quota reservations in security groups tests on old side of grenade (icehouse)

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350064

Title:
  Deadlock in quota reservations in security groups tests on old side of
  grenade (icehouse)

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/60/109660/7/check/check-grenade-dsvm-
  partial-
  ncpu/deb4b82/logs/old/screen-n-api.txt.gz#_2014-07-28_19_59_01_933

  2014-07-28 19:59:01.933 ERROR nova.api.openstack 
[req-6907415f-1dd8-4bb3-9e63-d007c5afc6a7 SecurityGroupsTestAdminXML-687258929 
SecurityGroupsTestAdminXML-417284851] Caught error: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'SELECT 
quota_usages.created_at AS quota_usages_created_at, quota_usages.updated_at AS 
quota_usages_updated_at, quota_usages.deleted_at AS quota_usages_deleted_at, 
quota_usages.deleted AS quota_usages_deleted, quota_usages.id AS 
quota_usages_id, quota_usages.project_id AS quota_usages_project_id, 
quota_usages.user_id AS quota_usages_user_id, quota_usages.resource AS 
quota_usages_resource, quota_usages.in_use AS quota_usages_in_use, 
quota_usages.reserved AS quota_usages_reserved, quota_usages.until_refresh AS 
quota_usages_until_refresh \nFROM quota_usages \nWHERE quota_usages.deleted = 
%s AND quota_usages.project_id = %s FOR UPDATE' (0, 
'ed64e6649d0840d5b9bb61189e50a675')
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/opt/stack/old/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 663, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 917, in __call__
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack content_type, body, 
accept)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 983, in _process_stack
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-07-28 19:59:01.933 2641 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 1070, in dispatch
  2014-07-28 19:59:01.

[Yahoo-eng-team] [Bug 1361517] Re: Nova prompt wrong message when boot from a error status volume

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361517

Title:
  Nova prompt wrong message when boot from a error status volume

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  1. Create a cinder volume from an existing image:

  cinder create 2 --display-name hbvolume-newone --image-id
  9769cbfe-2d1a-4f60-9806-16810c666d7f

  2. Set the created volume to error status:
  cinder reset-state --state error 76f5e521-d45f-4675-851e-48f8e3a3f039

  3. Boot a vm from the created volume:
  nova boot --flavor 2 --block-device-mapping
  vda=76f5e521-d45f-4675-851e-48f8e3a3f039:::0 device-mapping-test2
  --nic net-id=231eb787-e5bf-4e65-a822-25d37a84eab8

  # cinder list
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
  | 21c50923-7341-49ba-af48-f4a7e2099bfd | available |       None      |  1   |     None    |  false   |             |
  | 76f5e521-d45f-4675-851e-48f8e3a3f039 |   error   |    hbvolume-2   |  2   |     None    |   true   |             |
  | 92de3c7f-9c56-447a-b06a-a5c3bdfca683 | available | hbvolume-newone |  2   |     None    |   true   |             |
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

  #RESULTS
  It reports "failed to get the volume":
  ERROR (BadRequest): Block Device Mapping is Invalid: failed to get
  volume 76f5e521-d45f-4675-851e-48f8e3a3f039. (HTTP 400)

  #Expected Message:
  It should report that the status of the volume is not valid for
  booting a VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356157] Re: make nova floating-ip-delete atomic with neutron

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356157

Title:
  make nova floating-ip-delete atomic with neutron

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The infra guys were noticing an issue where they were leaking floating ip
  addresses. One of the reasons this would occur for them is they called
  nova floating-ip-delete which first disassocates the floating-ip in neutron
  and then deletes it. Because it makes two calls to neutron if the first one
  succeeds and the second fails it results in the instance no longer being
  associated with the floatingip. They have retry logic but they base it on
  the instance and when they go to retry cleaning up the instance the floatingip
  is no longer on the instance so they never delete it.  

  This patch fixes this issue by directly calling delete_floating_ip instead
  of releasing first if using neutron as neutron allows this. I looked into 
  doing the same thing for nova-network but the code is written to prevent this.
  This allows the operation to be atomic. I know this is sorta hackish that
  we're doing this in the api layer but we do this in a few other places
  too fwiw.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


  1   2   3   >