[Yahoo-eng-team] [Bug 1380901] [NEW] Refine HTTP error code for os-interface

2014-10-14 Thread Qin Zhao
Public bug reported:

Related bug --  https://bugs.launchpad.net/nova/+bug/1363901

When attaching an interface to an instance, Neutron may return several
types of errors to Nova (e.g., PortNotFound, PortInUse, etc.). Nova needs
to translate those errors into the correct HTTP error codes, so that the
end user is able to know the exact failure reason.

According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html, my
proposed mapping between Nova errors and HTTP error codes is:

PortNotFound         404
FixedIpAlreadyInUse  409
PortInUse            409
NetworkDuplicated    400
NetworkAmbiguous     400
NetworkNotFound      404
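
A minimal sketch of what such a translation layer could look like,
assuming the listed classes exist in nova.exception and raising webob's
HTTP exceptions (illustrative only, not the actual proposed patch):

    import webob.exc as exc

    from nova import exception

    # Assumed exception classes; the proposed Neutron-error -> HTTP mapping.
    _ERROR_TO_HTTP = {
        exception.PortNotFound: exc.HTTPNotFound,         # 404
        exception.FixedIpAlreadyInUse: exc.HTTPConflict,  # 409
        exception.PortInUse: exc.HTTPConflict,            # 409
        exception.NetworkDuplicated: exc.HTTPBadRequest,  # 400
        exception.NetworkAmbiguous: exc.HTTPBadRequest,   # 400
        exception.NetworkNotFound: exc.HTTPNotFound,      # 404
    }

    def translate_error(error):
        # Anything unrecognized keeps surfacing as a 500.
        http_cls = _ERROR_TO_HTTP.get(type(error), exc.HTTPInternalServerError)
        return http_cls(explanation=error.format_message())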

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380901

Title:
  Refine HTTP error code for os-interface

Status in OpenStack Compute (Nova):
  New

Bug description:
  Related bug --  https://bugs.launchpad.net/nova/+bug/1363901

  When attaching an interface to an instance, Neutron may return several
  types of errors to Nova (e.g., PortNotFound, PortInUse, etc.). Nova
  needs to translate those errors into the correct HTTP error codes, so
  that the end user is able to know the exact failure reason.

  According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html,
  my proposed mapping between Nova errors and HTTP error codes is:

  PortNotFound         404
  FixedIpAlreadyInUse  409
  PortInUse            409
  NetworkDuplicated    400
  NetworkAmbiguous     400
  NetworkNotFound      404

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380547] Re: Image is not added when filesystem_store_datadirs specified in glance configuration

2014-10-14 Thread Abhishek Kekane
The issue of moving the deprecated filesystem options from the DEFAULT
section to the glance_store section in glance-api.conf is addressed by
this bug: https://bugs.launchpad.net/bugs/1380689

Marking this bug as invalid.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1380547

Title:
  Image is not added when filesystem_store_datadirs specified in glance
  configuration

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  If the user specifies the 'filesystem_store_datadirs' option in the
  glance-api configuration, they are not able to add an image to the
  specified location, as the following response is returned:

  <html>
   <head>
    <title>410 Gone</title>
   </head>
   <body>
    <h1>410 Gone</h1>
    Error in store configuration. Adding images to store is disabled.<br /><br />
   </body>
  </html> (HTTP N/A)

  Steps to reproduce:

  1. Edit the glance-api.conf file to add the 'filesystem_store_datadirs' parameter.
     filesystem_store_datadirs = /opt/stack/data/glance/images/:1

     Note: Disable the 'filesystem_store_datadir' option

  2. Restart the glance-api service

  3. Create an image
     glance image-create --file /etc/passwd --name passwd --container-format bare --disk-format raw

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1380547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372792] Re: Inconsistent timestamp formats in ceilometer metering messages

2014-10-14 Thread Ann Kamyshnikova
This problem seems to appear because of the usage of Elasticsearch, as
there is no such problem running devstack with Neutron and Ceilometer
with both publishers (rpc, udp). I'm not sure where this should be
fixed, as in the basic case everything works fine.

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372792

Title:
  Inconsistent timestamp formats in ceilometer metering messages

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  The messages generated by neutron-metering-agent contain timestamps in
  a different format than the other messages received through UDP from
  ceilometer-agent-notification. This creates unnecessary troubles for
  whoever is trying to decode the messages and do something useful with
  them.

  In particular, up to now I have found this in the timestamp field of
  the bandwidth message.

  They contain UTC dates (I hope), but there is no Z at the end, and
  they contain a space instead of a T between date and time. In short,
  they are not in ISO 8601 like the timestamps in the other messages. I
  found out about them because Elasticsearch tries to parse them and
  fails, throwing away the message.
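
  A hedged illustration of the mismatch and one way a consumer can
  normalize it before indexing (the sample values are made up, not taken
  from this report):

      from datetime import datetime

      metering_ts = "2014-09-23 14:38:00"   # neutron-metering-agent style
      iso8601_ts = "2014-09-23T14:38:00Z"   # style of the other messages

      # Parse the space-separated form and re-emit it as ISO 8601 UTC.
      parsed = datetime.strptime(metering_ts, "%Y-%m-%d %H:%M:%S")
      assert parsed.strftime("%Y-%m-%dT%H:%M:%SZ") == iso8601_ts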

  This bug was filed against Ceilometer, but I have been redirected here:
  https://bugs.launchpad.net/ceilometer/+bug/1370607

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380547] Re: Image is not added when filesystem_store_datadirs specified in glance configuration

2014-10-14 Thread Zhi Yan Liu
As designed, glance (indeed, all services) should be able to load
options from the deprecated place instead of failing. This report
involves a backward-compatibility issue in glance and/or glance_store,
IMO. It needs more investigation.

** Changed in: glance
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1380547

Title:
  Image is not added when filesystem_store_datadirs specified in glance
  configuration

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If the user specifies the 'filesystem_store_datadirs' option in the
  glance-api configuration, they are not able to add an image to the
  specified location, as the following response is returned:

  <html>
   <head>
    <title>410 Gone</title>
   </head>
   <body>
    <h1>410 Gone</h1>
    Error in store configuration. Adding images to store is disabled.<br /><br />
   </body>
  </html> (HTTP N/A)

  Steps to reproduce:

  1. Edit the glance-api.conf file to add the 'filesystem_store_datadirs' parameter.
     filesystem_store_datadirs = /opt/stack/data/glance/images/:1

     Note: Disable the 'filesystem_store_datadir' option

  2. Restart the glance-api service

  3. Create an image
     glance image-create --file /etc/passwd --name passwd --container-format bare --disk-format raw

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1380547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380547] Re: Image is not added when filesystem_store_datadirs specified in glance configuration

2014-10-14 Thread Zhi Yan Liu
With abhishekk's clarification, I now see what really happened behind
this report. Agreed to mark this as invalid. Thanks for the
clarification, abhishekk.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1380547

Title:
  Image is not added when filesystem_store_datadirs specified in glance
  configuration

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  If user specifies 'filesystem_store_datadirs' option in the glance api
  configuration, he is not able to add the image to the specified
  location as following response is returned to the user.

  html
   head
title410 Gone/title
   /head
   body
h1410 Gone/h1
Error in store configuration. Adding images to store is disabled.br /br 
/

   /body
  /html (HTTP N/A)

  Steps to reproduce:

  1. Edit glance-api.conf file to add 'filesystem_store_datadirs' parameter.
 filesystem_store_datadirs = /opt/stack/data/glance/images/:1
 
 Note: Disable 'filesystem_store_datadir' option

  2. restart glance-api service

  3. create image
 glance image-create --file /etc/passwd --name passwd --container-format 
bare --disk-format raw

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1380547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380934] [NEW] failure rebuilding instance after resize-revert

2014-10-14 Thread haruka tanizawa
Public bug reported:

Short version:
Rebuilding an instance after a resize-revert makes the VM become ERROR
because of VirtualInterfaceCreateException.

Long version:

Reproduce:
master(0bbf6301b1886f501713df8c13a5f8b92c9b0c2c)

1. create an instance
2. do resize (success; active on the destination host, status is VERIFY_RESIZE)
3. do resize-revert (success; active on the source host, status is ACTIVE, port is DOWN)
4. do rebuild

After the timeout (the value of vif_plugging_timeout), the VM becomes
ERROR because of VirtualInterfaceCreateException.

If vif_plugging_is_fatal is set to True, the VM is removed.
Otherwise, if vif_plugging_is_fatal is set to False, the VM becomes ACTIVE.

Cause:
When I do a resize-revert, the instance comes back to the source host, but
the host information is still set to the destination node.
(The host information is the 'host' column of the ml2_port_bindings table
in the Neutron DB.)
This leads to a mismatch between the actual Neutron port and the DB
information for the port, so the port becomes DOWN.
Rebuilding in this situation, the port does not become ACTIVE, which
results in the timeout.

Solution:
nova/compute/manager.py

    def finish_revert_resize(self, context, instance, reservations, migration):
        instance.system_metadata = sys_meta
        instance.memory_mb = instance_type['memory_mb']
        instance.vcpus = instance_type['vcpus']
        instance.root_gb = instance_type['root_gb']
        instance.ephemeral_gb = instance_type['ephemeral_gb']
        instance.instance_type_id = instance_type['id']
        instance.host = migration['source_compute']
        instance.node = migration['source_node']
        instance.save()

+       migration.dest_compute = migration['source_compute']
+       migration.save()

        self.network_api.setup_networks_on_host(context, instance,
                                                migration['source_compute'])

        block_device_info = self._get_instance_block_device_info(
            context, instance, refresh_conn_info=True)

** Affects: nova
 Importance: Undecided
 Status: New

[Yahoo-eng-team] [Bug 1380948] [NEW] ds=nocloud-net & cloud-init-local.conf breaks system init

2014-10-14 Thread Marc Riemer
Public bug reported:

After upgrading cloud-init to 0.7.5 the init process is broken. The VM
was previously configured via cloud-init.

CMDLINE:

ds=nocloud-net;s=http://init.example.com/

Changing cloud-init-local.conf solved the problem for us:

- start on mounted MOUNTPOINT=/ and mounted MOUNTPOINT=/run
+ start on mounted MOUNTPOINT=/
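
For reference, a hedged sketch of the resulting upstart job; only the
'start on' line comes from this report, the remaining stanzas are assumed
from the stock 0.7.5 packaging and may differ:

    # /etc/init/cloud-init-local.conf (assumed layout, after the workaround)
    description "cloud-init local"
    start on mounted MOUNTPOINT=/
    task
    console output
    exec cloud-init init --local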

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1380948

Title:
  ds=nocloud-net & cloud-init-local.conf breaks system init

Status in Init scripts for use on cloud images:
  New

Bug description:
  After upgrading cloud-init to 0.7.5 the init process is broken. The VM
  was previously configured via cloud-init.

  CMDLINE:

  ds=nocloud-net;s=http://init.example.com/

  Changing cloud-init-local.conf solved the problem for us:

  - start on mounted MOUNTPOINT=/ and mounted MOUNTPOINT=/run
  + start on mounted MOUNTPOINT=/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1380948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380965] [NEW] Floating IPs don't have instance ids in Juno

2014-10-14 Thread Björn Tillenius
Public bug reported:

In Icehouse, when I associate a floating IP with an instance, the Nova
API for listing floating IPs (/os-floating-ips) gives you the instance
ID of the associated instance:

  {"floating_ips": [{"instance_id": "82c2aff3-511b-4e9e-8353-79da86281dfd",
   "ip": "10.1.151.1", "fixed_ip": "10.10.0.4",
   "id": "8113e71b-7194-447a-ad37-98182f7be80a", "pool": "ext_net"}]}

With the latest RC for Juno, the instance_id always seems to be null:

  {"floating_ips": [{"instance_id": null, "ip": "10.96.201.0",
   "fixed_ip": "10.10.0.8", "id": "00ffd9a0-5afe-4221-8913-7e275da7f82a",
   "pool": "ext_net"}]}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380965

Title:
  Floating IPs don't have instance ids in Juno

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Icehouse, when I associate a floating IP with an instance, the Nova
  API for listing floating IPs (/os-floating-ips) gives you the instance
  ID of the associated instance:

    {"floating_ips": [{"instance_id": "82c2aff3-511b-4e9e-8353-79da86281dfd",
     "ip": "10.1.151.1", "fixed_ip": "10.10.0.4",
     "id": "8113e71b-7194-447a-ad37-98182f7be80a", "pool": "ext_net"}]}

  With the latest RC for Juno, the instance_id always seems to be null:

    {"floating_ips": [{"instance_id": null, "ip": "10.96.201.0",
     "fixed_ip": "10.10.0.8", "id": "00ffd9a0-5afe-4221-8913-7e275da7f82a",
     "pool": "ext_net"}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380948] Re: ds=nocloud-net & cloud-init-local.conf breaks system init

2014-10-14 Thread Marc Riemer
** Project changed: cloud-init => cloud-init (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1380948

Title:
  ds=nocloud-net & cloud-init-local.conf breaks system init

Status in “cloud-init” package in Ubuntu:
  New

Bug description:
  After upgrading cloud-init to 0.7.5 the init process is broken. The VM
  was previously configured via cloud-init.

  CMDLINE:

  ds=nocloud-net;s=http://init.example.com/

  Changing cloud-init-local.conf solved the problem for us:

  - start on mounted MOUNTPOINT=/ and mounted MOUNTPOINT=/run
  + start on mounted MOUNTPOINT=/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1380948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378568] Re: new horizon inverted tab style is very confusing and ugly

2014-10-14 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/128247

** Changed in: horizon
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378568

Title:
  new horizon inverted tab style is very confusing and ugly

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  I just noticed the new patch that landed in horizon that changes the
  look/style of its tabs.

  https://review.openstack.org/#/c/115649/

  
  This new UI is confusing and very ugly. It is now very hard to
  distinguish which tab is selected, as the tab is inverted. This is
  counter-intuitive compared to any modern UI.

  When you have 3 tabs and the middle tab is selected, it now looks like
  the two outer tabs are selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380965] Re: Floating IPs don't have instance ids in Juno

2014-10-14 Thread James Page
Confirmed:

ubuntu@james-page-bastion:~/openstack-charm-testing⟫ nova floating-ip-list
+------------+-----------+--------------+---------+
| Ip         | Server Id | Fixed Ip     | Pool    |
+------------+-----------+--------------+---------+
| 10.5.150.1 | -         | 192.168.21.2 | ext_net |
+------------+-----------+--------------+---------+
ubuntu@james-page-bastion:~/openstack-charm-testing⟫ nova list
+--------------------------------------+------+--------+------------+-------------+----------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                         |
+--------------------------------------+------+--------+------------+-------------+----------------------------------+
| 47d46f8f-cdcc-4aa8-b6ac-b8033f6d4968 | test | ACTIVE | -          | Running     | private=192.168.21.2, 10.5.150.1 |
+--------------------------------------+------+--------+------------+-------------+----------------------------------+


** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380965

Title:
  Floating IPs don't have instance ids in Juno

Status in OpenStack Compute (Nova):
  New
Status in “nova” package in Ubuntu:
  New

Bug description:
  In Icehouse, when I associate a floating IP with an instance, the Nova
  API for listing floating IPs (/os-floating-ips) gives you the instance
  ID of the associated instance:

    {"floating_ips": [{"instance_id": "82c2aff3-511b-4e9e-8353-79da86281dfd",
     "ip": "10.1.151.1", "fixed_ip": "10.10.0.4",
     "id": "8113e71b-7194-447a-ad37-98182f7be80a", "pool": "ext_net"}]}

  With the latest RC for Juno, the instance_id always seems to be null:

    {"floating_ips": [{"instance_id": null, "ip": "10.96.201.0",
     "fixed_ip": "10.10.0.8", "id": "00ffd9a0-5afe-4221-8913-7e275da7f82a",
     "pool": "ext_net"}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381061] [NEW] VMware: ESX hosts must not be externally routable

2014-10-14 Thread Matthew Booth
Public bug reported:

Change I70fd7d3ee06040d6ce49d93a4becd9cbfdd71f78 removed passwords from
VNC hosts. This change is fine because we proxy the VNC connection and
do access control at the proxy, but it assumes that ESX hosts are not
externally routable.

In a non-OpenStack VMware deployment, accessing a VM's console requires
the end user to have a direct connection to an ESX host. This leads me
to believe that many VMware administrators may leave ESX hosts
externally routable if not specifically directed otherwise.

The above change makes a design decision which requires ESX hosts not to
be externally routable. There may also be other reasons. We need to
ensure that this is very clearly documented. This may already be
documented, btw, but I don't know how our documentation is organised,
and would prefer that somebody more familiar with it assures themselves
that this has been given appropriate weight.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381061

Title:
  VMware: ESX hosts must not be externally routable

Status in OpenStack Compute (Nova):
  New

Bug description:
  Change I70fd7d3ee06040d6ce49d93a4becd9cbfdd71f78 removed passwords
  from VNC hosts. This change is fine because we proxy the VNC
  connection and do access control at the proxy, but it assumes that ESX
  hosts are not externally routable.

  In a non-OpenStack VMware deployment, accessing a VM's console
  requires the end user to have a direct connection to an ESX host. This
  leads me to believe that many VMware administrators may leave ESX
  hosts externally routable if not specifically directed otherwise.

  The above change makes a design decision which requires ESX hosts not
  to be externally routable. There may also be other reasons. We need to
  ensure that this is very clearly documented. This may already be
  documented, btw, but I don't know how our documentation is organised,
  and would prefer that somebody more familiar with it assures
  themselves that this has been given appropriate weight.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381071] [NEW] Unit test not available for TunnelRpcCallbackMixin

2014-10-14 Thread Romil Gupta
Public bug reported:

Currently, there is no unit test available for the
TunnelRpcCallbackMixin.tunnel_sync() method. This needs to be
addressed.

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Description changed:

  Currently , there is no unit test available for
  TunnelRpcCallbackMixin-- tunnel_sync () method. It needs to be
- addressed..
+ addressed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381071

Title:
  Unit test not available for TunnelRpcCallbackMixin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, there is no unit test available for the
  TunnelRpcCallbackMixin.tunnel_sync() method. This needs to be
  addressed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381073] [NEW] KeyError: 'available' in upstream unit tests

2014-10-14 Thread Bradley Jones
Public bug reported:

Appears in floating_ips/tests

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381073

Title:
  KeyError: 'available' in upstream unit tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Appears in floating_ips/tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381094] [NEW] NSX plugin request retry should delay in case of [Errno 104] Connection reset by peer

2014-10-14 Thread Han Zhou
Public bug reported:

When sending request to NSX controller, the TCP connection is reset by
remote (in our case, by Load-Balancer), which is a transient failure
(could be caused by LB ratelimit). The return code is [Errno 104]
Connection reset by peer.

In this case, retry should work. However, error 104 is returned
immediately after sending the request, and the retry logic in NSX plugin
doesn't delay at all, so continuous retries in a row failed.
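
A minimal sketch of a retry loop that sleeps between attempts
(illustrative only; the names here are hypothetical, not the NSX
plugin's actual code):

    import errno
    import socket
    import time

    RETRIES = 3
    BASE_DELAY = 0.5  # seconds

    def request_with_backoff(issue_request):
        """Retry a transiently reset request with a growing delay."""
        for attempt in range(RETRIES):
            try:
                return issue_request()
            except socket.error as e:
                # Only retry the transient reset; re-raise anything else.
                if e.errno != errno.ECONNRESET or attempt == RETRIES - 1:
                    raise
                time.sleep(BASE_DELAY * (2 ** attempt))  # 0.5s, 1s, ...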

From the log below we can see that all 3 requests were performed and
failed within 1 ms.

2014-10-12 11:07:18,163 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Acquired connection https://os-nvp.vip.ppp01.corp.com:443. 14 connection(s) available.
2014-10-12 11:07:18,163 43793136 DEBUG [neutron.plugins.nicira.api_client.request] [13168] Issuing - request GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus
2014-10-12 11:07:18,163 43793136 DEBUG [neutron.plugins.nicira.api_client.request] Setting X-Nvp-Wait-For-Config-Generation request header: '377462645'
2014-10-12 11:07:18,164 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Exception issuing request: [Errno 104] Connection reset by peer
2014-10-12 11:07:18,164 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Failed request 'GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus': '[Errno 104] Connection reset by peer' (0.00 seconds)
2014-10-12 11:07:18,164 43793136 WARNING [neutron.plugins.nicira.api_client.client] [13168] Connection returned in bad state, reconnecting to https://os-nvp.vip.ppp01.corp.com:443
2014-10-12 11:07:18,164 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Released connection https://os-nvp.vip.ppp01.corp.com:443. 15 connection(s) available.
2014-10-12 11:07:18,165 43793136 INFO [neutron.plugins.nicira.api_client.request_eventlet] [13168] Error while handling request: [Errno 104] Connection reset by peer
2014-10-12 11:07:18,165 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Acquired connection https://os-nvp.vip.ppp01.corp.com:443. 14 connection(s) available.
2014-10-12 11:07:18,165 43793136 DEBUG [neutron.plugins.nicira.api_client.request] [13168] Issuing - request GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus
2014-10-12 11:07:18,165 43793136 DEBUG [neutron.plugins.nicira.api_client.request] Setting X-Nvp-Wait-For-Config-Generation request header: '377462645'
2014-10-12 11:07:18,166 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Exception issuing request: [Errno 104] Connection reset by peer
2014-10-12 11:07:18,166 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Failed request 'GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus': '[Errno 104] Connection reset by peer' (0.00 seconds)
2014-10-12 11:07:18,166 43793136 WARNING [neutron.plugins.nicira.api_client.client] [13168] Connection returned in bad state, reconnecting to https://os-nvp.vip.ppp01.corp.com:443
2014-10-12 11:07:18,166 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Released connection https://os-nvp.vip.ppp01.corp.com:443. 15 connection(s) available.
2014-10-12 11:07:18,166 43793136 INFO [neutron.plugins.nicira.api_client.request_eventlet] [13168] Error while handling request: [Errno 104] Connection reset by peer
2014-10-12 11:07:18,167 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Acquired connection https://os-nvp.vip.ppp01.corp.com:443. 14 connection(s) available.
2014-10-12 11:07:18,167 43793136 DEBUG [neutron.plugins.nicira.api_client.request] [13168] Issuing - request GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus
2014-10-12 11:07:18,167 43793136 DEBUG [neutron.plugins.nicira.api_client.request] Setting X-Nvp-Wait-For-Config-Generation request header: '377462645'
2014-10-12 11:07:18,167 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Exception issuing request: [Errno 104] Connection reset by peer
2014-10-12 11:07:18,167 43793136 WARNING [neutron.plugins.nicira.api_client.request] [13168] Failed request 'GET https://os-nvp.vip.ppp01.corp.com:443//ws.v1/lswitch/7d13735a-ca65-43b3-a3fb-ef8b1ca0f552?relations=LogicalSwitchStatus': '[Errno 104] Connection reset by peer' (0.00 seconds)
2014-10-12 11:07:18,168 43793136 WARNING [neutron.plugins.nicira.api_client.client] [13168] Connection returned in bad state, reconnecting to https://os-nvp.vip.ppp01.corp.com:443
2014-10-12 11:07:18,168 43793136 DEBUG [neutron.plugins.nicira.api_client.client] [13168] Released

[Yahoo-eng-team] [Bug 1336855] Re: [SRU] non-interactive grub updates broken for /dev/xvda devices on Cloud-Images/Cloud-init

2014-10-14 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.3

---
cloud-init (0.7.5-0ubuntu1.3) trusty-proposed; urgency=medium

  * d/patches/lp-1336855-grub_xvda.patch: include xvda devices for
consideration for grub configuration (LP: #1336855).
 -- Ben Howard ben.how...@ubuntu.com   Thu, 18 Sep 2014 16:47:23 -0600

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

** Changed in: cloud-init (Ubuntu Precise)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1336855

Title:
  [SRU] non-interactive grub updates broken for /dev/xvda devices on
  Cloud-Images/Cloud-init

Status in Init scripts for use on cloud images:
  Fix Released
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Precise:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Fix Released
Status in “cloud-init” source package in Utopic:
  Fix Released

Bug description:
  [SRU JUSTIFICATION]

  [IMPACT] Cloud-init, as part of the first boot configures grub-pc to
  set the device that grub should install to. However, in the case of
  HVM instances, /dev/xvda and /dev/xvda1 are not considered (only sda,
  sda1, vda and vda1). Since AWS HVM instances and Xen use /dev/xvdX
  devices, this means that any Grub update with an ABI change will break
  the instances, rendering them unable to boot.

  [FIX] Cloud-init has been patched to understand /dev/xvda devices and
  set the correct grub-pc/install_device. Further, cloud-init's postinst
  has been patched to fix people who might be affected by this bug.

  [Test Case 1]
  1. Boot HVM instance store AMI ami-90b156f8 (us-east-1)
  2. Update grub
  3. Update cloud-init from -proposed
  4. Reboot instance
  5. instance should come back up

  [Test Case 2 -- 12.04 Only]
  1. Boot HVM instance store AMI ami-90b156f8 (us-east-1)
  2. run cloud-init-cfg grub_dpkg --frequency always
  3. run debconf-show grub-pc, confirm that grub-pc/install_devices is /dev/xvda
  4. update grub
  5. Reboot
  6. instance should come back up

  [Test Case 3 -- 14.04 Only]
  1. Boot HVM instance store AMI ami-1f958c76 (us-east-1)
  2. run cloud-init single grub_dpkg --frequency always
  3. run debconf-show grub-pc, confirm that grub-pc/install_devices is /dev/xvda
  4. update grub
  5. Reboot
  6. instance should come back up

  [Test Case 4]
  1. Install from -proposed
  2. Simulate a first-run:
     echo "grub-pc grub-pc/install_devices select /dev/sda" | debconf-set-selections
  3. Run: cloud-init single --name=grub-dpkg --frequency=always
  4. Run: debconf-show grub-pc
  5. confirm that /dev/xvda is shown as the install device

  
  ORIGINAL report

  It looks like a recent update to grub or the kernel on 12.04 is breaking
  unattended installs on EC2 for HVM instances.

  You can reproduce the problem by doing the following:

  region: us-east-1
  virtualization type: HVM (e.g. r3.xlarge)
  AMI ID: ami-7a916212

  dpkg --configure -a
  apt-get update
  apt-get install -y ruby ruby-dev libicu-dev libssl-dev libxslt-dev libxml2-dev monit
  apt-get dist-upgrade -y

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1336855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381100] [NEW] neutron-db-manage has mandatory --config-file parameter

2014-10-14 Thread Jakub Libosvar
Public bug reported:

Other OpenStack projects use oslo.config to find the relevant config files
without being given the path to the file containing the connection string.
Neutron requires providing the config files manually.
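
A hedged illustration of the difference (the paths are conventional
defaults and are assumed here, not taken from this report):

    # Neutron: the config file must be passed explicitly, e.g.:
    neutron-db-manage --config-file /etc/neutron/neutron.conf upgrade head

    # Projects such as Nova locate their default config files via
    # oslo.config's standard search path:
    nova-manage db sync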

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381100

Title:
  neutron-db-manage has mandatory --config-file parameter

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Other OpenStack projects use oslo.config to find the relevant config
  files without being given the path to the file containing the
  connection string. Neutron requires providing the config files manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379952] Re: API accepts tenant name for TenantId, fails, and provides not helpful message

2014-10-14 Thread Morgan Fainberg
I am marking this as won't fix only because we cannot fix this without
being API-incompatible. I agree that the UX could be better, but it is
just not possible to correct the behavior without potentially breaking
real deployments.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1379952

Title:
  API accepts tenant name for TenantId, fails, and provides not
  helpful message

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  When authenticating with Keystone's REST API, if I happen to provide
  my tenant name in the TenantId field, the resulting error tells me
  that I am not authorized for that tenant, even though all the
  information (user, pass, tenant) is correct. It *should* tell me that
  I just passed invalid data into a field that expects a UUID, which, as
  a user, would tell me exactly what was wrong.
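
  A hedged illustration of the two v2.0 token-request bodies (the
  credentials are made up; only the field names matter here):

      # Correct: the tenant *name* goes in tenantName.
      good_body = {"auth": {"passwordCredentials": {"username": "demo",
                                                    "password": "secret"},
                            "tenantName": "demo"}}

      # The case in this report: a name where a UUID is expected. Keystone
      # answers "not authorized" rather than flagging the malformed field.
      bad_body = {"auth": {"passwordCredentials": {"username": "demo",
                                                   "password": "secret"},
                           "tenantId": "demo"}}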

  For what it's worth, in debugging my auth problem, I ended up
  tcpdump'ing keystone which led to this gem of a demonstration of the
  problem:

  http://paste.openstack.org/show/4v5JtwbGNu6QhQ3K5oQ1/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1379952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357372] Re: [oss-security] [OSSA 2014-035] Nova VMware driver may connect VNC to another tenant's console (CVE-2014-8750)

2014-10-14 Thread Jeremy Stanley
** Summary changed:

- Race condition in VNC port allocation when spawning a instance on VMware 
(CVE-2014-8750)
+ [oss-security] [OSSA 2014-035] Nova VMware driver may connect VNC to another 
tenant's console (CVE-2014-8750)

** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357372

Title:
  [oss-security] [OSSA 2014-035] Nova VMware driver may connect VNC to
  another tenant's console (CVE-2014-8750)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When spawning some instances, the Nova VMware driver can hit a race
  condition in VNC port allocation. Although the get_vnc_port function
  has a lock, it does not guarantee that the whole VNC port allocation
  process is locked, so another instance could receive the same port if
  it requests a VNC port before Nova has finished the VNC port
  allocation for another VM.

  If instances with the same VNC port are allocated on the same host, it
  could lead to improper access to the instance console.

  Reproduce the problem: launch two or more instances at the same time.
  In some cases one instance could execute get_vnc_port and pick a port,
  but before it has finished _set_vnc_config another instance could
  execute get_vnc_port and pick the same port. A sketch of the race is
  shown below.
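
  A minimal sketch of why locking only the port lookup races, and how
  holding the lock across pick-and-record closes the window (hypothetical
  code, not the driver's actual implementation):

      import threading

      _lock = threading.Lock()
      _allocated = set()

      def allocate_vnc_port(port_range):
          # Pick *and* record the port under one lock; if only the lookup
          # were locked, two threads could both see the same port as free.
          with _lock:
              for port in port_range:
                  if port not in _allocated:
                      _allocated.add(port)
                      return port
              raise RuntimeError('no free VNC port')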

  How often this occurs: unpredictable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323975] Re: do not use default=None for config options

2014-10-14 Thread Ilya Sviridov
** No longer affects: magnetodb

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323975

Title:
  do not use default=None for config options

Status in OpenStack Key Management (Barbican):
  Fix Released
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  In Progress
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Released

Bug description:
  In the cfg module, default=None is already the default value. It is
  not necessary to set it again when defining config options.
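
  For illustration with oslo.config (the option name here is made up):

      from oslo.config import cfg

      # Redundant: default=None merely restates cfg's built-in default.
      redundant_opt = cfg.StrOpt('my_backend_url', default=None,
                                 help='Backend URL.')

      # Equivalent and preferred:
      preferred_opt = cfg.StrOpt('my_backend_url', help='Backend URL.')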

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1323975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381153] [NEW] Cannot create instance live snapshots in Centos7 (icehouse)

2014-10-14 Thread Pedro Sousa
Public bug reported:

When trying to create a snapshot through horizon from a running instance
I get this error:

2014-10-14 17:45:19.602 33435 INFO nova.virt.libvirt.driver [req-4cce8ec4-e5ae-4614-b82a-4aac602aa110 977d2518de7042369d6f9fdf505326d3 89382cc045ad415684985bbfb0a02fee] [instance: 0816d148-d271-42d8-9bba-da78927d0eff] Beginning live snapshot process
2014-10-14 17:45:19.919 33435 INFO nova.virt.libvirt.driver [req-4cce8ec4-e5ae-4614-b82a-4aac602aa110 977d2518de7042369d6f9fdf505326d3 89382cc045ad415684985bbfb0a02fee] [instance: 0816d148-d271-42d8-9bba-da78927d0eff] Snapshot extracted, beginning image upload
2014-10-14 17:45:20.167 33435 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: unsupported configuration: block copy is not supported with this QEMU binary
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 274, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     pass
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 260, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 303, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     e, sys.exc_info())
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 290, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 353, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     % image_id, instance=instance)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
2014-10-14 17:45:20.167 33435 TRACE oslo.messaging.rpc.dispatcher     *args, **kwargs)
2014-10-14 17:45:20.167 33435 TRACE

[Yahoo-eng-team] [Bug 1378525] Re: Broken L3 HA migration should be blocked

2014-10-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126711
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=234c9e6d66c867e87f6e0eb8b0fa84240f6f8c4a
Submitter: Jenkins
Branch: proposed/juno

commit 234c9e6d66c867e87f6e0eb8b0fa84240f6f8c4a
Author: Assaf Muller amul...@redhat.com
Date:   Tue Oct 7 22:45:41 2014 +0300

Forbid update of HA property of routers

While the HA property is update-able, and resulting router-get
invocations suggest that the router is HA, the migration
itself fails on the agent. This is deceiving and confusing
and should be blocked until the migration itself is fixed
in a future patch.

Change-Id: I4171ab481e3943e0110bd9a300d965bbebe44871
Related-Bug: #1365426
Closes-Bug: #1378525
(cherry picked from commit 1fd7dd99ca7e5e9736200360aa354cada7fb43ff)


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378525

Title:
  Broken L3 HA migration should be blocked

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  While the HA property is update-able, and resulting router-get
  invocations suggest that the router is HA, the migration
  itself fails on the agent. This is deceiving and confusing
  and should be blocked until the migration itself is fixed
  in a future patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242820] Re: Nova docker virt driver triggers - AttributeError: 'Message' object has no attribute 'format'

2014-10-14 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.i18n
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242820

Title:
  Nova docker virt driver triggers - AttributeError: 'Message' object
  has no attribute 'format'

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New
Status in The Oslo library incubator:
  Invalid
Status in Oslo Internationalization Library:
  New

Bug description:
  When using the nova docker virt driver and trying to deploy an
  instance, I get:

  2013-10-21 12:18:39.229 20636 ERROR nova.compute.manager [req-270deff8-b0dc-4a05-9923-417dc5b662db c99d13095fbd4605b36a802fd9539a4a a03677565e97495fa798fe6cd2628180] [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] Error: 'Message' object has no attribute 'format'
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] Traceback (most recent call last):
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1045, in _build_instance
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     set_access_ip=set_access_ip)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1444, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     LOG.exception(_('Instance failed to spawn'), instance=instance)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1430, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     block_device_info)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/virt/docker/driver.py", line 297, in spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     LOG.info(msg.format(image_name))
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/openstack/common/gettextutils.py", line 255, in __getattribute__
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     return UserString.UserString.__getattribute__(self, name)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] AttributeError: 'Message' object has no attribute 'format'

  This happens on the boot VM flow.

  Based on a tiny bit of poking, it appears there is a common
  gettextutils method that wraps message/string attribute access and
  does not account for the use of Message.format() as used in the docker
  virt driver.

  Honestly based on the code I'm running I'm not sure how spawn worked
  at all for docker as the use of Message.format() is in the boot
  codepath:

  296 msg = _('Image name {0} does not exist, fetching it...')
  297 LOG.info(msg.format(image_name))

  This always triggers the no attr message.

  I'm not up to speed on the gettextutils wrapper, but I hacked around
  this by adding 'format' to the list of ops in gettextutils.py:

      def __getattribute__(self, name):
          # NOTE(mrodden): handle lossy operations that we can't deal with yet
          # These override the UserString implementation, since UserString
          # uses our __class__ attribute to try and build a new message
          # after running the inner data string through the operation.
          # At that point, we have lost the gettext message id and can just
          # safely resolve to a string instead.
          ops = ['capitalize', 'center', 'decode', 'encode',
                 'expandtabs', 'ljust', 'lstrip', 'replace', 'rjust', 'rstrip',
                 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill',
                 'format']
          if name in ops:
              return getattr(self.data, name)
          else:
              return UserString.UserString.__getattribute__(self, name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369992] [NEW] glance image-delete CLI is not functioning as expected.

2014-10-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Below is the help content of glance image-delete:

$ glance help image-delete
usage: glance image-delete <IMAGE> [<IMAGE> ...]

Delete specified image(s).

Positional arguments:
  <IMAGE>  Name or ID of image(s) to delete.

Here we can pass the name or ID of the image(s) to delete.

The problem is here:
If I try to delete a non-public image by its name, it returns the message below:
No image with a name or ID of 'image-name' exists

while if I pass the ID of the non-public image (instead of the name), it
deletes the image successfully.

JFYI:
In the case of a public image, the CLI works as expected for both name and ID.

Below are the command execution logs:

$ glance image-list --is-public=False
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2       | bare             | 13147648 | active |
| c5566731-e9a5-4b63-b409-813f9d83fb40 | test13       | vmdk        | ovf              |          | queued |
+--------------------------------------+--------------+-------------+------------------+----------+--------+

$ glance image-delete test13
No image with a name or ID of 'test13' exists

$ glance image-list --is-public=False
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2       | bare             | 13147648 | active |
| c5566731-e9a5-4b63-b409-813f9d83fb40 | test13       | vmdk        | ovf              |          | queued |
+--------------------------------------+--------------+-------------+------------------+----------+--------+

$ glance image-delete c5566731-e9a5-4b63-b409-813f9d83fb40

$ glance image-list --is-public=False
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------------+-------------+------------------+----------+--------+

$ glance --version
0.12.0

** Affects: glance
 Importance: Undecided
 Assignee: Wlodzimierz Borkowski (woodbor)
 Status: New

-- 
glance image-delete CLI is not functioning as expected.
https://bugs.launchpad.net/bugs/1369992
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242820] Re: Nova docker virt driver triggers - AttributeError: 'Message' object has no attribute 'format'

2014-10-14 Thread Ben Nemec
I suspect we omitted this on purpose because we didn't want to have to
support multiple methods of formatting strings, which is a little tricky
to do for lazy translation because you have to store all of the
parameters too so they can be lazily translated if necessary.  Also, as
Doug noted the code that originally triggered this doesn't follow
logging best practices anyway and should be changed.
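
For reference, the best practice is to hand the substitution argument to the
logging call itself rather than calling .format() on the lazy Message. A
minimal sketch of what the docker driver call could look like (not the
actual committed patch):

    import logging

    from nova.openstack.common.gettextutils import _

    LOG = logging.getLogger(__name__)

    def log_missing_image(image_name):
        # The logger performs '%' interpolation only if the record is
        # actually emitted; lazy Message objects do support '%'.
        LOG.info(_('Image name %s does not exist, fetching it...'),
                 image_name)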

I'm going to close as won't-fix for the reasons above, but if anyone
disagrees or finds a situation where this is absolutely needed then
please feel free to reopen (or ping an Oslo person to reopen).

** Changed in: oslo.i18n
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242820

Title:
  Nova docker virt driver triggers - AttributeError: 'Message' object
  has no attribute 'format'

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New
Status in The Oslo library incubator:
  Invalid
Status in Oslo Internationalization Library:
  Won't Fix

Bug description:
  Using the nova virt docker driver, when I try to deploy an instance I
  get:

  2013-10-21 12:18:39.229 20636 ERROR nova.compute.manager 
[req-270deff8-b0dc-4a05-9923-417dc5b662db c99d13095fbd4605b36a802fd9539a4a 
a03677565e97495fa798fe6cd2628180] [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] Error: 'Message' object has no attribute 
'format'
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] Traceback (most recent call last):
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1045, in 
_build_instance
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] set_access_ip=set_access_ip)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1444, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1430, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] block_device_info)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
/usr/lib/python2.6/site-packages/nova/virt/docker/driver.py, line 297, in 
spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] LOG.info(msg.format(image_name))
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/gettextutils.py, line 
255, in __getattribute__
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] return 
UserString.UserString.__getattribute__(self, name)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] AttributeError: 'Message' object has no 
attribute 'format'

  On a boot VM flow.

  Based on a tiny bit of poking, it appears there is a common
  gettextutils method that wraps message/string access and does not
  account for the use of Message.format() as used in the docker virt
  driver.

  Honestly, based on the code I'm running, I'm not sure how spawn worked
  at all for docker, as the use of Message.format() is in the boot
  codepath:

  296 msg = _('Image name {0} does not exist, fetching it...')
  297 LOG.info(msg.format(image_name))

  This always triggers the no attr message.

  I'm not up to speed on the gettextutils wrapper, but I hacked around
  this by adding 'format' to the list of ops in gettextutils.py:

  
   242 def __getattribute__(self, name):
   243     # NOTE(mrodden): handle lossy operations that we can't deal with yet
   244     # These override the UserString implementation, since UserString
   245     # uses our __class__ attribute to try and build a new message
   246     # after running the inner data string through the operation.
   247     # At that point, we have lost the gettext message id and can just
   248     # safely resolve to a string instead.
   249     ops = ['capitalize', 'center', 'decode', 'encode',
   250            'expandtabs', 'ljust', 'lstrip', 'replace', 'rjust', 'rstrip',
   251            'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill', 'format']
   252     if name in ops:
   253         return getattr(self.data, name)
   254     else:
   255         return UserString.UserString.__getattribute__(self, name)

[Yahoo-eng-team] [Bug 1369992] Re: glance image-delete CLI is not functioning as expected.

2014-10-14 Thread Wlodzimierz Borkowski
Hi, after checking the code, there is:

def _get_images(self, context, filters, **params):
    """Get images, wrapping in exception if necessary."""
    # NOTE(markwash): for backwards compatibility, is_public=True for
    # admins actually means treat me as if I'm not an admin and show me
    # all my images
    if context.is_admin and params.get('is_public') is True:
        params['admin_as_user'] = True
        del params['is_public']

So it's glance (sorry for the inconvenience). Are you working from an admin
account? Note the comment: for backwards compatibility, is_public=True for
admins actually means "treat me as if I'm not an admin and show me all my
images". Checked with a non-admin account and it works as defined (public
and non-public are split).

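In the meantime, a hedged workaround sketch (python-glanceclient v1 API;
the endpoint and token below are placeholders): resolve the non-public
image's name to its ID and delete by ID, which the report shows does work.

    from glanceclient import Client

    glance = Client('1', 'http://127.0.0.1:9292', token='AUTH_TOKEN')
    # List images matching the name, then delete each by its ID.
    for image in glance.images.list(filters={'name': 'test13'}):
        glance.images.delete(image.id)
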
** Project changed: python-glanceclient => glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1369992

Title:
  glance image-delete CLI is not functioning as expected.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Below is the help content of glance image-delete -

  $ glance help image-delete
  usage: glance image-delete IMAGE [IMAGE ...]

  Delete specified image(s).

  Positional arguments:
    IMAGE  Name or ID of image(s) to delete.

  Here we can pass the name or ID of the image to delete.

  The problem is here-
  If I try to delete a non-public image by its name, it returns the message
  below:
  No image with a name or ID of 'image-name' exists

  while if I pass the ID of the non-public image (instead of the name), it
  deletes the image successfully.

  JFYI-
  In the case of a public image, the CLI works as expected for both name
  and ID.

  Below are the command execution logs-

  $ glance image-list --is-public=False
  
+--+--+-+--+--++
  | ID   | Name | Disk Format | 
Container Format | Size | Status |
  
+--+--+-+--+--++
  | 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2   | 
bare | 13147648 | active |
  | c5566731-e9a5-4b63-b409-813f9d83fb40 | test13   | vmdk| ovf 
 |  | queued |
  
+--+--+-+--+--++

  $ glance image-delete test13
  No image with a name or ID of 'test13' exists

  $ glance image-list --is-public=False
  
+--+--+-+--+--++
  | ID   | Name | Disk Format | 
Container Format | Size | Status |
  
+--+--+-+--+--++
  | 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2   | 
bare | 13147648 | active |
  | c5566731-e9a5-4b63-b409-813f9d83fb40 | test13   | vmdk| ovf 
 |  | queued |
  
+--+--+-+--+--++

  $ glance image-delete c5566731-e9a5-4b63-b409-813f9d83fb40

  $ glance image-list --is-public=False
  
+--+--+-+--+--++
  | ID   | Name | Disk Format | 
Container Format | Size | Status |
  
+--+--+-+--+--++
  | 37adaa79-7b09-4d65-abb2-7d8848447ddb | Cirros 0.3.1 | qcow2   | 
bare | 13147648 | active |
  
+--+--+-+--+--++

  $ glance --version
  0.12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1369992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381185] [NEW] Update imports to use jsonutils from oslo.serialization

2014-10-14 Thread Ed Leafe
Public bug reported:

The code for jsonutils has moved from oslo-incubated to the
oslo.serialization library. References to the old jsonutils in
nova.openstack.common should be updated to use the version in
oslo.serialization.
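
In practice this is a one-line import swap per module, for example (a
minimal sketch; the module alias keeps call sites unchanged):

    # Before (oslo-incubator copy):
    #     from nova.openstack.common import jsonutils
    # After (released library):
    from oslo.serialization import jsonutils

    print(jsonutils.dumps({'status': 'active'}))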

** Affects: nova
 Importance: Undecided
 Assignee: Ed Leafe (ed-leafe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ed Leafe (ed-leafe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381185

Title:
  Update imports to use jsonutils from oslo.serialization

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code for jsonutils has moved from oslo-incubated to the
  oslo.serialization library. References to the old jsonutils in
  nova.openstack.common should be updated to use the version in
  oslo.serialization.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255627] Re: images.test_list_image_filters.ListImageFiltersTest fails with timeout

2014-10-14 Thread Davanum Srinivas (DIMS)
Don't see any hits in the last week

** Changed in: nova-docker
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255627

Title:
  images.test_list_image_filters.ListImageFiltersTest fails with timeout

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Spurious failure in this test:

  http://logs.openstack.org/49/55749/8/check/check-tempest-devstack-vm-
  full/9bc94d5/console.html

  2013-11-27 01:10:35.802 | 
==
  2013-11-27 01:10:35.802 | FAIL: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
  2013-11-27 01:10:35.803 | 
--
  2013-11-27 01:10:35.803 | _StringException: Traceback (most recent call last):
  2013-11-27 01:10:35.804 |   File 
tempest/api/compute/images/test_list_image_filters.py, line 50, in setUpClass
  2013-11-27 01:10:35.807 | cls.client.wait_for_image_status(cls.image1_id, 
'ACTIVE')
  2013-11-27 01:10:35.809 |   File 
tempest/services/compute/xml/images_client.py, line 153, in 
wait_for_image_status
  2013-11-27 01:10:35.809 | raise exceptions.TimeoutException
  2013-11-27 01:10:35.809 | TimeoutException: Request timed out

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381185] Re: Update imports to use jsonutils from oslo.serialization

2014-10-14 Thread Ed Leafe
Another patch hit Gerrit before mine that fixed this issue, so this is
no longer a valid bug.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381185

Title:
  Update imports to use jsonutils from oslo.serialization

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The code for jsonutils has moved from oslo-incubated to the
  oslo.serialization library. References to the old jsonutils in
  nova.openstack.common should be updated to use the version in
  oslo.serialization.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232525] Re: KeyError Exception while synching routers

2014-10-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/128288
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b28eda57223e492924edb731e24c2e4f64cc0de5
Submitter: Jenkins
Branch: proposed/juno

commit b28eda57223e492924edb731e24c2e4f64cc0de5
Author: Carl Baldwin carl.bald...@hp.com
Date:   Wed Oct 8 03:22:49 2014 +

Remove two sets that are not referenced

The code no longer references the updated_routers and removed_routers
sets.  This should have been cleaned up before but was missed.

Closes-bug: #1232525

Change-Id: I0396e13d2f7c3789928e0c6a4c0a071b02d5ff17
(cherry picked from commit edb26bfcddf9d9a0e95955a6590d11fa7245ea2b)
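
The KeyError in the description below came from an unconditional
self.router_info[router_id] lookup. A defensive variant would also avoid it
(a hedged sketch only; the committed change above instead removed the stale
sets):

    import logging

    LOG = logging.getLogger(__name__)

    class L3NATAgentSketch(object):
        def __init__(self):
            self.router_info = {}

        def _router_removed(self, router_id):
            # Tolerate routers already cleaned up by a concurrent pass.
            ri = self.router_info.get(router_id)
            if ri is None:
                LOG.warning('Info for router %s not found, skipping '
                            'removal', router_id)
                return
            # ... proceed with teardown using ri ...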


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1232525

Title:
  KeyError Exception while synching routers

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  
  Sometimes this exception is seen while the L3-agent _rpc_loop periodic task 
is executing:

  30748 ERROR quantum.agent.l3_agent [-] Failed synchronizing routers
  30748 TRACE quantum.agent.l3_agent Traceback (most recent call last):
  30748 TRACE quantum.agent.l3_agent File 
/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py, line 725, in 
_rpc_loop
  30748 TRACE quantum.agent.l3_agent self._process_router_delete()
  30748 TRACE quantum.agent.l3_agent File 
/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py, line 733, in 
_process_router_delete
  30748 TRACE quantum.agent.l3_agent self._router_removed(router_id)
  30748 TRACE quantum.agent.l3_agent File 
/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py, line 319, in 
_router_removed
  30748 TRACE quantum.agent.l3_agent ri = self.router_info[router_id]
  30748 TRACE quantum.agent.l3_agent KeyError: 
u'0f9ec689-c094-44c7-bae7-b995830d405f'

  0f9ec689-c094-44c7-bae7-b995830d405f is a router that just has been
  deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1232525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381221] [NEW] VPNaaS Cisco cleanup test code

2014-10-14 Thread Paul Michali
Public bug reported:

There are unused arguments in a mock method, and duplicated constants, in
the Cisco VPNaaS unit test files.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381221

Title:
  VPNaaS Cisco cleanup test code

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  There are unused arguments in a mock method, and duplicated constants,
  in the Cisco VPNaaS unit test files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381238] [NEW] Race condition on processing DVR floating IPs

2014-10-14 Thread Rajeev Grover
Public bug reported:

A race condition can sometimes occur in the L3 agent when a DVR-based
floating IP is being deleted from one router while another DVR-based
floating IP is being configured on another router on the same node,
especially if the floating IP being deleted was the last one on the node.
Although the fix for bug #1373100 [1] eliminated frequent observation of
this behavior in upstream tests, it still shows up. A couple of recent
examples:

http://logs.openstack.org/88/128288/1/check/check-tempest-dsvm-neutron-dvr/8fdd1de/

http://logs.openstack.org/03/123403/7/check/check-tempest-dsvm-neutron-dvr/859534a/

Relevant log messages:

2014-10-14 16:06:15.803 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-82fb2751-30ba-4015-a5da-6c8563064db9', 'ip', 'link', 'del', 
'fpr-7ed86ca6-b'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
2014-10-14 16:06:15.838 22303 DEBUG neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', 'addr', 'show', 
'rfp-7ed86ca6-b']
Exit code: 0
Stdout: '2: rfp-7ed86ca6-b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
pfifo_fast state UP group default qlen 1000\nlink/ether c6:88:ee:71:a7:51 
brd ff:ff:ff:ff:ff:ff\ninet 169.254.30.212/31 scope global rfp-7ed86ca6-b\n 
  valid_lft forever preferred_lft forever\ninet6 
fe80::c488:eeff:fe71:a751/64 scope link \n   valid_lft forever 
preferred_lft forever\n'
Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
2014-10-14 16:06:15.839 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', '-4', 'addr', 'add', 
'172.24.4.91/32', 'brd', '172.24.4.91', 'scope', 'global', 'dev', 
'rfp-7ed86ca6-b'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
2014-10-14 16:06:16.221 22303 DEBUG neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-82fb2751-30ba-4015-a5da-6c8563064db9', 'ip', 'link', 'del', 
'fpr-7ed86ca6-b']
Exit code: 0
Stdout: ''
Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
2014-10-14 16:06:16.222 22303 DEBUG neutron.agent.l3_agent [-] DVR: unplug: 
fg-f04e25ef-e3 _destroy_fip_namespace 
/opt/stack/new/neutron/neutron/agent/l3_agent.py:679
2014-10-14 16:06:16.222 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'br-ex'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
2014-10-14 16:06:16.251 22303 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', '-4', 'addr', 'add', 
'172.24.4.91/32', 'brd', '172.24.4.91', 'scope', 'global', 'dev', 
'rfp-7ed86ca6-b']
Exit code: 1
Stdout: ''
Stderr: 'Cannot find device rfp-7ed86ca6-b\n'  


[1] https://bugs.launchpad.net/neutron/+bug/1373100

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381238

Title:
  Race condition on processing DVR floating IPs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A race condition can sometimes occur in the L3 agent when a DVR-based
  floating IP is being deleted from one router while another DVR-based
  floating IP is being configured on another router on the same node,
  especially if the floating IP being deleted was the last one on the
  node. Although the fix for bug #1373100 [1] eliminated frequent
  observation of this behavior in upstream tests, it still shows up. A
  couple of recent examples:

  http://logs.openstack.org/88/128288/1/check/check-tempest-dsvm-neutron-dvr/8fdd1de/

  http://logs.openstack.org/03/123403/7/check/check-tempest-dsvm-neutron-dvr/859534a/

  Relevant log messages:

  2014-10-14 16:06:15.803 22303 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-82fb2751-30ba-4015-a5da-6c8563064db9', 'ip', 'link', 'del', 
'fpr-7ed86ca6-b'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:46
  2014-10-14 16:06:15.838 22303 DEBUG neutron.agent.linux.utils [-] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-7ed86ca6-b42d-4ba9-8899-447ff0509174', 'ip', 'addr', 'show', 
'rfp-7ed86ca6-b']
  Exit code: 0
  Stdout: '2: rfp-7ed86ca6-b: 

[Yahoo-eng-team] [Bug 1381263] [NEW] Attribute Error found with router_interface_delete

2014-10-14 Thread Swaminathan Vasudevan
Public bug reported:

This may be related to the DB lock wait timeout issue seen in neutron with
DVR.
The error occurs when trying to query the ports in
delete_csnat_router_interface_ports.

This is from the recent change in the router and port relationship.

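In the traceback below, query.filter(column.in_(value)) keeps recursing
through the attribute's in_ operator, which suggests the 'ports' value
passed as filters={'id': ports} holds ORM attributes or rows rather than
plain ID strings. A hedged sketch of the kind of fix (attribute names are
hypothetical; this is not the committed patch):

    def csnat_port_ids(router):
        # Extract scalar UUID strings before building the 'in_' filter.
        return [rp.port.id for rp in router.attached_ports]

    # ...then: self.get_ports(context.elevated(),
    #                         filters={'id': csnat_port_ids(router)})
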
2014-10-14 15:54:02.332 ERROR neutron.api.v2.resource 
[req-5508a216-08e5-4dc4-8ed5-9ebe79b1f7aa admin 
065018a159504b6189562b5ff29463d0] remove_router_interface failed
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 87, in resource
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 200, in _handle_action
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/l3_dvr_db.py, line 260, in 
remove_router_interface
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource context.elevated(), 
router, subnet_id=subnet_id)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/l3_dvr_db.py, line 590, in 
delete_csnat_router_interface_ports
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource filters={'id': ports}
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1463, in get_ports
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource 
page_reverse=page_reverse)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1449, in 
_get_ports_query
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource query = 
self._apply_filters_to_query(query, Port, filters)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/common_db_mixin.py, line 132, in 
_apply_filters_to_query
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource query = 
query.filter(column.in_(value))
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 404, in in_
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
self.operate(in_op, other)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 252, in 
operate
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
op(self.comparator, *other, **kwargs)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 694, in 
in_op
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return a.in_(b)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 404, in in_
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
self.operate(in_op, other)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py, line 212, in 
operate
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
op(self.__clause_element__(), *other, **kwargs)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 694, in 
in_op
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return a.in_(b)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 404, in in_
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
self.operate(in_op, other)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py, line 2311, in 
operate
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
op(self.comparator, *other, **kwargs)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 694, in 
in_op
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return a.in_(b)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 404, in in_
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return 
self.operate(in_op, other)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py, line 1994, in 
operate
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource return o[0](self, 
self.expr, op, *(other + o[1:]), **kwargs)
2014-10-14 15:54:02.332 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py, line 2113, in 
_in_impl

[Yahoo-eng-team] [Bug 1381061] Re: VMware: ESX hosts must not be externally routable

2014-10-14 Thread Christopher Yeoh
** Changed in: nova
   Status: New => Confirmed

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381061

Title:
  VMware: ESX hosts must not be externally routable

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Manuals:
  New

Bug description:
  Change I70fd7d3ee06040d6ce49d93a4becd9cbfdd71f78 removed passwords
  from VNC hosts. This change is fine because we proxy the VNC
  connection and do access control at the proxy, but it assumes that ESX
  hosts are not externally routable.

  In a non-OpenStack VMware deployment, accessing a VM's console
  requires the end user to have a direct connection to an ESX host. This
  leads me to believe that many VMware administrators may leave ESX
  hosts externally routable if not specifically directed otherwise.

  The above change makes a design decision which requires ESX hosts not
  to be externally routable. There may also be other reasons. We need to
  ensure that this is very clearly documented. This may already be
  documented, btw, but I don't know how our documentation is organised,
  and would prefer that somebody more familiar with it assures
  themselves that this has been given appropriate weight.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381277] [NEW] Fix docstring on send_delete-port_request in N1kv plugin

2014-10-14 Thread Steven Hillman
Public bug reported:

The docstring for _send_delete_port_request wasn't properly updated as
part of bug/1373547 to reflect that the last port check and subsequent
call to delete_vm_network were removed. Need to update this docstring to
reflect that change.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cisco

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381277

Title:
  Fix docstring on send_delete-port_request in N1kv plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The docstring for _send_delete_port_request wasn't properly updated as
  part of bug/1373547 to reflect that the last port check and subsequent
  call to delete_vm_network were removed. Need to update this docstring
  to reflect that change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381285] [NEW] Nexus driver keeps trying to configure VLAN at the switch

2014-10-14 Thread Danny Choi
Public bug reported:

OpenStack version: Juno

Issue: the Nexus driver keeps trying to configure the VLAN at the switch
after update_port_postcommit failed due to an ssh connection issue

Description:
I used devstack to deploy Multi-Node OpenStack with the Cisco Nexus ML2
plugin for the VLAN network type.

There’s a requirement to first ssh to the Nexus switch from the
Controller node in order to set up the ssh credentials before launching
VMs.
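
A hedged pre-flight check of the kind that requirement implies, written
with ncclient (which the driver itself uses); the address matches the
switch shown below and the password is a placeholder:

    from ncclient import manager

    # Verify the controller can open a NETCONF-over-SSH session to the
    # Nexus switch before launching VMs.
    with manager.connect(host='172.22.191.67', port=22, username='admin',
                         password='PASSWORD',
                         hostkey_verify=False) as conn:
        print('connected:', conn.connected)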

I did not do that, which caused the ssh exception and the
update_port_postcommit failure, resulting in the VLAN not being configured
at the Nexus switch.

In the Controller's screen-q-svc.log, the Nexus driver is shown repeatedly
trying to configure the VLAN every 2+ seconds; after a few minutes, this
causes ssh to the Nexus switch from my Mac to fail.

DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
ssh_exchange_identification: Connection closed by remote host
DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
ssh_exchange_identification: Connection closed by remote host
DANNCHOI-M-G07T:~ dannychoi$ ssh -l admin 172.22.191.67
ssh_exchange_identification: Connection closed by remote host

This continues until the VM is deleted; and after a few minutes, I can
ssh to the Nexus switch from my Mac.

Below is a snippet of the log that keeps repeating every 2+ seconds:

2014-10-14 20:41:26.381 DEBUG neutron.context 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Arguments dropped when 
creating context: {u'project_name': None, u'tenant': None} from (pid=961) 
__init__ /opt/stack/neutron/neutron/context.py:83
2014-10-14 20:41:26.382 DEBUG neutron.plugins.ml2.rpc 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Device 
0f9454bc-1a31-4743-9c98-9c42a084e673 up at agent ovs-agent-qa4 from (pid=961) 
update_device_up /opt/stack/neutron/neutron/plugins/ml2/rpc.py:149
2014-10-14 20:41:26.402 DEBUG neutron.openstack.common.lockutils 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Got semaphore db-access 
from (pid=961) lock /opt/stack/neutron/neutron/openstack/common/lockutils.py:168
2014-10-14 20:41:26.426 DEBUG 
neutron.plugins.ml2.drivers.cisco.nexus.nexus_db_v2 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] add_nexusport_binding() 
called from (pid=961) add_nexusport_binding 
/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_db_v2.py:45
2014-10-14 20:41:26.431 DEBUG 
neutron.plugins.ml2.drivers.cisco.nexus.nexus_db_v2 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] get_nexusvlan_binding() 
called from (pid=961) get_nexusvlan_binding 
/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_db_v2.py:39
2014-10-14 20:41:26.437 DEBUG 
neutron.plugins.ml2.drivers.cisco.nexus.mech_cisco_nexus 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] Nexus: create  trunk vlan 
%s from (pid=961) _configure_switch_entry 
/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/mech_cisco_nexus.py:125
2014-10-14 20:41:26.438 DEBUG 
neutron.plugins.ml2.drivers.cisco.nexus.nexus_network_driver 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] NexusDriver: 
  <config xmlns:xc="urn:ietf:params:xml:ns:netconf:base:1.0">
    <configure>
      <__XML__MODE__exec_configure>
        <vlan>
          <vlan-id-create-delete>
            <__XML__PARAM_value>300</__XML__PARAM_value>
            <__XML__MODE_vlan>
              <name>
                <vlan-name>dan-300</vlan-name>
              </name>
            </__XML__MODE_vlan>
          </vlan-id-create-delete>
        </vlan>
      </__XML__MODE__exec_configure>
    </configure>
  </config>
 from (pid=961) create_vlan 
/opt/stack/neutron/neutron/plugins/ml2/drivers/cisco/nexus/nexus_network_driver.py:123
2014-10-14 20:41:26.438 DEBUG ncclient.transport.session 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] SSHSession(session, 
initial daemon) created: 
client_capabilities=['urn:ietf:params:netconf:capability:writable-running:1.0', 
'urn:ietf:params:netconf:capability:rollback-on-error:1.0', 
'urn:ietf:params:netconf:capability:validate:1.0', 
'urn:ietf:params:netconf:capability:confirmed-commit:1.0', 
'urn:ietf:params:netconf:capability:url:1.0?scheme=http,ftp,file,https,sftp', 
'urn:ietf:params:netconf:capability:candidate:1.0', 
'urn:ietf:params:netconf:capability:xpath:1.0', 
'urn:ietf:params:netconf:capability:startup:1.0', 
'urn:ietf:params:xml:ns:netconf:base:1.0', 
'urn:liberouter:params:netconf:capability:power-control:1.0urn:ietf:params:netconf:capability:interleave:1.0']
 from (pid=961) __init__ 
/usr/local/lib/python2.7/dist-packages/ncclient/transport/session.py:42
2014-10-14 20:41:26.438 DEBUG ncclient.transport.session 
[req-25e05ffb-da63-4aaf-b7a8-6e58c0762abc None None] SSHSession(session, 
initial daemon) created: 
client_capabilities=['urn:ietf:params:netconf:capability:writable-running:1.0', 
'urn:ietf:params:netconf:capability:rollback-on-error:1.0', 
'urn:ietf:params:netconf:capability:validate:1.0', 

[Yahoo-eng-team] [Bug 1381295] [NEW] live-migration with context by get_admin_context would fail

2014-10-14 Thread hougangliu
Public bug reported:

When a third party wants to live-migrate a VM to a host by calling the
compute API directly (not via novaclient), as in code snippet 1 below, the
destination compute service calls glance to get the image (refer to code
snippet 2 below) and raises the exception shown in log snippet 1 below.
This is caused by patch https://review.openstack.org/#/c/121692/2. I think
a similar problem exists in novaclient, too; refer to code snippet 3.

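A minimal illustration of the failure mode, independent of nova (hedged;
this only reproduces the header problem): an admin context built
server-side has auth_token=None, so identity_headers carries
{'X-Auth-Token': None}, and the HTTP client later calls .encode() on each
header value.

    headers = {'X-Auth-Token': None}
    for name, value in headers.items():
        # AttributeError: 'NoneType' object has no attribute 'encode'
        value.encode('utf-8')
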
Code snippet 1:
from nova import compute
from nova import context   # needed for get_admin_context()

ctxt = context.get_admin_context()
self.compute_api = compute.API()
self.compute_api.live_migrate(
    ctxt.elevated(), inst, False, False, host_dict)


Code snippet 2: nova.image.glance.py:
def _create_glance_client(context, host, port, use_ssl, version=1):
    """Instantiate a new glanceclient.Client object."""
    params = {}
    if use_ssl:
        scheme = 'https'
        # https specific params
        params['insecure'] = CONF.glance.api_insecure
        params['ssl_compression'] = False
        if CONF.ssl.cert_file:
            params['cert_file'] = CONF.ssl.cert_file
        if CONF.ssl.key_file:
            params['key_file'] = CONF.ssl.key_file
        if CONF.ssl.ca_file:
            params['cacert'] = CONF.ssl.ca_file
    else:
        scheme = 'http'

    if CONF.auth_strategy == 'keystone':
        # NOTE(isethi): Glanceclient <= 0.9.0.49 accepts only
        # keyword 'token', but later versions accept both the
        # header 'X-Auth-Token' and 'token'
        params['token'] = context.auth_token
        params['identity_headers'] = generate_identity_headers(context)
        # <== would return {'X-Auth-Token': None, ...}
    if utils.is_valid_ipv6(host):
        # if so, it is ipv6 address, need to wrap it with '[]'
        host = '[%s]' % host
    endpoint = '%s://%s:%s' % (scheme, host, port)
    return glanceclient.Client(str(version), endpoint, **params)
    # <== params == {'identity_headers': {'X-Auth-Token': None, ...}, ...}

Code snippet 3: novaclient.client.py:
def http_log_req(self, method, url, kwargs):
    if not self.http_log_debug:
        return

    string_parts = ['curl -i']

    if not kwargs.get('verify', True):
        string_parts.append(' --insecure')

    string_parts.append(" '%s'" % url)
    string_parts.append(' -X %s' % method)

    headers = copy.deepcopy(kwargs['headers'])
    self._redact(headers, ['X-Auth-Token'])    # <== here
    # because dict ordering changes from 2 to 3
    keys = sorted(headers.keys())
    for name in keys:
        value = headers[name]
        header = ' -H "%s: %s"' % (name, value)
        string_parts.append(header)

    if 'data' in kwargs:
        data = json.loads(kwargs['data'])
        self._redact(data, ['auth', 'passwordCredentials', 'password'])
        string_parts.append(" -d '%s'" % json.dumps(data))
    self._logger.debug("REQ: %s" % "".join(string_parts))


Log snippet 1:
2014-10-14 00:42:10.699 31346 INFO nova.compute.manager [-] [instance: 
aa68237f-e669-4025-b16e-f4b50926f7a5] During the sync_power process the 
instance has moved from host cmo-comp5.ibm.com to host cmo-comp4.ibm.com
2014-10-14 00:42:10.913 31346 INFO nova.compute.manager 
[req-7be58838-3ec2-43d4-afd1-23d6b3d5e3de None] [instance: 
aa68237f-e669-4025-b16e-f4b50926f7a5] Post operation of migration started
2014-10-14 00:42:11.148 31346 ERROR oslo.messaging.rpc.dispatcher 
[req-7be58838-3ec2-43d4-afd1-23d6b3d5e3de ] Exception during message handling: 
'NoneType' object has no attribute 'encode'
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, 
in _dispatch_and_reply
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 177, 
in _dispatch
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 123, 
in _do_dispatch
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 418, in 
decorated_function
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-14 00:42:11.148 31346 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 88, in wrapped
2014-10-14 00:42:11.148 31346 

[Yahoo-eng-team] [Bug 1381302] [NEW] the value of num_instances in field stats which in compute_nodes talbe

2014-10-14 Thread Rong Han ZTE
Public bug reported:

The value of "num_instances" in the "stats" field of the compute_nodes
table in the nova database is not correct.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381302

Title:
  the value of num_instances in field stats which in compute_nodes
  talbe

Status in OpenStack Compute (Nova):
  New

Bug description:
  The value of "num_instances" in the "stats" field of the compute_nodes
  table in the nova database is not correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381303] [NEW] The value of num_instances is not correct , which is in stats field of compute_nodes table in nova database.

2014-10-14 Thread Rong Han ZTE
Public bug reported:

The value of "num_instances" in the "stats" field of the compute_nodes
table in the nova database is not correct.

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- the value of num_instances in field stats which in compute_nodes talbe
+ the value of num_instances is not correct , which is in stats field of 
compute_nodes table in nova database.

** Summary changed:

- the value of num_instances is not correct , which is in stats field of 
compute_nodes table in nova database.
+ The value of num_instances is not correct , which is in stats field of 
compute_nodes table in nova database.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381303

Title:
  The value of num_instances is not correct , which is in stats
  field of compute_nodes table in nova database.

Status in OpenStack Compute (Nova):
  New

Bug description:
  The value of "num_instances" in the "stats" field of the compute_nodes
  table in the nova database is not correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381306] [NEW] password eye icon is displayed in the help text in password change forms

2014-10-14 Thread Akihiro Motoki
Public bug reported:

Settings -> Password and Identity -> User -> Edit forms support changing the
password.
In these forms, a password eye icon is displayed at the upper-right of the
help text. It is unnecessary.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381306

Title:
  password eye icon is displayed in the help text in password change
  forms

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Settings -> Password and Identity -> User -> Edit forms support changing
  the password.
  In these forms, a password eye icon is displayed at the upper-right of
  the help text. It is unnecessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381302] Re: the value of num_instances in field stats which in compute_nodes talbe

2014-10-14 Thread Rong Han ZTE
** Changed in: nova
   Status: New => Invalid

** Information type changed from Public to Private

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381302

Title:
  the value of num_instances in field stats which in compute_nodes
  talbe

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The value of "num_instances" in the "stats" field of the compute_nodes
  table in the nova database is not correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381311] [NEW] Dashboard internationalization

2014-10-14 Thread liaonanhai
Public bug reported:

When I change the language to zh-cn, the dashboard still shows in English.
Why?

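A hedged first triage step (the path assumes a devstack-style source
install of Horizon): check that the compiled Simplified Chinese catalog
exists where Django looks for it; if the .mo file is missing, the UI
silently falls back to English.

    import os

    # Hypothetical path for a devstack install of Horizon.
    mo_path = ('/opt/stack/horizon/openstack_dashboard/locale/'
               'zh_CN/LC_MESSAGES/django.mo')
    print('catalog present:', os.path.exists(mo_path))
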
** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Screenshot from 2014-10-15 11:26:14.png
   
https://bugs.launchpad.net/bugs/1381311/+attachment/4236561/+files/Screenshot%20from%202014-10-15%2011%3A26%3A14.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381311

Title:
  Dashboard internationalization

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I change the language to zh-cn, the dashboard still shows in
  English. Why?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259539] Re: When switching tabs, 'Loading...' with spinner doesn't disappear after tab data was loaded first time

2014-10-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1259539

Title:
  When switching tabs, 'Loading...' with spinner doesn't disappear after
  tab data was loaded first time

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The bug appears on horizon 2013.2. I switch to some tab, it loads its
  contents from the server, but after the GET request is completed, the
  'Loading...' string with the spinner does not disappear. If I then
  switch to some other tab and back to the tab with the hanging spinner,
  it successfully shows its contents.

  I found the code responsible for tab loading,
  static/horizon/js/horizon.tabs.js:18. If I place a breakpoint (in Chrome
  dev tools) on line 20 and then continue execution, the desired tab
  contents appear even when I switch to it the first time. It seems like
  an async issue - JavaScript doesn't have enough time to update the DOM.
  Also, it wasn't reproduced on slower machines.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1259539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311193] Re: Evacuate with a volume attached

2014-10-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1311193

Title:
  Evacuate with a volume attached

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  After evacuating my instance, I can see that my volume is still
  attached. But when I look at the volume list in Horizon, my volume
  appears available. My volume is stored on NFS on a FreeNAS box.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1311193/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325952] Re: tempest.api.network.test_fwaas_extensions.FWaaSExtensionTestJSON tearDownClass FAILED

2014-10-14 Thread Ghanshyam Mann
Neutron's firewall_l3_agent failed to create the firewall due to an
AttributeError. The firewall is stuck in the PENDING_CREATE state, and in
PENDING_DELETE while being deleted.

Tests then fail because the firewall policy is still in use by the failed
firewall.


Console log-

2014-10-13 23:15:21.180 | 2014-10-13 22:43:28,278 25997 INFO 
[tempest.common.rest_client] Request 
(FWaaSExtensionTestJSON:test_create_show_delete_firewall): 201 POST 
http://127.0.0.1:9696/v2.0/fw/firewalls 0.061s
2014-10-13 23:15:21.180 | 2014-10-13 22:43:28,278 25997 DEBUG
[tempest.common.rest_client] Request 
(FWaaSExtensionTestJSON:test_create_show_delete_firewall): 201 POST 
http://127.0.0.1:9696/v2.0/fw/firewalls 0.061s
2014-10-13 23:15:21.181 | Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'}
2014-10-13 23:15:21.181 | Body: {"firewall": {"firewall_policy_id": 
"1af273a9-b299-4e5c-81cf-580398ae4ec7", "name": "firewall-1257536380"}}
2014-10-13 23:15:21.181 | Response - Headers: {'content-type': 
'application/json; charset=UTF-8', 'date': 'Mon, 13 Oct 2014 22:43:28 GMT', 
'content-length': '273', 'status': '201', 'connection': 'close', 
'x-openstack-request-id': 'req-5a6b5a71-ee4f-4c89-a90b-0e17a0b7f929'}
2014-10-13 23:15:21.181 | Body: {"firewall": {"status": 
"PENDING_CREATE", "name": "firewall-1257536380", "admin_state_up": true, 
"tenant_id": "b51b3b4153794b18933ac1c3b26f11bd", "firewall_policy_id": 
"1af273a9-b299-4e5c-81cf-580398ae4ec7", "id": 
"5b12dcb1-5805-4123-833b-6cdc1b123a62", "description": ""}}

After 5 min (network build timeout)

2014-10-13 23:15:21.689 | 2014-10-13 22:48:27,791 25997 DEBUG
[tempest.common.rest_client] Request 
(FWaaSExtensionTestJSON:test_create_show_delete_firewall): 200 GET 
http://127.0.0.1:9696/v2.0/fw/firewalls/5b12dcb1-5805-4123-833b-6cdc1b123a62 
0.370s
2014-10-13 23:15:21.689 | Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': 'omitted', 'Accept': 'application/json'}
2014-10-13 23:15:21.689 | Body: None
2014-10-13 23:15:21.689 | Response - Headers: {'content-type': 
'application/json; charset=UTF-8', 'date': 'Mon, 13 Oct 2014 22:48:27 GMT', 
'content-length': '273', 'status': '200', 'content-location': 
'http://127.0.0.1:9696/v2.0/fw/firewalls/5b12dcb1-5805-4123-833b-6cdc1b123a62', 
'connection': 'close', 'x-openstack-request-id': 
'req-e95acaaa-81c3-4415-b4df-917c8427a160'}
2014-10-13 23:15:21.689 | Body: {"firewall": {"status": 
"PENDING_CREATE", "name": "firewall-1257536380", "admin_state_up": true, 
"tenant_id": "b51b3b4153794b18933ac1c3b26f11bd", "firewall_policy_id": 
"1af273a9-b299-4e5c-81cf-580398ae4ec7", "id": 
"5b12dcb1-5805-4123-833b-6cdc1b123a62", "description": ""}}


screen-q-vpn logs-

2014-10-13 22:43:29.445 21229 ERROR 
neutron.services.firewall.agents.l3reference.firewall_l3_agent 
[req-5a6b5a71-ee4f-4c89-a90b-0e17a0b7f929 None] FWaaS RPC failure in 
create_firewall for fw: 5b12dcb1-5805-4123-833b-6cdc1b123a62
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent Traceback (most 
recent call last):
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/opt/stack/new/neutron/neutron/services/firewall/agents/l3reference/firewall_l3_agent.py,
 line 122, in _invoke_driver_for_plugin_api
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent routers = 
self.plugin_rpc.get_routers(context)
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/opt/stack/new/neutron/neutron/agent/l3_agent.py, line 108, in get_routers
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent 
router_ids=router_ids))
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/opt/stack/new/neutron/neutron/common/log.py, line 34, in wrapper
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent return 
method(*args, **kwargs)
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 161, in call
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent context, 
msg, rpc_method='call', **kwargs)
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 187, in __call_rpc_method
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent return 
func(context, msg['method'], **msg['args'])
2014-10-13 22:43:29.445 21229 TRACE 
neutron.services.firewall.agents.l3reference.firewall_l3_agent   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py,