[Yahoo-eng-team] [Bug 1261636] [NEW] Need set compute driver's flag capabilities correctly

2013-12-16 Thread ChangBo Guo
Public bug reported:

1. Background:
The ComputeDriver class, base of all compute drivers, has a 'capabilities'
dictionary indicating which functions a driver implements. The
'supports_recreate' flag requires the driver to support the evacuate
operation, and the 'has_imagecache' flag requires the driver to implement the
'manage_image_cache' method.[1] The compute manager checks these directly via
capabilities['has_imagecache'] [2] or capabilities['supports_recreate'] [3].

2. Problems:
1) Docker does not currently support either function, so its capabilities
flags should not be set.[4]
2) Baremetal only sets 'has_imagecache'; there is a code path that leads to a
KeyError. The capabilities need to be set explicitly to avoid this.[5]

3. Solution:
Set or unset each compute driver's capabilities explicitly.
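
A minimal sketch of what "explicitly" could look like for a driver that
supports neither feature (hypothetical driver class, not the actual Docker or
Baremetal patch):

    from nova.virt import driver

    class ExampleDriver(driver.ComputeDriver):
        # Declare both flags explicitly so compute manager lookups such as
        # capabilities['has_imagecache'] never raise KeyError.
        capabilities = {
            "has_imagecache": False,     # manage_image_cache() not implemented
            "supports_recreate": False,  # evacuate/recreate not supported
        }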

[1] https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L128
[2] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L5168
[3] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2142
[4] https://github.com/openstack/nova/blob/master/nova/virt/docker/driver.py#L62
[5] https://github.com/openstack/nova/blob/master/nova/virt/baremetal/driver.py#L114

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261636

Title:
  Need set compute driver's flag capabilities correctly

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Background:
  The ComputeDriver class, base of all compute drivers, has a 'capabilities'
  dictionary indicating which functions a driver implements. The
  'supports_recreate' flag requires the driver to support the evacuate
  operation, and the 'has_imagecache' flag requires the driver to implement
  the 'manage_image_cache' method.[1] The compute manager checks these
  directly via capabilities['has_imagecache'] [2] or
  capabilities['supports_recreate'] [3].

  2. Problems:
  1) Docker does not currently support either function, so its capabilities
  flags should not be set.[4]
  2) Baremetal only sets 'has_imagecache'; there is a code path that leads to
  a KeyError. The capabilities need to be set explicitly to avoid this.[5]

  3. Solution:
  Set or unset each compute driver's capabilities explicitly.

  [1] https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L128
  [2] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L5168
  [3] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2142
  [4] https://github.com/openstack/nova/blob/master/nova/virt/docker/driver.py#L62
  [5] https://github.com/openstack/nova/blob/master/nova/virt/baremetal/driver.py#L114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261633] [NEW] tenant of admin delete vm failed

2013-12-16 Thread shihanzhang
Public bug reported:

The admin tenant creates a VM on another tenant's network; deleting that VM
then fails. The failure log is below.
My environment: OS Ubuntu 12.04
OpenStack package: 2013.2-0ubuntu1~cloud0

TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 461, in _process_data
 **args)
   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
 result = getattr(proxyobj, method)(ctxt, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, 
in decorated_function
 return function(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in 
wrapped
 payload)
   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in 
wrapped
 return f(self, context, *args, **kw)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, 
in decorated_function
 pass
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, 
in decorated_function
return function(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 294, 
in decorated_function
 function(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, 
in decorated_function
 e, sys.exc_info())
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, 
in decorated_function
 return function(self, context, *args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1792, 
in terminate_instance
 do_terminate_instance(instance, bdms)
   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 246, in inner
 return f(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1784, 
in do_terminate_instance
 reservations=reservations)
   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 105, in inner
 rv = f(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1757, 
in _delete_instance
 user_id=user_id)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1729, 
in _delete_instance
 self._shutdown_instance(context, db_inst, bdms)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1639, 
in _shutdown_instance
 network_info = self._get_instance_nw_info(context, instance)
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 876, 
in _get_instance_nw_info
 instance)
   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
455, in get_instance_nw_info
 result = self._get_instance_nw_info(context, instance, networks)
   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
463, in _get_instance_nw_info
 nw_info = self._build_network_info_model(context, instance, networks)
   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
997, in _build_network_info_model
 ports = [port for port in ports if port['network_id'] in net_ids]
   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
962, in _nw_info_build_network
 label=network_name,
 UnboundLocalError: local variable 'network_name' referenced before assignment

Analysing the failure log: _build_network_info_model fetches the available
networks with

networks = self._get_available_networks(context,
                                        instance['project_id'])

When the admin tenant deletes the VM, Nova looks up the networks for the
admin's tenant ID, but the VM's network belongs to another tenant and is not
in that list, so the error "local variable 'network_name' referenced before
assignment" happens (illustrated in the sketch below).
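
A standalone illustration of the failure pattern (simplified, not the actual
neutronv2 API code):

    def _nw_info_build_network(port, networks):
        for net in networks:
            if port['network_id'] == net['id']:
                network_name = net['name']
                break
        # If the port's network is not in 'networks' (e.g. it belongs to
        # another tenant), network_name was never assigned and this raises.
        return {'id': port['network_id'], 'label': network_name}

    try:
        _nw_info_build_network({'network_id': 'other-tenant-net'}, networks=[])
    except UnboundLocalError as exc:
        print(exc)  # local variable 'network_name' referenced before assignment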

** Affects: nova
 Importance: Medium
 Assignee: shihanzhang (shihanzhang)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261633

Title:
  tenant of admin delete vm failed

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The admin tenant creates a VM on another tenant's network; deleting that VM
  then fails. The failure log is below.
  My environment: OS Ubuntu 12.04
  OpenStack package: 2013.2-0ubuntu1~cloud0

  TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 461, in _process_data
   **args)
 File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
   result = getattr(proxyobj, method)(ctxt, **kwargs)
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, 

[Yahoo-eng-team] [Bug 1261624] [NEW] Missing image size after adding/updating the locations

2013-12-16 Thread Fei Long Wang
Public bug reported:

Reproduce steps:

1. Create an empty image without image data/locations
2. Update the image with a PATCH (add/replace) request to add locations (see
the sketch below)
3. The image still has no size, even though it is now in the 'active' state
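
For reference, a hedged sketch of step 2 using the Images v2 JSON-patch media
type; the endpoint, token and location URL below are placeholders:

    import json
    import requests

    headers = {
        "X-Auth-Token": "ADMIN_TOKEN",  # placeholder
        "Content-Type": "application/openstack-images-v2.1-json-patch",
    }
    patch = [{"op": "add",
              "path": "/locations/-",
              "value": {"url": "http://example.com/image.qcow2",
                        "metadata": {}}}]
    resp = requests.patch("http://glance-host:9292/v2/images/IMAGE_ID",
                          headers=headers, data=json.dumps(patch))
    # The image goes 'active', but its 'size' field is not populated.
    print(resp.json().get("status"), resp.json().get("size"))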

** Affects: glance
 Importance: Medium
 Assignee: Fei Long Wang (flwang)
 Status: Triaged

** Changed in: glance
   Importance: Undecided => Medium

** Changed in: glance
 Assignee: (unassigned) => Fei Long Wang (flwang)

** Changed in: glance
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261624

Title:
  Missing image size after adding/updating the locations

Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged

Bug description:
  Reproduce steps:

  1. Create an empty image without image data/locations
  2. Update the image with a PATCH (add/replace) request to add locations
  3. The image still has no size, even though it is now in the 'active' state

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261622] [NEW] change text or behaviour of the admin token in keystone.conf

2013-12-16 Thread Steve Martinelli
Public bug reported:

Given the outcome of https://bugs.launchpad.net/keystone/+bug/1259440, and a
recent colleague asking why he can't use the admin token to get a list of
projects, we should address the misconception surrounding this part of the
keystone.conf file.

Currently, it reads:
[DEFAULT]
# A "shared secret" between keystone and other openstack services
# admin_token = ADMIN

which kind of gives the indication that it has overwhelming power, when
in fact it does not represent a user and carries no explicit
authorization that can be delegated. It's just a magical hack for
bootstrapping keystone and should be removed from the wsgi pipeline
after that.

Suggest we either clean up the comment before the admin_token, or we
actually make it usable, and let it grab the admin project/user (but if
no users or project exist... )

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261622

Title:
  change text or behaviour of the admin token in keystone.conf

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Given the outcome of https://bugs.launchpad.net/keystone/+bug/1259440, and a
  recent colleague asking why he can't use the admin token to get a list of
  projects, we should address the misconception surrounding this part of the
  keystone.conf file.

  Currently, it reads:
  [DEFAULT]
  # A "shared secret" between keystone and other openstack services
  # admin_token = ADMIN

  which kind of gives the indication that it has overwhelming power,
  when in fact it does not represent a user and carries no explicit
  authorization that can be delegated. It's just a magical hack for
  bootstrapping keystone and should be removed from the wsgi pipeline
  after that.

  Suggest we either clean up the comment before the admin_token, or we
  actually make it usable, and let it grab the admin project/user (but
  if no users or project exist... )

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1261622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261621] [NEW] nova api value error is not right

2013-12-16 Thread Jay Lau
Public bug reported:

I was trying to add a JSON field to the DB but forgot to dump the JSON to a
string, and the Nova API reported the following error.

2013-12-17 12:37:51.615 TRACE object Traceback (most recent call last):
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/base.py", line 70, in setter
2013-12-17 12:37:51.615 TRACE object field.coerce(self, name, value))
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 166, in coerce
2013-12-17 12:37:51.615 TRACE object return self._type.coerce(obj, attr, 
value)
2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 218, in coerce
2013-12-17 12:37:51.615 TRACE object raise ValueError(_('A string is 
required here, not %s'),
2013-12-17 12:37:51.615 TRACE object ValueError: (u'A string is required here, 
not %s', 'dict') <<<

The error should be 
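
For illustration, a sketch of the formatting issue (simplified to plain
ValueError without the gettext _() wrapper; this is not necessarily the exact
fix intended):

    value = {'some': 'dict'}

    # What the code does today: message and argument stay an uninterpolated tuple.
    print(ValueError('A string is required here, not %s', type(value).__name__))
    # -> ('A string is required here, not %s', 'dict')

    # Presumably intended: interpolate the type name into the message first.
    print(ValueError('A string is required here, not %s' % type(value).__name__))
    # -> A string is required here, not dict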

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261621

Title:
  nova api value error is not right

Status in OpenStack Compute (Nova):
  New

Bug description:
  I was trying to add a JSON field to the DB but forgot to dump the JSON to a
  string, and the Nova API reported the following error.

  2013-12-17 12:37:51.615 TRACE object Traceback (most recent call last):
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/base.py", line 70, in setter
  2013-12-17 12:37:51.615 TRACE object field.coerce(self, name, value))
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 166, in coerce
  2013-12-17 12:37:51.615 TRACE object return self._type.coerce(obj, attr, 
value)
  2013-12-17 12:37:51.615 TRACE object   File 
"/opt/stack/nova/nova/objects/fields.py", line 218, in coerce
  2013-12-17 12:37:51.615 TRACE object raise ValueError(_('A string is 
required here, not %s'),
  2013-12-17 12:37:51.615 TRACE object ValueError: (u'A string is required 
here, not %s', 'dict') <<<

  The error should be 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260432] Re: nova-compute can't be setting up during install on trusty

2013-12-16 Thread Ming Lei
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260432

Title:
  nova-compute can't be setting up during install on trusty

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  1, during install:
  Setting up nova-compute (1:2014.1~b1-0ubuntu2) ...
  start: Job failed to start
  invoke-rc.d: initscript nova-compute, action "start" failed.
  dpkg: error processing nova-compute (--configure):
   subprocess installed post-installation script returned error exit status 1
  Setting up nova-compute-kvm (1:2014.1~b1-0ubuntu2) ...
  Errors were encountered while processing:
   nova-compute
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  2, the system is latest trusty:
  ming@arm64:~$ sudo apt-get dist-upgrade
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Calculating upgrade... Done
  The following packages were automatically installed and are no longer 
required:
dnsmasq-utils iputils-arping libboost-system1.53.0 libboost-thread1.53.0
libclass-isa-perl libopts25 libswitch-perl ttf-dejavu-core
  Use 'apt-get autoremove' to remove them.
  The following packages have been kept back:
checkbox-cli
  0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

  3, it looks like /usr/bin/nova-compute can't be started:
  ming@arm64:~$ nova-compute 
  2013-12-12 17:57:19.992 13823 ERROR stevedore.extension [-] Could not load 
'file': (WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 ERROR stevedore.extension [-] (WebOb 1.3 
(/usr/lib/python2.7/dist-packages), Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension Traceback (most 
recent call last):
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 134, in 
_load_plugins
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension invoke_kwds,
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 146, in 
_load_one_plugin
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension plugin = ep.load()
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2107, in load
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension if require: 
self.require(env, installer)
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2120, in require
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
working_set.resolve(self.dist.requires(self.extras),env,installer)))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 580, in resolve
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension raise 
VersionConflict(dist,req) # XXX put more info here
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension VersionConflict: 
(WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
  2013-12-12 17:57:20.133 13823 ERROR nova.virt.driver [-] Compute driver 
option required, but not specified

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261611] Re: Make sure report_interval is less than service_down_time

2013-12-16 Thread Liyingjun
Duplicate bug with https://bugs.launchpad.net/nova/+bug/1255685

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261611

Title:
  Make sure report_interval is less than service_down_time

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If service_down_time is less than report_interval, services will
  routinely be considered down, because they report in too rarely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261611] [NEW] Make sure report_interval is less than service_down_time

2013-12-16 Thread ling-yun
Public bug reported:

If service_down_time is less than report_interval, services will
routinely be considered down, because they report in too rarely.
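
One possible sanity check along these lines (a sketch only, reusing the
existing option names; not an actual Nova patch):

    import logging

    LOG = logging.getLogger(__name__)

    def check_service_timing(report_interval, service_down_time):
        # Warn when the configuration guarantees false "service down" reports.
        if report_interval >= service_down_time:
            LOG.warning("report_interval (%ds) should be less than "
                        "service_down_time (%ds); services may always be "
                        "reported as down.", report_interval, service_down_time)

    check_service_timing(report_interval=10, service_down_time=60)   # fine
    check_service_timing(report_interval=120, service_down_time=60)  # warns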

** Affects: nova
 Importance: Undecided
 Assignee: ling-yun (zengyunling)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => ling-yun (zengyunling)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261611

Title:
  Make sure report_interval is less than service_down_time

Status in OpenStack Compute (Nova):
  New

Bug description:
  If service_down_time is less than report_interval, services will
  routinely be considered down, because they report in too rarely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261593] Re: tests/ml2: GreTypeTest missing self.addCleanup(db.clear_db)

2013-12-16 Thread Isaku Yamahata
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261593

Title:
  tests/ml2: GreTypeTest missing self.addCleanup(db.clear_db)

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  GreTypeTest.setUp() in test_type_gre.py should have
  self.addCleanup(db.clear_db)
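
  A sketch of the suggested change (the import paths here are assumptions
  based on the module names in the bug text, not verified against the tree):

      from neutron.db import api as db
      from neutron.plugins.ml2.drivers import type_gre
      from neutron.tests import base

      class GreTypeTest(base.BaseTestCase):

          def setUp(self):
              super(GreTypeTest, self).setUp()
              self.driver = type_gre.GreTypeDriver()
              # The missing cleanup: reset the DB state after every test.
              self.addCleanup(db.clear_db)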

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261298] Re: The [database] section in neutron.conf should have a comment that it goes in the core plugin .ini file

2013-12-16 Thread Henry Gessau
Moving from devstack to neutron (low priority)

** Summary changed:

- The 'connection' param in neutron.conf is in the default value always
+ The [database] section in neutron.conf should have a comment that it goes in 
the core plugin .ini file

** Project changed: devstack => neutron

** Changed in: neutron
 Assignee: Kiyohiro Adachi (adachi) => (unassigned)

** Changed in: neutron
   Status: In Progress => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261298

Title:
  The [database] section in neutron.conf should have a comment that it
  goes in the core plugin .ini file

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  No one sets the 'connection' param at the '[database]' section in
  neutron.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261298] [NEW] The [database] section in neutron.conf should have a comment that it goes in the core plugin .ini file

2013-12-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

No one sets the 'connection' param at the '[database]' section in
neutron.conf.

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
The [database] section in neutron.conf should have a comment that it goes in 
the core plugin .ini file
https://bugs.launchpad.net/bugs/1261298
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261598] [NEW] VPNaaS doesn't consider subnet interface or router gateway removal operation

2013-12-16 Thread berlin
Public bug reported:

I met one problem while fixing bug #1255442. In routed service insertion
implementations such as VPNaaS, there is a router and subnet validation check
before creating a vpnservice, and also a router-in-use check before deleting a
router.

But these checks do not consider removal of a subnet interface or of the
router gateway. The question is whether it is okay to put all such checks into
l3_db.py, or whether there is some more appropriate way to handle the problem.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-core vpnaas

** Tags added: vpnaas

** Tags added: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261598

Title:
  VPNaaS doesn't consider subnet interface or router gateway removal
  operation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I met one problem while fixing bug #1255442. In routed service insertion
  implementations such as VPNaaS, there is a router and subnet validation
  check before creating a vpnservice, and also a router-in-use check before
  deleting a router.

  But these checks do not consider removal of a subnet interface or of the
  router gateway. The question is whether it is okay to put all such checks
  into l3_db.py, or whether there is some more appropriate way to handle the
  problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261585] [NEW] ConfigDrive metadata is incorrectly generated on Windows

2013-12-16 Thread Alessandro Pilotti
Public bug reported:

Files must be written with "wb" instead of "w" in order to support
multiple platforms:

https://github.com/openstack/nova/blob/b823db737855149ba847e5b19df9232f109f6001/nova/virt/configdrive.py#L92
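
A standalone illustration of why the mode matters (not the Nova code itself):
in text mode Windows translates "\n" to "\r\n", corrupting the generated
metadata files, while binary mode writes the bytes verbatim on every platform.

    data = '{"uuid": "some-instance"}\n'

    # Portable: no newline translation on Windows.
    with open("meta_data.json", "wb") as f:
        f.write(data.encode("utf-8"))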

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261585

Title:
  ConfigDrive metadata is incorrectly generated on Windows

Status in OpenStack Compute (Nova):
  New

Bug description:
  Files must be written with "wb" instead of "w" in order to support
  multiple platforms:

  
https://github.com/openstack/nova/blob/b823db737855149ba847e5b19df9232f109f6001/nova/virt/configdrive.py#L92

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261581] [NEW] tempest.api.compute.v3.images.test_images.ImagesV3TestXML.test_create_image_from_stopped_server fails

2013-12-16 Thread John Dickinson
Public bug reported:

tempest.api.compute.v3.images.test_images.ImagesV3TestXML.test_create_image_from_stopped_server
fails with the following message:

(logs from http://logs.openstack.org/25/62425/1/gate/gate-tempest-dsvm-
full/d606295/console.html)

2013-12-16 23:21:26.365 | Traceback (most recent call last):
2013-12-16 23:21:26.365 |   File 
"tempest/api/compute/v3/images/test_images.py", line 91, in 
test_create_image_from_stopped_server
2013-12-16 23:21:26.365 | wait_until='active')
2013-12-16 23:21:26.365 |   File "tempest/api/compute/base.py", line 279, in 
create_image_from_server
2013-12-16 23:21:26.365 | server_id, name)
2013-12-16 23:21:26.365 |   File 
"tempest/services/compute/v3/xml/servers_client.py", line 515, in create_image
2013-12-16 23:21:26.366 | str(Document(post_body)), self.headers)
2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 302, in 
post
2013-12-16 23:21:26.366 | return self.request('POST', url, headers, body)
2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 436, in 
request
2013-12-16 23:21:26.366 | resp, resp_body)
2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 491, in 
_error_checker
2013-12-16 23:21:26.366 | raise exceptions.Conflict(resp_body)
2013-12-16 23:21:26.367 | Conflict: An object with that identifier already 
exists
2013-12-16 23:21:26.367 | Details: {'message': "Cannot 'create_image' while 
instance is in task_state powering-off", 'code': '409'}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261581

Title:
  
tempest.api.compute.v3.images.test_images.ImagesV3TestXML.test_create_image_from_stopped_server
  fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.api.compute.v3.images.test_images.ImagesV3TestXML.test_create_image_from_stopped_server
  fails with the following message:

  (logs from http://logs.openstack.org/25/62425/1/gate/gate-tempest-
  dsvm-full/d606295/console.html)

  2013-12-16 23:21:26.365 | Traceback (most recent call last):
  2013-12-16 23:21:26.365 |   File 
"tempest/api/compute/v3/images/test_images.py", line 91, in 
test_create_image_from_stopped_server
  2013-12-16 23:21:26.365 | wait_until='active')
  2013-12-16 23:21:26.365 |   File "tempest/api/compute/base.py", line 279, in 
create_image_from_server
  2013-12-16 23:21:26.365 | server_id, name)
  2013-12-16 23:21:26.365 |   File 
"tempest/services/compute/v3/xml/servers_client.py", line 515, in create_image
  2013-12-16 23:21:26.366 | str(Document(post_body)), self.headers)
  2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 302, 
in post
  2013-12-16 23:21:26.366 | return self.request('POST', url, headers, body)
  2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-12-16 23:21:26.366 | resp, resp_body)
  2013-12-16 23:21:26.366 |   File "tempest/common/rest_client.py", line 491, 
in _error_checker
  2013-12-16 23:21:26.366 | raise exceptions.Conflict(resp_body)
  2013-12-16 23:21:26.367 | Conflict: An object with that identifier already 
exists
  2013-12-16 23:21:26.367 | Details: {'message': "Cannot 'create_image' while 
instance is in task_state powering-off", 'code': '409'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-16 Thread Alan Pevec
** Changed in: ceilometer/havana
   Status: Fix Committed => Fix Released

** Changed in: heat/havana
   Status: Fix Committed => Fix Released

** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  "topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion."

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261572] Re: Keystoneclient import fails in python3

2013-12-16 Thread Georgy Okrokvertskhov
** Also affects: solum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261572

Title:
  Keystoneclient import fails in python3

Status in OpenStack Identity (Keystone):
  New
Status in Solum - Application Lifecycle Management:
  New

Bug description:
  When an application imports keystoneclient.middleware.auth_token, the import
  fails because the xmlrpclib module was renamed in Python 3.

  Here is a stack trace:
  3 | Traceback (most recent call last):
  2013-11-28 18:27:12.653 |   File "./solum/tests/api/base.py", line 37, in 
setUp
  2013-11-28 18:27:12.653 | 'debug': False
  2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/testing.py",
 line 35, in load_test_app
  2013-11-28 18:27:12.653 | return TestApp(load_app(config))
  2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/core.py",
 line 157, in load_app
  2013-11-28 18:27:12.654 | module = __import__(package_name, 
fromlist=['app'])
  2013-11-28 18:27:12.654 |   File "./solum/api/app.py", line 20, in 
  2013-11-28 18:27:12.654 | from solum.api import auth
  2013-11-28 18:27:12.654 |   File "./solum/api/auth.py", line 19, in 
  2013-11-28 18:27:12.654 | from keystoneclient.middleware import auth_token
  2013-11-28 18:27:12.654 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/middleware/auth_token.py",
 line 161, in 
  2013-11-28 18:27:12.654 | from keystoneclient.openstack.common import 
jsonutils
  2013-11-28 18:27:12.655 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py",
 line 42, in 
  2013-11-28 18:27:12.655 | import xmlrpclib
  2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1261572/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261572] [NEW] Keystoneclient import fails in python3

2013-12-16 Thread Georgy Okrokvertskhov
Public bug reported:

When an application imports keystoneclient.middleware.auth_token, the import
fails because the xmlrpclib module was renamed in Python 3.

Here is a stack trace:
3 | Traceback (most recent call last):
2013-11-28 18:27:12.653 |   File "./solum/tests/api/base.py", line 37, in setUp
2013-11-28 18:27:12.653 | 'debug': False
2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/testing.py",
 line 35, in load_test_app
2013-11-28 18:27:12.653 | return TestApp(load_app(config))
2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/core.py",
 line 157, in load_app
2013-11-28 18:27:12.654 | module = __import__(package_name, 
fromlist=['app'])
2013-11-28 18:27:12.654 |   File "./solum/api/app.py", line 20, in 
2013-11-28 18:27:12.654 | from solum.api import auth
2013-11-28 18:27:12.654 |   File "./solum/api/auth.py", line 19, in 
2013-11-28 18:27:12.654 | from keystoneclient.middleware import auth_token
2013-11-28 18:27:12.654 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/middleware/auth_token.py",
 line 161, in 
2013-11-28 18:27:12.654 | from keystoneclient.openstack.common import 
jsonutils
2013-11-28 18:27:12.655 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py",
 line 42, in 
2013-11-28 18:27:12.655 | import xmlrpclib
2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'
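
The usual Python 2/3 compatibility shim for the renamed module looks like this
(a generic workaround, not necessarily the fix applied in keystoneclient):

    try:
        import xmlrpclib                    # Python 2
    except ImportError:
        import xmlrpc.client as xmlrpclib   # Python 3

    print(xmlrpclib.MAXINT)  # works under either interpreter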

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: solum
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261572

Title:
  Keystoneclient import fails in python3

Status in OpenStack Identity (Keystone):
  New
Status in Solum - Application Lifecycle Management:
  New

Bug description:
  When an application imports keystoneclient.middleware.auth_token, the import
  fails because the xmlrpclib module was renamed in Python 3.

  Here is a stack trace:
  3 | Traceback (most recent call last):
  2013-11-28 18:27:12.653 |   File "./solum/tests/api/base.py", line 37, in 
setUp
  2013-11-28 18:27:12.653 | 'debug': False
  2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/testing.py",
 line 35, in load_test_app
  2013-11-28 18:27:12.653 | return TestApp(load_app(config))
  2013-11-28 18:27:12.653 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/pecan/core.py",
 line 157, in load_app
  2013-11-28 18:27:12.654 | module = __import__(package_name, 
fromlist=['app'])
  2013-11-28 18:27:12.654 |   File "./solum/api/app.py", line 20, in 
  2013-11-28 18:27:12.654 | from solum.api import auth
  2013-11-28 18:27:12.654 |   File "./solum/api/auth.py", line 19, in 
  2013-11-28 18:27:12.654 | from keystoneclient.middleware import auth_token
  2013-11-28 18:27:12.654 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/middleware/auth_token.py",
 line 161, in 
  2013-11-28 18:27:12.654 | from keystoneclient.openstack.common import 
jsonutils
  2013-11-28 18:27:12.655 |   File 
"/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py",
 line 42, in 
  2013-11-28 18:27:12.655 | import xmlrpclib
  2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1261572/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241713] Re: lbaas pool tcp

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241713

Title:
  lbaas pool tcp

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  When adding a lbaas pool, there is an option for http and https, but
  not tcp, though you can use tcp via the cli.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243298] Re: only meters associated with the first instance reported appear in Metric dropdown list

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1243298

Title:
  only meters associated with the first instance reported appear in
  Metric dropdown list

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  The Compute (Nova) meters list is generally incomplete in the Metric
  dropdown on the Stats tab of the admin/Resource Usage panel.

  The content of this dropdown list also changes from time to time.

  This is because the list is populated from the meter links in the
  *first* resource reported by the ceilometer API with availability zone
  metadata (i.e. indicating a nova instance).

  However, the set of meters associated with instances is not uniform in
  ceilometer (i.e. a certain meter gathered for one resource may not
  necessarily be gathered for another).

  Also the content of this list changes from time to time, depending on
  the order of sample acquisition (as a different instance, with a
  different set of associated meters, may be at the head of the
  resources list reported by the ceilometer API).

  For example, once instances of different flavors have been spun up in
  an openstack deployment, the Metric list will not contain all the
  possible compute meters, nor will it even always be incorrect in the
  same way. Only the 'instance:' meter relating to the first
  instance appears in the dropdown, and the identity of this first
  instance may change over time.

  So at any given point in time, only a subset of the compute meters are
  accessible via the dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1243298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241467] Re: precision of floating point metering stats is discarded unnecessarily

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241467

Title:
  precision of floating point metering stats is discarded unnecessarily

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Ceilometer returns aggregated statistical values as float, whereas the
  'resource usage' panel narrows to int before inserting these data into
  the line chart.

  For meters defined over a narrow range (such as cpu_util ranging from
  0.0% to 100.0%) this has the effect of unnaturally smoothening the
  graph by discarding precision.

  If it really was crucial that the line chart contain only ints, then
  the float->int conversion should be a round:

i = int(round(f, 0))

  as opposed to narrowing cast:

i = int(f)

  However, AFAICS there's no reason why these raw data couldn't be
  represented directly as floats in the line chart.
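
  A quick illustration of the precision loss on cpu_util-like values (made-up
  numbers, plain Python):

      samples = [87.6, 12.4, 33.9]   # cpu_util percentages from ceilometer

      narrowed = [int(f) for f in samples]           # [87, 12, 33] - truncates
      rounded = [int(round(f, 0)) for f in samples]  # [88, 12, 34] - at least rounds
      print(narrowed, rounded, samples)  # plotting 'samples' directly loses nothing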

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241209] Re: LBaaS VIP creation via horizon requires a mandatory "Connection Limit" argument although it's only optional in the cli command

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241209

Title:
  LBaaS VIP creation via horizon requires a mandatory "Connection Limit"
  argument although it's only optional in the cli command

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Version
  ===
  Havana on rhel

  
  Description
  ===
  The "Connection Limit" argument in the Add VIP dialog should not be mandatory.

  Here is the cli command that performs the same:

  $ neutron lb-vip-create 
  usage: neutron lb-vip-create [-h] [-f {shell,table}] [-c COLUMN]
   [--variable VARIABLE] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--address ADDRESS]
   [--admin-state-down]
   [--connection-limit CONNECTION_LIMIT]
   [--description DESCRIPTION] --name NAME
   --protocol-port PROTOCOL_PORT --protocol PROTOCOL
   --subnet-id SUBNET_ID
   pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226910] Re: I18n: the downloaded CVS summary is in mixed languages

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1226910

Title:
  I18n: the downloaded CVS summary is in mixed languages

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Hi,

  When I set the language to Chinese and downloaded the CSV summary from the
  "Overview" page, I found the CSV file was in two languages. The start date
  and end date of the period were in Chinese, and "Project ID" was translated
  into Chinese. Everything else was still in English.

  Here is an example
  Usage Report For Period:,九月. 17 2013,九月. 17 2013
  项目ID:,d2c23998b9364d7d8c5242536f117e1f
  Total Active VCPUs:,0
  CPU-HRs Used:,0.00
  Total Active Ram (MB):,0
  Total Disk Size:,0
  Total Disk Usage:,0.00
  Instance Name,VCPUs,Ram (MB),Disk (GB),Usage (Hours),Uptime(Seconds),State

  Regards
  Daisy

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1226910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226829] Re: Password Change needs to logout current user

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1226829

Title:
  Password Change needs to logout current user

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Password Change needs to logout current user after change is applied

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1226829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235358] Re: invalid volume when source image virtual size is bigger than the requested size

2013-12-16 Thread Alan Pevec
** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235358

Title:
  invalid volume when source image virtual size is bigger than the
  requested size

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I created a volume from an image and booted an instance from it.
  When the instance boots I get this: 'selected cylinder exceeds maximum
  supported by bios'.
  If I boot an instance directly from the same image it boots with no issues,
  so the problem is only with booting from the volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1235358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239927] Re: i18n: "Filter" in "Flavor Access" tab of "Create Flavor" workflow is not translatable

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1239927

Title:
  i18n: "Filter" in "Flavor Access" tab of "Create Flavor" workflow is
  not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  "Flavor Access" tab of "Create Flavor" form in the Admin dashboard has
  small search windows in "All Projects" and "Selected Projects"
  repectively. The string "Filter" in the search windows are not
  translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1239927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241004] Re: The "Weight" parameter in Horizon's LBaaS member creation dialog is mandatory while it's only optional in the cli command

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241004

Title:
  The "Weight" parameter in Horizon's LBaaS member creation dialog is
  mandatory while it's only optional in the cli command

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Version
  ===
  Havana on rhel

  Description
  ===
  The "Weight" parameter in new member creation should be only optional and not 
mandatory.

  
  # neutron lb-member-create 
  usage: neutron lb-member-create [-h] [-f {shell,table}] [-c COLUMN]
  [--variable VARIABLE] [--prefix PREFIX]
  [--request-format {json,xml}]
  [--tenant-id TENANT_ID] [--admin-state-down]
  [--weight WEIGHT] --address ADDRESS
  --protocol-port PROTOCOL_PORT
  pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243526] Re: ./templates/base.html.c:6: warning: unterminated string literal

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1243526

Title:
  ./templates/base.html.c:6: warning: unterminated string literal

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  yanglei@yanglei-ThinkCentre-M58:~/community/horizon$ ./run_tests.sh 
--makemessages -N
  horizon: /home/yanglei/installed_openstack_devstack/pbr/pbr/version.py:21: 
UserWarning: Module openstack_dashboard was already imported from 
/home/yanglei/community/horizon/openstack_dashboard/__init__.pyc, but 
/home/yanglei/installed_openstack_devstack/horizon is being added to sys.path
import pkg_resources
  WARNING:root:No local_settings file found.
  processing language en
  horizon javascript: 
/home/yanglei/installed_openstack_devstack/pbr/pbr/version.py:21: UserWarning: 
Module openstack_dashboard was already imported from 
/home/yanglei/community/horizon/openstack_dashboard/__init__.pyc, but 
/home/yanglei/installed_openstack_devstack/horizon is being added to sys.path
import pkg_resources
  WARNING:root:No local_settings file found.
  processing language en
  Error: errors happened while running xgettext on base.html
  ./templates/base.html.c:6: warning: unterminated string literal

  yanglei@yanglei-ThinkCentre-M58:~/community/horizon$ git branch
  * master
  yanglei@yanglei-ThinkCentre-M58:~/community/horizon$

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1243526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249279] Re: Resource Usage Page table views shows statistics in a wrong way

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1249279

Title:
  Resource Usage Page table views shows statistics in a wrong way

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  It was pointed out to me that some of the table columns are displayed in the
  wrong way; also, the table heading 'Average 30 days' won't be suitable for
  all statistics.

  As I look back on this, almost every column has to be considered separately.
  For example, an average over some time span makes sense only for the gauge
  (and maybe delta) type, whereas for the cumulative type max makes much more
  sense. Also, if I want to see the total for a particular timeframe, I have
  to do some extra computation (like max - min) to see e.g.
  network.incoming.bytes for the last month (the max alone is a total over all
  time).

  There is e.g. *storage.objects.outgoing.bytes=Delta* but
  *network.outgoing.bytes=Cumulative*, so there can't be a unified approach
  for getting an average over some time from them. Not sure why it is like
  that, but I am sure it has a good reason. :-)

  These table stats will be enhanced with sparklines, which will make them
  much more readable.

  The solution
  ==

  Here comes the list of all used meters and description how each meter
  should be properly displayed:

  Global disk usage
  

  "disk.read.bytes",
  "disk.read.requests",
  "disk.write.bytes",
  "disk.write.requests"

  All of the above are cumulative. The best choice here is to show 'total for
  the last 30 days, aggregated by project'. That means loading all statistics
  grouped by resource, computing (max - min) for each resource (which gives
  the total for the time period for that one resource, i.e. disk), and then
  summing them per project. Alternatively an average of them could be shown;
  not sure which is better. A sketch of this aggregation follows.
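
  (Plain Python over made-up per-resource statistics; not the actual
  Horizon/Ceilometer code.)

      from collections import defaultdict

      # (project_id, resource_id, min_sample, max_sample) for the chosen period
      stats = [
          ("proj-a", "disk-1", 100, 600),
          ("proj-a", "disk-2", 0, 250),
          ("proj-b", "disk-3", 50, 70),
      ]

      totals = defaultdict(int)
      for project, resource, min_val, max_val in stats:
          totals[project] += max_val - min_val  # per-resource total for the period

      print(dict(totals))  # {'proj-a': 750, 'proj-b': 20}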

  GlobalNetworkTrafficUsage
  ---

  "network.incoming.bytes"
  "network.incoming.packets"
  "network.outgoing.bytes"
  "network.outgoing.packets"

  The same approach as for Global disk usage will be used.

  GlobalNetworkUsage
  ------------------

  "network"
  "network_create"
  "subnet"
  "subnet_create"
  "port"
  "port_create"
  "router"
  "router_create"
  "ip_floating"
  "ip_floating_create"

  They all follow the pattern of these two:

  "network" - Gauge - Duration - I suspect it doesn't return the time
  up, but rather 1 or 0, depending whether the network was up or down
  during sampling. Not sure what to show here. Maybe counting a duration
  of each network of the tenant in last 30 days and then show average
  up-time of them?

  "network_create" - Creation requests: I suspect the samples doesn't
  show e.g. number of network_creates but there is a one record for each
  network created. So this should show rather count then avg. The field
  would show 'total in 30 days aggregated by tenant'

  GlobalObjectStoreUsage
  ----------------------

  "storage.objects"
  "storage.objects.size"
  "storage.objects.incoming.bytes"
  "storage.objects.outgoing.bytes"

  All of the above are either delta or gauge, so it makes sense to leave
  them as 'last 30 days average aggregated by tenant' as it is now.

  Confirmation from Ceilometer
  ----------------------------

  Not sure if I understand all of the meters correctly; eglynn, please
  could you confirm or correct the above?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1249279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243796] Re: charting of meters for all resource types other than instance is broken when not grouped by project

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1243796

Title:
  charting of meters for all resource types other than instance is
  broken when not grouped by project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  The 'Group by' dropdown on the Stats tab of the admin/Resource Usage
  panel allows the statistics to be grouped by either Project or '--'.

  From the code, it seems the intent of the '--' option is to group by
  resource ID, as opposed to project ID.

  This works as expected for the Compute (Nova) meters, but is broken
  for all the meters associated with any other resource type (glance
  images, swift objects, etc.)

  This is because the strategy used to query statistics by resource is
  to first find all the resources of the relevant type, then iterate
  over the resources separately querying for the statistics associated
  with the meter in question for each individual resource.

  The problem is that this initial query to discover the relevant
  resources is hard-coded in the group-by resource case to only ever
  identify instances:

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/metering/views.py#L124

  As a result, the panel attempts to retrieve statistics for non-
  instance meters (e.g. 'image.download' or 'storage.objects')
  constrained to a resource ID associated with an instance. All of those
  queries are guaranteed never to yield any data.

  Instead, this iterative strategy should be replaced with a *single*
  statistics query with the 'groupby=resource_id' param set.
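
  For illustration, a hedged sketch of such a single group-by query with python-ceilometerclient
  (the endpoint, the token and the availability of the groupby argument in statistics.list are
  assumptions; the Horizon-side wrapper call may differ):

    from ceilometerclient import client

    # Hedged sketch: one statistics query grouped by resource_id instead of
    # iterating over instance resources one by one.
    cclient = client.get_client(2,
                                ceilometer_url='http://localhost:8777',
                                os_auth_token='ADMIN_TOKEN')

    stats = cclient.statistics.list(
        meter_name='storage.objects',
        q=[{'field': 'timestamp', 'op': 'ge', 'value': '2013-11-01T00:00:00'}],
        groupby=['resource_id'])

    for s in stats:
        print('%s: avg=%s max=%s' % (s.groupby, s.avg, s.max))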

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1243796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250942] Re: "Tenant" should be "Project" (Resource Usage panel)

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1250942

Title:
  "Tenant" should be "Project" (Resource Usage panel)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  In the 'Resource Usage' panel in the Admin dashboard (available when
  Ceilometer is installed),  many column headers read "Tenant" when it's
  agreed that the term "Project" should be used going forward. The
  verbose_name for these columns should be fixed.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/metering/tables.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1250942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243241] Re: missing hover hint for instance: meter in Metric dropdown list

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1243241

Title:
  missing hover hint for instance: meter in Metric dropdown list

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  The Metric dropdown list on the Stats tab of the admin/Resource Usage
  panel has hover hints with a short description of each meter listed.

  However, the hover hint for the instance:<type> meter is missing.

  This is because the tab context_data list of hover hints is seemingly
  built up from a copy'n'paste from the ceilometer documentation:

  http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova

  in particular, using the literal 'instance:<type>' as the meter name.

  Whereas in the ceilometer docco, 'instance:<type>' is not intended to
  be interpreted as a literal meter name. Instead the '<type>' is
  intended to act as a placeholder for the actual instance type, i.e.
  the nova flavor name of 'm1.tiny', 'm1.small', etc.

  So this non-existent 'instance:<type>' meter should be replaced with a
  set of hints for meters named with an 'instance:' prefix and a flavor
  name suffix, for each of the current set of flavors known to nova
  (i.e. both standard and custom instance types).
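
  For illustration, a hedged sketch of how such per-flavor hint entries could be generated
  (hypothetical helper and description text; it only assumes a nova client with flavors.list()):

    # Hedged sketch: build one hover hint per 'instance:<flavor>' meter from
    # the flavors nova currently knows about, instead of the literal
    # 'instance:<type>' placeholder copied from the documentation.
    def instance_type_meter_hints(nova_client):
        hints = {}
        for flavor in nova_client.flavors.list():
            meter = 'instance:%s' % flavor.name
            hints[meter] = 'Existence of an instance of type %s' % flavor.name
        return hints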

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1243241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250554] Re: iso8601 debug message is annoying in dashboard unit test

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1250554

Title:
  iso8601 debug message is annoying in dashboard unit test

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Log messages from iso8601 0.1.8 are not masked in the openstack_dashboard
  unit tests and are output to the console. This makes it hard to track the
  test progress.

  We need to control the debug level of iso8601 module in settings.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1250554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-16 Thread Alan Pevec
** Changed in: ceilometer/havana
   Status: Fix Committed => Fix Released

** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

** Changed in: heat/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service "my-topic".   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_489a3178fc704123b0e5e2fbee125247"}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test

[Yahoo-eng-team] [Bug 1252082] Re: Cannot assign different translations for present and past message of BatchAction

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1252082

Title:
  Cannot assign different translations for present and past message of
  BatchAction

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Message strings of BatchAction (including DeleteAction) are generated
  in _conjugate() of BatchAction in horizon/tables/actions.py.
  _conjugate() generates both the "present" message string (which is used
  as the table action name and in the confirm dialog) and the "past"
  message string (which is displayed as a popup message after the
  specified operation is completed).

  In some languages (at least Japanese) we need to use different
  translated strings for the "present" and "past" messages, but the same
  string "%(action)s %(data_type)s" is used to generate both. As a
  result, either the "present" or the "past" message string may end up
  with an odd translation (e.g., the Havana Horizon translation in
  Japanese has some odd strings due to this issue).

  At the very least, it would be better if the "present" and "past"
  message strings could be distinguished.

  From a translation perspective, it would be ideal to define "present"
  and "past" strings for each action class rather than generating them
  in _conjugate(). Translators need to use different strings based on
  the action type. That will be filed as a separate bug.

  My idea is to use a contextual marker [1] to distinguish them.
  I would like to hear opinions before proposing a patch.

  @@ -557,8 +558,11 @@ class BatchAction(Action):
               data_type = self.data_type_singular
           else:
               data_type = self.data_type_plural
  -        return _("%(action)s %(data_type)s") % {'action': action,
  -                                                'data_type': data_type}
  +        if action_type == "past":
  +            msgstr = pgettext_lazy("past", "%(action)s %(data_type)s")
  +        else:
  +            msgstr = pgettext_lazy("present", "%(action)s %(data_type)s")
  +        return msgstr % {'action': action, 'data_type': data_type}

       def action(self, request, datum_id):
           """

  After this, we can get the following entries in the PO file and assign
  different translations to "past" and "present" strings.

  #: tables/actions.py:562
  #, python-format
  msgctxt "past"
  msgid "%(action)s %(data_type)s"
  msgstr ""

  #: tables/actions.py:564
  #, python-format
  msgctxt "present"
  msgid "%(action)s %(data_type)s"
  msgstr ""

  
  [1] 
https://docs.djangoproject.com/en/dev/topics/i18n/translation/#contextual-markers

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247675] Re: [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon (CVE-2013-6858)

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1247675

Title:
  [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon
  (CVE-2013-6858)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  -----BEGIN PGP SIGNED MESSAGE-----
  Hash: SHA512

  Hello,

  My name is Chris Chapman, I am an Incident Manager with Cisco PSIRT.

  I would like to report the following XSS issue found in the OpenStack
  WebUI that was reported to Cisco.

  The details are as follows:

  The OpenStack web user interface is vulnerable to XSS:

  While launching (or editing) an instance, injecting 

[Yahoo-eng-team] [Bug 1252074] Re: Some "Working" dialogs are not translatable

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1252074

Title:
  Some "Working" dialogs are not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Some "Working" dialogs are not translatable.

  1. Go to "Networks" panel
  2. Click "Create Network"
  3. Click "Create" button
  4. "Working" modal spinner is displayed for a while, but the displayed 
message is not displayed.

  We need to update horizon/static/horizon/js/horizon.modals.js

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250029] Re: Default port for The MS SQL Security Group is 1433 instead of 1443

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1250029

Title:
  Default port for The MS SQL Security Group is 1433 instead of 1443

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  The default port for the MS SQL Security group is 1433 instead of
  1443.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1250029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252881] Re: detach volume dialog contains escaped html

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1252881

Title:
  detach volume dialog contains escaped html

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  It looks like I went a little too far when cleaning up XSS problems.
  If you go to volumes panel, bring up assignments page and detach volume,
  you see an "Are you sure" dialog that contains escaped HTML.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1252881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254026] Re: Subnet / Subnet details not marked as translatable

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1254026

Title:
  Subnet / Subnet details not marked as translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  When trying to create or update a subnet from the Networks details
  page, the tab titles 'Subnet' and 'Subnet details' are not showing as
  translated.

  This is because when defining the names, the leading underscore that
  marks a string as translatable was missed in several places, cf.
  
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/networks/subnets/workflows.py

  Steps to reproduce:
  1. On the left side panel, select the Networks menu.
  2. Click on a network name to open the Network Details page.
  3. Click the "Create Subnet" or "Edit Subnet" button and check the tabs

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1254026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254049] Re: English string for "Injected File Path Bytes" is wrong

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1254049

Title:
  English string for "Injected File Path Bytes" is wrong

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  It looks like the string for injected_file_path_bytes was accidentally
  copied over from the Injected File Content Bytes, leading to a
  confusing display in the Default Quotas page where the injected file
  content bytes limit reads as both 10240 and 255.

  The fix will be safe to backport for other languages too, as
  translation for the correct string is already available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1254049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255004] Re: I18n: Localization of the role "Member"

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1255004

Title:
  I18n: Localization of the role "Member"

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack I18n & L10n:
  New

Bug description:
  Hi,

  A very strange thing happens to the role "Member" when I set Horizon
  to use my local language.

  In the dialog "Domain Groups", it is translated. But in the dialog of
  "Project Members" and "Project Groups", it is not translated.

  From my point of view, if we can localize role names, it will be
  wonderful. If we are not able to localize role names, it is
  acceptable. But we need to make them consistent.

  Hope somebody can take a look at this interesting issue.

  Thanks.
  Daisy

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1255004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258462] Re: Translation update for 2013.2.1 release

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258462

Title:
  Translation update for 2013.2.1 release

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  I18N team is improving translations and catching up with the upstream
  changes. We need to import the latest translations before releasing
  2013.2.1 update.

  As we discussed on the stable-maintenance list, the patch is handled
  as an FFE. I will propose a patch to import translations on Sunday
  (Dec 8th UTC).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1258462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-16 Thread Alan Pevec
** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  "topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion."

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.
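
  For illustration, a minimal sketch of that documented contract with the Havana-era
  oslo.messaging API ('from oslo import messaging'); the endpoint and server names are made up:

    from oslo.config import cfg
    from oslo import messaging

    class PingEndpoint(object):
        def ping(self, ctxt):
            return 'pong'

    transport = messaging.get_transport(cfg.CONF)

    # Two servers listening on the *same* topic, with different server names.
    # With a conforming driver, each call to the bare topic should be handled
    # by only one of them (round-robin), not by both.
    for name in ('server-01', 'server-02'):
        target = messaging.Target(topic='my-topic', server=name)
        server = messaging.get_rpc_server(transport, target, [PingEndpoint()],
                                          executor='blocking')
        # server.start() would normally run in its own thread here.

    client = messaging.RPCClient(transport, messaging.Target(topic='my-topic'))
    # client.call({}, 'ping')  # expected: exactly one server answers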

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261565] [NEW] nova.compute.utils.EventReporter drops exception messages on the floor

2013-12-16 Thread Nicolas Simonds
Public bug reported:

While reviewing the instance action logs, it was noticed that upon error
conditions, the instance_actions_events log separates the exception
message from the traceback, but there is no corresponding column in the
model to store it.

This appears to be a simple oversight and/or mistake in the
implementation of the InstanceActionEvent class.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261565

Title:
  nova.compute.utils.EventReporter drops exception messages on the floor

Status in OpenStack Compute (Nova):
  New

Bug description:
  While reviewing the instance action logs, it was noticed that upon
  error conditions, the instance_actions_events log separates the
  exception message from the traceback, but there is no corresponding
  column in the model to store it.

  This appears to be a simple oversight and/or mistake in the
  implementation of the InstanceActionEvent class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234857] Re: neutron unittest require minimum 4gb memory

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1234857

Title:
  neutron unittest require minimum 4gb memory

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron havana series:
  Fix Released

Bug description:
  tox -e py26

  The unit tests hang forever. Each test seems to take around 25 minutes
  to complete. Each test reports the following error, even though it
  passes. It looks like a regression caused by the fix for
  https://bugs.launchpad.net/neutron/+bug/1191768.

  https://github.com/openstack/neutron/commit/06f679df5d025e657b2204151688ffa60c97a3d3

  As per this fix, the default behavior of
  neutron.agent.rpc.report_state() was changed to use cast() to report
  the state back in JSON format. The original behavior was to use the
  call() method.

  Using the call() method by default might fix this problem.
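
  For illustration, a minimal sketch of what that could look like in neutron/agent/rpc.py
  (hedged: the class layout follows the traceback below, but the exact upstream fix may differ):

    from neutron.openstack.common.rpc import proxy

    # Hedged sketch: let callers opt into a synchronous call() instead of the
    # fire-and-forget cast() that the fake RPC backend fails to serialize.
    class PluginReportStateAPI(proxy.RpcProxy):
        BASE_RPC_API_VERSION = '1.0'

        def __init__(self, topic):
            super(PluginReportStateAPI, self).__init__(
                topic=topic, default_version=self.BASE_RPC_API_VERSION)

        def report_state(self, context, agent_state, use_call=False):
            msg = self.make_msg('report_state',
                                agent_state={'agent_state': agent_state})
            if use_call:
                return self.call(context, msg, topic=self.topic)
            return self.cast(context, msg, topic=self.topic)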

  ERROR:neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent:Failed 
reporting state!
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/csi-neutron-upstream/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 759, in _report_state
  self.agent_state)
File "/home/jenkins/workspace/csi-neutron-upstream/neutron/agent/rpc.py", 
line 74, in report_state
  return self.cast(context, msg, topic=self.topic)
File 
"/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/proxy.py",
 line 171, in cast
  rpc.cast(context, self._get_topic(topic), msg)
File 
"/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/__init__.py",
 line 158, in cast
  return _get_impl().cast(CONF, context, topic, msg)
File 
"/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py",
 line 166, in cast
  check_serialize(msg)
File 
"/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py",
 line 131, in check_serialize
  json.dumps(msg)
File "/usr/lib64/python2.6/json/__init__.py", line 230, in dumps
  return _default_encoder.encode(obj)
File "/usr/lib64/python2.6/json/encoder.py", line 367, in encode
  chunks = list(self.iterencode(o))
File "/usr/lib64/python2.6/json/encoder.py", line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 317, in _iterencode
  for chunk in self._iterencode_default(o, markers):
File "/usr/lib64/python2.6/json/encoder.py", line 323, in 
_iterencode_default
  newobj = self.default(o)
File "/usr/lib64/python2.6/json/encoder.py", line 344, in default
  raise TypeError(repr(o) + " is not JSON serializable")
  TypeError:  is 
not JSON serializable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1234857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209011] Re: L3 agent can't handle updates that change floating ip id

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1209011

Title:
  L3 agent can't handle updates that change floating ip id

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The problem occurs when a network update comes along where a new
  floating ip id carries the same (reused) IP address as an old floating
  IP.  In short, same address, different floating ip id.  We've seen
  this occur in testing where the floating ip free pool has gotten small
  and creates/deletes come quickly.

  What happens is that the agent skips calling "ip addr add" for the
  address, since the address already appears on the interface.  It then
  calls "ip addr del" to remove the address from the qrouter's gateway
  interface.  It shouldn't have done this, and the floating ip is left
  in a non-working state.

  Later, when the floating ip is disassociated from the port, the agent
  attempts to remove the address from the device which results in an
  exception which is caught above.  The exception prevents the iptables
  code from removing the DNAT address for the floating ip.

  2013-07-23 09:20:06.094 3109 DEBUG quantum.agent.linux.utils [-] Running 
command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 
'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62'] execute 
/usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:42
  2013-07-23 09:20:06.179 3109 DEBUG quantum.agent.linux.utils [-] 
  Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 
'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: Cannot assign requested address\n' execute 
/usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:59

  The DNAT entries in the iptables stay in a bad state from this point
  on sometimes preventing other floating ip addresses from being
  attached to the same instance.

  I have a fix for this that is currently in testing.  Will submit for
  review when it is ready.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1209011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210236] Re: traceback is suppressed when deploy.loadapp fails

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210236

Title:
  traceback is suppressed when deploy.loadapp fails

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  I saw this error when attempting to start a relatively recent quantum (setup.py 
--version says "2013.2.a782.ga36f237"):
   ERROR: Unable to load quantum from configuration file 
/etc/quantum/api-paste.ini.

  After running quantum-server through strace I determined that the
  error was due to missing mysql client libraries:

  ...
  open("/lib64/tls/libmysqlclient.so.18", O_RDONLY) = -1 ENOENT (No such 
file or directory)
  open("/lib64/libmysqlclient.so.18", O_RDONLY) = -1 ENOENT (No such file 
or directory)
  open("/usr/lib64/tls/libmysqlclient.so.18", O_RDONLY) = -1 ENOENT (No 
such file or directory)
  open("/usr/lib64/libmysqlclient.so.18", O_RDONLY) = -1 ENOENT (No such 
file or directory)
  munmap(0x7ffcd8132000, 34794)   = 0
  munmap(0x7ffccd147000, 2153456) = 0
  close(4)= 0
  close(3)= 0
  write(2, "ERROR: Unable to load quantum fr"..., 95ERROR: Unable to load 
quantum from configuration file /usr/local/csi/etc/quantum/api-paste.ini.) = 95
  write(2, "\n", 1 )   = 1
  rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x3eec80f500}, 
{0x3eef90db70, [], SA_RESTORER, 0x3eec80f500}, 8) = 0
  exit_group(1)

  
  The error message is completely bogus and the lack of traceback made it 
difficult to debug.

  This is a regression from commit 6869821, which was meant to fix
  related bug 1004062.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211915] Re: Connection to neutron failed: Maximum attempts reached

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1211915

Title:
  Connection to neutron failed: Maximum attempts reached

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  http://logs.openstack.org/64/41464/4/check/gate-tempest-devstack-vm-
  neutron/4288a6b/console.html

  Seen testing https://review.openstack.org/#/c/41464/

  2013-08-13 17:34:46.774 | Traceback (most recent call last):
  2013-08-13 17:34:46.774 |   File 
"tempest/scenario/test_network_basic_ops.py", line 176, in 
test_003_create_networks
  2013-08-13 17:34:46.774 | router = self._get_router(self.tenant_id)
  2013-08-13 17:34:46.775 |   File 
"tempest/scenario/test_network_basic_ops.py", line 141, in _get_router
  2013-08-13 17:34:46.775 | router.add_gateway(network_id)
  2013-08-13 17:34:46.775 |   File "tempest/api/network/common.py", line 78, in 
add_gateway
  2013-08-13 17:34:46.776 | self.client.add_gateway_router(self.id, 
body=body)
  2013-08-13 17:34:46.776 |   File 
"/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", line 108, 
in with_params
  2013-08-13 17:34:46.776 | ret = self.function(instance, *args, **kwargs)
  2013-08-13 17:34:46.776 |   File 
"/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", line 396, 
in add_gateway_router
  2013-08-13 17:34:46.777 | body={'router': {'external_gateway_info': 
body}})
  2013-08-13 17:34:46.777 |   File 
"/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", line 987, 
in put
  2013-08-13 17:34:46.777 | headers=headers, params=params)
  2013-08-13 17:34:46.778 |   File 
"/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", line 970, 
in retry_request
  2013-08-13 17:34:46.778 | raise 
exceptions.ConnectionFailed(reason=_("Maximum attempts reached"))
  2013-08-13 17:34:46.778 | ConnectionFailed: Connection to neutron failed: 
Maximum attempts reached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1211915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235486] Re: Integrity violation on delete network

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235486

Title:
  Integrity violation on delete network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Found while running tests for bug 1224001.
  Full logs here: 
http://logs.openstack.org/24/49424/13/check/check-tempest-devstack-vm-neutron-pg-isolated/405d3b4

  Keeping at medium priority for now.
  Will raise the priority if we find more occurrences.

  2013-10-04 21:20:46.888 31438 ERROR neutron.api.v2.resource [-] delete failed
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 84, in resource
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 432, in delete
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 411, in 
delete_network
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource break
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 456, 
in __exit__
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource self.commit()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 368, 
in commit
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self._prepare_impl()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 347, 
in _prepare_impl
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self.session.flush()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 542, in _wrap
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource raise 
exception.DBError(e)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DBError: 
(IntegrityError) update or delete on table "networks" violates foreign key 
constraint "ports_network_id_fkey" on table "ports"
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DETAIL:  Key 
(id)=(c63057f4-8d8e-497c-95d6-0d93d2cc83f5) is still referenced from table 
"ports".
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource  'DELETE FROM 
networks WHERE networks.id = %(id)s' {'id': 
u'c63057f4-8d8e-497c-95d6-0d93d2cc83f5'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239637] Re: internal neutron server error on tempest VolumesActionsTest

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239637

Title:
  internal neutron server error on tempest VolumesActionsTest

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Logstash query:
  @message:"DBError: (IntegrityError) null value in column \"network_id\" 
violates not-null constraint" AND @fields.filename:"logs/screen-q-svc.txt"

  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/logs/screen-q-svc.txt.gz#_2013-10-14_10_13_01_431
  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/console.html

  
  2013-10-14 10:16:28.034 | 
==
  2013-10-14 10:16:28.034 | FAIL: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | 
--
  2013-10-14 10:16:28.035 | _StringException: Traceback (most recent call last):
  2013-10-14 10:16:28.035 |   File 
"tempest/api/volume/test_volumes_actions.py", line 55, in tearDownClass
  2013-10-14 10:16:28.036 | super(VolumesActionsTest, cls).tearDownClass()
  2013-10-14 10:16:28.036 |   File "tempest/api/volume/base.py", line 72, in 
tearDownClass
  2013-10-14 10:16:28.036 | cls.isolated_creds.clear_isolated_creds()
  2013-10-14 10:16:28.037 |   File "tempest/common/isolated_creds.py", line 
453, in clear_isolated_creds
  2013-10-14 10:16:28.037 | self._clear_isolated_net_resources()
  2013-10-14 10:16:28.037 |   File "tempest/common/isolated_creds.py", line 
445, in _clear_isolated_net_resources
  2013-10-14 10:16:28.038 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-14 10:16:28.038 |   File "tempest/common/isolated_creds.py", line 
399, in _clear_isolated_network
  2013-10-14 10:16:28.038 | net_client.delete_network(network_id)
  2013-10-14 10:16:28.038 |   File 
"tempest/services/network/json/network_client.py", line 76, in delete_network
  2013-10-14 10:16:28.039 | resp, body = self.delete(uri, self.headers)
  2013-10-14 10:16:28.039 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-10-14 10:16:28.039 | return self.request('DELETE', url, headers)
  2013-10-14 10:16:28.040 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-10-14 10:16:28.040 | resp, resp_body)
  2013-10-14 10:16:28.040 |   File "tempest/common/rest_client.py", line 522, 
in _error_checker
  2013-10-14 10:16:28.041 | raise exceptions.ComputeFault(message)
  2013-10-14 10:16:28.041 | ComputeFault: Got compute fault
  2013-10-14 10:16:28.041 | Details: {"NeutronError": "Request Failed: internal 
server error while processing your request."}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237912] Re: Cannot update IPSec Policy lifetime

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237912

Title:
  Cannot update IPSec Policy lifetime

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  When you try to update IPSec Policy lifetime, you get an error:

  (neutron) vpn-ipsecpolicy-update ipsecpolicy --lifetime 
units=seconds,value=36001
  Request Failed: internal server error while processing your request.

  Meanwhile updating IKE Policy lifetime works well:

  (neutron) vpn-ikepolicy-update ikepolicy --lifetime units=seconds,value=36001
  Updated ikepolicy: ikepolicy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1237912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235450] Re: [OSSA 2013-033] Metadata queries from Neutron to Nova are not restricted by tenant (CVE-2013-6419)

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235450

Title:
  [OSSA 2013-033] Metadata queries from Neutron to Nova are not
  restricted by tenant (CVE-2013-6419)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron grizzly series:
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Committed

Bug description:
  The neutron metadata service works in the following way:

  Instance makes a GET request to http://169.254.169.254/

  This is directed to the metadata-agent, which knows which
  router (namespace) it is running in and determines the ip_address from
  the HTTP request it receives.

  Now, the neutron-metadata-agent queries neutron-server using the
  router_id and ip_address from the request to determine the port the
  request came from. Next, the agent takes the device_id (nova-instance-
  id) on the port and passes that to nova as X-Instance-ID.

  The vulnerability is that if someone exposes their instance_id, their
  metadata can be retrieved. In order to exploit this, one would need to
  update the device_id on a port to match the instance_id they want to
  hijack the data from.

  To demonstrate:

  arosen@arosen-desktop:~/devstack$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 1eb33bf1-6400-483a-9747-e19168b68933 | vm1  | ACTIVE | None   | Running 
| private=10.0.0.4 |
  | eed973e2-58ea-42c4-858d-582ff6ac3a51 | vm2  | ACTIVE | None   | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  
  arosen@arosen-desktop:~/devstack$ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 3128f195-c41b-4160-9a42-40e024771323 |  | fa:16:3e:7d:a5:df | 
{"subnet_id": "d5cbaa98-ecf0-495c-b009-b5ea6160259b", "ip_address": "10.0.0.1"} 
|
  | 62465157-8494-4fb7-bdce-2b8697f03c12 |  | fa:16:3e:94:62:47 | 
{"subnet_id": "d5cbaa98-ecf0-495c-b009-b5ea6160259b", "ip_address": "10.0.0.4"} 
|
  | 8473fb8d-b649-4281-b03a-06febf61b400 |  | fa:16:3e:4f:a3:b0 | 
{"subnet_id": "d5cbaa98-ecf0-495c-b009-b5ea6160259b", "ip_address": "10.0.0.2"} 
|
  | 92c42c1a-efb0-46a6-89eb-a38ae170d76d |  | fa:16:3e:de:9a:39 | 
{"subnet_id": "d5cbaa98-ecf0-495c-b009-b5ea6160259b", "ip_address": "10.0.0.3"} 
|
  
+--+--+---+-+

  
  arosen@arosen-desktop:~/devstack$ neutron port-show  
62465157-8494-4fb7-bdce-2b8697f03c12
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 1eb33bf1-6400-483a-9747-e19168b68933
|
  | device_owner  | compute:None
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {"subnet_id": 
"d5cbaa98-ecf0-495c-b009-b5ea6160259b", "ip_address": "10.0.0.4"} |
  | id| 62465157-8494-4fb7-bdce-2b8697f03c12
|
  | mac_address   | fa:16:3e:94:62:47   
|
  | name  | 
|
  | netwo

[Yahoo-eng-team] [Bug 1242734] Re: Error message encoding issue when using qpid

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242734

Title:
  Error message encoding issue when using qpid

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  When I was trying to create a new image to recreate the storage-full
  exception, I got a 500 error code instead of 413, and I see the trace
  below in the log. It seems we need to call jsonutils.to_primitive so
  that the message can be encoded (a sketch of that follows the trace
  below).

  2013-10-15 05:18:18.623 2430 ERROR glance.api.v1.upload_utils 
[b256bf1b-81e4-41b1-b89a-0a6bcb58b5ab 396ce5f3575a43abb636c489a959bf16 
29db386367fa4c4e9ffb3c369a46ee90] Image storage media is full: There is not 
enough disk space on the image storage media.
  2013-10-15 05:18:18.691 2430 ERROR glance.notifier.notify_qpid 
[b256bf1b-81e4-41b1-b89a-0a6bcb58b5ab 396ce5f3575a43abb636c489a959bf16 
29db386367fa4c4e9ffb3c369a46ee90] Notification error.  Priority: error Message: 
{'event_type': 'image.upload', 'timestamp': '2013-10-15 10:18:18.662667', 
'message_id': 'b74ec17a-06ac-45b8-84c3-37a55af8dfe1', 'priority': 'ERROR', 
'publisher_id': 'yangj228', 'payload': u'Image storage media is full: There is 
not enough disk space on the image storage media.'}
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid Traceback 
(most recent call last):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/glance/notifier/notify_qpid.py", line 134, in 
_send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
sender.send(qpid_msg)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"", line 6, in send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 879, in 
send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.sync(timeout=timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"", line 6, in sync
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 890, in 
sync
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid if not 
self._ewait(lambda: self.acked >= mno, timeout=timeout):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 804, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid result = 
self.session._ewait(lambda: self.error or predicate(), timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 571, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid result = 
self.connection._ewait(lambda: self.error or predicate(), timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 214, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.check_error()
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 207, in 
check_error
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid raise 
self.error
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid InternalError: 
Traceback (most recent call last):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 497, in 
dispatch
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.engine.dispatch()
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 802, in 
dispatch
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.process(ssn)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 1037, in 
process
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.send(snd, msg)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 1248, in send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid body = 
enc(msg.content)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
"/usr/lib/python2.6/site-packages/qpid/messaging/message.py", line 28, in encode
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
sc.write_primitive(type, x)
  2013-10-15 05:18:

[Yahoo-eng-team] [Bug 1240125] Re: Linux IP wrapper cannot handle VLAN interfaces

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240125

Title:
  Linux IP wrapper cannot handle VLAN interfaces

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  VLAN interface names contain an '@' character when the iproute2 utility
  lists them. The usable interface name (for iproute2 commands) is the
  string before the '@' character, so these interfaces need special parsing.

  $ ip link show
  1: wlan0:  mtu 1500 qdisc mq state DOWN 
group default qlen 1000
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  inet 169.254.10.78/16 brd 169.254.255.255 scope link wlan0:avahi
 valid_lft forever preferred_lft forever
  2: wlan0.10@wlan0:  mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  3: vlan100@wlan0:  mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
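
  For illustration, a tiny sketch of the kind of parsing the IP wrapper
  needs (hypothetical helper, not the actual Neutron code): keep only the
  part before the '@' as the usable device name.

    def usable_device_name(iproute2_name):
        # 'wlan0.10@wlan0' -> 'wlan0.10'; names without '@' are returned as-is.
        # Note: this naive split is only safe when the device name itself
        # contains no '@' (see bug 1245799 for that corner case).
        return iproute2_name.partition('@')[0]

    assert usable_device_name('wlan0.10@wlan0') == 'wlan0.10'
    assert usable_device_name('wlan0') == 'wlan0'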

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241602] Re: AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241602

Title:
  AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  I'm running Ubuntu 12.04 LTS x64 + OpenStack Havana with the following
  neutron package versions:

  neutron-common 2013.2~rc3-0ubuntu1~cloud0
  neutron-dhcp-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-l3-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-metadata-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-server 2013.2~rc3-0ubuntu1~cloud0
  python-neutron 2013.2~rc3-0ubuntu1~cloud0   
  python-neutronclient 2.3.0-0ubuntu1~cloud0


  When adding a router interface the following error message in
  /var/log/neutron/server.log:

  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 
438, in _process_data
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 44, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py",
 line 147, in update_device_up
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
port = self.get_port_from_device.get_port(device)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
AttributeError: 'function' object has no attribute 'get_port'
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp
  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.common [-] 
Returning exception 'function' object has no attribute 'get_port' to caller
  2013-10-18 15:35:14.863 15675 ERROR neutron.openstack.common.rpc.common [-] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 
438, in _process_data\n**args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 44, in 
dispatch\nneutron_ctxt, version, method, namespace, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch\nresult = getattr(proxyobj, method)(ctxt, 
**kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py",
 line 147, in update_device_up\nport = 
self.get_port_from_device.get_port(device)\n', "AttributeError: 'function' 
object has no attribute 'get_port'\n"]
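
  A plausible minimal correction (sketch only; assuming get_port_from_device
  is meant to be called directly, as the error suggests):

    def update_device_up(self, rpc_context, **kwargs):
        device = kwargs.get('device')
        # before (fails: get_port_from_device is a plain function, so it has
        # no 'get_port' attribute):
        #     port = self.get_port_from_device.get_port(device)
        # after: call the function directly
        port = self.get_port_from_device(device)
        return port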

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240742] Re: linuxbridge agent doesn't remove vxlan interface if no interface mappings

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240742

Title:
  linuxbridge agent doesn't remove vxlan interface if no interface
  mappings

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The LinuxBridge Agent doesn't remove vxlan interfaces if  
  physical_interface_mappings isn't set  in the config file

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240790] Re: Allow using ipv6 address with omiting zero

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240790

Title:
  Allow using ipv6 address with omiting zero

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Neutron currently accepts an IPv6 address such as
  2001:db8::10:10:10:0/120, but it does not accept the equivalent address
  written without zero compression, like 2001:db8:0:0:10:10:10:0/120. That
  causes the exception "'2001:db8:0:0:10:10:10:0/120' isn't a recognized IP
  subnet cidr, '2001:db8::10:10:10:0/120' is recommended".
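
  As a sketch of the expected behaviour: both spellings denote the same
  subnet, which netaddr (already used by Neutron) confirms, so the validator
  just needs to accept the un-compressed form instead of rejecting it.

    import netaddr

    expanded = netaddr.IPNetwork('2001:db8:0:0:10:10:10:0/120')
    compressed = netaddr.IPNetwork('2001:db8::10:10:10:0/120')
    # Both objects compare equal, so either spelling identifies the same CIDR.
    assert expanded == compressed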

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241874] Re: L2 pop mech driver sends notif. even no related port changes

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241874

Title:
  L2 pop mech driver sends notif. even no related port changes

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The L2 population mechanism driver sends add notifications even if there
  are no relevant port changes, e.g. only IP changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240720] Re: Nicira plugin: 500 when removing a router port desynchronized from the backend

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240720

Title:
  Nicira plugin: 500 when removing a router port desynchronized from the
  backend

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  If the logical switch port backing a neutron router interface port
  (device_owner=network:router_interface) is removed, then the port goes
  into ERROR state. However the interface remove process still tries to
  retrieve that port from the NVP backend, causing a 500 error.

  Different tracebacks can be generated according to the conditions
  which led to the switch port (or the peer router port) to be removed
  from the backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance grizzly series:
  Fix Committed
Status in Glance havana series:
  Fix Released

Bug description:
  We're now running into a Jenkins failure due to the test case failure
  below:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py",
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240744] Re: L2 pop sends updates for unrelated networks

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240744

Title:
  L2 pop sends updates for unrelated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The l2population mechanism driver sends update notifications for
  networks which are not related to the port which is being updated.
  Thus the fdb is populated with some incorrect entries.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243821] Re: Qpid protocol configuration is wrong

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1243821

Title:
  Qpid protocol configuration is wrong

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  notify_qpid.py appears to be suffering from the same issue as
  described in launchpad bug
  https://bugs.launchpad.net/oslo/+bug/1158807.  Instead of setting
  connection.transport, it is attempting to set connection.protocol.
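
  A rough sketch of the corrected connection setup (assuming the same
  pattern as the oslo fix referenced above; attribute names per
  python-qpid's messaging API):

    import qpid.messaging

    def make_connection(broker, use_ssl=False):
        connection = qpid.messaging.Connection(broker)
        # qpid.messaging exposes 'transport' (tcp/ssl), not 'protocol'
        connection.transport = 'ssl' if use_ssl else 'tcp'
        return connection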

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1243821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241198] Re: Keystone tests determine rootdir relative to pwd

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1241198

Title:
  Keystone tests determine rootdir relative to pwd

Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  Fix Released

Bug description:
  keystone/tests/core.py

  contains this code:

ROOTDIR = os.path.dirname(os.path.abspath('..'))

  which is determining the abspath of $PWD/..

  A more reliable way to determine the rootdir is relative to the
  dirname(__file__) of the python module itself.
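
  A sketch of the suggested approach (the exact number of dirname() calls
  depends on which directory the tests actually treat as ROOTDIR):

    import os

    # keystone/tests/core.py: anchor ROOTDIR on the module's own location
    # instead of on $PWD, so tests work from any working directory.
    ROOTDIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))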

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1241198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242715] Re: Wrong parameter in the config file s/qpid_host/qpid_hostname/

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242715

Title:
  Wrong parameter in the config file s/qpid_host/qpid_hostname/

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  Glance config sample shows `qpid_host` as the parameter to use for
  qpid's host, however, the right value is `qpid_hostname`

  [0] https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L228
  [1] 
https://github.com/openstack/glance/blob/master/glance/notifier/notify_qpid.py#L34

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244259] Re: error while creating l2 gateway services in nvp

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244259

Title:
  error while creating l2 gateway services in nvp

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  If a conflict occurs while using the L2 Gateway extension, 500 errors
  may mask underlying exceptions. For instance:

  
  2013-10-24 07:42:37.709 ERROR NVPApiHelper [-] Received error code: 409
  2013-10-24 07:42:37.710 ERROR NVPApiHelper [-] Server Error Message: Device 
breth0 on transport node dd2e6fb9-98fe-4306-a679-30e15f0af06a is already in use 
as a gateway in Gateway Service 166ddc25-e617-4cfc-bde5-485a0b622fc6
  2013-10-24 07:42:37.710 ERROR neutron.api.v2.resource [-] create failed
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 411, in create
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/nicira/NeutronPlugin.py", line 1921, in 
create_network_gateway
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource "created 
resource:%s") % nvp_res)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource UnboundLocalError: 
local variable 'nvp_res' referenced before assignment
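
  A sketch of the guard that avoids the secondary failure (hypothetical
  helper; the real fix in the Nicira plugin may differ):

    def _create_gateway_resource(backend_create, context, gw_data, log):
        # Define nvp_res up front so the log/error paths below can never hit
        # UnboundLocalError (which previously turned a 409 conflict into a 500).
        nvp_res = None
        try:
            nvp_res = backend_create(context, gw_data)
        finally:
            log("NVP gateway service call returned: %s" % nvp_res)
        return nvp_res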

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243862] Re: fix nvp version validation for distributed router creation

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243862

Title:
  fix nvp version validation for distributed router creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The current test is not correct, as it prevents the right creation policy
  from being applied for newer versions of NVP whose minor version is 0.
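
  For illustration, comparing numeric (major, minor) tuples avoids treating
  a minor version of 0 as "missing"; a sketch under assumed names and an
  illustrative threshold only:

    REQUIRED = (3, 1)   # illustrative threshold, not the real NVP minimum

    def supports_distributed_router(version_string):
        # Compare numeric (major, minor) tuples so that e.g. "4.0" is not
        # rejected just because its minor component is 0 (falsy).
        major, minor = (int(x) for x in version_string.split('.')[:2])
        return (major, minor) >= REQUIRED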

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244255] Re: binding_failed because of l2 agent assumed down

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244255

Title:
  binding_failed because of l2 agent assumed down

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest test ServerAddressesTestXML failed on a change that does not
  involve any code modification.

  https://review.openstack.org/53633

  2013-10-24 14:04:29.188 | 
==
  2013-10-24 14:04:29.189 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | 
--
  2013-10-24 14:04:29.189 | _StringException: Traceback (most recent call last):
  2013-10-24 14:04:29.189 |   File 
"tempest/api/compute/servers/test_server_addresses.py", line 31, in setUpClass
  2013-10-24 14:04:29.189 | resp, cls.server = 
cls.create_server(wait_until='ACTIVE')
  2013-10-24 14:04:29.189 |   File "tempest/api/compute/base.py", line 143, in 
create_server
  2013-10-24 14:04:29.190 | server['id'], kwargs['wait_until'])
  2013-10-24 14:04:29.190 |   File 
"tempest/services/compute/xml/servers_client.py", line 356, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | return waiters.wait_for_server_status(self, 
server_id, status)
  2013-10-24 14:04:29.190 |   File "tempest/common/waiters.py", line 71, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-10-24 14:04:29.190 | BuildErrorException: Server 
e21d695e-4f15-4215-bc62-8ea645645a26 failed to build and is in ERROR status


  From n-cpu.log (http://logs.openstack.org/33/53633/1/check/check-
  tempest-devstack-vm-
  neutron/4dd98e5/logs/screen-n-cpu.txt.gz#_2013-10-24_13_58_07_532):

   Error: Unexpected vif_type=binding_failed
   Traceback (most recent call last):
   set_access_ip=set_access_ip)
 File "/opt/stack/new/nova/nova/compute/manager.py", line 1413, in _spawn
   LOG.exception(_('Instance failed to spawn'), instance=instance)
 File "/opt/stack/new/nova/nova/compute/manager.py", line 1410, in _spawn
   block_device_info)
 File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2084, in spawn
   write_to_disk=True)
 File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3064, in 
to_xml
   disk_info, rescue, block_device_info)
 File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2951, in 
get_guest_config
   inst_type)
 File "/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 380, in 
get_config
   _("Unexpected vif_type=%s") % vif_type)
   NovaException: Unexpected vif_type=binding_failed
   TRACE nova.compute.manager [instance: e21d695e-4f15-4215-bc62-8ea645645a26]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251086] Re: nvp_cluster_uuid is no longer used in nvp.ini

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251086

Title:
  nvp_cluster_uuid is no longer used in nvp.ini

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  remove it!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245799] Re: IP lib fails when int name has '@' character and VLAN interfaces

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245799

Title:
  IP lib fails when int name has '@' character and VLAN interfaces

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The IP lib cannot distinguish interfaces that merely have an '@' in their
  name from VLAN interfaces, and an interface name can contain more than
  one '@'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service "my-topic".   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_489a3178fc704123b0e5e2fbee125247"}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
http

[Yahoo-eng-team] [Bug 1254046] Re: openstack.common.local module is out of date

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254046

Title:
  openstack.common.local module is out of date

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The local module has a broken TLS symbol, strong_store, fixed in oslo some
  time ago in Ib544be1485823f6c619312fdee5a04031f48bbb4. All direct and
  indirect (lockutils and rpc) usages of strong_store are potentially
  affected.

  Original change to Nova: https://review.openstack.org/#/c/57509/
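
  The referenced change is not quoted here, but the shape of the fix is to
  make strong_store an actual thread-local instance rather than a bare
  class reference; a hedged sketch:

    import threading

    # openstack/common/local.py (sketch): strong_store must be an instance
    # of a thread-local class, not the class object itself, otherwise
    # attributes set on it are shared across all threads.
    strong_store = threading.local()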

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1254046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252284] Re: OVS agent doesn't reclaim local VLAN

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252284

Title:
  OVS agent doesn't reclaim local VLAN

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Locally to an OVS agent, when the last port of a network disappears, the
  local VLAN isn't reclaimed.
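
  A sketch of the expected behaviour in the agent's port_unbound path
  (method and attribute names are assumptions based on the OVS agent, not
  the literal patch):

    def port_unbound(self, vif_id, net_uuid):
        lvm = self.local_vlan_map.get(net_uuid)
        if lvm is None:
            return
        lvm.vif_ports.pop(vif_id, None)
        if not lvm.vif_ports:
            # last port of this network on the host: give the local VLAN back
            self.reclaim_local_vlan(net_uuid)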

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257523] Re: Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257523

Title:
  Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Both vpnaas.filters and debug.filters are missing from setup.cfg,
  breaking rootwrap for the appropriate commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255421] Re: Unittest fails due to unexpected ovs-vsctl calling in ryu plugin test

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255421

Title:
  Unittest fails due to unexpected ovs-vsctl calling in ryu plugin test

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  In the unit tests, the ovs-vsctl command is unexpectedly called during
  the Ryu plugin tests.

  It occurs in the latest master branch 
(4b47717b132336396cdbea9d168acaaa30bd5a02).
  In gating test, the followings hit this issue:
  
http://logs.openstack.org/70/58270/2/check/gate-neutron-python27/79ef6dd/console.html
  
http://logs.openstack.org/25/58125/4/check/gate-neutron-python27/2cc0dc5/console.html#_2013-11-27_01_46_09_003

  According to the result of debugging by adding print_traceback in
  ovs_lib.OVSBridge.get_vif_port_by_id, the following tests fail and the
  following stack trace is obtained:

  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ip_address_only
  neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ips
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_delete_ip
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_add_additional_ip
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_not_admin
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ip

    File 
"/home/ubuntu/neutron/.venv/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 194, in main
  result = function(*args, **kwargs)
    File "neutron/openstack/common/rpc/impl_fake.py", line 67, in _inner
  namespace, **args)
    File "neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
    File "neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 296, in 
port_update
  vif_port = self.int_br.get_vif_port_by_id(port['id'])
    File "neutron/agent/linux/ovs_lib.py", line 362, in get_vif_port_by_id
  print traceback.print_stack()

  More interestingly, it occurs only when both the OVS plugin tests and the
  Ryu plugin tests are run.
  More precisely, it happens when we run
  - first 
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent, and
  - then neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port

  $ source .venv/bin/activate
  $ OS_DEBUG=1 python setup.py testr --testr-args='--concurrency=4 
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent 
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255519] Re: NVP connection fails because port is a string

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255519

Title:
  NVP connection fails because port is a string

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  On a dev machine I recently created, I noticed failures at startup when
  Neutron is configured with the NVP plugin.
  I root-caused the failure to the port being explicitly passed to the
  HTTPSConnection constructor as a string rather than an integer.

  This can easily be fixed by ensuring the port is always an integer.

  I am not sure of the severity of this bug, as it might be strictly
  related to this specific dev env, but it might be worth applying and
  backporting the fix.
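
  A sketch of the one-line fix (assuming Python 2's httplib, as typically
  used by the plugin's API client):

    import httplib

    def get_connection(host, port):
        # port may arrive from the config file as a string; make sure the
        # HTTPSConnection constructor always receives an integer
        return httplib.HTTPSConnection(host, int(port))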

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211742] Re: notification not available for deleting an instance having no host associated

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211742

Title:
  notification not available for deleting an instance having no host
  associated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Steps to reproduce issue:
  1. Set the Nova notification_driver (to say log_notifier) and monitor the 
notifications.
  2. Delete an instance which does not have a host associated with it.
  3. Check if any notifications are generated for the instance deletion.

  Expected Result:
  'delete.start' and 'delete.end' notifications should be generated for the 
instance being deleted.

  Actual Result:
  There are no 'delete' notifications being generated in this scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199954] Re: VCDriver: Failed to resize instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199954

Title:
  VCDriver: Failed to resize instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Steps to reproduce:
  nova resize  2

  Error:
   ERROR nova.openstack.common.rpc.amqp 
[req-762f3a87-7642-4bd3-a531-2bcc095ec4a5 demo demo] Exception during message 
handling
Traceback (most recent call last):
  File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 421, in 
_process_data
**args)
  File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, 
in dispatch
result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/opt/stack/nova/nova/exception.py", line 99, in wrapped
temp_level, payload)
  File "/opt/stack/nova/nova/exception.py", line 76, in wrapped
return f(self, context, *args, **kw)
  File "/opt/stack/nova/nova/compute/manager.py", line 218, in 
decorated_function
pass
  File "/opt/stack/nova/nova/compute/manager.py", line 204, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 269, in 
decorated_function
function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 246, in 
decorated_function
e, sys.exc_info())
  File "/opt/stack/nova/nova/compute/manager.py", line 233, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 2633, in 
resize_instance
block_device_info)
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 410, in 
migrate_disk_and_power_off
dest, instance_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 893, in 
migrate_disk_and_power_off
raise exception.HostNotFound(host=dest)
HostNotFound:

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193980] Re: Regression: Cinder Volumes "unable to find iscsi target" for VMware instances

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1193980

Title:
  Regression: Cinder Volumes "unable to find iscsi target" for VMware
  instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When trying to attach a cinder volume to a VMware based instance I am
  seeing the attached error in the nova-compute logs. Cinder does not
  report back any problem to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1193980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188543] Re: NBD mount errors when booting an instance from volume

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188543

Title:
  NBD mount errors when booting an instance from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  My environment:
  - Grizzly OpenStack (installed from Ubuntu repository)
  - Network using Quantum
  - Cinder backed up by a Ceph cluster

  I'm able to boot an instance from a volume but it takes a long time
  for the instance to be active. I've got warnings in the logs of the
  nova-compute node (see attached file). The logs show that the problem
  is related to file injection in the disk image which isn't
  required/relevant when booting from a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1197041] Re: nova compute crashes if you do not have any hosts in your cluster

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1197041

Title:
  nova compute crashes if you do not have any hosts in your cluster

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I forgot to add a host to my cluster and brought up nova-compute. I get
  the following crash on startup. A controlled exit with a proper warning
  message would have saved me some time.

  
   File "/opt/stack/nova/nova/virt/vmwareapi/host.py", line 156, in __init__
 self.update_status()
   File "/opt/stack/nova/nova/virt/vmwareapi/host.py", line 169, in 
update_status
 host_mor = vm_util.get_host_ref(self._session, self._cluster)
   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 663, in 
get_host_ref
 if not host_ret.ManagedObjectReference:
  AttributeError: 'Text' object has no attribute 'ManagedObjectReference'
  Removing descriptor: 6
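
  A defensive check along these lines would turn the crash into a clear
  error (sketch; helper and exception names are illustrative, not the
  actual nova code):

    def get_first_host_ref(host_ret, cluster_name):
        # host_ret comes back from the vSphere API; when the cluster has no
        # hosts it carries no ManagedObjectReference list at all.
        refs = getattr(host_ret, 'ManagedObjectReference', None)
        if not refs:
            raise RuntimeError("No hosts found in cluster %s" % cluster_name)
        return refs[0]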

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1197041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224453] Re: min_count ignored for instance create

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224453

Title:
  min_count ignored for instance create

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The server create API takes min_count and max_count values for the
  number of instances to  be created, where the actual number to be
  created should be the highest value allowed by quota between these
  limits.

  However, the code in compute/api.py does a single check against
  max_count and then treats the exception as a failure - resulting in
  messages such as:

  min_count=1
  max_count= (quota+1)

  "Quota exceeded for instances: Requested 1, but already used 13 of 40
  instances"


  The code in _check_num_instances_quota() looks like it has most of the
  logic for adjusting the values when it gets an OverQuota exception
  from the initial reservation request based on max_count - but it always
  ends up raising TooManyInstances.
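
  A sketch of the intended behaviour (illustrative only; the real logic
  lives in _check_num_instances_quota and uses nova's quota reservation
  API):

    def pick_instance_count(min_count, max_count, headroom):
        # headroom = how many more instances the quota still allows
        if headroom >= max_count:
            return max_count
        if headroom >= min_count:
            # fall back to the largest count the quota permits
            return headroom
        # only now is the request genuinely over quota
        raise ValueError("Quota exceeded: requested at least %d, only %d "
                         "available" % (min_count, headroom))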

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1224453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231263] Re: Clear text password has been print in log by some API call

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231263

Title:
  Clear text password has been print in log by some API call

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In the current implementation, when performing some API calls, like
  changing a server password or rescuing a server, the password is printed
  in the nova log, e.g.:

  2013-09-26 13:48:01.711 DEBUG routes.middleware [-] Match dict: {'action': 
u'action', 'controller': , 'project_id': u'05004a24b3304cd9b55a0fcad08107b3', 'id': 
u'8c4a1dfa-147a-4f
  f8-8116-010d8c346115'} from (pid=10629) __call__ 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-09-26 13:48:01.711 DEBUG nova.api.openstack.wsgi 
[req-10ebd201-ba52-453f-b1ce-1e41fbef8cdd admin demo] Action: 'action', body: 
{"changePassword": {"adminPass": "1234567"}} from (pid=10629) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:926

  This is not secure; the password should be replaced by ***.
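
  A sketch of the kind of scrubbing needed before the request body is
  logged (hypothetical helper; the actual remedy in nova may differ):

    import re

    _PASSWORD_KEYS = ('adminPass', 'admin_pass', 'password')

    def mask_password(body_text, secret='***'):
        # replace the value of any known password key in a JSON request body
        for key in _PASSWORD_KEYS:
            body_text = re.sub(r'("%s"\s*:\s*")[^"]*(")' % key,
                               r'\g<1>%s\g<2>' % secret, body_text)
        return body_text

    # mask_password('{"changePassword": {"adminPass": "1234567"}}')
    # -> '{"changePassword": {"adminPass": "***"}}'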

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1231263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233837] Re: target_iqn is referenced before assignment after exceptions in hyperv/volumeop.py attch_volume()

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233837

Title:
  target_iqn is referenced before assignment after exceptions in
  hyperv/volumeop.py attch_volume()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  If an exception is encountered in _login_storage_target or
  _get_mounted_disk_from_lun, target_iqn will be referenced in the
  exception handler before it is defined, resulting in the following
  traceback:

  c39117134492490cba81828d080895b5 1a26ee4f153e438c806203607a0d728e] Exception 
during message handling
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\amqp.py", line 461, 
in _process_data
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\dispatcher.py", line 
172, in dispatch
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py", line 90, in wrapped
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py", line 73, in wrapped
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 249, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 235, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 277, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 264, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 3676, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
context, instance, mountpoint)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 3671, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
mountpoint, instance)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 3717, in 
_attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
connector)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py", line 3707, in 
_attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
encryption=encryption)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py", line 72, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common

[Yahoo-eng-team] [Bug 1233026] Re: exception.InstanceIsLocked is not caught in start and stop server api

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233026

Title:
  exception.InstanceIsLocked is not caught in start and stop server api

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  While porting the nova v3 test
  test_server_actions.ServerActionsV3TestXML.test_lock_unlock_server, we
  found that exception.InstanceIsLocked is not caught in the start and stop
  server API.

  
  the following is the nova log:

  2013-09-30 15:03:29.306 DEBUG nova.api.openstack.wsgi [req-d791baac-2015-4e65-8d02-720b0944e824 demo demo] Action: 'action', body: http://docs.openstack.org/compute/api/v1.1"/> from (pid=23798) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:935
  2013-09-30 15:03:29.307 DEBUG nova.api.openstack.wsgi [req-d791baac-2015-4e65-8d02-720b0944e824 demo demo] Calling method > from (pid=23798) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:936
  2013-09-30 15:03:29.339 DEBUG nova.api.openstack.compute.plugins.v3.servers [req-d791baac-2015-4e65-8d02-720b0944e824 demo demo] [instance: cd4fec81-d2e8-43cd-ab5d-47da72dd90fa] stop instance from (pid=23798) _stop_server /opt/stack/nova/nova/api/openstack/compute/plugins/v3/servers.py:1372
  2013-09-30 15:03:29.340 ERROR nova.api.openstack.extensions [req-d791baac-2015-4e65-8d02-720b0944e824 demo demo] Unexpected exception in API method
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 469, in wrapped
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/plugins/v3/servers.py", line 1374, in _stop_server
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions     self.compute_api.stop(context, instance)
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 198, in wrapped
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions     return func(self, context, target, *args, **kwargs)
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 187, in inner
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions     raise exception.InstanceIsLocked(instance_uuid=instance['uuid'])
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions InstanceIsLocked: Instance cd4fec81-d2e8-43cd-ab5d-47da72dd90fa is locked
  2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions
  2013-09-30 15:03:29.341 INFO nova.api.openstack.wsgi [req-d791baac-2015-4e65-8d02-720b0944e824 demo demo] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
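
  A minimal sketch (assuming the standard nova exception and webob imports;
  not the exact merged patch) of the fix direction: catch InstanceIsLocked in
  the start/stop server actions and translate it into an HTTP 409 conflict
  instead of letting it surface as an unexpected 500:

  import webob.exc

  from nova import exception

  def _stop_server(compute_api, context, instance):
      try:
          compute_api.stop(context, instance)
      except exception.InstanceIsLocked as e:
          # A locked instance is a client-side conflict, not a server error.
          raise webob.exc.HTTPConflict(explanation=e.format_message())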

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226698] Re: flavor pagination incorrectly uses id rather than flavorid

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226698

Title:
  flavor pagination incorrectly uses id rather than flavorid

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The "ID" in the flavor-list response is really instance_types.flavorid in the
  database.  When using the marker, the code uses the instance_types.id field.
  The test passes as long as instance_types.id begins with 1 and is sequential.
  If it does not begin with 1 or does not match instance_types.flavorid, the
  test fails with the following error:
   

  
  '''   
  
  Traceback (most recent call last):
  
File 
"/Volumes/apple/openstack/tempest/tempest/api/compute/flavors/test_flavors.py", 
line 91, in test_list_flavors_detailed_using_marker 
  resp, flavors = self.client.list_flavors_with_detail(params)  
  
File 
"/Volumes/apple/openstack/tempest/tempest/services/compute/json/flavors_client.py",
 line 45, in list_flavors_with_detail   
 
  resp, body = self.get(url)
  
File "/Volumes/apple/openstack/tempest/tempest/common/rest_client.py", line 
263, in get
  return self.request('GET', url, headers)  
  
File "/Volumes/apple/openstack/tempest/tempest/common/rest_client.py", line 
394, in request
  resp, resp_body)  
  
File "/Volumes/apple/openstack/tempest/tempest/common/rest_client.py", line 
439, in _error_checker
  raise exceptions.NotFound(resp_body)  
  
  NotFound: Object not found
  
  Details: {"itemNotFound": {"message": "The resource could not be found.", 
"code": 404}}

  
  ==
  
  FAIL: 
tempest.api.compute.flavors.test_flavors.FlavorsTestJSON.test_list_flavors_using_marker[gate]
  '''   
  

  
  Really, it should use flavorid for the marker.  The flavor_get_all() method
  in nova.db.sqlalchemy.api should be fixed to use flavorid=marker in the
  filter, as follows:

  -filter_by(id=marker).\
  +filter_by(flavorid=marker).\
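
  A minimal SQLAlchemy sketch (illustrative model and helper names, not the
  exact nova code) of marker handling keyed on the user-visible flavorid:

  from sqlalchemy import Column, Integer, String
  from sqlalchemy.orm import declarative_base

  Base = declarative_base()

  class InstanceTypes(Base):
      __tablename__ = 'instance_types'
      id = Column(Integer, primary_key=True)   # internal autoincrement key
      flavorid = Column(String, unique=True)   # the "ID" shown by flavor-list

  def flavor_get_all(session, marker=None, limit=None):
      query = session.query(InstanceTypes).order_by(InstanceTypes.flavorid)
      if marker is not None:
          # Look the marker up by flavorid, the value the API exposes,
          # rather than by the internal id, so paging still works when the
          # two values diverge.
          if query.filter_by(flavorid=marker).first() is None:
              raise LookupError("marker flavor %s not found" % marker)
          query = query.filter(InstanceTypes.flavorid > marker)
      if limit is not None:
          query = query.limit(limit)
      return query.all()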

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213927] Re: flavor extra spec api fails with XML content type if key contains a colon

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213927

Title:
  flavor extra spec api fails with XML content type if key contains a
  colon

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  The flavor extra spec API  extension (os-extra_specs) fails with "HTTP
  500" when content-type application/xml is requested if the extra spec
  key contains a colon.

  For example:

  curl [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/json" -H 
"X-Auth-Token: $TOKEN"
  {"extra_specs": {"foo:bar": "999"}}

  curl -i [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/xml" 
-H "X-Auth-Token: $TOKEN"
  {"extra_specs": {"foo:bar": "999"}}
  HTTP/1.1 500 Internal Server Error
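
  A minimal sketch (using lxml purely for illustration; nova's own XML
  serializer is a separate code path) of why a colon in an extra-spec key
  breaks XML generation: the key is used directly as an element tag name, and
  "foo:bar" looks like a namespaced tag with an undeclared prefix:

  from lxml import etree

  extra_specs = {"foo:bar": "999"}

  root = etree.Element("extra_specs")
  for key, value in extra_specs.items():
      try:
          child = etree.SubElement(root, key)   # rejects the tag "foo:bar"
          child.text = value
      except ValueError as exc:
          print("cannot serialize key %r: %s" % (key, exc))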

  The stack trace shows that the XML serializer tries to interpret the ":"
  in the key as an XML namespace prefix, which fails because the namespace
  is not valid:

  2013-08-19 13:08:14.374 27521 DEBUG nova.api.openstack.wsgi 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Calling method 
> _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:927
  2013-08-19 13:08:14.377 27521 ERROR nova.api.openstack 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Caught error: 
Invalid tag name u'foo:bar'
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 110, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/hp/middleware/cs_auth_token.py", line 160, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
super(CsAuthProtocol, self).__call__(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", 
line 461, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 903, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack content_type, 
body, accept)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py",

[Yahoo-eng-team] [Bug 1235435] Re: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235435

Title:
  'SubnetInUse: Unable to complete operation on subnet UUID. One or more
  ports have an IP allocation from this subnet.'

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  Occasional tempest failure:

  http://logs.openstack.org/86/49086/2/gate/gate-tempest-devstack-vm-
  neutron-isolated/ce14ceb/testr_results.html.gz

  ft3.1: tearDownClass 
(tempest.scenario.test_network_basic_ops.TestNetworkBasicOps)_StringException: 
Traceback (most recent call last):
File "tempest/scenario/manager.py", line 239, in tearDownClass
  thing.delete()
File "tempest/api/network/common.py", line 71, in delete
  self.client.delete_subnet(self.id)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 112, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 380, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1233, in delete
  headers=headers, params=params)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1222, in retry_request
  headers=headers, params=params)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1165, in do_request
  self._handle_fault_response(status_code, replybody)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 97, in exception_handler_v20
  message=msg)
  NeutronClientException: 409-{u'NeutronError': {u'message': u'Unable to 
complete operation on subnet 9e820b02-bfe2-47e3-b186-21c5644bc9cf. One or more 
ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', 
u'detail': u''}}

  
  logstash query:

  @message:"One or more ports have an IP allocation from this subnet"
  AND @fields.filename:"logs/screen-q-svc.txt" and @message:"
  SubnetInUse: Unable to complete operation on subnet"


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiBhbmQgQG1lc3NhZ2U6XCIgU3VibmV0SW5Vc2U6IFVuYWJsZSB0byBjb21wbGV0ZSBvcGVyYXRpb24gb24gc3VibmV0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODA5MTY1NDUxODcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234759] Re: Hyper-V fails to spawn snapshots

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1234759

Title:
  Hyper-V fails to spawn snapshots

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Creating a snapshot of an instance and then trying to boot from it
  will result in the following Hyper-V exception: "HyperVException: WMI job
  failed with status 10". Here is the trace:
  http://paste.openstack.org/show/47904/ .

  The idea is that Hyper-V fails to expand the image, as it gets the
  request to resize it to its actual size, which leads to an error.
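
  A minimal sketch (the vhdutils helper names here are assumptions, not the
  real Hyper-V driver API) of the guard that avoids asking Hyper-V to resize
  a VHD to the size it already has:

  def resize_vhd_if_needed(vhdutils, vhd_path, requested_size):
      current_size = vhdutils.get_internal_vhd_size(vhd_path)  # assumed helper
      if requested_size < current_size:
          raise ValueError("cannot shrink VHD %s" % vhd_path)
      if requested_size == current_size:
          # Resizing to the current size is exactly the request that fails
          # with "WMI job failed with status 10", so skip it.
          return
      vhdutils.resize_vhd(vhd_path, requested_size)  # assumed helper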

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1234759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242597] Re: [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens (CVE-2013-6391)

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242597

Title:
  [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens
  (CVE-2013-6391)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  So I finally got around to investigating the scenario I mentioned in
  https://review.openstack.org/#/c/40444/, and unfortunately it seems
  that the ec2tokens API does indeed provide a way to circumvent the
  role delegation provided by trusts, and obtain all the roles of the
  trustor user, not just those explicitly delegated.

  Steps to reproduce:
  - Trustor creates a trust delegating a subset of roles
  - Trustee gets a token scoped to that trust
  - Trustee creates an ec2-keypair
  - Trustee makes a request to the ec2tokens API, to validate a signature 
created with the keypair
  - ec2tokens API returns a new token, which is not scoped to the trust and 
enables access to all the trustor's roles.

  I can provide some test code which demonstrates the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238374] Re: TypeError in periodic task 'update_available_resource'

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238374

Title:
  TypeError in periodic task 'update_available_resource'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  This occurs while creating an instance under my devstack env:

  2013-10-11 02:56:29.374 ERROR nova.openstack.common.periodic_task [-] Error 
during ComputeManager.update_available_resource: 'NoneType' object is not 
iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task Traceback 
(most recent call last):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/openstack/common/periodic_task.py", line 180, in 
run_periodic_tasks
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/compute/manager.py", line 4859, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in inner
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task return 
f(*args, **kwargs)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 313, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
self.pci_tracker.clean_usage(instances, migrations, orphans)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/nova/nova/pci/pci_manager.py", line 285, in clean_usage
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task for dev 
in self.claims.pop(uuid):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task TypeError: 
'NoneType' object is not iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task
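
  A minimal sketch (plain dict instead of the real PCI tracker object) of the
  defensive fix: pop with a default and tolerate a None entry, so an instance
  without recorded PCI claims no longer breaks the periodic task:

  def clean_usage(claims, instance_uuid):
      # claims maps instance uuid -> list of claimed PCI devices; the entry
      # may be missing or None for instances that never claimed a device.
      for dev in claims.pop(instance_uuid, None) or []:
          release_device(dev)

  def release_device(dev):  # placeholder for the real release logic
      print("freeing PCI device %s" % dev)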

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240247] Re: API cell always doing local deletes

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240247

Title:
  API cell always doing local deletes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  It appears a regression was introduced in:

  https://review.openstack.org/#/c/36363/

  Where the API cell is now always doing a _local_delete()... before
  telling child cells to delete the instance.  There are at least a couple
  of bad side effects of this:

  1) The instance disappears immediately from API view, even though the 
instance still exists in the child cell.  The user does not see a 'deleting' 
task state.  And if the delete fails in the child cell, you have a sync issue 
until the instance is 'healed'.
  2) Double delete.start and delete.end notifications are sent.  1 from API 
cell, 1 from child cell.

  The problem seems to be that _local_delete is being called because the
  service is determined to be down... because the compute service does
  not run in the API cell.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237126] Re: nova-api-{ec2, metadata, os-compute} don't allow SSL to be enabled

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237126

Title:
  nova-api-{ec2,metadata,os-compute} don't allow SSL to be enabled

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Although the script bin/nova-api will read nova.conf to determine
  which API services should have SSL enabled (via 'enabled_ssl_apis'),
  the individual API scripts

  bin/nova-api-ec2
  bin/nova-api-metadata
  bin/nova-api-os-compute

  do not contain similar logic to allow configuration of SSL. For
  installations that want to use SSL but not the nova-api wrapper, there
  should be a similar way to enable SSL for these services.
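
  A minimal sketch (using the Havana-era oslo.config import; the option name
  is taken from the wrapper, the rest is illustrative) of how an individual
  API script could honor the same setting:

  from oslo.config import cfg

  CONF = cfg.CONF
  CONF.register_opts([
      cfg.ListOpt('enabled_ssl_apis', default=[],
                  help='List of APIs to SSL enable'),
  ])

  def use_ssl_for(api_name):
      # e.g. nova-api-metadata would pass use_ssl_for('metadata') to its
      # WSGI service constructor.
      return api_name in CONF.enabled_ssl_apis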

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242855] Re: [OSSA 2013-028] Removing role adds role with LDAP backend

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242855

Title:
  [OSSA 2013-028] Removing role adds role with LDAP backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Using the LDAP assignment backend, if you attempt to remove a role
  from a user on a tenant and the user doesn't have that role on the
  tenant then the user is actually granted the role on the tenant. Also,
  the role must not have been granted to anyone on the tenant before.

  To recreate

  0) Start with devstack, configured with LDAP (note especially to set
  KEYSTONE_ASSIGNMENT_BACKEND):

  In localrc,
   enable_service ldap
   KEYSTONE_IDENTITY_BACKEND=ldap
   KEYSTONE_ASSIGNMENT_BACKEND=ldap

  1) set up environment with OS_USERNAME=admin

  export OS_USERNAME=admin
  ...

  2) Create a new user, give admin role, list roles:

  $ keystone user-create --name blktest1 --pass blkpwd
  +--+--+
  | Property |  Value   |
  +--+--+
  |  email   |  |
  | enabled  |   True   |
  |id| 3b71182dc36e45c6be4733d508201694 |
  |   name   | blktest1 |
  +--+--+

  $ keystone user-role-add --user blktest1 --role admin --tenant service
  (no output)

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
  +----------------------------------+-------+----------------------------------+----------------------------------+
  |                id                |  name |             user_id              |            tenant_id             |
  +----------------------------------+-------+----------------------------------+----------------------------------+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  +----------------------------------+-------+----------------------------------+----------------------------------+

  3) Remove a role from that user that they don't have (using anotherrole
  here since devstack sets it up):

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - Expected to fail with 404, but it doesn't!

  4) List roles as that user:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
  +----------------------------------+-------------+----------------------------------+----------------------------------+
  |                id                |     name    |             user_id              |            tenant_id             |
  +----------------------------------+-------------+----------------------------------+----------------------------------+
  | 1c39fab0fa9a4a68b307e7ce1535c62b |    admin    | 3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  | afe23e7955704ccfad803b4a104b28a7 | anotherrole | 3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  +----------------------------------+-------------+----------------------------------+----------------------------------+

  - Expected to not include the role that was just removed!

  5) Remove the role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - No errors, which I guess is expected since list just said they had
  the role...

  6) List roles, and now it's gone:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
  +----------------------------------+-------+----------------------------------+----------------------------------+
  |                id                |  name |             user_id              |            tenant_id             |
  +----------------------------------+-------+----------------------------------+----------------------------------+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  +----------------------------------+-------+----------------------------------+----------------------------------+

  7) Remove role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-remove --user blktest1 --role anotherrole --tenant service
  Could not find user, 3b71182dc36e45c6be4733d508201694. (HTTP 404)

  - Strangely says user not found rather than role not assigned.
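
  A minimal sketch (plain in-memory structures, not the LDAP backend) of the
  intended semantics the bug violates: removing a role the user does not have
  must fail with a not-found error, never fall through to granting it:

  def remove_role_from_user_on_tenant(assignments, user_id, tenant_id, role_id):
      granted = assignments.get((user_id, tenant_id), set())
      if role_id not in granted:
          # Expected behaviour: a 404-style error, never a code path that
          # ends up adding the grant.
          raise LookupError("role %s is not assigned to user %s on tenant %s"
                            % (role_id, user_id, tenant_id))
      granted.remove(role_id)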

To manage notifications about this bug go to:
https://bugs.launchpad.net

[Yahoo-eng-team] [Bug 1239709] Re: NovaObject does not properly honor VERSION

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239709

Title:
  NovaObject does not properly honor VERSION

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The base object infrastructure has been comparing Object.version
  instead of the Object.VERSION that *all* the objects have been setting
  and incrementing when changes have been made. Since the base object
  defined a .version, and that was used to determine the actual version
  of an object, all objects defining a different VERSION were ignored.

  All systems in the wild currently running broken code are sending
  version '1.0' for all of their objects. The fix is to change the base
  object infrastructure to properly examine, compare and send
  Object.VERSION.

  Impact should be minimal at this point, but getting systems patched as
  soon as possible will be important going forward.
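
  A minimal sketch (illustrative only, not the real nova.objects code) of the
  failure mode and the intended behaviour:

  class NovaObjectBase(object):
      VERSION = '1.0'   # what subclasses are meant to override and bump

  class MyObject(NovaObjectBase):
      VERSION = '1.3'   # bumped when fields changed

  def obj_version(obj):
      # The broken code read a separate lowercase attribute and therefore
      # always reported '1.0'; the fix is to read VERSION, as here.
      return obj.VERSION

  assert obj_version(MyObject()) == '1.3'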

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243291] Re: Restarting nova compute has an exception

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243291

Title:
  Restarting nova compute has an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  (latest havana code - libvirt driver)

  1. launch a nova vm
  2. see that the instance is deployed on the compute node
  3. restart the compute node

  get the following exception:

  2013-10-22 05:46:53.711 30742 INFO nova.openstack.common.rpc.common 
[req-57056535-4ecd-488a-a75e-ff83341afb98 None None] Connected to AMQP server 
on 192.168.10.111:5672
  2013-10-22 05:46:53.737 30742 AUDIT nova.service [-] Starting compute node 
(version 2013.2)
  2013-10-22 05:46:53.814 30742 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 65, 
in run_service
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 154, in start
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 786, in 
init_host
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 664, in 
_init_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
net_info = compute_utils.get_nw_info_for_instance(instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 349, in 
get_nw_info_for_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return instance.info_cache.network_info
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup
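
  A minimal sketch (assuming only the attribute shapes visible in the trace;
  the real fix may differ) of a defensive lookup that lets init_host continue
  when an instance has no info_cache row:

  def get_nw_info_for_instance(instance):
      info_cache = getattr(instance, 'info_cache', None)
      if info_cache is None or info_cache.network_info is None:
          # Treat a missing cache as "no networks" instead of raising
          # AttributeError and aborting the compute service start-up.
          return []
      return info_cache.network_info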

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243260] Re: Nova api doesn't start with a backdoor port set

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243260

Title:
  Nova api doesn't start with a backdoor port set

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  nova api fails to start properly if a backdoor port is specified.
  Looking at the logs this traceback is repeatedly printed:

  2013-10-22 14:19:46.822 INFO nova.openstack.common.service [-] Child 1460 
exited with status 1
  2013-10-22 14:19:46.824 INFO nova.openstack.common.service [-] Started child 
1468
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 60684 for process 1467
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 58986 for process 1468
  2013-10-22 14:19:46.837 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 117, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup x.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 49, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 448, in run_service
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/service.py", line 357, in start
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
self.manager.backdoor_port = self.backdoor_port
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.840 TRACE nova   File "/usr/local/bin/nova-api", line 10, 
in 
  2013-10-22 14:19:46.840 TRACE nova sys.exit(main())
  2013-10-22 14:19:46.840 TRACE nova   File "/opt/stack/nova/nova/cmd/api.py", 
line 53, in main
  2013-10-22 14:19:46.840 TRACE nova launcher.wait()
  2013-10-22 14:19:46.840 TRACE nova   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 351, in wait
  2013-10-22 14:19:46.840 TRACE nova self._respawn_children()
  2013-10-22 14:19:46.840 TRACE nova   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 341, in 
_respawn_children
  2013-10-22 14:19:46.840 TRACE nova self._start_child(wrap)
  2013-10-22 14:19:46.840 TRACE nova   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 287, in _start_child
  2013-10-22 14:19:46.840 TRACE nova os._exit(status)
  2013-10-22 14:19:46.840 TRACE nova TypeError: an integer is required
  2013-10-22 14:19:46.840 TRACE nova
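
  A minimal sketch (attribute names taken from the trace; the surrounding
  service plumbing is assumed) of the guard: pure WSGI services have no
  manager object, so only propagate the backdoor port when one exists:

  def set_backdoor_port(service, backdoor_port):
      manager = getattr(service, 'manager', None)
      if manager is not None:
          manager.backdoor_port = backdoor_port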

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246103] Re: encryptors module forces cert and scheduler services to depend on cinderclient

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246103

Title:
  encryptors module forces cert and scheduler services to depend on
  cinderclient

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Packstack:
  Invalid

Bug description:
  When Nova Scheduler is installed via packstack as the only explicitly
  installed service on a particular node, it will fail to start.  This
  is because it depends on the Python cinderclient library, which is not
  marked as a dependency in 'nova::scheduler' class in Packstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246412] Re: Unshelving an instance with an attached volume causes the volume to not get attached

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246412

Title:
  Unshelving an instance with an attached volume causes the volume to
  not get attached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When shelving an instance that has a volume attached - once it's
  unshelved, the volume will not get re-attached.

  Reproduce by:

  $nova boot --image  --flavor  test
  $nova attach   #ssh into the instance and make sure the 
volume is there
  $nova shelve  #Make sure the instance is done shelving
  $nova unshelve  #Log in and see that the volume is not visible any 
more

  It can also be seen that the volume remains attached as per

  $ cinder list

  And if you take a look at the generated xml (if you use libvirt) you
  can see that the volume is not there when the instance is done
  unshelving.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253510] Re: Error mispelt in disk api file

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253510

Title:
  Error mispelt in disk api file

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Error is spelt 'errror', which is causing a KeyError. See bug 1253508.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261559] [NEW] Timeouts due to VMs not sending DHCPDISCOVER messages

2013-12-16 Thread Salvatore Orlando
Public bug reported:

In some instances, tempest scenario tests fail with a timeout error
similarly to bug 1253896, but unlike other occurrences of this bug, the
failure happens even if all the elements connecting the floating IP to
the VM are properly wired.

Further investigation revealed that a DHCPDISCOVER is apparently not sent from
the VM.
An instance of this failure can be seen here:
http://logs.openstack.org/60/58860/2/gate/gate-tempest-dsvm-neutron/b9b25eb

Looking at syslog for this tempest run, only one DHCPDISCOVER is
detected, even though 27 DHCPRELEASE messages are sent (meaning the
notifications were properly handled and the dnsmasq processes were up
and running).

Relevant events from a specific failure (boot_volume_pattern)

Server boot: 15:18:44.972
Fip create: 15:18:45.075
Port wired: 15:18:45.279
Fip wired: 15:18:46:356
Server delete: 15:22:03 (timeout expired)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261559

Title:
  Timeouts due to VMs not sending DHCPDISCOVER messages

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some instances, tempest scenario tests fail with a timeout error
  similarly to bug 1253896, but unlike other occurrences of this bug,
  the failure happens even if all the elements connecting the floating
  IP to the VM are properly wired.

  Further investigation revealed that a DHCPDISCOVER is apparently not sent
  from the VM.
  An instance of this failure can be seen here:
  http://logs.openstack.org/60/58860/2/gate/gate-tempest-dsvm-neutron/b9b25eb

  Looking at syslog for this tempest run, only one DHCPDISCOVER is
  detected, even though 27 DHCPRELEASE messages are sent (meaning the
  notifications were properly handled and the dnsmasq processes were up
  and running).

  Relevant events from a specific failure (boot_volume_pattern)

  Server boot: 15:18:44.972
  Fip create: 15:18:45.075
  Port wired: 15:18:45.279
  Fip wired: 15:18:46:356
  Server delete: 15:22:03 (timeout expired)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

